# Tomographic reconstruction of quasistatic surface polariton fields
Raphael Hauer (Graz Centre for Electron Microscopy, Steyrergasse 17, 8010 Graz, Austria; Institute for Electron Microscopy and Nanoanalysis, Graz University of Technology, Steyrergasse 17, 8010 Graz, Austria)
Georg Haberfehlner (Graz Centre for Electron Microscopy, Steyrergasse 17, 8010 Graz, Austria; Institute for Electron Microscopy and Nanoanalysis, Graz University of Technology, Steyrergasse 17, 8010 Graz, Austria)
Gerald Kothleitner (Graz Centre for Electron Microscopy, Steyrergasse 17, 8010 Graz, Austria; Institute for Electron Microscopy and Nanoanalysis, Graz University of Technology, Steyrergasse 17, 8010 Graz, Austria)
Mathieu Kociak (Université Paris-Saclay, CNRS, Laboratoire de Physique des Solides, 91405 Orsay, France)
Ulrich Hohenester (Institute of Physics, University of Graz, Universitätsplatz 5, 8010 Graz, Austria)
###### Abstract
We theoretically investigate the tomographic reconstruction of the three-
dimensional photonic environment of nanoparticles. As input for our
reconstruction we use electron energy loss spectroscopy (eels) maps for
different rotation angles. We perform the tomographic reconstruction of
surface polariton fields for smooth and rough nanorods, and compare the
reconstructed and simulated photonic local density of states, which are shown
to be in very good agreement. Using these results, we critically examine the
potential of our tomography scheme, and discuss limitations and directions for
future developments.
## I Introduction
Nano optics deals with light confinement at the nanoscale [1, 2]. This is
achieved by binding light to surface resonances of nanoparticles, such as
surface plasmon polaritons for metallic nanoparticles [3] or surface phonon
polaritons for dielectric nanoparticles [4, 5]. These resonances come along
with strongly localized fields and allow squeezing light into extreme sub-
wavelength volumes, which can be exploited for various applications [6].
Because of the diffraction limit of light, the strongly localized fields
cannot be directly imaged in optical microscopy. In recent years, electron
energy loss spectroscopy (eels) has become a highly successful technique for
imaging electromagnetic fields at the nanoscale and with high energy
resolution [7, 8, 9, 10]. In eels, swift electrons pass by or through a
nanoparticle and lose energy with a certain probability by exciting surface
resonances. By raster-scanning the electron beam over the specimen and
measuring the number of electrons that have lost a certain amount of energy,
one obtains information about the electromagnetic fields at the nanoscale [11,
2]. However, the technique does not provide direct information about the
three-dimensional fields but only about the averaged interaction along the
entire electron trajectory.
eels tomography is a variant of electron tomography [12], where the three-
dimensional structure of a specimen is reconstructed from a collection of
transmission electron micrographs for various tilt angles. In eels the
reconstruction is complicated by the fact that the loss does not occur at a
specific position of the specimen, but is a highly nonlocal process [11]. eels
tomography of surface plasmons was first suggested independently in [13] and
[14], where the latter paper demonstrated experimentally the reconstruction of
localized surface plasmon modes for a silver nanocube. While these seminal
papers employed the quasistatic approximation [11, 2], successive work showed
how to extend the scheme to full retardation [15], and demonstrated its
applicability for single and coupled silver nanoparticles [16, 17].
In a recent paper [18], we have brought eels tomography from the optical to
the mid-infrared regime, and have demonstrated experimentally the
reconstruction of localized surface phonon polaritons for a MgO nanocube.
Contrary to surface plasmon polaritons, the use of the quasistatic
approximation is perfectly justified for surface phonon polaritons sustained
by nanoparticles with dimensions of a few hundred nanometers. This
considerably simplifies the methodology for the tomographic reconstruction.
In going full circle from the quasistatic tomography of surface plasmon
polaritons in our initial work [13] to the quasistatic tomography of surface
phonon polaritons [18], we have gained considerable understanding of the
critical elements in eels tomography, and our approach has matured
substantially. The time is ripe for a critical re-examination and
re-interpretation of our tomography scheme.
In this paper we present a theoretical study of eels tomography for
prototypical dielectric nanoparticles. We submit a tilt series of simulated
eels maps to our tomography scheme, in order to extract parameters
characterizing the nanophotonic environment. For this parametrized photonic
environment we compute the photonic local density of states (ldos) [19, 1, 2],
which is compared with independent simulation results. From this comparison we
examine the strengths and weaknesses of our tomographic reconstruction scheme.
The photonic ldos is a concept borrowed from solid state physics, and accounts
for the number of photonic modes per unit frequency and volume. In free space
the photonic ldos is [1, 2] (we use SI units throughout)
$\rho_{0}(\omega)=\frac{\omega^{2}}{\pi^{2}c^{3}}\,,$ (1)
where $\omega$ is the angular frequency and $c$ the speed of light. The
photonic ldos governs the power dissipated by an oscillating dipole through
$P_{0}=\frac{\omega^{2}p^{2}}{12\varepsilon_{0}}\rho_{0}(\omega)\,,$ (2)
where $p$ is the oscillator’s dipole moment and $\varepsilon_{0}$ the free-
space permittivity. Alternatively, we can relate via
$P_{0}=\hbar\omega\,\gamma_{0}$ the power dissipation to the decay rate
$\gamma_{0}$ of a quantum emitter. The concept of the photonic ldos comes to
full glory in nanophotonics, where the light-matter interaction becomes
dramatically enhanced through surface excitations of nanoparticles, such as
surface plasmon or phonon polaritons. The enhancement of the photonic ldos
$\rho(\omega)$ can be in the range of hundreds to thousands in comparison to
its free-space value $\rho_{0}(\omega)$ [20]. Correspondingly, quantum
emitters can transfer energy to the nanophotonic environment more efficiently,
and their decay rate or power dissipation $P$ is increased by the ldos
enhancement according to
$P:P_{0}=\rho:\rho_{0}\,.$ (3)
Below we will compute the ldos enhancement $\rho:\rho_{0}$ using the photonic
environment reconstructed from eels maps. It is obvious that electrons and
oscillating dipoles couple quite differently to the nanophotonic environment.
For this reason, the ldos reconstruction from eels data is quite delicate and
provides a stringent testbed for our tomography approach.
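The free-space relations of Eqs. (1)-(3) translate directly into code. A minimal sketch in plain Python with SI constants (the 100 meV example frequency is purely illustrative):

```python
import math

HBAR = 1.054571817e-34   # J s
C = 2.99792458e8         # m / s
EPS0 = 8.8541878128e-12  # F / m

def ldos_free_space(omega):
    """Free-space photonic LDOS of Eq. (1): rho0 = omega^2 / (pi^2 c^3)."""
    return omega**2 / (math.pi**2 * C**3)

def dipole_power_free_space(omega, p):
    """Dissipated power of Eq. (2): P0 = omega^2 p^2 rho0 / (12 eps0)."""
    return omega**2 * p**2 / (12.0 * EPS0) * ldos_free_space(omega)

# Example: a 100 meV surface phonon polariton, omega = E / hbar
omega = 0.1 * 1.602176634e-19 / HBAR
rho0 = ldos_free_space(omega)
```

Given an LDOS enhancement $\rho:\rho_0$ from Eq. (3), the enhanced dissipation follows as $P=(\rho/\rho_0)\,P_0$.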
We have organized our paper as follows. In Sec. II we present the theory and
methodology of our tomographic reconstruction. We have tried to keep the
presentation as compact and brief as possible, and refer to the literature for
the detailed derivations whenever possible. Some technical details are
deferred to an appendix. In Sec. III we present the tomography results for
smooth and rough nanorods, and compare the reconstructed and the simulated
photonic ldos. Finally, in Sec. IV we put our tomography into a broader
context, examine critically the strengths and weaknesses of our approach, and
identify lines for future research.
## II Theory
For MgO nanoparticles the surface phonon polariton energies $h\nu$ are of the
order of 100 meV, corresponding to a free-space wavelength
$\lambda=\nicefrac{c}{\nu}\sim 12\,\mu$m. For nanoparticle dimensions of
approximately one hundred nanometers we can thus safely introduce the quasistatic
approximation [2], where the electric field is expressed in terms of a
quasistatic potential $V(\bm{r})$ through $\bm{E}(\bm{r})=-\nabla V(\bm{r})$
and we keep the frequency dependence of the permittivity functions
$\varepsilon(\omega)$.
### II.1 Green’s functions
Figure 1: (a) Schematics of Green’s function. In free space the Green’s
function $G_{0}(\bm{r},\bm{r}^{\prime})$ gives the potential at position
$\bm{r}$ for a unit charge located at position $\bm{r}^{\prime}$. In the
presence of a nanoparticle one must additionally add a reflected Green’s function that
accounts for the nanoparticle response. (b) The reflected Green’s function can
be expanded using a complete set of eigenpotentials $V_{k}(\bm{r})$. In our
tomography scheme we can also start from the modes associated with a simpler
reference boundary $\partial\Omega_{0}$ rather than the actual nanoparticle
boundary $\partial\Omega$, and expand the reflected Green’s function using the
reference modes. For details see text.
In the following we consider the problem depicted in Fig. 1(a), where a charge
located at position $\bm{r}^{\prime}$ interacts with a dielectric nanoparticle
situated in a background medium with dielectric constant $\varepsilon_{0}$.
Green’s functions provide an elegant and efficient method for solving such
problems. We first introduce the Green’s function defined through [21, 2]
$\nabla^{2}G(\bm{r},\bm{r}^{\prime})=-\delta(\bm{r}-\bm{r}^{\prime})\,,$ (4)
which gives the potential at position $\bm{r}$ for a unit charge located at
position $\bm{r}^{\prime}$. In an unbounded medium the Green’s function would
be given by the usual expression
$G_{0}(\bm{r},\bm{r}^{\prime})=\frac{1}{4\pi|\bm{r}-\bm{r}^{\prime}|}\,,$ (5)
and the potential associated with a charge distribution $\rho(\bm{r})$ can be
expressed as
$V_{\rm
inc}(\bm{r})=\int\frac{\rho(\bm{r}^{\prime})}{4\pi\varepsilon_{0}|\bm{r}-\bm{r}^{\prime}|}\,d^{3}r^{\prime}\,.$
(6)
In the presence of the nanoparticle this incoming potential will induce a
reflected potential associated with the particle response. To account for
this, we split the total Green’s function into two parts
$G(\bm{r},\bm{r}^{\prime})=G_{0}(\bm{r},\bm{r}^{\prime})+G_{\rm
refl}(\bm{r},\bm{r}^{\prime})\,,$ (7)
where the reflected part is a solution of Laplace’s equation which is chosen
such that Maxwell’s boundary conditions are fulfilled at the nanoparticle
boundary. Suppose for a moment that the reflected Green’s function is at hand.
It can then be shown that in eels the loss probability is related to the
reflected Green’s function via [11, 2]
$\Gamma(\bm{R}_{0},\omega)=-\frac{1}{\pi\hbar}\int\mbox{Im}\Big{[}\rho_{\rm
el}^{*}(\bm{r})G_{\rm refl}(\bm{r},\bm{r}^{\prime})\rho_{\rm
el}(\bm{r}^{\prime})\Big{]}\,d^{3}rd^{3}r^{\prime}\,,$ (8)
where $\bm{R}_{0}=(x_{0},y_{0})$ is the impact parameter of the electron beam
propagating along the $z$ direction (aloof geometry), $\hbar\omega$ is the
loss energy and $\rho_{\rm el}(\bm{r})$ the charge distribution of the swift
electron. The term in brackets of Eq. (8) accounts for a self-interaction
process where the swift electron polarizes the nanoparticle, and the
polarization acts back on the electron. This nonlocal response is mediated by
the reflected Green’s function. Similarly, the power dissipated by a dipole
oscillating with frequency $\omega$ becomes [2]
$P=P_{0}-\frac{\omega}{2\varepsilon_{0}}\mbox{Im}\Big{[}(\bm{p}\cdot\nabla)(\bm{p}\cdot\nabla^{\prime})G_{\rm
refl}(\bm{r},\bm{r}^{\prime})\Big{]}_{\bm{r}=\bm{r}^{\prime}=\bm{r}_{0}}\,,$
(9)
where $P_{0}$ is the free-space dissipation, $\bm{p}$ is the dipole moment,
and $\bm{r}_{0}$ the position of the dipole. The ratio $P:P_{0}$ gives the
enhancement of the photonic ldos, see also Eq. (3). The expressions given in
Eqs. (8) and (9) are two examples for the enhancement of light-matter
interactions in the presence of nanoparticles, and show that the nanophotonic
environment is fully characterized upon knowledge of the reflected Green’s
function.
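For reference, Eq. (5) and a discrete analogue of Eq. (6) are straightforward to code. The sketch below replaces the continuous charge distribution $\rho(\bm{r})$ by point charges:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # F / m

def greens_free(r, rp):
    """Free-space quasistatic Green's function of Eq. (5)."""
    d = np.linalg.norm(np.asarray(r, float) - np.asarray(rp, float))
    return 1.0 / (4.0 * np.pi * d)

def potential_incoming(r, charges, positions):
    """Discrete analogue of Eq. (6): superposition of point-charge potentials."""
    return sum(q * greens_free(r, rp) / EPS0
               for q, rp in zip(charges, positions))
```

The reflected part $G_{\rm refl}$ has no such closed form; it is exactly what the eigenmode expansion of Sec. II.2 provides.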
### II.2 Eigenmode decomposition
A powerful and convenient representation of the reflected Green’s function is
in terms of geometric eigenmodes $u_{k}(\bm{s})$ and eigenvalues
$\lambda_{k}$, where $\bm{s}$ is a position located on the boundary of the
nanoparticle [22, 23, 2]. These eigenmodes form a complete set of basis
functions. To each eigenmode we can associate an eigenpotential
$V_{k}(\bm{r})=\oint_{\partial\Omega}\frac{u_{k}(\bm{s}^{\prime})}{4\pi|\bm{r}-\bm{s}^{\prime}|}\,dS^{\prime}\,,$
(10)
which is a solution of Laplace’s equation that fulfills Maxwell’s boundary
conditions at the nanoparticle boundary. We can then decompose the reflected
Green’s function outside the nanoparticle in terms of these eigenpotentials
via [23, 2]
$G_{\rm
refl}(\bm{r},\bm{r}^{\prime})=-\sum_{k}V_{k}(\bm{r})\left[\frac{\lambda_{k}+\frac{1}{2}}{\Lambda(\omega)+\lambda_{k}}\right]V_{k}(\bm{r}^{\prime})\,,$
(11)
where $\Lambda(\omega)$ is an expression that solely depends on the
permittivities of the nanoparticle and the embedding medium. Inserting Eq.
(11) into the eels loss probability of Eq. (8) leads us to
$\Gamma(\bm{R}_{0},\omega)=\frac{1}{\pi\hbar\varepsilon_{0}}\sum_{k}L_{k}(\omega)\left|\int\rho_{\rm
el}(\bm{r})V_{k}(\bm{r})\,d^{3}r\right|^{2}\,,$ (12)
with the lineshape function
$L_{k}(\omega)=\mbox{Im}\left[\frac{\lambda_{k}+\frac{1}{2}}{\Lambda(\omega)+\lambda_{k}}\right]\,.$
(13)
Eq. (12) is a particularly useful decomposition of the loss probability in
terms of surface phonon polariton eigenmodes. Each eigenmode contributes with
the lineshape function $L_{k}(\omega)$ and the oscillator strength given by
the square modulus term, which is governed by the interaction energy between
the charge distribution of the swift electron and the eigenpotential
$V_{k}(\bm{r})$. Similarly, the power dissipated by an oscillating dipole of
Eq. (9) can be decomposed into eigenmodes via
$P=P_{0}+\frac{\omega}{2\varepsilon_{0}}\sum_{k}L_{k}(\omega)\big{|}\bm{p}\cdot\nabla
V_{k}(\bm{r})\big{|}^{2}_{\bm{r}=\bm{r}_{0}}\,,$ (14)
with a corresponding interpretation in terms of lineshape functions and
oscillator strengths. From the dissipated power one can obtain the photonic
ldos using Eqs. (1) and (3), where one often additionally averages over all
dipole orientations to account for the random orientation of quantum emitters
in typical experiments [1].
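The lineshape function of Eq. (13) is easy to evaluate once a model for $\Lambda(\omega)$ is chosen. The sketch below assumes the common quasistatic convention $\Lambda(\omega)=(\varepsilon(\omega)+\varepsilon_b)/[2(\varepsilon(\omega)-\varepsilon_b)]$ (the text only states that $\Lambda$ depends on the permittivities, so this is an assumption) together with a generic Lorentz-oscillator permittivity whose parameters are placeholders in scaled units, not fitted MgO values:

```python
import numpy as np

def eps_lorentz(omega, eps_inf=3.0, w_to=1.0, w_lo=1.6, gamma=0.02):
    """Illustrative phonon permittivity (Lorentz form, placeholder values)."""
    return eps_inf * (w_lo**2 - omega**2 - 1j * gamma * omega) / (
                      w_to**2 - omega**2 - 1j * gamma * omega)

def Lambda(omega, eps_b=1.0):
    """Assumed quasistatic convention for Lambda(omega)."""
    eps = eps_lorentz(omega)
    return (eps + eps_b) / (2.0 * (eps - eps_b))

def lineshape(omega, lam_k):
    """Lineshape function of Eq. (13)."""
    return np.imag((lam_k + 0.5) / (Lambda(omega) + lam_k))

# A mode with eigenvalue lam_k resonates where Re[Lambda(omega)] ~ -lam_k,
# inside the reststrahlen band between w_to and w_lo.
w = np.linspace(0.8, 1.9, 2000)
peak = lineshape(w, -0.25).max()
```

Each term of Eq. (12) or (14) is then this lineshape multiplied by the corresponding squared coupling amplitude.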
### II.3 Tomographic reconstruction of eigenmodes
It is apparent from Eqs. (12) and (14) that we can compute the eels loss
probability $\Gamma(\bm{R}_{0},\omega)$ and the ldos enhancement $P:P_{0}$, or
any other related response function, once the geometric eigenmodes
$u_{k}(\bm{s})$ and the lineshape function $L_{k}(\omega)$ are at hand.
Expressed differently, the nanophotonic environment is fully characterized
upon knowledge of $u_{k}(\bm{s})$ and $L_{k}(\omega)$. We can now formulate
the goal of our tomography approach. Suppose that we are in possession of the
eels loss probabilities $\Gamma(\bm{R}_{0},\omega)$, ideally for various
impact parameters and electron propagation directions, but don’t know the
eigenmodes $u_{k}(\bm{s})$ and lineshape functions $L_{k}(\omega)$: can we
obtain through solution of an inverse problem a viable approximation for
$u_{k}(\bm{s})$ and $L_{k}(\omega)$? And if yes, how?
Figure 2: Schematics of tomographic reconstruction for a rough nanorod. The
reference boundary is formed by a smooth rod, see panels on top of the figure.
The experimental eels maps $\Gamma_{\rm exp}$ are obtained for a specific loss
energy and for various rotation angles; we only keep aloof electron
trajectories that do not penetrate the smooth rod. We start with some initial
guess for the optimization parameters $L_{k}$, $\mathbb{Q}$ and compute the
reprojected eels maps $\Gamma$ using Eq. (20). These parameters are optimized
until a local minimum is reached by the optimization algorithm. In the lowest
row we show the relative error $|\Gamma_{\rm exp}-\Gamma|:\Gamma_{\rm exp}$
between the experimental and optimized maps. The solid lines indicate the
contours for an error of 0.1%. Once the parameters $L_{k}$, $\mathbb{Q}$ are
at hand, we can compute other quantities such as the photonic ldos.
#### II.3.1 Optimization for modes on nanoparticle boundary
Consider first the situation where the nanoparticle boundary is known and
we seek the lineshape functions and eigenmodes $L_{k}$,
$u_{k}(\bm{s})$. This corresponds to the situation previously investigated in
[18]. Let $u_{\ell}^{0}(\bm{s})$ be a complete set of basis functions on the
boundary. We shall refer to these modes as reference modes. As shown in
Appendix A, the eigenpotentials of Eq. (10) can be expanded in terms of these
modes via
$V_{k}(\bm{r})=\sum_{\ell}\mathbb{Q}_{k\ell}\oint_{\partial\Omega}\frac{u^{0}_{\ell}(\bm{s}^{\prime})}{4\pi|\bm{r}-\bm{s}^{\prime}|}\,dS^{\prime}\,,$
(15)
with $\mathbb{Q}$ being an orthogonal matrix. We can now formulate the
tomographic reconstruction scheme for a given set of experimental eels maps.
1.
Find some reference modes $u_{\ell}^{0}(\bm{s})$ whose gross features are
expected to be similar to those of the true eigenmodes $u_{k}(\bm{s})$. This
point is irrelevant for a complete basis, but becomes crucial for actual
reconstructions where the basis has to be truncated.
2.
Start with some initial guess for the lineshape function $L_{k}$ and
orthogonal matrix $\mathbb{Q}$, and compute the reprojected maps via Eq. (12).
Use an optimization routine for $L_{k}$, $\mathbb{Q}$ to obtain the best
possible agreement between experiment and reprojection. Note that in principle
$L_{k}(\omega)$ depends on frequency, but for a fixed loss energy the
lineshape functions can be treated as mere numbers.
3.
Use the optimized parameters to compute other quantities, such as the photonic
ldos.
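Step 2 requires parametrizing the orthogonal matrix $\mathbb{Q}$ with unconstrained optimization variables. One standard choice (not necessarily the one used in Appendix A) is the matrix exponential of a skew-symmetric matrix; a sketch:

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_from_params(a, n):
    """Build an orthogonal n x n matrix Q = exp(A) from n(n-1)/2 free
    parameters a, with A skew-symmetric. One standard parametrization;
    the paper's Appendix A may use a different one."""
    A = np.zeros((n, n))
    A[np.triu_indices(n, k=1)] = a
    A -= A.T                      # enforce A^T = -A, so exp(A) is orthogonal
    return expm(A)

rng = np.random.default_rng(1)
Q = orthogonal_from_params(rng.normal(size=6), 4)   # n = 4 needs 6 parameters
```

Optimizing over the free parameters `a` then keeps $\mathbb{Q}$ exactly orthogonal at every iteration.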
#### II.3.2 Optimization for modes on reference boundary
The above scheme can be also generalized to cases where the true nanoparticle
boundary $\partial\Omega$ is not known or is too complicated to be used in
actual reconstructions. We start by introducing a reference boundary
$\partial\Omega_{0}$ that fully encapsulates the nanoparticle, see also Fig.
1(b). In our modified approach we are not aiming for a reconstruction of the
eigenmodes $u_{k}(\bm{s})$ themselves, but of the eigenpotentials of Eq. (10)
outside of the reference boundary. There they can be expressed as generic
solutions of Laplace’s equation [21]
$V_{k}(\bm{r})=\oint_{\partial\Omega_{0}}\frac{\sigma_{k}(\bm{s}^{\prime})}{4\pi|\bm{r}-\bm{s}^{\prime}|}\,dS^{\prime}\,,$
(16)
where $\sigma_{k}(\bm{s})$ specifies the normal derivative of the potential on
$\partial\Omega_{0}$ (Neumann boundary condition). We can now use a
complete set of basis functions $u_{\ell}^{0}(\bm{s})$ on $\partial\Omega_{0}$
for the expansion of $\sigma_{k}(\bm{s})$ to arrive at
$V_{k}(\bm{r})=\sum_{\ell}\mathbb{Q}_{k\ell}\oint_{\partial\Omega_{0}}\frac{u^{0}_{\ell}(\bm{s}^{\prime})}{4\pi|\bm{r}-\bm{s}^{\prime}|}\,dS^{\prime}\,,$
(17)
where $\mathbb{Q}$ is a non-orthogonal matrix formed by the expansion
coefficients. The tomographic reconstruction can now be performed in complete
analogy to the scheme presented above, with the only exception that
$\mathbb{Q}$ has to be replaced by a non-orthogonal matrix.
Figure 3: Reference and reconstructed modes. In our tomographic reconstruction
we use as reference modes $u_{\ell}^{0}(\bm{s})$ the eigenmodes of the
Laplace-Beltrami operator. Using the optimized parameters $L_{k}$,
$\mathbb{Q}$ we mix the modes to obtain the reconstructed modes shown on the
right hand side for the dipole and quadrupole resonances. For the smooth rod,
$\mathbb{Q}$ is an orthogonal matrix of size $n\times n$, where $n$ is the
truncation number of the basis. For the rough rod, $\mathbb{Q}$ is a full
matrix of size $m\times n$, where $m$ is the number of eigenpotentials to be
reconstructed. From the knowledge of $\mathbb{Q}$ we can compute the geometric
eigenpotentials $V_{k}(\bm{r})$ outside the reference boundary. The bar plot
on the right hand side reports the reconstructed lineshape parameters for the
dipole (blue) and quadrupole (red) resonances, the modes are sorted in
decreasing order of $L_{k}$ and the largest contributions are due to the modes
shown in the insets.
#### II.3.3 Optimization loop
In the following we discuss the optimization procedure in slightly more
detail, see also Figs. 2 and 3. We provide a unified description for the
optimizations using modes defined on either the nanoparticle or reference
boundary. In the first case, $\mathbb{Q}$ is an orthogonal matrix. In our
computational approach we have to truncate the basis and keep only $n$
representative modes, where $n$ is of the order of several tens to hundreds.
Correspondingly, $\mathbb{Q}_{n\times n}$ is a matrix of size $n\times n$, see
also Appendix A for the parametrization of this matrix. In the case of a
reference boundary, $\mathbb{Q}$ is a full matrix. In principle we can now use
different truncation numbers $m$ and $n$ for the reconstructed eigenpotentials
and basis functions, respectively, and $\mathbb{Q}_{m\times n}$ becomes a
matrix of size $m\times n$. In most cases it is sufficient to consider around
ten eigenpotentials, whereas the truncation number for the basis should be
chosen considerably larger. Let
$x_{i}=\big\{\bm{R}_{0}^{(i)},\theta^{(i)}\big\}$ (18)
be a set of impact parameters and tilt angles for a fixed loss energy, and
$\Gamma_{\rm exp}(x_{i})$ the corresponding experimental eels maps. We only
consider aloof electron trajectories that do not penetrate the nanoparticle.
The interaction energy between the swift electron and a reference mode
$u_{\ell}^{0}(\bm{s})$ is
$\mathscr{V}_{\ell}(x_{i})=\int\rho_{\rm
el}(\bm{r})V_{\ell}^{0}(\bm{r})\,d^{3}r=\oint_{\partial\Omega_{0}}V_{\rm
el}(\bm{s})u_{\ell}^{0}(\bm{s})\,dS\,,$ (19)
where $V_{\rm el}(\bm{r})$ is the potential associated with the charge
distribution $\rho_{\rm el}(\bm{r})$. When the nanoparticle boundary is known,
the reference boundary in the above boundary integral is identical to
$\partial\Omega$. The loss probability of Eq. (12) can then be written in the
compact form
$\Gamma(x_{i};L_{k},\mathbb{Q})=\frac{1}{\pi\hbar\varepsilon_{0}}\sum_{k}\sum_{\ell,\ell^{\prime}}\big{(}\mathbb{Q}_{k\ell}\mathscr{V}_{\ell}(x_{i})\big{)}L_{k}\big{(}\mathbb{Q}_{k\ell^{\prime}}\mathscr{V}_{\ell^{\prime}}(x_{i})\big{)}\,.$
(20)
We can now define a cost function
$J(L_{k},\mathbb{Q})=\frac{1}{2}\sum_{i}\Big{|}\Gamma_{\rm
exp}(x_{i})-\Gamma(x_{i};L_{k},\mathbb{Q})\Big{|}^{2}\longrightarrow\mbox{min}\,,$
(21)
that gives the “distance” between the experimental and reprojected eels maps.
This cost function is submitted to an optimization routine, such as a
conjugate-gradient or quasi-Newton one [24], which provides us with the
optimized expressions for $L_{k}$, $\mathbb{Q}$. Some details about the
parametrization of the orthogonal matrix, as well as the computation of the
derivative of the cost function with respect to the optimization parameters
are given in Appendix A.
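The optimization loop can be sketched with synthetic data. The sketch below treats $\mathbb{Q}$ as a full $m\times n$ matrix (the reference-boundary case of Sec. II.3.2, with no orthogonality constraint), drops the constant prefactor $1/(\pi\hbar\varepsilon_0)$ in Eq. (20), and uses scipy's quasi-Newton minimizer in place of a production optimizer:

```python
import numpy as np
from scipy.optimize import minimize

def reprojected(L, Q, V):
    """Eq. (20) up to the constant prefactor: Gamma_i = sum_k L_k
    (sum_l Q_kl V_l(x_i))^2, with V[l, i] the interaction energies."""
    M = Q @ V                                  # mode amplitudes per map point
    return (L[:, None] * M**2).sum(axis=0)

def cost(params, V, gamma_exp, m, n):
    """Cost function of Eq. (21) over the flattened parameters (L, Q)."""
    L, Q = params[:m], params[m:].reshape(m, n)
    return 0.5 * np.sum((gamma_exp - reprojected(L, Q, V))**2)

# Synthetic test case: maps generated from known (L, Q), then refitted.
rng = np.random.default_rng(0)
m, n, n_maps = 2, 4, 60
V = rng.normal(size=(n, n_maps))
gamma_exp = reprojected(np.array([1.0, 0.5]), rng.normal(size=(m, n)), V)

x0 = 0.1 * rng.normal(size=m + m * n)
res = minimize(cost, x0, args=(V, gamma_exp, m, n), method="BFGS")
```

Note the scaling ambiguity $L_k\to s^2 L_k$, $\mathbb{Q}\to\mathbb{Q}/s$ in this unconstrained form, which a real implementation would remove, e.g. through the orthogonality constraint on $\mathbb{Q}$.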
## III Results
In [18] we have applied our tomography scheme to experimental eels maps for a
MgO nanocube. In this work we proceed differently and investigate the working
principle of our tomography scheme using simulated data only.
1.
We first compute for each loss energy eels maps for a series of rotation
angles, see also Fig. 2. To be consistent with our previous notation, we
denote these simulated eels maps as $\Gamma_{\rm exp}$, and will refer to them
as experimental eels maps.
2.
These maps are submitted to our tomography scheme based on Eq. (21), in order
to obtain the optimized parameters $L_{k}$, $\mathbb{Q}$ that specify the
nanophotonic environment.
3.
Using Eq. (14) together with the optimized parameters, we compute the photonic
ldos, and will refer to it as the reconstructed photonic ldos.
4.
Using Eq. (9) we compute the photonic ldos directly, with a simulation
approach to be discussed below, and will refer to it as the simulated photonic
ldos.
Figure 4: Loss spectra for smooth and rough nanorod, and for impact parameters
located on the long (blue) and short (red) rod axis, see inset. We consider
aloof electron trajectories with a propagation direction out of the image
plane. One observes a dipole resonance around 70 meV, a quadrupole resonance
around 80 meV, and a peak attributed to a multitude of modes around 90 meV.
For an ideal reconstruction the simulated and reconstructed ldos maps should be
identical. Any deviation between the two maps can thus be attributed to
deficiencies of our approach, caused for instance by the truncation of the
reference basis $u_{\ell}^{0}(\bm{s})$ or a trapping of the optimization
algorithm in a local minimum.
We apply our tomography scheme to prototypical systems of a smooth and rough
nanorod with a diameter to length ratio of approximately $1:2.5$, see also
Fig. 4 and [25] for a detailed discussion of the rod modes. The rough rod has
been generated by adding stochastic height variations to the smooth surface
of an ideal nanoparticle, following the prescription given in [26]. We shall
not be concerned whether such nanoparticles can indeed be fabricated with the
material system under investigation. As we are working within the quasistatic
regime, the actual size of the nanorods is irrelevant and the results can be
easily scaled to any size.
### III.1 Computational details
All our simulations are performed with the quasistatic classes of the nanobem
toolbox [27], which is based on a Galerkin scheme with linear shape elements.
See e.g. [2] for a detailed discussion. The parametrization of the MgO
dielectric function is the same as in [28, 18]. The nanorod boundaries are
discretized using more than 3000 boundary elements of triangular shape. We
checked that for such fine discretizations we obtained converged results. As
for the eels simulations, we consider the limit of large electron velocities
$v$ where the potential for a swift electron with impact parameter
$\bm{R}_{0}$ takes the form
$V_{\rm el}(\bm{r};\bm{R}_{0})=-\frac{e}{2\pi
v}\ln\big{|}\bm{R}-\bm{R}_{0}\big{|}\,,$ (22)
with $e$ being the elementary charge and $\bm{R}=(x,y)$. We have previously
shown in Fig. S9 of [18] that this simplified expression gives almost the same
results as simulations based on the full Maxwell’s equations.
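In code, the potential of Eq. (22) and the discretized boundary integral of Eq. (19) take the following form; the one-point centroid quadrature over mesh faces is a simplification of what a boundary-element code such as nanobem would do:

```python
import numpy as np

E_CHARGE = 1.602176634e-19  # C

def electron_potential(R, R0, v):
    """Potential of Eq. (22) in the large-velocity limit, with R = (x, y)
    the transverse coordinates and R0 the impact parameter."""
    d = np.linalg.norm(np.asarray(R, float) - np.asarray(R0, float))
    return -E_CHARGE / (2.0 * np.pi * v) * np.log(d)

def interaction_energy(centroids, areas, u_mode, R0, v):
    """Discretized boundary integral of Eq. (19): the electron potential
    is sampled at the face centroids and weighted by mode value and area."""
    V = np.array([electron_potential(c[:2], R0, v) for c in centroids])
    return np.sum(V * u_mode * areas)
```

Note that the logarithmic potential is defined up to an additive constant; only differences and overlap integrals with the (zero-mean) boundary modes are physically meaningful.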
As for the reference modes $u_{\ell}^{0}(\bm{s})$, we did not choose the usual
geometric eigenmodes [23, 2] for two reasons. First, we want to demonstrate
that our approach indeed works for any meaningful set of basis functions.
Second, we observed that the geometric eigenmodes computed with the nanobem
toolbox are often strongly localized around sharp corners or edges, such that
a large number of such modes would be needed for a useful expansion. In this
work, we choose for $u_{\ell}^{0}(\bm{s})$ the eigenmodes of the Laplace-
Beltrami operator, which is a generalization of the Laplace operator for
curved boundaries and is known to provide extremely smooth basis functions
[29]. The modes were additionally orthogonalized using Eq. (25).
In our optimization approach we truncate the Laplace-Beltrami basis using the
$n$ modes of highest eigenvalue, where a value of $n\approx 100$ turned out to
be a good compromise between reasonably fast optimizations and sufficiently
accurate results. The optimization was performed with the built-in MATLAB
function fminunc using a quasi-Newton algorithm together with a relatively
small function and optimality tolerance of $10^{-8}$. In all our simulations we
typically needed about 2000 iterations to reach a local minimum.
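As a minimal illustration of the basis choice (a 1D stand-in, not the nanobem implementation): on a closed curve the Laplace-Beltrami operator reduces to $-d^2/ds^2$, whose eigenmodes are smooth Fourier modes; on a triangulated surface one would assemble a (cotangent) Laplacian on the mesh instead, and the sorting convention for "highest" versus "lowest" eigenvalues depends on the sign of the operator:

```python
import numpy as np

# Periodic second-difference matrix: discrete -d^2/ds^2 on a closed curve,
# the 1D analogue of the Laplace-Beltrami operator on a surface.
N = 128
idx = np.arange(N)
Lap = np.zeros((N, N))
Lap[idx, idx] = 2.0
Lap[idx, (idx + 1) % N] = -1.0
Lap[idx, (idx - 1) % N] = -1.0

vals, vecs = np.linalg.eigh(Lap)   # ascending eigenvalues, orthonormal modes
basis = vecs[:, :12]               # truncate to the smoothest modes
```

The eigenvectors returned by `eigh` are already orthonormal, which is the 1D counterpart of the orthogonalization via Eq. (25) mentioned above.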
### III.2 Smooth rod
We start by discussing the smooth rod shown in Fig. 4. The loss spectra
exhibit three peaks, which can be attributed to a dipolar mode (70 meV), a
quadrupolar mode (80 meV), and a peak that is composed of a multitude of modes
(88 meV). For the smooth rod, the reference boundary $\partial\Omega_{0}$ is
identical to the true nanoparticle boundary $\partial\Omega$. Note that the
Laplace-Beltrami eigenmodes provide a (truncated) basis that does not coincide
with the true geometric eigenmodes. Simulated and reprojected maps originating
from our optimization algorithm are typically extremely similar, see lowest
row in Fig. 2 for the more difficult case of the rough nanorod.
Figure 5: Simulated and reconstructed ldos maps for the different loss
energies reported in the panels and in the different planes indicated on top
of the figure. The electron propagation direction is out of the image plane
and the lines at the rod centers indicate the tilt axis. (a) Simulated ldos
maps for dipole mode, (b,c) reconstructed ldos maps for different numbers $n$
of Laplace-Beltrami eigenmodes. Same for (d,e) quadrupole resonance and (f,g)
multitude of modes. The ldos maps in the first column are displayed for a
logarithmic color scale, in the other columns we use a linear color scale. All
maps are scaled to the maxima of the simulated maps. The solid lines report
the contours for 20% of the maximum of the simulated ldos in the respective
planes.
Figure 6: Cuts through the ldos maps shown in Figs. 5(a,b) along the long rod
axis at $y=0$. The solid lines report simulation results, the dashed lines
show the reconstructed results. The ldos enhancements are given in arbitrary
units, with a constant prefactor for the reconstructed ldos maps. For a
discussion see text. Larger ldos enhancements correspond to positions closer
to the nanorod, the colors are in agreement with those of the planes shown on
top of Fig. 5. Distances are given in units of the rod length $L$.
Figure 5 shows the simulated and reconstructed ldos maps in the symmetry plane
(left column) and in planes away from the rod (other columns), and for the
loss energies reported in the figure. We first consider the dipole mode shown
in panel (a). For an oscillating dipole, the ldos can be interpreted as the
enhancement of the dissipated power, see Eq. (9); throughout we average over
all possible dipole orientations. Close to the rod, an oscillating dipole
couples with comparable strength to all surface phonon polariton modes. This
can be seen both in the symmetry plane of the rod (first column, logarithmic
color scale), as well as in the plane closest to the rod (second column,
linear color scale), where the photonic ldos is large and unstructured close
to the rod boundary. When moving away from the rod (other columns from left to
right), the coupling strengths between the oscillating dipole and the rod
resonance modes exhibit different distance dependencies, which are governed by
the oscillator strengths given in Eq. (14). For the chosen loss energy the
dipolar rod mode becomes strongest at larger distances, as can be inferred
from the two lobes in the ldos maps located at the rod caps.
Figure 7: Same as Fig. 5 but for a rough nanorod and for the (a–c) dipolar and
(d–f) quadrupolar rod resonance. In the reconstruction we consider $n=200$
reference modes for a smooth nanorod (black contour shown on top) and $m=20$
modes to be reconstructed. We compare ldos values in planes (a–f) parallel and
(a*–f*) perpendicular to the electron propagation direction. The lines and
dots in the rod center indicate the tilt axis. In panels (b,e) we conisder a
tilt series where the nanoparticle is rotated around the $y$-axis only,
whereas in panels (c,f) we additionally consider a rotation around the
$x$-axis by 90∘ followed by the same tilt series around $y$.
Figures 5(b,c) show results for the reconstructed ldos using (b) $n=100$ and
(c) $n=20$ Laplace-Beltrami reference modes. Further away from the rod, the
simulated and reconstructed results agree well for both truncation numbers
$n$. For distances closer to the rod, the larger number of eigenmodes provides
better agreement. This is in accordance with our previous reasoning that
oscillating dipoles close to the rod couple to a larger number of eigenmodes,
and thus a larger number of modes is needed for the reconstruction.
In Fig. 6 we give a quantitative comparison between the simulated (full lines)
and reconstructed (dashed lines) ldos values for cuts along the long rod axis
and for dipole positions outside the nanoparticle. The true ldos enhancement
would depend on the actual size of the nanorod; for simplicity, we give the
results in arbitrary units. The reconstructed ldos cuts are additionally scaled
by a constant factor, and it is not obvious how this factor could be obtained
in the absence of eels loss probabilities given in absolute numbers. We here do not
enter into the question of how to extract the absolute numbers of the
reconstructed ldos. Besides this unknown prefactor, the simulated and
reconstructed ldos values agree extremely well, with the possible exception of
the smallest distances where a larger number of eigenmodes might be needed.
Finally, in the remaining panels of Fig. 5 we compare the simulated and
reconstructed ldos for the (d,e) quadrupolar rod mode and the (f,g) multitude
of modes. It can be seen that the reconstruction works well for the
quadrupolar mode. Comparison with results for $n=20$ (not shown) reveals that
in this case a larger number of eigenmodes is strictly needed to obtain good
agreement. For the multitude of modes shown in panels (f,g) the agreement
between simulation and reconstruction is reasonable but not overly good. In
particular for the smallest distances the reconstructed maps show sharp or
asymmetric features, which are absent in the simulated maps. From these
results we conclude that the ldos reconstruction works best for loss peaks
that are governed by a few modes only.
Figure 8: ldos maps for the dipole mode of the rough nanorod and for different
$(m,n)$ cutoffs used in the optimization. Here $m$ is the number of
eigenpotentials to be reconstructed and $n$ is the number of basis modes. As
can be seen, the reconstructed ldos does not depend decisively on the chosen
parameters.
### III.3 Rough rod
The case of the rough rod shown in Fig. 4 is considerably more difficult. We
keep the same reference modes as for the smooth rod, and select the reference
boundary $\partial\Omega_{0}$ such that it fully encapsulates the boundary
$\partial\Omega$ of the rough rod. Note that this reference
boundary is identical to the one of the smooth rod. Fig. 2 shows for the
dipolar mode the simulated (“experimental”) eels maps and the reprojected
ones. The relative error between these maps is small throughout.
In Fig. 7 we show the simulated and reconstructed ldos maps for the rough
nanorod. We compare different planes that are (a–f) parallel and (a*–f*)
perpendicular to the electron beam direction. The main difference between
these two configurations is that in the parallel case we reconstruct the ldos
exclusively in regions through which swift electrons have traveled. In
contrast, for the perpendicular case we reconstruct the ldos also in planes
above the nanoparticle through which no electron has traveled because of our
restriction to aloof trajectories.
Let us consider the parallel case first. With the possible exception of the
smallest distance, the agreement between simulated and reconstructed ldos maps
is extremely good, both for the dipolar and quadrupolar mode. Both asymmetries
as well as hot spots, caused by localized fields in the vicinity of
protrusions of the rough rod, are well reproduced by our tomography scheme.
Things change somewhat for the perpendicular geometry shown in panels (b*) and
(e*), where the comparison is reasonable but not overly good. We performed
additional simulations where the tilt series for $\Gamma_{\rm exp}$ is
complemented by eels maps where the nanorod is first rotated around the
$x$-axis by $90^{\circ}$ before being submitted to the same tilt series around
$y$. As can be seen in panels (c*) and (f*), with this procedure we again
obtain extremely good agreement between simulated and reconstructed ldos maps.
This shows that our tomography scheme works best for regions through which
electrons have traveled.
We finally investigate in Fig. 8 the impact of the cutoff parameters $(m,n)$
on the reconstructed ldos for the dipole mode. Recall that $m$ is the number
of eigenpotentials to be reconstructed, and $n$ is the cutoff parameter for
the basis functions. Close to the particle (second column) a larger number $n$
of basis states leads to a better agreement with the simulated ldos, shown in
the first row. When moving away from the nanoparticle, the agreement between
simulated and reconstructed ldos maps is very good for all chosen simulation
parameters. This demonstrates that our reconstruction scheme is robust and
that the optimization results do not depend decisively on the input
parameters.
## IV Discussion
In the previous sections we have presented the methodology of our tomographic
reconstruction scheme and have investigated the approach for prototypical
nanophotonic structures. In this section we start by discussing our scheme
within a broader context, and then address limitations, dos and don’ts, as
well as extensions of our tomographic reconstructions.
### IV.1 Working principle
Figure 9: Working principle of our tomography scheme. The approach consists of
the triad of experiment, resonance modes, and reference modes. The
experimental eels maps for various tilt angles provide the basic resource for
the reconstruction of the photonic environment. The resonance modes are used
to formulate the theory underlying the reconstruction, the reference modes
provide the parametrization of the photonic environment and are used for the
actual reconstruction. The parameters $L_{k}$, $\mathbb{Q}$ are obtained
through an optimization procedure in order to minimize the difference between
the measured and reprojected maps. As the potentials outside the nanoparticle
are solutions of Laplace’s equation, we can employ a boundary element method
(bem) scheme to express the potentials through their values on a boundary.
The basic working principle of our tomographic reconstruction is shown in Fig.
9 and consists of the triad formed by experiment, resonance modes, and
reference modes. In short, the resonance modes are needed to formulate the
theory, and the reference modes to provide a parametrization of the
nanophotonic environment and to perform the actual reconstruction. The
experimental data are the primary resource for the reconstruction. For this
reason, the quality of the experimental data directly influences the quality
of the tomographic reconstruction. Some further considerations about
experiments will be given below.
#### IV.1.1 Resonance modes
The nanophotonic environment outside the nanoparticle is fully characterized
in terms of the reflected Green’s function of Eq. (11), which we repeat
here in compact form
$G_{\rm
refl}(\bm{r},\bm{r}^{\prime})=-\sum_{k}V_{k}(\bm{r})\big{(}M_{k}+iL_{k}\big{)}V_{k}(\bm{r}^{\prime})\,.$
(23)
$M_{k}$ and $L_{k}$ are the real and imaginary parts of the term given in
brackets of Eq. (11). The eigenpotentials $V_{k}(\bm{r})$ provide the
preferred physical basis: only in this basis can the reflected Green’s
function be written in the diagonal form of Eq. (23). A similar decomposition of
the Green’s function can be also obtained in the retarded case when using
quasinormal modes [30, 31, 32], as will be discussed below. For this reason,
from here on we use the more general term resonance modes rather than
geometric eigenmodes, for which we have developed our theory so far.
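As an illustration, the diagonal structure of Eq. (23) can be sketched numerically. The mode data below are random placeholders rather than actual eigenpotentials; the sketch only demonstrates the structure of the decomposition.

```python
import numpy as np

rng = np.random.default_rng(2)
npts, nmodes = 50, 5
V = rng.standard_normal((nmodes, npts))   # V_k(r) sampled at npts positions
M = rng.standard_normal(nmodes)           # real parts M_k
L = rng.uniform(0.1, 1.0, nmodes)         # lineshape functions L_k > 0 (lossy media)

# Eq. (23): G_refl(r, r') = -sum_k V_k(r) (M_k + i L_k) V_k(r')
G_refl = -np.einsum('kr,k,ks->rs', V, M + 1j * L, V)

# symmetric (not Hermitian), in line with the reciprocity theorem of optics
print(np.allclose(G_refl, G_refl.T))      # True
```

Because the weights $(M_k+iL_k)$ multiply identical potentials on both sides, the resulting matrix is symmetric rather than Hermitian, which reflects the reciprocity argument given further below.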
With these modes, both the eels loss probability of Eq. (12) as well as the
power dissipation of an oscillating dipole, Eq. (14), can be written as the
sum over individual loss channels. With any other basis one would obtain some
kind of mixing between different modes. This particular form has the
additional advantage that the lineshape function $L_{k}$ is always positive,
at least for lossy materials, which can be used in our optimization procedure
as a constraint, see Appendix A. Note that our tomography scheme only allows
for the reconstruction of $L_{k}$, which accounts for the loss properties of
the nanophotonic environment, but not for the propagation properties described
by $M_{k}$. As eels and ldos account for energy losses of electrons and
oscillating dipoles, respectively, this is not a problem here. However,
additional experimental input or a reconstruction for various loss energies
together with a Kramers-Kronig analysis would be needed for a reconstruction of
$M_{k}$.
To summarize this part, resonance modes are needed to formulate the abstract
theory, without making contact to the actual shape or composition of the
nanoparticle. Without resonance modes it would be unclear which properties of
the nanophotonic environment govern eels and ldos, and which properties can be
reconstructed using an inverse scheme. However, at no point in our approach do
we require explicit knowledge of the actual form of the resonance modes or
lineshape functions.
#### IV.1.2 Reference modes
The reference modes are the device needed for the actual reconstruction. They
allow for a suitable parametrization of the nanophotonic environment, where
the viable parameters can be extracted from the optimization loop using the
experimental and reprojected eels maps. In principle, for a complete basis the
choice of the reference modes is irrelevant. However, in all practical cases
one has to truncate the basis; the truncation should thus be based on an
educated guess and should include the gross features of the expected resonance
modes from the outset.
#### IV.1.3 Boundary element method approach
Although all our calculations presented here and elsewhere [13, 15, 16, 17,
18] have been performed using a boundary element method (bem) approach, it
does not play an exceptional role in our tomography scheme. The reference modes
are fixed by specifying their values on a properly chosen reference boundary,
see Fig. 9. Away from the boundary the modes propagate according to Laplace’s
equation, see Eq. (10), or the source-free Maxwell’s equations in the retarded
case. This propagation is reminiscent of Huygens’ principle for the wavefront
propagation in free space and can be well described within bem, but otherwise
our tomography makes no particular use of it.
### IV.2 Frequently asked questions
All our cards are on the table now. Up to here, we have presented and examined
our tomography approach in some detail, and have put it into a broader
context. However, a number of open or not fully clear issues remain. In the
following we address these issues in the form of frequently asked questions.
As will become apparent, only some of these questions can be answered
definitely while others remain open. In this sense, the following discussion
is meant to summarize our present understanding of the field, to make aware
where things can go wrong, and to identify directions for future research.
How much preknowledge is needed? Any tomography or inverse scheme requires
some sort of preknowledge; less preknowledge usually makes an approach more
general and powerful. In our case, we assume that the nanophotonic environment
can be expressed in terms of resonance modes, and that the potentials away
from the boundary propagate as solutions of Laplace’s equation.
How many reference modes (and which) are required? For any practical
reconstruction one has to truncate the basis. The proper choice and truncation
of the reference basis thus enters as an additional preknowledge. For
spectrally isolated resonances often a few tens of modes suffice, while in
other cases up to a hundred modes might be needed. We are not aware of any
general approach for determining the correct number of reference modes, so we
advise potential users to vary the number and to determine the best cutoff
parameter on a case-by-case basis.
Are more reference modes always better? More modes slow down the optimization
and require more iterations until reaching convergence. With the simulated
eels data used in this paper the quality of the reconstruction did not depend
decisively on the truncation number. However, things might change for noisy
experimental data because higher modes come along with spatially more
localized potential variations, and at some point one might end up
reconstructing these noisy features rather than the real physical ones.
What is the typical computational cost? Depending on the truncation number,
typical optimization times range from one to several minutes on a normal
computer. The code developments on top of a bem solver, such as the nanobem
one [27], are moderate. In the future we consider publishing our code to make
it accessible to interested users.
Are there differences between eels and ldos? The question is somewhat odd:
obviously, eels accounts for the energy loss of swift electrons, whereas ldos
accounts for the enhancement of the decay rate of oscillating dipoles. On the
other hand, the interaction potential of the swift electron with the
nanoparticle has a $\log(r)$ spatial dependence, while the dipole has a
$\nicefrac{{1}}{{r^{2}}}$ dependence. For this reason, eels maps are governed
by the long-range features of the potential and ldos maps by the short-range
features. The prediction of ldos maps from experimental eels maps is thus a
challenging and difficult task, and the good agreement between simulated and
reconstructed ldos maps reported in this work should not be taken for granted.
Does the optimization always succeed? In all reconstructions considered in
this work the optimization algorithm ended up in a minimum. However, there is
no guarantee that this is a global minimum. Our results never depended
decisively on the initial values for $L_{k}$, $\mathbb{Q}$, which we set equal
to one. Note that zeros would be a bad choice because the derivatives of the
cost function with respect to the optimization parameters would then vanish.
We generally recommend using quasi-Newton
optimizations rather than conjugate gradient ones, because they typically
access larger portions of the parameter space.
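The remark on zero initial values can be made concrete with a toy example. The cost function below is a hypothetical stand-in, assuming only the parameterization $L_{k}=s_{k}^{2}$ of Appendix A; it is not the actual cost function of Eq. (21).

```python
import numpy as np

# hypothetical toy cost: fit loss weights L_k = s_k^2 to target values
target = np.array([0.7, 0.2, 0.1])

def cost(s):
    return np.sum((s**2 - target)**2)

def grad(s):
    # chain rule: dC/ds_k = 2 (s_k^2 - target_k) * 2 s_k; the overall
    # factor s_k makes the gradient vanish identically at s = 0
    return 4.0 * s * (s**2 - target)

print(np.allclose(grad(np.zeros(3)), 0.0))    # True: s = 0 stalls the optimizer
print(np.linalg.norm(grad(np.ones(3))) > 0)   # True: s = 1 gives a usable direction
```

Any gradient-based optimizer started at $s=0$ therefore never moves, whereas $s=1$ provides a finite search direction.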
How much experimental input is needed? We will not give too much advice on the
experiments here; interested readers might consult our previous work [16, 17,
18] to see what worked for us. Depending on the electron microscope,
contamination might play a role and might limit the amount of experimental
data. As has been discussed before, our tomography seems to work best for
regions through which swift electrons have traveled. The reconstruction of
blind spots is possible, but the results should be handled with care.
How to address retardation? For surface phonon polaritons the quasistatic
approximation works perfectly, but things might be more problematic for the
reconstruction of other surface modes, such as surface plasmon polaritons. In
the past we have developed a methodology for surface plasmon tomography
including retardation, and have applied the scheme to experimental eels data
[15, 16, 17]. There we used a bi-orthogonal basis, which shares many features
with the resonance modes presented here, but provides no justification for a
strictly positive lineshape function. For this reason, we opted for a
compressed sensing optimization that favors expansions with as few modes as
possible, where luckily all of them contributed with a positive weight to the
loss probability. In light of our present analysis, we suggest a slight
modification of our previous approach. First, in the retarded case the
preferred basis is given by quasinormal modes [30, 31, 32, 27], which have
received considerable interest recently. With these modes we can decompose the
dyadic Green’s tensor for the full Maxwell’s equations in a form similar to
Eq. (23), and a tomographic reconstruction should be possible along the lines
sketched in the present work. There remain a number of open issues, such as
the proper choice of the reference modes or the consideration of complex mode
functions, but we do not foresee any major roadblock. As a side remark, it is
no surprise that the decomposition of the reflected Green’s function in terms
of resonance modes looks similar in the quasistatic and retarded case: such
decompositions are in the spirit of generic singular value decompositions,
where the special structure is due to the symmetry property of Green’s
function originating from the reciprocity theorem of optics.
How to address membranes and grids? Membranes or grids are needed in
experiment to support the nanoparticle. One might wonder about the
consequences of such a support in our tomographic reconstruction. First,
modifications of the resonance energies or surface charge distributions of the
surface phonon polaritons can be already properly accounted for with the
present approach, as has been demonstrated in [18]. However, in principle also
the free-space Green’s function of Eq. (5) should be modified to account for
the dielectric environment in presence of a support. This modified Green’s
function should be used in Eq. (10) to propagate away the potentials from the
boundary. It is the resulting modification of the electron-nanoparticle
interaction that has to be considered. Whether this modification has a
noticeable influence on the results remains to be seen.
To summarize, eels tomography has become a successful scheme for
reconstructing the three-dimensional photonic environment of nanoparticles
with high spatial and energy resolution. In the past, several case studies for
plasmonic and photonic nanostructures have provided beautiful results, which
would have been hard to achieve with other techniques. Yet, we feel that there
is still enough room for improvements and further investigations. In this
paper we have given an in-depth study of a prototypical nanophotonic system
and have demonstrated that tomographic reconstructions work reliably and
without major difficulties, at least for systems where the quasistatic
approximation can be employed. We hope that this will motivate more research
groups to enter the field, to investigate their systems with the tools
presented here, and to continue developing eels tomography with further
improvements.
## Acknowledgements
This project has received funding from the European Union’s Horizon 2020
Research and Innovation Program under grant agreement 823717 (ESTEEM3), from
the Austrian Science Fund FWF under project P 31264 and by NAWI Graz.
## Appendix A
In this appendix we first derive Eq. (15). We denote the nanoparticle boundary
with $\partial\Omega$ and the geometric eigenmodes with $u_{k}(\bm{s})$. These
eigenmodes form a complete set of basis functions and fulfill the
orthogonality relation [22, 23, 2]
$\oint_{\partial\Omega}\frac{u_{k}(\bm{s})u_{k^{\prime}}(\bm{s}^{\prime})}{4\pi|\bm{s}-\bm{s}^{\prime}|}\,dSdS^{\prime}=\delta_{kk^{\prime}}\,.$
(24)
Let $u^{0}_{\ell}(\bm{s})$ be the reference basis functions, which are
assumed to fulfill a similar orthogonality relation,
$\oint_{\partial\Omega}\frac{u^{0}_{\ell}(\bm{s})u^{0}_{\ell^{\prime}}(\bm{s}^{\prime})}{4\pi|\bm{s}-\bm{s}^{\prime}|}\,dSdS^{\prime}=\delta_{\ell\ell^{\prime}}\,.$
(25)
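In a discretized setting, an orthogonality relation of this type can be imposed numerically with a Gram-Schmidt procedure that uses the kernel-weighted inner product. The following sketch uses a random symmetric positive-definite matrix S as a stand-in for the discretized kernel $1/(4\pi|\bm{s}-\bm{s}^{\prime}|)$.

```python
import numpy as np

def gram_schmidt(U, S):
    """Orthonormalize the columns of U with respect to the inner
    product <u, v> = u^T S v, the discrete analog of Eq. (25)."""
    U = U.astype(float).copy()
    for k in range(U.shape[1]):
        for j in range(k):                         # project out previous modes
            U[:, k] -= (U[:, j] @ S @ U[:, k]) * U[:, j]
        U[:, k] /= np.sqrt(U[:, k] @ S @ U[:, k])  # normalize
    return U

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
S = A @ A.T + 6.0 * np.eye(6)   # symmetric positive-definite stand-in kernel
U0 = gram_schmidt(rng.standard_normal((6, 4)), S)

print(np.allclose(U0.T @ S @ U0, np.eye(4)))   # True: orthonormal w.r.t. S
```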
This can always be achieved for a set of basis functions using a
Gram-Schmidt-type orthogonalization. As $u^{0}_{\ell}(\bm{s})$ form a complete basis, we
can expand the eigenmodes via
$u_{k}(\bm{s})=\sum_{\ell}\mathbb{Q}_{k\ell}u_{\ell}^{0}(\bm{s})\,.$ (26)
Inserting this expression into Eq. (24) and using the orthogonality relation
of Eq. (25), we then immediately observe that $\mathbb{Q}$ is an orthogonal
matrix. In our computational approach we employ Cayley’s parameterization for
orthogonal matrices
$\mathbb{Q}=\big{(}\openone+\mathbb{X}\big{)}\big{(}\openone-\mathbb{X}\big{)}^{-1}\,,$
(27)
where $\mathbb{X}$ is a skew-symmetric matrix with $x_{ij}=-x_{ji}$. Eq. (27)
has the advantage that one can perform the derivative
$\nicefrac{{\partial\mathbb{Q}}}{{\partial x_{ij}}}$ analytically. To show
this, we start from
$\frac{\partial\mathbb{Q}}{\partial x}=\frac{\partial\mathbb{X}}{\partial
x}\big{(}\openone-\mathbb{X}\big{)}^{-1}+\big{(}\openone+\mathbb{X}\big{)}\frac{\partial}{\partial
x}\big{(}\openone-\mathbb{X}\big{)}^{-1}\,,$ (28)
where for notational clarity we have suppressed the subscripts of $x$. To
evaluate the second term on the right hand side, we differentiate
$(\openone-\mathbb{X})(\openone-\mathbb{X})^{-1}=\openone$ with respect to
$x$. After some manipulations this leads to
$\frac{\partial}{\partial
x}\big{(}\openone-\mathbb{X}\big{)}^{-1}=\big{(}\openone-\mathbb{X}\big{)}^{-1}\frac{\partial\mathbb{X}}{\partial
x}\big{(}\openone-\mathbb{X}\big{)}^{-1}\,.$
Insertion into Eq. (28) gives
$\frac{\partial\mathbb{Q}}{\partial x}=\frac{\partial\mathbb{X}}{\partial
x}\big{(}\openone-\mathbb{X}\big{)}^{-1}+\big{(}\openone+\mathbb{X}\big{)}\big{(}\openone-\mathbb{X}\big{)}^{-1}\frac{\partial\mathbb{X}}{\partial
x}\big{(}\openone-\mathbb{X}\big{)}^{-1}\,.$
We next use
$(\openone+\mathbb{X})(\openone-\mathbb{X})^{-1}=(\openone-\mathbb{X})^{-1}(\openone+\mathbb{X})$,
which can be easily proven using
$\openone+\mathbb{X}=2\openone-(\openone-\mathbb{X})$ and that both terms on
the right hand side commute with $(\openone-\mathbb{X})^{-1}$. We then arrive
at our final expression
$\frac{\partial\mathbb{Q}}{\partial
x}=2\big{(}\openone-\mathbb{X}\big{)}^{-1}\frac{\partial\mathbb{X}}{\partial
x}\big{(}\openone-\mathbb{X}\big{)}^{-1}\,.$ (29)
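Eqs. (27) and (29) are easily checked numerically. The snippet below is a stand-alone sketch with a random skew-symmetric matrix, not part of the actual optimization code; it verifies that the Cayley map produces an orthogonal matrix and that the analytic derivative agrees with a finite-difference estimate.

```python
import numpy as np

def cayley(X):
    """Eq. (27): Q = (1 + X)(1 - X)^(-1) for skew-symmetric X."""
    I = np.eye(X.shape[0])
    return (I + X) @ np.linalg.inv(I - X)

def dQ_dx(X, i, j):
    """Eq. (29): dQ/dx_ij = 2 (1 - X)^(-1) dX/dx_ij (1 - X)^(-1)."""
    n = X.shape[0]
    dX = np.zeros((n, n))
    dX[i, j], dX[j, i] = 1.0, -1.0            # skew-symmetric unit perturbation
    inv = np.linalg.inv(np.eye(n) - X)
    return 2.0 * inv @ dX @ inv

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
X = 0.1 * (A - A.T)                           # skew-symmetric: x_ij = -x_ji

Q = cayley(X)
print(np.allclose(Q.T @ Q, np.eye(4)))        # True: Q is orthogonal

# finite-difference check of the analytic derivative of Eq. (29)
h = 1e-6
dX = np.zeros((4, 4)); dX[0, 1], dX[1, 0] = h, -h
fd = (cayley(X + dX) - cayley(X - dX)) / (2.0 * h)
print(np.allclose(fd, dQ_dx(X, 0, 1), atol=1e-8))   # True
```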
In our optimization of the cost function we express the lineshape function
through $L_{k}=s_{k}^{2}$, which guarantees that $L_{k}$ is always positive.
The optimization algorithm can be significantly accelerated by providing in
addition to the value of the cost function also the derivatives with respect
to the optimization parameters. Using Eq. (20) together with Eq. (29), the
derivative of the cost function (21) with respect to the parameters $s_{k}$
and $x_{k\ell}$ of the skew-symmetric matrix can be obtained analytically.
Things are considerably easier for a non-orthogonal matrix where the
derivatives with respect to the matrix elements can be performed
straightforwardly.
## References
* Novotny and Hecht [2006] L. Novotny and B. Hecht, _Principles of Nano-Optics_ (Cambridge University Press, Cambridge, 2006).
* Hohenester [2020] U. Hohenester, _Nano and Quantum Optics_ (Springer, Cham, Switzerland, 2020).
* Maier [2007] S. A. Maier, _Plasmonics: Fundamentals and Applications_ (Springer, Berlin, 2007).
* Kliewer and Fuchs [1974] K. Kliewer and R. Fuchs, _Theory of Dynamical Properties of Dielectric Surfaces_ , vol. 27 (Wiley, Weinheim, 1974).
* Caldwell et al. [2015] J. D. Caldwell, L. Lindsay, V. Giannini, I. Vurgaftman, T. L. Reinecke, S. A. Maier, and O. J. Glembocki, Nanophotonics 4, 44 (2015).
* Barbillon [2019] H. Barbillon, Materials 12, 1502 (2019).
* Nelayah et al. [2007] J. Nelayah, M. Kociak, O. Stephan, F. J. García de Abajo, M. Tence, L. Henrard, D. Taverna, I. Pastoriza-Santos, L. M. Liz-Marzán, and C. Colliex, Nature Phys. 3, 348 (2007).
* Kociak and Stephan [2014] M. Kociak and O. Stephan, Chem. Soc. Rev. 53, 3865 (2014).
* Colliex et al. [2016] C. Colliex, M. Kociak, and O. Stephan, Ultramicroscopy 162, A1 (2016).
* Polman et al. [2019] A. Polman, M. Kociak, and F. J. García de Abajo, Nature Materials 18, 1158 (2019).
* García de Abajo [2010] F. J. García de Abajo, Rev. Mod. Phys. 82, 209 (2010).
* Midgley and Dunin-Borkowski [2009] P. A. Midgley and R. E. Dunin-Borkowski, Nat. Mater. 8, 271 (2009).
* Hörl et al. [2013] A. Hörl, A. Trügler, and U. Hohenester, Phys. Rev. Lett. 111, 086801 (2013).
* Nicoletti et al. [2013] O. Nicoletti, F. de la Pena, R. W. Leary, D. J. Holland, C. Ducati, and P. A. Midgley, Nature 502, 80 (2013).
* Hörl et al. [2015] A. Hörl, A. Trügler, and U. Hohenester, ACS Photonics 2, 1429 (2015).
* Hörl et al. [2017] A. Hörl, G. Haberfehlner, A. Trügler, F. Schmidt, U. Hohenester, and G. Kothleitner, Nature Commun. 8, 37 (2017).
* Haberfehlner et al. [2017] G. Haberfehlner, F. P. Schmidt, G. Schaffernak, A. Hörl, A. Trügler, A. Hohenau, J. R. Krenn, U. Hohenester, and G. Kothleitner, Nano. Lett. 17, 6773 (2017).
* Li et al. [2021] X. Li, G. Haberfehlner, U. Hohenester, O. Stephan, G. Kothleitner, and M. Kociak, Science 371, 1364 (2021).
* García de Abajo and Kociak [2008] F. J. García de Abajo and M. Kociak, Phys. Rev. Lett. 100, 106804 (2008).
* Schuller et al. [2010] J. A. Schuller, E. S. Barnard, W. Cai, Y. C. Jun, J. S. White, and M. L. Brongersma, Nature Mat. 9, 193 (2010).
* Jackson [1999] J. D. Jackson, _Classical Electrodynamics_ (Wiley, New York, 1999).
* Ouyang and Isaacson [1989] F. Ouyang and M. Isaacson, Phil. Mag. B 60, 481 (1989).
* Boudarham and Kociak [2012] G. Boudarham and M. Kociak, Phys. Rev. B 85, 245447 (2012).
* Press et al. [2002] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, _Numerical Recipes in C++: The Art of Scientific Computing_ (Cambridge Univ. Press, Cambridge, 2002), 2nd ed.
* Lourenço-Martins and Kociak [2017] H. Lourenço-Martins and M. Kociak, Phys. Rev. X 7, 041059 (2017).
* Trügler et al. [2011] A. Trügler, J. C. Tinguely, J. R. Krenn, A. Hohenau, and U. Hohenester, Phys. Rev. B 83, 081412(R) (2011).
* Hohenester et al. [2022] U. Hohenester, N. Reichelt, and G. Unger, Comput. Phys. Commun. 276, 108337 (2022).
* Hohenester et al. [2018] U. Hohenester, A. A. Trügler, P. E. Batson, and M. J. Lagos, Phys. Rev. B 97, 165418 (2018).
* Bobenko and Springborn [2007] A. I. Bobenko and B. A. Springborn, Discrete Comput. Geom. 38, 740 (2007).
* Leung et al. [1994] P. T. Leung, S. Y. Liu, and K. Young, Phys. Rev. A 49, 3057 (1994).
* Lalanne et al. [2019] P. Lalanne, W. Yan, A. Gras, C. Sauvan, J.-P. Hugonin, M. Besbes, G. Demesy, M. D. Truong, B. Gralak, F. Zolla, et al., J. Opt. Soc. Am. A 36, 686 (2019).
* Kristensen et al. [2020] P. T. Kristensen, K. Herrmann, F. Intravaia, and K. Busch, Adv. Opt. Photon. 12, 612 (2020).
# Effects of the Central Mass Concentration on Bar Formation in Disk Galaxies
Dajeong Jang Department of Physics $\&$ Astronomy, Seoul National University,
Seoul 08826, Republic of Korea Woong-Tae Kim Department of Physics $\&$
Astronomy, Seoul National University, Seoul 08826, Republic of Korea SNU
Astronomy Research Center, Seoul National University, Seoul 08826, Republic of
Korea
###### Abstract
While bars are common in disk galaxies, their formation conditions are not
well understood. We use $N$-body simulations to study bar formation and
evolution in isolated galaxies consisting of a stellar disk, a classical
bulge, and a dark halo. We consider 24 galaxy models that are similar to the
Milky Way but differ in the mass and compactness of the classical bulge and
halo concentration. We find that the bar formation requires
$({Q_{T,\text{min}}}/1.2)^{2}+(\text{CMC}/0.05)^{2}\lesssim 1$, where
${Q_{T,\text{min}}}$ and CMC refer to the minimum value of the Toomre
stability parameter and the central mass concentration, respectively. Bars
tend to be stronger, longer, and rotate slower in galaxies with a less massive
and less compact bulge and halo. All bars formed in our models correspond to
slow bars. A model with the bulge mass of $\sim 10$–$20$% of the disk under a
concentrated halo produces a bar similar to the Milky Way bar. We discuss our
findings in relation to other bar formation criteria suggested by previous
studies.
Disk Galaxies (391), Milky Way Galaxy (1054), Galaxy Bulges (578), Galaxy
Disks (589), Barred Spiral Galaxies (136), Galaxy Bars (2364)
## 1 Introduction
Bars are common in the universe. More than $\sim 60\%$ of disk galaxies in
optical and near-infrared images are known to possess a weak or strong bar in
the local universe (de Vaucouleurs, 1963; Sellwood & Wilkinson, 1993; Knapen
et al., 2000; Whyte et al., 2002; Laurikainen et al., 2004; Marinova & Jogee,
2007; Menéndez-Delmestre et al., 2007; Aguerri et al., 2009; Méndez-Abreu et
al., 2012; Buta et al., 2015; Díaz-García et al., 2016, 2019). The fraction of
barred disk galaxies decreases with redshift (Sheth et al., 2008; Melvin et
al., 2014), with a tendency that more massive galaxies are more likely barred.
Bar formation appears inhibited in dispersion-dominated galaxies and in halo-
dominated galaxies at low redshift (Sheth et al., 2012). These indicate that
the bar formation occurs preferentially in a late secular phase of galaxy
formation when the disks become dynamically cold (Kraljic et al., 2012).
Theoretically, bar formation is due to gravitational instability of a
rotationally-supported stellar disk (Toomre, 1964): non-axisymmetric
perturbations grow via swing amplification and initially circular stellar
orbits are deformed to elongated $x_{1}$ orbits that form and support a bar
(see, e.g., Sellwood 2014). A number of simulations have shown that the
presence of a dark halo affects the bar formation and evolution (Ostriker &
Peebles, 1973; Hohl, 1976; Debattista & Sellwood, 2000; Valenzuela & Klypin,
2003; Holley-Bockelmann et al., 2005; Weinberg & Katz, 2007). While the
gravity of a halo tends to suppress the bar formation by reducing the relative
strength of the disk’s self-gravity in equilibrium (Ostriker & Peebles, 1973),
angular momentum exchange between a bar and a live halo allows the former to
grow longer and stronger (Athanassoula, 2002). Also, the halo parameters such
as the axial ratio (Athanassoula, 2002; Athanassoula et al., 2013) and spin
(Collier et al., 2018, 2019; Kataria & Shen, 2022) lead to considerable
changes in the bar evolution.
In addition to a halo, a classical bulge can also strongly affect the bar
formation and evolution. Classical bulges are produced as a result of
major/minor mergers during galaxy formation (Kauffmann et al., 1993; Baugh et
al., 1996; Hopkins et al., 2009; Naab et al., 2014; Bournaud et al., 2007;
Hopkins et al., 2010). Unlike halos, classical bulges are highly centrally
concentrated and can thus stabilize the inner regions of disks, without
affecting the outer regions much. Early studies found that a strong bulge
suppresses swing amplification by interrupting a feedback loop that transforms
propagating trailing waves to leading ones (Sellwood, 1980; Toomre, 1981;
Binney & Tremaine, 2008), inhibiting bar formation (e.g., Saha & Elmegreen
2018; Kataria & Das 2018). Also, a live bulge can make a bar longer and
stronger by removing angular momentum from the latter, just like a live halo
(Sellwood, 1980). A bar that forms can increase a central mass, for example,
by driving gas inflows (e.g., Athanassoula, 1992; Buta & Combes, 1996; Kim et
al., 2012), which in turn weakens or destroys the bar by disturbing bar-
supporting $x_{1}$ orbits (e.g., Pfenniger & Norman, 1990; Hasan et al., 1993;
Norman et al., 1996; Shen & Sellwood, 2004; Bournaud et al., 2005;
Athanassoula et al., 2013).
While many numerical studies mentioned above are useful to understand the
effects of a bulge and a halo on the bar formation and evolution, the
quantitative conditions for bar-forming instability have still been under
debate. Using numerical models with a fixed halo and a disk with surface
density $\Sigma_{d}\propto R^{-1}$, Ostriker & Peebles (1973) suggested that
bar formation requires
$t_{\text{OP}}\equiv T/|W|>0.14,$ (1)
where $T$ and $W$ stand for the total rotational and gravitational potential
energies of a galaxy, respectively. Using two-dimensional (2D) models with a
fixed halo and an exponential disk, Efstathiou et al. (1982) showed that a bar
forms in galaxies with
$\epsilon_{\text{ELN}}\equiv\frac{V_{\rm max}}{(GM_{d}/R_{d})^{1/2}}<1.1,$ (2)
where $V_{\rm max}$, $M_{d}$, and $R_{d}$ refer to the maximum rotational
velocity, mass, and scale radius of the disk, respectively.
Only in recent years have galaxy models for bar formation treated all
three components (disk, bulge, and halo) as live (Polyachenko et al.,
2016; Salo & Laurikainen, 2017; Saha & Elmegreen, 2018; Fujii et al., 2018;
Kataria & Das, 2018, 2019; Kataria et al., 2020). In particular, Kataria & Das
(2018) used self-consistent $N$-body simulations with differing bulge masses,
and showed that bar formation requires that the ratio of the bulge to total
radial force initially satisfies
$\mathcal{F}_{\text{KD}}\equiv\frac{GM_{b}}{R_{d}V_{\text{tot}}^{2}}<0.35,$
(3)
where $M_{b}$ is the bulge mass and $V_{\text{tot}}$ is the total rotational
velocity at $R=R_{d}$. Using three-component galaxy models with differing disk
and bulge densities, Saha & Elmegreen (2018) argued that their models evolve
to barred galaxies provided
$\mathcal{D}_{\text{SE}}\equiv\frac{\left<\rho_{b}\right>}{\left<\rho_{d}\right>}<\frac{1}{\sqrt{10}},$
(4)
where $\left<\rho_{b}\right>$ and $\left<\rho_{d}\right>$ are the mean
densities of the bulge and disk, respectively, within the half-mass radius of
the bulge.
The several different conditions given above show that there is no consensus
on a quantitative criterion for bar formation. Part of the reason for the
discrepancies among the proposed conditions may be that some models considered
a fixed (rather than live) halo, and that some authors explored parameter
space while fixing either bulge or halo parameters. Also, it
is questionable whether the effects of the complicated physical processes
(swing amplification and feedback loop) involved in the bar formation can be
encapsulated by the single parameters given above. In this paper, we revisit
the issue of bar formation by varying both bulge and halo parameters
altogether. Our models will be useful to clarify what conditions are necessary
to produce a bar when the mass and compactness of the bulge and halo vary. We
will show that the two key elements that govern the bar formation are the
minimum value of the Toomre stability parameter ${Q_{T,\text{min}}}$ and the
central mass concentration (CMC), defined as the total galaxy mass inside the
central $0.1\;{\rm kpc}$ relative to the total disk mass: bars form more
easily in galaxies with smaller ${Q_{T,\text{min}}}$ and CMC. We also measure
the strength, length, and pattern speed of the bars that form in our
simulations and explore their dependence on the halo and bulge parameters.
This paper is organized as follows. In Section 2, we describe our galaxy
models and numerical methods we employ. In Section 3, we present temporal
changes of the bar properties such as bar strength, pattern speed, length, and
angular momentum transfer from a disk to halo and bulge. In Section 4, we
compare our numerical results with the previous bar formation conditions
mentioned above, and propose the new conditions in terms of
${Q_{T,\text{min}}}$ and CMC. We also use our numerical model to constrain the
classical bulge of the Milky Way. Finally, we conclude our work in Section 5.
Table 1: Model parameters and various dimensionless quantities of the initial galaxy models

Model | $a_{h}$ | $M_{b}/M_{d}$ | $a_{b}$ | ${Q_{T,\text{min}}}$ | CMC | $t_{\text{OP}}$ | $\epsilon_{\text{ELN}}$ | $\mathcal{F}_{\text{KD}}$ | $\mathcal{D}_{\text{SE}}$
---|---|---|---|---|---|---|---|---|---
| (kpc) | | (kpc) | | | | | |
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10)
C00 | 30 | 0.0 | 0.4 | 1.062 | $0.04\times 10^{-2}$ | 0.449 | 0.89 | 0.0 | 0.0
C05 | 30 | 0.05 | 0.4 | 1.079 | $0.24\times 10^{-2}$ | 0.449 | 0.90 | 0.10 | 0.67
C10 | 30 | 0.1 | 0.4 | 1.085 | $0.44\times 10^{-2}$ | 0.447 | 0.91 | 0.19 | 1.30
C20 | 30 | 0.2 | 0.4 | 1.110 | $0.84\times 10^{-2}$ | 0.446 | 0.92 | 0.33 | 2.60
C30 | 30 | 0.3 | 0.4 | 1.140 | $1.22\times 10^{-2}$ | 0.445 | 0.94 | 0.44 | 3.90
C40 | 30 | 0.4 | 0.4 | 1.164 | $1.62\times 10^{-2}$ | 0.444 | 0.95 | 0.52 | 5.20
L00 | 40 | 0.0 | 0.4 | 0.954 | $0.04\times 10^{-2}$ | 0.442 | 0.80 | 0.0 | 0.0
L05 | 40 | 0.05 | 0.4 | 0.961 | $0.24\times 10^{-2}$ | 0.441 | 0.81 | 0.12 | 0.64
L10 | 40 | 0.1 | 0.4 | 0.975 | $0.44\times 10^{-2}$ | 0.440 | 0.82 | 0.22 | 1.30
L20 | 40 | 0.2 | 0.4 | 1.000 | $0.84\times 10^{-2}$ | 0.439 | 0.84 | 0.37 | 2.61
L30 | 40 | 0.3 | 0.4 | 1.023 | $1.24\times 10^{-2}$ | 0.438 | 0.86 | 0.49 | 3.91
L40 | 40 | 0.4 | 0.4 | 1.046 | $1.64\times 10^{-2}$ | 0.437 | 0.89 | 0.58 | 5.19
L50 | 40 | 0.5 | 0.4 | 1.056 | $2.06\times 10^{-2}$ | 0.436 | 0.99 | 0.64 | 6.05
C05c | 30 | 0.05 | 0.2 | 1.082 | $0.58\times 10^{-2}$ | 0.447 | 0.90 | 0.10 | 3.01
C10c | 30 | 0.1 | 0.2 | 1.086 | $1.16\times 10^{-2}$ | 0.446 | 0.91 | 0.18 | 5.76
C20c | 30 | 0.2 | 0.2 | 1.106 | $2.26\times 10^{-2}$ | 0.442 | 0.92 | 0.32 | 11.30
C30c | 30 | 0.3 | 0.2 | 1.127 | $3.34\times 10^{-2}$ | 0.439 | 1.07 | 0.42 | 16.80
C40c | 30 | 0.4 | 0.2 | 1.158 | $4.40\times 10^{-2}$ | 0.437 | 1.23 | 0.50 | 22.21
L05c | 40 | 0.05 | 0.2 | 0.961 | $0.58\times 10^{-2}$ | 0.439 | 0.81 | 0.12 | 2.89
L10c | 40 | 0.1 | 0.2 | 0.972 | $1.16\times 10^{-2}$ | 0.438 | 0.82 | 0.21 | 5.72
L20c | 40 | 0.2 | 0.2 | 0.999 | $2.26\times 10^{-2}$ | 0.435 | 0.88 | 0.36 | 11.28
L30c | 40 | 0.3 | 0.2 | 1.024 | $3.38\times 10^{-2}$ | 0.432 | 1.07 | 0.46 | 16.84
L40c | 40 | 0.4 | 0.2 | 1.049 | $4.48\times 10^{-2}$ | 0.430 | 1.23 | 0.54 | 22.06
L50c | 40 | 0.5 | 0.2 | 1.049 | $5.62\times 10^{-2}$ | 0.428 | 1.38 | 0.59 | 25.53
## 2 Galaxy Model and method
### 2.1 Galaxy Models
To study the effects of spheroidal components on the bar formation and
evolution in disk galaxies, we consider Milky Way-like, isolated galaxies. Our
galaxy models are three dimensional (3D), consisting of a dark matter halo, a
classical bulge, a stellar disk, and a central supermassive black hole.
For the stellar disk, we adopt the exponential-secant hyperbolic density
distribution
$\rho_{d}(R,z)=\frac{M_{d}}{4\pi
z_{d}R_{d}^{2}}\exp\left(-\frac{R}{R_{d}}\right){\rm
sech}^{2}\left(\frac{z}{z_{d}}\right),$ (5)
where $R$ is the cylindrical radius, $R_{d}$ is the disk scale radius, $z_{d}$
is the disk scale height, and $M_{d}$ is the total disk mass. We fix
$R_{d}=3\;{\rm kpc}$, $z_{d}=0.3\;{\rm kpc}$, and $M_{d}=5\times 10^{10}\;{\rm
M}_{\odot}$, similar to the Milky Way (Bland-Hawthorn & Gerhard 2016, Helmi
2020). Initially, we set the velocity anisotropy to
$f_{R}\equiv\sigma_{R}^{2}/\sigma_{z}^{2}=2.0$, where $\sigma_{R}$ and
$\sigma_{z}$ refer to the velocity dispersions of the disk particles in the
radial and vertical directions, respectively. We note that $f_{R}$ increases
with time as the disk evolves to form a bar, becoming similar to the observed
value of $f_{R}\sim 4$ near the solar neighborhood (e.g., Sharma et al. 2014;
Guiglion et al. 2015; Katz et al. 2018).
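The disk distribution in Equation 5 is separable in $R$ and $z$, so particle positions can be drawn by inverse-transform sampling. The sketch below is illustrative only (it is not the GALIC initialization, which additionally optimizes particle velocities): the radial law $p(R)\propto R\,e^{-R/R_d}$ is a Gamma(2) distribution, and the ${\rm sech}^2$ vertical CDF inverts analytically.

```python
import numpy as np

def sample_disk(n, R_d=3.0, z_d=0.3, seed=None):
    """Draw positions from the exponential-sech^2 disk of Equation 5.

    Radii follow p(R) ~ R exp(-R/R_d), i.e. a Gamma(2, R_d) law, and the
    vertical CDF of sech^2(z/z_d) inverts to z = z_d * arctanh(2u - 1).
    """
    rng = np.random.default_rng(seed)
    R = rng.gamma(shape=2.0, scale=R_d, size=n)          # radial inverse CDF
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)          # axisymmetric azimuth
    z = z_d * np.arctanh(2.0 * rng.uniform(size=n) - 1.0)
    return R * np.cos(phi), R * np.sin(phi), z
```

The sampled cloud has $\langle R\rangle=2R_d$ and $\langle|z|\rangle=z_d\ln 2$, which provides a quick sanity check of the draw.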
Figure 1: Radial distributions of the total rotational velocity
$v_{\textrm{rot}}$ for models C00, L00 (top), C10, L10 (middle), and C10c,
L10c (bottom). A more massive and compact bulge increases $v_{\textrm{rot}}$ at
small $R$. Models in the C series have higher $v_{\textrm{rot}}$, by $\sim
20\;{\rm km}\;{\rm s}^{-1}$ on average, than those in the L series.
For both halo and classical bulge, we take the Hernquist (1990) profile
$\rho(r)=\frac{M}{2\pi}\frac{a}{r(r+a)^{3}},$ (6)
where $r=(R^{2}+z^{2})^{1/2}$ is the spherical radius, and $M$ and $a$ denote
the mass and the scale radius of each component, respectively. For the halo,
we fix its mass to $M_{h}=1.35\times 10^{12}\;{\rm M}_{\odot}=26M_{d}$ and
consider two scale radii: a centrally concentrated halo with $a_{h}=30\;{\rm
kpc}$ and a less concentrated halo with $a_{h}=40\;{\rm kpc}$, which we term C
and L series, respectively. For the bulge, we vary the mass over
$M_{b}=(0$–$0.5)M_{d}$ and consider two scale radii, $a_{b}=0.2$ and $0.4\;{\rm
kpc}$. We place a supermassive black hole with mass $M_{\mathrm{BH}}=4\times
10^{6}\;{\rm M}_{\odot}$ at the galaxy center.
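The Hernquist profile of Equation 6 has a closed-form enclosed mass, $M(<r)=M\,r^2/(r+a)^2$, which makes both particle sampling and central-mass bookkeeping straightforward. A minimal sketch (the function names are ours, not GALIC's):

```python
import numpy as np

def hernquist_mass(r, M, a):
    """Enclosed mass of the Hernquist (1990) profile of Equation 6."""
    return M * r**2 / (r + a)**2

def sample_hernquist_r(n, a, seed=None):
    """Sample radii by inverting M(<r)/M = u, i.e. r = a*sqrt(u)/(1 - sqrt(u))."""
    s = np.sqrt(np.random.default_rng(seed).uniform(size=n))
    return a * s / (1.0 - s)
```

Setting $M(<r)=M/2$ gives the half-mass radius $r_{1/2}=(1+\sqrt{2})\,a$, the relation used for $R_b$ in Section 3.3.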
Table 1 lists the names and initial parameters of all models. Column (1) gives
the model names. Column (2) gives the scale radius of the halo, while Columns
(3) and (4) list the bulge mass relative to the disk mass and the bulge scale
radius, respectively. The prefix C and L in the model names stand for the
centrally concentrated and less concentrated halo, respectively. The number
after the prefix denotes the bulge mass relative to the total disk mass. The
postfix c implies a compact bulge: the models with and without the postfix
have $a_{b}=0.2\;{\rm kpc}$ and $0.4\;{\rm kpc}$, respectively. For example,
model L20c has a less-concentrated halo with $a_{h}=40\;{\rm kpc}$ and a
compact bulge with $M_{b}=0.2M_{d}$ and $a_{b}=0.2\;{\rm kpc}$. Column (5)
lists the minimum value of the Toomre stability parameter. Column (6) lists
the CMC. Columns (7)–(10) give the values for the quantities defined in
Equations 1 to 4. We take model C10 as our fiducial model.
Figure 2: Radial profiles of $Q_{T}$ for models with $M_{b}/M_{d}=0,0.1$. The
minimum value ${Q_{T,\text{min}}}$ occurs at $R\sim 4$–$6\;{\rm kpc}$, which
tends to be larger for a galaxy with a more concentrated halo and/or more
massive bulge.
Figure 1 plots the radial distributions of the total rotational velocity
$v_{\textrm{rot}}$ for selected models. The black and blue lines correspond to
the models in the C and L series, respectively. It is apparent that increasing
the bulge mass enhances the rotational velocity. Models with a compact bulge
have higher rotational velocity in the inner regions with $R\lesssim a_{b}$.
At $R\lesssim 20\;{\rm kpc}$, models in the C series have larger
$v_{\textrm{rot}}$, by $\sim 20\;{\rm km}\;{\rm s}^{-1}$ on average, than the
L series counterparts.
The gravitational susceptibility of a disk can be measured by the Toomre
(1964) stability parameter
$Q_{T}=\frac{\kappa\sigma_{R}}{3.36G\Sigma_{d}},$ (7)
where $\kappa$ is the epicycle frequency and $\Sigma_{d}$ is the disk surface
density. Figure 2 plots the radial distributions of $Q_{T}$ for models with
$M_{b}/M_{d}=0$ and $0.1$. Overall, $Q_{T}$ is large at both small $R$ (due to
the increase in $\kappa$) and large $R$ (due to the decrease in $\Sigma_{d}$) and
attains a minimum value ${Q_{T,\text{min}}}$ at $R\sim 4$–$6\;{\rm kpc}$. Our
galaxy models have ${Q_{T,\text{min}}}$ in the range between 0.95 and 1.16
(Table 1): ${Q_{T,\text{min}}}$ tends to be larger for a galaxy with a
centrally concentrated halo and/or more massive bulge, while it is almost
independent of the bulge compactness.
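Equation 7 can be evaluated directly once the epicycle frequency, radial dispersion, and surface density profiles are specified. The toy profiles below are assumptions for illustration (a flat rotation curve with $V=200\;{\rm km}\;{\rm s}^{-1}$ and an exponentially declining $\sigma_R$, not the paper's exact setup), but they reproduce the qualitative U-shape of $Q_T(R)$ with an interior minimum:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def toomre_q(sigma_R, kappa, Sigma_d):
    """Toomre stability parameter of Equation 7: Q_T = kappa*sigma_R/(3.36 G Sigma_d)."""
    return kappa * sigma_R / (3.36 * G * Sigma_d)

# Illustrative (assumed) profiles: exponential disk, flat rotation curve,
# and an exponentially declining radial velocity dispersion.
R = np.linspace(0.5, 15.0, 300)                       # kpc
R_d, M_d = 3.0, 5.0e10                                # disk values from Section 2.1
Sigma = M_d / (2.0 * np.pi * R_d**2) * np.exp(-R / R_d)
kappa = np.sqrt(2.0) * 200.0 / R                      # epicycle freq. for a flat curve
sigma_R = 100.0 * np.exp(-R / (2.0 * R_d))
Q = toomre_q(sigma_R, kappa, Sigma)
```

$Q_T$ diverges at small $R$ through $\kappa$ and at large $R$ through $1/\Sigma_d$, so the minimum sits at intermediate radii, as in Figure 2.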
### 2.2 Numerical Method
To construct the initial galaxy models, we make use of the GALIC code (Yurin &
Springel, 2014) which solves the collisionless Boltzmann equations to find a
desired equilibrium state by optimizing the velocities of individual
particles. We distribute $N_{d}=1.0\times 10^{6}$, $N_{b}=5\times
10^{4}$–$5\times 10^{5}$, and $N_{h}=2.6\times 10^{7}$ particles for the disk,
bulge, and halo, respectively. We set the mass of each particle to $m=5\times
10^{4}\;{\rm M}_{\odot}$, equal for all three components.
Figure 3: Snapshots of the disk surface density in model C10.
We evolve our galaxy models by using a public version of the Gadget-4 code
(Springel et al., 2021). This version has improved force accuracy, time-
stepping, computational efficiency, and parallel scalability compared to
Gadget-3. It offers the Fast Multipole Method, in which the tree walk is
accelerated by multipole expansion not only on the source side but also on the
sink side. For our galaxy models, we find the multipole expansion to order
$p=4$ to be fastest. In addition, the hierarchical time-integration scheme can
effectively reduce the computation time by constructing a tree only for the
set of particles involved
in the current force calculation. We take the force accuracy parameter
$\alpha=3\times 10^{-4}$ which conserves the total angular momentum within
$\sim 0.1$ percent (see below). The softening parameters for dark halo,
stellar disk, and bulge particles are set to $0.05\;{\rm kpc}$, $0.01\;{\rm
kpc}$, and $0.01\;{\rm kpc}$, respectively.
## 3 Results
In this section, we present the evolution of our models with a focus on the
temporal changes in the strength, pattern speed, and size of the bars that
form. The bar formation conditions will be discussed in Section 4.
### 3.1 Bar Formation and Strength
Figure 4: Snapshots of the disk surface density at $t=8.0\;{\rm Gyr}$ in all
the models. Each image is rotated such that the semi-major axis of a bar (or
an oval) is aligned parallel to the $x$-axis.
Figure 3 plots snapshots of the disk surface density for our fiducial model
C10. Figure 4 plots the snapshots of all the models at $t=8.0\;{\rm Gyr}$. In
model C10 with ${Q_{T,\text{min}}}=1.09$, non-axisymmetric perturbations
inherent in the particle distributions grow as they swing from leading to
trailing configurations (e.g., Binney & Tremaine 2008; Kim & Ostriker 2007;
Kwak et al. 2017; Seo et al. 2019), forming spiral arms at $t=0.5\;{\rm Gyr}$.
Since the inner Lindblad resonance (ILR) is weak in this model, trailing
spiral waves propagate toward the galaxy center to become leading waves at the
opposite side, which can grow further: successive swing amplifications
combined with multiple feedback loops eventually lead to a bar at $t\gtrsim
1.5\;{\rm Gyr}$. If ${Q_{T,\text{min}}}$ is sufficiently small, as in models C00 and
L00, the swing amplification is so vigorous that the inner parts of the spiral
arms are rapidly organized into a bar in less than $\sim 1\;{\rm Gyr}$. In
contrast, if ${Q_{T,\text{min}}}$ is large or the ILR is too strong, as in
models C40, C20c, and L30c, the feedback loop is blocked and the disks produce
only spirals, sometimes with an oval.
Figure 5: Temporal changes of the bar strength $A_{2}/A_{0}$ for models with
less compact bulges in the C series (top) and for models with
$M_{b}/M_{d}=0.1$ (bottom). Features with $A_{2}/A_{0}\leq 0.2$ are regarded
as ovals or spirals.
To quantify the bar strength, we first consider an annular region of the disk
centered at radius $R$ with width $\Delta R=1\;{\rm kpc}$ and calculate the
amplitudes of the $m=2$ Fourier modes as
$\displaystyle a_{2}(R)$ $\displaystyle=\sum_{i}m_{i}\cos{(2\theta_{i})},$
(8a) $\displaystyle b_{2}(R)$
$\displaystyle=\sum_{i}m_{i}\sin{(2\theta_{i})},$ (8b)
where $\theta_{i}$ and $m_{i}$ are the azimuthal angle and mass of the $i$-th
disk particle in the annulus, respectively. We then define the bar strength as
$\frac{A_{2}}{A_{0}}=\textrm{max}\left(\frac{\sqrt{a_{2}^{2}+b_{2}^{2}}}{\Sigma_{i}m_{i}}\right).$
(9)
Note that $A_{2}/A_{0}$ measures the strength of $m=2$ spirals when a bar is
absent or weak. For spirals, the position angle
$\psi(R)\equiv\frac{1}{2}\tan^{-1}\left(\frac{b_{2}}{a_{2}}\right)$ (10)
systematically varies with $R$, while $\psi(R)$ remains almost constant for a
bar.
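A minimal vectorized implementation of Equations 8–10 over radial annuli (the bin edges and function names are our choices):

```python
import numpy as np

def m2_profile(x, y, mass, R_edges):
    """Per-annulus m=2 relative amplitude and position angle (Eqs. 8-10)."""
    R, theta = np.hypot(x, y), np.arctan2(y, x)
    amp, psi = [], []
    for lo, hi in zip(R_edges[:-1], R_edges[1:]):
        s = (R >= lo) & (R < hi)
        a2 = np.sum(mass[s] * np.cos(2.0 * theta[s]))   # Eq. 8a
        b2 = np.sum(mass[s] * np.sin(2.0 * theta[s]))   # Eq. 8b
        amp.append(np.hypot(a2, b2) / np.sum(mass[s]))
        psi.append(0.5 * np.arctan2(b2, a2))            # Eq. 10
    return np.array(amp), np.array(psi)
```

The bar strength of Equation 9 is then the maximum of `amp` over the annuli; a bar shows `amp` $\geq 0.2$ together with a nearly constant `psi` across annuli, whereas spirals show a drifting `psi`.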
Following Algorry et al. (2017), we regard galaxies with $A_{2}/A_{0}\geq 0.2$
and relatively constant $\psi(R)$ as being barred: features with
$A_{2}/A_{0}<0.2$ are considered as ovals if $\psi(R)$ is constant or spirals
if $\psi(R)$ changes with $R$. Figure 5 plots temporal evolution of
$A_{2}/A_{0}$ for models with less compact bulges in the C series (upper
panel) and for models with $M_{b}/M_{d}=0.1$ (lower panel). The evolution of
$A_{2}/A_{0}$ is dominated by spirals at early times ($\lesssim 1$–$2\;{\rm
Gyr}$) and then by a bar. Although the spirals are strong in the outer
regions, they can affect the inner disk where a bar exists before it fully
grows (see the $t\leq 3\;{\rm Gyr}$ panels in Figure 3). The spirals and bar
rotate about the galaxy center at different pattern speeds. When the spirals
and bar come into phase, $A_{2}/A_{0}$ temporarily reaches a peak (at
$t=2.3\;{\rm Gyr}$ for model C10). Subsequently, $A_{2}/A_{0}$ decreases as
they drift out of phase, although it increases again as the bar grows further
and dominates the inner disk. A more massive bulge makes the bar form later
and become weaker. Bar formation is completely suppressed in
models with $M_{b}/M_{d}\geq 0.4$ in the C series.
The compactness of halo and bulge has a significant effect on the bar
formation. In the L series with a less concentrated halo, disks are unstable
to form a bar even when the bulge mass amounts to $\sim 50\%$ of the disk
mass. In contrast, disks in the C series with a compact halo do not produce a
bar when $M_{b}/M_{d}\gtrsim 0.35$. Similarly, a more compact bulge tends to
suppress the bar formation. For example, the maximum bulge mass for bar
formation is decreased to 20% and 10% in the L and C series, respectively,
when the bulge is compact. Our result that a bar does not form in galaxies
with a very massive and compact bulge is qualitatively consistent with
previous studies (e.g., Kataria & Das 2018; Saha & Elmegreen 2018).
### 3.2 Buckling Instability
Figure 6: Time evolution of the ratio, $\sigma_{z}/\sigma_{R}$, of the
velocity dispersions at $R=1\;{\rm kpc}$ for models with $M_{b}/M_{d}\leq 0.2$
in the C series together with model L00. A rapid increase of
$\sigma_{z}/\sigma_{R}$ at $t\sim 5\;{\rm Gyr}$ in models C00 and L00 is due
to buckling instability. Figure 7: Time evolution of the B/P strength,
$P_{s}$, defined in Equation 11, for models shown in Figure 6. In most models,
$P_{s}$ increases relatively slowly, while it increases rapidly at $t\sim
5\;{\rm Gyr}$ in models C00 and L00, corresponding to buckling instability.
Figure 8: Contours of logarithm of the projected disk densities at
$t=6.0\;{\rm Gyr}$ in models with $M_{b}/M_{d}\leq 0.2$. The $x$\- and
$z$-axes correspond to the bar semi-major axis and the vertical direction,
respectively. The dotted contours denote
$\Sigma=10^{9.5},10^{9.0},10^{8.5},10^{8.0}\;{\rm M}_{\odot}\;{\rm kpc}^{-2}$
from inside to outside.
Figure 6 plots the temporal changes in the ratio, $\sigma_{z}/\sigma_{R}$, of
the vertical to radial velocity dispersion of the disk particles at $R=1\;{\rm
kpc}$ for models with $M_{b}/M_{d}\leq 0.2$ in the C series together with
model L00. Since a bar is supported by $x_{1}$ orbits elongated along the bar
semi-major axis, its growth naturally involves the increase in $\sigma_{R}$.
At the same time, the bar and spirals can excite the vertical motions of star
particles, enhancing $\sigma_{z}$ (e.g., Quillen et al. 2014). When a bar
grows rapidly, as in models C00 and L00, $\sigma_{R}$ increases faster than
$\sigma_{z}$, resulting in a decrease in the ratio $\sigma_{z}/\sigma_{R}$ at
early times. When a bar grows slowly, in contrast, $\sigma_{z}/\sigma_{R}$
remains more or less constant.
The increase in $\sigma_{z}$ leads to the disk thickening and the formation of
a boxy/peanut (B/P) bulge. All the bars that form in our models evolve to B/P
bulges. Figure 7 plots the evolution of the B/P strength, defined as
$P_{s}=\text{max}\left(\frac{\tilde{|z|}}{|\widetilde{z_{0}}|}\right),$ (11)
where the tilde denotes the median and $z_{0}$ is the initial value (Iannuzzi
& Athanassoula, 2015; Fragkoudi et al., 2017; Seo et al., 2019), for models
with $M_{b}/M_{d}\leq 0.2$. In most models, the disk thickening occurs
secularly. However, $P_{s}$ (as well as $\sigma_{z}/\sigma_{R}$) in models C00
and L00 increases rapidly at $t\sim 5\;{\rm Gyr}$, which is due to vertical
buckling instability.
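Equation 11 compares the running median of $|z|$ with its initial value; a sketch, assuming the maximum is taken over radial annuli of our choosing:

```python
import numpy as np

def bp_strength(R, z_now, z_init, R_edges):
    """B/P strength of Equation 11: max over annuli of median|z| / median|z_0|."""
    ratios = []
    for lo, hi in zip(R_edges[:-1], R_edges[1:]):
        s = (R >= lo) & (R < hi)
        ratios.append(np.median(np.abs(z_now[s])) / np.median(np.abs(z_init[s])))
    return float(np.max(ratios))
```

A secularly thickening disk raises $P_s$ slowly, while buckling produces the rapid jumps seen for models C00 and L00 in Figure 7.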
It is well known that a bar can undergo buckling instability when
$\sigma_{z}/\sigma_{R}$ is small. Toomre (1966) and Araki (1987) suggested
that non-rotating thin disks are unstable to the buckling instability if
$\sigma_{z}/\sigma_{R}\leq 0.3$. For realistic disks with spatially varying
$\sigma_{z}/\sigma_{R}$, Raha et al. (1991) found that the buckling
instability occurs if $\sigma_{z}/\sigma_{R}\lesssim 0.25$–$0.55$ in the mid-
disk regions. By varying the values of $\sigma_{z}/\sigma_{R}$ in $N$-body
simulations, Martinez-Valpuesta et al. (2006) suggested that the critical
value is at $\sigma_{z}/\sigma_{R}\sim 0.6$ (see also Kwak et al. 2017). This
is consistent with our numerical results that models L00 and C00 have
$\sigma_{z}/\sigma_{R}\lesssim 0.6$ before undergoing the buckling
instability, as shown in Figure 6.
Figure 8 plots the edge-on views of the projected density distributions of the
disks at $t=6.0\;{\rm Gyr}$ for models with $M_{b}/M_{d}\leq 0.2$. The $x$\-
and $z$-directions correspond to the direction parallel to the bar semi-major
axis and the vertical direction, respectively. The density distributions in
models L00 and C00 with no bulge are asymmetric with respect to the $z=0$
plane, evidencing the operation of buckling instability (Martinez-Valpuesta et
al., 2006). We note that the other models with a bulge also possess a B/P bulge
which develops on a timescale longer than in models L00 and C00. This is
consistent with Sellwood & Gerhard (2020) who showed that the presence of a
nuclear mass with only a small ($\sim 2.5\%$) fraction of the disk mass tends
to suppress the buckling instability. In models with a bulge, the disks appear
to thicken as the bar particles are excited vertically by the passage through
the $2:1$ vertical resonance (Quillen et al., 2014; Sellwood & Gerhard, 2020).
### 3.3 Angular Momentum and Pattern Speed
We calculate the angular momentum of each component as
$L_{z}=\sum_{i}m_{i}(xv_{y}-yv_{x}).$ (12)
Figure 9 plots temporal changes of $L_{z}$ relative to the initial disk
angular momentum for a disk (orange), halo (blue), bulge (green), as well as
the total (black) in model C10. The disk loses its angular momentum right
after the bar formation, while the halo and bulge absorb it. Since the bulge
occupies a relatively small volume in model C10, the amount of
angular momentum it gains is limited to $\sim 4\%$, while the halo absorbs the
remaining $\sim 96\%$. In model L50 with a large bulge mass, however, the
bulge absorbs about $\sim 26\%$ of the angular momentum lost by the disk. The
total angular momentum is conserved within $\sim 0.1\%$ in all models.
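Equation 12 and the conservation check are one line each in vectorized form:

```python
import numpy as np

def l_z(mass, pos, vel):
    """Total z angular momentum, L_z = sum_i m_i (x_i v_y,i - y_i v_x,i) (Eq. 12)."""
    return np.sum(mass * (pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]))
```

Summing `l_z` separately over the disk, bulge, and halo particles of each snapshot and normalizing by the initial disk value $L_{zd}(0)$ gives the fractional transfers plotted in Figure 9.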
Figure 9: Temporal changes of the angular momentum of the disk (orange), halo
(blue), bulge (green), and the total (black) for model C10. All angular
momenta and their changes are relative to the initial angular momentum of the
disk $L_{zd}(0)$.
Figure 10: Temporal changes of the bar pattern speed $\Omega_{b}$ for the bar-
forming models shown in Figure 5. In all models, $\Omega_{b}$ decreases over
time due to angular momentum transfer from a bar to both halo and bulge.
To calculate the bar pattern speed $\Omega_{b}$, we use two methods: (1) the
cross-correlation of the disk surface density in the annular regions with
width $\Delta R=0.1\;{\rm kpc}$ at $R=2\;{\rm kpc}$ where most bars attain
their maximum strength and (2) the temporal rate of changes in the position
angle $\psi$, i.e., $\Omega_{b}=d\psi/dt|_{R=2\;{\rm kpc}}$. We check that the
two methods yield pattern speeds that agree to within $\sim 1\%$. Figure 10
plots the evolution of the bar pattern speeds for selected models. The initial bar
pattern speed depends on the bulge mass in such a way that a more massive
bulge tends to have larger $\Omega_{b}$ (Kataria & Das, 2018, 2019). In all
models, a bar becomes slower over time due to the transfer of angular momentum
to both halo and bulge.
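Method (2) amounts to fitting a slope to the position-angle history; since $\psi$ is defined modulo $\pi$, the series must be unwrapped with period $\pi$ before fitting. A sketch (requires NumPy $\geq$ 1.21 for the `period` argument of `np.unwrap`):

```python
import numpy as np

def pattern_speed(t, psi):
    """Estimate Omega_b = d(psi)/dt as the least-squares slope of the
    position angle, unwrapped with period pi (psi is defined mod pi)."""
    psi_u = np.unwrap(np.asarray(psi), period=np.pi)
    return np.polyfit(t, psi_u, 1)[0]
```

In practice the fit would be done over a short sliding window, since $\Omega_b$ itself decreases with time (Figure 10).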
Figure 11: Temporal variations of the slow-down rate of the bar pattern speed
for the models shown in Figure 10($a$).
Figure 11 plots the temporal changes in the bar slow-down rate,
$-d\Omega_{b}/dt$, for the models shown in Figure 10($a$), showing that there
is no systematic dependence between the bar slow-down rate and the bulge mass.
This result is different from Kataria & Das (2019) who found that the rate is
higher for a more massive bulge. The discrepancy may be due to the differences
in the bulge (and halo) compactness. The models considered by Kataria & Das
(2019) have $R_{b}/R_{d}\leq 0.18$ with $R_{b}$ being the half-mass bulge
radius, which is more compact than our models in the C series that have
$R_{b}/R_{d}=(1+\sqrt{2})a_{b}/R_{d}=0.32$. Figure 12 of Kataria & Das (2018)
shows no systematic trend between the bar slow-down rate and the bulge mass
for models with $0.43<R_{b}/R_{d}<0.47$. This suggests that the bulge must be
sufficiently compact to control the temporal evolution of the bar pattern
speed. In our models with less compact bulge than in Kataria & Das (2019),
angular momentum is predominantly absorbed by the halo (see Figure 9).
### 3.4 Bar Length
Figure 12: Radial distribution of $\psi-\psi(R_{\rm max})$ of the $m=2$ mode
in the disk of model C10 at $t=5.5\;{\rm Gyr}$. Here, $R_{\textrm{max}}$
denotes the radius where $A_{2}/A_{0}$ is maximized. The bar has a length
$R_{b}=6.8\;{\rm kpc}$ at this time.
One can use the position angle $\psi(R)$ defined in Equation 10 to measure the
bar length (e.g., Athanassoula & Misiriotis 2002; Scannapieco & Athanassoula
2012). Figure 12 plots the radial distribution of the position angle of the
$m=2$ mode in the disk of model C10 at $t=5.5\;{\rm Gyr}$. Note that $\psi(R)$,
which remains more or less constant at small $R$, changes abruptly at $R\gtrsim
6.8\;{\rm kpc}$, indicating that the bar has a semi-major axis
$R_{b}=6.8\;{\rm kpc}$ at this time.
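This constant-to-abrupt-change criterion can be automated by reading off the outermost radius at which $\psi(R)$ still tracks its inner value; a sketch with an assumed $10^{\circ}$ tolerance:

```python
import numpy as np

def bar_length(R, psi, tol=np.deg2rad(10.0)):
    """Bar semi-major axis: largest R out to which psi(R) stays within tol
    of its innermost value (psi is an angle defined modulo pi)."""
    dpsi = 0.5 * np.abs(np.angle(np.exp(2j * (psi - psi[0]))))  # mod-pi distance
    outside = np.nonzero(dpsi > tol)[0]
    return R[-1] if outside.size == 0 else R[outside[0] - 1]
```

The doubled angle inside the complex exponential keeps the comparison well defined for a quantity that is only meaningful modulo $\pi$.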
Figure 13 plots temporal changes of $R_{b}$ for the models shown in Figure 10.
First of all, bars are longer in models with a less massive and/or less
compact bulge since these allow stronger swing amplifications. Overall, the
bar length in our models increases with time, expedited by angular momentum
exchange with the halo and bulge (Athanassoula, 2003). The growth rate of
the bar length is lower in models with a more massive and compact bulge. We note
that the decrease of the bar length at $t\sim 3.8\;{\rm Gyr}$ in model C00,
$t\sim 5.3\;{\rm Gyr}$ in model C05, and $t\sim 3.3\;{\rm Gyr}$ in model C20 is caused
by the interactions with surrounding spiral arms (or an inner ring) which tend
to shorten the bar by perturbing particles on outer $x_{1}$ orbits. In model
L10, outer spiral arms are in phase with the bar at $t\sim 1\;{\rm Gyr}$,
making $R_{b}$ longer than the true bar length temporarily.
Figure 13: Temporal changes of the bar length $R_{b}$ for the models shown in
Figure 10.
Figure 14 plots the dependence of the bar pattern speed $\Omega_{b}$ and the
corotation radius $R_{\text{CR}}$ on the bar length $R_{b}$ in all models that
form a bar, with the symbol size representing the simulation time. In general,
longer bars tend to be slower. The ratio $\mathcal{R}=R_{\rm CR}/R_{b}$ is
useful for classifying bars as slow or fast: bars with $\mathcal{R}>1.4$ are
considered slow, while those with $\mathcal{R}<1.4$ are termed fast.
Models with a massive and compact bulge have larger $\mathcal{R}$ since they
have short bars compared to those with a less compact bulge. Note that all
bars are slow rotators at almost all times.
Figure 14: Relationship ($a$) between $\Omega_{b}$ and $R_{b}$ and ($b$)
between $R_{\text{CR}}$ and $R_{b}$ for all models that form a bar. The marker
sizes correspond to the simulation times. The red dashed and blue solid lines
in ($b$) draw $\mathcal{R}\equiv R_{\text{CR}}/R_{b}=1.0$ and 1.4,
respectively, indicating that all bars in our models are slow with
$\mathcal{R}>1.4$.
## 4 Discussion
In the preceding section, we have shown that models with a massive and compact
bulge and a concentrated halo are less likely to form a bar. In this section
we compare our numerical results with the previous bar formation conditions
mentioned in Section 1. We then propose a new two-parameter condition that is
consistent with the theory of bar formation. We also use our numerical results
to indirectly measure the mass of the classical bulge in the Milky Way.
Figure 15: Simulation outcomes in the $t_{\text{OP}}$–$\epsilon_{\text{ELN}}$
plane. The vertical dashed line marks $\epsilon_{\text{ELN}}=1.1$ (Equation
2). Blue symbols denote unstable models for bar formation, while red symbols
are for stable models. Circles and triangles are for models in the $\tt L$ and
$\tt C$ series, respectively. Open and filled symbols correspond to models
with compact and less compact bulges, respectively. Figure 16: Same as Figure
15 but in the $\mathcal{F}_{\text{KD}}$–$\mathcal{D}_{\text{SE}}$ plane. The
horizontal and vertical dashed lines draw $\mathcal{F}_{\text{KD}}=0.35$ and
$\mathcal{D}_{\text{SE}}=1/\sqrt{10}$ (see Equations 3 and 4), respectively.
### 4.1 Criteria for Bar Formation
Figure 15 plots the simulation outcomes in the
$t_{\text{OP}}$–$\epsilon_{\text{ELN}}$ plane, with the blue and red symbols
representing unstable and stable models to bar formation, respectively: the
values of $t_{\text{OP}}$ and $\epsilon_{\text{ELN}}$ of each model are listed
in Columns (7) and (8) of Table 1. Circles and triangles mark the models in
the $\tt L$ and $\tt C$ series, respectively, with the open (filled) symbols
corresponding to the compact (less compact) bulges. While all the models have
$t_{\text{OP}}>0.42$, some of them do not evolve to form a bar, suggesting
that $t_{\text{OP}}$ is not a good indicator of the disk stability against bar
formation. This is most likely because Ostriker & Peebles (1973) employed
models with a fixed halo, neglecting halo-disk interactions which are crucial
for the bar growth. Saha & Elmegreen (2018) also noted that the initial value
of $t_{\text{OP}}$ cannot determine whether a bar forms or not.
The abscissa of Figure 15 shows that all the bar-forming models satisfy the
ELN criterion (Equation 2). However, some galaxies with a massive bulge under
a concentrated halo remain stable even with $\epsilon_{\text{ELN}}<1.1$. The
discrepancy between the ELN criterion and our results arises because the
criterion, based on 2D thin-disk models with a fixed halo, does not capture
the disk-halo interactions (e.g., Athanassoula 2008; Fujii et al. 2018). Analyses of
galaxies in recent cosmological galaxy formation simulations such as EAGLE
and IllustrisTNG have also found that $\epsilon_{\text{ELN}}$ is
insufficient to predict whether the galaxies formed are barred or not (Yurin &
Springel, 2015; Algorry et al., 2017; Marioni et al., 2022; Izquierdo-Villalba
et al., 2022).
Figure 16 plots our results in the
$\mathcal{F}_{\text{KD}}$–$\mathcal{D}_{\text{SE}}$ plane, with the blue and
red symbols corresponding to the unstable and stable models, respectively: the
values of $\mathcal{F}_{\text{KD}}$ and $\mathcal{D}_{\text{SE}}$ of each
model are given in Columns (9) and (10) of Table 1. The ordinate of Figure 16
shows that Equation 3 is overall consistent with the simulation results for
the models in the C series, although it fails for the models in the L series:
some models with a massive bulge under a less concentrated halo form a bar even
with $\mathcal{F}_{\text{KD}}>0.35$. In fact, all the models in Kataria & Das
(2018) belong to our C series, so that their criterion is unable to predict
bar formation in models with less concentrated halos.111The halos employed in
Kataria & Das (2018) have the scale radius of $a_{h}=17.88\;{\rm kpc}$ for the
MA models and $a_{h}=25.54\;{\rm kpc}$ for the MB models (S. K. Kataria, 2022,
private communication), which are smaller than $a_{h}=30\;{\rm kpc}$ for the
models in our C series.
Saha & Elmegreen (2018) found that a compact bulge suppresses feedback loops
by making the ILR strong. According to the bar formation condition in Equation 4,
none of our models except the bulgeless models C00 and L00 should form a bar.
However, the abscissa of Figure 16 shows that most models with
$\mathcal{D}_{\text{SE}}\lesssim(4$–$10$) are unstable to bar formation. It is
unclear why our results are so different from Equation 4, but part of the
reason may be that, compared with our models, their halos are less massive
with $M_{h}\sim 4M_{d}$ and their disks are thinner with $z_{d}\sim 0.02R_{d}$.
Figure 17: Same as Figure 16 but in the ${Q_{T,\text{min}}}$–CMC plane. The
shaded regions correspond to Equation 13, within which all the bar-forming
models are located.
As mentioned earlier, bar formation in a disk involves several cycles of swing
amplifications and feedback loops. This naturally requires two conditions: (1)
the disk should have small ${Q_{T,\text{min}}}$ to be sufficiently susceptible
to self-gravitational instability and (2) the ILR should be weak enough for
incoming waves to pass through the center, which is achieved when the CMC is
small. Motivated by these physical considerations, Figure 17 plots the
simulation outcomes in the ${Q_{T,\text{min}}}$–CMC plane. Models with a more
compact bulge and halo have a higher CMC than their less concentrated
counterparts with the same $M_{b}/M_{d}$. Models with concentrated halos tend
to have higher ${Q_{T,\text{min}}}$, although ${Q_{T,\text{min}}}$ is
insensitive to the bulge compactness. Note that all the bar-forming models
satisfy
$\left(\frac{{Q_{T,\text{min}}}}{1.2}\right)^{2}+\left(\frac{\text{CMC}}{0.05}\right)^{2}<1,$
(13)
marked as the shaded region in Figure 17. In models where ${Q_{T,\text{min}}}$
or the CMC exceeds what Equation 13 allows, swing amplifications with
suppressed feedback loops are not strong enough to promote bar formation:
these models end up with only weak spiral arms in the outer disks (see Figure 4).
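The ellipse criterion of Equation 13 is simple to evaluate directly. A minimal sketch (the parameter values in the example calls are hypothetical, not taken from our model tables):

```python
def forms_bar(q_t_min, cmc, q0=1.2, cmc0=0.05):
    """Evaluate the empirical bar-formation criterion of Equation 13:
    a model is bar-forming if the point (Q_T,min, CMC) lies inside the
    ellipse with semi-axes q0 = 1.2 and cmc0 = 0.05."""
    return (q_t_min / q0) ** 2 + (cmc / cmc0) ** 2 < 1.0

# Hypothetical values for illustration only:
print(forms_bar(0.9, 0.02))  # inside the ellipse -> True (bar-forming)
print(forms_bar(1.3, 0.02))  # Q_T,min too large  -> False (no bar)
```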
Equations 3 and 4 fail as criteria for bar formation because they account
only for a bulge in setting the ILR. However, our results show that not only
the bulge mass but also the halo mass in the galaxy center is important in
determining the strength of the ILR.
### 4.2 Fast or Slow Bars
The fact that all bars in our models are slow is consistent with the results
of Roshan et al. (2021) who found that bars formed in cosmological
hydrodynamical simulations are preferentially slow, with the mean value of
$\mathcal{R}\sim 1.9$–$3.0$. However, Cuomo et al. (2020) showed that most
observed bars in 77 nearby galaxies are fast, with a mean value of
$\mathcal{R}\sim 0.92$. What causes the discrepancy in the bar properties
between observations and simulations is a challenging question. Frankel et al.
(2022) argued that the discrepancy arises not because the simulated bars are
too slow but because they are too short.
There is ample room for improvement in both simulations and observations for
more reliable comparisons. In simulations, our isolated-galaxy models need to
be more realistic by including a gaseous component, star formation, halo spin,
etc., which may affect the bar pattern speeds significantly. Cosmological
simulations still suffer from issues such as insufficient resolution and
calibration of feedback from star formation and active galactic nuclei. In
observations, the often-used Tremaine-Weinberg method in measuring the bar
pattern speeds depends critically on the assumptions that galaxies are in a
steady state and that there is a well-defined pattern (Tremaine & Weinberg,
1984), the validity of which is not always guaranteed. In addition, the bar
length depends considerably on the measurement methods such as Fourier
analysis, force ratio, ellipse fitting, etc. (Lee et al., 2022).
Theoretically, it is impossible to have a long-lived, quasi-steady bar with
$\mathcal{R}<1$ since the bar-supporting $x_{1}$ orbits exist only inside the
corotation radius (e.g., Contopoulos 1980; Contopoulos & Grosbøl 1989; Binney
& Tremaine 2008).
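This classification by the rotation parameter can be made explicit in a few lines; the sketch below uses the commonly adopted boundary $\mathcal{R}=1.4$ between fast and slow bars, and the numerical inputs in the example calls are illustrative, not measurements from this work:

```python
def classify_bar(r_cr, r_b, fast_slow_boundary=1.4):
    """Classify a bar by the rotation parameter R = R_CR / R_b.

    Conventionally, bars with 1.0 <= R <= 1.4 are called 'fast' and
    those with R > 1.4 'slow'; R < 1.0 ('ultrafast') is excluded for
    long-lived bars since the bar-supporting x1 orbits exist only
    inside the corotation radius."""
    R = r_cr / r_b
    if R < 1.0:
        return "ultrafast (theoretically excluded)"
    return "fast" if R <= fast_slow_boundary else "slow"

# Illustrative values only:
print(classify_bar(6.0, 4.0))  # R = 1.50 -> slow
print(classify_bar(5.0, 4.0))  # R = 1.25 -> fast
```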
### 4.3 Classical Bulge of the Milky Way
The Milky Way is a barred galaxy dominated by a B/P bulge (e.g., Dwek et al.
1995; Martinez-Valpuesta & Gerhard 2011; Ness et al. 2013). Some early studies
reported that the bar in the Milky Way is fast and short, with
$50<\Omega_{b}<60\,\rm km\,s^{-1}\,kpc^{-1}$ and $R_{b}\sim 3\;{\rm kpc}$
(Fux, 1999; Debattista et al., 2002; Bissantz et al., 2003; Fragkoudi et al.,
2019; Dehnen, 2000), while recent studies suggested that it is rather slow and
long, with $33<\Omega_{b}<45\,\rm km\,s^{-1}\,kpc^{-1}$ and $R_{b}\sim
4.5$–$5\;{\rm kpc}$ (Wegg et al., 2015; Sormani et al., 2015; Portail et al.,
2017; Bland-Hawthorn & Gerhard, 2016; Clarke & Gerhard, 2022). By comparing
observed proper motions in the bar and bulge regions with dynamical models,
Clarke & Gerhard (2022) most recently reported $\Omega_{b}=33.29\pm 1.81\,\rm
km\,s^{-1}\,kpc^{-1}$, placing the corotation resonance at $R_{\text{CR}}\sim
5$–$7\;{\rm kpc}$.
Using our numerical results, we attempt to constrain the mass of the classical
bulge of the Milky Way. As Figures 10 and 13 show, the bar in model C10 has
$\Omega_{b}\sim 30$–$35\,\rm km\,s^{-1}\,kpc^{-1}$ and $R_{b}\sim
4.5$–$5\;{\rm kpc}$ at $t=2.5$–$3.5\;{\rm Gyr}$, which are well matched to the
observed properties of the Milky-Way bar. Model C20 produces a bar with
$\Omega_{b}\sim 36\,\rm km\,s^{-1}\,kpc^{-1}$ and $R_{b}\sim 4.2\;{\rm kpc}$
at $t=8\;{\rm Gyr}$. The bars in models C00 and C05 have a length of
$R_{b}\sim 5\;{\rm kpc}$ at $t\sim 2.5\;{\rm Gyr}$, but their pattern speeds
are smaller than $30\,\rm km\,s^{-1}\,kpc^{-1}$. These results suggest that
the Milky Way may possess a classical bulge with mass $\sim 10$–$20$% of the disk
mass. This is consistent with the claim of Shen et al. (2010) that the
classical bulge of the Milky Way should be less than 25% of the disk mass to
be fitted well with the observed stellar kinematics (see also Di Matteo et al.
2015). If the age of the Milky-Way bar is $\sim 3\;{\rm Gyr}$, as proposed by
Cole & Weinberg (2002) based on the ages of infrared carbon stars, the bar in
model C10 best represents the Milky Way bar. If it is instead $\sim 8\;{\rm
Gyr}$ old, as proposed by Bovy et al. (2019) based on the kinematic analyses
of APOGEE and _Gaia_ data, it would be better described by the bar in model
C20.
## 5 Conclusions
We have presented the results of $N$-body simulations to study the effects of
spherical components including a classical bulge and a dark halo on the
formation and evolution of a bar. For this, we have constructed 3D galaxy
models with physical conditions similar to the Milky Way and varied the bulge-
to-disk mass ratio as well as the compactness of the halo and bulge
components, while fixing the disk and halo masses. Our main conclusions are
highlighted below.
1. 1.
_Bar Properties_ – The presence of a massive bulge delays bar formation. A
bar forms later and is weaker in models with a more massive and compact bulge and
under a more concentrated halo. Bars are shorter and thus rotate faster in
models with more massive and compact bulges. Angular momentum transfer from a
bar to both halo and bulge makes the bar slower and longer over time, although
most of the angular momentum lost by the bar is absorbed by the halo. All the
bars in our models are slow rotators with
$\mathcal{R}=R_{\text{CR}}/R_{b}>1.4$.
2. 2.
_B/P Bulge and Buckling Instability_ – All the models that form a bar undergo
disk thickening and eventually develop a B/P bulge. In all models with a
bulge, this proceeds secularly as the bulge tends to suppress the bar
formation. However, two models (L00 and C00) with no bulge experience buckling
instability at $t\sim 5\;{\rm Gyr}$ during which the bar thickens rapidly. The
buckling instability occurs when $\sigma_{z}/\sigma_{R}$ remains below $\sim
0.6$ and involves an asymmetric density distribution of the disk across the $z=0$
plane.
3. 3.
_Conditions for Bar Formation_ – Our numerical results for bar formation are
not well explained by the single-parameter criteria proposed in previous
studies. We instead find that bar formation in our galaxy models needs to
satisfy Equation 13. In models with larger ${Q_{T,\text{min}}}$ or larger CMC,
the growth of perturbations via swing amplifications combined with feedback
loops is too weak to produce bar-supporting $x_{1}$ orbits.
4. 4.
_Classical Bulge of the Milky Way_ – Among our models, the bar at $t\sim
2.5$–$3.5\;{\rm Gyr}$ in model C10 or at $t\sim 8\;{\rm Gyr}$ in model C20 is
matched well with the observed ranges of the bar length and pattern speed in
the Milky Way. This suggests that the Milky Way is most likely to possess a
classical bulge with mass $\sim 10$–$20$% of the disk mass.
## Acknowledgments
We are grateful to the referee, Dr. Sandeep Kumar Kataria, for an insightful
report. This work was supported by the grants of National Research Foundation
of Korea (2020R1A4A2002885 and 2022R1A2C1004810). Computational resources for
this project were provided by the Supercomputing Center/Korea Institute of
Science and Technology Information with supercomputing resources including
technical support (KSC-2022-CRE-0017).
## References
* Aguerri et al. (2009) Aguerri, J. A. L., Méndez-Abreu, J., & Corsini, E. M. 2009, A&A, 495, 491, doi: 10.1051/0004-6361:200810931
* Algorry et al. (2017) Algorry, D. G., Navarro, J. F., Abadi, M. G., et al. 2017, MNRAS, 469, 1054, doi: 10.1093/mnras/stx1008
* Araki (1987) Araki, S. 1987, AJ, 94, 99, doi: 10.1086/114451
* Athanassoula (1992) Athanassoula, E. 1992, MNRAS, 259, 345, doi: 10.1093/mnras/259.2.345
* Athanassoula (2002) —. 2002, ApJ, 569, L83, doi: 10.1086/340784
* Athanassoula (2003) —. 2003, MNRAS, 341, 1179, doi: 10.1046/j.1365-8711.2003.06473.x
* Athanassoula (2008) —. 2008, MNRAS, 390, L69, doi: 10.1111/j.1745-3933.2008.00541.x
* Athanassoula et al. (2013) Athanassoula, E., Machado, R. E. G., & Rodionov, S. A. 2013, MNRAS, 429, 1949, doi: 10.1093/mnras/sts452
* Athanassoula & Misiriotis (2002) Athanassoula, E., & Misiriotis, A. 2002, MNRAS, 330, 35, doi: 10.1046/j.1365-8711.2002.05028.x
* Baugh et al. (1996) Baugh, C. M., Cole, S., & Frenk, C. S. 1996, MNRAS, 283, 1361, doi: 10.1093/mnras/283.4.1361
* Binney & Tremaine (2008) Binney, J., & Tremaine, S. 2008, Galactic Dynamics: Second Edition
* Bissantz et al. (2003) Bissantz, N., Englmaier, P., & Gerhard, O. 2003, MNRAS, 340, 949, doi: 10.1046/j.1365-8711.2003.06358.x
* Bland-Hawthorn & Gerhard (2016) Bland-Hawthorn, J., & Gerhard, O. 2016, ARA&A, 54, 529, doi: 10.1146/annurev-astro-081915-023441
* Bournaud et al. (2005) Bournaud, F., Combes, F., & Semelin, B. 2005, MNRAS, 364, L18, doi: 10.1111/j.1745-3933.2005.00096.x
* Bournaud et al. (2007) Bournaud, F., Jog, C. J., & Combes, F. 2007, A&A, 476, 1179, doi: 10.1051/0004-6361:20078010
* Bovy et al. (2019) Bovy, J., Leung, H. W., Hunt, J. A. S., et al. 2019, MNRAS, 490, 4740, doi: 10.1093/mnras/stz2891
* Buta & Combes (1996) Buta, R., & Combes, F. 1996, Fund. Cosmic Phys., 17, 95
* Buta et al. (2015) Buta, R. J., Sheth, K., Athanassoula, E., et al. 2015, ApJS, 217, 32, doi: 10.1088/0067-0049/217/2/32
* Clarke & Gerhard (2022) Clarke, J. P., & Gerhard, O. 2022, MNRAS, 512, 2171, doi: 10.1093/mnras/stac603
* Cole & Weinberg (2002) Cole, A. A., & Weinberg, M. D. 2002, ApJ, 574, L43, doi: 10.1086/342278
* Collier et al. (2018) Collier, A., Shlosman, I., & Heller, C. 2018, MNRAS, 476, 1331, doi: 10.1093/mnras/sty270
* Collier et al. (2019) —. 2019, MNRAS, 489, 3102, doi: 10.1093/mnras/stz2327
* Contopoulos (1980) Contopoulos, G. 1980, A&A, 81, 198
* Contopoulos & Grosbøl (1989) Contopoulos, G., & Grosbøl, P. 1989, A&A Rev., 1, 261, doi: 10.1007/BF00873080
* Cuomo et al. (2020) Cuomo, V., Aguerri, J. A. L., Corsini, E. M., & Debattista, V. P. 2020, A&A, 641, A111, doi: 10.1051/0004-6361/202037945
* de Vaucouleurs (1963) de Vaucouleurs, G. 1963, ApJ, 138, 934, doi: 10.1086/147696
* Debattista et al. (2002) Debattista, V. P., Gerhard, O., & Sevenster, M. N. 2002, MNRAS, 334, 355, doi: 10.1046/j.1365-8711.2002.05500.x
* Debattista & Sellwood (2000) Debattista, V. P., & Sellwood, J. A. 2000, ApJ, 543, 704, doi: 10.1086/317148
* Dehnen (2000) Dehnen, W. 2000, AJ, 119, 800, doi: 10.1086/301226
* Di Matteo et al. (2015) Di Matteo, P., Gómez, A., Haywood, M., et al. 2015, A&A, 577, A1, doi: 10.1051/0004-6361/201424457
* Díaz-García et al. (2019) Díaz-García, S., Salo, H., Knapen, J. H., & Herrera-Endoqui, M. 2019, A&A, 631, A94, doi: 10.1051/0004-6361/201936000
* Díaz-García et al. (2016) Díaz-García, S., Salo, H., Laurikainen, E., & Herrera-Endoqui, M. 2016, A&A, 587, A160, doi: 10.1051/0004-6361/201526161
* Dwek et al. (1995) Dwek, E., Arendt, R. G., Hauser, M. G., et al. 1995, ApJ, 445, 716, doi: 10.1086/175734
* Efstathiou et al. (1982) Efstathiou, G., Lake, G., & Negroponte, J. 1982, MNRAS, 199, 1069, doi: 10.1093/mnras/199.4.1069
* Fragkoudi et al. (2017) Fragkoudi, F., Di Matteo, P., Haywood, M., et al. 2017, A&A, 606, A47, doi: 10.1051/0004-6361/201630244
* Fragkoudi et al. (2019) Fragkoudi, F., Katz, D., Trick, W., et al. 2019, MNRAS, 488, 3324, doi: 10.1093/mnras/stz1875
* Frankel et al. (2022) Frankel, N., Pillepich, A., Rix, H.-W., et al. 2022, arXiv e-prints, arXiv:2201.08406. https://arxiv.org/abs/2201.08406
* Fujii et al. (2018) Fujii, M. S., Bédorf, J., Baba, J., & Portegies Zwart, S. 2018, MNRAS, 477, 1451, doi: 10.1093/mnras/sty711
* Fux (1999) Fux, R. 1999, A&A, 345, 787. https://arxiv.org/abs/astro-ph/9903154
* Guiglion et al. (2015) Guiglion, G., Recio-Blanco, A., de Laverny, P., et al. 2015, A&A, 583, A91, doi: 10.1051/0004-6361/201525883
* Hasan et al. (1993) Hasan, H., Pfenniger, D., & Norman, C. 1993, ApJ, 409, 91, doi: 10.1086/172644
* Helmi (2020) Helmi, A. 2020, ARA&A, 58, 205, doi: 10.1146/annurev-astro-032620-021917
* Hernquist (1990) Hernquist, L. 1990, ApJ, 356, 359, doi: 10.1086/168845
* Hohl (1976) Hohl, F. 1976, AJ, 81, 30, doi: 10.1086/111849
* Holley-Bockelmann et al. (2005) Holley-Bockelmann, K., Weinberg, M., & Katz, N. 2005, MNRAS, 363, 991, doi: 10.1111/j.1365-2966.2005.09501.x
* Hopkins et al. (2009) Hopkins, P. F., Somerville, R. S., Cox, T. J., et al. 2009, MNRAS, 397, 802, doi: 10.1111/j.1365-2966.2009.14983.x
* Hopkins et al. (2010) Hopkins, P. F., Bundy, K., Croton, D., et al. 2010, ApJ, 715, 202, doi: 10.1088/0004-637X/715/1/202
* Iannuzzi & Athanassoula (2015) Iannuzzi, F., & Athanassoula, E. 2015, MNRAS, 450, 2514, doi: 10.1093/mnras/stv764
* Izquierdo-Villalba et al. (2022) Izquierdo-Villalba, D., Bonoli, S., Rosas-Guevara, Y., et al. 2022, MNRAS, 514, 1006, doi: 10.1093/mnras/stac1413
* Kataria & Das (2018) Kataria, S. K., & Das, M. 2018, MNRAS, 475, 1653, doi: 10.1093/mnras/stx3279
* Kataria & Das (2019) —. 2019, ApJ, 886, 43, doi: 10.3847/1538-4357/ab48f7
* Kataria et al. (2020) Kataria, S. K., Das, M., & Barway, S. 2020, A&A, 640, A14, doi: 10.1051/0004-6361/202037527
* Kataria & Shen (2022) Kataria, S. K., & Shen, J. 2022, arXiv e-prints, arXiv:2210.14526. https://arxiv.org/abs/2210.14526
* Katz et al. (2018) Katz, D., Antoja, T., Romero-Gómez, M., et al. 2018, A&A, 616, A11, doi: 10.1051/0004-6361/201832865
* Kauffmann et al. (1993) Kauffmann, G., White, S. D. M., & Guiderdoni, B. 1993, MNRAS, 264, 201, doi: 10.1093/mnras/264.1.201
* Kim & Ostriker (2007) Kim, W.-T., & Ostriker, E. C. 2007, ApJ, 660, 1232, doi: 10.1086/513176
* Kim et al. (2012) Kim, W.-T., Seo, W.-Y., Stone, J. M., Yoon, D., & Teuben, P. J. 2012, ApJ, 747, 60, doi: 10.1088/0004-637X/747/1/60
* Knapen et al. (2000) Knapen, J. H., Shlosman, I., & Peletier, R. F. 2000, ApJ, 529, 93, doi: 10.1086/308266
* Kraljic et al. (2012) Kraljic, K., Bournaud, F., & Martig, M. 2012, ApJ, 757, 60, doi: 10.1088/0004-637X/757/1/60
* Kwak et al. (2017) Kwak, S., Kim, W.-T., Rey, S.-C., & Kim, S. 2017, ApJ, 839, 24, doi: 10.3847/1538-4357/aa674c
* Laurikainen et al. (2004) Laurikainen, E., Salo, H., & Buta, R. 2004, ApJ, 607, 103, doi: 10.1086/383462
* Lee et al. (2022) Lee, Y. H., Park, M.-G., Hwang, H. S., et al. 2022, ApJ, 926, 58, doi: 10.3847/1538-4357/ac3bc1
* Marinova & Jogee (2007) Marinova, I., & Jogee, S. 2007, ApJ, 659, 1176, doi: 10.1086/512355
* Marioni et al. (2022) Marioni, O. F., Abadi, M. G., Gottlöber, S., & Yepes, G. 2022, MNRAS, 511, 2423, doi: 10.1093/mnras/stac105
* Martinez-Valpuesta & Gerhard (2011) Martinez-Valpuesta, I., & Gerhard, O. 2011, ApJ, 734, L20, doi: 10.1088/2041-8205/734/1/L20
* Martinez-Valpuesta et al. (2006) Martinez-Valpuesta, I., Shlosman, I., & Heller, C. 2006, ApJ, 637, 214, doi: 10.1086/498338
* Melvin et al. (2014) Melvin, T., Masters, K., Lintott, C., et al. 2014, MNRAS, 438, 2882, doi: 10.1093/mnras/stt2397
* Méndez-Abreu et al. (2012) Méndez-Abreu, J., Sánchez-Janssen, R., Aguerri, J. A. L., Corsini, E. M., & Zarattini, S. 2012, ApJ, 761, L6, doi: 10.1088/2041-8205/761/1/L6
* Menéndez-Delmestre et al. (2007) Menéndez-Delmestre, K., Sheth, K., Schinnerer, E., Jarrett, T. H., & Scoville, N. Z. 2007, ApJ, 657, 790, doi: 10.1086/511025
* Naab et al. (2014) Naab, T., Oser, L., Emsellem, E., et al. 2014, MNRAS, 444, 3357, doi: 10.1093/mnras/stt1919
* Ness et al. (2013) Ness, M., Freeman, K., Athanassoula, E., et al. 2013, MNRAS, 432, 2092, doi: 10.1093/mnras/stt533
* Norman et al. (1996) Norman, C. A., Sellwood, J. A., & Hasan, H. 1996, ApJ, 462, 114, doi: 10.1086/177133
* Ostriker & Peebles (1973) Ostriker, J. P., & Peebles, P. J. E. 1973, ApJ, 186, 467, doi: 10.1086/152513
* Pfenniger & Norman (1990) Pfenniger, D., & Norman, C. 1990, ApJ, 363, 391, doi: 10.1086/169352
* Polyachenko et al. (2016) Polyachenko, E. V., Berczik, P., & Just, A. 2016, MNRAS, 462, 3727, doi: 10.1093/mnras/stw1907
* Portail et al. (2017) Portail, M., Wegg, C., Gerhard, O., & Ness, M. 2017, MNRAS, 470, 1233, doi: 10.1093/mnras/stx1293
* Quillen et al. (2014) Quillen, A. C., Minchev, I., Sharma, S., Qin, Y.-J., & Di Matteo, P. 2014, MNRAS, 437, 1284, doi: 10.1093/mnras/stt1972
* Raha et al. (1991) Raha, N., Sellwood, J. A., James, R. A., & Kahn, F. D. 1991, Nature, 352, 411, doi: 10.1038/352411a0
* Roshan et al. (2021) Roshan, M., Ghafourian, N., Kashfi, T., et al. 2021, MNRAS, 508, 926, doi: 10.1093/mnras/stab2553
* Saha & Elmegreen (2018) Saha, K., & Elmegreen, B. 2018, ApJ, 858, 24, doi: 10.3847/1538-4357/aabacd
* Salo & Laurikainen (2017) Salo, H., & Laurikainen, E. 2017, ApJ, 835, 252, doi: 10.3847/1538-4357/835/2/252
* Scannapieco & Athanassoula (2012) Scannapieco, C., & Athanassoula, E. 2012, MNRAS, 425, L10, doi: 10.1111/j.1745-3933.2012.01291.x
* Sellwood (1980) Sellwood, J. A. 1980, A&A, 89, 296
* Sellwood (2014) —. 2014, Reviews of Modern Physics, 86, 1, doi: 10.1103/RevModPhys.86.1
* Sellwood & Gerhard (2020) Sellwood, J. A., & Gerhard, O. 2020, MNRAS, 495, 3175, doi: 10.1093/mnras/staa1336
* Sellwood & Wilkinson (1993) Sellwood, J. A., & Wilkinson, A. 1993, Reports on Progress in Physics, 56, 173, doi: 10.1088/0034-4885/56/2/001
* Seo et al. (2019) Seo, W.-Y., Kim, W.-T., Kwak, S., et al. 2019, ApJ, 872, 5, doi: 10.3847/1538-4357/aafc5f
* Sharma et al. (2014) Sharma, S., Bland-Hawthorn, J., Binney, J., et al. 2014, ApJ, 793, 51, doi: 10.1088/0004-637X/793/1/51
* Shen et al. (2010) Shen, J., Rich, R. M., Kormendy, J., et al. 2010, ApJ, 720, L72, doi: 10.1088/2041-8205/720/1/L72
* Shen & Sellwood (2004) Shen, J., & Sellwood, J. A. 2004, ApJ, 604, 614, doi: 10.1086/382124
* Sheth et al. (2012) Sheth, K., Melbourne, J., Elmegreen, D. M., et al. 2012, ApJ, 758, 136, doi: 10.1088/0004-637X/758/2/136
* Sheth et al. (2008) Sheth, K., Elmegreen, D. M., Elmegreen, B. G., et al. 2008, ApJ, 675, 1141, doi: 10.1086/524980
* Sormani et al. (2015) Sormani, M. C., Binney, J., & Magorrian, J. 2015, MNRAS, 451, 3437, doi: 10.1093/mnras/stv1135
* Springel et al. (2021) Springel, V., Pakmor, R., Zier, O., & Reinecke, M. 2021, MNRAS, 506, 2871, doi: 10.1093/mnras/stab1855
* Toomre (1964) Toomre, A. 1964, ApJ, 139, 1217, doi: 10.1086/147861
* Toomre (1966) —. 1966, in Geophysical Fluid Dynamics Ref. No. 66-46, ed. W. V. R. Malkus (Woods Hole, MA: Woods Hole Oceanographic Institute), 111
* Toomre (1981) —. 1981, in The Structure and Evolution of Normal Galaxies (Cambridge: Cambridge Univ. Press)
* Tremaine & Weinberg (1984) Tremaine, S., & Weinberg, M. D. 1984, ApJ, 282, L5, doi: 10.1086/184292
* Valenzuela & Klypin (2003) Valenzuela, O., & Klypin, A. 2003, MNRAS, 345, 406, doi: 10.1046/j.1365-8711.2003.06947.x
* Wegg et al. (2015) Wegg, C., Gerhard, O., & Portail, M. 2015, MNRAS, 450, 4050, doi: 10.1093/mnras/stv745
* Weinberg & Katz (2007) Weinberg, M. D., & Katz, N. 2007, MNRAS, 375, 460, doi: 10.1111/j.1365-2966.2006.11307.x
* Whyte et al. (2002) Whyte, L. F., Abraham, R. G., Merrifield, M. R., et al. 2002, MNRAS, 336, 1281, doi: 10.1046/j.1365-8711.2002.05879.x
* Yurin & Springel (2014) Yurin, D., & Springel, V. 2014, MNRAS, 444, 62, doi: 10.1093/mnras/stu1421
* Yurin & Springel (2015) —. 2015, MNRAS, 452, 2367, doi: 10.1093/mnras/stv1454
# A Probabilistic-Logic based Commonsense Representation Framework for
Modelling Inferences with Multiple Antecedents and Varying Likelihoods
Shantanu Jaiswal, Liu Yan, Dongkyu Choi, Kenneth Kwok
###### Abstract
Commonsense knowledge-graphs (CKGs) are important resources towards building
machines that can ‘reason’ on text or environmental inputs and make inferences
beyond perception. While current CKGs encode world knowledge for a large
number of concepts and have been effectively utilized for incorporating
commonsense in neural models, they primarily encode declarative or single-
condition inferential knowledge and assume all conceptual beliefs to have the
same likelihood. Further, these CKGs utilize a limited set of relations shared
across concepts and lack a coherent knowledge organization structure resulting
in redundancies as well as sparsity across the larger knowledge graph.
Consequently, today’s CKGs, while useful for a first level of reasoning, do
not adequately capture deeper human-level commonsense inferences which can be
more nuanced and influenced by multiple contextual or situational factors.
Accordingly, in this work, we study how commonsense knowledge can be better
represented by – (i) utilizing a probabilistic logic representation scheme to
model composite inferential knowledge and represent conceptual beliefs with
varying likelihoods, and (ii) incorporating a hierarchical conceptual ontology
to identify salient concept-relevant relations and organize beliefs at
different conceptual levels. Our resulting knowledge representation framework
can encode a wider variety of world knowledge and represent beliefs flexibly
using grounded concepts as well as free-text phrases. As a result, the
framework can be utilized as both a traditional free-text knowledge graph and
a grounded logic-based inference system more suitable for neuro-symbolic
applications. We describe how we extend the PrimeNet knowledge base with our
framework through crowd-sourcing and expert-annotation, and demonstrate how it
can be applied for deeper and more interpretable passage-based semantic
parsing and question answering.
###### keywords:
Commonsense representation and reasoning, Probabilistic logic programming
Social and Cognitive Computing Department, Institute of High Performance
Computing, A*STAR, Singapore
## 1 Introduction
The development of commonsense knowledge resources has been a prominent line
of research in enabling artificial intelligence (AI) models to not only
interpret text or environmental scenes, but to also derive inferences upon
them to support higher-level cognitive tasks [1, 2]. These knowledge-bases are
typically designed to capture knowledge of the world corresponding to the
mental model of an average human that is often implicit [3, 4, 5] and not
directly learnable from text or images alone.
While the first large-scale commonsense resources such as Cyc [6] utilized
specialized internal representations (e.g. in CycL) to encode knowledge, more
recent resources such as ConceptNet [7, 8] and ATOMIC [9, 10], encode
knowledge as free-text tuples of form (head, relation, tail) as part of a
larger knowledge graph. The relative simplicity of the latter representation
scheme has resulted in them being effective resources for integrating
commonsense in neural language [11, 12, 13, 14] and vision models [15, 16, 17,
18], designing appropriate ‘reasoning’ benchmarks [19, 20, 21] and being
utilized to enhance ‘reasoning abilities’ of large-scale language models [9,
22, 23].
However, a prominent limitation of the free-text (head, relation, tail)
representation scheme is that the knowledge it can express remains limited to
declarative facts as in ConceptNet [8] (e.g. (‘human’, ‘capable_of’, ‘buying
things’)) or single-condition inferential knowledge as in ATOMIC (e.g. (‘X
eats Y’, ‘has_subevent’ ‘X chews Y’)). Such knowledge although useful at a
surface level, only constitutes a portion of human commonsense which is often
more nuanced and contextually-informed. For example, current CKGs cannot
encode conceptual dependencies such as “a human can buy an object only if the
money or assets they possess are greater than cost of the object” or
contextual dependencies such as “if Y is a liquid, then the subevent of X eats
Y is much more likely to be X drinks Y (and much less likely to be X chews
Y)”. Further, the tuple representation scheme does not encode likelihoods to
differentiate certainties in knowledge. Consequently, as shown in fig. 1,
beliefs such as (‘human’, ‘desires’, ‘car’) or (‘X eats Y’, ‘subevent’ ‘X
chews Y’), that should be more tentative, are treated with the same likelihood
to beliefs such as (‘human’, ‘has’, ‘brain’) or (‘X eats Y’, ‘causes’, ‘X
gains energy’), that should presumably be more certain.
Need for probabilistic-logic based representations of commonsense. One may
look to address these limitations by simply representing conceptual or
contextual antecedents in a single-text phrase (e.g. (‘{X eats Y} and {Y is
liquid}’, ‘subevent’, ‘X drinks Y’)) and by adding a likelihood variable (e.g.
(‘human’, ‘capable_of’, ‘buying things’, 0.6)). However, this still does not
effectively capture the composite nature of such knowledge wherein the
inference and its likelihood is derived dynamically from a logical computation
of relevant antecedents. Hence, for a more comprehensive representation
scheme, we propose the utilization of probabilistic logic programming [24] wherein,
as illustrated in fig. 1, base or context-independent conceptual beliefs can
be represented as probabilistic facts (e.g. 0.95::has(person,brain);
0.6::desires(person,car)) and composite or contextually-derived beliefs can be
represented as probabilistic clauses (e.g. 0.95::capable_of(X,buy,Z):-has(X,Y)
& value(Y)>=value(Z), which states that “X can buy Z if X has Y and the value
of Y is greater than or equal to the value of Z”).
Figure 1: Our proposed representation scheme. Facts represent base beliefs
labelled with discrete certainty levels (e.g. ‘person desires car’ is
tentative while ‘person can breathe’ is more certain and ‘person is an animal’
is inherent). Composite clauses allow for more nuanced and dynamic inferences
based on antecedents (e.g. sub-event of ‘X eats Y’ has base tentative belief
as ‘chew’ which changes if knowledge of Y is available; see similarly for ‘X
can buy Z’). Further, knowledge can be encoded flexibly using both free-text
phrases (e.g. “X eats Y”, “breathing”) and grounded concepts (e.g. person,
solid, buy)
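As a concrete illustration, such probabilistic facts and clauses can be grounded in a few lines of code. The sketch below assumes independence between antecedents when combining probabilities; the asset values and the `value` lookup are illustrative placeholders, not contents of the actual knowledge base:

```python
# Probabilistic facts: base beliefs annotated with likelihoods
# (the numeric values are illustrative).
facts = {
    ("has", "person", "brain"): 0.95,
    ("desires", "person", "car"): 0.60,
    ("has", "alice", "savings"): 0.90,
}

def value(item):
    # Toy asset/price lookup, purely for illustration.
    return {"savings": 30000, "car": 25000}[item]

def capable_of_buying(x, y, z, clause_prob=0.95):
    """Probabilistic clause:
    0.95::capable_of(X,buy,Z) :- has(X,Y), value(Y) >= value(Z).
    The inferred likelihood is the clause probability times the
    antecedent probabilities (assuming independence)."""
    p_has = facts.get(("has", x, y), 0.0)
    if value(y) >= value(z):
        return clause_prob * p_has
    return 0.0

print(capable_of_buying("alice", "savings", "car"))  # ~= 0.855 = 0.95 * 0.90
```

In a full probabilistic logic system such as ProbLog, this dynamic combination of clause and antecedent likelihoods is performed by the inference engine rather than hand-written per clause; the sketch only makes the computation explicit.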
Identifying salient relations and structuring knowledge with a hierarchical
conceptual ontology. Another limitation of prominent CKGs is that they utilize
a small list of relations shared across all concepts (e.g. ConceptNet (v5.7)
[8] has 36 relations and ATOMIC-2020 [9] has 11). As noted by previous work
[9, 23], this effectively limits the variety of commonsense captured by these
CKGs resulting in them missing out on nuanced concept knowledge such as
affordances, social and situational properties that form a vital portion of
human commonsense and are also more difficult to capture through large-scale
pretraining. Further, these CKGs lack a coherent knowledge organization
structure and collect knowledge for a concept by only looking at the head
phrase (of a tuple) in isolation without considering previously collected
knowledge for related or parent concepts. This results in redundancies and
sparsity across the larger knowledge graph.
To address these limitations, we propose the incorporation of a multi-level
hierarchical conceptual ontology inspired by past cognitive models of human
concept categorization [25, 26]. As shown in fig. 2, such a hierarchical
ontology proceeds from broad ontological classes (e.g. REAL, MANMADE,
PHYSICAL, etc) to relatively more specific “conceptual groups” [27] (e.g.
VEHICLE, PERSON, etc) from which finally actual concepts are derived (e.g.
Vehicle – {Car, Aeroplane, ..}) which can be made progressively more specific
(e.g. Car – {SUV, Sedan, ..}). As shown in table 1, to each concept node are
attached relations and base beliefs that are inherited by their children nodes
(unless over-ridden). This enables relations to be identified specific to
conceptual-nodes (rather than globally for the entire knowledge graph) and to
be progressively inherited downwards (e.g. ‘Vehicle’ has specific relations
{mileage, top_speed, travel_area} and also inherits relations from parents
such as ‘Physical’: {size, location, etc} and ‘Manmade’:{usage, construction,
etc}). Consequently, two lower-level concepts can have different applicable
relations depending on their parents (e.g. ‘Car’ shares ‘Physical’ relations
with ‘Programmer’ but not ‘Vehicle’ relations).
Figure 2: Proposed hierarchical organization structure (condensed for
visualization) to arrange concepts and identify salient concept-relevant
relations and beliefs (refer table 1). Upper-level nodes reflect broad
ontological classes from which more specific conceptual groups like ‘Vehicle’
and ‘Person’ are derived. Concepts (and their instances, e.g. Alice) inherit
relations and beliefs from applicable parent conceptual groups (e.g.
‘flying_car’ inherits from both ‘car’ and ‘aeroplane’) which promotes re-use
of knowledge when adding new concepts.
Similarly, conceptual beliefs (facts/clauses) can also be encoded at different
conceptual levels and inherited downwards unless over-ridden (e.g. “(X, can,
think)” and “(X, has, feelings)” is inherited by ‘Programmer’ node from its
parent ‘Sentient’ and more specific beliefs such as “(programmer, uses,
terminal)” are specified at the ‘Programmer’ node). This effectively promotes
re-use of knowledge when adding new lower-level concepts (similar to human
few-shot concept learning) and helps better identify concept-specific beliefs
with lesser resultant redundancy across the knowledge-graph.
L | Node | Relations | Beliefs
---|---|---|---
0 | Root | isa; can; related_to | isa(X,Z):- isa(X,Y), isa(Y,Z)
1 | Real | comprises; created_through .. | exists_in(X,world)
2 | Event | causes; duration; subevent .. | causes(X,change)
2 | Manmade | used_for; used_by; construction .. | created_through(X, manufacturing); 0.9::used_by(X,human)
3 | Physical | location; material; phy_state; velocity .. | made_of(X,matter); in(X,Z):-in(X,Y),in(Y,Z)
3 | Sentient | desires; believes; ment_state; lang.. | can(X,think); has(X,feeling)
6 | Vehicle | mileage; top_speed; travel_area.. | used_for(X,travel); requires(X,energy)
7 | Programmer | prog_lang; stack_type.. | uses(X,terminal); can(X,program)
Table 1: Example ontology wherein relations and beliefs are identified
specific to concept nodes and inherited downwards by children nodes (refer
fig. 2). This allows for identification of concept-salient predicates (e.g.
Vehicle has relations such as mileage and travel_area besides inheriting
‘used_for’, etc from Manmade and ‘shape’, etc. from Physical). It also
promotes re-use of knowledge downwards (unless overridden) and reduces
redundancies (e.g. Programmer inherits beliefs from Sentient, Physical, etc.
allowing more specific beliefs to be identified for it). L refers to node
level; more elaborate table in sec. 3.
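The downward inheritance of relations can be sketched as a simple recursive traversal over the ontology. The node names below follow the example ontology, but the data structure and traversal logic are our illustrative assumption, not the framework's actual implementation:

```python
# Each node lists its parents (multiple allowed, as for 'flying_car')
# and the relations defined at that node.
ontology = {
    "Root":       {"parents": [],                      "relations": {"isa", "can", "related_to"}},
    "Real":       {"parents": ["Root"],                "relations": {"comprises", "created_through"}},
    "Manmade":    {"parents": ["Real"],                "relations": {"used_for", "used_by", "construction"}},
    "Physical":   {"parents": ["Real"],                "relations": {"location", "material", "size"}},
    "Sentient":   {"parents": ["Physical"],            "relations": {"desires", "believes"}},
    "Vehicle":    {"parents": ["Manmade", "Physical"], "relations": {"mileage", "top_speed", "travel_area"}},
    "Programmer": {"parents": ["Sentient"],            "relations": {"prog_lang", "stack_type"}},
}

def inherited_relations(node):
    """A node's own relations plus everything inherited from all ancestors."""
    rels = set(ontology[node]["relations"])
    for parent in ontology[node]["parents"]:
        rels |= inherited_relations(parent)
    return rels

# 'Vehicle' inherits 'used_for' from Manmade and 'size' from Physical;
# 'Programmer' shares Physical relations with 'Vehicle' but not 'mileage'.
print(sorted(inherited_relations("Vehicle")))
print("mileage" in inherited_relations("Programmer"))  # False
```

Belief inheritance with overriding would follow the same traversal, with a child node's facts shadowing identically-named facts from its parents.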
Probabilistic-logic based commonsense representation framework. Based on the
above two insights of utilizing a probabilistic-logic knowledge representation
scheme and incorporating a hierarchical conceptual ontology, we propose a
probabilistic-logic based commonsense representation framework. The framework
is designed to model a wide variety of world knowledge and composite
inferences including temporal verb-schemas, situational beliefs, physics laws
and higher-order beliefs (as detailed in sec. 3). Its representation scheme is
flexible such that knowledge can be represented through logical combinations
of both grounded concepts and free-text phrases (as shown in fig. 1 for
subevent(“X eats Y”, chew) and its antecedents – e.g. “malleable” is free-text
while ‘solid’ and ‘meat’ are grounded concepts). In effect, our framework
builds upon existing free-text phrase representation schemes and can be used
to enrich existing CKGs with a further variety of knowledge and reasoning
capabilities. In this work, we specifically extend the PrimeNet knowledge-base
[28] with our framework, and demonstrate its usage as a grounded logic-based
inference system more suitable for neuro-symbolic applications.
The rest of the paper is organized as follows – (i) first, we review relevant
work, (ii) second, we describe ProbNet in detail and how we construct a first
version of it through both crowdsourcing and manual annotation, (iii) third,
we illustrate its inference engine and how it can be applied for deeper and
more interpretable passage-based semantic parsing and question answering, and
(iv) finally, we discuss limitations of our current work and future
directions.
## 2 Related work
### 2.1 Commonsense knowledge resources and graphs
Initial works on creating machine-readable commonsense resources include the
Cyc knowledge base [6] and the Open Mind Common Sense project [29], which
evolved into ConceptNet [7, 8]. While Cyc attempted to hand-code commonsense
knowledge in specialized ‘CycL’ representations, ConceptNet obtained
commonsense knowledge from the general public in text-phrase tuple form
through appropriate crowdsourcing mechanisms. Given its larger coverage
and relatively simple representation scheme, ConceptNet has found greater
utilization with recent neural (deep learning) approaches to improve
‘reasoning’ abilities. Other prominent knowledge resources have also utilized
relatively simpler representation schemes than Cyc, resulting in them being
able to encode knowledge at a large-scale and be utilized with neural methods.
Prominent examples include SenticNet [30] for modelling knowledge relevant to
sentiment analysis tasks, WebChild [31] and the ASER knowledge graph [32] that
extract information through web crawling, ATOMIC [10, 9] which models ‘if-
then’ inferential knowledge for event situations and CSKG [33] that encodes
knowledge across different such resources to obtain a unified knowledge graph.
For a more comprehensive survey, we refer the reader to [34].
In relation to these works, our work focuses on how knowledge can be better
represented by utilizing probabilistic logic to model composite and dynamic
inferences, and incorporating a hierarchical ontology to structure beliefs and
identify salient relations. We believe this can be a complementary direction
to the development of further commonsense resources or updates in existing
ones, particularly given the growing developments in neuro-symbolic learning
methods [35, 36, 37] and their utilization for reasoning tasks [38, 39, 40].
### 2.2 Commonsense datasets and neural knowledge graphs
An alternative direction to developing commonsense knowledge resources in form
of knowledge-graphs or -bases has been to curate datasets for specific types
of commonsense reasoning. These datasets can then be used to train neural or
large-scale pretrained models to better capture the targeted type of
commonsense, besides serving as benchmarks for the same. Some prominent works
in this direction include Piqa [41] for physical commonsense, VCR for visual
commonsense [42], CICERO [43] for contextually-informed dialogue inferences,
SocialQA [44] for social situational commonsense and TIMEDIAL [45] for
temporal commonsense for dialogue inferences. Further, works [9] have also
looked into how commonsense resources can be combined with large-scale
pretrained language models (see [23] for an extensive survey) to be
represented as ‘neural knowledge graphs’ and queryable through prompting. In
relation to the above directions, we believe our work can support development
of newer datasets / benchmarks that evaluate composite logical processing
(similar to how CommonsenseQA [20] utilizes ConceptNet) besides being utilized
to potentially enable neural knowledge graphs to make contextually-dependent
and logical inferences with varying likelihoods / certainties.
### 2.3 Answer set programming frameworks
Answer set programming (ASP) [46, 47] is another prominent logic-based
knowledge representation and reasoning approach. In ASP, given knowledge
(rules and facts), valid models (specifically, stable models over the
variable-free Herbrand base) are generated, from which solutions can be
inferred. This
is a bottom-up model generation approach in contrast to traditional Prolog-
like logic programming wherein given a query, backward chaining is performed
with matching rules/facts to prove (or disprove) the query. Recent works have
shown how ASP can be utilized for natural language inference tasks [48].
However, in contrast to probabilistic-logic frameworks such as ProbLog,
prominent ASP frameworks do not allow for representing uncertainties in
inference and lack neuro-symbolic integration, both of which motivate our
choice to use probabilistic logic programming for our work.
## 3 Probabilistic-logic based commonsense representation framework
We aim to design a knowledge representation and reasoning scheme that can
effectively model inferential knowledge comprising multiple antecedents and
represent beliefs with varying likelihoods. Additionally, our target
representation scheme should allow for the knowledge to be utilized as both a
free-text knowledge-graph and a grounded semantic inference system.
Accordingly, as mentioned previously, we will utilize a probabilistic logic
programming approach to model such knowledge and perform inferences.
Specifically, we utilize the ProbLog language [49, 24] for our work given its
well-developed framework [50, 51], and its extensions for relational learning
and neuro-symbolic processing [39], which are both relevant to future
directions of our work on learning such knowledge.
### 3.1 Probabilistic logic programming primer / syntax
Probabilistic logic programming extends traditional logic programming [52] (as
in Prolog) with probabilities attached to facts and clauses to model
uncertainties. Table 11 (appendix) lists probabilistic logic programming
terminology and ProbLog syntax relevant to our work’s discussion. An example
representation of the natural language belief ‘X can move with base likelihood
0.9 if X has a leg or a wheel and X is not in a stuck state’ in logical form
is “0.9::can(X,move):- (has(X,leg); has(X,wheel)), not(state(X,stuck)).” For a
more comprehensive overview of probabilistic logic programming, we refer the
reader to [24].
### 3.2 Extending PrimeNet with a base conceptual ontology and typed
predicates
PrimeNet is a commonsense knowledge base organised hierarchically around a
basic level of concepts, the level at which humans tend to think about the
world. It was constructed by organising these basic-level concepts under a
superordinate level of much fewer primitive classes (hence the name PrimeNet)
and above a subordinate level of many more entity classes which are
specialisations of the basic-level concepts. PrimeNet is psychologically
inspired by the observation that humans rely on a much smaller set of concepts
to function in the world than the millions of lexical concepts that exist, and
hypothesises that commonsense reasoning could similarly depend on a concise
core of primitive concepts, as shown in fig. 3.
Our commonsense knowledge representation and reasoning scheme extends PrimeNet
as follows:
1. 1.
We ground the primitive level classes in PrimeNet to a base ontology adapted
from Dahlgren [27] as previously introduced in fig. 2. We do this by equating
conceptual primitives in PrimeNet to Kind Types in the base ontology.
2. 2.
We introduce type-specific predicates (relations and attributes) to each
concept node that are inherited downwards by subordinate concepts.
3. 3.
We associate probabilities with each conceptual belief (facts/clauses) to
reflect the likelihood that people would hold such a belief.
Our ontology’s hierarchy was designed to comprise fewer nodes and levels than
the more extensive WordNet hierarchy [53]. This was done as our aim was to use
the hierarchy to identify concept-salient predicates and encode beliefs at
different levels in a manageable manner. A subset of the ontology with salient
predicates is shown in table 2. With such a structure, new predicates, beliefs
and concepts can be added incrementally at lower levels to gradually increase
the variety of knowledge expressible. For events, as we detail later, in
addition to
conceptual predicates such as ‘causes’, ‘enables’, etc., we utilize semantic
roles drawn from VerbNet [54] to represent particular instances of events and
any beliefs pertaining to entities.
Figure 3: Existing three-layer knowledge representation of PrimeNet.
L | Nd | Parents | Predicates | Examples
---|---|---|---|---
0 | Root | - | isa can | isa(car,vehicle) can(article, “transmit ideas”)
1 | Real | Root | comprises created_by | comprises(book,sentences) created_by(car,construction)
2 | Numeric | Abstract->Root | value more_than | value(temp,30,celsius) more_than(wtr_bp,room_tmp)
3 | Idea | Proposition->Abstract | content author | content(theory,axioms) author(motion_law,“Newton”)
2 | Event | Real->Root | causes duration theme subevent purpose | causes(heat_liq, evaporation) duration(make_home,‘months’) theme(eat,food) subevent(buy, negotiate) purpose(“fix item”,“use item”)
4 | Cycle | Process->Event.. | trigger sequence | trigger(brayton_cycle, spark), seq.(rain_cy,[evap,cond,precip])
2 | Manmade | Real->Root | used_for used_by | used_for(car,travel) used_by(wrench,mechanic)
3 | Physical | Entity->Real.. | location phy_state has_part has_aspect temperature | location(car,garage) phy_state(water, liquid) has_part(car, wheels, 4) has_aspect(cube, surface, 6) temp.(moon, 120, celsius)
3 | Sentient | Entity->Real.. | mental_state desires has_trait | mental_state(person,happy) desires(victim,justice) trait(politician,‘good speaker’)
4 | Living | Physical->Entity.. | age lifespan | age(cat1,12) lifespan(dog,10,14)
5 | Animal | Living->Physical.. | gender behavior | gender(hen,female) behavior(lion,“territorial”)
5 | Fluid | Non-living->Physical.. | viscosity boil_point | viscosity(water, 0.01, poise) boil_pt.(liq_oxygen,-194, cels.)
5 | Device | Non-living; ManMd.->Real.. | energy_type power_used | energy_type(fridge,electric) power_used(bulb,50, watts)
6 | Vehicle | Device->Manmade.. | travel_area mileage | travel_area(plane,“continental”) mileage(sport_bike, 4, km/ltr)
Table 2: Subset of ontology (L and Nd refer to level & node) with examples of
salient predicates (4th col.) for conceptual groups.
### 3.3 Knowledge representation and reasoning scheme
To illustrate our knowledge representation and reasoning scheme, we use the
format of table 3, where ‘Knowledge’ refers to encoded facts and beliefs,
‘Example queries’ indicates inference- or retrieval-time queries and
‘Inferences’ indicates their corresponding results. Note that the knowledge
shown in subsequent examples is meant to illustrate the capabilities of the
representation scheme with familiar concepts; not all of it is necessarily
encoded in the first version of ProbNet’s knowledge base (which is constructed
for aerospace concepts as detailed in sec. 4).
Knowledge
---
F1 | 0.6::can(animal,move).
F2 | 0.8::has(car,wheel).
C1 | 0.9::can(X,move):-has(X,leg); has(X,wheel).
Example queries
Q1 | can(car, move)?
Q2 | can(X, move)?
Inferences
I1 | 0.72: can(car, move).
I2 | 0.6: can(X=animal, move), 0.72: can(X=car, move).
Table 3: Example format used to illustrate representation scheme and
inferences (F1 and F2 are distinct facts, C1 is a clause, Q1 and Q2 are
distinct queries while I1 and I2 are their corresponding inferences).
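The inference values in table 3 can be reproduced by hand: a clause head holds with the clause probability multiplied by the probability of its body. Below is a minimal pure-Python sketch of that arithmetic (it mimics, but is not, the ProbLog engine):

```python
# Probabilities from table 3 (facts F1, F2 and clause C1).
P_F1 = 0.6   # 0.6::can(animal,move).
P_F2 = 0.8   # 0.8::has(car,wheel).
P_C1 = 0.9   # 0.9::can(X,move) :- has(X,leg); has(X,wheel).

# I1: can(car,move) is derived via C1 with has(car,wheel) as its body.
p_car_moves = P_C1 * P_F2  # approx. 0.72, matching I1

# I2: enumerating bindings of X for can(X,move) yields both derivations.
inferences = {"animal": P_F1, "car": p_car_moves}
print(inferences)
```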
General representation scheme. We represent declarative facts in the following
format:
P::predicate([fact_id], [source_id], time_point, pred_arg1, pred_arg2, ..).
Here P refers to the probability of the fact, [fact_id] is a unique fact
identifier, [source_id] refers to the source of the fact, time_point indicates
the time at which the fact is valid, and the arguments thereafter are the
original predicate-specific arguments. While P of a fact can be a continuous
number between 0 and 1, in this work we utilize four discrete ‘certainty’
levels with corresponding P. These are – (i) tentative (P=0.5), (ii) likely
(P=0.7), (iii) strongly likely (P=0.9), and (iv) inherently true (P=1.0).
The fact and source identifiers are both used for interpretability when
multi-hop inferences are performed across different knowledge sources
(detailed later). While one could recover the inference trace through the
fact_id alone, we maintain a distinct source argument to enable
source-dependent logic (e.g. some knowledge sources might be known to be
unreliable and thereby be assigned a low likelihood). Further, the trace
retrieval can also be used to
differentiate cases wherein a query is evaluated False (or likelihood=0) due
to absence of any matching knowledge (in which case trace is empty) or due to
a rule/fact activation (in which case trace is not empty). This effectively
relaxes the closed-world processing assumption.
The time point indicator is introduced for temporal and event logic (detailed
later), and the default time point is t_g denoting general time (indicating a
fact is assumed to be generally valid irrespective of time unless over-ridden
for a particular time point).
We represent inference clauses with N antecedents in the following format:
P::head_predicate(Fh,Sh,Th,args):-
anc1_pred(F1,S1,T1,a1_args..),..
ancN_pred(FN,SN,TN,aN_args..),
union_ops(Fh,C_id,F1..FN),
union_ops(Sh,S1..SN).
The latter two ‘union_ops’ refer to list operations (realized in table 4
through the Prolog operators ‘append’ and ‘sort’) that derive Fh and Sh as the
union of F1 to FN and S1 to SN respectively. C_id refers to the unique clause
id, which is added for the interpretability trace.
Knowledge
---
F1 | 1::isa([f1],[‘wnt’], t_g, person, organism). %facts
F2 | 1::isa([f2],[‘kb’], t_g, programmer, person).
F3 | 0::isa([f3],[‘kb’], t_g, person, car).
C1 | %Example clause with trace 1::isa(F,S,T,X,Z):- isa(F1,S1,T,X,Y), isa(F2,S2,T,Y,Z), append([S1,S2], S3), sort(S3, S), append([F1, F2], F3), sort([c1|F3],F).
Example queries
Q1 | isa(F,S,T, person, organism)?
Q2 | isa(F,S,T, person, programmer)?
Q3 | isa(F,S,T, person, car)?
Q4 | isa(F,S,T, programmer, Y)?
Inferences
I1 | 1: isa(F=[f1], S=[’wnt’], T=t_g, person, organism).
I2 | 0: isa(F=?,S=?,T=?, person, programmer) {unknown}
I3 | 0: isa(F=[f3],S=[’kb’],T=t_g, person, car) {known}
I4 | 1: isa(F=[f2], S=[’kb’], T=t_g, programmer, Y=person) 1: isa(F=[f2,c1,f1],S=[’kb’,’wnt’],T=t_g, programmer, Y=organism)
Table 4: Base representation scheme with inference traces. As shown for
inference I4, the 1st derivation (Y=person) is direct fact lookup (F2), while
the 2nd derivation (Y=organism) applies clause C1 on F2 and then finds F1.
Further, traces relax closed-world assumption as shown for I2 where
isa(person,programmer) is indicated unknown, cf. I3 where isa(person,car) is
known from F3 to be false.
An example utilization of the above scheme is shown in table 4. Two basic
facts, isa(person, organism) and isa(programmer, person), are encoded, the
first obtained from source ‘wnt’ and the second from source ‘kb’. An inference
rule for inheritance is represented as shown in C1 (the basic rule being
isa(X,Z):-isa(X,Y),isa(Y,Z)), augmented with list operations for trace
encoding. The first query Q1 is solved through direct retrieval of F1, the
second query Q2 is indicated to be unknown as no matching facts were found,
and the third query Q3 is indicated to be False based on fact F3. Finally, Q4,
which asks what things (variable ‘Y’) a programmer is, is resolved through
retrieval of F2 followed by application of C1 and retrieval of F1.
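The append/sort bookkeeping in C1 can be mimicked in a few lines of Python. This is an illustrative re-implementation of the union_ops idea (the function name is ours, not ProbLog's):

```python
def union_trace(clause_id, fact_traces, source_traces):
    """Union the child fact/source traces and prepend the firing clause id,
    mirroring the append + sort operations in clause C1."""
    facts = sorted({clause_id, *(f for trace in fact_traces for f in trace)})
    sources = sorted({s for trace in source_traces for s in trace})
    return facts, sources

# Second derivation of I4: C1 applied to F2 (source 'kb') and F1 (source 'wnt').
facts, sources = union_trace("c1", [["f2"], ["f1"]], [["kb"], ["wnt"]])
print(facts)    # ['c1', 'f1', 'f2']
print(sources)  # ['kb', 'wnt']
```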
For the remainder of our paper, we omit indication of variables F, S and T in
facts or clauses unless relevant.
Hierarchical knowledge inheritance. As shown in table 5, beliefs can be
attached at different levels (e.g. an ‘animal’ can do a ‘motor action’ is
represented in F1, while ‘bird’ can ‘fly’ is represented in F7). To inherit
beliefs (or any predicate in general), we utilize the inheritance rules listed
in C1 and C2. While C1 specifies that “X is Z if X is Y and Y is Z”, C2
specifies that “if X is Z and Z can do Y, then X can do Y provided that no
knowledge that X cannot do Y is mentioned”. As shown in Q1, the belief that a
‘sparrow’ can do a motion activity is inferred through inheritance of
properties from ‘sparrow’s’ parent node ‘animal’ through application of C2 and
C1. Similarly as shown in Q2, ‘duck’ inherits beliefs that it can fly from
parent ‘bird’ and in Q6, it has its own concept-level belief that ‘duck’ can
‘swim’ (which is unknown for a ‘sparrow’ as shown in Q7). While shown for the
predicate ‘can’, such inheritance can be applied with all existing predicates
through the same rule-type as in C2.
To specify that a belief is not true, the ‘not’ prefix is attached to the
existing predicate. As shown in F13, it is used to specify that a ‘penguin
cannot fly’. Similarly, in F15, for a hypothetical ‘new_bird’, the inherited
belief that ‘X can fly’ is made tentative by adding the ‘not’ prefix. One may
attempt to override knowledge by changing C2’s last antecedent to simply
not(can(X,Y)); however, this leads to a cyclic loop, which is not permitted in
ProbLog. Similarly, simply setting the likelihood to 0, as done in F14 to
specify that ‘penguin2’ cannot fly, will also not work: when more than one
trace exists (in this case C2 and F14), the belief is inferred through
noisy-or application: $P(f)=1-P(f=False)$ where
$P(f=False)=\prod_{k}{(1-P(trace\_k))}$. As a result, for Q4 the belief that
“penguin2 can fly” remains true with a probability of 0.9 (computed as
1-((1-0)*(1-0.9))). In contrast, for Q3 it is 0 as the inheritance is blocked,
while for Q5 the belief “new_bird can fly” has a resultant probability of 0.45
(computed as 0.9*0.5).
Knowledge
---
F1 | 0.8::can(animal, motion_activity).
F2 | isa(bird, animal).
F3-4 | isa(fly, motion_activity). isa(swim, motion_activity).
F7 | 0.9::can(bird, fly).
F8-10 | isa(sparrow, bird). isa(duck, bird), isa(penguin, bird).
F11 | isa(penguin2, bird).
F12 | isa(new_bird, bird).
F13 | not_can(penguin, fly). %Overrides inherited fact
F14 | 0::can(penguin2, fly). %Will not override
F15 | 0.5::not_can(new_bird, fly). %Override to tentative
F16 | 0.9:: can(duck, swim).
C1 | isa(X,Z):- isa(X,Y),isa(Y,Z). %basic inheritance clause
C2 | %Below is property inheritance (unless exception) can(X,Y):-isa(X,Z),can(Z,Y), not(not_can(X,Y)).
Example queries
Q1 | can(sparrow, motion_activity)?
Q2,3 | can(duck, fly)? can(penguin, fly)?
Q4,5 | can(penguin2, fly)? can(new_bird, fly)?
Q6,7 | can(duck, swim)? can(sparrow, swim)?
Inferences
I1 | 0.8 can(sparrow, motion_activity). {from C2,F8,C1,..F1}
I2,3 | 0.9 can(duck, fly). 0 can(penguin,fly).
I4,5 | 0.9 can(penguin2, fly). 0.45 can(new_bird, fly).
I6 | 0.9 can(duck, swim).
I7 | 0 can(sparrow,swim). {unknown}
Table 5: Hierarchical knowledge inheritance. Clauses C1 and C2 enable beliefs
to be inherited downwards such as for I1 where two-hop computation is
performed. F13 overrides ‘can fly’ belief for penguin by using not_can
predicate (F14 attempts to override by setting likelihood of fact to be zero
but will not work due to noisy-or inference). F15 makes inherited belief ‘can
fly’ for ‘new_bird’ more tentative.
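The noisy-or combination and the override arithmetic above can be checked numerically; a short pure-Python sketch reproducing the values for Q4 and Q5:

```python
def noisy_or(trace_probs):
    """P(f) = 1 - prod_k (1 - P(trace_k)) over all derivations of f."""
    p_false = 1.0
    for p in trace_probs:
        p_false *= (1.0 - p)
    return 1.0 - p_false

# Q4: "penguin2 can fly" has two traces, F14 (P=0) and inheritance via C2
# (P=0.9); the zero-probability fact does not override the inherited belief.
p_penguin2 = noisy_or([0.0, 0.9])   # 0.9

# Q5: for new_bird, C2's inherited 0.9 is damped by the 0.5 not_can override,
# since the clause requires not(not_can(X,Y)) to hold (probability 0.5).
p_new_bird = 0.9 * 0.5              # 0.45
print(p_penguin2, p_new_bird)
```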
Grounded concepts and free-text phrase representations. In our representation
scheme, we distinguish between free-text phrases (denoted in “inverted
commas”) from grounded concepts. Whereas free-text phrases are merely lexical
labels, grounded concepts are concepts in our knowledge graph with associated
semantic properties that can be used for inference. We illustrate this
distinction by first describing how basic lookup of a concept is performed by
a free-text name (lexicon lookup). As shown in table 6, F1 refers to a
grounded concept bowl, which is specified as a type of container. F2 and F3
state that the concept bowl can be referred to by multiple free-text names,
“bowl” and “basin” (the latter with lower probability). However, the name
“bowl” can also refer to other applicable concepts, e.g. the action “roll” or
“bowling” (see F7).
Querying concept beliefs can then be performed in two ways – (i) specify the
actual concept (e.g. bowl) as in Q2, or (ii) specify the free-text phrase to
retrieve applicable concepts first (as in Q1, Q3, Q4) and then decide which
concept to utilize (effectively performing word-sense disambiguation as in Q3
and Q4, where conditions on applicable concepts are provided). This is in
contrast to CKGs wherein concepts and free-text phrases are treated as the
same, which can potentially lead to duplication and inter-mixing of knowledge
(e.g. beliefs pertaining to the event “bowl” or “roll” might be combined with
those of the entity “bowl”).
Knowledge
---
F1 | isa(bowl, container).
F2 | has_name(bowl, “bowl”).
F3 | 0.8::has_name(bowl, “basin”).
F4 | 0.6::used_for(bowl, “eating soup”). %symbol + phrase fact
F5 | isa(roll_action, event).
F6 | has_name(roll_action, “roll”).
F7 | 0.6::has_name(roll_action, “bowl”). %e.g. bowl a ball
F8 | can_be(newobj, close_state). %i.e. can be closed
F9 | has(newobj, “solid enclosure”).
C1 | 0.6::used_for(X,phy_storage):- isa(X, container).
C2 | 0.7::can(X,“keep things”):- %phrase + symbol clause can_be(X, close_state), has(X, “solid enclosure”).
C3 | 1::can(X,“keep things”):-used_for(X,phy_storage).
Example queries
Q1 | has_name(X,“bowl”)? %What concepts have name “bowl”
Q2 | has_name(bowl, Y)? %What names does concept bowl have
Q3 | has_name(Q,“bowl”),isa(Q,event)? %event named ‘bowl’?
Q4 | has_name(Q,“bowl”), used_for(Q,Y)? %exec. in sequence
Q5 | can(X, “keep things”)? %Free-text / phrase query
Inferences
I1 | 1.0 has_name(bowl, “bowl”), 0.6 has_name(roll_action, “bowl”).
I2 | 1.0 has_name(bowl, “bowl”), 0.8 has_name(bowl, “basin”)
I3 | 0.6 has_name(Q=roll_action,“bowl”), isa(roll_action,event)
I4 | 1 has_name(Q=bowl, “bowl”), 0.6 used_for(bowl,phy_storage), used_for(..,“eating soup”)
I5 | 0.7 can(X=newobj, “keep things”), 0.6 can(container, “keep things”), can(bowl, “keep things”)
Table 6: Utilization of grounded concepts and free-text phrases for knowledge
representation and lexicon lookup. A concept can have multiple free-text names
(e.g. in F2, F3 concept bowl has names “bowl” and “basin”) and conversely
free-text names can refer to multiple concepts (e.g. free-text “bowl” refers
to concepts bowl(obj) (F2) and roll(event) (F7)). Further, clauses & facts can
be queried or represented using both constants & free-text (e.g. Q5 queries X
that can “keep things”; similarly F4,F9 & C2,C3 for such facts & clauses).
More generally, beliefs for concepts can be encoded using both free-text
phrases and grounded concepts. As shown in F4 and F9, free-text phrases can be
utilized when a grounded representation may not yet be feasible (e.g.
attributes such as “eating soup” or “solid enclosure”). In C2, the knowledge
that X can be used to “keep things” has both a grounded antecedent (X can be
in close_state) and a free-text antecedent (X has a “solid enclosure”).
Similarly, C3 captures a free-text phrase implication with a grounded
antecedent (X can “keep things” if X is used for phy_storage). As a result,
for Q5, which performs a free-text query (what objects can “keep things”), the
concept newobj is found (due to C2) in addition to container and bowl (due to
C3, C1).
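The lexicon-lookup behaviour of table 6 can be sketched with a small in-memory index. This is a hedged Python illustration, where concept identifiers such as bowl_obj are hypothetical stand-ins for the grounded concepts:

```python
# Mirrors F2, F3, F6, F7: one concept can carry several free-text names,
# and one free-text name can refer to several concepts.
has_name = [
    ("bowl_obj", "bowl", 1.0),
    ("bowl_obj", "basin", 0.8),
    ("roll_action", "roll", 1.0),
    ("roll_action", "bowl", 0.6),
]
isa = {"bowl_obj": "container", "roll_action": "event"}

def concepts_for(name, required_type=None):
    """Q1/Q3-style lookup: which concepts can a free-text name refer to,
    optionally filtered by type (word-sense disambiguation as in Q3)."""
    hits = [(c, p) for c, n, p in has_name if n == name]
    if required_type is not None:
        hits = [(c, p) for c, p in hits if isa.get(c) == required_type]
    return hits

print(concepts_for("bowl"))           # both the object and the action
print(concepts_for("bowl", "event"))  # only roll_action survives
```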
Knowledge
---
F1 | 1::isa(eat, consume_activity).
F2 | 0.9::theme(eat, food).% base beliefs
F3 | 0.9::agent(eat, animal).
F4 | 0.5::subevent(eat, “salivate”).
C1 | 0.9:: subevent(E1,chew):- isa(E1,eat), theme(E1,X), (phy_state(X,solid),property(X,“malleable”)).
C2 | 0.9:: subevent(E1,“crunch”):- isa(E1,eat), theme(E1,X), property(X,“crispy”).
C3 | 0.9:: subevent(E1, drink):- isa(E1, eat), theme(E1,X), phy_state(X,liquid).
C4 | 0.9:: instrument(E1,cutlery):- isa(E1,eat), agent(E1,X), isa(X,human), (loc(E1,fine_dining); attrb(E1,“formal”)).
C5 | 0.9:: instrument(E1,mouth):- isa(E1,eat), agent(E1,X), (isa(X,cat); isa(X,dog)).
Example queries (with background knowledge)
Q1 | isa(eat_ins, eat). agent(eat_ins, A)? subevent(eat_ins, E)?
Q2 | isa(eat_i, eat), theme(eat_i, nachos). property(nachos, “crispy”). subevent(eat_i, E)?
Q3 | isa(bob, human). isa(eat_1, eat). agent(eat_1, bob). isa(micheli, fine_dining). location(eat_1, micheli). instrument(eat_1,I?)?
Q4 | isa(husky, dog). isa(eat_2, eat). agent(eat_2, husky). isa(micheli, fine_dining). location(eat_2, micheli). instrument(eat_2,I?)?
Inferences
I1 | 0.9 agent(eat_ins, A=animal). 0.5 subevent(..E=“salivate”).
I2 | 0.9 subevent(eat_i, E=“crunch”). 0.5 subevent(..“salivate”).
I3 | 0.9 instrument(eat_1, I=cutlery).
I4 | 0.9 instrument(eat_2, I=mouth).
Table 7: Event and situational inferences using semantic role knowledge. Facts
F2-F4 represent base beliefs for event ‘eat’ (e.g. F2 the theme or thing eaten
is probably a type of food). C1-C3 represent clauses indicating different
possible subevents for ‘eat’ depending on knowledge of item being eaten (e.g.
C3 states that if item X being eaten is a liquid, then sub-event is more
likely to be drink). Similarly, C4 & C5 state the instrument used to eat
differs depending on agent or loc. of event (e.g. instrument is likely mouth
if agent= dog or cat).
Event representations and situational inferences. To represent event or
situational knowledge, we utilize semantic-roles [55, 56] (such as ‘theme’,
‘agent’, ‘instrument’, etc.) that we identify for broad event categories in
the ontology (utilizing VerbNet [54] wherever possible). An example is
illustrated in table 7 for the event ‘eat’ which is a type of
‘consume_activity’ (F1). F2 and F3 represent base context-independent beliefs,
more specifically, selectional preferences [57] for the event, e.g. that the
theme of ‘eat’ is a type of ‘food’ (F2), and the agent is a type of ‘animal’
(F3). F4 specifies a tentative belief that subevent of eat is ‘salivate’.
To capture contextual or situational dependencies, we make use of clauses that
allow different inferences to be made on provided contextual/situational
facts. For example, C1-C3 represent how the subevent of ‘eat’ can differ
depending on the properties of the ‘theme’ or the thing being eaten. C1 states
that the subevent is most likely chew if the theme X has physical_state =
solid and that it also has property= “malleable”. C2 states that subevent is
most likely “crunch” if theme X has property=“crispy” while C3 states subevent
is most likely “drink” if theme X is a type of liquid.
Thus, depending on the background context information provided during
querying, the inferences can vary. As shown for Q2, it is specified that the
entity nachos is being eaten and that nachos are “crispy”. Consequently, the
inferred subevent for this instance of eat is probably “crunch” in addition to
“salivate”. Similarly, C4-C5 capture inferences about the type of instrument
used for ‘eat’ depending on the agent or location of the event. Q3 and Q4
illustrate two different instances of ‘eat’, both happening at the fine_dining
location ‘micheli’. In Q3, the agent is bob, known to be a human, and hence
the inferred instrument is probably a type of cutlery, while in Q4 the agent
is a husky, and the instrument is more likely to be simply mouth.
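The context-dependent clauses C1-C3 amount to a dispatch over theme properties. The sketch below is an illustrative re-implementation in plain Python, not the ProbLog inference itself (property names are hypothetical stand-ins for the table 7 terms):

```python
def subevents_of_eat(theme_props):
    """Infer likely subevents of an 'eat' instance from theme properties,
    mirroring F4 and clauses C1-C3 of table 7."""
    results = {"salivate": 0.5}            # base belief F4
    if {"solid", "malleable"} <= theme_props:
        results["chew"] = 0.9              # C1: solid and malleable theme
    if "crispy" in theme_props:
        results["crunch"] = 0.9            # C2: crispy theme
    if "liquid" in theme_props:
        results["drink"] = 0.9             # C3: liquid theme
    return results

# Q2: nachos are crispy, so 'crunch' is inferred alongside 'salivate'.
print(subevents_of_eat({"crispy"}))
```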
Knowledge
---
C1 | location(T3,X,D,in):- isa(t_g,E,enter_phy), theme(t_g,E,X), destination(t_g,E,D), t_end(t_g,E,T3).
C2 | location(T1,X,S,at):-isa(t_g,E,enter_phy), theme(t_g,E,X), source(t_g,E,S), t_start(t_g,E,T1).
C3 | location(T2,X,C,at):-isa(t_g,E,enter_phy), theme(t_g,E,X), channel(t_g,E,C), t_during(t_g,E,T2).
C4 | location(T1,X,D,out):-isa(t_g,E,enter_phy), theme(t_g,E,X), destination(t_g,E,D), t_start(t_g,E,T1).
C5 | more_than(t_g,Tm2,Tm1):-isa(t_g,E,heat), theme(t_g,E,X), t_start(t_g,E,T1), t_end(t_g,E,T2), temp(T1,X,Tm1), temp(T2,X,Tm2).
C6 | 0.9::temp(T2,X,TmpD):-isa(t_g, E, heat), theme(t_g, E, X), t_end(t_g,E,T2), dest_attr(t_g,E,TmpD).
Example queries (with background knowledge)
Q1 | isa(enter1, enter_phy), theme(enter1, air), t_start(enter1,t1), t_during(enter1,t2), t_end(enter1,t3), dest(enter1,engine), channel(enter1,intake). %air enters engine through intake location(t1,air,?,?). location(t2,air,?,?). location(t3,air,?,?)
Q2 | Same as Q1 + source(enter1, propeller). location(t1,air,?,?).
Q3 | isa(heat1, heat). theme(heat1, water). t_start(heat1,t1). t_end(heat1,t3), temp(t1,water,tmp1), dest_attr(heat1,tdest). value(tdest,30,cels). temp(t3, water, X), more_than(X,tmp1)?
Inferences
I1 | 1.0 location(t1,air,engine,out), 1.0 location(t2,air,intake,at). 1.0 location(t3,air,engine,in).
I2 | 1.0 location(t1,air,propeller,at), location(t1,air,engine,out).
I3 | 0.9 temp(t3,water,X=tdest), more_than(X=tdest,tmp1)=True
Table 8: Event schemas with temporal implications (what happens before, during
or after) given filled semantic roles, shown for enter and heat.
Representing event temporal implications (verb schemas). To represent event
expectations or temporal implications (such as what happened before, during or
after), or more formally, verb schemas [58], we utilize the time_point
argument as introduced earlier. Similar to before, the temporal inferences can
differ based on provided situational or semantic role knowledge for the event.
As shown in table 8, C1-C4 represent temporal implications for the event
enter_phy (e.g. ‘the man entered the room’). Here, the additional roles of
source, destination and channel are utilized specifying where an object is
originating from (source), where it is entering into (destination) and what it
passes through (channel) in the process. Further, temporal attributes t_start,
t_during and t_end specify the time points when an event starts, when it is
going on and when it ends respectively.
C1 thus represents the inference that if X is the theme of E, an instance of
event enter_phy whose destination is stated to be D, then at the end of the
event the location of X is in D. Similarly, C2 and C3 capture locative
implications for the theme X before and while the event takes place. C4
additionally represents the inference that an object X, before entering D, is
outside D. Q1 details the scenario “air enters an engine through intake” in an
appropriate logical/semantic-role representation, with enter1 being an
instance of enter_phy, its theme being the entity air, its destination being
engine and its channel being intake. t1, t2 and t3 are the time indicators
specific to the instance enter1. Based on this scenario, Q1 queries the
locations of air before, during and after the event, and infers them by
applying C2-C4. Q2 represents a similar scenario but with the knowledge that
the source was a propeller (“air enters the engine from the propeller through
the intake”). This enables the additional inference that the location prior to
entering was at the propeller, as in I2.
Similarly, C5 and C6 represent temporal implications of the event heat, with
C5 indicating that the object being heated (theme) will have a higher
temperature after heating than prior to heating. C6 indicates that the
temperature after heating will likely be equivalent to the specified
‘dest_attr’ (e.g. 50 degrees celsius in “Heat the batter to 50 degrees
celsius”). Q3 details a scenario of water being heated to 30 degrees celsius,
and queries the temperature after heating and whether it will be higher than
it was at the start of heating.
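Clauses C1-C4 amount to a mapping from filled semantic roles to time-indexed location facts. A plain-Python sketch of that mapping follows (the dict stands in for the role facts; this is illustrative, not the ProbLog evaluation):

```python
def locations(event):
    """Time-indexed locations of the theme of an enter_phy instance,
    mirroring clauses C1-C4 of table 8."""
    out = []
    if "destination" in event:
        out.append((event["t_start"], event["theme"], event["destination"], "out"))  # C4
        out.append((event["t_end"], event["theme"], event["destination"], "in"))     # C1
    if "source" in event:
        out.append((event["t_start"], event["theme"], event["source"], "at"))        # C2
    if "channel" in event:
        out.append((event["t_during"], event["theme"], event["channel"], "at"))      # C3
    return out

# Q1: "air enters the engine through the intake"; reproduces inference I1.
enter1 = {"theme": "air", "destination": "engine", "channel": "intake",
          "t_start": "t1", "t_during": "t2", "t_end": "t3"}
for fact in locations(enter1):
    print(fact)
```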
Physical and comparative inferences. To perform physical and comparative
inferences, we build upon ProbLog’s inbuilt numerical operators (e.g.
$>,>=,<$, etc.). As shown in table 9, F1-F2 and F3-F4 encode the boiling
points of water and olive oil respectively. C1 defines the ‘more_than’
predicate, which is derived by applying the appropriate $>$ operation to the
values of its arguments X and Y.
Using this, physical and comparative inferences can be represented such as in
C2 (“if the empty volume of X is more than size of Y, then X can contain Y”)
and C3 (“if X is a liquid and its temperature is more than its boiling point,
then X likely is boiling”). Q1 represents a scenario wherein ballx has size of
$30cm^{3}$ and cup1 has empty volume of $20cm^{3}$, and asks whether cup1 can
contain ballx. Based on C2 and C1, this evaluates to False. Similarly, Q2
represents a scenario wherein a liquid is being heated to 120 degrees celsius,
and queries the associated event for the liquid, first assuming the liquid is
water and then assuming it is olive oil. In the first case, the inference that
the associated event of water is boiling is made (due to F1, C1 and C3 – the
heating temperature is higher than the boiling point of water), while in the
second case no inference can be made.
This scheme can also support more complex knowledge such as physics laws.
C4-C6 provide an example of the “ideal gas law”. C4 captures the implication
of proportionality – if {X and Y are proportional and Y increases}, or {X and
Y are inversely proportional and Y decreases} then X increases. C5 represents
similarly for X decreasing depending on antecedents. Finally, C6 represents
that if volume V of a gas X is constant, then its temperature T and pressure P
are proportional. Further rules for other combinations of P, V and T can be
similarly represented. Q3 represents a scenario wherein the volume of air is
known to be constant and its temperature is stated to be decreasing (which
could be also implicitly provided by stating that air is a ‘theme’ of event
cool). It queries whether the pressure decreases, and as shown in I3 evaluates
to True based on application of C6 and C5.
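To make the C6 → C4/C5 chain concrete, here is a hedged Python sketch of direction-of-change propagation for the constant-volume case; the function `pressure_change` and its string encoding of change directions are illustrative assumptions, not part of the knowledge-base:

```python
# Sketch of C4-C6: at constant volume, temperature and pressure are
# proportional (C6), so the direction of change propagates (C4/C5).
def pressure_change(temp_change, volume_constant):
    if not volume_constant:
        return "unknown"   # other P-V-T combinations omitted, as in C6's note
    return temp_change     # proportional: pressure follows temperature

# Q3: air's volume is constant and its temperature decreases -> I3
print(pressure_change("decreases", volume_constant=True))
```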
Knowledge
---
F1 | boiling_point(water,bp_water).
F2 | value(bp_water, 100, celsius).
F3 | boiling_point(olive_oil, bp_ooil).
F4 | value(bp_ooil, 300, celsius).
C1 | more_than(X,Y):- value(X,Vx,U), value(Y,Vy,U), Vx$>$Vy.
C2 | can(X,contain,Y):- empty_vol(X,Sx), size(Y,Sy), more_than(Sx,Sy).
C3 | assoc_event(X,boil):- isa(X,liquid), temp(X,T), boiling_point(X,Tbp), more_than(T,Tbp).
C4 | increases(X):- (proportional(X,Y), increases(Y)); (invproportional(X,Y), decreases(Y)).
C5 | decreases(X):- (invproportional(X,Y), increases(Y)); {..}
C6 | proportional(T,P):- isa(X,gas), temp(X,T), vol(X,V), pressure(X,P), constant(V). %gas law (other combns. omitted)
Example queries (with background knowledge)
Q1 | size(ballx, s1), empty_vol(cup1, s2), value(s1,30,cm3), value(s2,20,cm3). can(cup1,contain,ballx)?
Q2 | isa(heat1, heat), dest_attr(heat1,tdest), value(tdest,120,cels..). theme(heat1, water). assoc_event(water,E?)? theme(heat1, olive_oil). assoc_event(olive_oil,E?)?
Q3 | isa(air,gas), pressure(air,p1), vol(air,v1), temp(air,t1). constant(v1), decreases(t1). decreases(p1)?
Inferences
I1 | 0.0 can(cup1,contain,ballx) {False}
I2 | 1.0 assoc_event(water, E=boil). 0.0 assoc_event(olive_oil, E=?). {unknown}
I3 | 1.0 decreases(p1). {True}
Table 9: Physical and comparative inferences. C1-C3 represent clauses that
perform numerical comparisons to perform inferences such as whether an item
can contain another object (C2) or whether a liquid will boil at a particular
temperature (C3) with example inferences I1 and I2. C4-C6 encode the gas law
and capture notion of proportionality/inverse proportionality with example
inference I3.
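The comparative machinery of Table 9 can be sketched in plain Python. This is a hedged re-expression rather than the ProbLog program; the `(value, unit)` tuple encoding and the `BOILING_POINT` table are our assumptions:

```python
# F1-F4: boiling points as (value, unit) pairs
BOILING_POINT = {"water": (100, "celsius"), "olive_oil": (300, "celsius")}

def more_than(x, y):
    """C1: X > Y, comparable only when the units agree."""
    (vx, ux), (vy, uy) = x, y
    return ux == uy and vx > vy

def can_contain(empty_vol, size):
    """C2: X can contain Y if X's empty volume exceeds Y's size."""
    return more_than(empty_vol, size)

def boils(liquid, temp):
    """C3: a liquid boils if its temperature exceeds its boiling point."""
    bp = BOILING_POINT.get(liquid)
    return bp is not None and more_than(temp, bp)

# Q1/I1: cup1 (20 cm^3 empty volume) cannot contain ballx (30 cm^3)
print(can_contain((20, "cm3"), (30, "cm3")))
# Q2/I2: at 120 C water boils; olive oil (bp 300 C) yields no boil inference
print(boils("water", (120, "celsius")), boils("olive_oil", (120, "celsius")))
```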
Higher-order predicate inferences. Certain inferences and beliefs require
higher-order logic wherein a predicate is nested in another predicate. This is
especially useful for sentient type predicates (e.g. believes, desires,
prefers, etc) where the second argument may itself be a predicate/belief, e.g.
believes(student, origin(universe, “big bang”)) indicating that a student
believes the origin of the universe is from the “big bang”.
Similarly, as shown in table 10, the implication of a person being a
vegetarian is captured in C2. It states that if a person is vegetarian, then
they would prefer eating items F (being the theme) that are not made of
animal. Note, this is different from what they can physically eat from an
anatomical perspective (by virtue of being an animal), which is captured in C1
(which states that if X is an animal and Y is a type of food, then X can
probably eat Y). Q1 represents a scenario wherein p1 is a vegetarian, and p2
is unspecified. Different food items including kebab, tofu and pizza are
indicated to be made of different items. When queried what all items p1 and p2
can eat, all items are returned as they are all types of food. However, when
queried what all items p1 prefers to eat, the returned items are tofu and
pizza (the latter with 0.5 likelihood, due to the uncertainty over whether it is made of animal).
For p2’s case however, all items are returned as possible preferences.
A similar example is also provided in C3 to encode the inference whether an
agent (denoted by B) can buy an object (denoted by Z). As shown in the
antecedents, this inference is true if the owner of Z (denoted by S) desires
something that B owns and believes its value to be greater than or equal to
the value of Z. Q2 and I2 show different owners of objects, and how only p3
can buy home1 as its owner (p1) wants a car and believes that the value of
p3’s car exceeds value of home1. Such a scheme could thus be potentially
useful for capturing theory of mind and relevant social commonsense.
Knowledge
---
F1 | 0.7::believes(student, origin(universe, “big bang”))
C1 | % Below captures X can eat Y if X animal and Y food 0.7::can(X, E, Y):- isa(X,animal), isa(Y, food), isa(E,eat).
C2 | % Below is X prefers eating Y not from animal if X is veg. prefers(X, theme(E,F)):- isa(X, person), isa(X,vegetarian), isa(E,eat), isa(F, food), agent(E,X), not(made_of(F,animal)).
C3 | % Below is X can buy Z depending on Z owner’s valuation. can(B, buy, Z):- owns(B,C), owns(S,Z), wants(S,C), believes(S, val(C,VC)), believes(S, val(Z,VZ)), more_than_equal(VC,VZ). %val denotes value
Example queries (with background knowledge)
Q1 | isa(p1, animal), isa(p1, vegetarian), isa(p2, animal), made_of(kebab,animal), made_of(tofu, soy), 0.5::made_of(pizza, animal). agent(eat1,p1). can(p1,eat,F)? prefers(p1,theme(eat1,F))? agent(eat2,p2). prefers(p2,theme(eat2,F))?
Q2 | owns(p1, home1), wants(p1, car), owns(p2, car2), owns(p2, home2), owns(p3, car3), believes(p1, val(home1, 30k)), believes(p1,val(car2,20k)), believes(p1,val(home2,50k)), believes(p1,val(car3,40k)). can(p2, buy, home1)? can(p3, buy, home1)?
Inferences
I1 | 1.0 can(p1,eat,F={meat,kebab,tofu,pizza}) 1.0 prefers(p1,theme(eat1,F=tofu)), 0.5 prefers(.., F=pizza) 1.0 prefers(p2,theme(eat2,F={meat,kebab,tofu,pizza})).
I2 | 0.0 can(p2,buy,home1). 1.0 can(p3,buy, home1).
Table 10: Higher-order inferences. F1 represents a simple higher-order fact
wherein origin(..) is an argument of the belief of a student. C2 encodes the
implication of a person being vegetarian and differentiates the higher-order
predicate prefers from first-order predicate can (as shown in Q1 and I1). C3
represents usage of higher-order predicate believes for inferring whether a
person can buy an item.
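The higher-order “can buy” clause (C3 of Table 10) can be sketched in Python as follows. Beliefs are encoded as nested tuples, e.g. `("believes", "p1", ("val", "car3", 40))`; this encoding, the `ISA` table for type matching (car2 isa car), and the integer valuations are illustrative assumptions:

```python
# Type facts so that wants(p1, car) matches concrete cars (illustrative)
ISA = {"car2": "car", "car3": "car", "home2": "home"}

facts = [
    ("owns", "p1", "home1"), ("wants", "p1", "car"),
    ("owns", "p2", "car2"), ("owns", "p2", "home2"),
    ("owns", "p3", "car3"),
    ("believes", "p1", ("val", "home1", 30)),
    ("believes", "p1", ("val", "car2", 20)),
    ("believes", "p1", ("val", "car3", 40)),
]

def can_buy(buyer, item, facts, isa):
    """C3: B can buy Z if Z's owner S wants something C that B owns and
    believes val(C) >= val(Z)."""
    owns = {(s, o) for f, s, o in facts if f == "owns"}
    wants = {(s, o) for f, s, o in facts if f == "wants"}
    beliefs = {(s, b[1]): b[2] for f, s, b in facts
               if f == "believes" and b[0] == "val"}
    for seller, z in owns:
        if z != item:
            continue
        for b, c in owns:
            if b != buyer or (seller, isa.get(c, c)) not in wants:
                continue
            vc, vz = beliefs.get((seller, c)), beliefs.get((seller, item))
            if vc is not None and vz is not None and vc >= vz:
                return True
    return False

# I2: p1 values car2 (20) below home1 (30), but car3 (40) above it
print(can_buy("p2", "home1", facts, ISA))  # False
print(can_buy("p3", "home1", facts, ISA))  # True
```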
## 4 Knowledge-base construction and applications
In this section, we detail how we extend PrimeNet with our proposed knowledge
representation scheme through crowdsourcing and manual annotation. We then
illustrate its application for passage-based semantic parsing and question-
answering. Our target domain for this exercise was aerospace documents and
concepts (due to project funding requirements) with a focus on interpretable
and ‘deeper’ semantic inferences for passage-based parsing.
### 4.1 Knowledge collection and analysis
A total of 574 concepts comprising original ontological nodes, nouns and verbs
were identified from frequency analysis of word usage in domain text. A total
of ten annotators (including the paper authors) were involved in identifying
applicable concept groups within the ontology and extending the ontology with
new concept groups, lower-level concepts and relevant predicates.
Crowdsourcing was used to collect factual knowledge on type-specific
predicates (relations and attributes) associated with each concept and to
estimate their probability. Manual annotation was used for knowledge clauses
and more domain-specific facts.
Crowdsourcing setup. 100 concepts were chosen to develop and evaluate the
crowdsourcing framework. A survey-based design was used for this. Each concept
was first illustrated with an example sentence to specify the word sense in
which the concept is to be used. This was followed by questions to collect
knowledge on relations and attributes for that concept requiring responses in
the form of free-text phrases, or by selecting applicable options from a list
(for knowledge that can be mapped to existing grounded concepts). Participants
were recruited via Amazon Mechanical Turk across eight different countries. To
ensure high quality of responses, participants first had to pass a word sense
selection test involving 3 concepts, wherein for each concept they were shown
a sentence using a particular sense of the concept and had to identify the
correct sense from four possible options. Only participants who perfectly
completed the selection test were permitted to proceed with the survey.
Participants were then familiarized with the survey setup by doing a small
practice run. For the actual survey, each participant was asked to annotate 3
concepts and each concept was assigned randomly to at least 3 annotators.
Participants were given 10 minutes for each concept, after which the survey
would automatically move on to the next concept to be annotated. Example UIs
for the first round of crowdsourcing are provided in appendix fig. 7. IRB
approval was obtained for the crowdsourcing exercise.
Figure 4: Crowdsourced certainty levels of facts for salient concept groups.
At least 3 annotators rated each fact and average score was used to filter and
identify discrete certainty levels.
Free-text responses in the collected knowledge were first mapped to the
grounded concepts in the existing knowledge-base through name retrieval and
manual checking. In cases where no matching concepts were found, free-text
responses were left as is. The resultant knowledge was then utilized for a
second round of crowdsourcing in which respondents were asked to rate each
knowledge fact on a 5-point scale of {Highly disagree, Disagree, Neutral,
Agree, and Highly Agree}. Again, at least three participants were required to
rate each knowledge fact; each participant rated 30 facts. As before,
participants were familiarized with the task through examples. An average
score for each fact indicating possibility of it being true was computed based
on mapping the responses to a range of -2 to 2 (-2 corresponding to highly
disagree and 2 corresponding to highly agree). Finally, any fact with a score
less than or equal to 0 was discarded, while scores ranging from 0.0-0.7 were
counted as ‘tentative’, 0.7-1.4 as ‘likely’ and 1.4-2.0 as ‘strongly likely’.
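The scoring pipeline just described can be sketched as a few lines of Python; the mapping and bucket boundaries follow the text, while the function name `certainty` and the string labels are our own:

```python
# Map Likert responses to -2..2, average across annotators, then bucket
# into the paper's certainty levels.
SCALE = {"Highly disagree": -2, "Disagree": -1, "Neutral": 0,
         "Agree": 1, "Highly Agree": 2}

def certainty(ratings):
    score = sum(SCALE[r] for r in ratings) / len(ratings)
    if score <= 0:
        return "discard"          # fact removed from the knowledge-base
    if score <= 0.7:
        return "tentative"
    if score <= 1.4:
        return "likely"
    return "strongly likely"

print(certainty(["Agree", "Agree", "Highly Agree"]))  # avg 1.33 -> likely
print(certainty(["Neutral", "Agree", "Disagree"]))    # avg 0.0 -> discard
```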
The collected responses for salient conceptual types are specified in fig. 4.
As shown, across concept types the highest percentage of facts were rated as
likely, followed by tentative and strongly likely. Concepts of type Manmade
were found to have a generally lower percentage (47.8%) of likely facts in
comparison to Process and Non-physical or Measure types.
Figure 5: Knowledge distribution specifying the number of concepts, predicates,
facts (divided by 10 in the figure for scale) and clauses amongst event, entity,
abstract and all-together categories. Figure 6: Utilization of knowledge-base
for semantic parsing and question answering. Input sentences are mapped to
possible knowledge-base semantic forms through a syntactic dependency to
semantic predicate mapping rule-base. Possible forms are filtered through
selectional restriction checking and top-k valid interpretations are
maintained. Question answering is similarly performed by converting the
question into valid logical queries through a template rule-base, which are
then executed against the knowledge-base as well as text-parsed facts to
retrieve answers.
Manual annotation. To annotate clauses and more domain-specific facts, manual
annotation was performed by 6 annotators having knowledge of the domain.
Annotation was performed primarily to identify salient physical properties of
entities, part-whole connections, event schemas, inter-event relations,
entity-event relations and physics laws. Each annotated knowledge was checked
by at least one other annotator for quality checking, and the final knowledge
was added into the knowledge-base in form of concept facts as well as clauses.
Fig. 5 displays the resultant knowledge base statistics. As shown, it consists
of 284 events, 158 real entities and 114 abstract concepts. A total of 95
predicates are used with 45 being broadly event-typed predicates, 56 being
real world entity-typed and 39 being abstract-typed. Note that predicates
overlap amongst groups based on inheritance from top-level nodes. Cumulatively
across concepts, the knowledge-base encodes a total of 6799 facts (of which
2190 are for events and 2160 are for real entities) and 299 clauses (of which
198 are for events).
### 4.2 Knowledge-based semantic parsing and question answering
The resultant knowledge-base was applied for passage based semantic parsing
and question answering. Passages were obtained from chapters of aerospace
manuals in the target domain. As shown in fig. 6, here semantic parsing
involved converting the input text passage into logical forms that have the
same representation format as facts in our knowledge-base. The predicates for
the target logical form are the same as the ones in our knowledge-base. Free-
text phrases are mapped to appropriate grounded knowledge-base concepts
through word-sense disambiguation (described later) or left as is (in case no
mapping concept was found). For example, as shown in the figure, the free-text
phrase “blockage” in the first sentence retrieves the concept
obstruction_state, which is thereafter used in logic semantic forms.
Semantic parsing module. While one could apply a deep-learning based semantic
parser [59, 60] for this task, we currently developed a rule-based parser as we
did not have a significant amount of annotated logic forms corresponding to
input texts to utilize as training data. We developed a rule-base that maps
syntactic dependency antecedents to applicable semantic forms (with predicates
from our knowledge-base). This mapping is many-to-many, i.e. a syntactic
relation combination can map to multiple semantic predicates while the same
semantic predicate can be activated by multiple syntactic relation
combinations. Hence, it can lead to multiple possible interpretations as shown
in fig. 6, wherein the first sentence “The blockage increases the pressure” is
mapped to two mutually-exclusive facts cause(increase, obstruction) and
agent(increase, obstruction). To resolve this, we perform selectional
restriction filtering, wherein for a given event role (or more generally
predicate), the applicable conceptual types for the filled argument are
specified. In interpretations wherein the filled argument does not satisfy the
applicable conceptual types, the interpretation is removed or its likelihood
is reduced. An example is given in C1 of the same figure’s knowledge-base
which states that the agent of increase_event is a type of sentient. Since
this fails for agent(increase, obstruction), this possible mapping is
discarded. However, in the case of the second sentence, “The temperature
increases with time”, for the two mutually-exclusive mappings cause(increase,
time) and co-theme(increase, time), both cases satisfy selectional restriction
checks and hence are stored as two distinct interpretations.
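Selectional-restriction filtering, as described above, can be sketched as follows; the `ISA` type table, the `RESTRICTIONS` map and the triple encoding of interpretations are illustrative assumptions, not the system's actual data structures:

```python
# Each candidate interpretation (predicate, event, filled argument) is kept
# only if the argument's conceptual type satisfies the restriction.
ISA = {"obstruction": "state", "time": "measure", "person": "sentient"}
RESTRICTIONS = {"agent": {"sentient"}}  # C1: agent of increase must be sentient

def keep(interpretation):
    pred, _event, arg = interpretation
    allowed = RESTRICTIONS.get(pred)
    return allowed is None or ISA.get(arg) in allowed

# "The blockage increases the pressure": two mutually-exclusive mappings
candidates = [("cause", "increase", "obstruction"),
              ("agent", "increase", "obstruction")]
print([c for c in candidates if keep(c)])  # the agent reading is discarded
```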
Distinct interpretations are also maintained in cases wherein multiple
concepts are retrieved for a given text phrase (during word sense
disambiguation) and satisfy further selectional restriction checks. Note that
after finding matching concepts, an ‘instance’ of the concept is created (e.g.
increase_Ins1 corresponds to an instance of event increase that was specified
by text-phrase “increases”, similarly, pressure_Ins1 corresponds to an
instance of pressure). An instance is re-used for future sentences if no
determiner is provided, and a new instance for a referred concept is only
created if the determiners are in {a, another, an}. This process is repeated
for all sentences of the given passage, and the top-k interpretations are
stored after each rule application (for our work, we use k=3). For each
passage, the parsed forms are reset, i.e. passages are parsed independently,
and we currently do not perform discourse linking.
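The instance-management policy above (reuse by default, new instance only for {a, an, another}) can be sketched in Python; the `InstanceStore` class and counter-based naming mirror the text's examples (e.g. increase_Ins1) but are our illustrative rendering:

```python
NEW_INSTANCE_DETS = {"a", "an", "another"}

class InstanceStore:
    """Reuse the current instance of a concept unless the determiner
    signals a new one."""
    def __init__(self):
        self.counts = {}    # per-concept instance counter
        self.current = {}   # concept -> most recent instance id

    def resolve(self, concept, determiner=None):
        if concept not in self.current or determiner in NEW_INSTANCE_DETS:
            n = self.counts.get(concept, 0) + 1
            self.counts[concept] = n
            self.current[concept] = f"{concept}_Ins{n}"
        return self.current[concept]

store = InstanceStore()
print(store.resolve("pressure", "the"))  # pressure_Ins1
print(store.resolve("pressure"))         # no determiner: reuse pressure_Ins1
print(store.resolve("pressure", "a"))    # new instance: pressure_Ins2
```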
Overall, 49 rules were identified based on domain text for mapping syntactic
dependency forms into semantic forms (corresponding to knowledge-base fact
representations). The spaCy parser [61] was utilized to obtain dependency
trees for sentences. Further, for anaphora and coreference resolution, we used
the “neuralcoref” package within spacy, and will look into developing
appropriate knowledge-based methods in future work. The resultant parser was
applied on 267 sentences across 21 passages.
Question answering module. Given the parsed semantic forms and existing
knowledge in the knowledge-base, logical queries can be directly performed as
previously illustrated in sec. 3. The different parsed interpretations are
specified as different knowledge sources. Hence, when the inference is
performed, the source trace specifies which interpretation was used to derive
the answer in addition to the originally encoded knowledge in the knowledge-
base. To allow language questions to be queried, we developed an additional
template-based rule-base to convert the question into appropriate logical
queries. As shown in fig. 6, the question-to-logical-query rule-base
identifies phrases that match predicates (such as cause and to in “What causes
A to X?”), which are mapped to the event assertions theme or agent and to the
query cause(X,?).
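A template of this kind can be sketched as a regex-to-query rule; the single pattern, the `to_query` function and the tuple form of the query are illustrative assumptions standing in for the actual rule-base:

```python
import re

# One hypothetical template: "What causes A to X?" -> query cause(X, ?)
TEMPLATES = [
    (re.compile(r"what causes (\w+) to (\w+)\?", re.I),
     lambda m: ("cause", m.group(2), "?")),
]

def to_query(question):
    for pattern, build in TEMPLATES:
        m = pattern.match(question.strip())
        if m:
            return build(m)
    return None  # no matching template

print(to_query("What causes pressure to increase?"))  # ('cause', 'increase', '?')
```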
While some of the queries can be directly answered through the text-parsed
knowledge (as for What causes pressure to increase?), others may require
inference from both text-parsed knowledge and the knowledge-base (as in What
can decrease if the temperature is constant?). In both cases, a trace for the
answer is provided specifying which knowledge facts and clauses were utilized
in answering the query, thereby providing clear interpretability in answering
the question. Further, likelihoods for different possible answers can be
derived based on the activated probabilistic facts or clauses, and in cases
where no applicable knowledge can be found, the unknown answer is generated.
Overall, the resultant knowledge-based system allows for more interpretable
semantic parsing and question answering; however, the input sentence and
question processing capabilities are currently constrained to the rule-base
and will benefit from integration with neural methods in future work. Finally,
the described system can also be used to perform non-monotonic reasoning or
make defeasible inferences, wherein inferences and their likelihoods may
progressively change based on new evidence (or facts) provided by the text.
## 5 Conclusion
We introduced a probabilistic-logic based commonsense representation framework
to encode a larger variety of world knowledge and to represent conceptual
beliefs with varying likelihoods. We also proposed a hierarchical conceptual
ontology designed to identify salient concept relevant relations and enable
beliefs to be encoded at different conceptual levels and re-used where
applicable through inheritance. We illustrated the knowledge representation
scheme and how it can encode different types of commonsense knowledge
including contextually-dependent inferences, event schemas / temporal
implications, physical and comparative inferences, and higher-order
inferences. We then applied the representation scheme and ontology to extend
the PrimeNet knowledge-base by crowdsourcing and manually-encoding knowledge
for the aerospace domain. Finally, we illustrated how the resultant knowledge-
base can be utilized for more interpretable passage-based semantic parsing and
question answering.
## Acknowledgement
This research is supported by A*STAR under its “K-EMERGE: Knowledge Extraction,
Modelling and Explainable Reasoning for General Expertise” programme (Grant
number A19E2b009). The crowdsourcing setup was approved by the A*STAR
Institutional Review Board (IRB Reference: 2019-016).
## References
* Davis and Marcus [2015] E. Davis, G. Marcus, Commonsense reasoning and commonsense knowledge in artificial intelligence, Communications of the ACM 58 (2015) 92–103.
* Cambria et al. [2009] E. Cambria, A. Hussain, C. Havasi, C. Eckl, Common sense computing: From the society of mind to digital intuition and beyond, in: J. Fierrez, J. Ortega, A. Esposito, A. Drygajlo, M. Faundez-Zanuy (Eds.), Biometric ID Management and Multimodal Communication, volume 5707 of Lecture Notes in Computer Science, Springer, Berlin Heidelberg, 2009, pp. 252–259.
* Hayes [1990] P. J. Hayes, The naive physics manifesto, in: The Philosophy of Artificial Intelligence, 1990.
* Grice [1975] H. P. Grice, Logic and conversation, in: Speech acts, Brill, 1975, pp. 41–58.
* Hespos and Spelke [2004] S. J. Hespos, E. S. Spelke, Conceptual precursors to language, Nature 430 (2004) 453–456.
* Lenat [1995] D. B. Lenat, Cyc: A large-scale investment in knowledge infrastructure, Communications of the ACM 38 (1995) 33–38.
* Liu and Singh [2004] H. Liu, P. Singh, Conceptnet—a practical commonsense reasoning tool-kit, BT technology journal 22 (2004) 211–226.
* Speer et al. [2017] R. Speer, J. Chin, C. Havasi, Conceptnet 5.5: An open multilingual graph of general knowledge, in: Thirty-first AAAI conference on artificial intelligence, 2017.
* Hwang et al. [2021] J. D. Hwang, C. Bhagavatula, R. Le Bras, J. Da, K. Sakaguchi, A. Bosselut, Y. Choi, (comet-) atomic 2020: On symbolic and neural commonsense knowledge graphs, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 2021, pp. 6384–6392.
* Sap et al. [2019] M. Sap, R. Le Bras, E. Allaway, C. Bhagavatula, N. Lourie, H. Rashkin, B. Roof, N. A. Smith, Y. Choi, Atomic: An atlas of machine commonsense for if-then reasoning, in: Proceedings of the AAAI conference on artificial intelligence, volume 33, 2019, pp. 3027–3035.
* Yasunaga et al. [2022] M. Yasunaga, A. Bosselut, H. Ren, X. Zhang, C. D. Manning, P. Liang, J. Leskovec, Deep bidirectional language-knowledge graph pretraining, in: Neural Information Processing Systems (NeurIPS), 2022.
* Yasunaga et al. [2021] M. Yasunaga, H. Ren, A. Bosselut, P. Liang, J. Leskovec, QA-GNN: Reasoning with language models and knowledge graphs for question answering, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, 2021, pp. 535–546.
* Young et al. [2018] T. Young, E. Cambria, I. Chaturvedi, H. Zhou, S. Biswas, M. Huang, Augmenting end-to-end dialogue systems with commonsense knowledge, in: Proceedings of the AAAI Conference on Artificial Intelligence, 2018, pp. 4970–4977.
* Wang et al. [2020] P. Wang, N. Peng, F. Ilievski, P. Szekely, X. Ren, Connecting the dots: A knowledgeable path generator for commonsense question answering, in: Findings of the Association for Computational Linguistics: EMNLP 2020, Association for Computational Linguistics, Online, 2020, pp. 4129–4140.
* Karthik et al. [2022] S. Karthik, M. Mancini, Z. Akata, Kg-sp: Knowledge guided simple primitives for open world compositional zero-shot learning, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 9336–9345.
* Zareian et al. [2020] A. Zareian, S. Karaman, S.-F. Chang, Bridging knowledge graphs to generate scene graphs, in: European conference on computer vision, Springer, 2020, pp. 606–623.
* Roy et al. [2022] A. Roy, D. Ghosal, E. Cambria, N. Majumder, R. Mihalcea, S. Poria, Improving zero-shot learning baselines with commonsense knowledge, Cognitive Computation 14 (2022) 2212–2222.
* Chen et al. [2021] H. Chen, Y. Huang, H. Takamura, H. Nakayama, Commonsense knowledge aware concept selection for diverse and informative visual storytelling, Proceedings of the AAAI Conference on Artificial Intelligence 35 (2021) 999–1008.
* Gao et al. [2022] S. Gao, J. D. Hwang, S. Kanno, H. Wakaki, Y. Mitsufuji, A. Bosselut, Comfact: A benchmark for linking contextual commonsense knowledge, in: Findings of EMNLP, 2022.
* Talmor et al. [2019] A. Talmor, J. Herzig, N. Lourie, J. Berant, CommonsenseQA: A question answering challenge targeting commonsense knowledge, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 2019, pp. 4149–4158.
* Jin et al. [2022] Z. Jin, T. Men, H. Yuan, Z. He, D. Sui, C. Wang, Z. Xue, Y. Chen, J. Zhao, CogKGE: A knowledge graph embedding toolkit and benchmark for representing multi-source and heterogeneous knowledge, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, Association for Computational Linguistics, Dublin, Ireland, 2022, pp. 166–173.
* Guan et al. [2020] J. Guan, F. Huang, Z. Zhao, X. Zhu, M. Huang, A knowledge-enhanced pretraining model for commonsense story generation, Transactions of the Association for Computational Linguistics 8 (2020) 93–108.
* Bhargava and Ng [2022] P. Bhargava, V. Ng, Commonsense knowledge reasoning and generation with pre-trained language models: A survey, in: AAAI, 2022.
* De Raedt and Kimmig [2015] L. De Raedt, A. Kimmig, Probabilistic (logic) programming concepts, Machine Learning 100 (2015) 5–47.
* Collins and Quillian [1969] A. M. Collins, M. R. Quillian, Retrieval time from semantic memory, Journal of verbal learning and verbal behavior 8 (1969) 240–247.
* Rosch [1973] E. H. Rosch, Natural categories, Cognitive psychology 4 (1973) 328–350.
* Dahlgren and McDowell [1986] K. Dahlgren, J. McDowell, Kind types in knowledge representation, in: Coling 1986 Volume 1: The 11th International Conference on Computational Linguistics, 1986.
* Liu et al. [2023] Q. Liu, S. Han, E. Cambria, Y. Li, K. Kwok, PrimeNet: A framework for commonsense knowledge representation and reasoning based on conceptual primitives, https://sentic.net/primenet.pdf (2023).
* Singh et al. [2002] P. Singh, T. Lin, E. T. Mueller, G. Lim, T. Perkins, W. L. Zhu, Open mind common sense: Knowledge acquisition from the general public, in: OTM Confederated International Conferences” On the Move to Meaningful Internet Systems”, Springer, 2002, pp. 1223–1237.
* Cambria et al. [2022] E. Cambria, Q. Liu, S. Decherchi, F. Xing, K. Kwok, SenticNet 7: A commonsense-based neurosymbolic ai framework for explainable sentiment analysis, in: Proceedings of LREC, 2022, pp. 3829–3839.
* Tandon et al. [2014] N. Tandon, G. De Melo, F. Suchanek, G. Weikum, Webchild: Harvesting and organizing commonsense knowledge from the web, in: Proceedings of the 7th ACM international conference on Web search and data mining, 2014, pp. 523–532.
* Zhang et al. [2020] H. Zhang, X. Liu, H. Pan, Y. Song, C. W.-K. Leung, Aser: A large-scale eventuality knowledge graph, in: Proceedings of the web conference 2020, 2020, pp. 201–211.
* Ilievski et al. [2021] F. Ilievski, P. Szekely, B. Zhang, Cskg: The commonsense knowledge graph, in: European Semantic Web Conference, Springer, 2021, pp. 680–696.
* Storks et al. [2019] S. Storks, Q. Gao, J. Y. Chai, Recent advances in natural language inference: A survey of benchmarks, resources, and approaches, arXiv preprint arXiv:1904.01172 (2019).
* Sen et al. [2022] P. Sen, B. W. Carvalho, R. Riegel, A. G. Gray, Neuro-symbolic inductive logic programming with logical neural networks, in: AAAI, 2022.
* Glanois et al. [2022] C. Glanois, Z. Jiang, X. Feng, P. Weng, M. Zimmer, D. Li, W. Liu, J. Hao, Neuro-symbolic hierarchical rule induction, in: International Conference on Machine Learning, PMLR, 2022, pp. 7583–7615.
* Moghimifar et al. [2021] F. Moghimifar, L. Qu, T. Y. Zhuo, G. Haffari, M. Baktashmotlagh, Neural-symbolic commonsense reasoner with relation predictors, in: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Association for Computational Linguistics, Online, 2021.
* Winters et al. [2022] T. Winters, G. Marra, R. Manhaeve, L. D. Raedt, Deepstochlog: Neural stochastic logic programming, in: AAAI, 2022.
* Manhaeve et al. [2018] R. Manhaeve, S. Dumancic, A. Kimmig, T. Demeester, L. De Raedt, Deepproblog: Neural probabilistic logic programming, Advances in Neural Information Processing Systems 31 (2018).
* Mao et al. [2019] J. Mao, C. Gan, P. Kohli, J. B. Tenenbaum, J. Wu, The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision, arXiv preprint arXiv:1904.12584 (2019).
* Bisk et al. [2020] Y. Bisk, R. Zellers, J. Gao, Y. Choi, et al., Piqa: Reasoning about physical commonsense in natural language, in: Proceedings of the AAAI conference on artificial intelligence, volume 34, 2020, pp. 7432–7439.
* Zellers et al. [2019] R. Zellers, Y. Bisk, A. Farhadi, Y. Choi, From recognition to cognition: Visual commonsense reasoning, in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 6720–6731.
* Ghosal et al. [2022] D. Ghosal, S. Shen, N. Majumder, R. Mihalcea, S. Poria, CICERO: A dataset for contextualized commonsense inference in dialogues, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Dublin, Ireland, 2022, pp. 5010–5028.
* Sap et al. [2019] M. Sap, H. Rashkin, D. Chen, R. Le Bras, Y. Choi, Social IQa: Commonsense reasoning about social interactions, in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics, Hong Kong, China, 2019.
* Qin et al. [2021] L. Qin, A. Gupta, S. Upadhyay, L. He, Y. Choi, M. Faruqui, Timedial: Temporal commonsense reasoning in dialog, in: ACL, 2021.
* Lifschitz [2019] V. Lifschitz, Answer set programming, Springer Heidelberg, 2019.
* Gelfond and Kahl [2014] M. Gelfond, Y. Kahl, Knowledge representation, reasoning, and the design of intelligent agents: The answer-set programming approach, 2014.
* Basu et al. [2021] K. Basu, S. C. Varanasi, F. Shakerin, J. Arias, G. Gupta, Knowledge-driven natural language understanding of english text and its applications, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, 2021, pp. 12554–12563.
* Raedt et al. [2007] L. D. Raedt, A. Kimmig, H. T. Toivonen, Problog: A probabilistic prolog and its application in link discovery, in: IJCAI, 2007.
* Fierens et al. [2014] D. Fierens, G. V. den Broeck, J. Renkens, D. Shterionov, B. Gutmann, I. Thon, G. Janssens, L. D. Raedt, Inference and learning in probabilistic logic programs using weighted boolean formulas, Theory and Practice of Logic Programming 15 (2014) 358 – 401.
* Dries et al. [2015] A. Dries, A. Kimmig, W. Meert, J. Renkens, G. V. d. Broeck, J. Vlasselaer, L. D. Raedt, Problog2: Probabilistic logic programming, in: Joint european conference on machine learning and knowledge discovery in databases, Springer, 2015, pp. 312–315.
* Lloyd [2012] J. W. Lloyd, Foundations of logic programming, Springer Science & Business Media, 2012\.
* Miller [1995] G. A. Miller, Wordnet: a lexical database for english, Communications of the ACM 38 (1995) 39–41.
* Schuler [2005] K. K. Schuler, VerbNet: A broad-coverage, comprehensive verb lexicon, University of Pennsylvania, 2005.
* Gruber [1965] J. S. Gruber, Studies in lexical relations., Ph.D. thesis, Massachusetts Institute of Technology, 1965.
* Fillmore [1967] C. J. Fillmore, The case for case, 1967.
* Resnik [1997] P. Resnik, Selectional preference and sense disambiguation, in: Tagging Text with Lexical Semantics: Why, What, and How?, 1997.
* Ferretti et al. [2001] T. R. Ferretti, K. McRae, A. Hatherell, Integrating verbs, situation schemas, and thematic role concepts, Journal of Memory and Language 44 (2001) 516–547.
* Dong and Lapata [2018] L. Dong, M. Lapata, Coarse-to-fine decoding for neural semantic parsing, arXiv preprint arXiv:1805.04793 (2018).
* Cambria et al. [2022] E. Cambria, R. Mao, S. Han, Q. Liu, Sentic parser: A graph-based approach to concept extraction for sentiment analysis, in: Proceedings of ICDM Workshops, 2022, pp. 413–420.
* Honnibal and Montani [2017] M. Honnibal, I. Montani, spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing, 2017. To appear.
## Appendix A Probabilistic logic programming primer and relevant syntax
Table 11 lists the probabilistic logic programming terminology and ProbLog
syntax relevant to our work’s discussion. As an example, the natural language
belief ‘X can move with base likelihood 0.9 if X has a leg or a wheel and X is
not in a stuck state’ is represented in logical form as
“0.9::can(X,move) :- (has(X,leg); has(X,wheel)), not(state(X,stuck)).”
For a more comprehensive overview of probabilistic logic programming, we refer
the reader to [24].
Term | Description | Examples
---|---|---
Predicate | A relation (e.g. isa, causes) or attribute (e.g. size, color) | isa, shape, related_to, color..
Constant | A term that is grounded and fixed (e.g. numbers, text-phrases and symbols with lower-case first letter) | ‘X eats Y’, -23, person, ‘Bob’, x
Variable | An ungrounded term that can take values during inference. Denoted by symbols with upper-case first character | X, Y, Event, Bob, Person
Atom | A predicate with n arguments; Syntax: pred(arg0,arg1..) | has_part(car,wheel,4), isa(X,X)
Logical operators | (i) ‘;’ denotes ‘logical or’, (ii) ‘,’ denotes ‘logical and’, (iii) ‘not’ or ‘$\backslash+$’ denotes negation, (iv) ‘.’ denotes end of fact/clause | isa(car,vehicle). not(isa(car,apple)). isa(car,on); isa(car,off).
Probabilistic fact | A declarative belief with a given base likelihood (assumed to be 1 if unspecified) | 0.5::can(x,buy,car). isa(X,X). 0.98::has(person,face). isa(person,physical).
Probabilistic clause or rule | An inferential belief where a head atom ‘h’ is inferred with a base clause likelihood $P_{c}$ from computation of body (or antecedent) set of atoms (denoted by ‘b’) as follows: $P(h)=P_{c}\times\prod{P(b)}$ | $P_{c}$:: head:- a1, (a2;a3). (Base syntax) 0.6::can(X,move):-has(X,leg). can(X,buy,Y):-has(X,Z),(value(Z)$>=$value(Y)).
Model | Corresponds to a set of facts and rules representing a world ‘model’ on which inferences can be performed | 0.5::can(item,move). 0.9::can(X,move):-(has(X,wheel);has(X,leg)).
Query | An atomic query to infer from a given world likelihood of a fact (e.g. can(human,move)?) or valid constants for variables for an atom (e.g. can(X,move)?) | can(human,move)? $=>$ 0.9 can(X,move)?$=>$ X=item (0.5); X= human(0.9) ..
Table 11: Terms in probabilistic logic programming and ProbLog syntax relevant
to our work’s discussion.
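As an illustration only, the clause rule $P(h)=P_{c}\times\prod{P(b)}$ from Table 11 can be sketched in a few lines of Python. The probabilities and the `has(person,leg)` fact below are hypothetical, and the sketch assumes independent body atoms; ProbLog's actual inference handles shared variables, disjunction, and negation more generally.

```python
def clause_prob(p_clause, body_probs):
    """P(h) = P_c × ∏ P(b): the clause-likelihood rule from Table 11,
    under the simplifying assumption of independent body atoms."""
    p = p_clause
    for p_b in body_probs:
        p *= p_b
    return p

# 0.9::can(X,move) :- has(X,leg).  with a hypothetical 0.98::has(person,leg).
assert abs(clause_prob(0.9, [0.98]) - 0.882) < 1e-12
# a clause with an empty (always-true) body reduces to a probabilistic fact
assert clause_prob(1.0, []) == 1.0
```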
## Appendix B Crowdsourcing annotation UI
Figure 7: Crowdsourcing round 1 screening example.

Figure 8: Crowdsourcing round 1 practice question example.

Figure 9: Crowdsourcing round 1 main task example.

Figure 10: Crowdsourcing round 2 main task example.
# WeatherFusionNet: Predicting Precipitation from Satellite Data
Jiří Pihrt
Faculty of Information Technology
Czech Technical University in Prague

Rudolf Raevskiy
Faculty of Information Technology
Czech Technical University in Prague

Petr Šimánek
Faculty of Information Technology
Czech Technical University in Prague

Matej Choma
Meteopress spol s.r.o.
Faculty of Information Technology
Czech Technical University in Prague
###### Abstract
The short-term prediction of precipitation is critical in many areas of life.
Recently, a large body of work was devoted to forecasting radar reflectivity
images. The radar images are available only in areas with ground weather
radars. Thus, we aim to predict high-resolution precipitation from lower-
resolution satellite radiance images. A neural network called WeatherFusionNet
is employed to predict severe rain up to eight hours in advance.
WeatherFusionNet is a U-Net architecture that fuses three different ways to
process the satellite data: predicting future satellite frames, extracting
rain information from the current frames, and using the input sequence
directly. Using the presented method, we achieved 1st place in the NeurIPS
2022 Weather4cast Core challenge. The code and trained parameters are
available at https://github.com/Datalab-FIT-CTU/weather4cast-2022.
## 1 Introduction
Reliably predicting precipitation is a challenge with societal impact,
especially the prediction of extreme weather. The Weather4cast 2022 [IARAI]
competition allowed us to work on the nowcasting problem for areas with no or
unreliable access to weather radar data. Using only satellite data to forecast
extreme precipitation is a very challenging task, and it complements our
present effort to master radar prediction. Weather4Cast 2022 follows the
previous challenge of Herruzo et al. [2021].
The joint team of Meteopress and Czech Technical University in Prague aims to
develop the best possible approach for precipitation prediction. We want to
overcome the limitations of the current optical methods by applying machine
learning. We currently work on improving nowcasting based on weather radar
data with very accurate results as presented by Choma et al. [2022]. In the
past, we used a basic U-Net [Ronneberger et al., 2015] and later the PredRNN
architecture [Wang et al., 2022] to solve this challenging task. We further
improved the models, but we later found it necessary to include more prior
physical knowledge in the architecture. The PhyDNet of Guen and Thome [2020]
demonstrated superior performance in radar prediction (in both storm structure
and movement) while increasing the interpretability of the model and also the
physical trustworthiness. We further improved the physical part of PhyDNet
[Choma et al., 2022].
In the Weather4Cast, the task is even more challenging than radar prediction.
We decided to apply our good experience with PhyDNet and enrich the model with
other architectural choices and fusion. We call the resulting model
WeatherFusionNet as it fuses three different ways to process the satellite
data; predicting future satellite images, extracting rain information from the
current frames, and using the input sequence directly.
## 2 Used data
The task of this competition was to predict 32 ground-radar (1 channel)
images, corresponding to the next 8 hours, from 4 satellite images (1 hour).
The entire dataset was provided by the organizers of the Weather4cast
competition. It covers 7 European regions and contains satellite and radar
images for the years 2019 and 2020. The satellite images are composed of 2
visible (VIS), 2 water vapor (WV), and 7 infrared (IR) bands. Each image has a
size of $252\times 252$ pixels and a temporal resolution of 15 minutes. Each
satellite pixel corresponds to a 12x12 square kilometer area. The target rain
rates in the sequence to be predicted have a higher spatial resolution, with 1
pixel corresponding to an area of about 2x2 square kilometers.
Training sequences are generated with a sliding window, using the provided
data loader, generating a total of 228928 samples. The validation set contains
predefined sequences, 840 in total.
Static data with latitude, longitude, and topological height were also
provided, but were not used in our model.
The competition was designed as a binary classification task. The rain
threshold was specified as 0.2 during the second stage.
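The spatial and temporal bookkeeping above can be double-checked in a few lines; all numbers come from the task description:

```python
# temporal resolution: one frame every 15 minutes
assert 4 * 15 == 60            # 4 input satellite frames cover 1 hour
assert 32 * 15 == 8 * 60       # 32 output radar frames cover 8 hours

# spatial resolution: 12 km/px (satellite) vs ~2 km/px (radar target)
assert 252 * 12 == 3024        # satellite patch edge length in km
assert 252 * 2 == 504          # radar patch edge length in km
assert 12 // 2 == 6            # resolution factor between the two grids
```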
## 3 Model Architecture
Our model consists of three modules, which are all trained separately, looking
at the data from different angles.
Figure 1: Diagram of the model architecture. The dimensions in parentheses
denote the temporal and channel dimension sizes, respectively.
Firstly, we use a module that we call sat2rad. This network is trained to
estimate the rainfall at the current time step from a single satellite frame.
By training it this way, we believe it can efficiently extract information
about the current rain situation in the input sequence, without having to
predict the future. This module is realized by a U-Net [Ronneberger et al.,
2015] with 11 input channels and 1 output channel. In the overall model, it is
applied to all 4 input satellite frames independently, generating 4 channels
in total. We take advantage of the spatial invariance of a convolutional
network to predict the rainfall for the entire satellite area, even though we
only have radar data for a smaller area. More details on this, as well as the
specific U-Net architecture, are given later in this section.
Secondly, we use a recurrent convolutional network PhyDNet [Guen and Thome,
2020]. PhyDNet aims to disentangle physical dynamics from other complementary
visual information. Its architecture consists of two branches. One branch is
responsible for the physical dynamics, and features a recurrent physically
constrained cell called PhyCell, for performing PDE-constrained prediction in
latent space. The second branch extracts other residual information required
for future prediction, such as visual appearance and details, using ConvLSTM
cells [Shi et al., 2015].
Although PhyDNet is designed to work with radar frames, it was difficult to
apply it to this competition in a straightforward way, because of the
combination of relatively small spatial area and long prediction timeframe,
and the fact that we don’t have accurate input radar data during inference.
Instead, we trained PhyDNet only on the satellite data, which do not have
these limitations. Essentially, we used PhyDNet to extend the input sequence
of satellite frames. We decided to limit the output sequence length to 10,
based on our memory limitations and past experience with PhyDNet.
Finally, the outputs of the two modules described above are concatenated,
along with the input sequence, and fused by another U-Net to generate the
final prediction. This U-Net has a total of 158 input channels and 32 output
channels. Similarly to the sat2rad module, the prediction covers the entire
$3024\times 3024$ square kilometer area, but we only need to predict, and only
have data for, the center $504\times 504$ square kilometer area. Because of
that, we crop the center part ($42\times 42$ pixels) of the prediction and
then upscale it back to the target $252\times 252$ resolution, as shown in
Figure 1. We believe this approach takes more advantage of the spatial
invariance of a convolutional network such as U-Net than simply forcing it to
cover the smaller spatial area directly. We also use this approach when
training the sat2rad module.
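As a consistency check of the sizes stated above (the decomposition below is our own bookkeeping, not given explicitly in the text), the 158 input channels of the fusion U-Net and the crop/upscale geometry work out as follows:

```python
# fusion U-Net input channels, derived from the stated module sizes
in_frames, sat_channels = 4, 11      # input sequence: 4 frames × 11 bands
phydnet_frames = 10                  # PhyDNet extends the sequence by 10 frames
sat2rad_channels = in_frames * 1     # one rain map per input frame

total = (in_frames * sat_channels    # raw input sequence, flattened: 44
         + sat2rad_channels          # sat2rad rain maps: 4
         + phydnet_frames * sat_channels)  # predicted frames: 110
assert total == 158                  # matches the stated input channel count

# centre crop and upscale geometry (12 km per satellite pixel)
crop_px = 504 // 12                  # 504 km radar-covered centre → 42 px
assert crop_px == 42
assert (252 - crop_px) // 2 == 105   # crop offset inside the 252 px prediction
assert crop_px * 6 == 252            # bilinear upscale with scale factor 6
```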
The upscale operation is realized simply by an Upsample layer with bilinear
interpolation and a scale factor of 6. We tried more sophisticated approaches
during the first stage of the competition, but we did not observe any
significant improvements. There is room for more research on this part.
For both of the U-Net modules in our architecture, we use the following
configuration:
* •
filter sizes of 32, 64, 128, 256, 512,
* •
each 3x3 convolution is followed by a BatchNorm layer and a ReLU,
* •
downscaling is realized by 2x2 max pooling with stride 2,
* •
and upscaling is realized by Upsample layers with a scale factor of 2 and
bilinear interpolation.
One limitation of U-Net is that it only works with images of sizes divisible
by 32. To get around this, we pad the input images by 2 pixels (with padding
mode set to replicate) on each side to increase the size to 256x256, right
before inputting them to both U-Nets we use. Then we crop the output back to
252x252.
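The padding arithmetic can be sketched as a small helper (a minimal sketch; the helper name is ours, and the actual implementation uses the framework's padding layer with replicate mode):

```python
def symmetric_padding(size, multiple=32):
    """Per-side padding that grows `size` to the next multiple of `multiple`."""
    target = -(-size // multiple) * multiple   # ceiling division
    total = target - size
    return total // 2, total - total // 2

# 252 → 256 requires 2 pixels on each side, as used before both U-Nets
assert symmetric_padding(252) == (2, 2)
assert (252 + 2 + 2) % 32 == 0
```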
## 4 Training Procedure
As described in the previous section, the sat2rad module was not trained on
sequences, but on individual frames. To achieve this, we simply set the input
and output sequence length to 1 and subtracted 1 from the output indexes so
that they match the input. No further modifications were required. It was
trained with a batch size of 32, and the loss function used was
BCEWithLogitsLoss. Other hyperparameters are listed in Table 1.
Learning rate | 1e-3
---|---
Weight decay | 1e-2
Optimizer | AdamW
Model precision | 32
Positive weight | 2.58
Table 1: U-Net hyperparameters
The Satellite PhyDNet module is trained only using satellite data. We set the
satellite sequence length to 14 and then split it into 4 input and 10 output
frames. Because the provided validation set was not designed for longer
sequences, we used part of the training set as a validation split.
Specifically, the first 150000 samples were used for training, and the rest
was used to generate validation sequences, using a sliding window with a
stride of 32 samples. PhyDNet was trained with the loss function $L=L1+L2$. We
used teacher forcing, starting with a probability of 1 and decreasing it by
5e-5 every step. Hyperparameters are listed in Table 2.
ConvLSTMCell |
---|---
Input dimension | 64
Hidden dimensions | [128, 128, 64]
Kernel size | (3, 3)
PhyCell |
Input dimension | 64
Hidden dimensions | [49]
Kernel size | (7, 7)
Optimizer | Adam
Learning rate | 1e-3
Batch Size | 16
Table 2: PhyDNet hyperparameters
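The teacher-forcing schedule used for PhyDNet training (start at probability 1, decrease by 5e-5 per step) can be sketched as follows; the function name is ours:

```python
def teacher_forcing_prob(step, start=1.0, decay=5e-5):
    """Linearly decayed probability of feeding the ground-truth frame
    (rather than the model's own prediction) back into PhyDNet."""
    return max(0.0, start - decay * step)

assert teacher_forcing_prob(0) == 1.0
assert abs(teacher_forcing_prob(10_000) - 0.5) < 1e-12
assert teacher_forcing_prob(40_000) == 0.0   # fully autoregressive from here on
```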
The final U-Net model requires outputs from the previous two modules, but
during training they were frozen (their weights were not updated). This model
was also trained with BCEWithLogitsLoss, batch size 16, and other
hyperparameters listed in Table 1. No learning rate scheduling was used. The
evolution of CSI, F1, IoU metrics and loss on the validation dataset is
presented in Figure 3. The models have been trained using a single NVIDIA
A100-40GB.
## 5 Results
In this section, we show the empirical results of our model. We also compare
it to a baseline U-Net model, which has exactly the same hyperparameters and
training procedure but does not include the inputs from the sat2rad and
PhyDNet modules. Without access to the test and heldout datasets, it is
difficult to explain why WeatherFusionNet performed worse on the Transfer
Heldout IoU.
Model | Validation IoU | Core Heldout IoU | Transfer Heldout IoU
---|---|---|---
UNet | 0.2988 | 0.2950 | 0.2567
WeatherFusionNet | 0.3212 | 0.3162 | 0.2488
Table 3: IoU metric on the validation set and from the final competition
leaderboards.

Figure 2: IoU metric over time computed on the validation set.
Figure 2 shows how the IoU metric varies in time over a prediction. Figure 4
demonstrates a prediction of a satellite image sequence by PhyDNet. Figure 5
shows results of sat2rad U-Net module. Figure 6 presents a sample
WeatherFusionNet prediction.
Figure 3: Evolution of the validation loss, IoU, F1, and CSI metrics during
training, comparing WeatherFusionNet with a plain U-Net.

Figure 4: Satellite PhyDNet prediction example. Each row is a different
satellite channel; the first two columns are input data, and further columns
show the PhyDNet prediction of the satellite for up to 150 minutes.

Figure 5: sat2rad U-Net module prediction examples. Each row illustrates one
sample time instance; the first three columns show input satellite data (three
different channels). The black/white square highlights the target radar area.
The fourth column presents the predicted rain probability, and the last column
is the target radar image.

Figure 6: WeatherFusionNet prediction example. The first row is the evolution
of the target radar image over 8 hours, the second row is the predicted
probability, and the last row is the final prediction.
## 6 Conclusion
We presented our approach to forecasting precipitation at high resolution
based only on low-resolution satellite data. The method is called
WeatherFusionNet and was used as a solution to the Weather4cast challenge. The
model ingests three different sources of data: the satellite prediction, the
sat2rad output, and the satellite images themselves. The model proved
effective, winning the Weather4cast 2022 Core challenge.

The current model is not trained end-to-end, mostly due to memory
requirements, but it would be interesting to try this in the future. Special
attention should also be paid to the upscaling part of the model. The
upscaling is not critical in the current setting, but it would be if we tried
to solve a regression task instead of classification, especially when modeling
storms with a realistic structure, which is a very difficult task for current
deep-learning methods. We may also add the static data that has not been used
in this paper.

The model is also well prepared for additional sources of data; in particular,
if radar data are available, the sat2rad module can be skipped.
## References
* Choma et al. [2022] Matej Choma, Jakub Bartel, and Petr Šimánek. Precipitation nowcasting by deep physics-constrained neural networks. Technical report, Copernicus Meetings, 2022.
* Guen and Thome [2020] Vincent Le Guen and Nicolas Thome. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11474–11484, 2020.
* Herruzo et al. [2021] Pedro Herruzo, Aleksandra Gruca, Llorenç Lliso, Xavier Calbet, Pilar Rípodas, Sepp Hochreiter, Michael Kopp, and David P. Kreil. High-resolution multi-channel weather forecasting – first insights on transfer learning from the weather4cast competitions 2021. In _2021 IEEE International Conference on Big Data (Big Data)_ , pages 5750–5757, 2021. doi: 10.1109/BigData52589.2021.9672063.
* [4] IARAI. Weather4cast – multi-sensor weather forecast competition. URL https://www.iarai.ac.at/weather4cast/.
* Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _International Conference on Medical image computing and computer-assisted intervention_ , pages 234–241. Springer, 2015.
* Shi et al. [2015] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional lstm network: A machine learning approach for precipitation nowcasting. _Advances in neural information processing systems_ , 28, 2015.
* Wang et al. [2022] Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip Yu, and Mingsheng Long. Predrnn: A recurrent neural network for spatiotemporal predictive learning. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2022.
# Fractional backward stochastic differential equations with delayed generator
Jiaqiang Wen Department of Mathematics, Southern University of Science and
Technology, Shenzhen 518055, China (Email: wenjq@sustech.edu.cn). JW is
supported by National Natural Science Foundation of China (grant No. 12101291)
and Guangdong Basic and Applied Basic Research Foundation (grant No.
2022A1515012017).
Abstract: In this paper, we focus on the solvability of a class of fractional
backward stochastic differential equations (BSDEs, for short) with delayed
generator. In this class of equations, the generator includes not only the
values of the solutions of the present but also the past. Under Lipschitz
condition, the existence and uniqueness of such BSDEs are established. A
comparison theorem for this class of BSDEs is also obtained.
Key words: fractional backward stochastic differential equation; backward
stochastic differential equation; fractional Brownian motion; time delayed.
AMS subject classifications. 60H10; 60G22.
## 1 Introduction
A centered Gaussian process $B^{H}=\\{B^{H}_{t},t\geqslant 0\\}$ is called a
fractional Brownian motion (fBm, for short) with Hurst parameter $H\in(0,1)$
if its covariance is
$E(B^{H}_{t}B^{H}_{s})=\frac{1}{2}(t^{2H}+s^{2H}-|t-s|^{2H}),\quad~{}t,s\geqslant
0.$
When $H=\frac{1}{2}$, this process degenerates into a classical Brownian
motion. For $H>\frac{1}{2}$, $B^{H}_{t}$ exhibits the property of long-range
dependence, which makes fBm an important driving noise in many fields such as
finance, telecommunication networks, and physics.
In 1990, Pardoux and Peng [22] introduced nonlinear backward stochastic
differential equations (BSDEs, for short). In the following two decades, they
were widely used in fields such as mathematical finance [11], stochastic
control [26], and partial differential equations [23]. At the same time,
driven by applications, the theory of BSDEs has branched in many directions.
For example, BSDEs driven by fractional Brownian motion,
also known as fractional BSDEs, were studied by Hu and Peng [17], where they
proved the existence and uniqueness of solutions when the Hurst parameter
$H>\frac{1}{2}$. Then Maticiuc and Nie [20] obtained some general results of
fractional BSDEs through a rigorous approach. Buckdahn and Jing [3] studied
fractional mean-field stochastic differential equations (SDEs, for short) with
$H>1/2$ and a stochastic control problem. Some other recent developments of
fractional BSDEs can be found in Bender [1], Bender and Viitasaari [2],
Borkowska [4], Douissi, Wen and Shi [10], Hu, Jolis and Tindel [14], Hu,
Nualart and Song [16], Jing [19], Wen and Shi [24, 25], etc., among theory and
applications. Furthermore, as a natural extension of BSDEs, Delong and
Imkeller [7, 8] studied BSDEs with delayed generators, since delayed systems
play an important role in many fields, such as stochastic optimal control,
financial risk, insurance management, and pricing and hedging (see Delong [6]
and the references therein). Shortly after, on the application side, Chen and
Wu [5] obtained the maximum principle for a stochastic optimal control problem
with delay. Along this line, Huang and Shi [12] obtained the maximum principle
for the optimal control of fully coupled forward-backward stochastic
differential delayed equations.
As another important development of BSDEs, fractional BSDEs with delayed
generators have significant applications in stochastic optimal control
problems with delay. However, to the best of our knowledge, no study of
fractional BSDEs with delayed generator is available to date. Therefore, in
this paper, we focus on the solvability of this type of BSDE. Specifically, we
study the following fractional BSDE with delayed generator:
$\begin{cases}-dY(t)=f(t,\eta(t),Y(t),Z(t),Y(t-\delta),Z(t-\delta))dt-Z(t)dB_{t}^{H},&0\leqslant t\leqslant T;\\\ Y(T)=\xi,\ \ Y(t)=\varphi(t),\ \ Z(t)=\psi(t),&-\delta\leqslant t<0,\end{cases}$ (1.1)
where $\delta\in(0,T]$ is a time delay parameter. We call $\xi$ the terminal
value and $f(\cdot)$ the generator of the BSDE (1.1). The delayed generator
means that in (1.1), the generator $f(\cdot)$ depends not only on the present
values of the solution $(Y,Z)$ but also on the past ones. It should be pointed
out that the stochastic integral in (1.1) is the divergence type integral (see
Decreusefond and Üstünel [9], and Nualart [21]). In particular, we consider
the case of the Hurst parameter $H>\frac{1}{2}$. First, two different methods
are proposed to study the existence and uniqueness of solutions of equation
(1.1). Note that, even for classical BSDEs with delayed generators, as shown
in Delong and Imkeller [7], existence and uniqueness cannot hold for an
arbitrary time horizon and time delay. Similarly, the existence and uniqueness
proved by our first method hold only for a small time horizon. However, with
the second method, existence and uniqueness hold for an arbitrary time
horizon. In addition, given its wide applicability to BSDEs, a comparison
theorem for this type of equation is obtained. In future research, we will
focus on the application of this equation to stochastic optimal control.
This article is organized as follows. In Section 2, some preliminaries about
fractional Brownian motion are presented. The existence and uniqueness of the
equation (1.1) are proved in Section 3, and a comparison theorem for such
fractional BSDEs is derived in Section 4.
## 2 Preliminaries
We recall some basic results on fBm in this section, which are important for
our subsequent study. For a deeper discussion, the reader may refer to
Decreusefond and Üstünel [9], Hu [13], and Nualart [21].
Denote by $(\Omega,\mathcal{F},P)$ a complete probability space on which
$B^{H}=\\{B^{H}_{t},t\geqslant 0\\}$ is a fBm, and assume that the filtration
$\mathcal{F}$ is generated by $B^{H}$. Let $H>\frac{1}{2}$ throughout this
paper. Moreover, we define the function $\phi(x)=H(2H-1)|x|^{2H-2}$, where
$x\in\mathbb{R}$. Suppose $\xi:[0,T]\rightarrow\mathbb{R}$ and
$\eta:[0,T]\rightarrow\mathbb{R}$ are two continuous functions. Define
$\langle\xi,\eta\rangle_{T}=\int_{0}^{T}\int_{0}^{T}\phi(u-v)\xi_{u}\eta_{v}dudv\quad\text{and}\quad\|\xi\|_{T}^{2}=\langle\xi,\xi\rangle_{T}.$ (2.1)
It is easy to verify that $\langle\cdot,\cdot\rangle_{T}$ is a Hilbert scalar
product. We denote by $\mathcal{H}$ the completion of the space of continuous
functions under this scalar product. Besides, we denote by $\mathcal{P}_{T}$
the set of all polynomials of the fBm over the interval $[0,T]$, i.e., every
element of $\mathcal{P}_{T}$ has the following form
$\Phi(\omega)=h\left(\int_{0}^{T}\xi_{1}(t)dB_{t}^{H},...,\int_{0}^{T}\xi_{n}(t)dB_{t}^{H}\right),$
where $h$ is a polynomial function and $\xi_{i}\in\mathcal{H},i=1,2,...,n$. In
addition, Malliavin derivative operator $D_{s}^{H}$ of
$\Phi\in\mathcal{P}_{T}$ is defined by:
$D_{s}^{H}\Phi=\sum\limits_{i=1}^{n}\frac{\partial h}{\partial
x_{i}}\left(\int_{0}^{T}\xi_{1}(t)dB_{t}^{H},...,\int_{0}^{T}\xi_{n}(t)dB_{t}^{H}\right)\xi_{i}(s),\
\ s\in[0,T].$
Since the derivative operator
$D^{H}:L^{2}(\Omega,\mathcal{F},P)\rightarrow L^{2}(\Omega,\mathcal{F};\mathcal{H})$
is closable, one can denote by $\mathbb{D}^{1,2}$ the completion of
$\mathcal{P}_{T}$ under the following norm
$\|\Phi\|^{2}_{1,2}=E|\Phi|^{2}+E\|D^{H}_{s}\Phi\|^{2}_{T}.$
Furthermore, we introduce the following derivative
$\mathbb{D}_{t}^{H}\Phi=\int_{0}^{T}\phi(t-s)D_{s}^{H}\Phi ds,\ \ t\in[0,T].$
(2.2)
Now, we introduce the adjoint operator of Malliavin derivative operator
$D^{H}$. We call this operator the divergence operator, which represents the
divergence type integral and is denoted by $\delta(\cdot)$.
###### Definition 2.1.
We say a process $u\in L^{2}(\Omega\times[0,T];\mathcal{H})$ belongs to
$Dom(\delta)$ if there is a random variable $\delta(u)\in
L^{2}(\Omega,\mathcal{F},P)$ satisfying the duality relationship
$E(\Phi\delta(u))=E(\langle D^{H}_{\cdot}\Phi,u\rangle_{T})\quad\text{for every }\Phi\in\mathcal{P}_{T}.$
Moreover, if $u\in Dom(\delta)$, we define
$\int_{0}^{T}u_{s}dB^{H}_{s}:=\delta(u)$, the divergence type integral of $u$
with respect to $B^{H}$.
It should be pointed out that, in this paper, unless otherwise specified, the
$dB^{H}$-integral represents the divergence type integral.
###### Proposition 2.2 (Hu [13], Proposition 6.25).
Denote by $\mathbb{L}^{1,2}_{H}$ the space of all processes
$F:\Omega\times[0,T]\rightarrow\mathcal{H}$ satisfying
$E\left(\|F\|_{T}^{2}+\int_{0}^{T}\int_{0}^{T}|\mathbb{D}_{s}^{H}F_{t}|^{2}dsdt\right)<\infty.$
Then, if $F\in\mathbb{L}^{1,2}_{H}$, the divergence type integral
$\int_{0}^{T}F_{s}dB_{s}^{H}$ exists in $L^{2}(\Omega,\mathcal{F},P)$, and
$E\left(\int_{0}^{T}F_{s}dB_{s}^{H}\right)=0;\ \
E\left(\int_{0}^{T}F_{s}dB_{s}^{H}\right)^{2}=E\left(\|F\|_{T}^{2}+\int_{0}^{T}\int_{0}^{T}\mathbb{D}_{s}^{H}F_{t}\mathbb{D}_{t}^{H}F_{s}dsdt\right).$
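As a quick consistency check (ours, not in the original), take $F\equiv 1$ in Proposition 2.2. A deterministic integrand has vanishing Malliavin derivative, so the second term drops and the isometry reduces to

```latex
E\left(\int_{0}^{T}dB_{s}^{H}\right)^{2}
  =\|1\|_{T}^{2}
  =\int_{0}^{T}\!\int_{0}^{T}\phi(u-v)\,du\,dv
  =T^{2H},
```

which recovers $E\big[(B^{H}_{T})^{2}\big]=T^{2H}$ from the covariance formula in the Introduction.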
###### Proposition 2.3 (Hu [13], Theorem 11.1).
For $i=1,2$, let $g_{i}$ and $f_{i}$ be two real valued processes satisfying
$E\int_{0}^{T}(|g_{i}(s)|^{2}+|f_{i}(s)|^{2})ds<\infty$. Moreover, assume that
$D^{H}_{t}f_{i}(s)$ is continuously differentiable in its arguments
$(s,t)\in[0,T]^{2}$ for almost every $\omega\in\Omega$, and
$E\int_{0}^{T}\int_{0}^{T}|\mathbb{D}_{t}^{H}f_{i}(s)|^{2}dsdt<\infty$. Denote
$X_{i}(t)=\int_{0}^{t}g_{i}(s)ds+\int_{0}^{t}f_{i}(s)dB_{s}^{H},\ \
t\in[0,T].$
Then
$\begin{split}X_{1}(t)X_{2}(t)=&\int_{0}^{t}X_{1}(s)g_{2}(s)ds+\int_{0}^{t}X_{1}(s)f_{2}(s)dB_{s}^{H}+\int_{0}^{t}X_{2}(s)g_{1}(s)ds\\\
&+\int_{0}^{t}X_{2}(s)f_{1}(s)dB_{s}^{H}+\int_{0}^{t}\mathbb{D}_{s}^{H}X_{1}(s)g_{2}(s)ds+\int_{0}^{t}\mathbb{D}_{s}^{H}X_{2}(s)g_{1}(s)ds.\end{split}$
## 3 Existence and uniqueness
In this section, we study the solvability of the fractional BSDE (1.1). In
particular, we propose two different methods to prove the existence and
uniqueness of solutions of (1.1), and at the end of this section we compare
the two methods. To this end, let $\eta_{0}$ be a constant, and let
$b:[0,T]\rightarrow\mathbb{R}$ and $\sigma:[0,T]\rightarrow\mathbb{R}$ be two
deterministic differentiable functions with $\sigma_{t}\neq 0$ (so, by
continuity, either $\sigma_{t}<0$ for all $t$ or $\sigma_{t}>0$ for all $t$).
Let
$\eta_{t}=\eta_{0}+\int_{0}^{t}b_{s}ds+\int_{0}^{t}\sigma_{s}dB_{s}^{H},\quad~{}t\in[0,T].$
(3.1)
We recall that (see (2.1)),
$\|\sigma\|_{t}^{2}=\int_{0}^{t}\int_{0}^{t}\phi(u-v)\sigma_{u}\sigma_{v}dudv=H(2H-1)\int_{0}^{t}\int_{0}^{t}|u-v|^{2H-2}\sigma_{u}\sigma_{v}dudv,$
therefore, $\frac{d}{dt}(\|\sigma\|_{t}^{2})=2\hat{\sigma}_{t}\sigma_{t}>0$
for $t\in(0,T]$, where $\hat{\sigma}_{t}=\int_{0}^{t}\phi(t-v)\sigma_{v}dv$.
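The norm $\|\sigma\|_{t}^{2}$ can be checked numerically. For constant $\sigma_{s}\equiv 1$ one has $\|\sigma\|_{t}^{2}=\int_{0}^{t}\int_{0}^{t}\phi(u-v)dudv=E[(B^{H}_{t})^{2}]=t^{2H}$, and the midpoint-rule sketch below (ours; the function name is hypothetical) confirms this identity. The singular diagonal cells are integrated in closed form.

```python
def fbm_norm_sq(t, H, n=200):
    """Midpoint-rule estimate of ||1||_t^2 =
    ∫0^t ∫0^t H(2H-1)|u-v|^(2H-2) du dv, for H > 1/2."""
    h = t / n
    c = H * (2 * H - 1)
    # each of the n diagonal h×h cells integrates exactly to h^(2H)
    total = n * h ** (2 * H)
    # off-diagonal cells at offset |i-j| = k occur 2*(n-k) times
    for k in range(1, n):
        total += 2 * (n - k) * c * (k * h) ** (2 * H - 2) * h * h
    return total

# expected value is t^(2H); for H = 0.75 and t = 1 this is 1.0
assert abs(fbm_norm_sq(1.0, 0.75) - 1.0) < 0.05
assert abs(fbm_norm_sq(2.0, 0.75) - 2 ** 1.5) < 0.15
```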
In the following, we study equation (1.1). For convenience, we first
investigate the following fractional BSDE with delayed generator:
$\begin{cases}-dY(t)=f(t,\eta(t),Y(t-\delta),Z(t-\delta))dt-Z(t)dB_{t}^{H},&0\leqslant t\leqslant T;\\\ Y(T)=\xi,\ \ Y(t)=\varphi(t),\ \ Z(t)=\psi(t),&-\delta\leqslant t<0,\end{cases}$ (3.2)
where the time delay $\delta\in(0,T]$. We introduce the following sets: for
$t_{1}\leqslant t_{2}$,
* $\bullet$
$C^{1,3}_{pol}([t_{1},t_{2}]\times\mathbb{R})$ is the space of all
$C^{1,3}$-functions over $[t_{1},t_{2}]\times\mathbb{R}$, which together with
their derivatives are of polynomial growth;
* $\bullet$
$\mathcal{V}_{[t_{1},t_{2}]}:=\bigg{\\{}Y=\phi\big{(}\cdot,\eta(|\cdot|)\big{)}\ \Big{|}\ \phi\in C_{pol}^{1,3}([t_{1},t_{2}]\times\mathbb{R})\ \text{with}\ \frac{\partial\phi}{\partial t}\in C_{pol}^{0,1}([t_{1},t_{2}]\times\mathbb{R}),\ t_{1}\leqslant t\leqslant t_{2}\bigg{\\}},$
and by $\widetilde{\mathcal{V}}_{[t_{1},t_{2}]}$ and
$\widetilde{\mathcal{V}}_{[t_{1},t_{2}]}^{H}$ denote the completion of
$\mathcal{V}_{[t_{1},t_{2}]}$ under the following norm respectively,
$\|Y\|\triangleq\bigg{(}\int_{t_{1}}^{t_{2}}e^{\beta
t}\mathbb{E}|Y(t)|^{2}dt\bigg{)}^{\frac{1}{2}},\ \ \ \
\|Z\|\triangleq\bigg{(}\int_{t_{1}}^{t_{2}}|t|^{2H-1}e^{\beta
t}\mathbb{E}|Z(t)|^{2}dt\bigg{)}^{\frac{1}{2}},$
where $\beta\geqslant 0$ is a constant. Then
$\widetilde{\mathcal{V}}_{[t_{1},t_{2}]}^{H}$ and
$\widetilde{\mathcal{V}}_{[t_{1},t_{2}]}$ are Banach spaces. A pair
$(Y_{\cdot},Z_{\cdot})\in\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$
is said to be a solution of the equation (3.2), if it satisfies (3.2).
### 3.1 The first method
In this subsection, the first method is used to prove the existence and
uniqueness of the equation (3.2). In order to do this, the following
assumptions are needed.
* (H1)
$\xi=h(\eta(T))$, where $h\in C^{3}_{pol}(\mathbb{R})$,
$\varphi\in\widetilde{\mathcal{V}}_{[-\delta,0]}$, and
$\psi\in\widetilde{\mathcal{V}}_{[-\delta,0]}^{H}$.
* (H2)
The generator $f(t,x,y,z):[0,T]\times\mathbb{R}^{3}\rightarrow\mathbb{R}$ is a
$C_{pol}^{0,1}$-continuous function. Moreover, there exists $L\geqslant 0$
such that $f$ satisfies the following Lipschitz condition:
$|f(t,x,y,z)-f(t,x,y^{\prime},z^{\prime})|\leqslant
L\big{(}|y-y^{\prime}|+|z-z^{\prime}|\big{)},\ \ \forall
t\in[0,T],x,y,y^{\prime},z,z^{\prime}\in\mathbb{R}.$
###### Theorem 3.1.
Under (H1) and (H2), for a sufficiently small time horizon $T$, Eq. (3.2)
admits a unique solution.
###### Proof.
For a given pair
$(y_{\cdot},z_{\cdot})\in\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$,
we consider the following BSDE:
$\begin{cases}-dY_{t}=g(t,\eta_{t})dt-Z_{t}dB_{t}^{H},\qquad\qquad 0\leqslant
t\leqslant T;\\\ Y_{T}=\xi,\ \ Y_{t}=\varphi(t),\ \
Z_{t}=\psi(t),\qquad-\delta\leqslant t<0,\end{cases}$ (3.3)
where
$g(t,\eta_{t})=f(t,\eta_{t},y_{t-\delta},z_{t-\delta}),\quad~{}0\leqslant
t\leqslant T.$
Note that $(y_{\cdot},z_{\cdot})$ and $\delta$ are given. By Proposition 17
of Maticiuc and Nie [20], noting that $Y_{\cdot}$ and $Z_{\cdot}$ are prescribed
on $[-\delta,0)$, we see that Eq. (3.3) has a unique solution
$(Y_{\cdot},Z_{\cdot})$. Then, we can define a mapping
$I:\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}\longrightarrow\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$
such that $I[(y_{\cdot},z_{\cdot})]=(Y_{\cdot},Z_{\cdot})$. If we can prove
that $I$ is a contraction mapping on
$\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$,
then the desired result follows. Hence, in the following, we show
that $I$ is a contraction mapping.
For arbitrary pairs $(y_{\cdot},z_{\cdot})$ and
$(y^{\prime}_{\cdot},z^{\prime}_{\cdot})$ in
$\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$,
we let
$I[(y_{\cdot},z_{\cdot})]=(Y_{\cdot},Z_{\cdot}),\ \ \
I[(y^{\prime}_{\cdot},z^{\prime}_{\cdot})]=(Y^{\prime}_{\cdot},Z^{\prime}_{\cdot}).$
Define
$\begin{array}[]{ll}\displaystyle\hat{y}_{\cdot}\triangleq
y_{\cdot}-y^{\prime}_{\cdot},\quad~{}\hat{z}_{\cdot}\triangleq
z_{\cdot}-z^{\prime}_{\cdot},\\\ \vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\hat{Y}_{\cdot}\triangleq
Y_{\cdot}-Y^{\prime}_{\cdot},\quad~{}\hat{Z}_{\cdot}\triangleq
Z_{\cdot}-Z^{\prime}_{\cdot}.\end{array}$
Now, by applying the Itô formula (Proposition 2.3) and taking expectations, we
obtain
$\begin{array}[]{ll}\displaystyle\mathbb{E}\left(e^{\beta
t}\hat{Y}_{t}^{2}+\beta\int_{t}^{T}e^{\beta
s}\hat{Y}_{s}^{2}ds+2\int_{t}^{T}e^{\beta
s}\mathbb{D}_{s}^{H}\hat{Y}_{s}\hat{Z}_{s}ds\right)\\\ \vskip 3.0pt plus 1.0pt
minus 1.0pt\cr\displaystyle=2\mathbb{E}\int_{t}^{T}e^{\beta
s}\hat{Y}_{s}\big{[}f(s,\eta_{s},y_{s-\delta},z_{s-\delta})-f(s,\eta_{s},y^{\prime}_{s-\delta},z^{\prime}_{s-\delta})\big{]}ds.\end{array}$
The main difficulty in the above equation comes from the Malliavin
derivative term $\mathbb{D}_{s}^{H}\hat{Y}_{s}$. By virtue of the results obtained
in Hu and Peng [17] and Maticiuc and Nie [20], we have the relation
$\mathbb{D}_{s}^{H}\hat{Y}_{s}=\frac{\hat{\sigma}_{s}}{\sigma_{s}}\hat{Z}_{s}$
(see [17, 20] for details). Furthermore, from Remark 6 of [20], there is a
constant $M>0$ such that
$\frac{s^{2H-1}}{M}\leqslant\frac{\hat{\sigma}_{s}}{\sigma_{s}}\leqslant
Ms^{2H-1},\ \ \forall s\in[0,T].$
In the following, for technical reasons, we choose $M>2$. Then, we deduce
$\begin{array}[]{ll}\displaystyle\mathbb{E}\left(e^{\beta
t}\hat{Y}_{t}^{2}+\beta\int_{t}^{T}e^{\beta
s}\hat{Y}_{s}^{2}ds+\frac{2}{M}\int_{t}^{T}e^{\beta
s}s^{2H-1}\hat{Z}_{s}^{2}ds\right)\\\ \vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant 2\mathbb{E}\int_{t}^{T}e^{\beta
s}\hat{Y}_{s}\big{[}f(s,\eta_{s},y_{s-\delta},z_{s-\delta})-f(s,\eta_{s},y^{\prime}_{s-\delta},z^{\prime}_{s-\delta})\big{]}ds.\end{array}$
So by choosing $\beta>1$, from (H2), we have
$\begin{array}[]{ll}\displaystyle\mathbb{E}\left(e^{\beta
t}|\hat{Y}_{t}|^{2}+\int_{t}^{T}e^{\beta
s}|\hat{Y}_{s}|^{2}ds+\frac{2}{M}\int_{t}^{T}e^{\beta
s}s^{2H-1}|\hat{Z}_{s}|^{2}ds\right)\\\ \vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant 2L\int_{t}^{T}e^{\beta
s}\mathbb{E}\left[|\hat{Y}_{s}|(|\hat{y}_{s-\delta}|+|\hat{z}_{s-\delta}|)\right]ds\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant
2L\int_{t}^{T}\big{(}e^{\beta
s}\mathbb{E}|\hat{Y}_{s}|^{2}\big{)}^{\frac{1}{2}}\cdot\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s-\delta}|+|\hat{z}_{s-\delta}|)^{2}\right]^{\frac{1}{2}}ds.\end{array}$
(3.4)
Denote $x(t)=\big{(}e^{\beta
t}\mathbb{E}|\hat{Y}_{t}|^{2}\big{)}^{\frac{1}{2}}$. Then from (3.4),
$x(t)^{2}\leqslant 2L\int_{t}^{T}x(s)\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s-\delta}|+|\hat{z}_{s-\delta}|)^{2}\right]^{\frac{1}{2}}ds.$
(3.5)
By applying Lemma 20 of [20] to (3.5), it follows that
$\begin{array}[]{ll}\displaystyle x(t)\leqslant L\int_{t}^{T}\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s-\delta}|+|\hat{z}_{s-\delta}|)^{2}\right]^{\frac{1}{2}}ds\\\
\vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\qquad\leqslant\sqrt{2}L\int_{t}^{T}\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s-\delta}|^{2}+|\hat{z}_{s-\delta}|^{2})\right]^{\frac{1}{2}}ds.\end{array}$
Therefore
$x(t)^{2}\leqslant 2L^{2}\bigg{(}\int_{0}^{T}\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s-\delta}|^{2}+|\hat{z}_{s-\delta}|^{2})\right]^{\frac{1}{2}}ds\bigg{)}^{2},\
\ t\in[0,T].$ (3.6)
From the inequality $\sqrt{|a|+|b|}\leqslant\sqrt{|a|}+\sqrt{|b|}$ and
Hölder’s inequality, and noting that $\delta\leqslant T$, one has
$\begin{array}[]{ll}\displaystyle\bigg{(}\int_{0}^{T}\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s-\delta}|^{2}+|\hat{z}_{s-\delta}|^{2})\right]^{\frac{1}{2}}ds\bigg{)}^{2}\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant
e^{\beta\delta}\bigg{(}\int_{-\delta}^{T}\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s}|^{2}+|\hat{z}_{s}|^{2})\right]^{\frac{1}{2}}ds\bigg{)}^{2}\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant
e^{\beta\delta}\bigg{(}\int_{-\delta}^{T}\left[e^{\beta
s}\mathbb{E}|\hat{y}_{s}|^{2}\right]^{\frac{1}{2}}ds+\int_{-\delta}^{T}\left[e^{\beta
s}\mathbb{E}|\hat{z}_{s}|^{2}\right]^{\frac{1}{2}}ds\bigg{)}^{2}\\\ \vskip
3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant
2e^{\beta\delta}\bigg{[}\bigg{(}\int_{-\delta}^{T}\left[e^{\beta
s}\mathbb{E}|\hat{y}_{s}|^{2}\right]^{\frac{1}{2}}ds\bigg{)}^{2}+\bigg{(}\int_{-\delta}^{T}\big{[}|s|^{1-2H}\cdot
e^{\beta
s}|s|^{2H-1}\mathbb{E}|\hat{z}_{s}|^{2}\big{]}^{\frac{1}{2}}ds\bigg{)}^{2}\bigg{]}\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant
2e^{\beta\delta}\bigg{[}(T+\delta)\int_{-\delta}^{T}e^{\beta
s}\mathbb{E}|\hat{y}_{s}|^{2}ds+\frac{T^{2-2H}+\delta^{2-2H}}{2-2H}\int_{-\delta}^{T}e^{\beta
s}|s|^{2H-1}\mathbb{E}|\hat{z}_{s}|^{2}ds\bigg{]}\\\ \vskip 3.0pt plus 1.0pt
minus 1.0pt\cr\displaystyle\leqslant 2e^{\beta
T}\bigg{(}2T+\frac{2T^{2-2H}}{2-2H}\bigg{)}\int_{-\delta}^{T}e^{\beta
s}\mathbb{E}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds.\end{array}$
(3.7)
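As a quick numerical sanity check (purely illustrative, not part of the proof), the elementary ingredients of the chain (3.7) can be verified directly: the inequality $\sqrt{|a|+|b|}\leqslant\sqrt{|a|}+\sqrt{|b|}$, the bound $(x+y)^{2}\leqslant 2x^{2}+2y^{2}$, and the closed form of the weight integral $\int_{-\delta}^{T}|s|^{1-2H}ds=\frac{T^{2-2H}+\delta^{2-2H}}{2-2H}$ for $H\in(\frac{1}{2},1)$. The sample values of $H$, $\delta$, $T$ below are arbitrary.

```python
import math
import random

random.seed(0)
for _ in range(2000):
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    # sqrt(|a| + |b|) <= sqrt(|a|) + sqrt(|b|)
    assert math.sqrt(abs(a) + abs(b)) <= math.sqrt(abs(a)) + math.sqrt(abs(b)) + 1e-12
    # (a + b)^2 <= 2 a^2 + 2 b^2
    assert (a + b) ** 2 <= 2 * a * a + 2 * b * b + 1e-12

def weight_integral(H, delta, T, n=200_000):
    """Midpoint-rule approximation of int_{-delta}^{T} |s|^{1-2H} ds
    (the singularity at s = 0 is integrable for H in (1/2, 1))."""
    h = (T + delta) / n
    return sum(abs(-delta + (i + 0.5) * h) ** (1 - 2 * H) for i in range(n)) * h

H, delta, T = 0.75, 0.3, 1.0
closed_form = (T ** (2 - 2 * H) + delta ** (2 - 2 * H)) / (2 - 2 * H)
assert abs(weight_integral(H, delta, T) - closed_form) / closed_form < 1e-2
```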
Then combining (3.6) and (3.7), one has
$\int_{0}^{T}x(s)^{2}ds\leqslant 8L^{2}Te^{\beta
T}\bigg{(}T+\frac{T^{2-2H}}{2-2H}\bigg{)}\int_{-\delta}^{T}e^{\beta
s}\mathbb{E}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds.$
(3.8)
Again, from (3.6) and (3.7), arguing similarly to the above, we obtain
$\begin{array}[]{ll}\displaystyle\int_{0}^{T}|s-\delta|^{1-2H}x(s)^{2}ds\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant
2L^{2}\int_{0}^{T}|s-\delta|^{1-2H}ds\cdot\bigg{(}\int_{0}^{T}\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s-\delta}|^{2}+|\hat{z}_{s-\delta}|^{2})\right]^{\frac{1}{2}}ds\bigg{)}^{2}\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant
2L^{2}\int_{-\delta}^{T}|s|^{1-2H}ds\cdot\bigg{(}\int_{0}^{T}\left[e^{\beta
s}\mathbb{E}(|\hat{y}_{s-\delta}|^{2}+|\hat{z}_{s-\delta}|^{2})\right]^{\frac{1}{2}}ds\bigg{)}^{2}\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant 8L^{2}e^{\beta
T}\bigg{(}T+\frac{T^{2-2H}}{2-2H}\bigg{)}\cdot\frac{2T^{2-2H}}{2-2H}\int_{-\delta}^{T}e^{\beta
s}\mathbb{E}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds.\end{array}$
(3.9)
Now, by combining (3.4), (3.8) and (3.9), one has
$\begin{array}[]{ll}\displaystyle\mathbb{E}\left(\int_{0}^{T}e^{\beta
s}|\hat{Y}_{s}|^{2}ds+\frac{2}{M}\int_{0}^{T}e^{\beta
s}s^{2H-1}|\hat{Z}_{s}|^{2}ds\right)\\\ \vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant 2L\mathbb{E}\int_{0}^{T}e^{\beta
s}\bigg{(}\frac{1}{v}\big{(}1+|s-\delta|^{1-2H}\big{)}|\hat{Y}_{s}|^{2}+v|\hat{y}_{s-\delta}|^{2}+v|s-\delta|^{2H-1}|\hat{z}_{s-\delta}|^{2}\bigg{)}ds\\\
\vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\frac{2L}{v}\mathbb{E}\int_{0}^{T}e^{\beta
s}\big{(}1+|s-\delta|^{1-2H}\big{)}|\hat{Y}_{s}|^{2}ds+2Lve^{\beta\delta}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds\\\ \vskip 3.0pt
plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\widetilde{L}\cdot\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds,\end{array}$
where $v>0$ is a constant, and
$\widetilde{L}=\frac{16L^{3}}{v}e^{\beta
T}\bigg{(}T+\frac{T^{2-2H}}{2-2H}\bigg{)}\cdot\bigg{(}T+\frac{2T^{2-2H}}{2-2H}\bigg{)}+2Lve^{\beta
T}.$
Noting that $M>2$, and that $\hat{Y}_{s}=0$ and $\hat{Z}_{s}=0$ for
$s\in[-\delta,0)$, we obtain
$\begin{array}[]{ll}\displaystyle\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{Y}_{s}|^{2}+|s|^{2H-1}|\hat{Z}_{s}|^{2}\right)ds\\\ \vskip 3.0pt
plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\bigg{[}\frac{8L^{3}}{v}Me^{\beta
T}\bigg{(}T+\frac{2T^{2-2H}}{2-2H}\bigg{)}^{2}+LMve^{\beta
T}\bigg{]}\cdot\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds.\end{array}$
Choose $v$ such that $LMve^{\beta T}<\frac{1}{4}$, and $T$ sufficiently small
such that
$\frac{8L^{3}}{v}Me^{\beta
T}\bigg{(}T+\frac{2T^{2-2H}}{2-2H}\bigg{)}^{2}<\frac{1}{4}.$
Then
$\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{Y}_{s}|^{2}+|s|^{2H-1}|\hat{Z}_{s}|^{2}\right)ds\leqslant\frac{1}{2}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds.$
Hence $I$ is a contraction mapping on
$\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}^{H}_{[-\delta,T]}$,
which implies (3.2) admits a unique solution. This completes the proof. ∎
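The fixed-point scheme behind Theorem 3.1 can be illustrated on a deterministic toy analog in which the $dB_{t}^{H}$ term is switched off (so $Z\equiv 0$) and the generator is linear in the delayed argument. The sketch below is only an illustration of the contraction mechanism; all constants ($L$, $\delta$, $T$, the terminal value $\xi$, and the past value $\varphi$) are invented for the example.

```python
# Toy analog of the map I from the proof: given an input path y(.), set
#   Y(t) = xi + int_t^T L * y(s - delta) ds  on [0, T],
# with Y = phi on [-delta, 0) and Y(T) = xi, then iterate I.  For small
# L * T the successive sup-norm differences contract geometrically,
# mirroring the Banach fixed-point argument.
L_const, delta, T, xi, phi = 0.4, 0.2, 1.0, 1.0, 0.5
N = 1200
h = (T + delta) / N            # grid chosen so that delta/h is an integer
d = round(delta / h)           # delay measured in grid steps

def apply_I(y):
    """One application of I on a uniform grid over [-delta, T]."""
    Y = [phi] * d + [0.0] * (N + 1 - d)   # past values are prescribed
    Y[N] = xi
    for i in range(N - 1, d - 1, -1):
        # backward Riemann step: Y(t) = Y(t + h) + h * L * y(t - delta)
        Y[i] = Y[i + 1] + h * L_const * y[i - d]
    return Y

y = [phi] * d + [0.0] * (N + 1 - d)       # initial guess
diffs = []
for _ in range(8):
    Y = apply_I(y)
    diffs.append(max(abs(a - b) for a, b in zip(Y, y)))
    y = Y
# successive differences shrink at least geometrically (factor ~ L*(T-delta))
assert diffs[-1] < 1e-2 * diffs[1]
```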
###### Remark 3.2.
One may note that, in Theorem 3.1, the time horizon $T$ needs to be
sufficiently small. Next, we introduce the second method to study (3.2), where
the existence and uniqueness of (3.2) hold for arbitrary time horizon $T$.
### 3.2 The second method
In this subsection, we introduce another method to prove the solvability of
BSDE (3.2). It should be pointed out that this method is more convenient than
the first one. However, the price is that we must strengthen
the condition on the coefficient $f$ with respect to $z$.
* (H3)
The generator $f(t,x,y,z):[0,T]\times\mathbb{R}^{3}\rightarrow\mathbb{R}$ is a
$C_{pol}^{0,1}$-continuous function. Moreover, there exists $L\geqslant 0$
such that $f$ satisfies the following condition:
$\begin{array}[]{ll}\displaystyle|f(t,x,y,z)-f(t,x,y^{\prime},z^{\prime})|^{2}\leqslant
L\big{(}|y-y^{\prime}|^{2}+|t-\delta|^{2H-1}|z-z^{\prime}|^{2}\big{)},\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\qquad\qquad\qquad\forall
t\in[0,T],x,y,y^{\prime},z,z^{\prime}\in\mathbb{R}.\end{array}$ (3.10)
We present the following result, which will be used frequently in the
sequel. For a detailed proof of the following lemma, the reader
may refer to Lemma 3.1 of Wen and Shi [24].
###### Lemma 3.3.
Suppose $h$ is a $C^{1}_{pol}(\mathbb{R})$-function and $f$ is a
$C_{pol}^{0,1}([0,T]\times\mathbb{R})$-function. Then BSDE
$Y_{t}=h(\eta_{T})+\int_{t}^{T}f(s,\eta_{s})ds-\int_{t}^{T}Z_{s}dB_{s}^{H},$
has a unique solution in
$\widetilde{\mathcal{V}}_{[0,T]}\times\widetilde{\mathcal{V}}^{H}_{[0,T]}$.
Moreover,
$\begin{array}[]{ll}\displaystyle\mathbb{E}\left(e^{\beta
t}|Y_{t}|^{2}+\frac{\beta}{2}\int_{t}^{T}e^{\beta
s}|Y_{s}|^{2}ds+\frac{2}{M}\int_{t}^{T}e^{\beta
s}s^{2H-1}|Z_{s}|^{2}ds\right)\\\ \vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\mathbb{E}\left(e^{\beta
T}|h(\eta_{T})|^{2}+\frac{2}{\beta}\int_{t}^{T}e^{\beta
s}|f(s,\eta_{s})|^{2}ds\right),\end{array}$ (3.11)
where $M,\beta>0$ are constants.
###### Theorem 3.4.
Under (H1) and (H3), for a small time delay $\delta$, (3.2) has a unique
solution.
###### Proof.
First, similar to the preceding method, for a given pair
$(y_{\cdot},z_{\cdot})\in\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$,
we consider the following BSDE:
$\begin{cases}-dY_{t}=g(t,\eta_{t})dt-Z_{t}dB_{t}^{H},\qquad\qquad 0\leqslant
t\leqslant T;\\\ Y_{T}=\xi,\ \ Y_{t}=\varphi(t),\ \
Z_{t}=\psi(t),\qquad-\delta\leqslant t<0,\end{cases}$ (3.12)
where
$g(t,\eta_{t})=f(t,\eta_{t},y_{t-\delta},z_{t-\delta}),\quad~{}0\leqslant
t\leqslant T.$
Note that $(y_{\cdot},z_{\cdot})$ and $\delta$ are given. From Lemma 3.3, we
see that BSDE (3.12) has a unique solution $(Y_{\cdot},Z_{\cdot})$. Then, we
can define a mapping
$I:\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}\longrightarrow\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$
such that $I[(y_{\cdot},z_{\cdot})]=(Y_{\cdot},Z_{\cdot})$. In the following,
we use the second method to show that $I$ is a contraction mapping on
$\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$.
For two arbitrary pairs $(y_{\cdot},z_{\cdot})$ and
$(y^{\prime}_{\cdot},z^{\prime}_{\cdot})$ in
$\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$,
we let
$I[(y_{\cdot},z_{\cdot})]=(Y_{\cdot},Z_{\cdot}),\ \ \
I[(y^{\prime}_{\cdot},z^{\prime}_{\cdot})]=(Y^{\prime}_{\cdot},Z^{\prime}_{\cdot}).$
Define
$\hat{y}_{\cdot}\triangleq
y_{\cdot}-y^{\prime}_{\cdot},\quad~{}\hat{z}_{\cdot}\triangleq
z_{\cdot}-z^{\prime}_{\cdot},\quad~{}\hat{Y}_{\cdot}\triangleq
Y_{\cdot}-Y^{\prime}_{\cdot},\quad~{}\hat{Z}_{\cdot}\triangleq
Z_{\cdot}-Z^{\prime}_{\cdot}.$
From the estimate (3.11), we have
$\begin{array}[]{ll}\displaystyle\mathbb{E}\int_{0}^{T}e^{\beta
s}\left(\frac{\beta}{2}|\hat{Y}_{s}|^{2}+\frac{2}{M}s^{2H-1}|\hat{Z}_{s}|^{2}\right)ds\\\
\vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\frac{2}{\beta}\mathbb{E}\int_{0}^{T}e^{\beta
s}\big{|}f(s,\eta_{s},y_{s-\delta},z_{s-\delta})-f(s,\eta_{s},y^{\prime}_{s-\delta},z^{\prime}_{s-\delta})\big{|}^{2}ds.\end{array}$
Then, from (H3) and Fubini’s Theorem, we obtain
$\begin{array}[]{ll}\displaystyle\mathbb{E}\int_{0}^{T}e^{\beta
s}\left(\frac{\beta}{2}|\hat{Y}_{s}|^{2}+\frac{2}{M}s^{2H-1}|\hat{Z}_{s}|^{2}\right)ds\\\
\vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\frac{2L}{\beta}\mathbb{E}\int_{0}^{T}e^{\beta
s}\left(|\hat{y}_{s-\delta}|^{2}+|s-\delta|^{2H-1}|\hat{z}_{s-\delta}|^{2}\right)ds\\\
\vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\frac{2Le^{\beta\delta}}{\beta}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds.\end{array}$
Equivalently,
$\mathbb{E}\int_{0}^{T}e^{\beta
s}\bigg{(}\frac{M\beta}{4}|\hat{Y}_{s}|^{2}+s^{2H-1}|\hat{Z}_{s}|^{2}\bigg{)}ds\leqslant\frac{LMe^{\beta\delta}}{\beta}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds.$
Therefore, by choosing $\delta=\frac{1}{\beta}$ with $\beta=2LMe+\frac{4}{M}$,
we have
$\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\bigg{(}|\hat{Y}_{s}|^{2}+|s|^{2H-1}|\hat{Z}_{s}|^{2}\bigg{)}ds\leqslant\frac{1}{2}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}\left(|\hat{y}_{s}|^{2}+|s|^{2H-1}|\hat{z}_{s}|^{2}\right)ds.$
Hence $I$ is a contraction mapping on
$\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}^{H}_{[-\delta,T]}$,
which implies (3.2) admits a unique solution. ∎
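The parameter choice at the end of the proof is elementary arithmetic and can be spot-checked numerically: with $\delta=\frac{1}{\beta}$ and $\beta=2LMe+\frac{4}{M}$, the factor $\frac{LMe^{\beta\delta}}{\beta}=\frac{LMe}{\beta}$ on the right-hand side is strictly below $\frac{1}{2}$, while $\frac{M\beta}{4}\geqslant 1$, which together give the $\frac{1}{2}$-contraction. The sample values of $L$ and $M$ below are arbitrary (with $M>2$, as in the proof).

```python
import math

# Check that beta = 2*L*M*e + 4/M with delta = 1/beta yields the contraction:
#   L*M*exp(beta*delta)/beta = L*M*e/beta < 1/2   and   M*beta/4 >= 1.
for L in (0.5, 1.0, 5.0):
    for M in (2.1, 3.0, 10.0):
        beta = 2 * L * M * math.e + 4 / M
        delta = 1 / beta
        factor = L * M * math.exp(beta * delta) / beta   # equals L*M*e/beta
        assert factor < 0.5          # contraction factor on the norm
        assert M * beta / 4 >= 1.0   # coefficient in front of |Y_hat|^2
```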
###### Remark 3.5.
We now compare the two methods above. It is easy to see that (H2) is
weaker than (H3), so from the point of view of the assumptions, the first
method is preferable. On the other hand, thanks to its concise proof
and the arbitrary time horizon $T$ it allows, the second method has its own
advantages.
Now we return to the general equation (1.1), for which the following
assumption is needed.
* (H4)
The generator
$f(t,x,y,z,y_{\delta},z_{\delta}):[0,T]\times\mathbb{R}^{5}\rightarrow\mathbb{R}$
is a $C_{pol}^{0,1}$-continuous function. Moreover, there exists $L\geqslant
0$ such that $f$ satisfies the following condition: for every
$t\in[0,T],x,y,y^{\prime},z,z^{\prime},$
$y_{\delta},y^{\prime}_{\delta},z_{\delta},z^{\prime}_{\delta}\in\mathbb{R},$
$\begin{array}[]{ll}\displaystyle|f(t,x,y,z,y_{\delta},z_{\delta})-f(t,x,y^{\prime},z^{\prime},y^{\prime}_{\delta},z^{\prime}_{\delta})|^{2}\\\
\vskip 3.0pt plus 1.0pt minus 1.0pt\cr\displaystyle\leqslant
L\big{(}|y-y^{\prime}|^{2}+t^{2H-1}|z-z^{\prime}|^{2}+|y_{\delta}-y_{\delta}^{\prime}|^{2}+|t-\delta|^{2H-1}|z_{\delta}-z^{\prime}_{\delta}|^{2}\big{)},\end{array}$
(3.13)
We have the following existence and uniqueness result for the general BSDE
(1.1).
###### Theorem 3.6.
Under (H1) and (H4), for a small time delay $\delta$, BSDE (1.1) admits a
unique solution
$(Y_{\cdot},Z_{\cdot})\in\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[-\delta,T]}^{H}$.
###### Remark 3.7.
Since the proof of Theorem 3.6 is essentially the same as that of the second
method above, we only state the result without a detailed proof. Furthermore,
similar to the first method in the preceding subsection, under (H1) and the
related Lipschitz condition (H2), if the time horizon $T$ is sufficiently
small, BSDE (1.1) also has a unique solution.
## 4 Comparison theorem
In view of its wide applications to BSDEs, a comparison theorem for fractional
BSDEs with delayed generators is established in this section. Specifically, we
study a comparison theorem for the solutions of the following type of
fractional BSDE with delayed generator:
$\begin{cases}-dY(t)=f(t,\eta(t),Y(t),Z(t),Y(t-\delta))dt-Z(t)dB_{t}^{H},\qquad
0\leqslant t\leqslant T;\\\ Y(T)=h(\eta_{T}),\qquad
Y(t)=\varphi(t),\qquad-\delta\leqslant t<0.\end{cases}$ (4.1)
From Theorem 3.6, under (H1) and (H4), for a small time delay $\delta$, the
above equation has a unique solution in
$\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[0,T]}^{H}$.
###### Theorem 4.1.
For $i=1,2$, suppose $h_{i}$ and $\varphi_{i}$ satisfy (H1),
$f_{i}(t,x,y,z,y_{\delta})$ and $\partial_{y}f_{i}(t,x,y,z,y_{\delta})$
satisfy (H4) for every $(t,x,y,z,y_{\delta})\in[0,T]\times\mathbb{R}^{4}$.
Moreover, assume $f_{1}$ is increasing in $y_{\delta}$, i.e.,
$f_{1}(t,x,y,z,y_{\delta})\leqslant f_{1}(t,x,y,z,y_{\delta}^{\prime})$ when
$y_{\delta}\leqslant y_{\delta}^{\prime}$. Then, if
$\varphi_{1}(t)\leqslant\varphi_{2}(t)$, $t\in[-\delta,0]$, and
$h_{1}(x)\leqslant h_{2}(x),\ \ f_{1}(t,x,y,z,y_{\delta})\leqslant
f_{2}(t,x,y,z,y_{\delta}),\ \
(t,x,y,z,y_{\delta})\in[0,T]\times\mathbb{R}^{4},$
one has
$Y_{1}(t)\leqslant Y_{2}(t),\ a.e.,\ a.s.$
###### Proof.
Let $\widetilde{Y}_{0}(\cdot)=Y_{2}(\cdot)$ and consider the following BSDE:
$\begin{cases}-d\widetilde{Y}_{1}(t)=f_{1}(t,\eta_{t},\widetilde{Y}_{1}(t),\widetilde{Z}_{1}(t),\widetilde{Y}_{0}(t-\delta))dt-\widetilde{Z}_{1}(t)dB_{t}^{H},\quad
0\leqslant t\leqslant T;\\\
\widetilde{Y}_{1}(T)=h_{1}(\eta_{T}),\quad\widetilde{Y}_{1}(t)=\varphi_{1}(t),\quad-\delta\leqslant
t<0.\end{cases}$
From Theorem 3.6, the above equation has a unique solution
$(\widetilde{Y}_{1}(\cdot),\widetilde{Z}_{1}(\cdot))\in\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[0,T]}^{H}$.
Since
$\begin{cases}f_{1}(t,x,y,z,\widetilde{Y}_{0}(t-\delta))\leqslant
f_{2}(t,x,y,z,\widetilde{Y}_{0}(t-\delta)),\ \
(t,x,y,z)\in[0,T]\times\mathbb{R}^{3};\\\ h_{1}(x)\leqslant h_{2}(x),\ \
x\in\mathbb{R},\end{cases}$
from Theorem 12.3 of Hu et al. [15], we have
$\widetilde{Y}_{1}(t)\leqslant\widetilde{Y}_{0}(t)=Y_{2}(t),\ \ a.e.,\ a.s.$
Next, we consider the following BSDE:
$\begin{cases}-d\widetilde{Y}_{2}(t)=f_{1}(t,\eta_{t},\widetilde{Y}_{2}(t),\widetilde{Z}_{2}(t),\widetilde{Y}_{1}(t-\delta))dt-\widetilde{Z}_{2}(t)dB_{t}^{H},\quad
0\leqslant t\leqslant T;\\\
\widetilde{Y}_{2}(T)=h_{1}(\eta_{T}),\quad\widetilde{Y}_{2}(t)=\varphi_{1}(t),\quad-\delta\leqslant
t<0.\end{cases}$
and denote by
$(\widetilde{Y}_{2}(\cdot),\widetilde{Z}_{2}(\cdot))\in\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[0,T]}^{H}$
the unique solution of the above equation. Then, since $f_{1}(t,x,y,z,\cdot)$
is increasing, we have for all $(t,x,y,z)\in[0,T]\times\mathbb{R}^{3}$,
$f_{1}(t,x,y,z,\widetilde{Y}_{1}(t-\delta))\leqslant
f_{1}(t,x,y,z,\widetilde{Y}_{0}(t-\delta)).$
Hence, similarly to the above discussion,
$\widetilde{Y}_{2}(t)\leqslant\widetilde{Y}_{1}(t),\ \ a.e.,\ a.s.$
Then, by induction, one can construct a sequence
$\\{(\widetilde{Y}_{n}(\cdot),\widetilde{Z}_{n}(\cdot))\\}_{n\geqslant
1}\subseteq\widetilde{\mathcal{V}}_{[-\delta,T]}\times\widetilde{\mathcal{V}}_{[0,T]}^{H}$
such that
$\begin{cases}-d\widetilde{Y}_{n}(t)=f_{1}(t,\eta_{t},\widetilde{Y}_{n}(t),\widetilde{Z}_{n}(t),\widetilde{Y}_{n-1}(t-\delta))dt-\widetilde{Z}_{n}(t)dB_{t}^{H},\quad
0\leqslant t\leqslant T;\\\
\widetilde{Y}_{n}(T)=h_{1}(\eta_{T}),\quad\widetilde{Y}_{n}(t)=\varphi_{1}(t),\quad-\delta\leqslant
t<0.\end{cases}$
Similarly, we obtain
$Y_{2}(t)=\widetilde{Y}_{0}(t)\geqslant\widetilde{Y}_{1}(t)\geqslant\widetilde{Y}_{2}(t)\geqslant\cdots\geqslant\widetilde{Y}_{n}(t)\geqslant\cdots,\
a.e.,\ a.s.$
In the following, we shall show
$\\{(\widetilde{Y}_{n}(\cdot),\widetilde{Z}_{n}(\cdot))\\}_{n\geqslant 1}$ is
a Cauchy sequence.
Denote $\hat{Y}_{n}(t)=\widetilde{Y}_{n}(t)-\widetilde{Y}_{n-1}(t)$ and
$\hat{Z}_{n}(t)=\widetilde{Z}_{n}(t)-\widetilde{Z}_{n-1}(t),\ n\geqslant 4$.
From (3.11) and the assumption (H4), we have
$\begin{split}&\mathbb{E}\left(\frac{\beta}{2}\int_{0}^{T}e^{\beta
s}|\hat{Y}_{n}(s)|^{2}ds+\frac{2}{M}\int_{0}^{T}s^{2H-1}e^{\beta
s}|\hat{Z}_{n}(s)|^{2}ds\right)\\\
\leqslant&\frac{2}{\beta}\mathbb{E}\bigg{(}\int_{0}^{T}e^{\beta
s}\big{|}f_{1}(s,\eta_{s},\widetilde{Y}_{n}(s),\widetilde{Z}_{n}(s),\widetilde{Y}_{n-1}(s-\delta))\\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \
-f_{1}(s,\eta_{s},\widetilde{Y}_{n-1}(s),\widetilde{Z}_{n-1}(s),\widetilde{Y}_{n-2}(s-\delta))\big{|}^{2}ds\bigg{)}\\\
\leqslant&\frac{2L}{\beta}\mathbb{E}\int_{0}^{T}e^{\beta
s}\big{(}|\hat{Y}_{n}(s)|^{2}+s^{2H-1}|\hat{Z}_{n}(s)|^{2}\big{)}ds+\frac{2Le^{\beta\delta}}{\beta}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}|\hat{Y}_{n-1}(s)|^{2}ds.\end{split}$
Then, by choosing $\delta=\frac{1}{\beta}$ with $\beta=8LMe+\frac{4}{M}$, one
has
$\begin{array}[]{ll}\displaystyle\mathbb{E}\int_{0}^{T}e^{\beta
s}\big{(}|\hat{Y}_{n}(s)|^{2}+s^{2H-1}|\hat{Z}_{n}(s)|^{2}\big{)}ds\\\ \vskip
3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\frac{1}{4}\mathbb{E}\int_{0}^{T}e^{\beta
s}\big{(}|\hat{Y}_{n}(s)|^{2}+s^{2H-1}|\hat{Z}_{n}(s)|^{2}\big{)}ds+\frac{1}{4}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}|\hat{Y}_{n-1}(s)|^{2}ds.\end{array}$
Hence
$\begin{array}[]{ll}\displaystyle\mathbb{E}\int_{0}^{T}e^{\beta
s}\big{(}|\hat{Y}_{n}(s)|^{2}+s^{2H-1}|\hat{Z}_{n}(s)|^{2}\big{)}ds\\\ \vskip
3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\frac{1}{3}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}|\hat{Y}_{n-1}(s)|^{2}ds\\\ \vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant\frac{1}{3}\bigg{(}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}|\hat{Y}_{n-1}(s)|^{2}ds+\mathbb{E}\int_{0}^{T}e^{\beta
s}s^{2H-1}|\hat{Z}_{n-1}(s)|^{2}ds\bigg{)}.\end{array}$
Therefore
$\begin{array}[]{ll}\displaystyle\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}|\hat{Y}_{n}(s)|^{2}ds+\mathbb{E}\int_{0}^{T}e^{\beta
s}s^{2H-1}|\hat{Z}_{n}(s)|^{2}ds\\\ \vskip 3.0pt plus 1.0pt minus
1.0pt\cr\displaystyle\leqslant(\frac{1}{3})^{n-4}\bigg{(}\mathbb{E}\int_{-\delta}^{T}e^{\beta
s}|\hat{Y}_{4}(s)|^{2}ds+\mathbb{E}\int_{0}^{T}e^{\beta
s}s^{2H-1}|\hat{Z}_{4}(s)|^{2}ds\bigg{)}.\end{array}$
It follows that $(\widetilde{Y}_{n}(\cdot))_{n\geqslant 4}$ is a Cauchy sequence in
the Banach space $\widetilde{\mathcal{V}}_{[-\delta,T]}$, and
$(\widetilde{Z}_{n}(\cdot))_{n\geqslant 4}$ is a Cauchy sequence in the Banach space
$\widetilde{\mathcal{V}}_{[0,T]}^{H}$. Denote their limits by
$\widetilde{Y}_{\cdot}$ and $\widetilde{Z}_{\cdot}$, respectively. Now from
the existence and uniqueness theorem (Theorem 3.6), we obtain
$\widetilde{Y}(t)=Y_{1}(t),\ \ a.e.,\ a.s.$
Then, we get
$Y_{1}(t)\leqslant Y_{2}(t),\ \ a.e.,\ a.s.$
Therefore, the desired result is obtained. ∎
###### Remark 4.2.
It should be pointed out that the above results rely on the Hurst parameter $H$
being greater than $1/2$. Due to technical difficulties, the theory of
fractional BSDEs with $H<1/2$ remains an open problem. We hope to give
some related results for the case $H<\frac{1}{2}$ in the near
future.
###### Example 4.3.
Consider the following two BSDEs:
$\begin{cases}-dY_{1}(t)=\big{[}Y_{1}(t)+t^{2H-1}Z_{1}(t)+Y_{1}(t-\delta)-1\big{]}dt-
Z_{1}(t)dB_{t}^{H},\quad 0\leqslant t\leqslant T;\\\
Y_{1}(T)=h_{1}(\eta_{T}),\quad Y_{1}(t)=\varphi_{1}(t),\quad-\delta\leqslant
t<0,\end{cases}$
and
$\begin{cases}-dY_{2}(t)=\big{[}Y_{2}(t)+t^{2H-1}Z_{2}(t)+Y_{2}(t-\delta)+1\big{]}dt-
Z_{2}(t)dB_{t}^{H},\quad 0\leqslant t\leqslant T;\\\
Y_{2}(T)=h_{2}(\eta_{T}),\quad Y_{2}(t)=\varphi_{2}(t),\quad-\delta\leqslant
t<0,\end{cases}$
where for $i=1,2$, $h_{i}$ and $\varphi_{i}$ satisfy (H1) with
$h_{1}(x)\leqslant h_{2}(x)$ and $\varphi_{1}(t)\leqslant\varphi_{2}(t)$ for
every $t\in[-\delta,0],\ x\in\mathbb{R}$. Then, according to Theorem 4.1, we
get $Y_{1}(t)\leqslant Y_{2}(t),\ \ a.e.,\ a.s.$
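The comparison mechanism can be illustrated on a deterministic toy version of Example 4.3 in which the noise and $Z$ are switched off and the Lipschitz constants are kept small; the equations, coefficients, and data below are invented for the illustration and are not the example's actual BSDEs. We solve $-Y_{i}'(t)=0.2\,Y_{i}(t)+0.2\,Y_{i}(t-\delta)+c_{i}$ with $c_{1}=-1\leqslant c_{2}=+1$ and ordered terminal and past data, using the same Picard iteration over the delayed argument as in the proof of Theorem 4.1, and check $Y_{1}\leqslant Y_{2}$ on the whole grid.

```python
# Deterministic toy analog of Example 4.3 (dB^H noise off, Z == 0).
T, delta = 1.0, 0.2
N = 1200
h = (T + delta) / N          # grid chosen so that delta/h is an integer
d = round(delta / h)

def solve(c, xi, phi, n_iter=30):
    """Picard iteration over the delayed argument, as in Theorem 4.1."""
    y = [phi] * d + [0.0] * (N + 1 - d)   # initial guess; past part fixed
    for _ in range(n_iter):
        Y = [phi] * d + [0.0] * (N + 1 - d)
        Y[N] = xi
        for i in range(N - 1, d - 1, -1):
            # explicit backward Euler step for
            #   -Y'(t) = 0.2*Y(t) + 0.2*y(t - delta) + c
            Y[i] = Y[i + 1] + h * (0.2 * Y[i + 1] + 0.2 * y[i - d] + c)
        y = Y
    return y

# ordered data: c1 <= c2, xi1 <= xi2, phi1 <= phi2
Y1 = solve(-1.0, xi=0.5, phi=0.0)
Y2 = solve(+1.0, xi=1.0, phi=0.2)
assert all(a <= b + 1e-12 for a, b in zip(Y1, Y2))   # comparison holds
```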
## Conclusion
In this paper, we studied fractional BSDEs with delayed generators, in
particular the case of Hurst parameter $H>\frac{1}{2}$. We
proposed two different methods to prove the existence and uniqueness of
solutions to such BSDEs, and we also proved a comparison theorem for this type
of equation. It should be pointed out that the results obtained in this article
extend part of the main results of Delong and Imkeller [7, 8] to the fractional
setting. In future research, we would like to focus on applications of this
equation in finance. The theory of the case where the Hurst parameter
$H<\frac{1}{2}$ is another goal.
## References
* [1] C. Bender, Backward SDEs driven by Gaussian processes, Stochastic Process. Appl. 124 (2014) 2892–2916.
* [2] C. Bender, L. Viitasaari, A general non-existence result for linear BSDEs driven by Gaussian processes, Stochastic Process. Appl. 127 (2017) 1204–1233.
* [3] R. Buckdahn, S. Jing, Mean-field SDE driven by a fractional Brownian motion and related stochastic control problem, SIAM J. Control Optim. 55(3) (2017) 1500–1533.
* [4] K.J. Borkowska, Generalized BSDEs driven by fractional Brownian motion, Statist. Probab. Lett. 83 (2013) 805–811.
* [5] L. Chen, Z. Wu, Maximum principle for the stochastic optimal control problem with delay and application, Automatica 46 (2010) 1074–1080.
* [6] L. Delong, Applications of time-delayed backward stochastic differential equations to pricing, hedging and portfolio management in insurance and finance, Applicationes Mathematicae 39(4) (2012) 463–488.
* [7] L. Delong, P. Imkeller, Backward stochastic differential equations with time delayed generators results and counter examples, Ann. Appl. Probab. 20 (2010) 1512–1536.
* [8] L. Delong, P. Imkeller, On Malliavin’s differentiability of BSDE with time delayed generators driven by Brownian motions and Poisson random measures, Stochastic Process. Appl. 120 (2010) 1748–1775.
* [9] L. Decreusefond, A.S. Üstünel, Stochastic analysis of the fractional Brownian motion, Potential Anal. 10 (1999) 177–214.
* [10] Soukaina Douissi, J. Wen, Y. Shi, Mean-field anticipated BSDEs driven by fractional Brownian motion and related stochastic control problem, Appl. Math. Comput. 355 (2019) 282–298.
* [11] N. El Karoui, S. Peng, M.C. Quenez, Backward stochastic differential equations in finance, Math. Finance 7 (1997) 1–71.
* [12] J. Huang, J. Shi, Maximum principle for optimal control of fully coupled forward-backward stochastic differential delayed equations, ESAIM: COCV. 18(4) (2012) 1073–1096.
* [13] Y. Hu, Integral transformations and anticipative calculus for fractional Brownian motions, Mem. Amer. Math. Soc. 175(825) (2005).
* [14] Y. Hu, M. Jolis, S. Tindel, On Stratonovich and Skorohod stochastic calculus for Gaussian processes, Ann. Probab. 41 (2013) 1656–1693.
* [15] Y. Hu, D. Ocone, J. Song, Some results on backward stochastic differential equations driven by fractional Brownian motions, Stoch. Anal. Appl. Finance (2012) 225–242.
* [16] Y. Hu, D. Nualart, X. Song, Malliavin calculus for backward stochastic differential equations and application to numerical solutions, Ann. Appl. Probab. 21(6) (2011) 2379–2423.
* [17] Y. Hu, S. Peng, Backward stochastic differential equation driven by fractional Brownian motion, SIAM J. Control Optim. 48 (2009) 1675–1700.
* [18] Y. Hu, X. Li, J. Wen, Anticipated backward stochastic differential equations with quadratic growth, J. Differ. Equations (2020), accepted.
* [19] S. Jing, Nonlinear fractional stochastic PDEs and BDSDEs with Hurst parameter in (1/2, 1), Systems Control Letters 61 (2012) 655–665.
* [20] L. Maticiuc, T. Nie, Fractional backward stochastic differential equations and fractional backward variational inequalities, J. Theory Probab. 28 (2015) 337–395.
* [21] D. Nualart, The Malliavin Calculus and Related Topics, second ed, Springer, 2006.
* [22] E. Pardoux, S. Peng, Adapted solution of a backward stochastic differential equation, Systems Control Lett. 4 (1990) 55–61.
* [23] E. Pardoux, S. Peng, Backward SDEs and quasi-linear PDEs, Lecture Notes in Control and Inform Sci. 176 (1992) 200–217.
* [24] J. Wen, Y. Shi, Anticipative backward stochastic differential equations driven by fractional Brownian motion, Statist. Probab. Lett. 122 (2017) 118–127.
* [25] J. Wen, Y. Shi, Solvability of anticipated backward stochastic Volterra integral equations, Statist. Probab. Lett. 156 (2020) 108599.
* [26] J. Yong, X. Zhou, Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer-Verlag, New York, 1999.
Ryan Kellermann
# Inclusive semi-leptonic decays of charmed mesons with Möbius domain wall
fermions
Alessandro Barone, Shoji Hashimoto, Andreas Jüttner, Takashi Kaneko
###### Abstract
We perform a non-perturbative lattice calculation of the decay rates for
inclusive semi-leptonic decays of charmed mesons. In view of the long-standing
tension in the determination of the CKM matrix elements $|V_{ub}|$ and
$|V_{cb}|$ from exclusive and inclusive processes, the use of lattice QCD
has recently been extended towards the description of inclusive decays.
Since the determination of hadronic input parameters from QCD-based methods
requires independent tests, we focus on the charm sector, which not only
offers experimental data but also well-determined CKM parameters. We carry
out a pilot lattice simulation for the $D_{s}\rightarrow X_{s}\ell\nu$ and
explore the improvement of existing techniques. Our simulation employs Möbius
domain-wall charm and strange quarks whose masses are tuned to be
approximately physical and we cover the whole kinematical region. We report on
our progress in analyzing different sources of systematic effects, such as the
extrapolation of the kernel function chosen for the Chebyshev approximation as
well as the influence on the analysis in the region close to the kinematical
limit.
## 1 Introduction
In recent years, experiments have revealed a puzzling tension in B-decays,
namely, in the determination of the CKM parameters $|V_{ub}|$ and $|V_{cb}|$
from exclusive and inclusive methods [1]. This discrepancy provides an
opportunity for theorists to improve their understanding of these decays.
Furthermore, the search for new physics requires precise theoretical
predictions from the Standard Model. In view of these points, ideas
to extend the application of lattice QCD towards the description of inclusive
decays have recently been proposed [2, 3, 4, 5, 6]. These approaches utilize either the
Chebyshev approximation or the Backus-Gilbert approach to obtain the energy
integral of the hadronic tensor, which defines the inclusive decay rates.
In this paper, we report on our progress in applying this method towards a
precise calculation of the inclusive semi-leptonic decay of the $D_{s}$ meson,
with a focus on presenting a method to control the systematic error that
appears in the approximation of the kernel function in the energy integral.
First, we give a brief overview of the theoretical framework of our analysis
and present the formulas used in the Chebyshev approximation. Second, we
present our preliminary results, as well as a first, admittedly conservative,
way to estimate the error in the limits that have to be taken to properly
estimate the inclusive decay rate.
## 2 Formulation of the Chebyshev approach
We start with the definition of the total decay rate [2]
$\Gamma=\frac{G_{F}^{2}|V_{cs}|^{2}}{24\pi^{3}}\int_{0}^{\boldsymbol{q^{2}_{\text{max}}}}d\boldsymbol{q}^{2}\sqrt{\boldsymbol{q}^{2}}\bar{X}(\boldsymbol{q}^{2})\,,$
(1)
where we introduce the short-hand notation for the energy integral
$\bar{X}=\int_{\omega_{\text{min}}}^{\omega_{\text{max}}}d\omega\,K_{\mu\nu}(\boldsymbol{q},\omega)W^{\mu\nu}\,.$
(2)
Here, $K_{\mu\nu}(\boldsymbol{q},\omega)$ is a kinematical factor given by the
leptonic tensor and $W^{\mu\nu}$ is the hadronic tensor given by
$W^{\mu\nu}(p,q)=\sum_{X_{s}}(2\pi)^{3}\delta^{(4)}(p-q-r)\frac{1}{2E_{D_{s}}}\braket{D_{s}(\boldsymbol{p})}{\tilde{J}^{\mu\dagger}(-\boldsymbol{q})}{X_{s}(\boldsymbol{r})}\braket{X_{s}(\boldsymbol{r})}{\tilde{J}^{\nu}(\boldsymbol{q})}{D_{s}(\boldsymbol{p})}\,,$
(3)
where we sum over all possible final states $X_{s}$ to represent the inclusive
decay and $\tilde{J}^{\nu}(\boldsymbol{q})$ is the Fourier transform of the
inserted current, defined through
$\tilde{J}^{\nu}(\boldsymbol{q})=\sum_{\boldsymbol{x}}e^{-i\boldsymbol{q}\cdot\boldsymbol{x}}J^{\nu}(x)$.
The energy integral in (2) can be rewritten as
$\bar{X}=\braket{D_{s}(\boldsymbol{p})}{\tilde{J}^{\mu\dagger}(-\boldsymbol{q})K_{\mu\nu}(\boldsymbol{q},\hat{H})\tilde{J}^{\nu}(\boldsymbol{q})}{D_{s}(\boldsymbol{p})}\,,$
(4)
where all intermediate states may contribute between the currents. On the
lattice, we calculate four-point correlation functions, which can be used to
extract the matrix element
$C_{JJ}^{\mu\nu}(\boldsymbol{q},t)=\frac{1}{V}\frac{1}{2m_{D_{s}}}\braket{D_{s}}{\tilde{J}_{\mu}^{\dagger}(-\boldsymbol{q})e^{-\hat{H}t}\tilde{J}_{\nu}(\boldsymbol{q})}{D_{s}}\,.$
(5)
We choose the rest frame of the initial $D_{s}$ meson, i.e.
$\boldsymbol{p}=0$.
By comparing (4) and (5), we see that we can obtain $\bar{X}$ once an
approximation of $K_{\mu\nu}(\boldsymbol{q},\hat{H})$ in terms of powers of
$e^{-\hat{H}}$ of the form
$\displaystyle
K(\boldsymbol{q},\hat{H})=k_{0}+k_{1}e^{-\hat{H}}+...+k_{N}e^{-N\hat{H}}\,,$
has been constructed, since it yields an approximation of the energy
integral (4)
$\displaystyle\bar{X}\sim\,$ $\displaystyle
k_{0}\underbrace{\braket{D_{s}}{\tilde{J}_{\mu}^{\dagger}(-\boldsymbol{q})\tilde{J}_{\nu}(\boldsymbol{q})}{D_{s}}}_{C_{\mu\nu}^{JJ}(0)}+k_{1}\underbrace{\braket{D_{s}}{\tilde{J}_{\mu}^{\dagger}(-\boldsymbol{q})e^{-\hat{H}}\tilde{J}_{\nu}(\boldsymbol{q})}{D_{s}}}_{C_{\mu\nu}^{JJ}(1)}+...$
$\displaystyle+k_{N}\underbrace{\braket{D_{s}}{\tilde{J}_{\mu}^{\dagger}(-\boldsymbol{q})e^{-\hat{H}N}\tilde{J}_{\nu}(\boldsymbol{q})}{D_{s}}}_{C_{\mu\nu}^{JJ}(N)}\,,$
where the matrix elements on the right hand side are determined by the lattice
data. In our case, we employ the shifted Chebyshev polynomials
$T_{j}^{*}(e^{-\omega})$ to create an approximation of $K(\hat{H})$ in the
integration range $[\omega_{0},\infty]$ with
$0\leq\omega_{0}<\omega_{\text{min}}$.
In the following, we illustrate the behavior of the Chebyshev approximation
for different choices of the kernel function, as shown in Figure 1.
First, we consider a simple Heaviside function for the kernel
$K(\boldsymbol{q},\omega)$,
$K(\omega)=\theta(m_{D_{s}}-\sqrt{\boldsymbol{q}^{2}}-\omega)\,,$ (6)
which implements the upper limit of the $\omega$ integral. The approximation
results are shown in Figure 1(a): simply increasing the
number of polynomials in the Chebyshev approximation from 5 to 20 increases
the oscillations of the approximation. In order to stabilize the
approximation, we smear the kernel function by introducing a smearing
parameter $\sigma$, i.e. we employ a Sigmoid function of the form
$\theta_{\sigma}(m_{D_{s}}-\sqrt{\boldsymbol{q}^{2}}-\omega)\equiv\frac{1}{1+e^{-\frac{m_{D_{s}}-\sqrt{\boldsymbol{q}^{2}}-\omega}{\sigma}}}\,,$
(7)
and the approximation result is shown in Figure 1(b). While this approach
allows for a "smoother" approximation, it now requires us to take the limit of
$\sigma\rightarrow 0$ in addition to $N\rightarrow\infty$ to obtain a proper
estimate. This source of systematic error is the focus of this work.
(a) Heaviside function.
(b) Sigmoid function.
Figure 1: Chebyshev approximation for two choices of the kernel
function. The blue solid line represents the Heaviside
function on both sides. On the left hand side, the dashed lines represent a
direct approximation of the Heaviside function. On the right hand side, the
colored solid lines show the Sigmoid function defined in Eq. (7), and the
dashed lines show their approximation. We use two choices of $N$ in both plots
and the smearing used on the right hand side plot is related to the number of
polynomials via $\sigma=1/N$.
To finalize this section, let us write down how we construct our
approximation. The $\omega$-integral can be approximated by
$\displaystyle\frac{\braket{\psi_{\mu}}{K(\hat{H})}{\psi_{\nu}}}{\braket{\psi_{\mu}}{\psi_{\nu}}}=\frac{c^{*}_{0}}{2}+\sum_{j=1}^{N}c_{j}^{*}\underbrace{\frac{\braket{\psi_{\mu}}{T^{*}_{j}(e^{-\hat{H}})}{\psi_{\nu}}}{\braket{\psi_{\mu}}{\psi_{\nu}}}}_{C(t+2t_{0})/C(2t_{0})}\,,$
(8)
where we define the state $\ket{\psi_{\nu}}\equiv
e^{-\hat{H}t_{0}}\tilde{J}_{\nu}(\boldsymbol{q})\ket{D_{s}(\boldsymbol{0})}$
on which the kernel operator is evaluated. The $T^{*}_{j}(x)$ are the shifted
Chebyshev polynomials, which can be obtained from the standard Chebyshev
polynomials as $T^{*}_{j}(x)\equiv T_{j}(2x-1)$. The first few terms are given
by $T^{*}_{0}(x)=1$, $T^{*}_{1}(x)=2x-1$, $T^{*}_{2}(x)=8x^{2}-8x+1$, and
higher orders are obtained recursively through
$T^{*}_{j}(x)=2(2x-1)T^{*}_{j-1}(x)-T^{*}_{j-2}(x)$. The coefficients
$c_{j}^{*}$ depend on the choice of the lower limit of the $\omega$-integral
$\omega_{0}$ and in the case of $\omega_{0}=0$ are simply given by
$\displaystyle c^{*}_{j}=\frac{2}{\pi}\int_{0}^{\pi}d\theta
K\left(-\ln\frac{1+\cos\theta}{2}\right)\cos(j\theta)\,.$ (9)
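The construction above can be sketched numerically. The following Python snippet is an illustrative sketch, not the collaboration's analysis code: the threshold and smearing values in the usage example are placeholders. It builds the shifted Chebyshev polynomials via the recursion, evaluates the coefficients $c_j^*$ of Eq. (9) with a simple midpoint rule, and reconstructs a smeared kernel of the form (7).

```python
import math

def sigmoid_kernel(omega, omega_th=1.0, sigma=0.1):
    # Smeared step theta_sigma(omega_th - omega) as in Eq. (7),
    # guarded against floating-point overflow far above threshold.
    z = (omega - omega_th) / sigma
    return 0.0 if z > 700.0 else 1.0 / (1.0 + math.exp(z))

def t_star(j, x):
    # Shifted Chebyshev polynomial T*_j(x) = T_j(2x - 1),
    # evaluated with the three-term recursion.
    if j == 0:
        return 1.0
    t_prev, t = 1.0, 2.0 * x - 1.0
    for _ in range(j - 1):
        t_prev, t = t, 2.0 * (2.0 * x - 1.0) * t - t_prev
    return t

def coeff(j, kernel, n_steps=2000):
    # c*_j = (2/pi) * int_0^pi K(-ln((1 + cos t)/2)) cos(j t) dt, Eq. (9),
    # approximated with a midpoint rule.
    h = math.pi / n_steps
    total = 0.0
    for i in range(n_steps):
        theta = (i + 0.5) * h
        omega = -math.log((1.0 + math.cos(theta)) / 2.0)
        total += kernel(omega) * math.cos(j * theta)
    return 2.0 / math.pi * h * total

def approx_kernel(omega, kernel, n_poly):
    # Truncated Chebyshev series in the variable x = exp(-omega), as in Eq. (8).
    x = math.exp(-omega)
    val = 0.5 * coeff(0, kernel)
    for j in range(1, n_poly + 1):
        val += coeff(j, kernel) * t_star(j, x)
    return val
```

Away from the threshold the truncated series reproduces the smeared kernel closely, while near the threshold the oscillations visible in Figure 1 appear.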
An important property of the Chebyshev polynomials is that the Chebyshev
matrix elements are confined to the interval $[-1,1]$, i.e.
$\displaystyle\left|\frac{\braket{\psi_{\mu}}{T^{*}_{j}(e^{-\hat{H}})}{\psi_{\nu}}}{\braket{\psi_{\mu}}{\psi_{\nu}}}\right|\leq
1\,.$ (10)
This property can be used in two ways. First, we can use it to suppress the
statistical noise for high orders of $j$ in the Chebyshev approximation, where
we expect large cancellations among different orders of $x$. Second, it
allows us to estimate an upper limit on the error, since all Chebyshev matrix
elements are bounded by $\pm 1$ for any $j$.
## 3 Numerical results
This computation is performed on lattice data generated with $2+1$-flavor
Möbius domain-wall fermions (ensemble "$M{\text{-}}ud3{\text{-}}sa$" from [7],
which has $1/a=3.610(9)\,\mathrm{GeV}$). The charm and
strange quarks are simulated at near-physical values, while the up and down
quarks are simulated at a pion mass of
$m_{\pi}\simeq 300\,\mathrm{MeV}$. The lattice volume is
$48^{3}\times 96$ and the forward-scattering matrix elements are calculated
for spatial momenta $\boldsymbol{q}$ of $(0,0,0)$, $(0,0,1)$, $(0,1,1)$ and
$(1,1,1)$ in units of $2\pi/L$. All the data have been generated with Grid [8]
and Hadrons [9] software packages. Some of the fits in the analysis have been
performed using lsqfit [10].
We average over 50 gauge configurations, and the measurement is repeated on 8
different source time slices. For each fixed spatial momentum
$\boldsymbol{q}$ we calculate the four-point correlation function to extract
$C_{JJ}^{\mu\nu}(t,\boldsymbol{q})$ (further details on the lattice
calculation can be found in [11]) and determine the shifted Chebyshev matrix
elements from
$C_{JJ}^{\mu\nu}(t+2t_{0},\boldsymbol{q})/C_{JJ}^{\mu\nu}(2t_{0},\boldsymbol{q})$
as shown in (8) by performing a constrained fit imposing the condition (10).
The $\omega$-integral is then obtained by using the representation (8).
In Figure 2 we show the preliminary results for the energy integral $\bar{X}$
defined in (4), where we decompose $\bar{X}$ into different contributions,
i.e. whether we have vector (VV) or axial-vector (AA) current insertions, as
well as the polarization of the inserted currents, i.e. parallel ($\parallel$)
and perpendicular ($\perp$) to the momentum $\boldsymbol{q}$. Our results are
shown for a choice of $N=10$ and the smearing of the kernel function (7) is
defined through $\sigma=1/N=0.1$. With the available data, $N=10$ is the
highest order that we can achieve with the Chebyshev approximation, since the
statistical noise of the lattice data becomes too large for orders of $N>10$.
In Figure 2, we also include a contribution to $\bar{X}_{VV}^{\parallel}$ from
the exclusive semi-leptonic $D\rightarrow K$ decay, allowing us to surmise
that our results are in the right ballpark.
Figure 2: $\bar{X}$ contributions for different kinematical channels as a
function of $\boldsymbol{q}$. The vertical lines show the value
$\boldsymbol{q}^{2}_{\text{max}}$ for the vector (V) and pseudoscalar (PS)
meson, respectively. The approximation is obtained for $N=10$ Chebyshev
polynomials and the smearing of the kernel function is given by
$\sigma=1/N=0.1$. With the available lattice data, $N=10$ is the upper limit
for the Chebyshev approximation, since the statistical noise of the data
becomes too strong for higher orders.
We comment on the region close to the end of the phase space, i.e. the point
of $\boldsymbol{q}=(1,1,1)$ corresponding to
$\boldsymbol{q}^{2}\approx 0.66\,\mathrm{GeV}^{2}$ shown in
Figure 2, for $X^{\parallel}_{VV}$ and $X^{\parallel}_{AA}$. In this region, a
dominant contribution from the ground state is expected for
$X^{\parallel}_{VV}$, since the excited state energy exceeds the $D_{s}$ meson
mass, while the expected contribution to $X^{\parallel}_{AA}$ should already
be zero, because the lowest energy state (the $s\bar{s}$ vector) has an energy
larger than $m_{D_{s}}$. Figure 2 shows large discrepancies between these
expected values (dashed lines) and our approximation (data points).
## 4 Study of systematic errors
### 4.1 Above the kinematical end-point: $X_{AA}^{\parallel}$
First, we consider the case of $X_{AA}^{\parallel}$. In this case we expect
contributions from the vector meson in the final state. At
$\boldsymbol{q}=(1,1,1)$, the energy of the lowest state is already above the
threshold, so that the expected result is zero. For a finite order of the
polynomials $N=10$ and a non-zero smearing width $\sigma=0.1$, Figure 2 shows
that this is not the case.
To take both limits, $\sigma\rightarrow 0$ and $N\rightarrow\infty$,
simultaneously, we set $\sigma=1/N$ and consider the evolution of our
approximation as a function of $1/N$. The result is shown in Figure 3, where
$N$ is taken to be between 10 and 100. To access the Chebyshev matrix elements
of orders higher than $N=10$, we use the property (10) of the Chebyshev matrix
elements, i.e. the fact that the Chebyshev matrix elements are bounded by $0\pm
1$. This allows us to simply add up the absolute values of the Chebyshev
coefficients for $j>10$ in (8) to obtain a mathematical upper limit of the
error for any given order of $N$. Additionally, we include the result of an
approximation in which we only consider the ground-state contribution. This
estimate is obtained by fitting the lattice data and the extracted ground
state energy is used to calculate the Chebyshev matrix elements up to
arbitrary order. We can see that even for the limited number of polynomials,
say $N=100$, the approximation approaches zero sufficiently fast.
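The upper-limit estimate described above amounts to summing $|c_j^*|$ over the orders that are not constrained by the lattice data. A minimal sketch (the coefficient list in the usage example is purely illustrative, not taken from the analysis):

```python
def truncation_bound(coeffs, j_data_max):
    # Since the Chebyshev matrix elements are bounded by 0 +/- 1 (Eq. (10)),
    # each term in Eq. (8) beyond the order reachable with lattice data
    # contributes at most |c*_j| to the approximation.
    return sum(abs(c) for c in coeffs[j_data_max + 1:])
```

For example, with illustrative coefficients `[0.5, -0.3, 0.2, -0.1]` and data available up to order 1, the bound is $|0.2|+|-0.1|=0.3$.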
Figure 3: Development of approximation results for $X_{AA}^{\parallel}$ at
$\boldsymbol{q}=(1,1,1)$ depending on the number of polynomials $N$ used in
the Chebyshev approximation. We set the smearing $\sigma$ of the kernel
function to be $\sigma=1/N$. The blue triangles show the approximation results
using the available lattice data, while the orange circles are obtained by
only considering the ground state contribution obtained from fitting the
lattice data.
### 4.2 Above the kinematical end-point: $X_{VV}^{\parallel}$
For $X_{VV}^{\parallel}$ we expect contributions from both the pseudoscalar
and vector mesons. We have to take both of these contributions into
consideration to obtain an estimate of the ground-state-only contribution,
which, together with the results obtained from using the inclusive data, are
shown in Figure 4. Here, it is important to note how the error on the
inclusive data is estimated. Taking into account the analytical form of the
approximation given in Eq. (8) and the fact that for higher orders of $N$ the
Chebyshev matrix elements are basically given by $0\pm 1$, we can construct an
error estimate by simply adding up the absolute values of the coefficients
$c_{j}^{*}$ appearing in the approximation. These error estimates are shown in
Figure 4. The figure shows that our error estimate is able to cover the
expected ground-state contribution. Furthermore, the behavior of the ground-state
contribution, i.e. the steady increase of the approximation value shown
in Figure 4, is expected. For $\bar{X}_{VV}^{\parallel}$ at
$\boldsymbol{q}=(1,1,1)$ the range of the energy integral is quite narrow and
dominated by the ground state, so that, depending on the choice of the number
of Chebyshev polynomials $N$, and consequently the smearing $\sigma$, our
approximation increases monotonically towards the true value.
Figure 4: Development of approximation results for $X_{VV}^{\parallel}$ at
$\boldsymbol{q}=(1,1,1)$ depending on the number of polynomials $N$ used in
the Chebyshev approximation. We set the smearing $\sigma$ of the kernel
function to be $\sigma=1/N$. The blue triangles show the approximation results
using the available lattice data, while the orange circles are obtained by
only considering the ground state contribution obtained from fitting the
lattice data.
Finally, let us close this section by showing how the results shown in Figure
2 change if we increase $N$ to 100 and apply the error estimation method
discussed above. The results are shown in Figure 5. The figure shows that even
if the number of polynomials is increased, the central values of the
approximation remain stable. This should also be the case in the
$N\rightarrow\infty$ limit. At the same time, we see that the error bars
start to increase significantly. We note that the error bars shown in this plot
are most likely overestimated, since we are assuming the mathematical upper
limit. The actual error is expected to be smaller, but a proper estimate
requires knowledge of the spectrum. For instance, with a flat spectrum, the
errors should cancel around the threshold, while real problems might occur if
the spectrum is rapidly changing, although this is only expected near the
ground state.
Figure 5: Chebyshev approximation for $\bar{X}$ given in Eq. (2) decomposed
into different kinematical channels for $N=100$ Chebyshev polynomials. The
smearing of the kernel function is given by $\sigma=1/N=0.01$. The error bars
show the mathematical upper limit obtained by employing the properties of the
Chebyshev matrix elements.
## 5 Conclusion
We reported on our progress towards a lattice computation of the inclusive
$D_{s}\rightarrow X_{s}\ell\nu$ decay with fully controlled uncertainties. We
focused on the systematic error arising from the approximation of the
kernel function and presented a conservative error estimate employing the
mathematical properties of the Chebyshev approximation. With this estimate we
are able to cover the expected ground state contribution for the region close
to the kinematical limit, where ground-state dominance is expected. Obtaining
more realistic error bars requires further study. Once a proper error
estimate is available, we will obtain estimates for the total decay rate and
compare our results with experimental data [12]. Furthermore, we are going to
extend our analysis by including two more ensembles, as well as considering
different inclusive channels.
## Acknowledgments
The numerical calculations of the JLQCD collaboration were performed on SX-
Aurora TSUBASA at the High Energy Accelerator Research Organization (KEK)
under its Particle, Nuclear and Astrophysics Simulation Program, as well as on
Fugaku through the HPCI System Research Project (Project ID: hp220056).
The works of S.H. and T.K. are supported in part by JSPS KAKENHI Grant Numbers
22H00138 and 21H01085, respectively, and by the Post-K and Fugaku
supercomputer project through the Joint Institute for Computational
Fundamental Science (JICFuS).
## References
* [1] R. L. Workman et al. [Particle Data Group], PTEP 2022 (2022), 083C01 doi:10.1093/ptep/ptac097
* [2] P. Gambino and S. Hashimoto, Phys. Rev. Lett. 125 (2020) no.3, 032001 doi:10.1103/PhysRevLett.125.032001 [arXiv:2005.13730 [hep-lat]].
* [3] P. Gambino, S. Hashimoto, S. Mächler, M. Panero, F. Sanfilippo, S. Simula, A. Smecca and N. Tantalo, JHEP 07 (2022), 083 doi:10.1007/JHEP07(2022)083 [arXiv:2203.11762 [hep-lat]].
* [4] M. T. Hansen, H. B. Meyer and D. Robaina, Phys. Rev. D 96 (2017) no.9, 094513 doi:10.1103/PhysRevD.96.094513 [arXiv:1704.08993 [hep-lat]].
* [5] M. Hansen, A. Lupo and N. Tantalo, Phys. Rev. D 99 (2019) no.9, 094508 doi:10.1103/PhysRevD.99.094508 [arXiv:1903.06476 [hep-lat]].
* [6] J. Bulava, M. T. Hansen, M. W. Hansen, A. Patella and N. Tantalo, JHEP 07 (2022), 034 doi:10.1007/JHEP07(2022)034 [arXiv:2111.12774 [hep-lat]].
* [7] B. Colquhoun et al. [JLQCD], Phys. Rev. D 106 (2022) no.5, 054502 doi:10.1103/PhysRevD.106.054502 [arXiv:2203.04938 [hep-lat]].
* [8] Peter Boyle, Azusa Yamaguchi, Guido Cossu, and Antonin Portelli. Grid: Data parallel C++ mathematical object library. https://github.com/paboyle/Grid.
  * [9] Antonin Portelli, Ryan Abott, Nils Asmussen, et al. Hadrons: Grid-based workflow management system for lattice field theory simulations. https://github.com/aportelli/Hadrons.
* [10] Peter Lepage and Christoph Gohlke. gplepage/lsqfit: lsqfit version 12.0.3, December 2021.
* [11] S. Hashimoto, PTEP 2017 (2017) no.5, 053B03 doi:10.1093/ptep/ptx052 [arXiv:1703.01881 [hep-lat]].
* [12] M. Ablikim et al. [BESIII], Phys. Rev. D 104 (2021) no.1, 012003 doi:10.1103/PhysRevD.104.012003 [arXiv:2104.07311 [hep-ex]].
MLC at HECKTOR 2022: The Effect and Importance of Training Data when Analyzing Cases of Head and Neck Tumors using Machine Learning
Vajira Thambawita1 Andrea M. Storås1,2 Steven A. Hicks1
Pål Halvorsen1,2 Michael A. Riegler1,3
Head and neck cancers are the fifth most common cancer worldwide, and recently, analysis of Positron Emission Tomography (PET) and Computed Tomography (CT) images has been proposed to assess patient prognosis. Even though the results look promising, more research is needed to further validate and improve them. This paper presents the work done by team MLC for the 2022 edition of the HECKTOR grand challenge held at MICCAI 2022.
For Task 1, the automatic segmentation task, our approach was, in contrast to earlier solutions using 3D segmentation, to keep it as simple as possible using a 2D model, analyzing every slice as a standalone image. In addition, we were interested in understanding how different modalities influence the results. We proposed two approaches; one using only the CT scans to make predictions and another using a combination of the CT and PET scans.
For Task 2, the prediction of recurrence-free survival, we first proposed two approaches: one using only patient data and one combining the patient data with segmentations from the image model. For the predictions in these first two approaches, we used Random Forest. In our third approach, we combined patient data and image data using XGBoost. Since low kidney function might worsen cancer prognosis, in this approach we also estimated the kidney function of the patients and included it as a feature.
Overall, we conclude that our simple methods were not able to compete with the highest-ranking submissions, but we still obtained reasonably good scores. We also got interesting insights into how the combination of different modalities can influence the segmentation and predictions.
§ INTRODUCTION
Head and neck cancers are among the most common cancer types worldwide. Early detection is critical, as the tumor's size at diagnosis will dictate the patient's quality of life and chances of survival [10]. Medical image analysis and radiomics have shown promising results in detecting different diseases and cancers [16, 9, 2], including those found in the head and neck [20, 25, 18]. In this paper, we describe our approaches for the HEad and neCK TumOR (HECKTOR) grand challenge held at MICCAI 2022 [1, 17]. We participated in both of the two tasks presented at the challenge. In Task 1, the aim was to segment tumors from Computed Tomography (CT) and Positron Emission Tomography (PET) scans of the head and neck (examples shown in Figure <ref>). Task 2 asked for the prediction of recurrence-free survival (RFS) based on clinical information about the patients presented in tabular format, which could also be combined with the outputs from Task 1.
As the provided dataset contained different types of data, our strategy to tackle the HECKTOR challenge was to explore how the inclusion and combination of different modalities change the prediction outcome. In this respect, we investigated how CT and PET scans can be used individually or combined for tumor segmentation in Task 1 and how RFS can be predicted using the meta-data with or without tumor information for Task 2. The main contributions of this paper are as follows:
* A comparison of simple segmentation methods using CT or PET slices individually or combined.
* Understanding the effect of combining different data modalities on the analysis results.
* Analysis of what features were most relevant for predicting RFS using patient-related data and image features.
§ METHODS
In this section, we describe the methods we applied to solve Task 1 and 2, respectively.
§.§ Task 1: Segmentation of CT and PET scans
Three approaches used for Task 1. Approach 1 uses only CT images and the corresponding ground truth (GT). Approach 2 inputs a stack of CT, PET, and the mean of CT and PET. Approach 3 uses two separate UNet models for CT and PET and another UNet for the final predictions. The reshaping sizes used in Approach 1 differ from those used for Approaches 2 and 3.
For Task 1, we used the provided development dataset consisting of CT scans, PET scans, and corresponding segmentation masks. We experimented with three different approaches (see Figure <ref>):
Approach 1: Only using individual slices of the CT scans to predict tumors with a UNet++-based model [28].
Approach 2: Combining the CT and PET scans by stacking CT, PET, and the mean of CT and PET images channel-wise and passing them through a UNet++-based model.
Approach 3: Analyzing CT and PET slices separately in an ensemble-like setup using a TriUnet-based model [24].
These three approaches utilize the data provided in the HECKTOR competition differently, from simple to more complex. The following sections describe all the steps of data pre-processing, sampling, augmentation, implementation of the models, and post-processing.
§.§.§ Image Data pre-processing:
We divided the development dataset into a training and a validation dataset containing $90\%$ and $10\%$ of the samples, respectively. For Approach 1, we extracted the slices from the CT and ground truth as .png images without applying re-sampling, because the shapes of the CT and the provided ground truth were the same. However, for Approaches 2 and 3, we performed slice extraction after re-sampling (using SimpleITK [26]). We used the same re-sampling as provided by the task organizers[<https://github.com/voreille/hecktor/blob/master/src/resampling/resample_2022.py>] with default spacing $(2,2,2)$. In addition, we normalized all CT and PET images into the range between $0$ and $255$, but not the ground truth images, which contain pixel values of $[0,1,2]$. After the extraction process, we noticed that the training dataset contained a large number of true negative samples. Therefore, to avoid bias, we re-balanced the training dataset by extracting only slices with true positive pixels for H&N primary gross tumors (GTVp) and H&N nodal gross tumors (GTVn). The class rebalancing was done by combining an equal number of true positive slices with true negative slices extracted from the initial training dataset. To make a challenging validation dataset, we extracted only slices with GTVp and GTVn from the validation images.
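The class re-balancing step can be sketched as follows. This is an illustrative sketch operating on slice identifiers and boolean tumor flags, not the actual pipeline code, which works on extracted .png slices:

```python
import random

def rebalance(slice_ids, has_tumor, seed=0):
    # Keep every tumor-positive slice and draw an equal number of
    # tumor-negative slices, so the training set is class-balanced.
    positives = [s for s, t in zip(slice_ids, has_tumor) if t]
    negatives = [s for s, t in zip(slice_ids, has_tumor) if not t]
    rng = random.Random(seed)  # fixed seed for a reproducible subsample
    sampled = rng.sample(negatives, min(len(positives), len(negatives)))
    return positives + sampled
```

The returned set contains all positive slices plus a matching number of negatives, so a model trained on it is no longer dominated by empty slices.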
We applied similar image augmentation for all three approaches. The Albumentations [6] library provides a set of augmentation options for image segmentation tasks. More information about the input parameters of the augmentation methods can be found in our GitHub repository[<https://github.com/vlbthambawita/hecktor_2022_MLC>].
§.§.§ Model architectures, hyperparameters and inputs:
The models for Task 1 were implemented in PyTorch [19] using the Segmentation Models library [12]. All models were trained for $100$ epochs on hardware consisting of two Nvidia RTX 3080 Graphics Processing Units (GPUs) with 10 GB of memory each, an AMD Ryzen 9 3950X 16-core processor, and 64 GB of memory. Submissions were made with the best-performing checkpoints, which were selected based on performance on the validation dataset. For the first $50$ epochs, the learning rate was set to $0.0001$, then reduced to $0.00001$ for the remaining $50$ epochs. The Adam optimizer [14] with default parameters, except for the learning rate, was used for all experiments. Furthermore, we used Dice loss, skipping channel $0$, as the main loss function in the training process, and the Intersection over Union (IoU) as the metric to evaluate our models.
In Approach 1, we used a UNet++-based model with $se\_resnext50\_32x4d$ as the encoder. The model was trained using only single-channel CT input images and the corresponding ground truth masks after resizing them to $256\times256$ in the augmentation step.
For Approach 2, we re-sampled the CT and PET slices and trained a UNet++ model. For this approach, we stacked a CT slice, a PET slice, and the mean of the CT and PET slices in the color channel and used this stack as input to the model. The main objective of the second approach was to add information from the PET scans while keeping a UNet++-based architecture, without making any major changes from the first approach.
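The channel stacking of Approach 2 can be sketched as follows. This plain-list sketch is for illustration; the actual pipeline operates on image arrays (e.g. NumPy or PyTorch tensors):

```python
def stack_modalities(ct_slice, pet_slice):
    # Build the three-"channel" input of Approach 2: CT, PET, and their
    # pixel-wise mean, stacked along the color-channel axis.
    mean_slice = [
        [(c + p) / 2.0 for c, p in zip(ct_row, pet_row)]
        for ct_row, pet_row in zip(ct_slice, pet_slice)
    ]
    return [ct_slice, pet_slice, mean_slice]
```

This keeps the model architecture unchanged from Approach 1, since a three-channel input is what a standard RGB-style encoder expects.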
Approach 3 used a different architecture, TriUnet, which we introduced in a previous study [24]. In this model, we fed re-sampled single-channel CT slices into one UNet [21] and PET slices into another UNet. The outputs of the two networks were then passed through a third UNet model, which accepts six input channels (three output channels from each of the first two UNet models, representing the three classes of the ground truth). We used the same hyperparameters and trained the network as a single model with a single back-propagation step. The reason for not using UNet++ for this approach was mainly the memory limitations imposed by our GPUs.
§.§.§ Post-processing and submission preparations:
For all approaches, we re-shaped the test images to $256\times256$, which is the size of the training data. Then, we re-shaped the predicted segments back to the original shape of the CT images using re-sampling. However, we first had to re-shape the predictions back to the shape of the re-sampled input data before re-sizing them to the original shape. In both re-shaping steps, the interpolation provided by the OpenCV [13] library was used.
For all approaches in Task 1, we used the academic version of Weights and Biases [3] for tracking and analyzing experiments and the corresponding performance. All the experiments with the corresponding best checkpoints are available on GitHub[<https://github.com/vlbthambawita/hecktor_2022_MLC>].
§.§ Task 2: Prediction of Recurrence-Free Survival
§.§.§ Estimation of kidney function
We include the estimated kidney function as a feature for the XGBoost model from the third approach of Task 2 as this might improve the predictions of RFS.
Prior research indicates that chronic kidney disease can worsen the prognosis of cancer patients and that monitoring the kidney function of cancer patients is crucial [22].
The feature is created using the Cockcroft-Gault formula, which is among the most widely used formulas for estimating kidney function [8]. This formula requires the gender, age, body weight and serum creatinine concentration. Because serum creatinine is not available in the dataset, the average values for men and women are used instead [23]. Indeed, when plotting the correlation matrix for the training data, we observe a positive correlation between the estimated kidney function and the RFS (correlation = $0.26$), indicating that higher kidney function is associated with a longer time to recurrence. The entire correlation matrix for the training dataset is shown in Figure <ref>.
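A minimal sketch of the Cockcroft-Gault estimate is shown below (serum creatinine in mg/dL; as described above, the paper substitutes sex-specific average creatinine values since the dataset lacks this measurement; the numbers in the usage example are hypothetical):

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, is_female):
    # Estimated creatinine clearance in mL/min via the Cockcroft-Gault formula;
    # the result for female patients is scaled by 0.85.
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return 0.85 * crcl if is_female else crcl
```

For example, a hypothetical 60-year-old male patient weighing 70 kg with a serum creatinine of 1.0 mg/dL gets an estimated clearance of about 77.8 mL/min.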
§.§.§ Description of the approaches
For Task 2, we proposed three different approaches. The first approach used only the patient data, while the second and third approaches also included features based on the image data. The image features were derived from the segmentation masks produced by the best approach of Task 1. Specifically, we calculated the number of pixels per class from the predicted masks, in addition to the number of slices of the CT images in the z-dimension, resulting in four additional features. Moreover, the third approach used the estimated kidney function as a feature.
For all three approaches, we used $10$-fold cross-validation on the development data to determine the best hyperparameters and model.
The hyperparameters are selected based on the RMSE of the model, which should be as low as possible.
The final models are trained on the entire training dataset using the identified set of hyperparameters.
The resulting models are then used on the test dataset to make the predictions for the challenge evaluation. All experiments are performed using the scikit-learn library [5].
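The selection criterion above can be sketched as follows. This is an illustrative sketch of the fold construction and the RMSE criterion; the actual analysis uses scikit-learn's cross-validation utilities:

```python
import math

def kfold_indices(n_samples, k=10):
    # Partition sample indices into k near-equal folds
    # (no shuffling shown in this sketch).
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def rmse(y_true, y_pred):
    # Root-mean-square error, the model-selection criterion described above.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

Each candidate hyperparameter set is scored by averaging `rmse` over the held-out folds, and the set with the lowest average is refit on the full training data.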
* Approach 1: The first approach used the Random Forest [4] algorithm to predict RFS using only the patient data. Random Forest was chosen because it is known to work well on tabular data and is often used as a baseline for medical machine learning problems. All features provided in the training data were used except the patient ID. Based on the cross-validation results (RMSE of $988.47$), the hyperparameters for the Random Forest were set as follows: max features set to the number of features divided by three, and the number of trees set to $100$. All other hyperparameters used the default values set by scikit-learn.
* Approach 2: For the second approach, we used the same algorithm as in the first approach, but with the additional image features described at the beginning of this subsection. The RMSE from the cross-validation on the training data was $962.83$, an improvement over the first approach, showing that the inclusion of image data has a positive effect on the results. The hyperparameters used for the Random Forest in the second approach were as follows: max features set to the number of features, and the number of trees set to $200$. All other hyperparameters used the default values set by scikit-learn.
* Approach 3: For the third approach, an XGBoost [7] regression model was trained to predict RFS using the available patient data and the image features, with one additional feature representing the estimated kidney function. The feature representing alcohol consumption was removed because the majority of the patients in both the training and test datasets have no registered value for it. The patient ID was not included in the training data. The RMSE from the cross-validation on the training data was $909.09$. The hyperparameters for the XGBoost model are: `n_estimators` = $120$, `learning_rate` = $0.05$, `max_depth` = $4$, `subsample` = $0.7$, `colsample_bytree` = $0.6$, `colsample_bynode` = $1$ and `colsample_bylevel` = $0.8$. All other hyperparameters used the default values.
§.§.§ Investigating feature importance
After training the XGBoost model, feature importance is explored using Shapley additive explanations (SHAP) [15]. SHAP approximates Shapley values, which originate from game theory and assign values to the features based on how much they contribute to the prediction [15, 27]. Consequently, it is possible to investigate which features the model regards as most important.
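To make the game-theoretic origin concrete, the following sketch computes exact Shapley values for a toy three-player game by averaging each player's marginal contribution over all orderings. The coalition payoffs are invented for illustration and are not from the paper; SHAP's `TreeExplainer` approximates this quantity efficiently for tree models such as XGBoost:

```python
# Exact Shapley values for a small cooperative game, illustrating the
# game-theoretic quantity that SHAP approximates for model features.
from itertools import permutations

players = ("tobacco", "center_id", "hpv_status")

def v(coalition):
    # Toy characteristic function: additive payoffs plus one interaction term.
    s = frozenset(coalition)
    payoff = 0.0
    if "tobacco" in s:
        payoff += 3.0
    if "center_id" in s:
        payoff += 2.0
    if "hpv_status" in s:
        payoff += 1.0
    if {"tobacco", "hpv_status"} <= s:
        payoff += 0.5   # interaction: worth more together
    return payoff

def shapley(players, v):
    # Average each player's marginal contribution over all join orders.
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = v(coalition)
            coalition.append(p)
            phi[p] += v(coalition) - before
    return {p: total / len(orders) for p, total in phi.items()}

phi = shapley(players, v)
# Efficiency property: the attributions sum to the grand-coalition payoff.
assert abs(sum(phi.values()) - v(players)) < 1e-12
print(phi)
```

The interaction term is split evenly between the two interacting players, which is exactly the behavior that makes Shapley values a principled attribution scheme.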
§ DISCUSSION AND RESULTS
§.§ Task 1
Looking at the results for Task 1 in Table <ref>, we see that the first approach struggles to detect GTVp in both the validation and the test datasets. This is also shown in Figure <ref>, where the first model is unable to detect the presence of GTVp until the third slice. Despite not being able to detect GTVp very well, the first approach performs well on segmenting GTVn on the validation dataset, but not on the test dataset. This can most likely be attributed to the differences between the validation and test datasets, as the other approaches show similar results. Adding information from the PET scans for the latter two approaches seems to help in detecting GTVp, as evidenced by the improved scores in Table <ref>, and they are both able to detect GTVp in all eight slices from the example in Figure <ref>. The differences between Approaches 2 and 3 indicate that extracting features from the CT and PET scans independently seems to be the most suitable technique.
§.§ Task 2
For Task 2, the results can be seen in Table <ref>, and several interesting insights can be observed. Firstly, adding image data to the patient data gives better predictions, as seen in the difference between Approaches 1 and 2, where image features were added. We can also observe that XGBoost outperforms Random Forest by a large margin. This is consistent with general findings in the literature that XGBoost is among the best-performing methods on tabular data. It also calls into question the common practice of using Random Forest as a baseline and suggests that, in general, it should be replaced with XGBoost.
The SHAP values estimated for the third approach are plotted in Figure <ref> and give us a better understanding of which features are most relevant to the model. We observe that the top five features are Tobacco, CenterID, HPVstatus, estimated glomerular filtration rate (eGFR), which represents the kidney function, and Weight (most important first). That tobacco consumption is the most important feature is not surprising, as it is well known that tobacco increases the risk of developing head and neck cancer, see for example [11]. The kidney function is ranked fourth and seems to be an important indicator for RFS. This finding is in line with earlier research, where a relationship between kidney function and prognosis for cancer patients has been identified [22]. An important limitation is that the true serum creatinine values were not available in the provided dataset. The creatinine concentration largely determines the estimated kidney function, and applying gender-averaged values is not sufficient for accurate individual estimates. Despite this, we believe that serum creatinine should be included in future datasets to see if the model performance can be further improved. Interestingly, CenterID is ranked as the second most important feature, which is also supported by a positive correlation between CenterID and RFS in the correlation plot (Figure <ref>). These observations might be due to different patient populations at different centers; e.g., more severely ill patients might be treated at one center while less serious cases are treated at another. Another possibility is that the medical doctors choose different treatment strategies, or that the surgical skills differ between the centers. However, neither Surgery nor Chemotherapy is among the highest-ranked features. Note that CenterID was not encoded as a categorical feature; if it had been, the results of the SHAP analysis might have changed.
The five least important features are Chemotherapy, Gender, dim z, Surgery, and Age (least important first). The image features rank in the middle range of feature importance, showing that they can be an important factor in predicting RFS. Considering that our imaging method and the derived image features are simple, we assume that more advanced image analysis methods and the resulting features could increase the importance of the image-related features even further.
§ CONCLUSION
In conclusion, our simple methods were not able to perform at the same level as the highest-ranked submissions, despite achieving reasonable results. We gained interesting insights into combining different data modalities, showing that combining sources improves the results even when simple methods are used. For Task 2, we also took a closer look at feature importance, revealing some interesting features such as the usefulness of eGFR. For future work, it would be interesting to apply the feature importance analysis to other solutions of the competition to investigate whether they lead to similar findings. Furthermore, we would also like to investigate the CenterID correlation to explore whether a hospital-specific treatment or a country-related factor is influencing it.
[1]
Andrearczyk, V., Oreiller, V., Jreige, M., Vallières, M., Castelli, J.,
Elhalawani, H., Boughdad, S., Prior, J.O., Depeursinge, A.: Overview of the
hecktor challenge at miccai 2022: automatic head and neck tumor segmentation
in PET/CT. In: Head and Neck Tumor Segmentation and Outcome
Prediction (2022)
[2]
Bandyk, M.G., Gopireddy, D.R., Lall, C., Balaji, K., Dolz, J.: MRI and CT
bladder segmentation from classical to deep learning based approaches:
Current limitations and lessons. Computers in Biology and Medicine
134, 104472 (2021)
[3]
Biewald, L.: Experiment tracking with weights and biases (2020),
<https://www.wandb.com/>, software available from wandb.com
[4]
Breiman, L.: Random forests. Machine learning 45(1), 5–32 (2001)
[5]
Buitinck, L., Louppe, G., Blondel, M., Pedregosa, F., Mueller, A., Grisel, O.,
Niculae, V., Prettenhofer, P., Gramfort, A., Grobler, J., Layton, R.,
VanderPlas, J., Joly, A., Holt, B., Varoquaux, G.: API design for machine
learning software: experiences from the scikit-learn project. In: ECML PKDD
Workshop: Languages for Data Mining and Machine Learning. pp. 108–122 (2013)
[6]
Buslaev, A., Iglovikov, V.I., Khvedchenya, E., Parinov, A., Druzhinin, M.,
Kalinin, A.A.: Albumentations: Fast and flexible image augmentations.
Information 11(2) (2020). 10.3390/info11020125,
[7]
Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. In:
Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining (KDD). pp. 785–794 (2016). 10.1145/2939672.2939785
[8]
Cockcroft, D.W., Gault, H.: Prediction of creatinine clearance from serum
creatinine. Nephron 16, 31–41 (1976). 10.1159/000180580
[9]
Duran, A., Dussert, G., Rouvière, O., Jaouen, T., Jodoin, P.M., Lartizien,
C.: Prostattention-net: A deep attention model for prostate cancer
segmentation by aggressiveness in mri scans. Medical Image Analysis
77, 102347 (2022)
[10]
Gerstner, A.: Early detection in head and neck cancer – current state and
future perspectives. GMS current topics in otorhinolaryngology, head and neck
surgery 7, Doc06 (01 2008)
[11]
Hashibe, M., Brennan, P., Chuang, S.c., Boccia, S., Castellsague, X., Chen, C.,
Curado, M.P., Dal Maso, L., Daudt, A.W., Fabianova, E., Fernandez, L.,
Wünsch-Filho, V., Franceschi, S., Hayes, R.B., Herrero, R., Kelsey, K.,
Koifman, S., La Vecchia, C., Lazarus, P., Levi, F., Lence, J.J., Mates, D.,
Matos, E., Menezes, A., McClean, M.D., Muscat, J., Eluf-Neto, J., Olshan,
A.F., Purdue, M., Rudnai, P., Schwartz, S.M., Smith, E., Sturgis, E.M.,
Szeszenia-Dabrowska, N., Talamini, R., Wei, Q., Winn, D.M., Shangina, O.,
Pilarska, A., Zhang, Z.F., Ferro, G., Berthiller, J., Boffetta, P.:
Interaction between Tobacco and Alcohol Use and the Risk of Head and Neck
Cancer: Pooled Analysis in the International Head and Neck Cancer
Epidemiology Consortium. Cancer Epidemiology, Biomarkers & Prevention
18(2), 541–550 (2009). 10.1158/1055-9965.EPI-08-0347
[12]
Iakubovskii, P.: Segmentation models pytorch.
<https://github.com/qubvel/segmentation_models.pytorch> (2019)
[13]
Itseez: Open source computer vision library.
<https://github.com/itseez/opencv> (2015)
[14]
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv
preprint arXiv:1412.6980 (2014)
[15]
Lundberg, S.M., Erion, G., Chen, H., DeGrave, A., Prutkin, J.M., Nair, B.,
Katz, R., Himmelfarb, J., Bansal, N., Lee, S.I.: Explainable ai for trees:
From local explanations to global understanding (2019).
[16]
Mittmann, B.J., Braun, M., Runck, F., Schmitz, B., Tran, T.N., Yamlahi, A.,
Maier-Hein, L., Franz, A.M.: Deep learning-based classification of dsa image
sequences of patients with acute ischemic stroke. International Journal of
Computer Assisted Radiology and Surgery pp. 1–9 (2022)
[17]
Oreiller, V., Andrearczyk, V., Jreige, M., Boughdad, S., Elhalawani, H.,
Castelli, J., Vallières, M., Zhu, S., Xie, J., Peng, Y., Iantsen, A., Hatt,
M., Yuan, Y., Ma, J., Yang, X., Rao, C., Pai, S., Ghimire, K., Feng, X.,
Naser, M.A., Fuller, C.D., Yousefirizi, F., Rahmim, A., Chen, H., Wang, L.,
Prior, J.O., Depeursinge, A.: Head and neck tumor segmentation in PET/CT:
The HECKTOR challenge. Medical Image Analysis 77, 102336 (2022).
[18]
Outeiral, R.R., Bos, P., van der Hulst, H.J., Al-Mamgani, A., Jasperse, B.,
Simões, R., van der Heide, U.A.: Strategies for tackling the class
imbalance problem of oropharyngeal primary tumor segmentation on magnetic
resonance images. Physics and Imaging in Radiation Oncology (2022)
[19]
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen,
T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E.,
DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L.,
Bai, J., Chintala, S.: Pytorch: An imperative style, high-performance deep
learning library. In: Wallach, H., Larochelle, H., Beygelzimer, A.,
d'Alché-Buc, F., Fox, E., Garnett, R. (eds.) Advances in
Neural Information Processing Systems 32, pp. 8024–8035. Curran Associates,
Inc. (2019),
[20]
Ren, J., Eriksen, J.G., Nijkamp, J., Korreman, S.S.: Comparing different ct,
pet and mri multi-modality image combinations for deep learning-based head
and neck tumor segmentation. Acta Oncologica 60(11), 1399–1406
[21]
Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for
biomedical image segmentation. In: International Conference on Medical image
computing and computer-assisted intervention. pp. 234–241. Springer (2015)
[22]
e Silva, V.T.d.C., Costalonga, E.C., Coelho, F.O., Caires, R.A., Burdmann,
E.A.: Assessment of kidney function in patients with cancer. Advances in
chronic kidney disease 25(1), 49–56 (2018)
[23]
Staff, M.C.: Creatinine tests.
[24]
Thambawita, V.L.B., Hicks, S., Halvorsen, P., Riegler, M.: Divergentnets:
Medical image segmentation by network ensemble. In: Proceedings of the
International Workshop and Challenge on Computer Vision in Endoscopy
(EndoCV). pp. 27–38 (2021)
[25]
Wahid, K.A., Ahmed, S., He, R., van Dijk, L.V., Teuwen, J., McDonald, B.A.,
Salama, V., Mohamed, A.S., Salzillo, T., Dede, C., et al.: Evaluation of deep
learning-based multiparametric mri oropharyngeal primary tumor
auto-segmentation and investigation of input channel effects: Results from a
prospective imaging registry. Clinical and translational radiation oncology
32, 6–14 (2022)
[26]
Yaniv, Z., Lowekamp, B.C., Johnson, H.J., Beare, R.: Simpleitk image-analysis
notebooks: a collaborative environment for education and reproducible
research. Journal of digital imaging 31(3), 290–303 (2018)
[27]
Young, H.P.: Monotonic solutions of cooperative games. International Journal
of Game Theory 14, 65–72 (1985). 10.1007/BF01769885
[28]
Zhou, Z., Siddiquee, M.M.R., Tajbakhsh, N., Liang, J.: Unet++: Redesigning skip
connections to exploit multiscale features in image segmentation. IEEE
transactions on medical imaging 39(6), 1856–1867 (2019)
|
11institutetext: Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via
Orabona 4, I-70126 Bari, Italy 22institutetext: Dipartimento Interateneo di
Fisica “M. Merlin”, Università e Politecnico di Bari,
via Orabona 4, 70126 Bari, Italy
# Holographic Study of the $Q\bar{Q}$ Chaotic Dynamics in General Thermal
Background
Nicola Losacco 1122<EMAIL_ADDRESS>
###### Abstract
The holographic approach is applied to study the chaotic behaviour of a
strongly coupled $Q\bar{Q}$ pair in general thermal background. We consider
two different backgrounds, one with finite temperature and baryon density, and
one with finite temperature and constant magnetic field along a fixed
direction. The results allow us to understand the dependence of the chaotic dynamics on the background and to test the universal bound on chaos conjectured by Maldacena, Shenker and Stanford (MSS).
## 1 Introduction
We study the effects of chemical potential, related to a baryon reservoir, and
of a magnetic field on the chaotic behaviour of a strongly coupled $Q\bar{Q}$
pair in a finite temperature background Colangelo:2020tpr ; Colangelo:2021kmn
. Such systems can be investigated through holographic methods analyzing the
dynamics of an open string hanging in the bulk in presence of a black hole
(BH). The BH properties, as Hawking temperature and charge, are related to
properties of the boundary gauge theory. In Sekino_2008 ; susskind2011addendum
it is shown that BH belong to a set of systems called fast “scramblers”, in
particular BH are the fastest scramblers in nature: the time needed for a
system near a BH horizon to loose information depends logarithmically on the
number of the system degrees of freedom. As shown in Maldacena:2015waa these
systems present a bounded chaotic dynamics, with upper bound (MSS bound) on
the largest Lyapunov exponent $\lambda$ characterizing the chaotic behaviour
of the thermodynamic quantum system with temperature $T$:
$\lambda\leqslant 2\pi T.$ (1)
The connections between chaotic quantum systems and gravity have been
investigated in Shenker_2014 ; Shenker:2014cwa ; Kitaev ; Polchinski:2015cea ;
Giataganas:2021ghs . A generalization of the bound Eq. (1), proposed in the
literature for systems presenting a global symmetry Halder:2019ric :
$\lambda\leqslant\frac{2\pi
T}{1-\left|\frac{\mu}{\mu_{c}}\right|}\qquad\mu\ll\mu_{c}$ (2)
where $\mu$ is the chemical potential and $\,\mu_{c}$ a critical value, can be
checked. To test the bounds we consider the $Q\bar{Q}$ pair in a finite
temperature and baryon density background Colangelo:2020tpr , and with a
constant and uniform magnetic field at finite temperature Colangelo:2021kmn .
This allows us to test the generalized bound Eq. (2), and the features of
phenomenological contexts such as in heavy-ion collisions Arefeva:2020vae ;
Arefeva:2022avn ; Rannu:2022fxw .
## 2 Geometry
The gravity dual of a strongly coupled $Q\bar{Q}$ pair is an open string
hanging in a $5$-dimensional metric obtained solving Einstein equations with
suitable boundary conditions. The endpoints of the string are on the boundary,
they stand for the heavy quarks in the gauge boundary theory. For finite
baryon density the geometry is described by a Reissner–Nordström metric:
$\displaystyle ds^{2}$
$\displaystyle=-f\left(r\right)r^{2}dt^{2}+r^{2}d\mathbf{x}^{2}+\frac{1}{r^{2}f\left(r\right)}dr^{2},$
(3) $\displaystyle
f\left(r\right)=1-\frac{r_{h}^{4}}{r^{4}}-\frac{\mu^{2}r_{h}^{2}}{r^{4}}+\frac{r_{h}^{4}\mu^{2}}{r^{6}},$
where $r_{h}$ is the external BH horizon and the chemical potential $\mu$ is
related to the BH charge. For the case with magnetic field, an approximate
solution of the Einstein equations is used Li:2016gfn :
$\displaystyle ds^{2}=-f\left(r\right)r^{2}dt^{2}+r^{2}h\left(r\right)$
$\displaystyle(dx^{1})^{2}+r^{2}h\left(r\right)(dx^{2})^{2}+r^{2}q\left(r\right)(dx^{3})^{2}+\frac{1}{r^{2}f\left(r\right)}dr^{2},$
(4) $\displaystyle
f(r)=1-\frac{r_{h}^{4}}{r^{4}}-\frac{2B^{2}}{3r^{4}}\log{\frac{r}{r_{h}}}$
$\displaystyle q(r)=1-\frac{2B^{2}}{3r^{4}}\log r$ (5) $\displaystyle
h(r)=1+\frac{B^{2}}{3r^{4}}\log r\,.$
$B$ is related to the magnetic field in the $x^{3}$ direction. The magnetic
field breaks rotational invariance, hence $h\left(r\right)\neq
q\left(r\right)$.
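As a quick consistency check on Eqs. (3) and (4), one can verify numerically that both blackening factors vanish at the horizon $r=r_{h}$. The following sketch does this for a few illustrative parameter values (chosen here, not taken from the paper), and also estimates $f'(r_{h})$, which sets the Hawking temperature through the near-horizon expansion of $g_{tt}=-f(r)r^{2}$:

```python
# Numerical sanity check that the blackening factors of Eqs. (3) and (4)
# vanish at the horizon r = r_h; mu, B, r_h values are illustrative.
import math

def f_RN(r, r_h, mu):
    # Reissner-Nordstrom blackening factor, Eq. (3)
    return 1 - r_h**4 / r**4 - mu**2 * r_h**2 / r**4 + r_h**4 * mu**2 / r**6

def f_B(r, r_h, B):
    # Magnetic-field background, Eq. (4)
    return 1 - r_h**4 / r**4 - (2 * B**2) / (3 * r**4) * math.log(r / r_h)

r_h = 1.0
for mu in (0.3, 0.9):
    assert abs(f_RN(r_h, r_h, mu)) < 1e-12
for B in (0.3, 1.0):
    assert abs(f_B(r_h, r_h, B)) < 1e-12

# Estimate f'(r_h) for the RN case by a central finite difference;
# analytically f'(r_h) = 4/r_h - 2*mu^2/r_h^3.
eps = 1e-6
fp = (f_RN(r_h + eps, r_h, 0.3) - f_RN(r_h - eps, r_h, 0.3)) / (2 * eps)
print(fp)
```

For $\mu=0.3$, $r_{h}=1$ the analytic slope is $4-2\mu^{2}=3.82$, which the finite difference reproduces.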
## 3 Exploring Chaos
Let us consider the general metric
$ds^{2}=g_{tt}dt^{2}+g_{11}(dx^{1})^{2}+g_{22}(dx^{2})^{2}+g_{33}(dx^{3})^{2}+g_{rr}dr^{2}\,.$
(6)
The string profile is obtained from the Nambu-Goto (NG) action:
$\mathcal{S}=-\frac{1}{2\pi\alpha^{\prime}}\int dt\,d\ell\,\sqrt{-h}\,$ (7)
where $1/(2\pi\alpha^{\prime})$ is the string tension and $h$ is the determinant of the
induced metric $h_{ij}=g_{MN}\frac{\partial
X^{M}}{\partial\xi_{i}}\frac{\partial X^{N}}{\partial\xi_{j}}$, with
$\xi_{i,j}$ the worldsheet coordinates and $g$ the metric tensor. The static
solution depends on the parameters of the metric and on the proximity of the
string to the BH horizon, quantified by the string tip position $r_{0}$.
Analyzing the dependence of the static configuration energy on $r_{0}$, we found that it has a maximum near the black hole horizon $r_{h}$. This unstable
configuration enhances the chaotic behaviour of the system, therefore we set
our static solution near the BH horizon. We perturb the static configuration
with a time-dependent fluctuation orthogonal to the string at each point
Hashimoto:2018fkb :
$\displaystyle r\left(t,\ell\right)$
$\displaystyle=r_{BG}\left(\ell\right)+\xi\left(t,\ell\right)n^{r}\left(\ell\right),$
(8) $\displaystyle x\left(t,\ell\right)$
$\displaystyle=x_{BG}\left(\ell\right)+\xi\left(t,\ell\right)n^{x}\left(\ell\right),$
leaving the endpoints unperturbed, Fig. 1. $BG$ stands for the background solution
obtained from the action in Eq. (7).
Figure 1: Static string profile and perturbation $\xi\left(t,\ell\right)$
along the direction orthogonal to the string in each point with coordinate
$\ell$.
We expand the action Eq. (7) to the second order in the perturbation $\xi$:
$\displaystyle
S^{\left(2\right)}=\frac{1}{2\pi\alpha^{\prime}}\int\mathrm{d}t\int_{-\infty}^{\infty}\mathrm{d}\ell\big{(}C_{tt}\dot{\xi}^{2}+C_{\ell\ell}\acute{\xi}^{2}+C_{00}\xi^{2}\big{)}.$
(9)
$C_{tt}$, $C_{\ell\ell}$ and $C_{00}$ depend on $\ell$ and on the parameters
of the metric. From the action Eq. (9) we obtain the equation of motion for
the perturbation. Factorizing the variables, $\xi(t,\ell)=\xi(\ell)e^{i\omega t}$, this becomes a Sturm-Liouville equation
$\partial_{\ell}\left(C_{\ell\ell}\,\acute{\xi}\right)-C_{00}\,\xi=\omega^{2}C_{tt}\,\xi$.
We solve it for the first two eigenvalues. We write the perturbation as a
linear combination of the first two eigenfunctions with time dependent
coefficients describing the dynamics of the system:
$\xi\left(t,\ell\right)=c_{0}\left(t\right)e_{0}\left(\ell\right)+c_{1}\left(t\right)e_{1}\left(\ell\right).$
(10)
This allows us to evaluate the third-order action using Eq. (10):
$\displaystyle S^{(3)}$
$\displaystyle=\frac{1}{2\pi\alpha^{\prime}}\int\mathrm{d}t\Big{[}\sum_{n=0,1}\left(\dot{c}_{n}^{2}-\omega_{n}^{2}c_{n}^{2}\right)+K_{1}c_{0}^{3}+K_{2}c_{0}c_{1}^{2}+K_{3}c_{0}\dot{c}_{0}^{2}+K_{4}c_{0}\dot{c}_{1}^{2}+K_{5}\dot{c}_{0}c_{1}\dot{c}_{1}\Big{]}.$
(11)
The coefficients $K_{1,\dots,5}$ are obtained from an integration over $\ell$
and depend on $r_{0}$, the tip position of the hanging string, and on the
parameters of the metric. The potential described by Eq. (11) is a trap
potential that confines the dynamics of the unstable string configurations. In
the trap region the kinetic term is negative. As shown in Hashimoto:2018fkb ;
Colangelo:2020tpr , we can substitute $c_{0,1}\to\tilde{c}_{0,1}$ in the
action, with
$c_{0}=\tilde{c}_{0}+\alpha_{1}\tilde{c}_{0}^{2}+\alpha_{2}\tilde{c}_{1}^{2}$
and $c_{1}=\tilde{c}_{1}+\alpha_{3}\tilde{c}_{0}\tilde{c}_{1}$, neglecting
$\mathcal{O}(\tilde{c}_{i}^{4})$ terms, choosing the constants $\alpha_{i}$ so as to ensure the positivity of the kinetic term. This replacement does not affect
the dynamics of the system, and a chaotic behaviour shows up in the
transformed system. The chaotic dynamics can be studied by analyzing the
dynamics of $\tilde{c}_{0}(t)$ and $\tilde{c}_{1}(t)$ governed by the action
Eq. (11) after the substitution.
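The eigenvalue step described above can be illustrated on a toy Sturm-Liouville problem with constant coefficients ($C_{\ell\ell}=C_{tt}=1$, $C_{00}=0$) on $[0,\pi]$ with fixed endpoints, where the exact eigenfrequencies are $\omega_{n}=n+1$. The sketch below finds the two lowest eigenvalues by a shooting method; this simplified setup is an assumption for illustration, not the actual string problem:

```python
# Toy eigenmode computation: solve -xi'' = omega^2 xi on [0, pi] with
# xi(0) = xi(pi) = 0 (fixed endpoints, as for the unperturbed string ends).
# Exact eigenfrequencies: omega_n = n + 1.
import math

def shoot(omega, n_steps=2000):
    """Integrate xi'' = -omega^2 xi from 0 to pi with RK4; return xi(pi)."""
    h = math.pi / n_steps
    xi, dxi = 0.0, 1.0   # xi(0) = 0, arbitrary slope normalization

    def deriv(xi, dxi):
        return dxi, -omega**2 * xi

    for _ in range(n_steps):
        k1 = deriv(xi, dxi)
        k2 = deriv(xi + 0.5 * h * k1[0], dxi + 0.5 * h * k1[1])
        k3 = deriv(xi + 0.5 * h * k2[0], dxi + 0.5 * h * k2[1])
        k4 = deriv(xi + h * k3[0], dxi + h * k3[1])
        xi += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dxi += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return xi

def eigenvalue(lo, hi, tol=1e-10):
    """Bisection on omega for a sign change of xi(pi)."""
    f_lo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) * f_lo > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega0 = eigenvalue(0.5, 1.5)   # lowest mode, exact value 1
omega1 = eigenvalue(1.5, 2.5)   # first excited mode, exact value 2
print(round(omega0, 6), round(omega1, 6))
```

For the actual string problem, the same shooting-and-bisection idea applies with $\ell$-dependent coefficients $C_{\ell\ell}$, $C_{tt}$, $C_{00}$ extracted from the quadratic action Eq. (9).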
## 4 Results
The chaotic dynamics is analyzed through Poincaré plots and Lyapunov
exponents, evaluated at $r_{0}=1.1$ and $r_{h}=1$ for different $\mu$ and $B$.
In Fig. 2 we see that, at $B=0$, increasing the chemical potential makes the trajectories in the Poincaré section more stable; hence the chemical potential stabilizes the system.
Figure 2: Poincaré sections for a time-dependent perturbed string, obtained
changing the initial conditions, and increasing the chemical potential
$\mu=0.3$ (left) and $\mu=0.9$ (right), for $\tilde{c}_{1}=0$ and
$\dot{\tilde{c}}_{1}\geq 0$.
In the case of the magnetic field, either parallel or orthogonal to the
$Q\bar{Q}$ system, we observe a stabilization of the orbits increasing the
magnetic field from $B=0.3$ to $B=1$, Fig. 3.
Figure 3: Poincaré sections for a time-dependent perturbed string along the
magnetic field (top panels) and orthogonal to the magnetic field (bottom
panels), obtained changing the initial conditions, and increasing the magnetic
field $B=0.3$ (left) and $B=1$ (right), for $\tilde{c}_{1}=0$ and
$\dot{\tilde{c}}_{1}\geq 0$.
The anisotropy of the system is manifested in the different chaotic behaviour
in the two directions. When the $Q\bar{Q}$ system is orthogonal to the
magnetic field, the region in which the trajectories become scattered is wider
than the case in which the string is along the magnetic field, as we can see
in Fig. 3. It is necessary to approach the origin of the phase space to find
chaotic trajectories when the string is along the magnetic field. All these
properties are shown by the largest Lyapunov exponents for different values of
chemical potential and magnetic field, Fig. 4.
Figure 4: Largest Lyapunov exponent $\lambda_{MAX}$ versus $\mu$ (left) and
$B$ (right) for $r_{0}=1.1$. In the plot at right the results for the string
configurations orthogonal (blue squares) and along (red points) the magnetic
field are shown.
Increasing both chemical potential and magnetic field, the systems become less
chaotic since the largest Lyapunov exponent decreases. Moreover, the
configuration along the magnetic field stabilizes faster. In all cases the
largest Lyapunov exponents satisfy the MSS bound Eq. (1), even in the case of
baryon density where the generalized bound Eq. (2) is less stringent.
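The largest-Lyapunov-exponent estimator used above can be checked on a system with an exactly known exponent. For the logistic map $x\mapsto 4x(1-x)$ the exponent is $\ln 2$, so a positive value correctly flags chaos; this stand-in is for illustration only, whereas the string system requires integrating the equations of motion derived from Eq. (11), but the time-averaging idea is the same:

```python
# Largest Lyapunov exponent of the logistic map x -> 4x(1-x),
# estimated as the orbit average of log|f'(x)|; exact value is ln 2.
import math

def lyapunov_logistic(x0=0.2, n_transient=100, n=100_000, r=4.0):
    x = x0
    for _ in range(n_transient):   # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))   # log |f'(x)|
        x = r * x * (1 - x)
    return acc / n

lam = lyapunov_logistic()
print(lam)   # close to ln 2; positive exponent -> chaotic dynamics
```

A decreasing largest Lyapunov exponent, as found here when $\mu$ or $B$ grows, therefore signals the system moving away from chaos.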
## 5 Conclusions
Chaos for the string dual to the $Q\bar{Q}$ system has been observed in the
Poincaré plots, characterized by scattered points in the region where the tip
of the string is close to the black hole horizon. The system becomes less
chaotic increasing $\mu$ and $B$. For the magnetic field case, an anisotropy
effect in two different orientations of the string is found. The MSS bound is
satisfied for the largest Lyapunov exponent and remains universal.
Acknowledgements. I thank P. Colangelo, F. De Fazio and F. Giannuzzi, co-authors of the works on which this proceeding is based. This study has been carried out within the INFN project (Iniziativa Specifica) QFT-HEP.
## References
* (1) P. Colangelo, F. De Fazio, N. Losacco, Phys. Rev. D 102, 074016 (2020), 2007.06980
* (2) P. Colangelo, F. Giannuzzi, N. Losacco, Phys. Lett. B 827, 136949 (2022), 2111.09441
* (3) Y. Sekino, L. Susskind, JHEP 10, 065 (2008), 0808.2096
* (4) L. Susskind (2011), 1101.6048
* (5) J. Maldacena, S.H. Shenker, D. Stanford, JHEP 08, 106 (2016), 1503.01409
* (6) S.H. Shenker, D. Stanford, JHEP 03, 067 (2014), 1306.0622
* (7) S.H. Shenker, D. Stanford, JHEP 05, 132 (2015), 1412.6087
* (8) A. Kitaev, Breakthrough Prize Fundamental Physics Symposium, 11/10/2014, KITP seminar (2014)
* (9) J. Polchinski (2015), 1505.08108
* (10) D. Giataganas (2021), 2112.02081
* (11) I. Halder (2019), 1908.05281
* (12) I.Y. Aref’eva, K. Rannu, P. Slepov, JHEP 07, 161 (2021), 2011.07023
* (13) I.Y. Aref’eva, A. Ermakov, K. Rannu, P. Slepov (2022), 2203.12539
* (14) K. Rannu, I.Y. Aref’eva, P.S. Slepov, Rev. Mex. Fis. Suppl. 3, 0308126 (2022)
* (15) D. Li, M. Huang, Y. Yang, P.H. Yuan, JHEP 02, 030 (2017), 1610.04618
* (16) K. Hashimoto, K. Murata, N. Tanahashi, Phys. Rev. D 98, 086007 (2018), 1803.06756
|
# A proposal for leaky integrate-and-fire neurons by domain walls in
antiferromagnetic insulators
Verena Brehm Center for Quantum Spintronics, Department of Physics, Norwegian
University of Science and Technology, 7491 Trondheim, Norway these authors
contributed equally to this work<EMAIL_ADDRESS>Johannes W.
Austefjord Center for Quantum Spintronics, Department of Physics, Norwegian
University of Science and Technology, 7491 Trondheim, Norway these authors
contributed equally to this work Serban Lepadatu Jeremiah Horrocks Institute
for Mathematics, Physics and Astronomy, University of Central Lancashire,
Preston, PR1 2HE, United Kingdom Alireza Qaiumzadeh Center for Quantum
Spintronics, Department of Physics, Norwegian University of Science and
Technology, 7491 Trondheim, Norway
###### Abstract
Brain-inspired neuromorphic computing is a promising path towards next
generation analogue computers that are fundamentally different compared to the
conventional von Neumann architecture. One model for neuromorphic computing
that can mimic the human brain behavior are spiking neural networks (SNNs), of
which one of the most successful is the leaky integrate-and-fire (LIF) model.
Since conventional complementary metal-oxide-semiconductor (CMOS) devices are
not meant for modelling neural networks and are energy inefficient in network
applications, recently the focus shifted towards spintronic-based neural
networks. In this work, using the advantage of antiferromagnetic insulators,
we propose a non-volatile magnonic neuron that could be the building block of
a LIF spiking neuronal network. In our proposal, an antiferromagnetic domain
wall in the presence of a magnetic anisotropy gradient mimics a biological
neuron with leaky, integrating, and firing properties. This single neuron is
controlled by polarized antiferromagnetic magnons, activated by either a
magnetic field pulse or a spin transfer torque mechanism, and has properties
similar to biological neurons, namely latency, refraction, bursting and
inhibition. We argue that this proposed single neuron, based on
antiferromagnetic domain walls, is faster and has more functionalities
compared to previously proposed neurons based on ferromagnetic systems.
## 1 Introduction
Modern electronic digital computers are designed based on the so-called von
Neumann computing architecture. They rely on central processing units (CPU),
built upon complementary metal-oxide-semiconductor (CMOS) transistors [1]. In
contrast to that, inspired by the human brain and its complex neural network,
novel energy efficient analogue computing architectures with strongly
interconnected processing elements have been proposed that lead to the
emerging technology of neuromorphic computing and engineering [2, 3, 4, 5].
The conventional CPU-based von Neumann computing architecture is faster than
the current state-of-the-art neuromorphic computing, but the latter can potentially solve computationally intensive tasks, such as speech and character recognition, while offering more energy-efficient data processing
[6]. To achieve even higher energy efficiency as well as faster data
processing in neuromorphic computing architecture, it was proposed very
recently that neuromorphic principles may be implemented in spintronic-based
nanodevices. This leads to the emerging field of the neuromorphic spintronics
[7]. In spintronic-based nanotechnology, the intrinsic spin angular momentum
of electrons, rather than their charge, may be used for data storage and
processing. The magnetic insulators that host magnons and various topological
magnetic textures are key ingredients for efficient data processing and
information storage [8]. Hence, the ubiquitous Joule heating arising from electron scattering in metals and semiconductors is avoided in insulators. Several ferromagnetic-based LIF neurons for SNNs have already been proposed [9, 10, 11]. However, recent theoretical and
experimental advances in spintronics have shown that antiferromagnetic (AFM) systems have many more advantages than their ferromagnetic (FM) counterparts [12]. The absence of parasitic stray fields, operation at THz frequencies in contrast to GHz in FM systems, the existence of magnons with opposite chiralities, and the abundance of room-temperature AFM materials in nature make AFM-based spintronics a highly promising candidate for the hardware implementation of the next generation of ultrafast, low-energy-cost, and miniaturized non-volatile neuromorphic chips [13, 14, 15, 16].
Spiking neural networks (SNNs) are a class of neuromorphic computing
architecture that mimic human neural networks [17]. One of the most successful
spiking neural network models is the leaky integrate-and-fire (LIF) model
[18]. This model resembles the spiking behavior of a neuron at the onset of
critical accumulating stimuli and its slow decay to the equilibrium state
until the next spike [19]. LIF may be used as the building block of
neuromorphic chips [20].
In this paper, we propose a non-volatile AFM-based single neuron with leaky
integrate-and-fire properties that may be the building block for a LIF spiking
neural network. The state of this neuron is encoded in the position of a
domain wall (DW), which is controlled by AFM magnons. Leaky behavior is
ensured by a nonuniform magnetic anisotropy profile, while there is no standby
leakage in the neuron.
## 2 Theory of Neural Networks
In this section, we briefly summarize the key elements and ingredients of SNN
and LIF single-neuron models. In the following sections, we show that our proposed single neuron has similar characteristics.
### 2.1 Spiking Neural Networks
An SNN takes the inspiration of human brain activity in computer science one step further than other models of artificial neural networks, such as feedforward neural networks [21]. Information in this model is encoded as spike trains, in contrast to the binary information coding used in conventional computers. The network
has an explicit time dependency and the system is event-driven.
Figure 1: Schematic of a spiking neural network [22]. A LIF neuron $\Xi$
receives input spikes from several presynaptic neurons. In the present work,
we model $\Xi$ by an AFM DW. The spike trains are multiplied by weights
$w_{i}$ and merged before they get sent into $\Xi$. A non-linear function
determines whether the neuron should fire as a consequence of stimuli from its
synapses.
We first give a brief mathematical description of the SNN model. A generic
spiking neuron $\Xi$ is represented in Fig. 1. Let $V$ be a finite set of
spiking neurons, connected by a set $E\subseteq V\times V$ of synapses. For
each synapse $\langle i,j\rangle\in E$ between presynaptic neuron $j$ and
postsynaptic neuron $i$ there is associated a response function
$\epsilon_{ij}$ and a weight $w_{ij}$. The state variable of $i^{\rm{th}}$
neuron, $u_{i}(t)$, is then given by [18, 21],
$u_{i}(t)=\delta(t-t_{i}^{(f)})+\sum_{j}\sum_{f}w_{ij}\epsilon_{ij}(t-t_{j}^{(f)})+u_{0}.$
(1)
Here $u_{0}$ is the equilibrium potential, i.e. the value of $u_{i}(t)$ when
no stimulus has affected the neuron, and $t_{j}^{(f)}$ indicates the firing
times, where $f$ labels each spike. In general the firing time
$t=t_{i}^{(f)}$ of a neuron $i$ is set when $u_{i}(t)$ reaches a threshold
value $u_{\text{threshold}}$,
$\displaystyle
u_{i}(t)=u_{\text{threshold}}\quad\wedge\quad\text{sgn}(u_{i}(t)-u_{0})\frac{du_{i}(t)}{dt}>0$
(2) $\displaystyle\Longrightarrow t=t_{i}^{(f)},$
where $\text{sgn}(x)$ is the sign function and $\epsilon_{ij}(t-t_{j}^{(f)})$
determines the response for postsynaptic neuron $i$ from stimuli from
presynaptic neuron $j$. Once a spike is initiated, $u_{i}(t)$ is immediately
reset to $u_{0}$. Equation (1) can therefore be used to model a human neuron:
after the action potential in a neuron has been raised and neurotransmitters
have been transferred, it relaxes back to its ground state until the next
activation happens.
It is worth noting that Eq. (1) assumes no time delay as signals travel the
synapses. This could easily be added with a delay time for each synapse [23].
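The state variable and threshold rule of Eqs. (1) and (2) can be sketched numerically. The following is a minimal sketch: the exponential form of the response kernel $\epsilon_{ij}$ and all parameter values are our own hypothetical choices (the text leaves them unspecified), and the reset term of Eq. (1) is omitted since we only look for the first threshold crossing.

```python
import math

def epsilon(s, tau=5.0):
    # hypothetical exponential postsynaptic response kernel (causal: zero for s < 0)
    return math.exp(-s / tau) if s >= 0 else 0.0

def membrane_potential(t, spike_trains, weights, u0=0.0):
    # Eq. (1) without the reset term: weighted kernel sum over all past spikes
    u = u0
    for w, spikes in zip(weights, spike_trains):
        for tf in spikes:
            u += w * epsilon(t - tf)
    return u

def first_firing_time(spike_trains, weights, u_threshold, t_max=100.0, dt=0.01):
    # scan forward in time until u(t) reaches the threshold, Eq. (2)
    t = 0.0
    while t < t_max:
        if membrane_potential(t, spike_trains, weights) >= u_threshold:
            return t
        t += dt
    return None
```

With these choices, a single unit-weight spike immediately fires a neuron with threshold 0.5, while a threshold of 1.5 is only reached through the temporal summation of two spikes.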
### 2.2 Leaky Integrate-and-Fire Neurons
Figure 2: Leaky integrate-and-fire circuit [18]. A capacitor, $C$, and a
resistor, $R$, are connected in parallel. The voltage over the capacitor
$u(t)$ integrates the current input, while it leaks to ground. When $u(t)$
reaches a threshold value, a switch controlling the input wire is flipped,
stopping new currents into the system for a refractory period. During the
refractory period charge is completely depleted from the capacitor.
The rather general Eq. (1) can be used to model a variety of neuron models.
The LIF model is one of the most prominent neuron types [18]. It can be
modelled by a resistor–capacitor ($RC$) circuit as shown in Fig. 2. The neuron
voltage corresponds to the capacitor voltage $u_{i}(t)$. The LIF model is
described by a differential equation,
$\tau\frac{du_{i}}{dt}=-u_{i}(t)+RI_{i}(t),$ (3)
where $\tau=RC$ is the time constant of the $RC$ circuit, and $R$ and $C$ are
the resistance and capacitance of the resistor and capacitor, respectively.
The incoming current $I_{i}(t)$ is,
$I_{i}(t)=\sum_{j}w_{ij}\sum_{f}\delta(t-t_{j}^{(f)}).$ (4)
The weights $w_{ij}$ determine the connection strength from presynaptic neuron
$j$ to postsynaptic neuron $i$. The sum $\sum_{f}$ is over all presynaptic
spike times $(f)$.
The purpose of the LIF model is to describe how the spiking neuron $\Xi$
responds to external stimuli, i.e., it captures the dynamics of the
$\epsilon_{ij}$ response function in Eq. (1). The LIF model has a memory of
previous inputs $I_{i}(t)$, stored on the capacitor. The resistor ensures that
this memory is only short-term. As before, a spike is fired when $u_{i}(t)$
reaches a threshold value by Eq. 2. A generalization to a non-linear leaky
integrate-and-fire model gives
$\tau\frac{du_{i}}{dt}=F(u_{i})+G(u_{i})I_{i}(t),$ (5)
where $F(u_{i})$ and $G(u_{i})$ are arbitrary functions. It is worth noting
that Eq. (1) gives $u_{i}(t)$ explicitly as a function of the time since the
last input, while Eqs. (3) and (5) are differential equations that must be
integrated in time.
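Equations (3) and (4) can be integrated with a simple forward-Euler sketch. The parameter values below are arbitrary, and each incoming delta spike is taken to increment $u_{i}$ by $Rw/\tau$, which follows from integrating Eq. (3) across the spike.

```python
def simulate_lif(spike_inputs, tau=10.0, R=10.0, u_threshold=1.0,
                 dt=0.01, t_max=50.0, u0=0.0):
    # Forward-Euler integration of tau du/dt = -u + R I(t), Eq. (3), with
    # delta-spike input I(t), Eq. (4). spike_inputs is a list of (t_f, w)
    # pairs; returns the list of output firing times.
    u = u0
    fired = []
    remaining = sorted(spike_inputs)
    t = 0.0
    while t < t_max:
        u += dt * (-(u - u0)) / tau           # leak toward the equilibrium u0
        while remaining and remaining[0][0] <= t:
            _, w = remaining.pop(0)
            u += R * w / tau                  # delta-spike input deposit
        if u >= u_threshold:
            fired.append(t)
            u = u0                            # immediate reset after firing
        t += dt
    return fired
```

A single spike of weight 0.5 stays below threshold and leaks away, whereas three such spikes one time unit apart sum up to a single output spike at the third input, mirroring the integrate-and-fire behavior.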
## 3 Non-volatile Spintronic-Based LIF Neurons
In this section, we introduce our proposal of a non-volatile LIF neuron,
implemented with a magnetic DW in an AFM insulator with orthorhombic (or
biaxial) magnetic symmetry. Although we have chosen toy-model parameters for
computational convenience, see Table 1, it can be shown that the
functionality of the proposed AFM-based neuron is robust against the specific
material parameters and system size, and is scalable by tuning the excitation
amplitude and duration. In addition to showing the scalability and robustness
of our results, we present the results of micromagnetic simulations with
material parameters of hematite in Appendix A.
### 3.1 AFM Model
We consider a generic two-sublattice AFM insulator nanoribbon, with
orthorhombic magnetic structure, modelled by the following potential-energy
density for each sublattice,
$\displaystyle\mathcal{U}_{i}(\bm{m}_{i},\nabla\bm{m}_{i};\bm{r})=$
$\displaystyle
A(\bm{\nabla}\bm{m}_{i})^{2}+4A_{\text{h}}\bm{m}_{i}\cdot\bm{m}_{j}-\mu_{0}M_{\text{s}}\bm{m}_{i}\cdot\bm{H}-K_{\text{easy}}(\bm{m}_{i}\cdot\bm{\mathrm{e}}_{\text{easy}})^{2}+K_{\text{hard}}(\bm{m}_{i}\cdot\bm{\mathrm{e}}_{\text{hard}})^{2}$
$\displaystyle+D\bm{m}_{i}\cdot(\nabla\times\bm{m}_{i})+\eta_{i}\frac{D_{h}}{2}\,\bm{d}_{h}\cdot(\bm{m}_{i}\times\bm{m}_{j}),$
(6)
where $i\neq j\in\\{\text{A, B}\\}$ refer to two AFM sublattices. Within a
micromagnetic framework [24, 25], all magnetic contributions in a unit cell
with volume $V_{0}$ are averaged to a macrospin magnetic moment $\bm{M}$, with
a saturation magnetization value $M_{s}=|\bm{M}|$. The unit vector of
magnetization direction is $\bm{m}=\bm{M}/M_{s}$. $A$ and $A_{\text{h}}$
parameterize the AFM exchange stiffness and the homogeneous Heisenberg
exchange interaction, respectively, $K_{\text{easy (hard)}}>0$ parameterizes
single ion easy (hard) axis anisotropy energy along the
$\bm{\mathrm{e}}_{\text{easy (hard)}}$ direction, $\bm{H}$ is the applied
magnetic field, $D$ is the strength of the inhomogeneous bulk-type
Dzyaloshinskii-Morya interaction (DMI) while $D_{h}$ is the homogeneous DMI
along the direction $\bm{d}_{h}$ with a sublattice-dependent sign
$\eta_{A(B)}=+(-)1$.
We assume the AFM insulator supports a rigid magnetic domain wall (DW) that
connects two uniform AFM domains, see Fig. 3. Within the collective coordinate
approximation [26], the position of the DW center is considered as a dynamical
variable $\mathcal{X}_{\text{DW}}$. In order to control the equilibrium
position of the DW center, the spatial profile of the anisotropy energy
density $K$ can be tuned by an electric field via the voltage-controlled
magnetic anisotropy (VCMA) effect [27, 28, 29, 30, 31] or via strain-induced
magnetic anisotropy [32, 33, 34, 35, 36, 37]. We model a spatially varying
anisotropy as,
$\centering
K(\bm{x})=K_{0}\left[\frac{1}{L_{x}}\left(x-\mathcal{X}_{0}\right)^{2}+1\right],\@add@centering$
(8)
where $L_{x}$ is the length of the AFM nanoribbon along the $x$-direction.
This magnetic anisotropy profile creates a magnetic potential well along the
$x$-direction with a minimum value $K_{0}$ at $\mathcal{X}_{0}$ that can be
engineered. The AFM DW is at its minimum energy if the DW center is placed at
this minimum $\mathcal{X}_{0}$. If there is no spatially dependent magnetic
anisotropy, the system has translational invariance and AFM DWs have no
preferred equilibrium position. In our simulations, without loss of
generality, we set $\mathcal{X}_{0}={2L_{x}}/{3}$.
The spatial dependence of $K(\bm{x})$ ensures that the AFM DW always relaxes
back toward its ground-state position $\mathcal{X}_{0}$ in the absence of
stimuli, giving the neuron a leaky behavior. Due to this anisotropy profile,
the system is also non-volatile in the sense that the ground state of the
neuron is stable. Therefore there is not much standby leakage power in
contrast to common CMOS-based neurons.
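A minimal sketch of the anisotropy well of Eq. (8), using the toy parameters of Table 1 ($L_x = 500\,\mathrm{nm}$, $\mathcal{X}_0 = 2L_x/3$); the quadratic increase of $K$ away from $\mathcal{X}_0$ is what provides the restoring, i.e. leaky, tendency of the DW.

```python
def anisotropy_profile(x, K0, Lx, x0):
    # parabolic anisotropy well of Eq. (8): K(x) = K0 * [(x - x0)^2 / Lx + 1]
    return K0 * ((x - x0) ** 2 / Lx + 1.0)

def restoring_force_density(x, K0, Lx, x0):
    # -dK/dx: a linear pull of the DW back toward the well minimum at x0
    return -2.0 * K0 * (x - x0) / Lx
```

The well is minimal at $\mathcal{X}_0$, and the force density changes sign there, pulling the DW back from either side.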
Table 1: Numerical parameters used for micromagnetic simulations. The corresponding effective field strengths for exchange, easy (hard) anisotropy, and DMI are $\mu_{0}H_{\text{exchange}}=400\,\mathrm{T}$, $\mu_{0}H_{\text{easy (hard)}}=20\,(10)\,\mathrm{T}$ and $\mu_{0}H_{\text{DMI}}=0$–$0.25\,\mathrm{T}$, respectively.

Quantity | Value | Unit
---|---|---
Length of AFM ($L_{x}$) | $500$ | $\mathrm{nm}$
Width of AFM ($L_{y}$) | $20$ | $\mathrm{nm}$
Height of AFM ($L_{z}$) | $4$ | $\mathrm{nm}$
Simulation cell size | $4$ | $\mathrm{nm}$
Inhomogeneous exchange stiffness ($A$) | $1$ | $\mathrm{pJ\,m^{-1}}$
Homogeneous exchange energy ($A_{h}$) | $-200$ | $\mathrm{kJ\,m^{-3}}$
Easy-axis anisotropy energy ($K_{\text{easy}}$) | $20$ | $\mathrm{kJ\,m^{-3}}$
Hard-axis anisotropy energy ($K_{\text{hard}}$) | $10$ | $\mathrm{kJ\,m^{-3}}$
Characteristic length scale $\left(\lambda_{\text{easy}}=\sqrt{A/2K_{\text{easy}}}\right)$ | $5$ | $\mathrm{nm}$
Characteristic length scale $\left(\lambda_{\text{hard}}=\sqrt{A/2K_{\text{hard}}}\right)$ | $7$ | $\mathrm{nm}$
Saturation magnetization ($M_{s}$) | $2.1$ | $\mathrm{kA\,m^{-1}}$
Gilbert damping parameter ($\alpha$) | $0.002$ | 1
Inhomogeneous bulk DMI ($D$) | $0$–$250$ | $\mathrm{\mu J\,m^{-2}}$
Homogeneous DMI ($D_{h}$) | $2$ | $\mathrm{kJ\,m^{-3}}$
Applied magnetic field frequency ($\omega$) | $62.5$ | $\mathrm{rad\,ps^{-1}}$
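The quoted effective fields and length scales can be cross-checked from the table entries. The sketch below assumes the standard micromagnetic conversions $\mu_{0}H_{K}=2K/M_{s}$ and $\mu_{0}H_{\text{exchange}}=|4A_{h}|/M_{s}$, which are not spelled out in the text; with them, $\lambda_{\text{easy}}\approx 5\,\mathrm{nm}$ and $\lambda_{\text{hard}}\approx 7\,\mathrm{nm}$.

```python
import math

# toy-model parameters of Table 1, in SI units
A = 1e-12        # inhomogeneous exchange stiffness, J/m
A_h = -200e3     # homogeneous exchange energy, J/m^3
K_easy = 20e3    # easy-axis anisotropy, J/m^3
K_hard = 10e3    # hard-axis anisotropy, J/m^3
Ms = 2.1e3       # saturation magnetization, A/m

# characteristic DW length scales, lambda = sqrt(A / 2K)
lam_easy = math.sqrt(A / (2 * K_easy))   # ~5.0 nm
lam_hard = math.sqrt(A / (2 * K_hard))   # ~7.1 nm

# effective fields (assumed standard micromagnetic conversions)
mu0_H_easy = 2 * K_easy / Ms             # ~19 T, quoted as 20 T
mu0_H_hard = 2 * K_hard / Ms             # ~9.5 T, quoted as 10 T
mu0_H_exchange = 4 * abs(A_h) / Ms       # ~381 T, quoted as 400 T
```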
### 3.2 AFM DWs as LIF Neurons
Our proposed system is schematically presented in Fig. 3. It consists of an
AFM insulator stripe, an injector (modelling the receptor of a human neuron)
that excites magnons in the AFM insulator via either a circularly polarized
magnetic field pulse or current-induced (anomalous) spin Hall torque mechanism
[38, 39], and a detector (modelling the transmitter). The detector measures
the passing DW via the inverse (anomalous) spin Hall effect of the injected
spin-pumping signal [40, 39, 38, 41, 42]. In a network of connected neurons, this
detector or transmitter must be connected to the injector or receptor of the
following neuron. At a given set of material parameters and excitation
strength, the position of the detector determines the neuron threshold
potential.
AFM DWs are 1D particle-like magnetic solitons that connect two magnetic
domains in magnetic materials. It was recently shown that the position of a DW
in an AFM insulator is controllable through magnon-DW interactions [43]. The
position of AFM DW may be used as a state variable for the LIF neuron,
$u(t)\longrightarrow\mathcal{X}_{\text{DW}}$ [44].
In the following, two generic magnetic geometries for possible implementation
of LIF neurons are investigated and compared, which we will call in-plane (IP)
and out-of-plane (OOP), referring to their magnetic ground-state orientation.
In order to model these two magnetic states using the potential energy density
expression given by Eq. 6, we set $\bm{e}_{\text{easy}}=\hat{e}_{x}$ and
$\bm{e}_{\text{hard}}=\hat{e}_{z}$ in IP case, while for OOP, we set
$\bm{e}_{\text{easy}}=\hat{e}_{z}$ and $\bm{e}_{\text{hard}}=\hat{e}_{x}$.
Therefore, in the IP geometry, the magnetic ground state lies along the
direction of magnon propagation, i.e., the $x$ axis, while in the OOP
geometry, the magnetic ground state is normal to the direction of magnon
propagation. In both cases, we assume the homogeneous DM vector lies parallel
to the hard axis, $\bm{d}_{h}\parallel\bm{e}_{\text{hard}}$.
Figure 3: Schematic setup of the AFM-based single neuron proposal in the IP
geometry. There are two domains in the AFM stripe, represented by the Néel
vectors in blue and red. The two domains are connected by a DW texture in
turquoise. On top of the AFM stripe, an injector is placed at the left side as
a source of magnons and two detectors are placed right and left of the
equilibrium position of the DW, the latter shown by $\mathcal{X}_{0}$.
### 3.3 Equation of Motions for AFM Systems
The dynamics of the normalized sublattice magnetic moments
$\bm{m}_{i\in\\{\text{A, B}\\}}(\bm{r},t)$ at finite temperature is given by
the coupled stochastic Landau-Lifshitz-Gilbert (sLLG) equations,
$\centering\frac{\partial\bm{m}_{i}}{\partial
t}=-|\gamma_{\text{e}}|\mu_{0}\bm{m}_{i}\times(\bm{\mathcal{H}}_{i}+\bm{\mathcal{H}}_{i}^{th})+\alpha\bm{m}_{i}\times\frac{\partial\bm{m}_{i}}{\partial
t}+\bm{T}(\bm{r},t),\@add@centering$ (9)
with the electron gyromagnetic ratio $\gamma_{e}$, the vacuum permeability
$\mu_{0}$, and the Gilbert damping constant $\alpha$. The sublattice-dependent
effective magnetic field
$\bm{\mathcal{H}}_{i}=-(\mu_{0}M_{\text{s}})^{-1}\delta U/{\delta\bm{m}_{i}}$,
is given by the functional derivative of the total potential energy
$U[\bm{m}_{\text{A}},\bm{m}_{\text{B}};\bm{r},t]=\int
d\bm{r}\sum_{i}\mathcal{U}_{i}(\bm{m}_{i},\nabla\bm{m}_{i};\bm{r},t)$. The
current-induced spin transfer torque and magnetic field torque are denoted by
$\bm{T}$ in the sLLG equation. $\bm{T}(\bm{r},t)$ is finite only in the
injector region and during the excitation period.
Finite-temperature dynamics is captured by adding an uncorrelated white-noise
term to the LLG equations as an effective stochastic magnetic field
$\bm{H}^{\text{th}}$, derived from the fluctuation-dissipation theorem [24].
It consists of a normalized Gaussian distribution that is scaled with the
prefactor $\xi_{th}=\sqrt{\frac{2\alpha k_{B}T}{\gamma_{e}\mu_{0}M_{s}V\Delta
t}}$, containing the thermal energy $k_{B}T$, with the Boltzmann constant
$k_{B}$, the cell volume $V$ and the time step of the simulation $\Delta t$.
This prefactor corresponds to the standard deviation $\sigma$ of a Gaussian
distribution. The time step of the simulation is set to $\Delta
t=2\,\mathrm{fs}$ at zero temperature and $\Delta t=1\,\mathrm{fs}$ at finite
temperature.
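For concreteness, the thermal-field prefactor can be evaluated with the toy parameters of Table 1. The $T=300\,\mathrm{K}$ value below, the (4 nm)$^3$ cell, and the assumption that $\xi_{th}$ is expressed as a field in A/m are our own choices, not stated in the text.

```python
import math

def thermal_prefactor(alpha, T, Ms, V, dt,
                      gamma_e=1.760859e11,   # electron gyromagnetic ratio, rad s^-1 T^-1
                      mu0=4e-7 * math.pi,    # vacuum permeability, T m/A
                      kB=1.380649e-23):      # Boltzmann constant, J/K
    # standard deviation xi_th of the stochastic thermal field (here in A/m)
    return math.sqrt(2 * alpha * kB * T / (gamma_e * mu0 * Ms * V * dt))

# example: alpha = 0.002, Ms = 2.1 kA/m, (4 nm)^3 cell, dt = 1 fs, T = 300 K
xi = thermal_prefactor(0.002, 300.0, 2.1e3, (4e-9) ** 3, 1e-15)
```

With these numbers the prefactor comes out at a few tens of kA/m per cell and time step, and it vanishes at $T=0$ as expected.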
In general, the spin-pumping effect enhances the local Gilbert damping in the
injector and detector regions [45]. In our simulations, we have neglected this
small spin-pumping-induced damping enhancement [46].
To solve the coupled sLLG equations for our AFM system, we use the Boris
Computational Spintronics software [25]. The parameters used in the
micromagnetic simulations are listed in Table 1.
## 4 Results
In this section, we characterize our proposed non-volatile AFM-based LIF
neuron. As mentioned earlier, AFM DWs are displaced by AFM magnons that can
be generated by either magnetic field pulses or by (anomalous) spin Hall
torque. First, as a proof of concept of AFM-based neurons, we study the
interaction between monochromatic magnons, excited by a magnetic field pulse,
and AFM DWs at zero temperature. Since all-electric control of neurons is the
technologically relevant case, in the second part of this section, we show
that our proposed single neuron may indeed work by spin Hall torque at finite
temperature.
### 4.1 Magnon-Induced AFM DW Motion by Magnetic Fields
Magnetic field pulses may excite monochromatic AFM magnons with certain
polarizations. It was theoretically shown that these AFM magnons can displace
AFM textures in opposite directions depending on their polarizations, values
of DMI, and the Gilbert damping parameter [43, 47, 48].
Table 2: Four-stage protocol for magnon-induced DW movement, induced by a transverse magnetic field pulse.

Stage | Magnetic Field Pulse | Polarization
---|---|---
Excitation 1 | $\bm{H}_{IP}(t)=\left(0,H_{0}\cos\omega t,H_{0}\sin\omega t\right)$, $\bm{H}_{OOP}(t)=\left(H_{0}\cos\omega t,H_{0}\sin\omega t,0\right)$ | $\bm{\circlearrowleft}$
Relaxation 1 | $H_{0}=0$ | -
Excitation 2 | $\bm{H}_{IP}(t)=\left(0,H_{0}\sin\omega t,H_{0}\cos\omega t\right)$, $\bm{H}_{OOP}(t)=\left(H_{0}\sin\omega t,H_{0}\cos\omega t,0\right)$ | $\bm{\circlearrowright}$
Relaxation 2 | $H_{0}=0$ | -
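The pulse sequence of Table 2 in the IP geometry can be written as a small helper; the stage labels below are our own naming, not from the text, and $\omega$ can be taken as the $62.5\,\mathrm{rad\,ps^{-1}}$ of Table 1.

```python
import math

def field_pulse_IP(t, stage, H0, omega):
    # transverse magnetic field of Table 2, IP geometry; stages (our labels):
    # "exc1" (left-handed), "relax1", "exc2" (right-handed), "relax2"
    if stage == "exc1":
        return (0.0, H0 * math.cos(omega * t), H0 * math.sin(omega * t))
    if stage == "exc2":
        return (0.0, H0 * math.sin(omega * t), H0 * math.cos(omega * t))
    return (0.0, 0.0, 0.0)   # relaxation stages: field switched off
```

At $t=0$ the two excitation stages start with the field along $\hat{y}$ and $\hat{z}$, respectively, and both rotate in the plane transverse to the magnon propagation direction with opposite helicity.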
In this part, first, we demonstrate the control of the AFM DW in our setup. To
do so, a four-stage protocol is run, see Table 2: In the first excitation
stage, a small amplitude transverse magnetic field pulse with circular
polarization is applied in the injector region to excite the AFM magnon
eigenmodes in the magnetic layer. Afterwards, the magnetic field pulse is
turned off and the system may relax back to its ground state in the first
relaxation stage. Then, in the second excitation stage, the magnetic field
pulse is applied again but with the opposite helicity. Finally, it is turned
off again in the second relaxation stage. In Fig. 4, we present snapshots of
magnon-induced AFM DW motion in an IP geometry for one excitation followed by
one relaxation stage: while the magnetic field is turned on, the AFM DW
travels from its equilibrium position (Fig. 4a) towards the left (Figs. 4b and
4c). Once the magnetic field is turned off, it relaxes back toward its
equilibrium position (Figs. 4d and 4e). AFM-DW motion shows an inertial
behavior. When the magnonic forces exerted on the AFM DW vanish, the AFM DW
continues to move, until the Gilbert-damping-induced dissipative force stops
it and consequently the attractive potential of the magnetic anisotropy pulls
it back towards its equilibrium position. This inertial behavior can be seen
as a slight overshooting in the DW trajectories presented in Figs. 5, 6 and 7
[49, 50].
By tuning the excitation strength and the distance of the detector from the
magnetic anisotropy minimum, one can set the threshold for the firing
mechanism. Depending on the strength of the DMI $D$, the DW surface can be
tilted. This DMI-induced tilting was also reported in ferromagnetic DWs [51].
Figure 4: Snapshots of all-magnonic DW motion through an AFM-based neuron in
the IP configuration with magnetic field pulse excitation. In (a), the DW is
at equilibrium position $\mathcal{X}_{DW}=\mathcal{X}_{0}$, set by the
magnetic anisotropy profile. Once a left-handed magnetic field pulse with
strength $H_{0}$ is turned on, left-handed AFM magnons are excited at the
injector. As a result, the DW moves towards the magnon source, panels (b) and
(c). After switching the magnetic field off, the DW relaxes back to its
equilibrium position, panels (d) and (e). The illustrated movement corresponds
to the first excitation stage followed by the first relaxation stage in our
protocol. We set $D=150\,\mathrm{\mu J\,m^{-2}}$ in this case.
### 4.2 Direction and amplitude of the DW displacement
In this part, we show that the movement of AFM DWs can be controlled on
demand, which makes them more flexible than their ferromagnetic counterparts.
Besides the excitation strength (here the magnetic field strength), the magnon
polarization and the inhomogeneous DMI strength have a major impact on the DW
displacement. In Fig. 5 the trajectory of the AFM DW center in the IP geometry
(Fig. 5(a)) and OOP (Fig. 5(b)) is shown during the four-stage protocol, see
Table 2. The orange areas in the plots sketch when and where the magnetic
field pulse is applied while arrows indicate the helicity of the magnetic
field pulse. The color map refers to the strength of the inhomogeneous bulk
DMI, starting from dark blue for $D=0$ and increasing over green to yellow for
$D=250\,\mathrm{\mu J\,m^{-2}}$ ($D=200\,\mathrm{\mu J\,m^{-2}}$)
for the IP (OOP) geometry. Every single line represents one DW trajectory at a
given set of parameters. For example, at an intermediate DMI strength, the
dark green curve in the IP case (Fig. 5(a)), the DW moves towards the injector
during the first excitation stage ($0$–$25\,\mathrm{ps}$), then relaxes back
to the equilibrium position ($35$–$50\,\mathrm{ps}$), and in the second
excitation stage with opposite helicity the AFM DW is pushed away from the
injector ($50$–$75\,\mathrm{ps}$) before relaxing back to the equilibrium
position again.
The first difference between the two cases is the polarization dependency of
AFM DW motion. The displacement of an AFM DW in the OOP geometry is
insensitive to the polarization of the excited AFM magnons, while the
displacement of an AFM DW in the IP case is polarization dependent.
Figure 5 shows that in the OOP geometry, only the strength of the
inhomogeneous DMI determines the direction of the DW motion, but in the IP
geometry, both the strength of the inhomogeneous DMI and the chirality of the
excited magnons set the direction of AFM DW displacement.
The amplitude and direction of the maximum displacement of the AFM DW center,
$\mathcal{X}_{\rm{DW}}^{\rm{max}}$, show a complicated relation with
inhomogeneous DMI strength, see the insets in Figs. 5(a) and 5(b). Recent
theoretical studies have shown that, in the presence of an inhomogeneous DMI,
several torques and forces act on the AFM DW, and thus the competition between
them determines the direction and amplitude of the DW displacement [43].
(a) IP geometry
(b) OOP geometry
Figure 5: DMI-dependent all-magnonic AFM DW movement. Left- and right-handed
AFM magnons are excited with polarized magnetic field pulses, see the orange
area. In the IP geometry (a) the direction and amplitude of the DW motion can
be tuned by DMI strength and the chirality of the excited magnons. However,
the direction of AFM DW motion in the OOP geometry (b) is independent of the
magnon chirality. The strength of DMI is encoded by colors, from lowest $D=0$
in blue to highest in yellow, see the insets. In the insets, the maximal
displacements of AFM DWs, $\mathcal{X}_{\text{DW}}^{\text{max}}$, are shown
for each excitation stage (crosses for the first and points for the second
excitation stage).
### 4.3 LIF Behavior of AFM DWs
As we discussed earlier, biological neurons have LIF characteristics: if the
input signal (or the sum of input spikes) reaches a threshold, the neuron
fires, and then relaxes back to its ground state. In this part, we demonstrate
that our proposed setup indeed can mimic the LIF behavior. In Fig. 6(a) the
time-dependent AFM DW position in the IP geometry is shown, excited with three
successive short magnetic field pulses. One single pulse is not strong enough
to move the AFM DW to the detector while three pulses can move the DW toward
the detector, where it triggers a spike in the read-out (see Fig. 6(b); more
details are given in the next section). This is a demonstration of the
integrate-and-fire behavior of our proposed non-volatile spintronic-based
neuron. The
_leaky_ nature of the neuron becomes evident as the DW reverts towards its
equilibrium position, influenced by the magnetic anisotropy profile, in the
absence of the stimulating signal.
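The integrate-and-leak behavior can be caricatured by an overdamped collective-coordinate sketch: a linear restoring force from the parabolic anisotropy well plus a constant magnonic driving force during each pulse. All parameters are illustrative (dimensionless), and the model deliberately omits the inertial overshoot discussed in Section 4.1.

```python
def dw_trajectory(pulses, x0=0.0, k=1.0, damping=2.0, f=1.0,
                  dt=1e-3, t_max=10.0):
    # Overdamped toy dynamics: dx/dt = -(k/damping) (x - x0) + force(t).
    # `pulses` lists (t_on, t_off) windows during which a constant magnonic
    # force f pushes the DW away from the well minimum x0.
    x, traj = x0, []
    t = 0.0
    while t < t_max:
        force = f if any(a <= t < b for a, b in pulses) else 0.0
        x += dt * (-(k / damping) * (x - x0) + force)
        traj.append((t, x))
        t += dt
    return traj
```

A single short pulse produces only a modest excursion that leaks back, while three successive pulses integrate to a markedly larger maximum displacement before the DW relaxes toward $x_0$ again, qualitatively mirroring Fig. 6(a).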
(a) Trajectory of the AFM DW
(b) Spin-pumping signal at the detector
Figure 6: Leaky integrate-and-fire behavior of the all-magnonic AFM DW motion
in the IP geometry with a DMI strength of $D=150\,\mathrm{\mu J\,m^{-2}}$. (a)
The integration of three separate pulses, denoted by orange areas, provides
enough energy to pull the DW away from its equilibrium position, denoted by
the gray dashed line, to the detector, denoted by the blue area. This is the
realization of the integrate-and-fire characteristic of the LIF model. During
the inter-pulse intervals, the DW undergoes relaxation towards its equilibrium
position, thereby exhibiting the _leaky_ property. After the last pulse, the
AFM DW relaxes back to the equilibrium position. (b) An impulse-like signal is
fired when the DW passes the detector at $t=25\,\mathrm{ps}$. This
spike, generated when the synaptic inputs to the neuron reach a certain
threshold value, represents the neuron action potential.
### 4.4 Electrical Readout of the AFM DW Position
A detector on top of the AFM stripe measures the passing of the AFM DW by
converting the spin-pumping signal, induced by the AFM DW dynamics, to an
electric voltage via either the inverse spin Hall effect [52] or the recently
discovered inverse anomalous spin Hall effect [39]. In the former case, the detector
is a nonmagnetic heavy metal and can only measure the component of spin-
pumping signal parallel to the interface. In the latter case, the detector is
a ferromagnetic metal with a strong spin-orbit coupling that can measure
different components of the spin-pumping signal.
The interfacial spin accumulation that arises from the DW-dynamics-induced
spin-pumping, is given by [40, 53],
$\centering\bm{\mu}(t):=G_{r}^{\uparrow\downarrow}\big{\langle}\sum_{i=\text{A,
B}}\big{(}\bm{m}_{i}(t,\bm{r})\times\dot{\bm{m}}_{i}(t,\bm{r})\big{)}\big{\rangle},\@add@centering$
(10)
where $G_{r}^{\uparrow\downarrow}$ is the real part of the spin mixing
conductance [45] and $\langle...\rangle$ denotes spatial average over the
detector interface region. In the present calculations, we have ignored the
contribution of the imaginary part of the spin mixing conductance to the total
spin accumulation; this contribution is sensitive to the quality of the
interfaces and is negligible at disordered interfaces [40]. In Appendix B, we demonstrate
that the contribution of the imaginary part of the spin mixing conductance to
the spin pumping signal in our setup is in general negligible.
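A discrete version of Eq. (10) can be evaluated directly from the simulated sublattice magnetizations. The array layout (time, cell, component) and the choice $G_{r}^{\uparrow\downarrow}=1$ are our own conventions for this sketch.

```python
import numpy as np

def spin_accumulation(mA, mB, dt, G_r=1.0):
    # Discrete Eq. (10): mu(t) = G_r < sum_i m_i x dm_i/dt >, spatially
    # averaged over the detector cells. mA, mB have shape (time, cell, 3).
    mu = np.zeros((mA.shape[0], 3))
    for m in (mA, mB):
        dm = np.gradient(m, dt, axis=0)          # time derivative per cell
        mu += np.cross(m, dm).mean(axis=1)       # spatial average over cells
    return G_r * mu
```

As a sanity check, a uniform in-plane precession of both sublattices about $\hat{z}$ at angular frequency $\omega$ yields a constant spin accumulation of $2\omega$ along $\hat{z}$.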
In Fig. 6(b), the temporal evolution of the spin accumulation signal
$\mu_{x}(t)$ is presented for the IP geometry. In this example, as shown in
Fig. 6(a) and described in the previous section, an AFM DW is pulled towards
the detector with several small pulses. At the detector, the spin-pumping
signal Eq. 11 is recorded over time. We subtract the background signal caused
by the pumped magnons to find the filtered spin pumping signal arising from
the AFM DW dynamics (blue curve). This signal clearly shows a maximum at
around $t=$15\text{\,}\mathrm{p}\mathrm{s}$$, which is when the AFM DW reaches
the detector.
Figure 7: Electrical control of the AFM DW motion in the IP
geometry. The orange areas depict the injector region that excites magnons via
spin transfer torque pulses with two opposite spin torques, indicated by the
arrow directions, at a finite temperature. Each trajectory is computed from an
ensemble average over 60 realizations, and the shaded uncertainty band
represents the standard deviation of the ensemble average. The equilibrium
position of DW at $\mathcal{X}_{0}$ is denoted by a horizontal gray dashed
line.
### 4.5 Magnon-Induced AFM-DW Motion by Spin Hall Torque
Depending on the application, it might be an advantage to have an artificial
single neuron that operates only electrically. To show that our proposed setup
also has all-electrical functionality, we replace the incident magnetic field
pulse with a spin torque that results from a current-induced (anomalous) spin
Hall torque in a nonmagnetic (magnetic) heavy-metal lead on top of the AFM at
finite temperature. Through the (anomalous) spin Hall effect, a charge current
in the injector is converted to a spin accumulation at the interface of the
heavy metal and the AFM insulator. A nonequilibrium spin density with spin
angular momentum along the easy-axis anisotropy may excite incoherent magnons
in the AFM insulator via an interfacial spin transfer torque at finite
temperature [38, 54]. The chirality of excited magnons is controlled by the
charge current direction and consequently the sign of the spin transfer
torque.
Figure 7 represents the displacement of an AFM DW in the IP geometry. Similar
to the four-stage protocol used before, we run the following stages: After
initialization of the DW in its equilibrium position $\mathcal{X}_{0}$, the
spin transfer torque is turned on for $25\,\mathrm{ps}$ as the first
excitation stage, and then turned off for the first relaxation stage. In
Fig. 7 we see that the time interval between turning on the injector and the
onset of DW motion is much longer than in the previous case, where magnons
were excited by a magnetic field, see Fig. 5. This is because the spin transfer
torque excitation mechanism needs time to build up enough magnons in the
system.
In the second excitation stage, we change the sign of the spin accumulation
and thus spin transfer torque in the injector, which is equivalent to
reversing the direction of the charge current in the heavy metal layer. In
Fig. 7, three AFM DW trajectories for different inhomogeneous DMI strengths
are shown. Since temperature is finite and thus the time evolution is
non-deterministic, we perform an ensemble average for each AFM DW trajectory.
The shaded uncertainty band for each line represents the standard deviation of
the ensemble average. In the absence of DMI (black line), the direction of spin
accumulation does not have an impact on the DW motion direction and the DW is
pulled towards the injector in both cases. This is consistent with our
previous result for the magnetic-field-induced magnon case, in which the
direction of AFM DW motion was polarization independent in the absence of
inhomogeneous DMI. Turning the DMI on, however, leads to
polarization-dependent DW motion.
### 4.6 Dynamical control of biologically realistic characteristics
Recently, an artificial neuron based on AFM auto-oscillators was proposed [55]
and it was shown that this neuron possesses some of the main ingredients of
biological neurons. In this subsection, we assess how our proposed neurons,
which are based on AFM DWs, intrinsically resemble several characteristics of
biological neurons, namely latency, bursting, inhibition, and refraction. We
argue that these features can be dynamically tuned in our proposed model.
Neuronal response latency– Latency describes the delay time between the
excitation and the firing [56]. In our proposed setup, this is the time
between the excitation of magnons at the injector, and the read-out of the
AFM-DW-induced spikes in the detector. This time depends on the excitation
strength, the anisotropy profile, the distance between the detector and the
injector, and the material parameters; thus, it can be tuned. In Figs. 5, 6
and 7, one can see the delay between the onset of the excitation (time window
of excitation indicated by orange areas) and the DW movement.
Burst firing– This is a dynamic state that occurs when the input of a neuron
(or excitation strength) exceeds a certain threshold and, as a consequence,
more than one signal is fired [57]. In our system, this may happen when the DW
is moved to distances from equilibrium greater than the detector distance.
Then, it will pass underneath the detector twice, each time triggering an
output signal. An example is shown in Fig. 8, where the detector is placed
closer to the equilibrium position than in the case shown in Section 4.3. Like
the latency, the bursting threshold depends on the excitation strength, the
magnetic anisotropy profile, the detector distance and the material
parameters. In Fig. 8(b), an additional signal is present at around
$12\,\mathrm{ps}$ when the DW passes the detector. We attribute this signal to
magnon emission by the DW motion [58, 59].
Absolute refractory period– The refractory period is the time that a neuron
needs to relax back into the resting state from which it can fire again [56].
In our system, the refractory period is non-zero if the DW passes the detector
position (which happens in the case of bursting described before). Then, it
has to relax back towards the equilibrium position before being able to fire
again.
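The leak-integrate-fire-refractory cycle described above can be condensed into a toy model: a single coordinate (the DW position) is kicked by input pulses, relaxes back toward equilibrium under the anisotropy gradient, and emits a spike when it reaches a detector position. The sketch below uses purely illustrative numbers; they are not the micromagnetic parameters of our simulations.

```python
from collections import Counter

def simulate_dw_neuron(pulse_times, dt=0.01, n_steps=2000, leak=2.0,
                       kick=0.2, detector_pos=0.5, refractory_steps=50):
    """Toy leaky-integrate-and-fire dynamics inspired by the AFM-DW neuron:
    each input pulse kicks the wall coordinate x away from equilibrium (x = 0),
    the restoring force pulls it back (the leak), and a spike is emitted
    whenever x reaches the detector position. All numbers are illustrative."""
    pulse_steps = Counter(int(round(tp / dt)) for tp in pulse_times)
    x, refractory, spikes = 0.0, 0, []
    for step in range(n_steps):
        x += -leak * x * dt + kick * pulse_steps.get(step, 0)
        if refractory > 0:
            refractory -= 1           # wall still relaxing back: cannot fire
        elif x >= detector_pos:
            spikes.append(step * dt)  # DW passes under the detector -> spike
            refractory = refractory_steps
    return spikes
```

Closely spaced pulses integrate up to the detector position and fire with a finite latency, while an isolated pulse leaks away without firing, mirroring the behavior discussed above.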
(a) Trajectory of the AFM DW
(b) Spin-pumping signal at the detector
Figure 8: Bursting behavior in the IP geometry with a DMI strength of
$D=150\text{\,}\mathrm{\SIUnitSymbolMicro J}\text{\,}{\mathrm{m}}^{-2}$. (a)
A longer magnon excitation, here by a magnetic field, provides enough energy
to pull the AFM DW away from its equilibrium position (gray dashed line) and
past the detector (blue area). (b) An impulse-like signal of opposite
polarity is fired each time the AFM DW passes the detector in opposite
directions.
Neural inhibition– Biological neurons can exert inhibitory control over their
connected neurons. Inhibitory neurons modulate the firing behavior of other
neurons, signaling them to refrain from firing [60]. In a network structure,
inhibition corresponds to negative weights [61]. In our proposal, negative
weights can be achieved by placing detectors to the left and right of the
equilibrium position of the DW. As demonstrated in Fig. 5(a) and Fig. 7, the
helicity of the applied magnetic field and the direction of the spin torque
control the direction of the DW displacement and thereby determine whether
the signal is detected at the left or right detector during spike readout.
Subsequently, a positive weight can be attributed to one of the readout
signals and a negative weight to the other. Consequently, upon integration
into the subsequent layer, these weights correspond to the helicity or spin
torque direction. Thus, during the integration of pulses in the next neuron,
competing forces can act on the DW. An example is shown in Fig. 9, where one
of the three excitation pulses has opposite chirality and thus pushes the DW
away from the detector. To the best of our knowledge, inhibition has not been
incorporated in FM DW-based neurons thus far. However, as demonstrated in our
proposal, the chirality of magnons in AFM systems represents a crucial degree
of freedom that enables this particular feature of biological neurons.
Figure 9: Demonstration of inhibition in the IP geometry:
Integration over excitation pulses with different helicities demonstrates the
possibility of modelling inhibition. Compare to Fig. 6(a), where pulses with
the same chirality are integrated and lead to a spiking event.
### 4.7 Suggested network structure
In this article, a detailed study of an AFM-based single neuron was conducted,
focusing on the demonstration of its LIF properties. Although further
implementations extend beyond the scope of this work, a brief outlook will be
provided on the construction of a SNN using the proposed neurons.
As explained in Section 2.1, the input to each neuron involves the
accumulation of multiple spike trains. In our system, this process is modeled
using pulses of either a magnetic field or an electric current-induced SHE.
Within the neuron, the integration of incoming pulses may or may not lead to a
spiking event. The output is an electrical readout of the spiking event, which
is subsequently forwarded to the next layer. To facilitate network training,
incoming signals can be scaled with trainable weights, denoted as $w_{i}$, see
Fig. 1. In our system, the amplitude of these weights corresponds to the
excitation strength and/or duration, while the sign can be set by evaluating
which detector reads out the spike. If the weight is negative, magnons of
opposite chirality are excited in the subsequent neuron by reversing the
helicity of the magnetic field or the current direction, respectively.
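As a toy illustration of how such signed weights could act on a downstream neuron, the sketch below integrates a train of weighted pulses: positive weights push the wall coordinate toward the detector, negative weights (opposite magnon chirality) push it away. The threshold, leak, and amplitudes are arbitrary illustrative numbers, not material parameters.

```python
def integrate_weighted_spikes(spike_amplitudes, weights, threshold=1.0,
                              leak=0.1):
    """Sketch of signed-weight integration in a downstream DW neuron:
    a positive weight models a magnon kick toward the detector, a negative
    weight (opposite chirality) a kick away from it. Fires once the wall
    coordinate x reaches the detector threshold."""
    x = 0.0
    for a, w in zip(spike_amplitudes, weights):
        x = (1.0 - leak) * x + w * a   # leak toward equilibrium, then kick
        if x >= threshold:
            return True                 # downstream neuron fires
    return False
```

With all-positive weights the pulses accumulate and the neuron fires; flipping the sign of one weight (one pulse of opposite chirality, as in Fig. 9) keeps the wall below threshold.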
## Summary and Concluding Remarks
In this paper, we have proposed a non-volatile, low-energy-cost, and fast-
operating single neuron based on a DW texture in an AFM insulator with an
anisotropy gradient. Our proposed AFM-based neuron shows leaky integrate-and-
fire behavior, which can model a biological neuron. This single neuron is
activated by AFM magnons, which can be excited at the source region by either
a magnetic field pulse or a spin-transfer-torque mechanism. The source region
that injects magnons into the system resembles a dendrite in a nerve cell.
Our proposed AFM-based single neuron can have two detectors, which makes it
possible to model the inhibition feature of biological neurons. The detectors
act as transmitters, resembling the synaptic terminals of neurons, and will
be connected to neighboring neurons. In general, one can also replace the AFM
DW in our setup with topologically stable AFM skyrmions. The synchronization
and functionality of connected single neurons remain important open questions
that should be explored further, theoretically and experimentally, in future
studies.
## References
* [1] Mahmoud, A. _et al._ An introduction to spin wave computing. _J. Appl. Phys._ 128, 161101, DOI: 10.1063/5.0019328 (2020).
* [2] Christensen, D. V. _et al._ 2022 roadmap on neuromorphic computing and engineering. _Neuromorph. Comput. Eng._ 2, 022501, DOI: 10.1088/2634-4386/ac4a83 (2022).
* [3] Liu, D., Yu, H. & Chai, Y. Low-power computing with neuromorphic engineering. _Adv. Intell. Syst._ 3, 2000150, DOI: 10.1002/aisy.202000150 (2020).
* [4] Rao, A., Plank, P., Wild, A. & Maass, W. A long short-term memory for ai applications in spike-based neuromorphic hardware. _Nat. Mach. Intell._ 4, 467, DOI: 10.1038/s42256-022-00480-w (2022).
* [5] Levy, W. B. & Calvert, V. G. Communication consumes 35 times more energy than computation in the human cortex, but both costs are needed to predict synapse number. _Proc. Natl. Acad. Sci. U.S.A._ 118, e2008173118, DOI: 10.1073/pnas.2008173118 (2021).
* [6] An, H., Bai, K. & Yi, Y. _The Roadmap to Realizing Memristive Three-Dimensional Neuromorphic Computing System Advances in Memristor Neural Networks - Modeling and Applications_ (IntechOpen, 2018).
* [7] Grollier, J. _et al._ Neuromorphic spintronics. _Nat. Electron._ 3, 360, DOI: 10.1038/s41928-019-0360-9 (2020).
* [8] Brataas, A., van Wees, B., Klein, O., de Loubens, G. & Viret, M. Spin insulatronics. _Phys. Rep._ 885, 1–27, DOI: https://doi.org/10.1016/j.physrep.2020.08.006 (2020). Spin Insulatronics.
* [9] Brigner, W. H. _et al._ Purely spintronic leaky integrate-and-fire neurons. In _2022 IEEE International Symposium on Circuits and Systems (ISCAS)_ , 1189–1193, DOI: 10.1109/ISCAS48785.2022.9937890 (2022).
* [10] Brigner, W. H. _et al._ Domain wall leaky integrate-and-fire neurons with shape-based configurable activation functions. _IEEE Transactions on Electron Devices_ 69, 2353–2359, DOI: 10.1109/TED.2022.3159508 (2022).
* [11] Brigner, W. H. _et al._ Graded-anisotropy-induced magnetic domain wall drift for an artificial spintronic leaky integrate-and-fire neuron. _IEEE Journal on Exploratory Solid-State Computational Devices and Circuits_ 5, 19–24, DOI: 10.1109/JXCDC.2019.2904191 (2019).
* [12] Jungwirth, T., Marti, X., Wadley, P. & Wunderlich, J. Antiferromagnetic spintronics. _Nat. Nanotechnol._ 11, 231, DOI: 10.1038/nnano.2016.18 (2016).
* [13] Kurenkov, A., Fukami, S. & Ohno, H. Neuromorphic computing with antiferromagnetic spintronics. _J. Appl. Phys._ 128, 010902, DOI: 10.1063/5.0009482 (2020).
* [14] Zhang, S. & Tserkovnyak, Y. Antiferromagnet-based neuromorphics using dynamics of topological charges. _Phys. Rev. Lett._ 125, 207202, DOI: 10.1103/PhysRevLett.125.207202 (2020).
* [15] Bradley, H. _et al._ Artificial neurons based on antiferromagnetic auto-oscillators as a platform for neuromorphic computing. _arXiv_ DOI: 10.48550/ARXIV.2208.06565 (2022).
* [16] Bindal, N., Ian, C. A. C., Lew, W. S. & Kaushik, B. K. Antiferromagnetic skyrmion repulsion based artificial neuron device. _Nanotechnology_ 32, 215204, DOI: 10.1088/1361-6528/abe261 (2021).
* [17] Maass, W. Networks of spiking neurons: The third generation of neural network models. _Neural Netw._ 10, 1659–1671, DOI: https://doi.org/10.1016/S0893-6080(97)00011-7 (1997).
* [18] Gerstner, W. & Kistler, W. M. _Spiking Neuron Models: Single Neurons, Populations, Plasticity_ (Cambridge University Press, 2002).
* [19] Stein, R. B. A theoretical analysis of neuronal variability. _Biophys. J._ 5, 173–194, DOI: https://doi.org/10.1016/S0006-3495(65)86709-1 (1965).
* [20] Stoliar, P. _et al._ A leaky-integrate-and-fire neuron analog realized with a mott insulator. _Adv. Funct. Mater._ 27, 1604740, DOI: https://doi.org/10.1002/adfm.201604740 (2017).
* [21] Maass, W. Networks of spiking neurons: The third generation of neural network models. _Neural Netw._ 10, 1659, DOI: 10.1016/S0893-6080(97)00011-7 (1997).
* [22] Jang, H., Skatchkovsky, N. & Simeone, O. Spiking neural networks-part i: Detecting spatial patterns. _IEEE Commun. Lett._ 25, 1736, DOI: 10.1109/LCOMM.2021.3050207 (2021).
* [23] Bohte, S. M., Kok, J. N. & La Poutré, H. Error-backpropagation in temporally encoded networks of spiking neurons. _Neurocomputing (Amsterdam)_ 48, 17, DOI: 10.1016/S0925-2312(01)00658-0 (2002).
* [24] Etz, C., Bergqvist, L., Bergman, A., Taroni, A. & Eriksson, O. Atomistic spin dynamics and surface magnons. _J. Phys. Condens. Matter_ 27, 243202, DOI: 10.1088/0953-8984/27/24/243202 (2015).
* [25] Lepadatu, S. Boris computational spintronics—High performance multi-mesh magnetic and spin transport modeling software. _J. Appl. Phys._ 128, 243902, DOI: 10.1063/5.0024382 (2020).
* [26] Tveten, E. G., Qaiumzadeh, A., Tretiakov, O. A. & Brataas, A. Staggered dynamics in antiferromagnets by collective coordinates. _Phys. Rev. Lett._ 110, 127208, DOI: 10.1103/PhysRevLett.110.127208 (2013).
* [27] Rana, B. & Otani, Y. Towards magnonic devices based on voltage-controlled magnetic anisotropy. _Commun. Phys._ 2, 90, DOI: 10.1038/s42005-019-0189-6 (2019).
* [28] Ma, C. _et al._ Electric field-induced creation and directional motion of domain walls and skyrmion bubbles. _Nano Lett._ 19, 353, DOI: 10.1021/acs.nanolett.8b03983 (2018).
* [29] Yu, G. _et al._ Room-temperature creation and spin–orbit torque manipulation of skyrmions in thin films with engineered asymmetry. _Nano Lett._ 16, 1981, DOI: 10.1021/acs.nanolett.5b05257 (2016).
* [30] Skowroński, W. _et al._ Perpendicular magnetic anisotropy of ir/cofeb/mgo trilayer system tuned by electric fields. _Appl. Phys. Express_ 8, 053003, DOI: 10.7567/APEX.8.053003 (2015).
* [31] Kawabe, T. _et al._ Electric-field-induced changes of magnetic moments and magnetocrystalline anisotropy in ultrathin cobalt films. _Phys. Rev. B_ 96, 220412, DOI: 10.1103/PhysRevB.96.220412 (2017).
* [32] Herklotz, A. _et al._ Designing magnetic anisotropy through strain doping. _Adv. Sci._ 5, 1800356, DOI: https://doi.org/10.1002/advs.201800356 (2018).
* [33] Belyaev, B. A., Izotov, A. V., Solovev, P. N. & Boev, N. M. Strain-gradient-induced unidirectional magnetic anisotropy in nanocrystalline thin permalloy films. _Phys. Status Solidi RRL_ 14, 1900467, DOI: https://doi.org/10.1002/pssr.201900467 (2020).
* [34] Ebrahimian, A., Dyrdał, A. & Qaiumzadeh, A. Control of magnetic states and spin interactions in bilayer crcl3 with strain and electric fields. _arXiv_ DOI: 10.48550/ARXIV.2209.04527 (2022).
* [35] Vishkayi, S. I., Torbatian, Z., Qaiumzadeh, A. & Asgari, R. Strain and electric-field control of spin-spin interactions in monolayer ${\mathrm{cri}}_{3}$. _Phys. Rev. Materials_ 4, 094004, DOI: 10.1103/PhysRevMaterials.4.094004 (2020).
* [36] Meer, H. _et al._ Strain-induced shape anisotropy in antiferromagnetic structures. _Phys. Rev. B_ 106, 094430, DOI: 10.1103/PhysRevB.106.094430 (2022).
* [37] Zhukova, V. _et al._ Grading the magnetic anisotropy and engineering the domain wall dynamics in fe-rich microwires by stress-annealing. _Acta Materialia_ 155, 279–285, DOI: https://doi.org/10.1016/j.actamat.2018.05.068 (2018).
* [38] Lebrun, R. _et al._ Tunable long-distance spin transport in a crystalline antiferromagnetic iron oxide. _Nature_ 561, 222, DOI: 10.1038/s41586-018-0490-7 (2018).
* [39] Das, K. S., Liu, J., van Wees, B. J. & Vera-Marun, I. J. Efficient injection and detection of out-of-plane spins via the anomalous spin hall effect in permalloy nanowires. _Nano Lett._ 18, 5633, DOI: 10.1021/acs.nanolett.8b02114 (2018).
* [40] Cheng, R., Xiao, J., Niu, Q. & Brataas, A. Spin pumping and spin-transfer torques in antiferromagnets. _Phys. Rev. Lett._ 113, 057601, DOI: 10.1103/PhysRevLett.113.057601 (2014).
* [41] Mal’shukov, A. G. Spin pumping by a moving domain wall at the interface of an antiferromagnetic insulator and a two-dimensional metal. _arXiv_ DOI: 10.48550/ARXIV.2211.01195 (2022).
* [42] Pham, V. T. _et al._ Electrical detection of magnetic domain walls by inverse and direct spin hall effect. _Appl. Phys. Lett._ 109, 192401, DOI: 10.1063/1.4967171 (2016).
* [43] Qaiumzadeh, A., Kristiansen, L. A. & Brataas, A. Controlling chiral domain walls in antiferromagnets using spin-wave helicity. _Phys. Rev. B_ 97, 020402, DOI: 10.1103/PhysRevB.97.020402 (2018).
* [44] Agrawal, A. & Roy, K. Mimicking leaky-integrate-fire spiking neuron using automotion of domain walls for energy-efficient brain-inspired computing. _IEEE Trans. Magn._ 55, 1, DOI: 10.1109/TMAG.2018.2882164 (2019).
* [45] Tserkovnyak, Y., Brataas, A. & Bauer, G. E. W. Enhanced gilbert damping in thin ferromagnetic films. _Phys. Rev. Lett._ 88, 117601, DOI: 10.1103/PhysRevLett.88.117601 (2002).
* [46] Kapelrud, A. & Brataas, A. Spin pumping and enhanced gilbert damping in thin magnetic insulator films. _Phys. Rev. Lett._ 111, 097602, DOI: 10.1103/PhysRevLett.111.097602 (2013).
* [47] Tveten, E. G., Qaiumzadeh, A. & Brataas, A. Antiferromagnetic domain wall motion induced by spin waves. _Phys. Rev. Lett._ 112, 147204, DOI: 10.1103/PhysRevLett.112.147204 (2014).
* [48] Khoshlahni, R., Qaiumzadeh, A., Bergman, A. & Brataas, A. Ultrafast generation and dynamics of isolated skyrmions in antiferromagnetic insulators. _Phys. Rev. B_ 99, 054423, DOI: 10.1103/PhysRevB.99.054423 (2019).
* [49] Tveten, E. G., Qaiumzadeh, A., Tretiakov, O. A. & Brataas, A. Staggered dynamics in antiferromagnets by collective coordinates. _Phys. Rev. Lett._ 110, 127208, DOI: 10.1103/PhysRevLett.110.127208 (2013).
* [50] Tveten, E. G., Qaiumzadeh, A. & Brataas, A. Antiferromagnetic domain wall motion induced by spin waves. _Phys. Rev. Lett._ 112, 147204, DOI: 10.1103/PhysRevLett.112.147204 (2014).
* [51] Boulle, O. _et al._ Domain wall tilting in the presence of the dzyaloshinskii-moriya interaction in out-of-plane magnetized magnetic nanotracks. _Phys. Rev. Lett._ 111, 217203, DOI: 10.1103/PhysRevLett.111.217203 (2013).
* [52] Sinova, J., Valenzuela, S. O., Wunderlich, J., Back, C. H. & Jungwirth, T. Spin hall effects. _Rev. Mod. Phys._ 87, 1213–1260, DOI: 10.1103/RevModPhys.87.1213 (2015).
* [53] Reitz, D., Li, J., Yuan, W., Shi, J. & Tserkovnyak, Y. Spin seebeck effect near the antiferromagnetic spin-flop transition. _Phys. Rev. B_ 102, 020408, DOI: 10.1103/PhysRevB.102.020408 (2020).
* [54] Lebrun, R. _et al._ Long-distance spin-transport across the Morin phase transition up to room temperature in ultra-low damping single crystals of the antiferromagnet $\alpha$-Fe2O3. _Nat. Commun._ 11, 6332, DOI: 10.1038/s41467-020-20155-7 (2020).
* [55] Bradley, H. _et al._ Artificial neurons based on antiferromagnetic auto-oscillators as a platform for neuromorphic computing. _AIP Advances_ 13, 015206, DOI: 10.1063/5.0128530 (2023). https://doi.org/10.1063/5.0128530.
* [56] Squire, L. _et al._ (eds.) _Fundamental Neuroscience_ (Academic Press, San Diego, 2012).
* [57] Overton, P. & Clark, D. Burst firing in midbrain dopaminergic neurons. _Brain research. Brain research reviews_ 25, 312—334, DOI: 10.1016/s0165-0173(97)00039-8 (1997).
* [58] Tatara, G., Akosa, C. A. & Otxoa de Zuazola, R. M. Magnon pair emission from a relativistic domain wall in antiferromagnets. _Phys. Rev. Res._ 2, 043226, DOI: 10.1103/PhysRevResearch.2.043226 (2020).
* [59] Wieser, R., Vedmedenko, E. Y. & Wiesendanger, R. Domain wall motion damped by the emission of spin waves. _Phys. Rev. B_ 81, 024405, DOI: 10.1103/PhysRevB.81.024405 (2010).
* [60] Purves, D. _et al._ (eds.) _Neuroscience_ (Sinauer Associates, Sunderland (MA), 2001).
* [61] Pfeiffer, M. & Pfeil, T. Deep learning with spiking neurons: Opportunities and challenges. _Frontiers in Neuroscience_ 12, DOI: 10.3389/fnins.2018.00774 (2018).
* [62] Shen, L. _et al._ Dynamics of the antiferromagnetic skyrmion induced by a magnetic anisotropy gradient. _Phys. Rev. B_ 98, 134448, DOI: 10.1103/PhysRevB.98.134448 (2018).
* [63] Sulymenko, O. R. _et al._ Terahertz-frequency spin hall auto-oscillator based on a canted antiferromagnet. _Phys. Rev. Appl._ 8, 064007, DOI: 10.1103/PhysRevApplied.8.064007 (2017).
## Acknowledgment
This project has been supported by the Norwegian Financial Mechanism Project
No. 2019/34/H/ST3/00515, “2Dtronics”; and partially by the Research Council of
Norway through its Centres of Excellence funding scheme, Project No. 262633,
“QuSpin”.
V. B. thanks Frank Mizrahi and Mark Stiles for inspiration and fruitful
discussions.
## Author contributions statement
V.B. and J.A. conducted the simulations and analysis. S.L. provided technical
support. A.Q. lead the project and discussions. All authors contributed to the
manuscript.
## Additional information
Additional simulation results can be found in the Appendix.
Appendix
We first demonstrate that the functionality of the proposed neuron is
scalable. To prove this, we use the parameters of hematite, a prototypical
two-sublattice AFM insulator. Additionally, we show that in our proposed
setup, the imaginary part of the spin mixing conductance is not relevant.
## Appendix A Easy-plane hematite
We consider AFM hematite ($\alpha$-Fe2O3) above the Morin transition
temperature, where the system is in the magnetic easy-plane phase. In Fig.
S10, we present magnon-induced domain wall (DW) motion for the easy-plane
phase of hematite above the Morin transition, which is a prototype of
orthorhombic AFMs. The motion is controlled by a magnetic field (position and
duration indicated by the orange area, to scale) with two opposite helicities
(indicated by arrows). Two values of the bulk Dzyaloshinskii–Moriya
interaction (DMI) $D$ are compared (blue vs. green line). This is analogous
to the magnetic-field-controlled motion presented in the main text with the
four-stage protocol. Note that the system is larger than the toy model
presented in the main article, due to the DW width. The DW equilibrium
position is at $2\text{\,}\mathrm{\SIUnitSymbolMicro m}$.
As expected from our proposal based on the toy-model parameters in the main
text, both the magnetic field helicity and the direction (sign) of the DMI
switch the direction of the DW displacement. We choose an anisotropy profile
$K(\bm{x})=10K_{0}\left[\frac{1}{L_{x}}\left(x-\mathcal{X}_{0}\right)^{2}+1\right]$.
Note that the slope of the profile can be made even steeper, as discussed in
previous studies [62]. The simulation parameters for hematite [63] are
presented in Table S3.
Table S3: Simulation parameters for hematite [63].
Quantity | Symbol | Value | Unit
---|---|---|---
Length of AFMI layer | $L_{x}$ | $3.0$ | $\mathrm{\SIUnitSymbolMicro m}$
Width of AFMI layer | $L_{y}$ | $20$ | $\mathrm{nm}$
Thickness of AFMI layer | $L_{z}$ | $4$ | $\mathrm{nm}$
Grid size | $a$ | $4$ | $\mathrm{nm}$
Exchange stiffness | $A_{\text{AFM}}$ | $76$ | $\mathrm{fJ}\text{\,}{\mathrm{m}}^{-1}$
Homogeneous exchange constant | $A_{h}$ | $-460$ | $\mathrm{kJ}\text{\,}{\mathrm{m}}^{-3}$
Easy-axis anisotropy constant | $K_{\text{easy}}$ | $-21$ | $\mathrm{mJ}\text{\,}{\mathrm{m}}^{-3}$
Hard-axis anisotropy constant | $K_{\text{hard}}$ | $21$ | $\mathrm{J}\text{\,}{\mathrm{m}}^{-3}$
Saturation magnetization | $M_{s}$ | $2.1$ | $\mathrm{kA}\text{\,}{\mathrm{m}}^{-1}$
Gilbert damping | $\alpha$ | $0.0003$ | 1
Homogeneous DMI coefficient | $D_{h}$ | $4.6$ | $\mathrm{kJ}\text{\,}{\mathrm{m}}^{-3}$
Time step | $\Delta t$ | $2$ | $\mathrm{fs}$
Figure S10: Magnetic field controlled DW motion in easy-plane hematite.
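The parabolic anisotropy profile $K(x)=10K_{0}\left[(x-\mathcal{X}_{0})^{2}/L_{x}+1\right]$ used above can be evaluated directly; the helper below simply reproduces the formula, whose minimum at $\mathcal{X}_{0}$ sets the DW equilibrium position. The numerical values in the check are placeholders, not the hematite parameters.

```python
import numpy as np

def anisotropy_profile(x, K0, X0, Lx):
    """Parabolic anisotropy profile K(x) = 10*K0*[(x - X0)^2/Lx + 1],
    as written in the text; units follow those of K0."""
    return 10.0 * K0 * ((x - X0) ** 2 / Lx + 1.0)
```

The wall drifts toward the minimum at $x=\mathcal{X}_{0}$ because the DW energy grows with the local anisotropy.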
In summary, our proposed neuron can be realized in AFM systems with generic
orthorhombic symmetry. Excitation timescales should be tuned for each chosen
material.
## Appendix B Contribution of the imaginary part of the spin mixing
conductance
In general, the imaginary part of the spin mixing conductance is dependent on
the quality of the interface between the heavy metal layer and the magnetic
layer. This term is negligible for dirty interfaces. The spin pumping has the
following general form [45, 40]
$\bm{\mu}(t):=G_{r}^{\uparrow\downarrow}\big{(}\bm{n}(t,\bm{r})\times\dot{\bm{n}}(t,\bm{r})+\bm{m}(t,\bm{r})\times\dot{\bm{m}}(t,\bm{r})\big{)}-G_{i}^{\uparrow\downarrow}\dot{\bm{m}}(t,\bm{r})$
(11)
with the Néel vector $\bm{n}=\frac{\bm{m}_{A}-\bm{m}_{B}}{2}$ and
magnetization $\bm{m}=\frac{\bm{m}_{A}+\bm{m}_{B}}{2}$, where
$G_{r}^{\uparrow\downarrow}$ and $G_{i}^{\uparrow\downarrow}$ are the real
part and the imaginary part of the spin mixing conductance, respectively.
In order to check the qualitative and quantitative effects of including
$G_{i}^{\uparrow\downarrow}$, we compare two extreme cases, i.e.,
$G_{i}^{\uparrow\downarrow}=G_{r}^{\uparrow\downarrow}$ (large imaginary part)
and $G_{i}^{\uparrow\downarrow}=0$ (zero imaginary part). As shown in Fig.
S11, both readouts are the same, suggesting that the imaginary part of the
spin mixing conductance can be neglected in our setup geometry.
Figure S11: Comparison of the read-out spin-pumping signal with and without
the imaginary part of the spin mixing conductance.
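Eq. (11) can be evaluated numerically on sampled trajectories of $\bm{n}(t)$ and $\bm{m}(t)$; the sketch below (arbitrary units, finite-difference time derivatives) makes it easy to compare the $G_{i}^{\uparrow\downarrow}=0$ and $G_{i}^{\uparrow\downarrow}=G_{r}^{\uparrow\downarrow}$ cases on the same input.

```python
import numpy as np

def spin_pumping_signal(n, m, dt, G_r=1.0, G_i=0.0):
    """Evaluate Eq. (11) on sampled Neel vector n(t) and magnetization m(t),
    given as arrays of shape (T, 3); conductances in arbitrary units.
    Time derivatives are taken by central finite differences."""
    n_dot = np.gradient(n, dt, axis=0)
    m_dot = np.gradient(m, dt, axis=0)
    return G_r * (np.cross(n, n_dot) + np.cross(m, m_dot)) - G_i * m_dot
```

As a sanity check, a Néel vector precessing uniformly in the plane with vanishing net magnetization pumps a constant spin current $\mu_{z}=G_{r}^{\uparrow\downarrow}\omega$, and the $G_{i}^{\uparrow\downarrow}$ term drops out because $\dot{\bm{m}}=0$.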
|
1 Institute of Research and Innovation in Bioengineering, Universitat
Politècnica de València, Valencia, Spain (email: <EMAIL_ADDRESS>)
2 Department of Pathology, Stavanger University Hospital, Stavanger, Norway
3 Department of Chemistry, Bioscience and Environmental Engineering,
University of Stavanger, Stavanger, Norway
4 Institute of Transport and Territory, Universitat Politècnica de València,
Valencia, Spain
# Challenging mitosis detection algorithms:
Global labels allow centroid localization††thanks: The work of C. Fernández-
Martín and U. Kiraz was funded by the European Union's Horizon 2020 research
and innovation programme under the Marie Skłodowska-Curie grant agreement No
860627 (CLARIFY Project). The work of Sandra Morales has been co-funded by
the Universitat Politècnica de València through the program PAID-10-20. This
work was partially funded by GVA through project PROMETEO/2019/109.
Claudio Fernández-Martín¹, Umay Kiraz²,³, Julio Silva-Rodríguez⁴, Sandra
Morales¹, Emiel Janssen²,³, Valery Naranjo¹
###### Abstract
Mitotic activity is a crucial proliferation biomarker for the diagnosis and
prognosis of different types of cancer. Nevertheless, mitosis counting is a
cumbersome process for pathologists, prone to low reproducibility, due to the
large size of magnified biopsy slides, the low density of mitotic cells, and
pattern heterogeneity. To improve reproducibility, deep learning methods
using convolutional neural networks have been proposed in recent years.
However, these methods have been hindered by the data labelling process,
which usually consists solely of the mitosis centroids. Therefore, the
current literature proposes complex multi-stage algorithms to refine the
labels at pixel level and to reduce the number of false positives. In this
work, we avoid such complex scenarios and perform the localization task in a
weakly supervised manner, using only image-level labels on patches. The
results obtained on the publicly available TUPAC16 dataset are competitive
with state-of-the-art methods, using only one training phase. Our method
achieves an F1-score of $0.729$ and challenges the efficiency of previous
methods, which required multiple stages and strong mitosis location
information.
###### Keywords:
Mitosis detection · Weak labels · Histology · Digital pathology.
## 1 Introduction
In digital pathology, mitosis counting is one of the most important tasks in
the histopathological clinical practice. In the case of breast cancer, the
mitotic activity index (MAI) is considered one of the strongest proliferation-
associated prognostic factors [1]. However, mitosis counting is a laborious
and time-consuming task due to the large size of the Hematoxylin and Eosin
(H&E) slides under a microscope, and the low occurrence of mitotic figures. In
addition, the large heterogeneity of patterns and similarity between mitotic
and non-mitotic cells (see Figure 1) makes this task highly variable among
clinical experts [2], which hinders its reproducibility.
(a) Mitotic cells
(b) Non-mitotic cells
Figure 1: Visual illustration of the morphological heterogeneity and the
challenge of differentiating patterns between mitotic and non-mitotic cells,
extracted from TUPAC16 [3].
In recent years, the advent of modern deep learning algorithms has emerged as
a possible solution to bring objectivity and reproducibility to the challenge
of mitosis localization. Deep learning using convolutional neural networks
(CNNs) has achieved remarkable results in a wide range of applications under
the supervised learning paradigm. Nevertheless, it requires a reasonable
amount of carefully labeled data to perform properly. In the case of mitosis
localization, this is a tedious process, which is usually repeated by
different pathologists to reach consensus labels. Since delineating
individual mitotic cells at pixel level is an unfeasible task, the reference
datasets normally contain centroid-based labels [3] or inexact pixel-level
annotations [4]. Because of this, previous works to automate the mitosis
localization process have struggled to match the available labels to the use
of segmentation or object detection CNNs, which are typically used for
localization tasks. Contrary to this line of work, we propose to make use of
the inherent spatial localization capacity of CNNs in image-level
classification tasks [5], without the need to resort to an exact localization
of the mitotic cell inside the region of interest. Our main contributions are
summarized as follows:
* A CNN for weakly supervised segmentation of mitotic figures on H&E patches using image-level labels.
* Concretely, training is driven by maximum aggregation of instance-level predictions.
* Comprehensive experiments demonstrate the competitive performance on the popular TUPAC16 dataset, using a single-phase pipeline without requiring the exact localization information for training our model.
## 2 Related Work
### 2.1 Mitosis detection
Mitosis localization algorithms using CNNs deal with labels in the form of
centroid annotations or inexact pixel-level delineations of the mitotic cell.
In that sense, Li et al. [6] propose a novel concentric loss to move from
centroid labels to pixel-level segmentation, using the pixels within a
certain radius of the centroid. Other works focus on leveraging cell-level
predictions using multi-phase pipelines [7, 8, 9, 10, 11, 12]. For instance,
Sohail et al. [12] propose a complex multi-phase pipeline that includes
pseudo-labeling centroid-labelled mitoses via previously trained Mask R-CNNs.
Also, Nateghi et al. use multiple training stages to refine false-positive
detections using hard-negative mining via stain priors or prediction
uncertainty. In contrast to these works, we study how training a CNN at the
image level for a classification task also allows the precise localization of
mitotic cells without using any localization information, shape or stain
priors, or multi-phase refinement pipelines.
### 2.2 Weakly supervised segmentation
Weakly supervised segmentation (WSS) aims to leverage pixel-level
localization using global (a.k.a. image-level) labels during training.
According to [13], WSS methods use fully convolutional CNNs with an
aggregation function that merges all the spatial information into one value,
which serves as the global prediction [14]. This output is then used to
compute the loss function and drives the network optimization. Different
strategies include the aggregation of spatial features (embedding-based) or
of pixel-level predictions (instance-based). Finally, the probability maps
before the aggregation operation are used as segmentation predictions.
Lately, these segmentation maps have been refined by incorporating self-
supervised learning pipelines [15] or uncertainty proxies [16], among others.
## 3 Methods
An overview of our proposed method is depicted in Figure 2. In the following,
we describe the problem formulation, and each of the proposed components.
Figure 2: Overview of the proposed method for mitosis localization.
#### Problem Formulation
In the paradigm of weakly supervised segmentation (WSS), the training set is
composed of images $\{x_{n}\}_{n=1}^{N}$ with known binary labels
$\{Y_{n}^{k}\}_{n=1}^{N}$, $Y_{n}^{k}\in\{0,1\}$, which define whether a
category $k$ is present within the image. Also, each positive image has
pixel-level labels $y_{n,i}$ for each pixel $i$ in the image, but these
remain unknown during training. Further, we denote $Y_{n}^{k}$ as $Y_{n}$ for
simplicity, since one unique class is taken into account, and we drop the
image index $n$.
#### Instance-based WSS
In this work, we aim to train a CNN capable of locating positive mitosis
during inference, while being trained only with image-level labels. To do so,
we make use of an instance-based weakly supervised learning strategy. Let us
denote a CNN model,
$f_{\boldsymbol{\theta}}(\cdot):\mathcal{X}\rightarrow\mathcal{H}^{K}$,
parameterized by $\boldsymbol{\theta}$, which processes instances
$x\in\mathcal{X}$ to output sigmoid-activated instance-level probabilities,
$h_{i}$, such that $h_{i}\in[0,1]$. Also, we use a parameter free aggregation
function, $f_{a}(\cdot)$, in charge of combining the pixel-level scores into
one global output $H$, such that $H=f_{a}(f_{\boldsymbol{\theta}}({x}))$.
Then, the optimization of $\boldsymbol{\theta}$ is driven by the minimization
of cross entropy loss between reference and predicted image-level score.
$\mathcal{L}_{ce}=-\left[Y\log(H)+(1-Y)\log(1-H)\right]$ (1)
In this work, we propose to use the maximum operation as the aggregation
function $f_{a}(\cdot)$. Although this aggregation only backpropagates
gradients through the maximum-activated spatial regions, it has the effect
that only very discriminative cells are classified as mitosis, which avoids
false-positive predictions.
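A minimal NumPy sketch of the instance-based scheme described above: the image-level score is the maximum pixel-level probability, and it is scored against the global label with the binary cross-entropy of Eq. 1 (written with the conventional minus sign, since the loss is minimized). This is an illustration of the aggregation and loss, not the full CNN training loop.

```python
import numpy as np

def max_aggregate(prob_map):
    """Parameter-free aggregation f_a: the image-level score H is the
    maximum pixel-level sigmoid probability of the map."""
    return float(prob_map.max())

def bce_loss(H, Y, eps=1e-7):
    """Binary cross-entropy between the image-level label Y and the
    aggregated prediction H (Eq. 1), clipped for numerical stability."""
    H = np.clip(H, eps, 1 - eps)
    return -(Y * np.log(H) + (1 - Y) * np.log(1 - H))
```

With this aggregation, a single confident pixel is enough to explain a positive label, while a negative label penalizes the single most activated pixel, which suppresses spurious activations elsewhere.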
#### Inference
During inference, pixel-level predictions are obtained directly from the
trained CNN, $y_{i}=f_{\boldsymbol{\theta}}(x)$. The probability maps are
resized to the original image dimensions by bilinear interpolation. Then, the
sigmoid scores are converted to a binary mask by applying a threshold to the
probability maps. Concretely, the threshold is obtained from the operating
point of the ROC curve between image-level predictions and references.
Finally, a centroid is assigned to each element in the mask, which is
reported as a mitosis location.
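The thresholding and centroid-assignment steps can be sketched as below. In practice one would use `scipy.ndimage.label` and `center_of_mass` for the connected components; the dependency-free flood fill here is an illustrative equivalent.

```python
import numpy as np

def locate_mitoses(prob_map, threshold):
    """Binarize the (already upsampled) probability map and return one
    centroid (row, col) per 4-connected component of the mask."""
    mask = prob_map >= threshold
    seen = np.zeros_like(mask, dtype=bool)
    centroids = []
    for r0, c0 in zip(*np.nonzero(mask)):
        if seen[r0, c0]:
            continue
        stack, pixels = [(r0, c0)], []
        seen[r0, c0] = True
        while stack:                       # flood fill one component
            r, c = stack.pop()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]
                        and mask[rr, cc] and not seen[rr, cc]):
                    seen[rr, cc] = True
                    stack.append((rr, cc))
        centroids.append(tuple(np.mean(pixels, axis=0)))
    return centroids
```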
## 4 Experimental setting
### 4.1 Datasets
The experiments described in this work are carried out using the popular 2016
TUmor Proliferation Assessment Challenge (TUPAC16) dataset [3]. TUPAC16 is
publicly available and is composed of $73$ breast cancer whole-slide images
from two different institutions. In particular, the auxiliary mitosis dataset
contains $1552$ processed regions of interest at $40\times$ magnification,
with centroid-labelled mitoses obtained by consensus of expert pathologists.
Following the relevant literature [6], we extracted patches of $500$ pixels
in size from the regions of interest for computational efficiency. The
dataset is divided into patient-level training, validation, and testing
cohorts in a similar fashion to prior literature [6].
### 4.2 Metrics
We use standard metrics for mitosis localization evaluation. First, the
model, optimized using only global image-level labels, is evaluated at image
level by means of accuracy, AUC, and F1-score. Then, the comparison with
state-of-the-art methods on mitosis detection is carried out using the
standard criterion of mitosis detection contests [12]: a detected mitosis is
considered a true positive if it is located at most 30 pixels from an
annotated mitosis. Under this criterion, precision, recall, and F1-score are
computed.
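The contest criterion can be sketched as a greedy centroid-matching routine. This is a hypothetical helper, one reasonable reading of the 30-pixel rule, not the official evaluation code:

```python
import numpy as np

def detection_f1(preds, gts, max_dist=30.0):
    """Greedily match predicted centroids to ground-truth mitoses.

    A prediction counts as a true positive if it lies within `max_dist`
    pixels of a not-yet-matched annotation; each annotation is matched
    at most once. Returns precision, recall, and F1-score.
    """
    matched = set()
    tp = 0
    for p in preds:
        best, best_d = None, max_dist
        for i, g in enumerate(gts):
            if i in matched:
                continue
            d = float(np.hypot(p[0] - g[0], p[1] - g[1]))
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

# one prediction within 30 px of an annotation, one far away
p, r, f1 = detection_f1([(10, 10), (100, 100)], [(12, 14), (300, 300)])
```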
### 4.3 Implementation details
The proposed method is trained using ResNet-18 [17] convolutional blocks as a
backbone. Concretely, the first $3$ blocks pre-trained on ImageNet are used as
a feature extractor and are fine-tuned for the mitosis detection task. We
trained this architecture for $40$ epochs to optimize Eq. 1, using a batch
size of $32$ images and a learning rate of $0.0001$. In order to deal with
class imbalance, the images are sampled homogeneously according to their class
in each epoch. Also, color normalization and augmentation techniques are
employed to increase robustness against stain variations and artifacts in the
digitized slides. Images are color-normalized using the stain normalization
method of Macenko et al. [18], and data augmentation is applied during
training using spatial translations, rotations, and blurring.
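The homogeneous per-class sampling can be sketched as follows. This is an illustrative helper, not the authors' exact sampler: the minority class is oversampled with replacement until it matches the size of the majority class, so each epoch sees the classes in equal proportion.

```python
import random

def balanced_epoch(image_ids, labels, seed=0):
    """Return one epoch of image ids with classes sampled homogeneously.

    Images are grouped by label; every class is brought up to the size of
    the largest class by sampling with replacement, then shuffled.
    """
    rng = random.Random(seed)
    by_class = {}
    for img, y in zip(image_ids, labels):
        by_class.setdefault(y, []).append(img)
    n = max(len(v) for v in by_class.values())
    epoch = []
    for v in by_class.values():
        epoch.extend(v if len(v) == n else rng.choices(v, k=n))
    rng.shuffle(epoch)
    return epoch

# 1 positive image (id 0) vs 9 negatives -> the positive is repeated 9 times
epoch = balanced_epoch(list(range(10)), [1] + [0] * 9)
```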
## 5 Results
### 5.1 Comparison to literature
The quantitative results obtained by the proposed method for mitosis
localization on the test cohort are presented in Table 1, together with
results reported in previous literature on the TUPAC16 dataset. The proposed
weakly-supervised method reaches an F1-score of 0.729, which is comparable to
prior literature without access to any supervision regarding the exact
location of the mitoses in the image. It should be noted that, in addition,
the best previous methods use additional training data and require multiple
stages of label refinement. In contrast, the proposed method uses only one
training cycle. Moreover, the proposed approach obtains the best precision on
mitosis localization among methods that use only one training phase. This
could be due to the maximum aggregation, which propagates gradients only
through highly discriminative regions.
| Method | Precision | Recall | F-score | Multiple phases | Location supervision | External data |
| --- | --- | --- | --- | --- | --- | --- |
| Paeng et al. (2017) [7] | - | - | 0.652 | $\times$ | $\times$ | |
| Zerhouni et al. (2017) [8] | 0.675 | 0.623 | 0.648 | $\times$ | $\times$ | |
| Akram et al. (2018) [9] | 0.613 | 0.671 | 0.640 | $\times$ | $\times$ | $\times$ |
| Li et al. (2019) [6] | 0.64 | 0.70 | 0.669 | | $\times$ | |
| Wahab et al. (2019) [10] | 0.770 | 0.660 | 0.713 | $\times$ | $\times$ | |
| Mahmood et al. (2020) [19] | 0.641 | 0.642 | 0.642 | $\times$ | | |
| Nateghi et al. (2021) [11] | 0.764 | 0.714 | 0.738 | $\times$ | $\times$ | |
| Sohail et al. (2021) [12] | 0.710 | 0.760 | 0.750 | $\times$ | $\times$ | $\times$ |
| Proposed | 0.739 | 0.720 | 0.729 | | | |
Table 1: Performance comparison of the proposed model with existing methods on
the test subset of the TUPAC16 auxiliary dataset.
### 5.2 Ablation experiments
In the following, we present ablation experiments that motivate the choice of
the different components of the proposed method.
#### Weakly Supervised Setting.
First, we study the configuration of the WSS model architecture. To do so, we
explore the most prominent configurations: embedding-based approaches, which
aggregate spatial features before the classification layers, and
instance-based approaches, which apply the classification layer spatially. We
also use different aggregation methods, namely the mean and max operations and
the trainable attentionMIL mechanism [13]. Results are presented in Table 2.
The figures of merit show that, although all methods reach similar results at
image level, only the instance-based approach with maximum aggregation
performs properly on mitosis localization, since it is the only method that
penalizes false positive localizations during training.
| Configuration | F-score image-level | F-score localization |
| --- | --- | --- |
| embedding - mean | 0.762 | 0.134 |
| embedding - max | 0.772 | 0.234 |
| attentionMIL [13] | 0.768 | 0.014 |
| instance - mean | 0.753 | 0.004 |
| instance - max | 0.761 | 0.729 |
Table 2: Performance comparison of the different configurations of the
proposed WSS model, in terms of aggregation strategies. Results are presented
for mitosis localization and image-level classification.
#### On the importance of the feature complexity.
Convolutional neural networks combine stacked convolutional and pooling
operations, which progressively merge spatial information. Thus, later layers
in CNNs extract high-level features with complex shapes but low spatial
resolution. Although CNNs for classification tasks usually benefit from deep
structures, we observed that spatial resolution and low-level features are
vital for mitosis localization, as shown in Figure 3. For that reason, we used
only the first 3 residual blocks of the ResNet-18 architecture for the
proposed method.
Figure 3: Ablation study on the number of residual blocks used for feature
extraction. Metrics are reported for mitosis localization.
### 5.3 Qualitative evaluation
Finally, we present visual results of the proposed method on the test subset
in Figure 4. In particular, correct detections of mitotic cells (true
positives), cells wrongly classified as mitoses (false positives) and
non-detected mitoses (false negatives) are shown in green, yellow and blue,
respectively. The visual results show a promising performance of the proposed
method, with false positive classifications occurring mainly on
irregularly-shaped non-mitotic cells.
Figure 4: Qualitative evaluation of the proposed method for mitosis
localization. Green: true positive; Blue: false negative; Yellow: false
positive.
## 6 Conclusions
In this work, we have presented a deep learning model for weakly supervised
mitosis localization on H&E histology images. In particular, the model is
composed of a narrow CNN backbone that produces pixel-level predictions. Those
predictions are then grouped into an image-level score using maximum
aggregation, which serves as a proxy for CNN training via global labels.
Thanks to the maximum operation, which focuses only on very discriminative
cells, the obtained results contain very few false positive predictions,
reaching a precision of $0.739$ and an F-score of $0.729$ on the TUPAC16
dataset. The proposed approach, though simple, reaches competitive performance
in comparison to previous literature, without requiring any information on
mitosis locations in the image during training. This calls into question the
efficiency of other approaches, which require this location information and
resort to multiple phases of training to refine centroid-based labels and to
alleviate false positive predictions. Further research could complement the
proposed setting to take into account uncertainties on predicted mitoses, and
to incorporate location information using a soft, constrained formulation.
## References
* [1] Jan P.A. Baak, Paul J. van Diest, Feja J. Voorhorst, Elsken van der Wall, Louk V.A.M. Beex, Jan B. Vermorken, and Emiel A.M. Janssen, “Prospective multicenter validation of the independent prognostic value of the mitotic activity index in lymph node–negative breast cancer patients younger than 55 years,” Journal of Clinical Oncology, vol. 23, no. 25, 2005.
* [2] Joann G. Elmore, Gary M. Longton, Patricia A. Carney, Berta M. Geller, Tracy Onega, Anna N. A. Tosteson, Heidi D. Nelson, Margaret S. Pepe, Kimberly H. Allison, Stuart J. Schnitt, Frances P. O’Malley, and Donald L. Weaver, “Diagnostic Concordance Among Pathologists Interpreting Breast Biopsy Specimens,” JAMA, vol. 313, no. 11, pp. 1122–1132, 03 2015.
* [3] Mitko Veta, Yujing J. Heng, Nikolas Stathonikos, Babak Ehteshami Bejnordi, Francisco Beca, Thomas Wollmann, Karl Rohr, Manan A. Shah, Dayong Wang, and Mikael et al. Rousson, “Predicting breast tumor proliferation from whole-slide images: The tupac16 challenge,” Medical Image Analysis, vol. 54, pp. 111–121, 2019.
* [4] Ludovic Roux, Daniel Racoceanu, Nicolas Loménie, Maria Kulikova, Humayun Irshad, Jacques Klossa, Frédérique Capron, Catherine Genestie, Gilles Le Naour, and Metin N Gurcan, “Mitosis detection in breast cancer histological images an icpr 2012 contest,” Journal of Pathology Informatics, vol. 4, no. 1, 2013.
* [5] Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
* [6] Chao Li, Xinggang Wang, Wenyu Liu, Longin Jan Latecki, Bo Wang, and Junzhou Huang, “Weakly supervised mitosis detection in breast histopathology images using concentric loss,” Medical Image Analysis, vol. 53, pp. 165–178, 2019.
* [7] Kyunghyun Paeng, Sangheum Hwang, Sunggyun Park, and Minsoo Kim, “A unified framework for tumor proliferation score prediction in breast histopathology,” 2016.
* [8] Erwan Zerhouni, David Lanyi, Matheus Viana, and Maria Gabrani, “Wide residual networks for mitosis detection,” 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), 2017.
* [9] Saad Ullah Akram, Talha Qaiser, Simon Graham, Juho Kannala, Janne Heikkilä, and Nasir Rajpoot, “Leveraging unlabeled whole-slide-images for mitosis detection,” 2018.
* [10] Noorul Wahab, Asifullah Khan, and Yeon Soo Lee, “Transfer learning based deep cnn for segmentation and detection of mitoses in breast cancer histopathological images,” Microscopy, vol. 68, no. 3, pp. 216–233, 2019.
* [11] Ramin Nateghi, Habibollah Danyali, and Mohammad Sadegh Helfroush, “A deep learning approach for mitosis detection: Application in tumor proliferation prediction from whole slide images,” Artificial Intelligence in Medicine, vol. 114, pp. 102048, 2021.
* [12] Anabia Sohail, Asifullah Khan, Noorul Wahab, Aneela Zameer, and Saranjam Khan, “A multi-phase deep cnn based mitosis detection framework for breast cancer histopathological images,” Scientific Reports, vol. 11, no. 1, 2021.
* [13] Maximilian Ilse, Jakub M. Tomczak, and Max Welling, “Attention-based deep multiple instance learning,” in 35th International Conference on Machine Learning (ICML), 2018.
* [14] Julio Silva-Rodríguez, Adrián Colomer, and Valery Naranjo, “Weglenet: A weakly-supervised convolutional neural network for the semantic segmentation of gleason grades in prostate histology images,” Computerized Medical Imaging and Graphics, vol. 88, pp. 101846, 2021.
* [15] Yude Wang, Jie Zhang, Meina Kan, Shiguang Shan, and Xilin Chen, “Self-Supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [16] Soufiane Belharbi, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Luke Mccaffrey, and Eric Granger, “Deep interpretable classification and weakly-supervised segmentation of histology images via max-min uncertainty,” IEEE Transactions on Medical Imaging, vol. 41, no. 3, pp. 702–714, 2022.
* [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
* [18] Marc Macenko, Marc Niethammer, J. S. Marron, David Borland, John T. Woosley, Xiaojun Guan, Charles Schmitt, and Nancy E. Thomas, “A method for normalizing histology slides for quantitative analysis,” in 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2009, pp. 1107–1110.
* [19] Tahir Mahmood, Muhammad Arsalan, Muhammad Owais, Min Beom Lee, and Kang Ryoung Park, “Artificial intelligence-based mitosis detection in breast cancer histopathology images using faster r-cnn and deep cnns,” Journal of Clinical Medicine, vol. 9, no. 3, pp. 749, 2020.
# Almost periodic distributions and crystalline measures
S.Yu. Favorov (Serhii Favorov)
Faculty of Mathematics and Computer Science, Jagiellonian University, Lojasiewicza 6, 30-348 Krakow, Poland
Faculty of Mathematics and Informatics, Karazin's Kharkiv National University, Svobody sq., 4, Kharkiv, Ukraine 61022<EMAIL_ADDRESS>
> Abstract. Based on the properties of distributions and measures with
> discrete support, we investigate temperate almost periodic distributions on
> the Euclidean space and connection with their Fourier transforms. We also
> study relations between the Fourier transform of almost periodic
> distributions and their Fourier coefficients. The main result of the article
> is the construction of a crystalline measure on the real line, which is
> neither almost periodic distribution, nor a Fourier quasicrystal.
>
> AMS Mathematics Subject Classification: 46F12, 42B10, 42A75
>
> Keywords: temperate distribution, almost periodic distribution, Fourier
> transform, Fourier coefficients, crystalline measure, Fourier quasicrystal
The concept of a Fourier quasicrystal was inspired by the experimental
discovery, made in the mid-1980s, of nonperiodic atomic structures with
diffraction patterns consisting of spots. A number of papers have appeared in
which the properties of Fourier quasicrystals are studied. Conditions for the
support of a quasicrystal to be a finite union of discrete lattices have been
found, and nontrivial examples of quasicrystals have been constructed
([24], [2], [4]-[6], [9]-[12], [14]-[17], [19]-[22]). These studies have been
extended to the more general setting of temperate distributions with discrete
support and spectrum ([18], [3], [7], [8], [23]). Note that the properties of
almost periodic measures and distributions were more or less explicitly used
in these investigations.
The structure of this article is as follows. In Section 1 we give the
necessary definitions and notations. In Section 2 we prove some properties of
distributions with locally finite support and spectrum. These results are a
slight enhancement of the results of [3] and are included here for the sake of
completeness. In Section 3 we introduce the notion of s-almost periodicity of
temperate distributions, which is very close to classical almost
periodicity. Here we present some theorems on connections between temperate
distributions and their Fourier transforms; some of them refer to almost
periodic distributions, and others to s-almost periodic distributions. Based
on them, we obtain the main result of the paper: an example of a crystalline
measure that is not a Fourier quasicrystal. In Section 4 we give proofs of
some theorems from Section 3. Finally, in Section 5 we show that Meyer’s
Theorem on the connection between the Fourier transform of almost periodic
functions and measures and their Fourier coefficients [22] can be generalized
to temperate almost periodic distributions and their Fourier coefficients in
the sense of L.Ronkin [25].
## 1\. Definitions and notations
Denote by $S({\mathbb{R}}^{d})$ the Schwartz space of test functions
$\varphi\in C^{\infty}({\mathbb{R}}^{d})$ with the finite norms
$N_{n,m}(\varphi)=\sup_{{\mathbb{R}}^{d}}\\{\max\\{1,|x|^{n}\\}\max_{\|k\|\leq
m}|D^{k}\varphi(x)|\\},\quad n,m=0,1,2,\dots,$
where
$k=(k_{1},\dots,k_{d})\in({\mathbb{N}}\cup\\{0\\})^{d},\
\|k\|=k_{1}+\dots+k_{d},\
D^{k}=\partial^{k_{1}}_{x_{1}}\dots\partial^{k_{d}}_{x_{d}}.$
These norms generate the topology on $S({\mathbb{R}}^{d})$. Elements of the
space $S^{*}({\mathbb{R}}^{d})$ of continuous linear functionals on
$S({\mathbb{R}}^{d})$ are called temperate distributions. For each temperate
distribution $f$ there are $C<\infty$ and $n,\,m\in{\mathbb{N}}\cup\\{0\\}$
such that for all $\varphi\in S({\mathbb{R}}^{d})$
(1) $|f(\varphi)|\leq CN_{n,m}(\varphi).$
Moreover, this estimate is sufficient for the distribution $f$ to belong to
$S^{*}({\mathbb{R}}^{d})$ (see [27], Ch.3).
The Fourier transform of a temperate distribution $f$ is defined by the
equality
$\hat{f}(\varphi)=f(\hat{\varphi})\quad\mbox{for all}\quad\varphi\in
S({\mathbb{R}}^{d}),$
where
$\hat{\varphi}(y)=\int_{{\mathbb{R}}^{d}}\varphi(x)\exp\\{-2\pi i\langle
x,y\rangle\\}dx$
is the Fourier transform of the function $\varphi$. By $\check{\varphi}$ we
denote the inverse Fourier transform of $\varphi$. The Fourier transform is
the bijection of $S({\mathbb{R}}^{d})$ on itself and the bijection of
$S^{*}({\mathbb{R}}^{d})$ on itself. The support of $\hat{f}$ is called
spectrum of $f$.
We will say that a set $A\subset{\mathbb{R}}^{d}$ is locally finite if the
intersection of $A$ with any ball is finite, $A$ is relatively dense if there
is $R<\infty$ such that $A$ intersects with each ball of radius $R$, and $A$
is uniformly discrete, if $A$ is locally finite and has a strictly positive
separating constant
$\eta(A):=\inf\\{|x-x^{\prime}|:\,x,\,x^{\prime}\in A,\,x\neq x^{\prime}\\}.$
Also, we will say that $A$ is polynomially discrete, or p-discrete for short,
if there are positive numbers $c,h$ such that
(2) $|x-x^{\prime}|\geq c\min\\{1,\,|x|^{-h},|x^{\prime}|^{-h}\\}\qquad\forall
x,x^{\prime}\in A,\quad x\neq x^{\prime}.$
A set $A$ is of bounded density if it is locally finite and
$\sup_{x\in{\mathbb{R}}^{d}}\\#A\cap B(x,1)<\infty.$
As usual, $\\#E$ is a number of elements of the finite set $E$, and $B(x,r)$
is the ball with center at the point $x$ and radius $r$.
An element $f\in S^{*}({\mathbb{R}}^{d})$ is called a crystalline measure if
$f$ and $\hat{f}$ are complex-valued measures on ${\mathbb{R}}^{d}$ with
locally finite supports.
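A classical example of a crystalline measure, which we recall here for illustration (it is standard and follows from the Poisson summation formula), is the Dirac comb:

```latex
\mu=\sum_{n\in\mathbb{Z}^{d}}\delta_{n},\qquad
\hat{\mu}(\varphi)=\mu(\hat{\varphi})
   =\sum_{n\in\mathbb{Z}^{d}}\hat{\varphi}(n)
   =\sum_{n\in\mathbb{Z}^{d}}\varphi(n)
   =\mu(\varphi),
```

so $\hat{\mu}=\mu$, and both $\mu$ and $\hat{\mu}$ are measures with locally finite (indeed uniformly discrete) supports.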
Denote by $|\mu|(A)$ the variation of the complex-valued measure $\mu$ on $A$.
If both measures $|\mu|$ and $|\hat{\mu}|$ have locally finite supports and
belong to $S^{*}({\mathbb{R}}^{d})$, we say that $\mu$ is a Fourier
quasicrystal. A measure
$\mu=\sum_{\lambda\in\Lambda}a_{\lambda}\delta_{\lambda}$ with
$a_{\lambda}\in{\mathbb{C}}$ and countable $\Lambda$ is called purely point;
here $\delta_{y}$ is the unit mass at the point $y\in{\mathbb{R}}^{d}$. If
this is the case, we will write $\mu(\lambda)$ instead of $a_{\lambda}$ and
$\operatorname{supp}\mu=\\{\lambda:\,\mu(\lambda)\neq 0\\}$.
## 2\. Temperate distributions with locally finite support
By [26], every distribution $f$ with locally finite support $\Lambda$ has the
form
$f=\sum_{\lambda\in\Lambda}P_{\lambda}(D)\delta_{\lambda},\quad
P_{\lambda}(x)=\sum_{\|k\|\leq K_{\lambda}}p_{k}(\lambda)x^{k},\quad
x\in{\mathbb{R}}^{d},\ p_{k}(\lambda)\in{\mathbb{C}},\ K_{\lambda}<\infty.$
Here, as usual, $x^{k}=x_{1}^{k_{1}}\cdots x_{d}^{k_{d}}$. Note that
$\operatorname{ord}f=\sup_{\lambda}\deg P_{\lambda}(x)\leq\infty$.
###### Proposition 1.
Suppose $f\in S^{*}({\mathbb{R}}^{d})$ has a locally finite support $\Lambda$.
Then
i) $\operatorname{ord}f<\infty$, hence,
(3) $f=\sum_{\lambda\in\Lambda}\sum_{\|k\|\leq
K}p_{k}(\lambda)D^{k}\delta_{\lambda},\quad
k\in({\mathbb{N}}\cup\\{0\\})^{d},\quad K=\operatorname{ord}f;$
in particular, if $f$ has a locally finite spectrum $\Gamma$, then
$\operatorname{ord}\hat{f}<\infty$ and
(4) $\hat{f}=\sum_{\gamma\in\Gamma}\sum_{\|j\|\leq
J}q_{j}(\gamma)D^{j}\delta_{\gamma},\quad
j\in({\mathbb{N}}\cup\\{0\\})^{d},\quad J=\operatorname{ord}\hat{f}.$
ii) If $\Lambda$ is $p$-discrete, then there exist $C,\,T<\infty$ such that
for all $k$
(5) $|p_{k}(\lambda)|\leq C\max\\{1,|\lambda|^{T}\\}\quad\mbox{for all}\
\lambda\in\Lambda.$
Moreover, there exists $T_{1}<\infty$ such that
(6) $\sum_{\lambda\in\Lambda,|\lambda|<R}\sum_{\|k\|\leq
K}|p_{k}(\lambda)|=O(R^{T_{1}})\quad\mbox{as}\quad R\to\infty.$
Proof of Proposition 1. i) Let $\lambda\in\Lambda$ and $\varepsilon\in(0,1)$
be such that
$\inf\\{|\lambda-\lambda^{\prime}|:\,\lambda^{\prime}\in\Lambda,\,\lambda^{\prime}\neq\lambda\\}>\varepsilon.$
Let $\varphi$ be a non-negative function on ${\mathbb{R}}$ such that
(7) $\varphi(|x|)\in C^{\infty}({\mathbb{R}}^{d}),\quad\varphi(|x|)=0\mbox{
for }|x|>1/2,\quad\varphi(|x|)=1\mbox{ for }|x|\leq 1/3.$
Then set
$\varphi_{\lambda,k,\varepsilon}(x)=\frac{(x-\lambda)^{k}}{k!}\varphi\left(\frac{|x-\lambda|}{\varepsilon}\right)\in
S({\mathbb{R}}^{d}),$
where, as usual, $k!=k_{1}!\cdots k_{d}!$. It is easily shown that
$f(\varphi_{\lambda,k,\varepsilon})=(-1)^{\|k\|}p_{k}(\lambda).$
Let $f$ satisfy (1) with some $m,\,n$. We get
$|f(\varphi_{\lambda,k,\varepsilon})|\leq
C\sup_{|x-\lambda|<\varepsilon}\max\\{1,|x|^{n}\\}\sum_{\|\alpha+\beta\|\leq
m}c(\alpha,\beta)\left|D^{\alpha}\varphi\left(\frac{|x-\lambda|}{\varepsilon}\right)D^{\beta}\left(\frac{(x-\lambda)^{k}}{k!}\right)\right|,$
where $\alpha,\beta\in({\mathbb{N}}\cup\\{0\\})^{d}$ and
$c(\alpha,\beta)<\infty$. Note that
$\left|D^{\alpha}\varphi\left(\frac{|x-\lambda|}{\varepsilon}\right)\right|\leq\varepsilon^{-\|\alpha\|}c(\alpha)\quad\mbox{for}\quad|\lambda-x|<\varepsilon/3,$
and this derivative vanishes for $|\lambda-x|\geq\varepsilon/2$. Also,
$D^{\beta}(x-\lambda)^{k}=\begin{cases}0&\text{ if }k_{j}<\beta_{j}\text{ for
at least one }j,\\\ c(k,\beta)(x-\lambda)^{k-\beta}&\text{ if
}k_{j}\geq\beta_{j}\,\ \forall j.\end{cases}$
Since
$\max\\{1,|x|^{n}\\}\leq 2^{n}\max\\{1,|\lambda|^{n}\\}$
for $x\in\operatorname{supp}\varphi_{\lambda,k,\varepsilon}$, we get
$|p_{k}(\lambda)|\leq\sum_{\|\alpha+\beta\|\leq m,\,\beta_{j}\leq
k_{j}\,\forall
j}c(k,\alpha,\beta)\max\\{1,|\lambda|^{n}\\}\varepsilon^{\|k\|-\|\alpha+\beta\|}.$
For $\|k\|>m$ we take $\varepsilon\to 0$ and obtain $p_{k}(\lambda)=0$; hence
$K=\operatorname{ord}f\leq m<\infty$ and (3) holds.
Since $\hat{f}\in S^{*}({\mathbb{R}}^{d})$ as well, the same argument applied
to $\hat{f}$ gives (4).
ii) Let $\Lambda$ be $p$-discrete, and suppose that (5) is not satisfied. Then
there are $k$ with $\|k\|\leq K$ and a sequence $\lambda_{s}\to\infty$ such
that $|\lambda_{s+1}|>1+|\lambda_{s}|$ for all $s$ and
(8) $\log|p_{k}(\lambda_{s})|/\log|\lambda_{s}|\to\infty,\quad s\to\infty.$
Put $\beta_{s}=c|\lambda_{s}|^{-h}$ with $c$ from (2) and
$\psi_{s,k}(x)=\frac{(x-\lambda_{s})^{k}}{k!}\varphi\left(\frac{|x-\lambda_{s}|}{\beta_{s}}\right),\qquad\Psi_{k}(x)=\sum_{s=1}^{\infty}\frac{\psi_{s,k}(x)}{p_{k}(\lambda_{s})}.$
We may suppose that $\lambda_{1}$ is large enough that
$\operatorname{supp}\psi_{s,k}\cap\operatorname{supp}\psi_{s^{\prime},k}=\emptyset$
for $s\neq s^{\prime}$. Then by (8),
$1/p_{k}(\lambda_{s})=o(1/|\lambda_{s}|^{T}),\quad|\lambda_{s}|\to\infty,\quad\mbox{
for every }T<\infty.$
Since
$D^{j}(\psi_{s,k}(x))=O(|\lambda_{s}|^{h\|j\|}),\
j\in({\mathbb{N}}\cup\\{0\\})^{d},$
we see that
$D^{j}(\Psi_{k}(x))=o(1/|x|^{T-h\|j\|}),\quad x\to\infty,$
and $\Psi_{k}\in S({\mathbb{R}}^{d})$.
Since $\Lambda$ is $p$-discrete, we get $\lambda\not\in
B(\lambda_{s},c|\lambda_{s}|^{-h})$ for all
$\lambda\in\Lambda\setminus\\{\lambda_{s}\\}$. Therefore, $f(\Psi_{k})$ is
equal to
$\sum_{\lambda\in\Lambda}\sum_{\|l\|\leq
K}\sum_{s}(-1)^{\|l\|}p_{l}(\lambda)p_{k}(\lambda_{s})^{-1}D^{l}(\psi_{s,k})(\lambda)=\sum_{s}\sum_{\|l\|\leq
K}(-1)^{\|l\|}p_{l}(\lambda_{s})p_{k}(\lambda_{s})^{-1}D^{l}(\psi_{s,k})(\lambda_{s}).$
Since $D^{l}(\psi_{s,k})(\lambda_{s})=0$ for $l\neq k$ and
$D^{k}(\psi_{s,k})(\lambda_{s})=1$, we obtain the contradiction.
Estimate (6) follows immediately from (5) and the following simple lemma:
###### Lemma 1 (cf. [8], a part of the proof of Theorem 6).
If $S$ is a $p$-discrete set, then $\\#S\cap B(0,R)=O(R^{T^{\prime}})$ as
$R\to\infty$ with some $T^{\prime}<\infty$.
Remark. Proposition 1 was proved earlier by V. Palamodov [23] for temperate
distributions with uniformly discrete support.
###### Proposition 2.
Let $\mu\in S^{*}({\mathbb{R}}^{d})$ be a measure. Then $|\mu|$ belongs to
$S^{*}({\mathbb{R}}^{d})$ if and only if there is $T<\infty$ such that
$|\mu|(B(0,R))=O(R^{T})$ as $R\to\infty$.
Proof. Any non-negative measure $\nu$ on ${\mathbb{R}}^{d}$ satisfying the
condition $\nu(B(0,R))=O(R^{T})$ as $R\to\infty$ belongs to
$S^{*}({\mathbb{R}}^{d})$ (cf. [26]). For the converse statement, see [6], Lemma 1.
From Propositions 1 and 2 we obtain
###### Theorem 1 (cf. also [8]).
Let a measure $\mu$ have $p$-discrete support and belong to
$S^{*}({\mathbb{R}}^{d})$. Then $|\mu|\in S^{*}({\mathbb{R}}^{d})$ too. In
particular, every crystalline measure with $p$-discrete support and
$p$-discrete spectrum is a Fourier quasicrystal.
M. Kolountzakis and J. Lagarias proved in [11] that the Fourier transform of
every measure $\mu$ on the line ${\mathbb{R}}$ with locally finite support of
bounded density, bounded masses $\mu(x)$, and locally finite spectrum is also
a measure $\hat{\mu}=\sum_{\gamma\in\Gamma}q_{\gamma}\delta_{\gamma}$ with
uniformly bounded $q_{\gamma}$. The following proposition generalizes this
result to distributions from $S^{*}({\mathbb{R}}^{d})$.
###### Proposition 3.
Suppose $f\in S^{*}({\mathbb{R}}^{d})$ has form (3) with some $K$ and
countable $\Lambda$, and $\hat{f}$ has form (4) with the locally finite
support $\Gamma$. If
$\rho_{f}(r):=\sum_{|\lambda|<r}\sum_{\|k\|\leq
K}|p_{k}(\lambda)|=O(r^{d+H}),\quad r\to\infty,\quad H\geq 0,$
then $\operatorname{ord}\hat{f}\leq H$; if $\rho_{f}(r)=o(r^{d+H})$ as
$r\to\infty$, then $\operatorname{ord}\hat{f}<H$.
Furthermore, in the case of integer $H$ and $\|j\|=H$ we have
$|q_{j}(\gamma)|\leq C^{\prime}\max\\{1,|\gamma|^{K}\\}$; for the case of
uniformly discrete $\Gamma$ this estimate with the same $K$ takes place for
all $j$.
###### Corollary 1.
If $f\in S^{*}({\mathbb{R}}^{d})$ has form (3) with countable $\Lambda$,
locally finite spectrum $\Gamma$, and $\rho_{f}(r)=O(r^{d})$ as $r\to\infty$,
then $\hat{f}$ is a measure, and
$\hat{f}=\sum_{\gamma\in\Gamma}q(\gamma)\delta_{\gamma},\ |q(\gamma)|\leq
C^{\prime}\max\\{1,|\gamma|^{K}\\}.$
Proof of Proposition 3. Let $\gamma\in\Gamma$ and pick $\varepsilon\in(0,1)$
such that
$\inf\\{|\gamma-\gamma^{\prime}|:\,\gamma^{\prime}\in\Gamma,\,\gamma^{\prime}\neq\gamma\\}>\varepsilon.$
Let $\varphi$ be the same as in the proof of Proposition 1. Put
$\varphi_{\gamma,l,\varepsilon}(y)=\frac{(y-\gamma)^{l}}{l!}\varphi(|y-\gamma|/\varepsilon)\in
S({\mathbb{R}}^{d}).$
We have
$(-1)^{\|l\|}q_{l}(\gamma)=\sum_{\|j\|\leq
J}q_{j}(\gamma)D^{j}\delta_{\gamma}(\varphi_{\gamma,l,\varepsilon}(y))=(\hat{f},\varphi_{\gamma,l,\varepsilon})=(f,\hat{\varphi}_{\gamma,l,\varepsilon}).$
Note that
$\hat{\varphi}_{\gamma,l,\varepsilon}(x)=e^{-2\pi i\langle
x,\gamma\rangle}(l!)^{-1}(-2\pi
i)^{-\|l\|}D^{l}(\widehat{\varphi(\cdot/\varepsilon)})=c(l)e^{-2\pi i\langle
x,\gamma\rangle}\varepsilon^{d+\|l\|}(D^{l}\hat{\varphi})(\varepsilon x).$
Therefore,
$D^{k}(\hat{\varphi}_{\gamma,l,\varepsilon})(x)=\varepsilon^{d+\|l\|}\sum_{\alpha+\beta=k}c(\alpha,\beta)D^{\alpha}\left[e^{-2\pi
i\langle x,\gamma\rangle}\right]D^{\beta}[(D^{l}\hat{\varphi})(\varepsilon
x)]$ $=\sum_{\alpha+\beta=k}c(\alpha,\beta)(-2\pi
i)^{\|\alpha\|}\gamma^{\alpha}e^{-2\pi i\langle
x,\gamma\rangle}\varepsilon^{d+\|l\|+\|\beta\|}(D^{\beta+l}\hat{\varphi})(\varepsilon
x).$
Since $\hat{\varphi}(\varepsilon x)\in S({\mathbb{R}}^{d})$, we get for every
$x\in{\mathbb{R}}^{d}$ and $n\in{\mathbb{N}}\cup\\{0\\}$
$|D^{\beta+l}(\hat{\varphi})(\varepsilon x)|\leq
N_{n,\|\beta+l\|}(\hat{\varphi})(\max\\{1,|\varepsilon x|^{n}\\})^{-1}.$
Therefore for every $k$, $\|k\|\leq K$,
$|D^{k}(\hat{\varphi}_{\gamma,l,\varepsilon})(x)|\leq
C(K,n)\varepsilon^{d+\|l\|}\max\\{1,|\gamma|^{K}\\}(\max\\{1,|\varepsilon
x|^{n}\\})^{-1},$
where $C(K,n)$ depends on $\varphi$. Now we may estimate
$(f,\hat{\varphi}_{\gamma,l,\varepsilon})$ as
(9)
$\left|\sum_{k}\sum_{\lambda}p_{k}(\lambda)D^{k}(\hat{\varphi}_{\gamma,l,\varepsilon})(\lambda)\right|\leq
C(K,n)\varepsilon^{d+\|l\|}\max\\{1,|\gamma|^{K}\\}\int_{0}^{\infty}\frac{\rho_{f}(dt)}{\max\\{1;\,(\varepsilon
t)^{n}\\}}.$
If $\rho_{f}(r)=O(r^{d+H})$ as $r\to\infty$, take $t_{0}$ such that
$\rho_{f}(t)<C_{0}t^{d+H}$ for $t>t_{0}$. If $\rho_{f}(r)=o(r^{d+H})$, fix any
$\eta>0$ and take $t_{0}=t_{0}(\eta)$ such that $\rho_{f}(t)<\eta t^{d+H}$ for
$t>t_{0}$. Then pick $n>d+H$ and $\varepsilon<1/t_{0}$. Integrating by parts
and using the estimate for $\rho_{f}(t)$, we obtain
$\int_{0}^{\infty}\max\\{1,(\varepsilon
t)^{n}\\}^{-1}\rho_{f}(dt)=\rho_{f}(1/\varepsilon)+\int_{1/\varepsilon}^{\infty}(\varepsilon
t)^{-n}\rho_{f}(dt)\leq\frac{nC_{0}}{\varepsilon^{n}}\int_{1/\varepsilon}^{\infty}t^{d+H-n-1}dt.$
Therefore, the left-hand side of (9) is not more than
$\varepsilon^{\|l\|-H}C_{0}C^{\prime}\max\\{1,|\gamma|^{K}\\}$, and
$|q_{l}(\gamma)|\leq
C^{\prime}C_{0}\max\\{1,|\gamma|^{K}\\}\varepsilon^{\|l\|-H}.$
If $\|l\|>H$, we take $\varepsilon\to 0$ and get $q_{l}(\gamma)=0$, hence,
$J=\operatorname{ord}\hat{f}\leq H$.
If $H$ is integer, we get $|q_{l}(\gamma)|\leq
C^{\prime}C_{0}\max\\{1,|\gamma|^{K}\\}$ for $\|l\|=H$.
If $\rho_{f}(r)=o(r^{d+H})$, we replace $C_{0}$ by $\eta$ and note that $\eta$
is arbitrarily small for $\varepsilon$ small enough. Hence, $q_{l}(\gamma)=0$
for $\|l\|=H$.
Finally, if $\Gamma$ is uniformly discrete, we take
$\varepsilon=\varepsilon_{0}<\eta(\Gamma)/2$ for all $\gamma\in\Gamma$ and
obtain the bound
$|q_{l}(\gamma)|\leq\varepsilon_{0}^{-H}C^{\prime}C_{0}\max\\{1,|\gamma|^{K}\\}\quad\forall
l,\|l\|\leq J.$
## 3\. Almost periodic distributions and their properties
Recall that a continuous function $g$ on ${\mathbb{R}}^{d}$ is almost periodic
if for any $\varepsilon>0$ the set of $\varepsilon$-almost periods of $g$
$\\{\tau\in{\mathbb{R}}^{d}:\,\sup_{x\in{\mathbb{R}}^{d}}|g(x+\tau)-g(x)|<\varepsilon\\}$
is a relatively dense set in ${\mathbb{R}}^{d}$.
Almost periodic functions are uniformly bounded on ${\mathbb{R}}^{d}$. The
class of almost periodic functions is closed under taking absolute values and
finite linear combinations; the limit of a sequence of almost periodic
functions converging uniformly on ${\mathbb{R}}^{d}$ is also almost periodic.
A typical example of an almost periodic function is any absolutely
convergent exponential sum $\sum c_{n}\exp\\{2\pi i\langle
x,\omega_{n}\rangle\\}$ with
$\omega_{n}\in{\mathbb{R}}^{d},\,c_{n}\in{\mathbb{C}}$ (cf., for example, [1],
[22]).
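As a small numerical illustration (not part of the paper), the sum $\sin(2\pi x)+\sin(2\pi\sqrt{2}x)$ is almost periodic but not periodic; shifts $\tau$ for which $\sqrt{2}\tau$ is nearly an integer, such as $\tau=29$ (coming from the continued fraction of $\sqrt{2}$), are $\varepsilon$-almost periods with small $\varepsilon$, while a generic shift is not:

```python
import numpy as np

# f(x) = sin(2*pi*x) + sin(2*pi*sqrt(2)*x): almost periodic, not periodic.
def f(x):
    return np.sin(2 * np.pi * x) + np.sin(2 * np.pi * np.sqrt(2) * x)

# Dense grid standing in for the supremum over all of R.
x = np.linspace(0.0, 500.0, 1_000_001)

def sup_shift_diff(tau):
    """Approximate sup_x |f(x + tau) - f(x)| on the grid."""
    return float(np.max(np.abs(f(x + tau) - f(x))))

# sqrt(2) * 29 = 41.012..., so tau = 29 shifts the second term by about
# 2*pi*0.0122 < 0.08 in phase, while the first term is shifted by a full
# period; a generic shift like tau = 0.5 gives an order-one deviation.
good = sup_shift_diff(29.0)
bad = sup_shift_diff(0.5)
```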
A measure $\mu$ on ${\mathbb{R}}^{d}$ is called almost periodic if the
function
$(\psi\star\mu)(t)=\int_{{\mathbb{R}}^{d}}\psi(t-x)d\mu(x)$
is almost periodic in $t\in{\mathbb{R}}^{d}$ for each continuous function
$\psi$ on ${\mathbb{R}}^{d}$ with compact support. A distribution $f\in
S^{*}({\mathbb{R}}^{d})$ is almost periodic if the function $(\psi\star
f)(t)=f(\psi(t-\cdot))$ is almost periodic in $t\in{\mathbb{R}}^{d}$ for each
$\psi\in C^{\infty}$ with compact support (see [13], [25], [19], [20], [22]).
Clearly, every almost periodic distribution has relatively dense support.
However, there are measures that are almost periodic as temperate
distributions but not almost periodic as measures (see [19]).
###### Definition.
A distribution $f\in S^{*}({\mathbb{R}}^{d})$ is s-almost periodic if the
function $(\psi\star f)(t)=f(\psi(t-\cdot))$ is almost periodic in
$t\in{\mathbb{R}}^{d}$ for each $\psi\in S({\mathbb{R}}^{d})$.
The following theorem plays a very important role in our investigations.
###### Theorem 2.
If $f$ is a temperate distribution and its Fourier transform $\hat{f}$ is a
pure point measure such that $|\hat{f}|(B(0,r))=O(r^{T})$ as $r\to\infty$
with some $T<\infty$, then $f$ is an s-almost periodic distribution.
The proof is very easy. Let
$\hat{f}=\sum_{\gamma\in\Gamma}b(\gamma)\delta_{\gamma}$,
$M(r)=|\hat{f}|(B(0,r))$. For any $\psi\in S({\mathbb{R}}^{d})$ we have
(10) $f(\psi(t-\cdot))=(\hat{f}(y),\hat{\psi}(y)e^{2\pi i\langle
t,y\rangle})=\sum_{\gamma\in\Gamma}b(\gamma)\hat{\psi}(\gamma)e^{2\pi i\langle
t,\gamma\rangle}.$
Since $|\hat{\psi}(y)|\leq N_{T-1,0}(\hat{\psi})|y|^{-T-1}$ for $|y|>1$ and
$\sum_{\gamma\in\Gamma}|b(\gamma)||\hat{\psi}(\gamma)|\leq
C_{0}+C_{1}\int_{1}^{\infty}r^{-T-1}M(dr)<\infty,$
we see that the series in (10) converges absolutely, and the function
$(f\star\psi)(t)$ is almost periodic.
From Proposition 2 we obtain
###### Corollary 2.
If $f\in S^{*}({\mathbb{R}}^{d})$, $\hat{f}$ is a pure point measure, and
$|\hat{f}|\in S^{*}({\mathbb{R}}^{d})$, then $f$ is an s-almost periodic
distribution. In particular, every Fourier quasicrystal is an s-almost
periodic distribution.
Using Proposition 1, we also get
###### Corollary 3.
Let $f\in S^{*}({\mathbb{R}}^{d})$ have a $p$-discrete spectrum, and let its
Fourier transform $\hat{f}$ be a measure. Then $f$ is an s-almost periodic
distribution.
###### Corollary 4.
Let $f\in S^{*}({\mathbb{R}}^{d})$ of the form (3) have a locally finite
spectrum $\Gamma$ such that the numbers $\\#(\Gamma\cap B(0,r))$ grow
polynomially. If
$\sum_{\|k\|\leq
K}\sum_{|\lambda|<r}|p_{k}(\lambda)|=O(r^{d})\quad\mbox{for}\quad r\to\infty,$
then $f$ is an s-almost periodic distribution.
Indeed, by Corollary 1, $\hat{f}$ is a measure with polynomially growing
coefficients.
Evidently, every s-almost periodic distribution is also an almost periodic
distribution. I do not know whether there is an almost periodic distribution
that is not s-almost periodic. However, the following assertion holds:
###### Theorem 3.
Every non-negative measure $\mu\in S^{*}({\mathbb{R}}^{d})$ that is almost
periodic in the sense of distributions is an s-almost periodic distribution.
The same implication holds if $\mu$ is a complex-valued measure on
${\mathbb{R}}^{d}$ such that
(11) $\sup_{x\in{\mathbb{R}}^{d}}|\mu|(B(x,1))<\infty.$
It is easy to check that every measure $\mu$ satisfying condition (11) that
is almost periodic in the sense of distributions is also almost periodic in
the sense of measures. Proofs of Theorem 3 and of the subsequent theorems are
given in Section 4.
###### Theorem 4.
If $f\in S^{*}({\mathbb{R}}^{d})$ is an almost periodic distribution with
locally finite spectrum $\Gamma$, then $\hat{f}$ is a measure.
We now show that $p$-discreteness of the support of a measure is closely
connected with s-almost periodicity of its Fourier transform:
###### Theorem 5.
In order for each measure $\mu\in S^{*}({\mathbb{R}}^{d})$ with support in a
fixed locally finite set $A\subset{\mathbb{R}}^{d}$ to have s-almost periodic
Fourier transform $\hat{\mu}$, it is necessary and sufficient that $A$ be
$p$-discrete.
Moreover, if $\hat{\mu}\star\psi(t)$ is bounded for all $\psi\in
S({\mathbb{R}}^{d})$ and $\mu\in S^{*}({\mathbb{R}}^{d})$ with
$\operatorname{supp}\mu\subset A$, then $A$ is $p$-discrete too.
###### Theorem 6.
There is a crystalline measure $\mu$ on ${\mathbb{R}}$ such that for some
$\psi\in S({\mathbb{R}})$ the function $(\mu\star\psi)(t)$ is unbounded in
$t\in{\mathbb{R}}$. In particular, $\mu$ is neither an s-almost periodic
distribution nor a Fourier quasicrystal.
Remark. Y. Meyer formulated in [20] as a theorem that any crystalline measure
is an almost periodic distribution. He then wrote in [21] that the proof of
this theorem is incorrect and stated the corresponding result as
Conjecture 2.1. Theorem 6 gives only a partial answer to this conjecture,
because the function $\psi\in S({\mathbb{R}})$ constructed in the proof
of Theorem 6 does not have compact support. We do not know whether there is a
function $\tilde{\psi}\in C^{\infty}$ with compact support such that
$(\mu\star\tilde{\psi})(t)$ is unbounded.
## 4\. Proofs of the theorems
Proof of Theorem 3. Let $\varphi$ be a non-negative $C^{\infty}$ function with
compact support such that $\varphi(x)\equiv 1$ for $x\in B(0,1)$. Since
$\varphi\star\mu(t)$ is an almost periodic function, we see that it is
uniformly bounded. If $\mu\geq 0$, we get $\mu(B(x,1))<C$ for all
$x\in{\mathbb{R}}^{d}$.
Set $\mu^{t}(x):=\mu(t-x)$ with $t\in{\mathbb{R}}^{d}$. For every complex-valued measure $\mu$ under condition (11) we get
$M(r):=|\mu^{t}|(B(0,r))<Cr^{d}$ for all $r>1$, where the constant $C$ is the
same for all $t$. Take $\psi\in S({\mathbb{R}}^{d})$. Then $|\psi(x)|\leq
C_{1}|x|^{-d-1}$ for $|x|>1$. For any $\varepsilon>0$ there is $R<\infty$,
not depending on $t$, such that
$\left|\int_{|x|>R}\psi(x)\mu^{t}(dx)\right|\leq
C_{1}\int_{R}^{\infty}r^{-d-1}M(dr)\leq
C_{1}(d+1)\int_{R}^{\infty}M(r)r^{-d-2}dr<\varepsilon/3.$
Therefore for all $t\in{\mathbb{R}}^{d}$
(12)
$\left|\int_{|t-x|>R}\psi(t-x)\mu(dx)\right|=\left|\int_{|x^{\prime}|>R}\psi(x^{\prime})\mu^{t}(dx^{\prime})\right|<\varepsilon/3.$
Let $\xi(x)$ be a $C^{\infty}$ function on ${\mathbb{R}}^{d}$ such that
$0\leq\xi\leq 1,\quad\xi(x)\equiv 1\quad\text{for}\quad|x|<R,\quad\xi(x)\equiv
0\quad\text{for}\quad|x|>R+1.$
The function $(\xi\psi)\star\mu(t)$ is almost periodic, hence there is a
relatively dense set $E\subset{\mathbb{R}}^{d}$ such that for any $\tau\in E$
and all $t\in{\mathbb{R}}^{d}$
$|(\xi\psi)\star\mu(t+\tau)-(\xi\psi)\star\mu(t)|<\varepsilon/3.$
Applying (12) to $(1-\xi(t+\tau))\psi(t+\tau)$ and $(1-\xi(t))\psi(t)$, we
obtain
$|\psi\star\mu(t+\tau)-\psi\star\mu(t)|\leq|(\xi\psi)\star\mu(t+\tau)-(\xi\psi)\star\mu(t)|+|(1-\xi)\psi\star\mu(t+\tau)|+|(1-\xi)\psi\star\mu(t)|<\varepsilon.$
Hence, $E$ is the set of $\varepsilon$-almost periods for the function
$\psi\star\mu$.
Proof of Theorem 4. Let $f$ be an almost periodic temperate distribution with
a locally finite spectrum $\Gamma$. By Proposition 1, $\hat{f}$ has form (4).
Suppose that $J\neq 0$ and $q_{j^{\prime}}(\gamma^{\prime})\neq 0$ for some
$\gamma^{\prime}\in\Gamma$ and
$j^{\prime}=(j^{\prime}_{1},\dots,j^{\prime}_{d}),\,\|j^{\prime}\|=J$. Without
loss of generality suppose that $j^{\prime}_{1}\neq 0$. Set
(13) $e_{1}=(1,0,\dots,0),\quad e_{2}=(0,1,\dots,0),\dots,\
e_{d}=(0,\dots,0,1),$
and $j^{\prime\prime}=j^{\prime}-e_{1}$. Pick
$\varepsilon<\min\\{|\gamma^{\prime}-\gamma|:\,\gamma\in\Gamma\\}$, and set
$\varphi_{\gamma^{\prime},j^{\prime\prime},\varepsilon}(y)=\frac{(y-\gamma^{\prime})^{j^{\prime\prime}}}{j^{\prime\prime}!}\varphi\left(\frac{|y-\gamma^{\prime}|}{\varepsilon}\right),$
where $\varphi$ is defined in (7). We have
(14) $\hat{f}(e^{2\pi i\langle
y,t\rangle}\varphi_{\gamma^{\prime},j^{\prime\prime},\varepsilon}(y))=\sum_{\gamma\in\Gamma}\sum_{\|j\|\leq
J}(-1)^{\|j\|}q_{j}(\gamma)D^{j}(e^{2\pi i\langle
y,t\rangle}\varphi_{\gamma^{\prime},j^{\prime\prime},\varepsilon}(y))(\gamma)$
Since
$D^{j}(\varphi_{\gamma^{\prime},j^{\prime\prime},\varepsilon}(y))(\gamma)=\left\\{\begin{array}[]{l}0\mbox{
if }\gamma\neq\gamma^{\prime}\mbox{ or }j\neq j^{\prime\prime},\\\ 1\mbox{ if
}\gamma=\gamma^{\prime}\mbox{ and }j=j^{\prime\prime},\end{array}\right.$
we see that expression (14) is equal to
$(-1)^{J}q_{j^{\prime}}(\gamma^{\prime})2\pi it_{1}e^{2\pi
i\langle\gamma^{\prime},t\rangle}+(-1)^{J}\sum_{s=2}^{d}q_{j^{\prime\prime}+e_{s}}(\gamma^{\prime})2\pi
it_{s}e^{2\pi
i\langle\gamma^{\prime},t\rangle}+(-1)^{J-1}q_{j^{\prime\prime}}(\gamma^{\prime})e^{2\pi
i\langle\gamma^{\prime},t\rangle}.$
The first summand is unbounded in $t_{1}\in{\mathbb{R}}$, hence the function
$f(\hat{\varphi}_{\gamma^{\prime},j^{\prime\prime},\varepsilon}(x-t))=\hat{f}(e^{2\pi
i\langle
y,t\rangle}\varphi_{\gamma^{\prime},j^{\prime\prime},\varepsilon}(-y))$
is unbounded and not almost periodic. We obtain a contradiction; therefore,
$J=0$ and $\hat{f}$ is a measure.
In order to prove Theorems 5 and 6, we need the following proposition:
###### Proposition 4.
Let $\lambda_{n},\tau_{n}\in{\mathbb{R}}^{d}$ be two sequences such that
$\tau_{n}\to 0$, $|\lambda_{n}|>|\lambda_{n-1}|+1$ for all $n$, and
(15) $\log|\tau_{n}|/\log|\lambda_{n}|\to-\infty\quad\mbox{as}\quad
n\to\infty.$
Let $\mu$ be any measure from $S^{*}({\mathbb{R}}^{d})$ such that its
restriction for each ball $B(\lambda_{n},1/(2|\lambda_{n}|))$ equals
$|\tau_{n}|^{-2/3}(\delta_{\lambda_{n}+\tau_{n}}-\delta_{\lambda_{n}})$. Then
there is $\psi\in S({\mathbb{R}}^{d})$ such that $\hat{\mu}\star\hat{\psi}(t)$
is unbounded. In particular, $\hat{\mu}$ is not an s-almost periodic
distribution.
Proof. By thinning out the sequence $\tau_{n}$, we can assume that for all $n$
(16) $\sum_{p<n}|\tau_{p}|^{-1/3}<(1/3)|\tau_{n}|^{-1/3},$
and
(17) $\sum_{p>n}|\tau_{p}|^{2/3}<(2/(3\pi))|\tau_{n}|^{2/3}.$
Set
$\psi(x)=\sum_{n}|\tau_{n}|^{1/3}\varphi(|\lambda_{n}||x-\lambda_{n}|),$
where $\varphi$ is defined in (7). By (15),
$|\tau_{n}|=o(1/|\lambda_{n}|^{T})$ as $n\to\infty$ for every $T<\infty$.
Therefore, for all $k\in({\mathbb{N}}\cup\\{0\\})^{d},\ N\in{\mathbb{N}}$ we
have $D^{k}\psi(x)=o(|\lambda_{n}|^{-N})$ for $x\in
B(\lambda_{n},1/(2|\lambda_{n}|))$. Hence, $(D^{k}\psi)(x)(1+|x|^{N})$ is
bounded on ${\mathbb{R}}^{d}$ for all $N$ and $k$, i.e., $\psi\in
S({\mathbb{R}}^{d})$. By (7), $\psi(x)=0$ for
$x\not\in\cup_{n}B(\lambda_{n},1/(2|\lambda_{n}|))$. Hence, for every
$t\in{\mathbb{R}}^{d}$
$\hat{\mu}(\hat{\psi}(t-y))=\mu(\psi(x)e^{-2\pi i\langle
x,t\rangle})=\sum_{n=1}^{\infty}|\tau_{n}|^{-1/3}[\varphi(|\tau_{n}||\lambda_{n}|)e^{-2\pi
i\langle(\lambda_{n}+\tau_{n}),t\rangle}-\varphi(0)e^{-2\pi
i\langle\lambda_{n},t\rangle}].$
For large $n$ we have $|\tau_{n}|<1/(3|\lambda_{n}|)$, therefore,
$\varphi(|\tau_{n}||\lambda_{n}|)=\varphi(0)=1$. Besides, for
$t=\tau_{n}/(2|\tau_{n}|^{2})$
$|e^{-2\pi i\langle(\lambda_{n}+\tau_{n}),t\rangle}-e^{-2\pi
i\langle\lambda_{n},t\rangle}|=|e^{-2\pi i\langle\tau_{n},t\rangle}-1|=2.$
Therefore,
(18) $|\hat{\mu}(\hat{\psi}(t-y))|\geq
2|\tau_{n}|^{-1/3}-2\sum_{p<n}|\tau_{p}|^{-1/3}-\sum_{p>n}|\tau_{p}|^{-1/3}|e^{-2\pi
i\langle\tau_{p},t\rangle}-1|.$
Taking into account (16), (17), and the estimates
$|e^{-2\pi i\langle\tau_{p},t\rangle}-1|\leq
2\pi|\tau_{p}||t|=\pi|\tau_{p}||\tau_{n}|^{-1},$
we obtain that (18) is greater than $2|\tau_{n}|^{-1/3}/3$. Hence the
convolution $(\hat{\mu}\star\hat{\psi})(t)$ is unbounded on the sequence
$t=\tau_{n}/(2|\tau_{n}|^{2})$, and the distribution $\hat{\mu}$ is not
s-almost periodic.
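The quantitative mechanism of this proof can be illustrated numerically. For the purely illustrative choice $\tau_{n}=2^{-4^{n}}$ (our own; any sufficiently rapidly decreasing sequence would do), conditions (16) and (17) hold, and the lower bound for (18) obtained from the estimates above indeed exceeds $2|\tau_{n}|^{-1/3}/3$ and grows without bound:

```python
import math

# Illustrative rapidly decreasing sequence (our choice, not from the text):
# tau_n = 2**(-4**n), n = 1..4 (larger n underflows double precision).
tau = {n: 2.0 ** (-(4 ** n)) for n in range(1, 5)}

def lower_bound(n):
    # Right-hand side of inequality (18): the main term minus the two tails,
    # the third sum estimated via |e^{-2 pi i <tau_p, t>} - 1| <= pi tau_p / tau_n
    # at the special point t = tau_n / (2 tau_n^2).
    main = 2.0 * tau[n] ** (-1.0 / 3.0)
    before = 2.0 * sum(tau[p] ** (-1.0 / 3.0) for p in tau if p < n)
    after = sum(math.pi * tau[p] ** (2.0 / 3.0) / tau[n] for p in tau if p > n)
    return main - before - after

bounds = [lower_bound(n) for n in range(1, 5)]
```

The computed bounds exceed $(2/3)|\tau_{n}|^{-1/3}$ for every $n$ and blow up along the sequence, which is exactly the unboundedness exploited in the proof.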
Proof of Theorem 5. Suppose that $A$ is not a $p$-discrete set. Then there are
two sequences $\lambda_{n},\lambda^{\prime}_{n}\in A$ such that $\lambda_{n}$
and $\tau_{n}:=\lambda^{\prime}_{n}-\lambda_{n}$ satisfy (15) and
$|\lambda_{n}|>1+|\lambda_{n-1}|$. We check that the measure
$\mu=\sum_{n}|\tau_{n}|^{-2/3}[\delta_{\lambda^{\prime}_{n}}-\delta_{\lambda_{n}}]$
belongs to $S^{*}({\mathbb{R}}^{d})$.
For any $\phi\in S({\mathbb{R}}^{d})$ we have
$|(\mu,\phi)|\leq\sum_{n}|\tau_{n}|^{-2/3}|\phi(\lambda^{\prime}_{n})-\phi(\lambda_{n})|\leq\sum_{n}|\tau_{n}|^{1/3}N_{0,1}(\phi),$
where $N_{0,1}(\phi)$ is defined in (1). By (15), $\tau_{n}=O(n^{-T})$ for any
$T<\infty$, therefore the sum converges, $\mu$ satisfies (1), and $\mu$ is a
temperate distribution. Applying Proposition 4, we obtain that $\hat{\mu}$ is
not s-almost periodic.
Proposition 4 actually implies the unboundedness of the convolution
$\hat{\mu}\star\hat{\psi}$ with some $\psi\in S({\mathbb{R}}^{d})$, which
proves the last part of the theorem.
Sufficiency follows from Corollary 3.
Our proof of Theorem 6 uses the following lemmas:
###### Lemma 2 (Y.Meyer [20], Lemma 7).
Let $\alpha\in(0,1/6)$. For every integer $M>M_{\alpha}$ there exists an
$M$-periodic locally finite measure $\sigma=\sigma_{M}$ that is a sum of Dirac
masses on $\Lambda_{M}=M^{-1}{\mathbb{Z}}\setminus[-\alpha M,\alpha M]$, and
whose Fourier transform is also supported by $\Lambda_{M}$. To be precise,
$\operatorname{supp}\sigma=M^{-1}{\mathbb{Z}}\setminus\left[\cup_{k\in{\mathbb{Z}}}(kM+(-\alpha
M,\alpha M))\right].$
For $\tau\in{\mathbb{R}}$ set
$\sigma_{M}^{\tau}=\sum_{\lambda\in\operatorname{supp}\sigma_{M}}\delta_{\lambda+\tau}$.
###### Lemma 3.
Let $M\geq 16,\,\alpha=1/8$. Then for any $\phi\in S({\mathbb{R}})$ and
$\tau\in(0,1/2)$ we have
(19) $|(\sigma_{M}^{2\tau}-\sigma_{M}^{\tau},\phi)|\leq C^{\prime}\tau
N_{2,1}(\phi),$
where $N_{2,1}(\phi)$ is defined in (1), and $C^{\prime}$ is an absolute
constant.
Proof of Lemma 3. Taking into account that $|\lambda|\geq\alpha M>2$ for every
$\lambda\in\operatorname{supp}\sigma_{M}$, we get
$|((\delta_{\lambda+2\tau}-\delta_{\lambda+\tau}),\phi)|\leq\max_{0\leq\theta\leq
1}|\phi^{\prime}(\lambda+(1+\theta)\tau)|\tau\leq\tau
N_{2,1}(\phi)|\lambda|^{-2}.$
Note that the number of points of
$\operatorname{supp}\sigma_{M}^{2\tau}\cup\operatorname{supp}\sigma_{M}^{\tau}$
on the interval
$((k-1)M+(1/8)M,kM-(1/8)M),\,k\geq 1,$
is less than $2M^{2}$. Therefore,
$|(\sigma_{M}^{2\tau}-\sigma_{M}^{\tau},\phi)|\leq 2M^{2}\tau
N_{2,1}(\phi)\sum_{k=1}^{\infty}[(k-1)M+M/8]^{-2}.$
Proof of Theorem 6. Let $\tau_{n}$ be a sequence of positive numbers,
linearly independent over ${\mathbb{Z}}$, such that $\tau_{n}<1/16$ and
$\log\tau_{n}/n\to-\infty$ as $n\to\infty$. Let $\Lambda_{M_{n}}$ be the set
defined in Lemma 2 with $M_{n}=16^{n},\ \alpha=1/8$, and $\sigma_{M_{n}}$ be
the corresponding measure. It is not hard to check that the sets
$\Lambda_{M_{n}}+\tau_{n},\Lambda_{M_{p}}+2\tau_{p},\ n,p\in{\mathbb{N}},$ are
mutually disjoint for $n,\,p\geq n_{0}$ if $n_{0}$ is large enough. Since
$\Lambda_{M_{p}}\cap(-\alpha M_{n},\alpha M_{n})=\emptyset$ for $p\geq n$, we
see that the measure
$\nu=\sum_{n\geq
n_{0}}\tau_{n}^{-2/3}(\sigma_{M_{n}}^{2\tau_{n}}-\sigma_{M_{n}}^{\tau_{n}})$
is locally finite for appropriate $n_{0}$. Using Lemma 3, we obtain
$|(\nu,\phi)|\leq\sum_{n\geq n_{0}}C^{\prime}\tau_{n}^{1/3}N_{2,1}(\phi).$
Since $\tau_{n}\leq n^{-6}$ for $n$ large enough, we see that the sum
converges, the measure $\nu$ satisfies condition (1), and $\nu\in
S^{*}({\mathbb{R}})$. Moreover, its spectrum is a subset of the locally finite
set $\cup_{n\geq n_{0}}\Lambda_{M_{n}}$, therefore the measure
$\mu=\check{\nu}$ is a crystalline measure. By Proposition 4, to check that
$\mu$ is not an s-almost periodic distribution, it suffices to find a sequence
$\lambda_{n}\to\infty$ such that $\log\lambda_{n}=O(n)$ and
$(\lambda_{n}-1/(2\lambda_{n}),\lambda_{n}+1/(2\lambda_{n}))\cap\operatorname{supp}\nu=\\{\lambda_{n},\lambda_{n}+\tau_{n}\\}.$
Fix $n\geq n_{0}$. Set $\eta_{n,j}=M_{n}+j/M_{n}$ with $j\in{\mathbb{N}}$ such
that $M_{n}+M_{n}/8\leq\eta_{n,j}\leq 2M_{n}-M_{n}/8$, and set
$I_{n,j}:=(\eta_{n,j}-1/(2\eta_{n,j}),\eta_{n,j}+1/(2\eta_{n,j}))$. For every
fixed $n$ these intervals do not intersect. Since
$2M_{n}-M_{n}/8+2\tau_{n}<M_{p}/8$ for $p>n$, we get
$I_{n,j}\cap[\operatorname{supp}\sigma_{M_{p}}^{2\tau_{p}}\cup\operatorname{supp}\sigma_{M_{p}}^{\tau_{p}}]=\emptyset\quad\forall
j,\,\forall p>n.$
Then for every $p<n$ the number of points of the form $kM_{p}+q/M_{p}+\tau_{p}$
or $kM_{p}+q/M_{p}+2\tau_{p}$, $k,q\in{\mathbb{N}}\cup\\{0\\}$ on the interval
$(M_{n},2M_{n})$ is at most $2M_{n}M_{p}$. Summing over all $p<n$, we get
$\\#\\{\cup_{p<n}[\operatorname{supp}\sigma_{M_{p}}^{2\tau_{p}}\cup\operatorname{supp}\sigma_{M_{p}}^{\tau_{p}}]\\}\cap(M_{n},2M_{n})<2M_{n}^{2}/15.$
On the other hand, the number of points $\eta_{n,j}$ on the interval
$(M_{n}+M_{n}/8,\,2M_{n}-M_{n}/8)$ is $3M_{n}^{2}/4$, and the same is the
number of intervals $I_{n,j}\subset(M_{n},2M_{n})$. Hence there is
$j^{\prime}$ such that
$I_{n,j^{\prime}}\cap[\operatorname{supp}\sigma_{M_{p}}^{2\tau_{p}}\cup\operatorname{supp}\sigma_{M_{p}}^{\tau_{p}}]=\emptyset\quad\forall
p\neq n.$
Set $\lambda_{n}=\eta_{n,j^{\prime}}+\tau_{n}$. Clearly,
$(16)^{n}<\lambda_{n}<2(16)^{n}$. Since $\tau_{n}<4^{-1}16^{-n}$ for $n$ large
enough, we see that
$\lambda_{n},\,\lambda_{n}+\tau_{n}\in(\lambda_{n}-1/(2\lambda_{n}),\,\lambda_{n}+1/(2\lambda_{n}))\subset
I_{n,j^{\prime}},$
and
$\eta_{n,j}+\tau_{n},\,\eta_{n,j}+2\tau_{n}\not\in(\lambda_{n}-1/(2\lambda_{n}),\,\lambda_{n}+1/(2\lambda_{n}))\quad\mbox{for}\quad
j\neq j^{\prime}.$
We obtain that the measure $\nu$ satisfies the hypotheses of Proposition 4 for
$n_{0}$ large enough. It follows from Corollary 2 that the measure $|\hat{\nu}|$
does not belong to $S^{*}({\mathbb{R}}^{d})$, hence $\hat{\nu}$ is not a
Fourier quasicrystal.
## 5\. Fourier coefficients and Fourier transform of almost periodic
distributions
For any almost periodic function $g$ on ${\mathbb{R}}^{d}$ its Fourier
coefficient for an exponent $\lambda\in{\mathbb{R}}^{d}$ is defined by the formula
$a(\lambda,g)=\lim_{R\to\infty}\frac{1}{\omega_{d}R^{d}}\int_{B(0,R)}g(t)e^{-2\pi
i\langle\lambda,t\rangle}dt,$
where $\omega_{d}$ is the volume of the unit ball in ${\mathbb{R}}^{d}$, and
the limit exists uniformly with respect to shifts of $g$.
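For a concrete almost periodic function this limit can be observed numerically. With the illustrative choice $g(t)=2e^{2\pi it}+3e^{2\pi i\sqrt{2}t}$ in dimension $d=1$ (so $\omega_{1}=2$; the function, truncation $R$, and grid size below are our own), the averages recover $a(1,g)=2$ and $a(\sqrt{2},g)=3$, while $a(\lambda,g)$ vanishes off the spectrum:

```python
import numpy as np

def g(t):
    # Almost periodic test function (illustrative choice):
    # spectrum {1, sqrt(2)} with Fourier coefficients 2 and 3.
    return 2.0 * np.exp(2j * np.pi * t) + 3.0 * np.exp(2j * np.pi * np.sqrt(2) * t)

def bohr_coefficient(lam, R=500.0, num=1_000_001):
    # a(lam, g) ~ (1 / (2R)) * integral over [-R, R] of g(t) e^{-2 pi i lam t} dt,
    # approximated by averaging the integrand over a dense uniform grid
    # (in dimension d = 1 the unit ball has volume omega_1 = 2).
    t = np.linspace(-R, R, num)
    return np.mean(g(t) * np.exp(-2j * np.pi * lam * t))

a_one = bohr_coefficient(1.0)            # coefficient at lambda = 1
a_root = bohr_coefficient(np.sqrt(2.0))  # coefficient at lambda = sqrt(2)
a_off = bohr_coefficient(0.7)            # lambda outside the spectrum of g
```

For $R=500$ the cross terms are of size at most $3/(2\pi(\sqrt{2}-1)R)\approx 2\cdot 10^{-3}$, so the averages already agree with the exact coefficients to two decimal places.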
Now let $f$ be an almost periodic temperate distribution on
${\mathbb{R}}^{d}$. Following L. Ronkin [25], its Fourier coefficient corresponding to
an exponent $\lambda\in{\mathbb{R}}^{d}$ is defined by the equality
$a(\lambda,f)=a(\lambda,f\star\phi)/\hat{\phi}(\lambda),$
where $\phi$ is any $C^{\infty}$-function with compact support such that
$\hat{\phi}(\lambda)\neq 0$, and $a(\lambda,f\star\phi)$ is the Fourier
coefficient of the almost periodic function $f\star\phi$. Note that the
definition agrees with the previous one for an almost periodic function $g$.
Indeed, in this case
$a(\lambda,g\star\phi)=\lim_{R\to\infty}\frac{1}{\omega_{d}R^{d}}\int_{B(0,R)}\int_{\operatorname{supp}\phi}g(t-x)\phi(x)e^{-2\pi
i\langle\lambda,t\rangle}dx\,dt$
$=\int_{\operatorname{supp}\phi}\left[\lim_{R\to\infty}\frac{1}{\omega_{d}R^{d}}\int_{B(0,R)}g(t-x)e^{-2\pi
i\langle\lambda,(t-x)\rangle}dt\right]\phi(x)e^{-2\pi
i\langle\lambda,x\rangle}dx=a(\lambda,g)\hat{\phi}(\lambda).$
Then for any almost periodic temperate distribution $f$ and any
$\phi,\,\psi\in C^{\infty}({\mathbb{R}}^{d})$ with compact supports such that
$\hat{\phi}(\lambda)\neq 0,\,\hat{\psi}(\lambda)\neq 0$ we apply the above
equality for almost periodic functions $f\star\phi,\,f\star\psi$ and get
$\frac{a(\lambda,f\star\phi)}{\hat{\phi}(\lambda)}=\frac{a(\lambda,(f\star\phi)\star\psi)}{\hat{\phi}(\lambda)\hat{\psi}(\lambda)}=\frac{a(\lambda,(f\star\psi)\star\phi))}{\hat{\phi}(\lambda)\hat{\psi}(\lambda)}=\frac{a(\lambda,f\star\psi)}{\hat{\psi}(\lambda)}.$
Therefore the definition of $a(\lambda,f)$ does not depend on the choice of
$\phi$.
Y.Meyer ([22], Theorem 3.8) proved that for any almost periodic function $g$
the following conditions are equivalent:
i) the sum $\sum_{|\lambda|<R}|a(\lambda,g)|$ converges for every $R<\infty$,
ii) $\hat{g}$ is a measure,
iii) $\hat{g}$ is an atomic measure of the form
$\hat{g}=\sum_{\lambda}a(\lambda,g)\delta_{\lambda}$.
Y. Meyer later generalized this result to almost periodic measures.
The result also extends easily to almost periodic distributions and
their Fourier coefficients in the sense of Ronkin:
###### Theorem 7.
Let $f$ be an almost periodic distribution. Then the following conditions are
equivalent:
i) the sum $\sum_{|\lambda|<R}|a(\lambda,f)|$ converges for every $R<\infty$,
ii) $\hat{f}$ is a measure,
iii) $\hat{f}$ is an atomic measure of the form
$\hat{f}=\sum_{\lambda}a(\lambda,f)\delta_{\lambda}$.
Proof. We check the implication i)$\Rightarrow$iii). Assume i); then for any function
$\phi\in C^{\infty}$ with compact support the sum
$\sum_{|\lambda|<R}|a(\lambda,f\star\phi)|=\sum_{|\lambda|<R}|a(\lambda,f)||\hat{\phi}(\lambda)|$
converges for any $R<\infty$. Therefore
$\widehat{f\star\phi}=\hat{f}\hat{\phi}$ is a measure of the form
$\sum_{\lambda}a(\lambda,f\star\phi)\delta_{\lambda}=\sum_{\lambda}a(\lambda,f)\hat{\phi}(\lambda)\delta_{\lambda}.$
Note that any function $\psi\in S({\mathbb{R}}^{d})$ can be approximated by
$C^{\infty}$-functions with compact supports. Therefore for every $\psi\in
S({\mathbb{R}}^{d})$ the sum
$\hat{f}\hat{\psi}=\sum_{\lambda}a(\lambda,f)\hat{\psi}(\lambda)\delta_{\lambda}$
is a continuous linear functional on the space of all continuous functions on
every ball $\overline{B(0,R)},\,R<\infty$. If we take $\psi$ such that
$\hat{\psi}(\lambda)\equiv 1$ for $|\lambda|\leq R$, we get that the
restriction of $\hat{f}$ on $\overline{B(0,R)}$ coincides with
$\sum_{\lambda}a(\lambda,f)\delta_{\lambda}$, and iii) is proved.
The implication iii)$\Rightarrow$ii) is immediate.
In order to check the implication ii)$\Rightarrow$i), suppose that $\hat{f}$
is a measure and $\phi$ is any $C^{\infty}$-function with compact support.
Then $\widehat{f\star\phi}=\hat{f}\hat{\phi}$ is also a measure. Since
$f\star\phi$ is an almost periodic function, we see, by Meyer’s Theorem, that
the sum
$\sum_{|\lambda|<R}|a(\lambda,f)||\hat{\phi}(\lambda)|=\sum_{|\lambda|<R}|a(\lambda,f\star\phi)|$
converges for any $R<\infty$. If $\hat{\phi}(\lambda)\neq 0$ for all
$\lambda\in\overline{B(0,R)}$, we obtain i).
I want to thank the referee for a careful reading of my article and for
important remarks that forced me to completely change Theorem 3. I would also
like to thank N. Lev and G. Reti for an inaccuracy they discovered in one
important definition. Finally, I thank the Department of Mathematics and
Computer Science of the Jagiellonian University for their hospitality and
Professor Lukasz Kosinski for his interest in my work and useful discussions.
## References
* [1] C.Corduneanu, Almost Periodic Functions, Second English ed. Chelsea, New-York, 1989 (Distributed by AMS and Oxford University Press).
* [2] S.Yu. Favorov. Fourier quasicrystals and Lagarias’ conjecture. Proc. Amer. Math. Soc. 144 (2016) , 3527-3536.
* [3] S.Yu. Favorov. Tempered distributions with discrete support and spectrum. Bulletin of the Hellenic Mathematical Society, v.62, (2018), 66-79.
* [4] S.Yu. Favorov. Large Fourier quasicrystals and Wiener’s Theorem. Journal of Fourier Analysis and Applications, Vol. 25, Issue 2, (2019), 377-392
* [5] S.Yu. Favorov. Local Wiener’s Theorem and Coherent Sets of Frequencies. Analysis Math., 46 (4) (2020), 737–746 DOI: 10.1007/s10476-020-0042-x
* [6] S.Yu. Favorov. Uniqueness Theorems for Fourier Quasicrystals and Temperate Distributions with Discrete Support. Proc. Amer. Math. Soc. 149 (2021), 4431-4440
* [7] S.Yu. Favorov. Temperate distributions with locally finite support and spectrum on Euclidean spaces. Israel Journal of Mathematics TBD (2022), 1–24.
* [8] S.Yu. Favorov. Fourier quasicrystals and distributions on Euclidean spaces with spectrum of bounded density. To appear in Analysis Math.
* [9] M.N.Kolountzakis. On the Structure of Multiple Translations Tilings by Polygonal Regions, Preprint, 1999, 16p.
* [10] M.N.Kolountzakis. Fourier Pairs of Discrete Support with Little Structure. February 2015 Journal of Fourier Analysis and Applications, 22 no.1, (2016) 1-5.
* [11] M.N.Kolountzakis, J.C.Lagarias, Structure of Tilings of the Line by a Function. Duke Math.Journal, 82, (1996), 653-678.
* [12] J.C. Lagarias, Geometric Models for Quasicrystals I.Delone Set of Finite Type. Discr.and Comp.Geometry, 21 161-191 (1999)
* [13] J.G.de Lamadrid, L.N.Argabright, Almost Periodic Measures. Memoirs of the AMS, No.428, Providence RI, (1990), 218p.
* [14] N.Lev, A.Olevskii, Measures with Uniformly Discrete Support and Spectrum. C.R.Acad.Sci.,ser.1 351, (2013) 599-603.
* [15] N.Lev, A.Olevskii, Quasicrystals and Poisson’s Summation Formula. Invent.Math. 200, (2015) 585–606.
* [16] N.Lev, A.Olevskii, Quasicrystals with Discrete Support and Spectrum. Rev.Mat.Iberoam., 32, no.4, (2016) 1341-1352.
* [17] N.Lev, A.Olevskii, Fourier Quasicrystals and Discreteness of the Diffraction Spectrum. Advances in Mathematics, 315, (2017) 1-26.
* [18] N.Lev and G.Reti Crystalline Temperate Distribution with Uniformly Discrete Support and Spectrum, Journal of Functional Analysis Volume 281, Issue 4, 15 August 2021, 109072
* [19] Y.Meyer, Quasicrystals, Almost Periodic Patterns, Mean–periodic Functions, and Irregular Sampling. African Diaspora Journal of Mathematics, 13 no.1, (2012) 1-45.
* [20] Y. Meyer, Measures with locally finite support and spectrum, Proc. Natl. Acad. Sci. USA 113(12) (2016), 3152-3158, DOI 10.1073/pnas.1600685113
* [21] Y.Meyer, Guinand’s Measure are Almost Periodic Distributions. Bulletin of the Hellenic Mathematical Society, 61, (2017) 11-20.
* [22] Y. Meyer, Global and local estimates on trigonometric sums, Trans. R. Norw. Soc. Sci. Lett. 2018(2) 1-25.
* [23] V.P.Palamodov, A Geometric Characterization of a Class of Poisson Type Distributions, Journal of Fourier Analysis and Applications, 23, no.5, (2017) 1227–1237.
* [24] Quasicrystals and Discrete Geometry. J.Patera,ed., Fields Institute Monographs, AMS, Providence RI, 289p.
* [25] L.I. Ronkin Almost Periodic Distributions and Divisors in Tube Domains, Zap. Nauchn. Sem. POMI 247 (1997) 210–236 (Russian).
* [26] W.Rudin. Functional Analysis, McGraw-Hill Book Company, New York, St.Louis, San Francisco, (1973), 443p.
* [27] V.S.Vladimirov. Equations of Mathematical Physics, Marcel Dekker, Inc.,New-York, 1971, 418p.
# A Major Obstacle for NLP Research: Let’s Talk about Time Allocation!
Katharina Kann♠ Shiran Dudy♠ Arya D. McCarthy♣
♠University of Colorado Boulder
<EMAIL_ADDRESS>
♣Johns Hopkins University
<EMAIL_ADDRESS>
###### Abstract
The field of natural language processing (NLP) has grown over the last few
years: conferences have become larger, we have published an incredible number
of papers, and state-of-the-art research has been implemented in a large
variety of customer-facing products. However, this paper argues that we have
been less successful than we should have been and reflects on where and how
the field fails to tap its full potential. Specifically, we demonstrate that,
in recent years, subpar time allocation has been a major obstacle for NLP
research. We outline multiple concrete problems together with their negative
consequences and, importantly, suggest remedies to improve the status quo. We
hope that this paper will be a starting point for discussions around which
common practices are – or are not – beneficial for NLP research.
## 1 Introduction
Why did I get nothing done today? is a question many people ask themselves
frequently throughout their professional careers. Psychologists agree on good
time management skills being of utmost importance for a healthy and productive
lifestyle (Lakei, 1973; Claessens et al., 2007; Major et al., 2002, inter
alia). However, many academics and industry researchers lack time management
skills, working long days and getting not enough done – not even the
interesting experiment they had wanted to start over a year ago.
In this position paper, we argue that natural language processing (NLP) as a
field has a similar problem: we do not allocate our time well. Instead, we
spend it on things that seem more urgent than they are, are easy but
unimportant, or result in the largest short-term gains. This paper identifies
the largest traps the authors believe the NLP community falls into. We then
provide, for each of the four identified problems (P1–P4), suggested remedies.
While we know that – just as for individuals – change takes time, we hope that
this paper, in combination with the EMNLP 2022 special theme Open questions,
major obstacles, and unresolved issues in NLP, will ignite critical
discussions.
Figure 1: Avg. # of authors per paper; 2000–2021.
#### Related Work
Over the last couple of years, multiple papers have provided critical
reflections on the state of affairs in NLP research: Bender and Koller (2020)
criticizes the hype around language models and argues, similarly to Bisk et
al. (2020), that true understanding is impossible when language is detached
from the physical world. In contrast, Bowman (2022) talks about the risks
associated with underclaiming. Turning to evaluation, Bowman and Dahl (2021)
provides a critical view on benchmarking, and Rodriguez et al. (2021) proposes
ways to improve leaderboards in order to truly track progress. Other position
papers discuss the importance of data curation Rogers (2021) and the need for
focusing on the user for natural language generation Dudy et al. (2021); Flek
(2020). Bianchi and Hovy (2021) identifies general concerning trends in NLP
research. Parcalabescu et al. (2021) discusses our use of the term
multimodality and proposes to use task-specific definitions of multimodality
in the machine learning era. Church (2020) discusses downward trends in
reviewing quality and whether these can be mitigated. We add to those meta-
level papers by discussing subpar use of time as a major problem.
## 2 What Is Going Wrong?
### 2.1 P1: Too Many Papers per Author
#### The Situation
Publications in NLP are cheap compared to many other fields: there is no need
to set up complicated real-world experiments (as, e.g., in physics), existing
data can be used for many studies, and lately even much of the code we use is
readily available. Thus, the time from idea to final paper can be extremely
short. Some researchers also split one substantial paper’s work into 2–5 less
dense and structurally similar papers.
Consequently, NLP researchers publish a lot:
Rei (https://www.marekrei.com/blog/ml-and-nlp-publications-in-2021/) finds
that the 14 most productive first authors in NLP published 9 (1 researcher), 6
(2 researchers), and 5 (11 researchers) papers in 2021. And this number only
counts the most prestigious conferences in NLP: Google Scholar shows that,
across all venues, the first 3 authors published 16, 7, and 7 papers.
While some enjoy writing, many – especially junior – NLP researchers feel
external pressure to publish in large volumes; quantity often overshadows
quality of publications for hiring decisions, and PhD applicants struggle to
find advisors if they do not have multiple relevant publications.
#### Negative Consequences
A straightforward consequence of the pressure to publish is that much of an
NLP researcher’s time goes into writing: conservatively assuming one week of
full-time writing per paper, the authors with the most papers respectively
spend 16, 7, and 7 weeks per year just writing; this is nearly $\frac{1}{3}$
of the most productive author’s year.
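The arithmetic behind this estimate is easy to make explicit; the following sketch simply reproduces it from the numbers quoted above (one full-time week of writing per paper is the conservative assumption stated in the text):

```python
# Back-of-the-envelope check of the writing-time estimate.
papers_per_year = [16, 7, 7]  # papers by the three most prolific first authors
weeks_per_paper = 1           # conservative assumption: one writing week per paper
weeks_per_year = 52

writing_weeks = [n * weeks_per_paper for n in papers_per_year]
top_author_fraction = writing_weeks[0] / weeks_per_year  # ~0.31, i.e. nearly 1/3
```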
The second negative consequence is the time needed to review this many papers:
reviewing one substantial paper would be quicker than reviewing 5 separate
ones, especially if reviewers are not shared. This lowers review quality,
frustrates authors, and causes errors to be missed. The latter then misinforms
other researchers, also wasting their time.
Third, the ongoing race for publications makes it difficult for researchers to
stop and reflect on if what they are currently working on is worthwhile. It
also leads to mixed feelings regarding the start of ambitious, high-risk/high-
reward research: many researchers are scared away by the prospect of
potentially not obtaining their expected outcomes and being unable to publish.
Thus, the need to constantly produce large quantities of output not only
reduces the quality of individual papers, but also hinders meaningful progress
of the field by encouraging the pursuit of superficial research questions.
Finally, thorough scholarship is extremely difficult in this environment. This
leads to all sorts of shortcomings in NLP publications – missing references,
mathematical errors, and even nonsensical experimental designs – which are
then overlooked by overworked reviewers (Church, 2020).
#### Suggested Remedies
To change the state of the field, we can either change our expectations or the
available opportunities. For the former, it is crucial that quality is valued
more than quantity for hiring. To start, we recommend having reviews be
publicly available (as done, e.g., by the Conference on Neural Information
Processing Systems, https://neurips.cc), to help people from adjacent fields
understand the value of a candidate’s publication. Another option is to
standardize requesting reviews from experts (in addition to letters of
recommendation). To reduce the opportunities for submitting large amounts of
less impactful papers, we could set an upper limit for the number of (first-
author) papers one can submit. This could be a hard limit or a soft limit with
a penalty for too many low-quality submissions, such as blocking papers with
low average scores from resubmission for a fixed period of time (this is current practice in TACL but not at conferences).
### 2.2 P2: Too Many Authors per Paper
#### The Situation
The second problem we highlight is the inverse of the first: too many authors
per paper (examples with >20 authors are Nan et al. (2021), Srivastava et al. (2022), and Gehrmann et al. (2021, 2022)), given the strategies we employ
to manage collaborations. As shown in Figure 1, author lists are, on average,
becoming longer and longer: in 2000, the average number of authors on ACL and EMNLP papers was 2.25 and 1.97, respectively, but those numbers had increased to 4.65 and 4.49 by 2021. Large collaborations can greatly
advance science and, if done well, are beneficial to all participating
researchers. However, they also pose an unintended challenge: in many cases, each author’s expected contribution, as well as their actual one, becomes unclear. The former
is often a consequence of a lack of communication or team management skills.
The latter is the result of NLP not having a standardized way to communicate
each researcher’s contribution to collaborative projects.
In a traditional two-author setting with a student and their advisor, it is
generally understood that the student does most of the hands-on work and their
advisor guides the research and writing process. However, with more authors,
the situation becomes less clear to both authors and readers.
#### Negative Consequences
When expected contributions are unclear to the authors themselves, it is easy
to have too many cooks spoil the broth: e.g., one author could write one
section while one of their colleagues rewrites another section in a way that
makes combining them non-trivial and time-consuming. Additionally, being vague
about each author’s contributions can lead to friction around authorship, which takes time and mental energy and strains the relationship between collaborators; authorship discussions also tend to disadvantage members of underrepresented groups Ni et al. (2021).
Worse, however, is a situation in which the reader cannot tell what each author’s contribution has been. When researchers give authorship to people whose contribution was minimal, they devalue the time and work of middle authors who actually contribute a lot.
Another problem with too many authors is that miscommunication easily wastes
time and resources. For instance, it is easy to be inconsistent if experiments
are run by multiple researchers, who might not use the same codebase.
#### Suggested Remedies
In order to avoid situations where the contributions of individual authors are
unclear to the reader – and, thus, accurate assignment of credit is impossible
– we propose a straightforward solution that can completely eliminate this
negative consequence of large collaborations: publishing a contribution
statement Brand et al. (2015) for each paper. This is common in other fields
but very rare in NLP (a notable exception is, e.g., Srivastava et al. (2022)).
Making a contribution statement mandatory for NLP publications would be easy
but extremely effective.
For group management, setting expectations together and communicating the
expected roles of all involved parties, including the possible authorship order, can save time and energy (Moster et al. (2021) offers insights on managing collaborations adjusted to remote work conditions). We suggest that doing this right at the beginning of each collaborative project should become common practice in NLP ("#EMNLP2022Rule"). However, it has been shown that
many principal investigators (PIs) lack training in lab and personnel
management skills Van Noorden (2018). Thus, PIs and their research groups
would likely benefit from explicit training. One possible way to achieve this
could be to extend existing mentoring initiatives at NLP conferences to focus
more on leadership skills. Another suggestion mentioned by Van Noorden (2018)
– which we recommend for NLP – is that PIs should ask for feedback from their
groups more regularly.
### 2.3 P3: Gatekeeping
#### The Situation
We do like unconventional topics (e.g., the connection between synesthesia and
character-level NLP models Kann and Monsalve-Mercado (2021)), and statements
like "This work is too interdisciplinary to get accepted" or "This work would
be better for a workshop on a specific topic" are hardly ever true. However,
reviewers in NLP like papers that resemble those they themselves have
previously published. They only accept non-mainstream submissions if they are
written in a very specific style: authors need to know how to pitch a topic to
the NLP community.
For readers new to publishing in NLP, here are the basic guidelines we have
found for getting a paper accepted – many of which are nonsensical: 1) Your
submitted paper should always have the exact maximum number of pages – not a
line more or less. 2) The first section should be called Introduction. 3)
The last section should be called Conclusion – not Discussion or similar. 4)
You should have a figure that is (somewhat) related to your paper’s content on
the top right corner of the first page. 5) You should have equations in your
paper – complicated equations will increase your chances of acceptance (Lipton
and Steinhardt, 2019). 6) Do not explicitly write out popular model
equations, e.g., for the LSTM Hochreiter and Schmidhuber (1997). 7) The
Related Work section should come immediately before the Conclusion, to make
your novelty seem larger. 8) Do not present only a dataset; provide empirical results, even if they are unimportant.
#### Negative Consequences
This gatekeeping especially affects people whose research mentors are not able
to teach them the style of the NLP community: 1) people from universities with
little experience in NLP research, 2) researchers from countries not
traditionally part of the international NLP community, and 3) people from
adjacent fields, such as psychology, social science, or even linguistics.
Thus, gatekeeping reinforces existing social inequalities and harms our
research progress, as we get exposed to groundbreaking ideas later than
necessary – or never. It is also a huge waste of our time: for instance, there
is no reason why content presented in 7.56 pages should be less impactful than
content presented in 8 pages. However, we, as a community, make it an issue
and cause researchers to waste hours trimming or extending papers. Similarly,
we force people to waste their time thinking about which equations they can
put into a paper that does not, in fact, benefit from them.
#### Suggested Remedies
We argue that resolving the problem of gatekeeping is crucial in order to
allow our field to grow in a healthy way. We make two suggestions: 1) We need
to explicitly educate reviewers to not take superficial properties of papers
into account. This could be implemented, e.g., in the form of mandatory
training videos for all ACL reviewers. However, this is a type of implicit
bias Greenwald and Banaji (1995) and we encourage more discussion on possible
solutions. 2) While we are waiting for this to be effective, we need to level
the playing field by making unofficial rules and tricks widely known. The
easiest way would be to publish explanations for first-time submitters
together with calls for papers. Mentoring programs are great alternatives:
while they cost individuals time, they will, in the long run,
save time for the field as a whole.
### 2.4 P4: Missing the Point
#### The Situation
NLP aims to build technology that improves the lives of its end users.
However, NLP research is often purely technically driven, and actual human
needs are investigated little or not at all (Flek, 2020; Dudy et al., 2021); this is especially prevalent when building tools for communities speaking low-resource languages (Caselli et al., 2021). This can – and does – result in
researchers focusing on irrelevant problems. A similar problem is what we call
legacy research questions: research questions that are motivated by problems
or tools that are no longer relevant. Examples pointed out by Bowman (2022) are papers motivated by the brittleness of question answering (QA) systems whose performance has long been surpassed by the state of the art, or analyses that draw conclusions from outdated systems like BERT (Devlin et al., 2019). It is, of course, possible to perform interesting studies involving older models; however, this requires well-motivated research questions.
To quantify this problem, we performed a case study by randomly sampling and
examining 30 papers from human-oriented tracks at EMNLP 2021 (the tracks we consider are Machine Translation and Multilinguality, Dialogue and Interactive Systems, Question Answering, and Summarization). Only 3 papers
engaged with users through evaluation and only 2 papers grounded their
research questions in user needs; details can be found in Appendix A.
Last, looking at recent top-tier NLP conferences, a substantial number of papers focus on what we call quick research questions, i.e., projects that maximize short-term gains for the researcher(s): Baden et al. (2022) find that the majority of NLP research for text analysis is devoted to easy problems, instead of aiming to measure much more demanding constructs.
#### Negative Consequences
Work that is missing the point does not move the field in a meaningful
direction. It wastes the researcher’s time by detracting from topics that
truly benefit the community, the public, or the researcher themselves. Next, it wastes the reviewers’ time as well as the general reader’s time by failing to provide insights. It also needlessly uses computing resources, thus
contributing to the climate crisis (Strubell et al., 2019). Ignoring user
needs further dangerously bears the risk of causing real harm to stakeholders
(Raji et al., 2022). Designing technology without the participation of
potential users has in the past led to spectacular product failures (Johnson,
2021; Simon, 2020).
Finally, work on superficial research questions can be fast and result in a
large amount of research output. In our current system that values quantity
over quality for hiring, researchers working on superficial questions tend to
have more successful careers. This, in turn, encourages new researchers to
also waste their time by doing something similar.
#### Suggested Remedies
It is important for NLP researchers to engage more with the intended users of
the technology we build. This could be encouraged during the review process,
e.g., with targeted questions. Legacy research questions will need to be
detected during reviewing as well – raising awareness of this phenomenon will likely reduce both the number of affected submissions and the acceptance of papers focused on legacy research questions. Regarding quick research questions, one of the
remedies suggested for P1 could be a possible solution here as well: moving
towards valuing quality over quantity.
## 3 Conclusion
In this paper, we outlined how several problematic practices in NLP research
lead to a waste of the most important resource we have – our time – and, thus,
constitute major obstacles for NLP research. We suggested multiple possible
solutions to existing problems. We hope to foster much-needed discussion
around how we, as a community, envision moving forward in the face of these
concerns.
## Limitations
As we focus on time allocation, this is not an exhaustive list of problems we see in our research community; other concerns are beyond the scope of this work. Similarly, not all mentioned problems apply to all groups – it is,
for instance, totally possible that individual groups excel at managing large
collaborations.
We further do not claim that our suggested remedies are perfect solutions.
They come with their own sets of challenges and should be implemented with
care: for instance, contribution statements could unintentionally minimize
contributions that do not make it into the final paper. Additionally, we do
not claim to have listed all possible remedies for the identified problems. On the contrary, we explicitly encourage other researchers to start discussing ways to improve the status quo.
## Acknowledgments
We would like to thank the anonymous reviewers for their thought-provoking
comments as well as the members of University of Colorado Boulder’s NALA Group
for their helpful feedback. This research was supported by the NSF National AI
Institute for Student-AI Teaming (iSAT) under grant DRL 2019805. The opinions
expressed are those of the authors and do not represent views of the NSF. ADM
is supported by an Amazon Fellowship and a Frederick Jelinek Fellowship.
## References
* Amplayo et al. (2021) Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021. Aspect-controllable opinion summarization. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 6578–6593.
* Baden et al. (2022) Christian Baden, Christian Pipal, Martijn Schoonvelde, and Mariken A. C. G van der Velden. 2022. Three gaps in computational text analysis methods for social sciences: A research agenda. _Communication Methods and Measures_ , 16(1):1–18.
* Bara et al. (2021) Cristian-Paul Bara, CH-Wang Sky, and Joyce Chai. 2021. Mindcraft: Theory of mind modeling for situated dialogue in collaborative tasks. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1112–1125.
* Bender and Koller (2020) Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5185–5198, Online. Association for Computational Linguistics.
* Bianchi and Hovy (2021) Federico Bianchi and Dirk Hovy. 2021. On the gap between adoption and understanding in NLP. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 3895–3901, Online. Association for Computational Linguistics.
* Bisk et al. (2020) Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 8718–8735, Online. Association for Computational Linguistics.
* Bowman (2022) Samuel Bowman. 2022. The dangers of underclaiming: Reasons for caution when reporting how NLP systems fail. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 7484–7499, Dublin, Ireland. Association for Computational Linguistics.
* Bowman and Dahl (2021) Samuel R. Bowman and George Dahl. 2021. What will it take to fix benchmarking in natural language understanding? In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 4843–4855, Online. Association for Computational Linguistics.
* Brand et al. (2015) Amy Brand, Liz Allen, Micah Altman, Marjorie Hlava, and Jo Scott. 2015. Beyond authorship: attribution, contribution, collaboration, and credit. _Learned Publishing_ , 28(2):151–155.
* Caselli et al. (2021) Tommaso Caselli, Roberto Cibin, Costanza Conforti, Enrique Encinas, and Maurizio Teli. 2021. Guiding principles for participatory design-inspired natural language processing. In _Proceedings of the 1st Workshop on NLP for Positive Impact_ , pages 27–35, Online. Association for Computational Linguistics.
* Chen et al. (2021) Guanhua Chen, Shuming Ma, Yun Chen, Li Dong, Dongdong Zhang, Jia Pan, Wenping Wang, and Furu Wei. 2021. Zero-shot cross-lingual transfer of neural machine translation with multilingual pretrained encoders. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 15–26.
* Church (2020) Kenneth Ward Church. 2020. Emerging trends: Reviewing the reviewers (again). _Natural Language Engineering_ , 26(2):245–257.
* Claessens et al. (2007) Brigitte JC Claessens, Wendelien Van Eerde, Christel G Rutte, and Robert A Roe. 2007. A review of the time management literature. _Personnel Review_.
* Clark et al. (2021) Christopher Clark, Jordi Salvador, Dustin Schwenk, Derrick Bonafilia, Mark Yatskar, Eric Kolve, Alvaro Herrasti, Jonghyun Choi, Sachin Mehta, Sam Skjonsberg, et al. 2021. Iconary: A pictionary-based game for testing multimodal communication with drawings and text. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1864–1886.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Dudy et al. (2021) Shiran Dudy, Steven Bedrick, and Bonnie Webber. 2021. Refocusing on relevance: Personalization in NLG. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 5190–5202, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Efrat et al. (2021) Avia Efrat, Uri Shaham, Dan Kilman, and Omer Levy. 2021. Cryptonite: A cryptic crossword benchmark for extreme ambiguity in language. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 4186–4192.
* Falke and Lehnen (2021) Tobias Falke and Patrick Lehnen. 2021. Feedback attribution for counterfactual bandit learning in multi-domain spoken language understanding. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1190–1198.
* Flek (2020) Lucie Flek. 2020. Returning the n to nlp: Towards contextually personalized classification models. In _Proceedings of the 58th annual meeting of the association for computational linguistics_ , pages 7828–7838.
* Garg and Moschitti (2021) Siddhant Garg and Alessandro Moschitti. 2021. Will this question be answered? question filtering via answer model distillation for efficient question answering. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7329–7346.
* Gehrmann et al. (2021) Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondřej Dušek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Natural language generation, its evaluation and metrics. In _Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)_ , pages 96–120, Online. Association for Computational Linguistics.
* Gehrmann et al. (2022) Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, and Yufang Hou. 2022. GEMv2: Multilingual NLG benchmarking in a single line of code. _arXiv preprint arXiv:2206.11249_.
* Gerz et al. (2021) Daniela Gerz, Pei-Hao Su, Razvan Kusztos, Avishek Mondal, Michał Lis, Eshan Singhal, Nikola Mrkšić, Tsung-Hsien Wen, and Ivan Vulić. 2021. Multilingual and cross-lingual intent detection from spoken data. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7468–7475.
* Greenwald and Banaji (1995) Anthony G Greenwald and Mahzarin R Banaji. 1995. Implicit social cognition: attitudes, self-esteem, and stereotypes. _Psychological review_ , 102(1):4.
* Gu et al. (2021) Jia-Chen Gu, Zhenhua Ling, Yu Wu, Quan Liu, Zhigang Chen, and Xiaodan Zhu. 2021. Detecting speaker personas from conversational texts. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1126–1136.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural Computation_ , 9(8):1735–1780.
* Huang et al. (2021) Canming Huang, Weinan He, and Yongmei Liu. 2021. Improving unsupervised commonsense reasoning using knowledge-enabled natural language inference. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 4875–4885.
* Jhamtani et al. (2021) Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Taylor Berg-Kirkpatrick. 2021. Investigating robustness of dialog models to popular figurative language constructs. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7476–7485.
* Johnson (2021) Khari Johnson. 2021. The efforts to make text-based ai less racist and terrible. https://tinyurl.com/5x8rah4s. Accessed: 17 June 2021.
* Kahardipraja et al. (2021) Patrick Kahardipraja, Brielen Madureira, and David Schlangen. 2021. Towards incremental transformers: An empirical analysis of transformer models for incremental nlu. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1178–1189.
* Kalyan et al. (2021) Ashwin Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Ashish Sabharwal, and Peter Clark. 2021. How much coffee was consumed during emnlp 2019? fermi problems: A new reasoning challenge for ai. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7318–7328.
* Kann and Monsalve-Mercado (2021) Katharina Kann and Mauro M. Monsalve-Mercado. 2021. Coloring the black box: What synesthesia tells us about character embeddings. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 2673–2685, Online. Association for Computational Linguistics.
* Lakei (1973) A Lakei. 1973. How to get control of your time and life. _New York: Nal Penguin Inc_.
* Lavi et al. (2021) Ofer Lavi, Ella Rabinovich, Segev Shlomov, David Boaz, Inbal Ronen, and Ateret Anaby Tavor. 2021. We’ve had this conversation before: A novel approach to measuring dialog similarity. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1169–1177.
* Liang et al. (2021) Yunlong Liang, Chulun Zhou, Fandong Meng, Jinan Xu, Yufeng Chen, Jinsong Su, and Jie Zhou. 2021. Towards making the most of dialogue characteristics for neural chat translation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 67–79.
* Lipton and Steinhardt (2019) Zachary C. Lipton and Jacob Steinhardt. 2019. Troubling trends in machine learning scholarship: Some ml papers suffer from flaws that could mislead the public and stymie future research. _Queue_ , 17(1):45–77.
* Liu et al. (2021) Dan Liu, Mengge Du, Xiaoxi Li, Ya Li, and Enhong Chen. 2021. Cross attention augmented transducer networks for simultaneous translation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 39–55.
* Ma et al. (2021) Wenchang Ma, Ryuichi Takanobu, and Minlie Huang. 2021. Cr-walker: Tree-structured graph reasoning and dialog acts for conversational recommendation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1839–1851.
* Madotto et al. (2021) Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul A Crook, Bing Liu, Zhou Yu, Eunjoon Cho, Pascale Fung, and Zhiguang Wang. 2021. Continual learning in task-oriented dialogue systems. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7452–7467.
* Major et al. (2002) Virginia Smith Major, Katherine J Klein, and Mark G Ehrhart. 2002. Work time, work interference with family, and psychological distress. _Journal of applied psychology_ , 87(3):427.
* Moghe et al. (2021) Nikita Moghe, Mark Steedman, and Alexandra Birch. 2021. Cross-lingual intermediate fine-tuning improves dialogue state tracking. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1137–1150.
* Moster et al. (2021) Makayla Moster, Denae Ford, and Paige Rodeghero. 2021. “Is my mic on?” Preparing SE students for collaborative remote work and hybrid team communication. In _2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET)_ , pages 89–94. IEEE.
* Nan et al. (2021) Linyong Nan, Dragomir Radev, Rui Zhang, Amrit Rau, Abhinand Sivaprasad, Chiachun Hsieh, Xiangru Tang, Aadit Vyas, Neha Verma, Pranav Krishna, Yangxiaokang Liu, Nadia Irwanto, Jessica Pan, Faiaz Rahman, Ahmad Zaidi, Mutethia Mutuma, Yasin Tarabar, Ankit Gupta, Tao Yu, Yi Chern Tan, Xi Victoria Lin, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. DART: Open-domain structured data record to text generation. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 432–447, Online. Association for Computational Linguistics.
* Ni et al. (2021) Chaoqun Ni, Elise Smith, Haimiao Yuan, Vincent Larivière, and Cassidy R. Sugimoto. 2021. The gendered nature of authorship. _Science Advances_ , 7(36):eabe4639.
* Ouyang et al. (2021) Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2021. Ernie-m: Enhanced multilingual representation by aligning cross-lingual semantics with monolingual corpora. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 27–38.
* Parcalabescu et al. (2021) Letitia Parcalabescu, Nils Trost, and Anette Frank. 2021. What is multimodality? In _Proceedings of the 1st Workshop on Multimodal Semantic Representations (MMSR)_ , pages 1–10, Groningen, Netherlands (Online). Association for Computational Linguistics.
* Raghu et al. (2021) Dinesh Raghu, Shantanu Agarwal, Sachindra Joshi, et al. 2021. End-to-end learning of flowchart grounded task-oriented dialogs. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 4348–4366.
* Raji et al. (2022) Inioluwa Deborah Raji, I Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. 2022\. The fallacy of ai functionality. In _2022 ACM Conference on Fairness, Accountability, and Transparency_ , pages 959–972.
* Rodriguez et al. (2021) Pedro Rodriguez, Joe Barrow, Alexander Miserlis Hoyle, John P. Lalor, Robin Jia, and Jordan Boyd-Graber. 2021. Evaluation examples are not equally informative: How should that change NLP leaderboards? In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 4486–4503, Online. Association for Computational Linguistics.
* Rogers (2021) Anna Rogers. 2021. Changing the world by changing the data. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 2182–2194, Online. Association for Computational Linguistics.
* Salesky et al. (2021) Elizabeth Salesky, David Etter, and Matt Post. 2021. Robust open-vocabulary translation from visual text representations. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7235–7252.
* Simon (2020) Simon. 2020. Google duplex: The effects of deception on well-being. https://tinyurl.com/2yadfuer. Accessed: 11 June 2020.
* Song et al. (2021) Jongyoon Song, Sungwon Kim, and Sungroh Yoon. 2021. Alignart: Non-autoregressive neural machine translation by jointly learning to estimate alignment and translate. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1–14.
* Srivastava et al. (2022) Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. _arXiv preprint arXiv:2206.04615_.
* Strubell et al. (2019) Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in nlp. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3645–3650.
* Van Noorden (2018) Richard Van Noorden. 2018. Leadership problems in the lab. _Nature_ , 557(3).
* Vulić et al. (2021) Ivan Vulić, Pei-Hao Su, Samuel Coope, Daniela Gerz, Paweł Budzianowski, Iñigo Casanueva, Nikola Mrkšić, and Tsung-Hsien Wen. 2021. Convfit: Conversational fine-tuning of pretrained language models. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 1151–1168.
* Xu et al. (2021) Haoran Xu, Benjamin Van Durme, and Kenton Murray. 2021. BERT, mBERT, or BiBERT? A study on contextualized embeddings for neural machine translation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 6663–6675.
* Zhang and Feng (2021) Shaolei Zhang and Yang Feng. 2021. Universal simultaneous machine translation with mixture-of-experts wait-k policy. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7306–7317.
* Zhang and Bansal (2021) Shiyue Zhang and Mohit Bansal. 2021. Finding a balanced degree of automation for summary evaluation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 6617–6632.
* Zhao et al. (2021a) Jinming Zhao, Philip Arthur, Gholamreza Haffari, Trevor Cohn, and Ehsan Shareghi. 2021a. It is not as good as you think! evaluating simultaneous machine translation on interpretation data. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 6707–6715.
* Zhao et al. (2021b) Yangyang Zhao, Zhenyu Wang, Changxi Zhu, and Shihan Wang. 2021b. Efficient dialogue complementary policy learning via deep q-network policy and episodic memory policy. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 4311–4323.
* Zhu et al. (2021) Kunrui Zhu, Yan Gao, Jiaqi Guo, and Jian-Guang Lou. 2021. Translating headers of tabular data: A pilot study of schema translation. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 56–66.
## Appendix A Appendix
In Table 1 we report our analysis of selected EMNLP 2021 papers. Engaging
with Users indicates that researchers engage with humans, either during the
design phase or for evaluation. In our analysis, none of the papers engage
with users throughout the process; humans are involved only in the evaluation
phase (3 papers). User-driven indicates that the motivation is grounded in
user needs (2 papers). The following tracks are considered: session 1, track
A: Machine translation and multi-linguality 1; session 3, track B: Dialogue
and interactive systems 1; session 4, track B: Dialogue and interactive
systems 2; session 5, track A: Question answering 1; session 6, track B:
Summarization; session 7, track A: Machine translation and multi-linguality
2; session 7, track B: Question answering 2.
| Paper | Engaging with Users | User-driven
---|---|---|---
1 | AligNART (Song et al., 2021) | no | no
2 | Zero-Shot Cross-Lingual Transfer (Chen et al., 2021) | no | no
3 | ERNIE-M (Ouyang et al., 2021) | no | no
4 | Cross attention augmented transducer (Liu et al., 2021) | no | no
5 | Translating Headers of Tabular Data (Zhu et al., 2021) | no | no
6 | Towards Making the Most (Liang et al., 2021) | no | no
7 | MindCraft (Bara et al., 2021) | yes | no
8 | Detecting Speaker Personas (Gu et al., 2021) | no | no
9 | Cross-lingual Intermediate Fine-tuning (Moghe et al., 2021) | no | no
10 | ConvFiT (Vulić et al., 2021) | no | no
11 | We’ve had this conversation before (Lavi et al., 2021) | no | no
12 | Towards Incremental Transformers (Kahardipraja et al., 2021) | no | no
13 | Feedback Attribution (Falke and Lehnen, 2021) | no | yes
14 | CR-Walker (Ma et al., 2021) | no | no
15 | Iconary (Clark et al., 2021) | yes | no
16 | Improving Unsupervised Commonsense (Huang et al., 2021) | no | no
17 | Cryptonite (Efrat et al., 2021) | no | no
18 | Efficient Dialogue Complementary Policy Learning (Zhao et al., 2021b) | yes | no
19 | End-to-End Learning of Flowchart (Raghu et al., 2021) | no | yes
20 | Aspect-Controllable Opinion Summarization (Amplayo et al., 2021) | no | no
21 | Finding a Balanced Degree of Automation (Zhang and Bansal, 2021) | no | no
22 | BERT, mBERT, or BiBERT (Xu et al., 2021) | no | no
23 | It Is Not As Good As You Think (Zhao et al., 2021a) | no | no
24 | Robust Open-Vocabulary Translation (Salesky et al., 2021) | no | no
25 | Universal Simultaneous Machine Translation (Zhang and Feng, 2021) | no | no
26 | How much coffee was consumed (Kalyan et al., 2021) | no | no
27 | Will this Question be Answered (Garg and Moschitti, 2021) | no | no
28 | Continual Learning (Madotto et al., 2021) | no | no
29 | Multilingual and Cross-Lingual Intent (Gerz et al., 2021) | no | no
30 | Investigating Robustness of Dialog Models (Jhamtani et al., 2021) | no | no
Table 1: Our analysis of 30 randomly chosen papers from EMNLP 2021.
# NGC 7538 IRS2 in [NeII]: Shell and Cavity Kinematics of a Compact HII Region
Dan Beilis,1 Sara Beck,1 John Lacy2
1School of Physics and Astronomy, Tel Aviv University, Ramat Aviv, Israel
69978
2Department of Astronomy, University of Texas, Austin Tx USA
<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
NGC 7538 IRS2 is a compact HII region and recent star formation source, with a
shell morphology, lying on the border of the visible HII region NGC 7538. We
present a spectral cube of the [NeII] $12.8$ µm emission line obtained with
the TEXES spectrometer on Gemini North with velocity resolution $\sim 4$ km
s-1 and angular resolution $\sim 0.3^{\prime\prime}$. The kinematics of
the data cube show ionized gas flowing along multiple cavity walls. We have
simulated the kinematics and structure of IRS2 with a model of superimposed
cavities created by outflows from embedded stars in a cloud with density
gradients. Most of the cavities, including the largest that dominates IRS2
structure, are associated with B-type stars; the outflow of the bright
ionizing O star binary IRS2a/b is small in extent and lies in a high-density
clump. The IRS2 model shows that the behavior of an HII region is not a matter
of only the most massive star present; cloud clumpiness and activity of lower
mass stars may determine the structure and kinematics.
###### keywords:
HII regions – ISM: kinematics and dynamics – stars: formation
pubyear: 2022. pagerange: NGC 7538 IRS2 in [NeII]: Shell and Cavity Kinematics of a Compact HII Region–A
## 1 Introduction
NGC 7538, 2.65 kpc distant, is an extended region of star formation. It holds
a visible HII region, on the western and southern edges of which are embedded
star-forming complexes that excite strong infrared sources. The brightest are
the massive embedded star-forming regions IRS1 and IRS2, shown in Fig. 1,
which are south of the visible source and obscured by $A$V at least 15
magnitudes. IRS1 is an ultra-compact HII region excited by an O7 star (Sandell
et al., 2020) at a very early stage of evolution. It is embedded in a dense
disk (ibid.) that has collimated an ionized wind and is the site of methanol
maser spots (Minier, Conway & Booth, 2001), and it drives a giant bipolar outflow
of molecular gas $\sim 3$ pc in extent. IRS2, about 10 arcsec north of IRS1, is
a compact HII region about 0.13 pc in diameter. It has an overall shell or
cometary morphology and is a bright near and mid-infrared nebula. Zhu et al.
(2008) observed the HII region in the [NeII] 12.8 µm line and report that with
$\sim~{}2$ arcsec spatial resolution it has a cometary appearance and the
kinematics of a pressure-driven outflow. The exciting source, IRS2a/b, is a
close binary of an O9 and an O5 star (Kraus et al., 2006). Near-infrared line
images (Bloomer et al. (1998),Kraus et al. (2006)) find shocked gas between
IRS1 and IRS2 and multiple shells of gas centered on IRS2. Bloomer et al.
(1998) suggest that these structures show the bow shock created as IRS2a/b
moves relative to the cloud and to its own stellar wind.
The close association and energetic activity of these embedded massive stars
raise the question of the possible interactions with each other, with the rest
of the young stars in the NGC 7538 region and with the gas clouds in their
environment. The molecular outflow of IRS1 appears to cover the IRS2 HII
region; has it affected the development and appearance of the IRS2 nebula? Has
the stellar wind and bow shock of IRS2 any influence on the complex of
outflows driven by IRS1? To address these questions we must determine how the
IRS2 HII region evolved into its present state, for which we need to
understand the kinematics of the ionized gas. While the molecular gas has been
extensively observed with high spatial and spectral resolution, the ionized
gas has not been as thoroughly studied: Zhu et al. (2008) had high (4 km s-1 )
velocity resolution but relatively poor spatial resolution, with only 3-4
beams across the entire source, and Bloomer et al. (1998) had better ($\sim 1$
arcsec) spatial resolution but velocity resolution only $\sim 350$ km s-1. The
21 cm HI observations of Read (1980) had formal velocity resolution $\sim 4$
km s-1 but the 2 arcmin spatial resolution was too low for useful information
on the individual IRS sources.
We have therefore observed the ionized gas in the IRS2 HII region with high
spatial and spectral resolution, using the TEXES spectrometer on Gemini North
to measure the 12.8 µm [NeII] line with a sub-arcsec beam and true velocity
resolution, including thermal broadening, of $\sim 4$ km s-1 . Starting from
the basic assumption that the HII region structure was created by stellar
outflows in a dense cloud, we create, from PLUTO and the RADMC-3D simulations,
models showing IRS2’s evolution and iteratively compare them to the
observations to arrive at the model most consistent with the data. In the next
section we describe the observations and simulations and review data from the
literature.
## 2 Observations of The IRS2 HII Region
### 2.1 New [NeII] and Radio Maps
The 12.8 µm [NeII] fine structure line is a useful infrared tracer of gas
ionized by young stars. [NeII] line images of compact and ultra-compact HII
regions are seen to match very well with radio continuum images of equivalent
angular resolution (Zhu et al., 2008) except in extreme cases of very high
local obscuration (Beilis, Beck & Lacy, 2021). In the density and ionization
conditions of normal compact and ultra-compact HII regions essentially all the
neon is singly ionized and emits this line. While the exciting star of NGC
7538 IRS2 is unusually hot and may have doubly ionized some neon to
$Ne^{++}$, the spatial distribution of [NeII] in NGC 7538 agrees very well
with the ionized hydrogen of the radio continuum maps (shown below). This
argues that [NeIII] is not significant in IRS2 and [NeII] can be relied on to
trace the motions of the gas.
We observed the [NeII] line in NGC 7538 at Gemini North on 7 November 2006 in
program GN-2006A-DS-2 with the University of Texas echelle spectrograph TEXES
(Lacy et al., 2002). TEXES is a high resolution spectrograph for wavelengths
5-25 µm which uses a 256 $\times$ 256 element Raytheon Si:As array. The
diffraction and seeing limited beam size at Gemini is 0.4 arcsec,
corresponding to $\sim 0.0045$ pc on the source, and the velocity resolution,
including instrumental effects and thermal broadening, is $\sim 4$ km s-1.
Each pixel along the N-S oriented slit was 0.15 arcsec; the slit was 0.59
arcsec wide and 4 arcsec long. The telescope was stepped $0.25$ arcsec east-
west across the source. The scans were merged to create a data cube with
pixels 0.15 arcsec in declination $\times$ 0.314 arcsec in right ascension
(square on the sky) $\times$ 0.92 km s-1 in velocity. The cube was further
processed with a Maximum Entropy deconvolution; while this does not completely
remove the point-spread function it sharpens the cube. The registration of the
cube was checked by comparison to an archival VLA map of the radio continuum
at 6 cm with a $1.35\times 1.09$ arcsec beam that was obtained in Program
AP374 on 12 August 1998 and we conclude that the uncertainty in the absolute
coordinates of the [NeII] cube is $\pm\sim 0.2$ arcsec, a little more than 1
pixel. Fig. 1 shows the total line emission (zeroth moment) map of the
sharpened [NeII] cube and the [NeII] total superimposed on the 6 cm radio map.
(Both the radio and [NeII] maps are consistent with the [NeII] map of Zhu et
al. (2008), allowing for the greatly improved resolution of the current data).
The spatial appearance of IRS2 is very similar in the two tracers, justifying
the use of [NeII] as an ionized gas tracer. The [NeII] observations did not
cover IRS1, which appears in Fig. 1 as a bright radio source south of IRS2.
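The quoted $\sim 4$ km s-1 velocity resolution includes thermal broadening. As an illustrative sanity check (not part of the TEXES pipeline), the thermal Doppler FWHM of the [NeII] line for neon (A = 20) at $T_e = 10^4$ K follows from the standard formula:

```python
import math

def thermal_fwhm_kms(T_kelvin, mass_amu):
    """Thermal Doppler FWHM = sqrt(8 ln2 kT / m), returned in km/s."""
    k_B = 1.380649e-23      # Boltzmann constant, J/K
    amu = 1.66053907e-27    # atomic mass unit, kg
    sigma = math.sqrt(k_B * T_kelvin / (mass_amu * amu))  # 1-D dispersion, m/s
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma / 1e3

# Neon (A = 20) at T_e = 1e4 K -> ~4.8 km/s, consistent with the quoted ~4 km/s
print(round(thermal_fwhm_kms(1.0e4, 20.0), 1))
```

Lighter species would of course broaden more; for hydrogen recombination lines the same formula gives a FWHM several times larger, which is one practical advantage of tracing kinematics with a heavy ion like Ne+.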
Figure 1: Top: The zeroth moment of the sharpened [NeII] cube. The noise level
is $\sim 1.8$ and the contour levels are 17 to 89.9 in steps of 9 (units erg
s-1 cm-2 sr-1 cm). Bottom: the [NeII] contours superimposed on the 6 cm radio
continuum map; the radio noise level $\sim 0.1$mJy bm-1.
### 2.2 Data from the Literature
#### 2.2.1 Stars
Kraus et al. (2006) obtained high resolution $K^{\prime}$ band images of the
IRS1 and IRS2 region with bispectrum speckle interferometry. They found that
the exciting star of IR2 is a close binary, IRS2a/b, and further located point
sources. In Fig. 2 we show the positions in the HII region of the stars and
stellar candidates they report. They find spectral types O5 and O9 for IRS2a
and IRS2b respectively. From the relative brightnesses, the $K^{\prime}$
magnitudes and Ducati et al. (2001) we estimate the spectral type of the
brightest stellar candidates, stars A, P and Q, to be B0-B1.5. The other point
sources are almost 1 magnitude fainter than star A at $K^{\prime}$ and their
nature is unclear; they may be stars of type B or later in NGC 7538 or non-
stellar density peaks in the extended emission.
Figure 2: Locations of point sources measured by Kraus et al. (2006) on the
moment 0 [NeII] map. The overlapping green and red circles show the two
components of the main source IRS2a/b, the blue circle is the location of
Kraus et al. (2006)’s star A, and the other circles correspond, from north to
south, to their sources P,Q,F,E as labelled.
#### 2.2.2 Molecular Clouds and Outflows
The systemic velocity of the molecular cloud surrounding IRS2 and IRS1 is -57
km s-1 (Sandell et al., 2009). The observed velocities of the molecular gas
are dominated by the complex of powerful and very extended outflows driven
most notably by IRS1. The blue lobe of the molecular outflow extends over the
position of IRS2 and has a velocity range between $\sim-80$ and $-64$ km s-1
(Scoville et al. (1986), Sandell et al. (2020)). Sandell et al. (2009) and
Bloomer et al. (1998) mapped shock tracers around IRS2 and suggested that the
shock is excited from the south, by the IRS1 outflow. This interaction may
also have created the complex layers or filaments of ionized gas apparent on
the south edge of IRS2.
## 3 Kinematics of Ionized Gas in IRS2 – Results
### 3.1 Moment Maps
Figure 3: Top: The first moment (mean velocity $V$IR) of the [NeII] cube.
Bottom: The second moment $\sigma$V of the [NeII] cube.
Fig. 3 displays the first and second moments, respectively the intensity
weighted velocity and the velocity dispersion, of the [NeII] data. Both maps
agree with the picture of overlapping shells or bubbles from the moment 0 and
the radio maps. The first moment map shows that the bulk of the [NeII] is
significantly blue of the systemic cloud velocity of -57 km s-1, suggesting
that the gas is expanding preferentially towards us, and that the rim is the
most blue-shifted region. The [NeII] line emission, summed over the entire
source, is close to a Gaussian centered at $-66$ km s-1 and with $\sim 20$ km
s-1 FWHM. In the second moment map the dispersion is low and almost uniform on
the rim of the HII region, while the central portion contains distinct areas
of higher $\sigma$V, some of which will be shown in the next section to be
affected by double-peaked line profiles. The pattern of the dispersion shows
that the gas has expanded more freely on the western side of IRS2 and suggests
a shell open on the west and with a closed vertex on the east. The shell
appears tilted into the plane of the sky.
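The moment maps of Fig. 3 follow the standard definitions: integrated intensity, intensity-weighted mean velocity, and velocity dispersion. A minimal sketch of how such maps are computed from a (velocity, Dec, RA) cube, tested on a synthetic Gaussian line matching the source-summed spectrum (centered at $-66$ km s-1 with 20 km s-1 FWHM, 0.92 km s-1 channels), might look like:

```python
import numpy as np

def moment_maps(cube, v):
    """Zeroth, first and second moments of a (nv, ny, nx) spectral cube.

    m0: integrated intensity; m1: intensity-weighted mean velocity;
    m2: velocity dispersion sigma_V (as in Fig. 3).
    """
    w = cube.sum(axis=0)                                   # total intensity per pixel
    m0 = w * (v[1] - v[0])                                 # integrate over channels
    m1 = (cube * v[:, None, None]).sum(axis=0) / w         # weighted mean velocity
    m2 = np.sqrt((cube * (v[:, None, None] - m1) ** 2).sum(axis=0) / w)
    return m0, m1, m2

# Synthetic test: the same Gaussian line in every pixel of a small 4x4 map
v = np.arange(-110.0, -22.0, 0.92)                         # channel axis, km/s
sigma = 20.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))          # FWHM -> sigma
line = np.exp(-0.5 * ((v + 66.0) / sigma) ** 2)
cube = np.repeat(line[:, None, None], 4, axis=1).repeat(4, axis=2)
m0, m1, m2 = moment_maps(cube, v)
```

The recovered m1 is $-66$ km s-1 and m2 is the input $\sigma \approx 8.5$ km s-1, so a second-moment map close to this value over the rim of IRS2 would indicate an essentially unresolved (thermal plus turbulent) line width there.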
### 3.2 Position-Velocity Diagrams
Fig. 4 shows the Position-Velocity Diagrams (PVDs) in cuts across IRS2 in
Declination (top) and Right Ascension (bottom). The width of the PVD (in the
velocity dimension) shows the relatively small range of velocity across the
source; the width of the line in one position is comparable to the maximum
shift of the line peak across the source. In many of the cuts the PVD is
curved and clumpy, with the turn-around point in the curve close to the cloud
velocity of $-57$ km s-1 . Zhu et al. (2008) and Immer et al. (2014) display
the PVDs produced by different types of gas flows. Comparing their models to
our observations we see that the NGC 7538 PVDs closely resemble pressure-
driven flows, with shell-like features in some positions. We do not see the
velocity offsets that characterize bow-shock systems in which the star moves
into the cloud.
Figure 4: Position- Velocity Diagrams of the NGC 7538 data, showing the
intensity of [NeII] emission at the given velocity $V$IR and position along
cuts in Declination (top) and Right Ascension (bottom).
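A PVD cut like those in Fig. 4 is simply the cube summed over a narrow slice perpendicular to the cut direction. A sketch of the extraction (the slice half-width and axis conventions here are our assumptions, not the paper's exact procedure):

```python
import numpy as np

def pvd(cube, dec_row=None, ra_col=None, half_width=1):
    """Position-velocity diagram from a (nv, ny, nx) cube.

    Sums a (2*half_width + 1)-pixel-wide slice; the cut must lie at least
    half_width pixels from the cube edge. Fixing dec_row gives velocity vs.
    right ascension; fixing ra_col gives velocity vs. declination.
    """
    if dec_row is not None:
        return cube[:, dec_row - half_width:dec_row + half_width + 1, :].sum(axis=1)
    return cube[:, :, ra_col - half_width:ra_col + half_width + 1].sum(axis=2)

# Delta-function test cube: one bright voxel at channel 2, Dec row 3, RA col 4
cube = np.zeros((5, 7, 9))
cube[2, 3, 4] = 1.0
```

With real data each returned (velocity, position) array would then be displayed with position on one axis and velocity channel on the other, as in Fig. 4.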
### 3.3 Working Picture: Multiple Overlapping Shells of Gas
The observations of Zhu et al. (2008) with spatial resolution $\sim 1.8$
arcsec led them to interpret IRS2 as a single limb-brightened source, redder
in the center than on the edge, and overall blue-shifted relative to the
cloud. They assumed a single exciting star in the center and interpret the gas
kinematics of IRS2 as a tangential outflow towards the observer along the
walls of a stationary shell. The much higher spatial resolution of our [NeII]
data, together with current information on the stellar population, gives a
different and more complicated picture of the source.
First, the high spatial resolution in Fig. 3 shows several apparently
overlapping shells and partial shells. The velocity dispersion is highest
through the centers of the apparent shells (Fig. 3). This may suggest that the
cavities are not empty but contain expanding lower-density gas, or,
alternatively, that the cavities are not entirely open on the side facing us
and that component of the gas flow adds to the apparent dispersion. Then,
examining the velocity structure in detail with channel maps in Fig. 5, we see
three main shells at slightly different velocities; the shells are identified
and marked also in Fig. 6. The cavity marked ’A’ first appears at velocities
close to the cloud velocity, is clear in the channel maps around $\sim-63$ km
s-1 and persists through to the bluest velocities. The velocity range of
cavity ’B’ overlaps with cavity ’A’ on the red side; cavity ’B’ is clearest
between $-62$ and $-67$ km s-1 and fades out at more blue velocities, and the
small cavities ’C’ and ’D’ first appear at velocities around $\sim-63$ and
$\sim-73$ km s-1. It should be noted that while the overall shift of the
velocity blue relative to the cloud may suggest a cometary flow, there is not
the relation of size and velocity that characterizes cometary flow features.
In a classic cometary flow (e.g. W33-Main-4, Beilis, Beck & Lacy (2021)) the
size of the feature increases with blue-shift. In NGC 7538 the source does
appear to expand with velocity at the lowest velocities $\lessapprox 65$ km
s-1 , but the size is roughly constant between $-70$ and $-80$ km s-1 , and
while it appears slightly larger in that range than at lower velocities, that
may be an artifact of the higher S/N. Alternatively, the size-velocity
correlation depends on the moving star having created a paraboloidal cavity
and will not appear if the cavity is cylindrical instead.
Figure 5: Selected 0.92 km s-1 wide velocity channels of the [NeII] data cube;
the velocities are given on each panel. Contour levels are $n\times 0.5$ erg
s-1 cm-2 sr-1 cm$,n=1,2,3...10$.
The line profiles at every position on the source are shown in the Appendix
Fig. 11, and Fig. 6 shows the spectrum in a 0.3 arcsec radius in each of the
four candidate shells. Fig. 6 and Fig. 3 together demonstrate that the shells
are the regions of highest velocity dispersion and that line profiles through
the shells are double-peaked. This is the spectral signature expected of
expanding unfilled shells. The spectra taken on the walls of the shells or
cavities have single peaks with intensity higher than at positions in the
shell; this agrees with the picture that the bulk of the gas flows along these
walls.
## 4 NGC 7538 Simulations and Model
### 4.1 Motivation
Our examination of the data cube in the previous sections shows that NGC 7538
IRS2 includes several cavities or shells and that the main cavities are
associated with embedded stars or protostellar sources. We now try to
determine how this structure was created. Where the low spatial resolution of
previous studies showed a simplified picture of one large cavity and only one
possible stellar source, IRS2a/b, we now have multiple cavities and potential
driving sources to consider. In this section we model the source as having
been created by multiple stellar outflows expanding into a cloud with density
gradients. The positions of the cavities lead us to take as driving stars
IRS2a/b and Kraus et al. (2006)’s sources A,P,Q. We have assigned stellar and
wind parameters to each simulated star based on the spectral types as
estimated in section 2.2.1 above, and the OB wind results of Lamers &
Leitherer (1993) and Krtička (2014). The parameters are given in Table 1.
It should be noted that young stars are active and known to drive many
different forms of outflow and mass expulsion, ranging from highly collimated
jets to high-velocity, low-density stellar winds. Our models do not depend on,
nor do we specify, any particular type or strength of activity. We assume only
that some form of wind or outflow (here used interchangeably) was present and
has created the observed cavities, along which the dense ionized gas (traced
by [NeII]) flows, driven by pressure.
Figure 6: A map of the -65 km s-1 velocity channel with the embedded stars
and the 4 cavities marked and the [NeII] line profile at each cavity shown.
The red dots are approximately the beam size.
We simulated the gas kinematics with PLUTO, as described in the next section,
and iteratively adjusted the cloud density structure and stellar positions to
best match the observed data cube. In the next section we review PLUTO and the
basic calculations, and in the following sections we discuss how we fine-tuned
the stellar positions and density structure.
### 4.2 Hydrodynamics – PLUTO
We created the model of the HII region with the PLUTO hydrodynamics package
(Mignone et al., 2007). The simulation conditions and boundaries were as
described in Beilis, Beck & Lacy (2021) and the cell width $4.8\times 10^{-4}$
pc. The boundary conditions in all directions were set to outflow. The
Navier–Stokes equations of classical fluid dynamics were solved in an Eulerian
method in 3D Cartesian coordinates using a HLL approximate Riemann solver
(Harten, Lax & Leer, 1983). $T_{e}=10^{4}$ K was assumed. To simplify the
model, we did not include heating and cooling processes; as the supersonic
expansion of the gas into the evacuated bubbles should be close to isothermal
(Franco et al., 1990) their effect should be small. Finally, a Courant number
of $0.3$ was used based on the Courant-Friedrichs-Lewy condition (Courant,
Friedrichs & Lewy, 1967).
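The Courant number of 0.3 caps the explicit timestep at $\Delta t = C\,\Delta x/(|v|_{\max} + c_s)$. A back-of-the-envelope evaluation for the quoted cell width and the fastest wind in Table 1 (the $\sim 10$ km s-1 sound speed for ionized gas at $10^4$ K is our assumption):

```python
def cfl_timestep_yr(dx_pc, v_max_kms, c_s_kms, courant=0.3):
    """Largest stable explicit timestep (in years) allowed by the CFL
    condition, dt = C * dx / (|v|_max + c_s)."""
    pc_in_km = 3.0857e13   # kilometres per parsec
    s_in_yr = 3.1557e7     # seconds per year
    return courant * dx_pc * pc_in_km / (v_max_kms + c_s_kms) / s_in_yr

# cell width 4.8e-4 pc, IRS2a/b wind speed 40 km/s, assumed sound speed ~10 km/s
dt = cfl_timestep_yr(4.8e-4, 40.0, 10.0)   # on the order of a few years
```

A timestep of a few years per cell crossing shows why evolving a $\sim 0.1$ pc HII region over its dynamical lifetime requires many thousands of hydrodynamic steps.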
Each star starts with a spherically symmetric radial wind flowing from the
origin of coordinates and remains stationary so that for an ambient medium
density of $\rho\textsubscript{a}$ and pressure of $p\textsubscript{a}$, the
initial conditions are: $\rho=\rho\textsubscript{a}$ and
$p=p\textsubscript{a}$. The wind is injected within the internal boundary
$r_{0}$ (inside the inner reverse shock of a stellar wind bubble) and based on
Mignone (2014) maintains these flow quantities constant in time:
$\begin{array}[]{cc}r^{2}v_{r}\rho=r_{0}^{2}V_{0}\rho_{0}&\text{(conservation
of mass flux)}\end{array}$ (1)
$\begin{array}[]{cc}v_{r}=V_{0}\tanh\left(\frac{r}{r_{0}}\right)&\text{(stellar
wind acceleration structure)}\end{array}$ (2)
$\begin{array}[]{cc}p=\frac{c_{s}^{2}}{\Gamma}(\rho_{0}^{1-\Gamma})\rho^{\Gamma}&\text{(pressure-
density adiabatic relation)}\end{array}$ (3)
where $r$ is the radius, $\rho_{0}$ is the ambient density, $V_{0}$ is the gas
velocity at $r_{0}$, $p$ is the gas pressure, $c_{s}$ is the sound speed of the
gas and $\Gamma=c_{P}/c_{V}$ is the ratio of specific heat coefficients.
Values of $r_{0}$ and $V_{0}$ for the 4 simulated stars can be seen in Table
1.
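Equations (1)–(3) can be collected into a single routine returning the flow quantities imposed inside the internal boundary. This is an illustrative sketch, not the PLUTO implementation; here $\rho_0$ is treated as the reference density entering the mass-flux constraint, and the profile is valid for $0 < r \leq r_0$:

```python
import math

def wind_boundary(r, r0, V0, rho0, c_s, gamma=5.0 / 3.0):
    """Wind quantities held fixed inside the internal boundary (Eqs. 1-3).

    Returns (rho, v_r, p) at radius r, for 0 < r <= r0.
    rho0: reference density in the mass-flux constraint (assumption).
    """
    v_r = V0 * math.tanh(r / r0)                      # Eq. (2): wind acceleration
    rho = rho0 * (r0 / r) ** 2 * (V0 / v_r)           # Eq. (1) solved for rho
    p = (c_s ** 2 / gamma) * rho0 ** (1.0 - gamma) * rho ** gamma  # Eq. (3)
    return rho, v_r, p
```

By construction $r^2 v_r \rho = r_0^2 V_0 \rho_0$ at every radius, so the mass flux through the internal boundary is constant in time, which is the property the injection scheme needs.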
#### 4.2.1 Locating Stars and IRS2a/b
The positions of the various stars in the X and Y axis (the Dec and RA axes in
the plane of the sky) were based on the coordinates given by Kraus et al.
(2006), while the distances in the line-of-sight axis (Z-axis) were determined
by a iterative fine-tuning process to give the best match to the observed data
cube.
The brightest stars in IRS2, and therefore the sources of the strongest winds,
are in IRS2a/b: they are estimated to be type O9 and O5 (Kraus et al., 2006)
and their separation on the plane of the sky is only 0.195arcsec (ibid). Will
the interaction of the two star’s outflows modify their effect on the ambient
cloud? As a test we simulated only those two stars in a constant ambient
density. We found that on length scales comparable to the distance between the
stars, the interactions had significant effect, producing noticeable swirls
and spirals. But on length scales comparable to the distances between the
embedded stars in IRS 2, the effects are negligible. The two stellar winds
together produced a single minimally distorted spherical cavity, and the
radius of the cavity was almost the same as what the more massive star (IRS2a)
would make alone. We therefore treat IRS2a/b in the simulations as a single
star at the center of the binary’s orbit and with IRS2a parameters.
#### 4.2.2 Density Structure
The overall structure of IRS2 is dominated by the largest shell. It is
blueshifted relative to the cloud and has PVD characteristic of a strong
pressure driven flow (Zhu et al., 2008). To match this in the model, the
density gradient includes both a power law function
$H(r)\propto r^{-\alpha}$ and a step function. This sharpens the
$r\textsubscript{d}$ boundary between the low and high density areas (Henney,
Arthur & García-Díaz, 2005), and allows us to place the stars off-center at
distance $\tilde{z}$ from the molecular cloud core. We used a standard
$\alpha=1.5$ (Pirogov, 2009; Sato et al., 2010). $\rho_{0}$ was chosen to give
density on the order of $10^{5}\text{ }$cm-3 close to the edge of the high
density region, and $\tilde{\rho}_{0}=10^{2}\text{ }$cm-3, so the ambient
density profile is:
$\rho\textsubscript{a}(r,\tilde{z})=\begin{cases}\rho_{0}H(r)&r<r\textsubscript{d}\\\
\tilde{\rho}_{0}&r>r\textsubscript{d}\\\ \end{cases}$ (4)
The core’s center was set at an approximately $45^{\circ}$ angle relative to
the Z or line-of-sight axis, to match the PVDs. IRS2a/b was placed at the edge
of the high density area.
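Equation (4) is straightforward to evaluate; the normalization of $H(r)$ at the step radius $r_d$ (so that the density reaches $\sim 10^5$ cm-3 near the edge of the high-density region, as stated above) is our assumption:

```python
def ambient_density(r, r_d, rho0=1.0e5, rho_out=1.0e2, alpha=1.5):
    """Ambient density profile of Eq. (4), in cm^-3, for r > 0.

    Power-law core inside r_d, normalized (our assumption) so the density
    equals rho0 at the r_d boundary; constant low density rho_out outside
    the step.
    """
    if r < r_d:
        return rho0 * (r / r_d) ** (-alpha)   # H(r) ~ r^(-alpha) core
    return rho_out                            # evacuated exterior
```

The three-order-of-magnitude density contrast across $r_d$ is what lets the simulated outflows blow out preferentially toward the low-density side, reproducing the blueshifted, pressure-driven PVDs.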
The density structure of the data cube indicates that stars A, P and Q are in
regions of low, constant ambient density. To fit this structure it is necessary
to assume that these stars had already evacuated bubbles around themselves
before the main IRS2a/b outflow started. As they are B stars and much longer
lived than the O stars that comprise IRS2a/b, this is realistic. The density
in the bubbles was set to $\rho\textsubscript{a}=10\text{ }$cm-3 and the sizes
on the order of the observed shell sizes. We added a small low density bubble
close to star P at the edge of star A’s bubble, which reproduces a filamentary
structure extending towards the observer in the data cube.
The region around IRS2a/b, Cavity B, is particularly complex. The data cube
indicates that a strong outflow has expanded unevenly into a clumpy and dense
medium. The channel maps of Fig. 5 show that the south-west side of Cavity B
is blue-shifted and its north-east edge relatively red-shifted. To match this
we placed one small and two large low density
($\rho\textsubscript{a}=10^{2}\text{ }$cm-3) spheres near IRS2a/b. The large
spheres north-east and south-west of IRS2a/b, are offset in the line-of-sight
and are dominated respectively by red and blue outflowing material.
The proximity of the strong source IRS2a/b to the large cavity around star A
suggests that the IRS2a/b outflow could expand into the low density Cavity A,
but simulations of the IRS2a/b and star A outflows interacting produced a wake
which is not seen in the observations. We therefore constrain the IRS2a/b
outflow to remain within the high density Cavity B area. We note that the
mismatch with the data may be due to the limitations of the simulations and of
the simple stellar outflow model that we have chosen, and does not rule out
the possibility that IRS2a/b is also expanding into cavity A.
Fig. 7 shows the most satisfactory model we have obtained. In the figure the
driving stars, the main cavities and the low and high-density structures are
displayed from several angles; the full 3-dimensional model can be seen as a
movie in the appendix. Fig. 8 shows the zeroth moment of the simulated cube,
as viewed from our position.
Table 1: Simulated stars’ parameters
Star | Spectral Type | $M_{*}$ | $T_{*}\textsuperscript{eff}$ | $r_{0}$ | $V_{0}$
---|---|---|---|---|---
| | M$\odot$ | $10^{4}$ K | $10^{-3}$ pc | km s-1
IRS2a/b | O5 | 51.00 | 4.43 | 6.49 | 40
A | B0V | 14.73 | 3.00 | 3.25 | 20
P | B1.5V | 9.26 | 2.40 | 1.91 | 12
Q | B1.5V | 9.26 | 2.40 | 1.91 | 12
Note: List of the simulated stars. We have assigned to each star a mass ($M_{*}$), effective temperature ($T_{*}\textsuperscript{eff}$), internal boundary of the bubble created by the outflow ($r_{0}$), and $V_{0}$ the gas velocity at $r_{0}$, consistent with the spectral types as estimated in section 2.1.1. |
Figure 7: Multiple viewing angles of the [NeII] density model of NGC 7538 IRS
2 produced by PLUTO where the high and low density areas are marked on the
figure and the stars are denoted by red spheres. 1 X,Y,Z unit is 0.00048 pc
and the observer is located in the Z+ direction. The velocity scale was chosen
for computational efficiency and is offset from the observed velocities by 80
km s$^{-1}$.
### 4.3 3D ionic line emission profile simulation – RADMC-3D
The PLUTO outputs (Ne+ density, velocity, and position) were input to the
radiative transfer software RADMC-3D (Dullemond et al., 2012), which
calculated the 3D [NeII] line profile emission. We assumed a constant gas
temperature of $10^{4}$ K (the line intensity is not very sensitive to
temperature, scaling as $(10^{4}~\mathrm{K}/T_{e})^{1/2}$). The line
profile was simulated at the spectral range and resolution matching the
observed [NeII] data cube. The PVDs extracted from the simulated cube are
shown in Fig. 9 and the channel maps in Fig. 10. (The velocity scale in the
simulated cube was chosen for a computationally convenient zero, and therefore
there is an overall shift in velocity between the simulated and observed data
cubes).
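The weak temperature dependence quoted above can be illustrated numerically. This is a minimal sketch of the scaling factor only, not part of the RADMC-3D setup:

```python
# Illustrates the weak temperature dependence of the [NeII] line intensity:
# the factor (1e4 K / T_e)**0.5 varies only modestly over plausible
# HII-region electron temperatures.
def intensity_scale(T_e: float) -> float:
    """Relative intensity scaling at electron temperature T_e (in K)."""
    return (1e4 / T_e) ** 0.5

low = intensity_scale(5_000)    # ~1.41
mid = intensity_scale(10_000)   # 1.0 by construction
high = intensity_scale(15_000)  # ~0.82
```

Even a factor-of-three change in $T_{e}$ moves the line intensity by less than a factor of two, which justifies adopting a single constant temperature.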
Figure 8: Moment 0 of the simulated data cube convolved with the Gemini beam
size, showing total [NeII] emission.
#### 4.3.1 Simulation Results and the Observed Data Cube
The moment 0 map of the simulated data cube agrees well with the observed
spatial distribution of the radio continuum and the [NeII] (Fig. 1). A
striking feature of the simulation is the ridge of bright emission running SE-
NW. In the simulation, the ridge is the overlap in the line of sight of the
density features created by the outflows of IRS2a/b and stars A and P; the
three intensity peaks show (east to west) the IRS2a/b outflow, the outflow of
star A meeting the wall of Cavity A, and the Cavity A wall meeting the outflow
of star P.
This [NeII] ridge coincides spatially with the region in which Bloomer et al.
(1998) find emission from the shock tracers $H_{2}$ and [FeII]; they suggest
that the shocks show where the stellar wind of IRS2a/b impacts the molecular
cloud. Bloomer et al. (1998) further suggest that IRS2a/b has created a bow
shock by moving through the cloud SE at $\sim 10$ km s$^{-1}$. The observed PVDs,
however, show the signature of pressure driven flows into cavities of
stationary stars rather than the bow shock signature of moving stars. The
current data and the shock tracers may be consistent if IRS2a/b is moving
almost entirely in the plane of the sky, and so creating shocks but not the
kinematic marker of a bow shock. It is also possible that the shocks Bloomer
et al. (1998) observe were excited by the HII region expanding into the
molecular cloud and do not reflect stellar motion.
The channel maps of the observed (Fig. 5) and simulated (Fig. 10) agree fairly
well. In particular the simulations match the observed velocity development of
Cavities A and B, giving us some confidence in our model for the complex
setting of IRS2a/b. The PVDs of the simulation match many but not all of the
small brightness clumps in the original data, and confirm that the kinematics
are champagne-type; that is, with no motion of the exciting star relative to
the cloud.
Figure 9: Position-Velocity Diagrams showing the velocity of the simulated
[NeII] 3D line profile at every position in the simulation results. Cuts in
Dec are at the top and R.A. on the bottom. The 0 of the velocity scale is
arbitrary and the results are convolved to match the Gemini $0.3^{\prime\prime}$
beam size. Figure 10: Velocity channel maps from the simulated [NeII] 3D line
profile, convolved to match the Gemini beam size.
## 5 Discussion and Conclusions
We have obtained high spatial and spectral resolution observations of the
$12.8$ µm [NeII] emission in NGC 7538 IRS2. The ionized gas traced by the
[NeII] line has formed several cavities or shells, and the PVDs suggest that
the gas flows along the walls of cavities created by stellar outflows. The
observed cavities can be correlated with some of the young embedded stars
observed in the near-infrared by Kraus et al. (2006). We have accordingly
modelled the HII region as a collection of overlapping shells or bubbles
created by outflows from the embedded stars. Simulations of the gas kinematics
from the model show:
* •
The observations can be well matched with 4 outflow cavities.
* •
Three of the outflow cavities (A,B, and C) are associated with embedded stars;
the fourth has no known stellar sources.
* •
The early O binary pair IRS2a/b, the main luminosity and ionization source, is
not associated with the largest cavity but is located in a small high-density
region.
* •
The largest cavity, that dominates the source structure, is in a low-density
region and holds an early B-type star.
* •
The overlap of outflow cavities in the line of sight has created the bright
clumpy ridge structure which dominates the continuum maps and which is thought
to be a shocked region.
We do not have a timeline for the creation of the cavities, as we have no
information about the strength or duration of the outflows that formed them,
nor about their relative ages. We note that the O stars of IRS2a/b are short-
lived compared to B-type stars, so stars A, P, and Q may well be older than IRS2a/b
and have had longer to affect the cloud structure, and also that even complicated
density structures may be formed quickly, especially if the ambient gas is not
very dense. In the conservative limiting case that one of the small cavities
which appears as a closed sphere in the models is the result of a bubble
expanding at $\sim 5$ km s$^{-1}$, it would reach its current size of $\approx
10^{-2}$ pc in only $2\times 10^{3}$ years.
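The timescale quoted above follows from a simple constant-velocity estimate; a short sketch of the arithmetic, using only standard unit conversions:

```python
# Order-of-magnitude check of the bubble expansion timescale: a shell
# reaching r ~ 1e-2 pc at a constant v ~ 5 km/s.
PC_IN_KM = 3.086e13        # kilometres per parsec
SECONDS_PER_YEAR = 3.156e7

def crossing_time_yr(radius_pc: float, speed_km_s: float) -> float:
    """Years for a shell expanding at constant speed to reach the given radius."""
    return radius_pc * PC_IN_KM / speed_km_s / SECONDS_PER_YEAR

t = crossing_time_yr(1e-2, 5.0)  # roughly 2e3 years, as quoted in the text
```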
The model we have developed gives results which agree with the observed data,
but this does not necessarily mean that the model matches reality. It is
possible that the observed cavities were created by or influenced by some
mechanism other than stellar outflows, or by stars other than the ones
recorded by Kraus et al. (2006). The molecular medium around IRS2 is active
and complex. The NGC 7538 region holds several embedded IR sources that drive
large-scale molecular flows. The closest to IRS2 on the plane of the sky is
the very active source IRS1 which has a strong outflow. It is possible,
although not observed, that the IRS1 outflow has affected the IRS2 region.
Observations of the molecular gas in the IRS2/IRS1 region with good spatial
resolution could determine if the two sources are interacting; measurements of
shock tracers would likewise be useful.
NGC 7538 IRS2 is an excellent example of a mature compact HII region with
multiple embedded stars. Our data suggest that even though the ionization and
luminosity of the source is dominated by one (binary) star, the structure of
the source has been shaped by the smaller stars; their outflows have created
the cavities and shells in which the gas flows. The importance of the smaller
cluster stars to the structure of IRS2 may help understand the structure and
evolution of other HII regions that hold embedded proto-clusters.
## Acknowledgements
Based on observations obtained at the international Gemini Observatory, a
program of NSF’s NOIRLab, which is managed by the Association of Universities
for Research in Astronomy (AURA) under a cooperative agreement with the
National Science Foundation, on behalf of the Gemini Observatory partnership:
the National Science Foundation (United States), National Research Council
(Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio
de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência,
Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space
Science Institute (Republic of Korea).
## Data Availability
The TEXES [NeII] data is available at the Gemini Observatory Archive under the
Program ID GN-2006A-DS-2, and in FITS format at
https://github.com/danbeilis/data/tree/master/ngc7538. The radio continuum
data may be found at the VLA Data Archive under Program AP374.
## References
* Beilis et al. (2021) Beilis D., Beck S., Lacy J., 2021, MNRAS, 509, 2234
* Bloomer et al. (1998) Bloomer J. D., et al., 1998, ApJ, 506, 727
* Courant et al. (1967) Courant R., Friedrichs K., Lewy H., 1967, IBM J. Res. Dev., 11, 215
* Ducati et al. (2001) Ducati J. R., Bevilacqua C., Rembold S., Ribeiro D., 2001, ApJ, 558, 309
* Dullemond et al. (2012) Dullemond C., Juhasz A., Pohl A., Sereshti F., Shetty R., Peters T., Commercon B., Flock M., 2012, ASCL, pp ascl–1202
* Franco et al. (1990) Franco J., Tenorio-Tagle G., Bodenheimer P., 1990, ApJ, 349, 126
* Harten et al. (1983) Harten A., Lax P. D., Leer B. v., 1983, SIAM Rev., 25, 35
* Henney et al. (2005) Henney W. J., Arthur S. J., García-Díaz M. T., 2005, ApJ, 627, 813
* Immer et al. (2014) Immer K., Cyganowski C., Reid M. J., Menten K. M., 2014, A&A, 563, 22
* Kraus et al. (2006) Kraus S., et al., 2006, A&A, 455, 521
* Krtička (2014) Krtička J., 2014, A&A, 564, A70
* Lacy et al. (2002) Lacy J., Richter M., Greathouse T., Jaffe D., Zhu Q., 2002, PASP, 114, 153
* Lamers & Leitherer (1993) Lamers H. J., Leitherer C., 1993, ApJ, 412, 771
* Mignone (2014) Mignone A., 2014, J. Comput. Phys., 270, 784
* Mignone et al. (2007) Mignone A., Bodo G., Massaglia S., Matsakos T., Tesileanu O., Zanni C., Ferrari A., 2007, ApJS, 170, 228
* Minier et al. (2001) Minier V., Conway J. E., Booth R. S., 2001, A&A, 369, 278
* Pirogov (2009) Pirogov L. E., 2009, Astronomy Reports, 53, 1127
* Read (1980) Read P., 1980, MNRAS, 193, 487
* Sandell et al. (2009) Sandell G., Goss W. M., Wright M., Corder S., 2009, Astrophys.J., 699, L31
* Sandell et al. (2020) Sandell G., Wright M., Güsten R., Wiesemeyer H., Reyes N., Mookerjea B., Corder S., 2020, ApJ, 904
* Sato et al. (2010) Sato J., et al., 2010, ApJ, 724, 59
* Scoville et al. (1986) Scoville N., Sargent A., Sanders D., Claussen M., Masson C., Lo K. Y., Phillips T., 1986, Masers, Molecules and Mass Outflows in Star Forming Regions. Proceedings of a meeting held by the Haystack Observatory, Westford, Mass., USA, 15 - 16 May 1985.. A. D. Haschick (Editor). Haystack Observatory, Westford, Mass., USA. P. 201, 1986
* Zhu et al. (2008) Zhu Q.-F., Lacy J. H., Jaffe D. T., Richter M. J., Greathouse T. K., 2008, ApJS, 177, 584
## Appendix A Line Profiles
Figure 11: The line profile in 1 pixel at the given positions. The velocity
scale and intensity scale are the same in all positions and are shown at the
top right. The pixels shown are every 3rd pixel in Dec. and every 6th in R.A.
# SNAC: Speaker-normalized affine coupling layer in flow-based architecture
for zero-shot multi-speaker text-to-speech
Byoung Jin Choi, Myeonghun Jeong, Joun Yeop Lee,
and Nam Soo Kim. This work was supported by Samsung Research, Samsung
Electronics Co., Ltd. Byoung Jin Choi, Myeonghun Jeong, and Nam Soo Kim are
with the Department of Electrical and Computer Engineering and with the
Institute of New Media and Communications, Seoul National University, Seoul
08826, South Korea (e-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>,
<EMAIL_ADDRESS>). Joun Yeop Lee is with Samsung Research, Samsung Electronics
Co., Ltd, Seoul, South Korea (e-mail: <EMAIL_ADDRESS>).
###### Abstract
Zero-shot multi-speaker text-to-speech (ZSM-TTS) models aim to generate a
speech sample with the voice characteristic of an unseen speaker. The main
challenge of ZSM-TTS is to increase the overall speaker similarity for unseen
speakers. One of the most successful speaker conditioning methods for flow-
based multi-speaker text-to-speech (TTS) models is to utilize the functions
which predict the scale and bias parameters of the affine coupling layers
according to the given speaker embedding vector. In this letter, we improve on
the previous speaker conditioning method by introducing a speaker-normalized
affine coupling (SNAC) layer which allows for unseen speaker speech synthesis
in a zero-shot manner leveraging a normalization-based conditioning technique.
The newly designed coupling layer explicitly normalizes the input by the
parameters predicted from a speaker embedding vector while training, enabling
an inverse process of denormalizing for a new speaker embedding at inference.
The proposed conditioning scheme yields the state-of-the-art performance in
terms of the speech quality and speaker similarity in a ZSM-TTS setting.
###### Index Terms:
speech synthesis, zero-shot multi-speaker text-to-speech, conditional
normalizing flow
## I Introduction
As the sample quality of recently proposed neural text-to-speech (TTS)
models [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] approaches that of natural speech,
research interest has extended to high-fidelity multi-speaker TTS systems,
which enable speech generation for multiple speakers via a single trained
model. However, training a multi-speaker TTS system requires a large dataset
of $[text,audio,speaker]$ tuples, for which the labeling can be costly.
Furthermore, such systems are limited to generating the voices of speakers seen
during training, whereas instant adaptation to a new speaker’s
voice may be required in real-life applications. To this end, _personalized
TTS_ is gaining huge attention from researchers.
Personalized TTS aims at generating new speakers’ speech with limited
resources. One possible approach is speaker adaptation. The idea of adapting a
pre-trained TTS model to a new speaker with more than one $[text,audio]$ pair
dates back to the hidden Markov model (HMM)-based TTS [12, 13, 14, 15]. [16]
and [17] extend the maximum likelihood linear regression (MLLR) algorithm for
speaker adaptation. For more robust speaker adaptation, structured maximum _a
posteriori_ linear regression (SMAPLR) [18] is developed by combining the
maximum _a posteriori_ (MAP) criterion to MLLR. The adaptation process is
based on the affine transformation of mean and variance of HMM parameters for
the target speaker and such transformation matrices are derived by maximum
likelihood and MAP criterion respectively. With the recent development of non-
autoregressive neural TTS systems, [19] and [20] focus on effectively
finetuning the parameters of a pre-trained neural TTS model to adapt to the
target speaker’s characteristics.
Another approach deals with an extreme situation where only an $[audio]$ from
a target speaker is available. The model is required to correctly reflect the
unseen target speaker’s characteristics without further finetuning.
This task is known as zero-shot multi-speaker TTS (ZSM-TTS). Some of the
previous works, [21, 22, 23, 24], propose using an external speaker encoder
trained for speaker verification. [25, 26, 27] utilized adversarial training
to enhance unseen speaker generalization. On the other hand, normalization-
based conditioning techniques used in style transfer, [28, 29], were
introduced to condition speaker embeddings for FastSpeech-based models in [25]
and [30]. These conditioning methods first remove the instance-specific
information from the input to preserve content via speaker-normalization. The
normalized input is then scaled and shifted by the affine parameters predicted
from the target speaker embedding vector.
However, recently proposed flow-based TTS models are rather under-explored in
ZSM-TTS applications. Leveraging the aforementioned normalization-based
speaker conditioning techniques in flow-based models is especially challenging
because the flow requires the inverse operation of such normalization unlike
feed-forward models.
In this letter, we propose a speaker-normalized affine coupling (SNAC) layer
for flow-based TTS models in the ZSM-TTS scenario. The proposed method
explicitly normalizes the input by the speaker-dependent parameters in order
to preserve speaker-independent information at training, while target
speaker’s information is inversely injected at inference through
denormalization. We compare the proposed conditioning method to the existing
method in several different experimental settings using VITS [7] as our base
model and demonstrate that the proposed method outperforms the conventional
technique in both subjective and objective measures. The audio samples are
available on the demo page111https://byoungjinchoi.github.io/snac/.
## II Affine coupling-based generative flow
Normalizing flow models, [31, 32, 33], learn an invertible mapping between a
prior distribution, $p_{\theta}(z)$, and a more complex data distribution
$p_{\theta}(x)$ using a sequence of bijective functions. The log-likelihood
computation is tractable via the rule of change-of-variables. Let
$f_{\theta}:\mathbb{R}^{D}\rightarrow{}\mathbb{R}^{D}$ be a bijective function
which maps the observed data $x$ to the latent variable $z$ from a simple
prior distribution $p_{\theta}(z)$ where $x,z\in\mathbb{R}^{D}$. Then the log-
likelihood is obtained by
$\log p_{\theta}(x)=\log
p_{\theta}(z)+\log\left|\text{det}\dfrac{{\partial}f_{\theta}(x)}{{\partial}x}\right|_{\textstyle.}$
(1)
Computing the log determinant of the Jacobian matrix in (1) is computationally
expensive in general. In addition, $f_{\theta}$ is strictly restricted to be a
bijective function, and only certain types of transformations can be
easily inverted. An affine coupling layer, first introduced in [31], allows
for an efficient computation of the log determinant with invertible
transformations by generating an output $y\in\mathbb{R}^{D}$ given an input
$x\in\mathbb{R}^{D}$ and $d<D$ via
$\displaystyle\begin{gathered}y_{1:d}=x_{1:d}\\\
y_{d+1:D}=x_{d+1:D}\,{\odot}\,\exp({s_{\theta}(x_{1:d})})+b_{\theta}(x_{1:d})\end{gathered}$
(4)
where $s_{\theta}$ and $b_{\theta}$ are parameterized scale and bias functions
mapping $\mathbb{R}^{d}\rightarrow{}\mathbb{R}^{D-d}$, and $\odot$ is an
element-wise product. With this coupling architecture, the Jacobian becomes a
lower triangular matrix as given by
$\displaystyle\frac{\partial y}{\partial x}=\begin{bmatrix}\mathbb{I}_{d}&0\\\
\frac{\partial y_{d+1:D}}{\partial
x_{1:d}}&\text{diag}(\exp({s_{\theta}(x_{1:d})}))\end{bmatrix}$ (5)
where $\mathbb{I}_{d}$ represents a $d\times d$ identity matrix. Computing the
determinant of Jacobian matrix of the affine coupling does not depend on the
Jacobians of $s_{\theta}$ and $b_{\theta}$. Therefore, they can be any type of
complex functions modeled by highly expressive neural networks, such as non-
causal Wavenet [34]. The inverse transformation of the coupling layer can be
easily derived as
$\displaystyle\begin{gathered}x_{1:d}=y_{1:d}\\\
x_{d+1:D}=\dfrac{y_{d+1:D}-b_{\theta}(y_{1:d})}{\exp({s_{\theta}(y_{1:d})})}\end{gathered}_{\textstyle,}$
(8)
hence sampling is also efficient. Each coupling layer is then followed by a
layer which permutes the ordering of the channels along the feature dimension.
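The coupling transform and its log-determinant can be sketched in a few lines of numpy. Here `s_fn` and `b_fn` are arbitrary fixed stand-ins for the neural networks $s_{\theta}$ and $b_{\theta}$; invertibility holds for any choice because the inverse only re-evaluates them on the untouched half $y_{1:d}=x_{1:d}$:

```python
import numpy as np

# Minimal sketch of the affine coupling transform of Eqs. (4) and (8).
def s_fn(x1):
    return np.tanh(x1.sum()) * np.ones(2)  # scale for the D-d = 2 output dims

def b_fn(x1):
    return 0.5 * x1  # bias for the D-d = 2 output dims (d = D-d = 2 here)

def forward(x, d):
    y = x.copy()
    y[d:] = x[d:] * np.exp(s_fn(x[:d])) + b_fn(x[:d])
    # log|det J| is just the sum of the scale outputs, since the Jacobian
    # in Eq. (5) is lower-triangular with diag(exp(s)) in the bottom block.
    logdet = s_fn(x[:d]).sum()
    return y, logdet

def inverse(y, d):
    x = y.copy()
    x[d:] = (y[d:] - b_fn(y[:d])) / np.exp(s_fn(y[:d]))
    return x

x = np.array([0.3, -1.2, 0.7, 2.0])
y, logdet = forward(x, d=2)
x_rec = inverse(y, d=2)  # recovers x exactly, since y[:d] == x[:d]
```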
## III Speaker-normalized affine coupling layer for ZSM-TTS
A conditional generative flow [35, 36] models a conditional probability
distribution $p_{\theta}(x|g)$ where $g$ represents a conditioning term.
Conventionally, a conditional flow extends the forward and inverse
transformations of an affine coupling layer given in (4) and (8) by modifying
$s_{\theta}$ and $b_{\theta}$ such that they take $g$ as an additional input.
For ZSM-TTS, the condition $g$ usually represents a specific speaker embedding
vector. Our strategy for ZSM-TTS is to convert the speaker-dependent data
distribution to a latent prior distribution which becomes speaker-independent.
Then, when synthesizing speech, the speaker-independent latent prior
distribution is mapped back to a speaker-specific data distribution depending
on the given speaker embedding. In order to achieve this, we design each
affine coupling layer to remove the information related to $g$ at the forward
transformation. Contrarily, $g$ is injected to the input embedding sequence at
the inverse transformation. To obtain such bijective transformation with
explicit $g$ conditioning, we propose a speaker-normalized affine coupling
(SNAC) layer, which normalizes and denormalizes the input embedding sequence
by the mean and standard deviation parameters predicted by $g$. Speaker
normalization ($SN$) and speaker denormalization ($SDN$) in SNAC are performed
as follows:
$\displaystyle\begin{gathered}SN(x;g)=\dfrac{x-m_{\theta}(g)}{\exp{(v_{\theta}(g))}}\\\
SDN(x;g)=x\,{\odot}\,\exp{(v_{\theta}(g))}+m_{\theta}(g)\end{gathered}$ (11)
where $m_{\theta}$ and $v_{\theta}$ are simple linear projections to obtain
the mean and standard deviation parameters from $g$. $SN$ and $SDN$ are
applied across the temporal axis, thus normalizing and denormalizing each
frame of the input $x$ with the same mean and standard deviation parameters.
The forward transformation of the SNAC layer is now given by
$\displaystyle\begin{gathered}y_{1:d}=x_{1:d}\\\
y_{d+1:D}=SN(x_{d+1:D};g)\,{\odot}\,\exp({s_{\theta}(SN(x_{1:d};g))})\\\
+b_{\theta}(SN(x_{1:d};g))\end{gathered}_{\textstyle.}$ (15)
The inverse transformation can be derived straightforwardly as follows:
$\displaystyle\begin{gathered}x_{1:d}=y_{1:d}\\\
x_{d+1:D}=SDN\left(\dfrac{y_{d+1:D}-b_{\theta}(SN(y_{1:d};g))}{\exp({s_{\theta}(SN(y_{1:d};g))})};g\right)\\\
\end{gathered}_{\textstyle.}$ (19)
At each SNAC layer, $SN$ is applied to the input of $s_{\theta}$ and
$b_{\theta}$ such that the affine parameters contain the information unrelated
to the speaker. Since $x_{d+1:D}$ is also speaker-normalized in the forward
transformation, this results in the extensive removal of speaker information
at training. When inferring $x_{d+1:D}$ through the inverse transformation,
$SDN$ is enforced after the affine transformation is applied to $y_{d+1:D}$ to
appropriately inject information related to the target speaker.
The log-determinant of the conditional flow with SNAC layers can be obtained
by
$\displaystyle\log\left|\text{det}\dfrac{{\partial}f_{\theta}(x)}{{\partial}x}\right|=\sum_{j}\log\dfrac{\exp({s_{\theta}(SN(x_{1:d};g))_{j}})}{\exp{(v_{\theta}(g)_{j})}}_{\textstyle.}$
(20)
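As a sanity check, the $SN$/$SDN$ pair and the SNAC forward/inverse transformations of (11), (15), and (19) can be sketched with scalar stand-ins for the speaker projections $m_{\theta}(g)$ and $v_{\theta}(g)$ and arbitrary fixed functions for $s_{\theta}$ and $b_{\theta}$:

```python
import numpy as np

# Sketch of the SNAC layer; m and v stand in for m_theta(g) and v_theta(g).
m, v = 0.4, -0.3                     # speaker mean / log-std predicted from g
def SN(x):  return (x - m) / np.exp(v)   # speaker normalization, Eq. (11)
def SDN(x): return x * np.exp(v) + m     # speaker denormalization, Eq. (11)

def s_fn(x1): return 0.1 * x1        # stand-in for s_theta
def b_fn(x1): return x1 ** 2         # stand-in for b_theta

def snac_forward(x, d):              # Eq. (15)
    y = x.copy()
    y[d:] = SN(x[d:]) * np.exp(s_fn(SN(x[:d]))) + b_fn(SN(x[:d]))
    return y

def snac_inverse(y, d):              # Eq. (19)
    x = y.copy()
    x[d:] = SDN((y[d:] - b_fn(SN(y[:d]))) / np.exp(s_fn(SN(y[:d]))))
    return x

x = np.array([0.2, -0.5, 1.1, 0.8])
x_rec = snac_inverse(snac_forward(x, d=2), d=2)  # matches x
```

Because the forward pass normalizes both coupling halves by the speaker parameters while the inverse denormalizes only after the affine step, swapping in a different $(m, v)$ at inference injects a new speaker's statistics, which is exactly the zero-shot mechanism described above.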
The complete architecture of the SNAC layer is presented in Fig. 1.
(a) Forward transformation
(b) Inverse transformation
Figure 1: (a) and (b) each show the forward and inverse transformations of
SNAC layer.
## IV Experiments
### IV-A Model
We performed experiments on the proposed method by replacing the affine
coupling layer of the flow module in VITS with the SNAC layer.
#### IV-A1 VITS overview
VITS leverages the variational autoencoder (VAE) [37] formulation and the
adversarial training strategy to successfully combine the joint training of
acoustic feature generation, vocoding, and duration prediction in an end-to-
end manner. The objective is to maximize the variational lower bound of a
conditional log-likelihood
$\log p_{\theta}(o|l)\geq E_{q_{\phi}(z|o)}\left[\log
p_{\theta}(o|z)-\log\dfrac{q_{\phi}(z|o)}{p_{\theta}(z|l)}\right]$ (21)
where $p_{\theta}(o|z)$, $q_{\phi}(z|o)$ and $p_{\theta}(z|l)$ respectively
denote the likelihood, the approximate posterior and the conditional prior
distributions. A target speech and the corresponding phoneme sequence are
denoted as $o$ and $l=[l_{text},A]$, and $z$ is a frame-level latent sequence
representing the intermediate acoustic features. The alignment $A$ is
estimated using the monotonic alignment search (MAS) algorithm proposed in
[10]. The generator part of the VITS architecture consists of a posterior encoder,
prior encoder, decoder, and duration predictor and is trained with a
discriminator in an adversarial manner. The prior encoder is composed of two
parts: a text encoder and a flow module. The flow module plays an essential
role in transforming a simple text-conditional distribution to a more complex
distribution.
#### IV-A2 Multi-speaker VITS
In a multi-speaker setting, the likelihood $p_{\theta}(o|l)$ is substituted
with $p_{\theta}(o|l,g)$ where $g$ represents a speaker embedding. For
training, the original work uses a speaker label as an additional input which
is transformed to a fixed-dimensional vector $g$ via a learnable embedding
table. $g$ is then conditioned to every module of the generator.
### IV-B Datasets
All tested models were trained on VCTK [38] dataset. VCTK is a multi-speaker
audio dataset which contains approximately 44 hours of speech recorded by 109
speakers. We selected 11 speakers as an in-domain test set following [24]. To
evaluate the performance on the out-of-domain dataset, we randomly selected 20
speakers from LibriTTS [39] test-clean dataset, which consists of 8.56 hours
of audio from 39 speakers. Each utterance was downsampled to 22050 Hz for
training.
### IV-C Implementation details
Our proposed method modifies the official implementation of
VITS222https://github.com/jaywalnut310/vits. For the partitioning scheme of the
affine coupling layers in the flow module, we chose the channel-wise masking
pattern [32]. To ensure that all input entries are processed, we reverse the
ordering of the feature dimension at each layer of the flow module. We employ a reference
encoder to extract the speaker embedding vector. The reference encoder is
composed of a stack of 2-D convolutional layers and a gated recurrent unit
(GRU) [40], following global style token (GST) [41]. The reference encoder
takes a sequence of linear spectrograms of the reference audio as an input and
outputs a 256-dimensional embedding vector. Two separate linear projection
layers, $m_{\theta}$ and $v_{\theta}$, are employed to predict the mean and
standard deviation parameters of the speaker in (11) from the reference
embedding vector.
Model | VCTK MOS($\uparrow$) | VCTK SMOS($\uparrow$) | VCTK SECS($\uparrow$) | LibriTTS MOS($\uparrow$) | LibriTTS SMOS($\uparrow$) | LibriTTS SECS($\uparrow$)
---|---|---|---|---|---|---
Ground Truth | 4.76$\pm$0.02 | 4.19$\pm$0.04 | 0.748 | 4.80$\pm$0.02 | 4.51$\pm$0.03 | 0.646
Meta-StyleSpeech | 2.06$\pm$0.04 | 2.62$\pm$0.05 | 0.212 | 2.00$\pm$0.03 | 2.50$\pm$0.04 | 0.131
YourTTS | 4.42$\pm$0.03 | 3.86$\pm$0.04 | 0.447 | 4.23$\pm$0.03 | 3.35$\pm$0.04 | 0.317
Baseline+REF+ALL | 4.22$\pm$0.04 | 4.11$\pm$0.04 | 0.350 | 4.30$\pm$0.03 | 3.67$\pm$0.04 | 0.143
Baseline+REF+FLOW | 4.08$\pm$0.04 | 4.01$\pm$0.04 | 0.339 | 3.98$\pm$0.04 | 3.64$\pm$0.04 | 0.135
Baseline+PRE-TRAINED+FLOW | 4.38$\pm$0.03 | 3.52$\pm$0.04 | 0.321 | 4.17$\pm$0.03 | 2.91$\pm$0.05 | 0.135
Proposed+REF+ALL | 4.30$\pm$0.03 | 4.07$\pm$0.04 | 0.320 | 4.11$\pm$0.03 | 3.56$\pm$0.04 | 0.145
Proposed+REF+FLOW | 4.48$\pm$0.03 | 4.19$\pm$0.04 | 0.352 | 4.41$\pm$0.03 | 3.70$\pm$0.04 | 0.151
Proposed+PRE-TRAINED+FLOW | 4.46$\pm$0.03 | 3.61$\pm$0.04 | 0.270 | 4.40$\pm$0.03 | 3.18$\pm$0.04 | 0.116
TABLE I: MOS, SMOS, and SECS on unseen speakers of VCTK and LibriTTS
### IV-D Experimental setup
We evaluated our method in several different settings. We built our first
baseline, Baseline+REF+ALL, by attaching a reference encoder to the vanilla
multi-speaker VITS model, which applied the speaker conditioning at every
module of the generator. The second baseline, Baseline+REF+FLOW, conditioned
the speaker embedding only to the duration predictor and the flow module to
focus on the effect of the proposed method. The last baseline, Baseline+PRE-
TRAINED+FLOW, substituted the reference encoder with a pre-trained speaker
encoder which was trained from a speaker verification for a different speaker
embedding scenario. We used a H/ASP model [42] which can be obtained from an
open-source project333https://github.com/clovaai/voxceleb_trainer. In this
baseline, the pre-trained speaker encoder weights were fixed to consistently
draw speaker embedding vectors from a learned speaker embedding space. For the
above three baselines, the speaker embedding vector was used as a conditional
input to produce $s_{\theta}$ and $b_{\theta}$ at affine coupling layers of
the flow module.
To demonstrate the effect of the proposed method for each setting, we replaced
the conventional affine coupling layers with the SNAC layers for the above
three baselines. We name the three proposed models corresponding to each
baseline as follows: Proposed+REF+ALL, Proposed+REF+FLOW, Proposed+PRE-
TRAINED+FLOW.
Furthermore, we compared our models with two other baseline models: Meta-
StyleSpeech [25] and YourTTS [24]. Meta-StyleSpeech is trained with a meta-
learning scheme with a modified structure of FastSpeech2 [9]. YourTTS is built
on VITS architecture with an external speaker encoder and an additional
speaker consistency loss. We used an open-source
implementation444https://github.com/keonlee9420/StyleSpeech for Meta-
StyleSpeech and followed the paper to implement YourTTS on the official VITS
code.
### IV-E Evaluation method
We first conducted subjective tests to measure the overall speech quality
using mean opinion score (MOS). To assess the effectiveness of the proposed
speaker conditioning method, we also measured similarity mean opinion score
(SMOS). SMOS is employed to evaluate how similar the synthesized samples are
to the reference speech samples in terms of speaker characteristic. Both MOS
and SMOS are 5-scale scores ranging from 1 to 5 and reported with the 95%
confidence interval. For in-domain evaluations, we drew 3 pairs of text and
reference audio randomly from each of the 11 unseen speakers of VCTK test
dataset. To evaluate on the out-of-domain case, 2 pairs of text and reference
audio from 20 randomly selected speakers were drawn from the LibriTTS test-
clean dataset. 15 judges participated in the subjective tests with both VCTK
and LibriTTS unseen speakers.
In addition, we also evaluated the objective score for speaker similarity
between the generated samples and the ground truth samples using speaker
embedding cosine similarity (SECS). The SECS ranges from -1 to 1, where 1
indicates both samples are from the same speaker. We computed SECS using a
pre-trained speaker encoder model provided by SpeechBrain
toolkit555https://github.com/speechbrain/speechbrain [43]. The results of MOS,
SMOS, and SECS are presented in Table I.
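SECS reduces to the cosine similarity between two speaker embedding vectors. The following generic sketch illustrates its range; it is not the SpeechBrain implementation used for the reported scores:

```python
import numpy as np

# Cosine similarity between two speaker embeddings (the quantity behind SECS).
def secs(e1: np.ndarray, e2: np.ndarray) -> float:
    return float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))

a = np.array([1.0, 0.0, 1.0])
same = secs(a, a)       # close to 1: embeddings from the same speaker
opposite = secs(a, -a)  # close to -1: lower bound of the SECS range
```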
### IV-F Results
The MOS and SMOS results shown in Table I indicate that Proposed+REF+FLOW
consistently shows superior performance over the baseline models in terms of
sample quality and speaker similarity.
In Proposed+REF+ALL, the SNAC-based flow module enforces the explicit removal
of speaker information at the forward transformation while the speaker
information is conversely injected to the generator modules. Training this
model results in neutralizing the effect of the SNAC layer, and this accounts
for the MOS and SMOS drop from Baseline+REF+ALL to Proposed+REF+ALL in
LibriTTS dataset. On the contrary, the best performance for subjective tests
is consistently achieved by Proposed+REF+FLOW in both VCTK and LibriTTS
datasets. Nonetheless, YourTTS shows the highest SECS scores among all models,
since YourTTS is trained with a speaker consistency loss that directly maximizes the speaker embedding cosine similarity.
From the synthesized samples, we have noticed that the models using pre-
trained speaker encoders occasionally produce the voice of different speakers.
This phenomenon is reflected in the lower SMOS of Baseline+PRE-TRAINED+FLOW
and Proposed+PRE-TRAINED+FLOW. This shows that the joint training of a
reference encoder may be more suitable for ZSM-TTS task than using a pre-
trained speaker encoder in terms of speaker stability. However, this does not
affect the MOS as much, since the generated samples maintain consistent
quality.
Although Proposed+REF+FLOW outperforms the baseline models in both in-domain
and out-of-domain datasets, the overall performance drop between the two
settings still exists in terms of speaker similarity. Since LibriTTS dataset
inherently includes various types of channel conditions which may interfere
with the accurate inference of speaker embedding while VCTK contains only
clean speech data, we conjecture such domain mismatch accounts for the
performance drop.
## V Conclusion
We have proposed a novel speaker conditioning method for flow-based multi-
speaker TTS. The experimental results show that the proposed method
outperforms the conventional conditioning technique in a ZSM-TTS setting and
achieves the best performance in subjective tests compared to the other
baseline models. As future work, we intend to incorporate locally-varying features related to prosody and accents.
## Acknowledgment
This work was supported by Samsung Research, Samsung Electronics Co., Ltd.
## References
* [1] Y. Wang et al., “Tacotron: Towards end-to-end speech synthesis,” in Proc. Interspeech 2017, pp. 4006–4010, 2017.
* [2] J. Shen et al., “Natural tts synthesis by conditioning wavenet on mel spectrogram predictions,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4779–4783, IEEE, 2018.
* [3] H. Tachibana, K. Uenoyama, and S. Aihara, “Efficiently trainable text-to-speech system based on deep convolutional networks with guided attention,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4784–4788, IEEE, 2018.
* [4] S. Ö. Arık et al., “Deep voice: Real-time neural text-to-speech,” in Proc. International Conference on Machine Learning, pp. 195–204, PMLR, 2017.
* [5] W. Ping et al., “Deep voice 3: Scaling text-to-speech with convolutional sequence learning,” arXiv preprint arXiv:1710.07654, 2017.
* [6] N. Li, S. Liu, Y. Liu, S. Zhao, and M. Liu, “Neural speech synthesis with transformer network,” in Proc. AAAI Conference on Artificial Intelligence, vol. 33, pp. 6706–6713, 2019.
* [7] J. Kim, J. Kong, and J. Son, “Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech,” in Proc. International Conference on Machine Learning, pp. 5530–5540, PMLR, 2021.
* [8] Y. Ren et al., “Fastspeech: Fast, robust and controllable text to speech,” Proc. Advances in Neural Information Processing Systems, vol. 32, 2019.
* [9] Y. Ren, C. Hu, X. Tan, T. Qin, S. Zhao, Z. Zhao, and T.-Y. Liu, “Fastspeech 2: Fast and high-quality end-to-end text to speech,” arXiv preprint arXiv:2006.04558, 2020.
* [10] J. Kim, S. Kim, J. Kong, and S. Yoon, “Glow-tts: A generative flow for text-to-speech via monotonic alignment search,” Proc. Advances in Neural Information Processing Systems, vol. 33, pp. 8067–8077, 2020.
* [11] J. Donahue, S. Dieleman, M. Bińkowski, E. Elsen, and K. Simonyan, “End-to-end adversarial text-to-speech,” arXiv preprint arXiv:2006.03575, 2020.
* [12] T. Yoshimura, K. Tokuda, T. Masuko, T. Kobayashi, and T. Kitamura, “Simultaneous modeling of spectrum, pitch and duration in hmm-based speech synthesis,” in Proc. Sixth European Conference on Speech Communication and Technology, 1999.
* [13] K. Tokuda, T. Kobayashi, and S. Imai, “Speech parameter generation from hmm using dynamic features,” in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 1, pp. 660–663, IEEE, 1995.
* [14] T. Masuko, K. Tokuda, T. Kobayashi, and S. Imai, “Speech synthesis using hmms with dynamic features,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 1, pp. 389–392, IEEE, 1996.
* [15] T. Masuko, K. Tokuda, T. Kobayashi, and S. Imai, “Voice characteristics conversion for hmm-based speech synthesis system,” in Proc. IEEE international conference on acoustics, speech, and signal processing (ICASSP), vol. 3, pp. 1611–1614, IEEE, 1997.
* [16] M. Tamura, T. Masuko, K. Tokuda, and T. Kobayashi, “Speaker adaptation for hmm-based speech synthesis system using mllr,” in the third ESCA/COCOSDA Workshop (ETRW) on Speech Synthesis, 1998.
* [17] M. Tamura, T. Masuko, K. Tokuda, and T. Kobayashi, “Adaptation of pitch and spectrum for hmm-based speech synthesis using mllr,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 2, pp. 805–808, IEEE, 2001.
* [18] O. Siohan, T. A. Myrvoll, and C.-H. Lee, “Structural maximum a posteriori linear regression for fast hmm adaptation,” Computer Speech & Language, vol. 16, no. 1, pp. 5–24, 2002.
* [19] S. Arik, J. Chen, K. Peng, W. Ping, and Y. Zhou, “Neural voice cloning with a few samples,” Proc. Advances in Neural Information Processing Systems, vol. 31, 2018.
* [20] M. Chen et al., “Adaspeech: Adaptive text to speech for custom voice,” arXiv preprint arXiv:2103.00993, 2021.
* [21] Y. Jia et al., “Transfer learning from speaker verification to multispeaker text-to-speech synthesis,” Proc. Advances in Neural Information Processing Systems, vol. 31, 2018.
* [22] E. Cooper et al., “Zero-shot multi-speaker text-to-speech with state-of-the-art neural speaker embeddings,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6184–6188, IEEE, 2020.
* [23] E. Casanova et al., “Sc-glowtts: an efficient zero-shot multi-speaker text-to-speech model,” arXiv preprint arXiv:2104.05557, 2021.
* [24] E. Casanova et al., “Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conversion for everyone,” in Proc. International Conference on Machine Learning, pp. 2709–2720, PMLR, 2022.
* [25] D. Min, D. B. Lee, E. Yang, and S. J. Hwang, “Meta-stylespeech: Multi-speaker adaptive text-to-speech generation,” in Proc. International Conference on Machine Learning, pp. 7748–7759, PMLR, 2021.
* [26] S.-H. Lee, H.-W. Yoon, H.-R. Noh, J.-H. Kim, and S.-W. Lee, “Multi-spectrogan: High-diversity and high-fidelity spectrogram generation with adversarial style combination for speech synthesis,” in Proc. AAAI Conference on Artificial Intelligence, vol. 35, pp. 13198–13206, 2021.
* [27] B. J. Choi, M. Jeong, M. Kim, S. H. Mun, and N. S. Kim, “Adversarial speaker-consistency learning using untranscribed speech data for zero-shot multi-speaker text-to-speech,” arXiv preprint arXiv:2210.05979, 2022.
* [28] D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv preprint arXiv:1607.08022, 2016.
* [29] X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in Proc. IEEE International Conference on Computer Vision, pp. 1501–1510, 2017.
* [30] N. Kumar, S. Goel, A. Narang, and B. Lall, “Normalization driven zero-shot multi-speaker speech synthesis.,” in Proc. Interspeech 2021, pp. 1354–1358, 2021.
* [31] L. Dinh, D. Krueger, and Y. Bengio, “Nice: Non-linear independent components estimation,” arXiv preprint arXiv:1410.8516, 2014.
* [32] L. Dinh, J. Sohl-Dickstein, and S. Bengio, “Density estimation using real nvp,” arXiv preprint arXiv:1605.08803, 2016.
* [33] D. P. Kingma and P. Dhariwal, “Glow: Generative flow with invertible 1x1 convolutions,” Proc. Advances in Neural Information Processing Systems, vol. 31, 2018.
* [34] A. v. d. Oord et al., “Wavenet: A generative model for raw audio,” arXiv preprint arXiv:1609.03499, 2016.
* [35] A. Atanov, A. Volokhova, A. Ashukha, I. Sosnovik, and D. Vetrov, “Semi-conditional normalizing flows for semi-supervised learning,” arXiv preprint arXiv:1905.00505, 2019.
* [36] J. Serrà, S. Pascual, and C. Segura Perales, “Blow: a single-scale hyperconditioned flow for non-parallel raw-audio voice conversion,” Proc. Advances in Neural Information Processing Systems, vol. 32, 2019.
* [37] D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv preprint arXiv:1312.6114, 2013.
* [38] J. Yamagishi, C. Veaux, and K. MacDonald, “Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning toolkit (version 0.92),” 2019. University of Edinburgh. The Centre for Speech Technology Research (CSTR). https://doi.org/10.7488/ds/2645.
* [39] H. Zen et al., “Libritts: A corpus derived from librispeech for text-to-speech,” Proc. Interspeech 2019, pp. 1526–1530, 2019.
* [40] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling,” arXiv preprint arXiv:1412.3555, 2014.
* [41] Y. Wang et al., “Style tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis,” in Proc. International Conference on Machine Learning, pp. 5180–5189, PMLR, 2018.
* [42] H. S. Heo, B.-J. Lee, J. Huh, and J. S. Chung, “Clova baseline system for the voxceleb speaker recognition challenge 2020,” arXiv preprint arXiv:2009.14153, 2020.
* [43] M. Ravanelli et al., “Speechbrain: A general-purpose speech toolkit,” arXiv preprint arXiv:2106.04624, 2021.
|
On packing of Minkowski balls. I
Nikolaj M. Glazunov
Glushkov Institute of Cybernetics NASU, Kiev,
Institute of Mathematics and Informatics Bulgarian Academy of Sciences
Email<EMAIL_ADDRESS>
Abstract.
We investigate lattice packings of Minkowski balls. Using the results of the proof of the Minkowski conjecture about the critical determinant, we divide Minkowski balls into three classes: Minkowski balls, Davis balls and Chebyshev-Cohn balls. We investigate lattice packings of these balls on planes with varying Minkowski metric and search among these packings for the optimal ones. In this paper we prove that the optimal lattice packing of the Minkowski, Davis, and Chebyshev-Cohn balls is realized with respect to the sublattices of index two of the critical lattices of the corresponding balls.
Keywords: lattice packing, Minkowski ball, Minkowski metric, critical lattice,
optimal lattice packing.
2020 Mathematics Subject Classification: 11H06, 52C05
Thanks. The author is deeply grateful to the Bulgarian Academy of Sciences,
the Institute of Mathematics and Informatics of the Bulgarian Academy of
Sciences, Professor P. Boyvalenkov for their support. The author was supported
by Simons grant 992227.
## 1 Introduction
A system of equal balls in $n$-dimensional space is said to form a packing, if
no two balls of the system have any inner points in common.
Recently, remarkable results resolving the problem of optimal packing of balls in 8- and 24-dimensional real Euclidean space have been obtained [1, 2]. In this research, recognized by a Fields Medal, the optimal packings are constructed on lattices.
Our considerations are connected with the Minkowski conjecture [3, 5, 6, 7, 8, 12, 16, 17, 18] and use results of its proof. The corresponding results and conjectures are most simply stated in terms of geometric lattices and critical lattices [4, 3, 9, 10]; the latter are an important special case of geometric lattices. We investigate lattice packings of Minkowski balls, Davis balls and Chebyshev-Cohn balls on planes with varying Minkowski metric, and search among these packings for the optimal ones. The packing problem is studied on classes of lattices related to a problem in the theory of Diophantine approximation considered by H. Minkowski [3] for the case of the plane. For some other selected problems and results of the theory of Diophantine approximation see, for instance, [13] and the references therein. The naming of the balls is connected with the results of the investigation of the Minkowski conjecture on the critical determinant; its justification is given below. In this paper we prove that the optimal lattice packing of the Minkowski, Davis, and Chebyshev-Cohn balls is realized with respect to the sublattices of index two of the critical lattices of the corresponding balls.
## 2 Minkowski conjecture, its proof and Minkowski balls
Let
$|\alpha x+\beta y|^{p}+|\gamma x+\delta y|^{p}\leq
c\cdot|\alpha\delta-\beta\gamma|^{p/2},$
be a Diophantine inequality defined for a given real $p>1$; here
$\alpha,\beta,\gamma,\delta$ are real numbers with
$\alpha\delta-\beta\gamma\neq 0.$
In his monograph [3], H. Minkowski raised the question of the minimal constant
$c$ such that the inequality has an integer solution other than the origin.
With the help of his convex body theorem, Minkowski found a sufficient
condition for the solvability of the Diophantine inequality in integers not
both zero:
$c={\kappa_{p}}^{p},\quad\kappa_{p}=\frac{{\Gamma(1+\frac{2}{p})}^{1/2}}{{\Gamma(1+\frac{1}{p})}}.$
But this result is not optimal, and Minkowski also raised the question of the
best possible (non-improvable) constant $c$. For this purpose Minkowski
proposed to use the critical determinant.
Recall the definitions [9].
Let $\mathcal{R}$ be a set and $\Lambda$ be a lattice with base
$\\{a_{1},\ldots,a_{n}\\}$
in ${\mathbb{R}}^{n}.$
A lattice $\Lambda$ is admissible for a body $\mathcal{R}$
(${\mathcal{R}}$-admissible) if ${\mathcal{R}}\bigcap\Lambda=\emptyset$ or
$\\{0\\}.$
Let $\overline{\mathcal{R}}$ be the closure of ${\mathcal{R}}$. A lattice
$\Lambda$ is strictly admissible for $\overline{\mathcal{R}}$
($\overline{\mathcal{R}}$-strictly admissible) if
$\overline{\mathcal{R}}\bigcap\Lambda=\emptyset$ or $\\{0\\}.$
Let
$d(\Lambda)=|\det(a_{1},\ldots,a_{n})|$
be the determinant of $\Lambda.$
The infimum $\Delta(\mathcal{R})$ of the determinants of all lattices
admissible for $\mathcal{R}$ is called the critical determinant of
$\mathcal{R}$; if there are no $\mathcal{R}$-admissible lattices, one puts
$\Delta(\mathcal{R})=\infty.$
A lattice $\Lambda$ is critical if $d(\Lambda)=\Delta(\mathcal{R}).$
For the given 2-dimensional region $D_{p}\subset{\mathbb{R}}^{2}$ with
coordinates $(x,y)$, $p\geq 1$:
$|x|^{p}+|y|^{p}<1,$
let $\Delta(D_{p})$ be the critical determinant of the region.
The determinants of all admissible lattices for this domain that have three
pairs of points on the boundary of the domain are parametrized by the
Minkowski-Cohn moduli space of the form
$\Delta(p,\sigma)=(\tau+\sigma)(1+\tau^{p})^{-\frac{1}{p}}(1+\sigma^{p})^{-\frac{1}{p}},$
(1)
in the domain
${\mathcal{M}}:\;\infty>p>1,\;1\leq\sigma\leq\sigma_{p}=(2^{p}-1)^{\frac{1}{p}},$
of the $\\{p,\sigma\\}$-plane, where $\sigma$ is some real parameter [7, 16,
17, 18, 19].
In the notation of [18], the following result has been proved:
###### Theorem 1
[18]
$\Delta(D_{p})=\left\\{\begin{array}[]{lc}\Delta(p,1),\;1\leq p\leq 2,\;p\geq
p_{0},\\\ \Delta(p,\sigma_{p}),\;2\leq p\leq p_{0};\\\ \end{array}\right.$
here $p_{0}$ is a real number uniquely defined by the condition
$\Delta(p_{0},\sigma_{p_{0}})=\Delta(p_{0},1)$; one has
$2.57<p_{0}<2.58$, $p_{0}\approx 2.5725$.
###### Remark 1
We will call $p_{0}$ the Davis constant.
###### Corollary 1
${\kappa_{p}}={\Delta(D_{p})}^{-\frac{p}{2}}.$
From Theorem 1, in the notation of [18, 19], we deduce the following corollary:
###### Corollary 2
([18, 19])
${\Delta^{(0)}_{p}}=\Delta(p,{\sigma_{p}})=\frac{1}{2}{\sigma}_{p},\;{\sigma}_{p}=(2^{p}-1)^{1/p},$
${\Delta^{(1)}_{p}}=\Delta(p,1)=4^{-\frac{1}{p}}\frac{1+\tau_{p}}{1-\tau_{p}},\;2(1-\tau_{p})^{p}=1+\tau_{p}^{p},\;0\leq\tau_{p}<1.$
Their critical lattices $\Lambda_{p}^{(0)}$ and $\Lambda_{p}^{(1)}$ satisfy
the following conditions:
$\Lambda_{p}^{(0)}$ and $\Lambda_{p}^{(1)}$ are two $D_{p}$-admissible
lattices each of which contains three pairs of points on the boundary of
$D_{p}$ with the property that
$(1,0)\in\Lambda_{p}^{(0)},\;(-2^{-1/p},2^{-1/p})\in\Lambda_{p}^{(1)},$
(under these conditions the lattices are uniquely defined).
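As an illustrative numerical check, not part of the original development, the Davis constant $p_{0}$ of Theorem 1 can be located by bisecting the difference of the two branch formulas of Corollary 2, which by Theorem 1 changes sign at $p_{0}$; all helper names below are ours.

```python
def tau_p(p, tol=1e-12):
    """Solve 2(1 - t)^p = 1 + t^p for t in [0, 1) by bisection.

    The left-hand side minus the right-hand side is decreasing in t,
    positive at t = 0 and negative at t = 1, so bisection applies."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 2 * (1 - mid) ** p > 1 + mid ** p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def delta0(p):
    # Delta(p, sigma_p) = sigma_p / 2 with sigma_p = (2^p - 1)^(1/p)
    return 0.5 * (2 ** p - 1) ** (1 / p)

def delta1(p):
    # Delta(p, 1) = 4^(-1/p) (1 + tau_p) / (1 - tau_p)
    t = tau_p(p)
    return 4 ** (-1 / p) * (1 + t) / (1 - t)

def davis_constant(tol=1e-10):
    """The two branches coincide at p = 2 and cross again at p = p0;
    by Theorem 1, delta0 - delta1 is negative on (2, p0) and positive
    for p > p0, so we bisect on [2.1, 3]."""
    lo, hi = 2.1, 3.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if delta0(mid) < delta1(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At $p=2$ both branches return $\sqrt{3}/2$, the critical determinant of the Euclidean disc, and the bisection reproduces $p_{0}\approx 2.5725$.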
## 3 Minkowski balls and the density of the packing of $2$-dimensional
Minkowski balls
Let $p_{0}\in{\mathbb{R}}$ be the Davis constant, $2.57<p_{0}<2.58$.
We consider balls of the form
$D_{p}:\;|x|^{p}+|y|^{p}\leq 1,\;p\geq 1,$
and call the balls satisfying
$|x|^{p}+|y|^{p}\leq 1,\;2>p>1,$
the Minkowski balls in two dimensions, and correspondingly the circles
satisfying
$|x|^{p}+|y|^{p}=1,\;2>p>1,$
the Minkowski circles in two dimensions.
Limiting Minkowski circle in two dimensions: $|x|+|y|=1$.
Davis balls in two dimensions: $|x|^{p}+|y|^{p}\leq 1$ for $p_{0}>p\geq 2$.
Davis circles in two dimensions: $|x|^{p}+|y|^{p}=1$ for $p_{0}>p\geq 2$.
Chebyshev-Cohn balls in two dimensions: $|x|^{p}+|y|^{p}\leq 1$ for $p\geq
p_{0}$.
Chebyshev-Cohn circles in two dimensions: $|x|^{p}+|y|^{p}=1$ for $p\geq p_{0}$.
Limiting Chebyshev-Cohn ball in two dimensions: $\max(|x|,|y|)\leq 1$, the
unit ball of the norm $||x,y||_{\infty}=\max(|x|,|y|)$.
Recall the definition of a packing lattice [1, 2, 9, 10]. We will give it for
$n$-dimensional Minkowski balls $D_{p}^{n}$ in ${\mathbb{R}}^{n}$.
###### Definition 1
Let $\Lambda$ be a full lattice in ${\mathbb{R}}^{n}$ and
$a\in{\mathbb{R}}^{n}$. If no two of the balls
$\\{D_{p}^{n}+b,b\in\Lambda+a\\}$ have inner points in common, the collection
of balls $\\{D_{p}^{n}+b,b\in\Lambda+a\\}$ is called a
$(D_{p}^{n},\Lambda)$-packing, and $\Lambda$ is called a packing lattice of
$D_{p}^{n}$.
Recall also that if $\alpha\in{\mathbb{R}}$ and $D_{p}^{n}$ is a ball, then
$\alpha D_{p}^{n}$ is the set of points $\alpha x,x\in D_{p}^{n}$.
In some cases we will consider the interiors of balls $D_{p}=D_{p}^{2}$ (open
balls), which we denote by $ID_{p}$.
From the considerations of Minkowski and other authors [3, 4, 9, 10, 19], the
following statements can be deduced (for the sake of completeness, we present
the proof of Proposition 1).
Denote by $V(D_{p})$ the volume (area) of $D_{p}$.
###### Proposition 1
[9, 10]. A lattice $\Lambda$ is a packing lattice of $D_{p}$ if and only if it
is an admissible lattice for $2D_{p}$.
Proof (by contradiction). First, note that one can take the open ball $ID_{p}$
and use the notion of strict admissibility. Suppose that $\Lambda$ is not
strictly admissible for $2ID_{p}$. Then $2ID_{p}$ contains a point $a\neq 0$
of $\Lambda$. Then the two balls $ID_{p}$ and $ID_{p}+a$ contain the point
$\frac{1}{2}a$ in common. So $\Lambda$ is not a packing lattice of $D_{p}$.
Suppose now that $\Lambda$ is not a packing lattice of $D_{p}$. Then there
exist two distinct points $b_{1},b_{2}\in\Lambda$ and a point $c$ such that
$c\in ID_{p}+b_{1}$ and $c\in ID_{p}+b_{2}$. Hence there are points
$a_{1},a_{2}\in ID_{p}$ such that $c=a_{1}+b_{1}=a_{2}+b_{2}$. So
$b_{1}-b_{2}=a_{2}-a_{1}\in ID_{p}$, whereas $b_{1}-b_{2}\neq 0$ and
$b_{1}-b_{2}\in\Lambda$. Therefore $\Lambda$ is not a (strictly) admissible
lattice.
###### Proposition 2
[9, 10]. The density of a $(D_{p},\Lambda)$-packing is equal to
$V(D_{p})/d(\Lambda)$, and it is maximal if $\Lambda$ is critical for $2D_{p}$.
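As a numerical illustration of Proposition 2 (helper names ours), for $p=2$ the density formula $V(D_{p})/d(\Lambda)$, with $\Lambda$ critical for $2D_{p}$ so that $d(\Lambda)=\Delta(2D_{p})=4\Delta(D_{p})$, reproduces the classical hexagonal circle packing density $\pi/\sqrt{12}\approx 0.9069$:

```python
import math

def volume_Dp(p):
    """Area of the planar ball |x|^p + |y|^p <= 1:
    V(D_p) = 4 * Gamma(1 + 1/p)^2 / Gamma(1 + 2/p)."""
    return 4.0 * math.gamma(1 + 1 / p) ** 2 / math.gamma(1 + 2 / p)

# For p = 2: V(D_2) = pi and Delta(D_2) = sqrt(3)/2, so a lattice critical
# for 2*D_2 has determinant 4 * Delta(D_2) = 2*sqrt(3), and the maximal
# packing density is pi / (2*sqrt(3)) ~ 0.9069 -- the hexagonal circle packing.
density = volume_Dp(2.0) / (4 * math.sqrt(3) / 2)
```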
## 4 On packing Minkowski balls, Davis balls and Chebyshev-Cohn balls on the
plane
Let us consider possible optimal lattice packings of these balls and their
connection with critical lattices.
First we give the lattices of the trivial optimal lattice packings for the
limiting (asymptotic) cases $p=1$ and $p=\infty$ (the latter corresponds to
the classical Chebyshev balls), and, as an introductory example, the optimal
packing of two-dimensional unit balls.
###### Proposition 3
The lattice
$\Lambda_{1}^{(1)}=\\{(\frac{1}{2},\frac{1}{2}),(0,1)\\}$
is the critical lattice for $D_{1}$. The limiting case of Minkowski balls for
$p=1$ gives the optimal lattice packing, of density $1$, with respect to the
sublattice of index two of the lattice $\Lambda_{1}^{(1)}$. The centers of the
Minkowski balls in this case are at the vertices of this sublattice.
Proof. Recall that a critical lattice for $D_{1}$ is a lattice $\Lambda$ which
is $D_{1}$-admissible and which has determinant $d(\Lambda)=\Delta(D_{1})$.
The lattice $\Lambda_{1}^{(1)}$ is $D_{1}$-admissible. We have
$\Delta(D_{1})=\frac{1}{2}$ and $d(\Lambda_{1}^{(1)})=\frac{1}{2}$. Minkowski
balls for $p=1$ are congruent squares. Hence we obtain the optimal packing of
these squares, of density $1$, with respect to the sublattice of index two of
the lattice $\Lambda_{1}^{(1)}$.
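The two facts used in this proof, $d(\Lambda_{1}^{(1)})=\frac{1}{2}$ and the $D_{1}$-admissibility of $\Lambda_{1}^{(1)}$, can be checked numerically; this sketch (ours) verifies that no nonzero lattice point with small coefficients enters the open ball $|x|+|y|<1$:

```python
import itertools

# Basis of the lattice Lambda_1^{(1)} = {(1/2, 1/2), (0, 1)} from Proposition 3.
u, v = (0.5, 0.5), (0.0, 1.0)

# Determinant d(Lambda) = |det(u, v)| = 1/2 = Delta(D_1).
det = abs(u[0] * v[1] - u[1] * v[0])

# D_1-admissibility: no nonzero lattice point a*u + b*v lies in the open
# ball |x| + |y| < 1; the minimal l1-norm over nonzero points equals 1,
# attained on the boundary of D_1.
min_norm = min(
    abs(a * u[0] + b * v[0]) + abs(a * u[1] + b * v[1])
    for a, b in itertools.product(range(-20, 21), repeat=2)
    if (a, b) != (0, 0)
)
```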
###### Proposition 4
The lattice
$\Lambda_{\infty}^{(1)}=\\{(1,1),(0,1)\\}$
is the critical lattice for $D_{\infty}$. The limiting case of Minkowski balls
for $p=\infty$ gives the optimal packing, of density $1$, with respect to the
sublattice of index two of the critical lattice $\Lambda_{\infty}^{(1)}$. The
centers of the balls in this case are at the vertices of this sublattice.
Proof. The lattice $\Lambda_{\infty}^{(1)}$ is $D_{\infty}$-admissible. We
have $\Delta(D_{\infty})=1$ and $d(\Lambda_{\infty}^{(1)})=1$. Minkowski balls
for $p=\infty$ are congruent squares. Hence we obtain the optimal packing of
these squares, of density $1$, with respect to the sublattice of index two of
the lattice $\Lambda_{\infty}^{(1)}$.
###### Proposition 5
The lattice
$\Lambda_{2}^{(0)}=\\{(1,0),(\frac{1}{2},\frac{\sqrt{3}}{2})\\}$
is the critical lattice for $D_{2}$. The lattice packing of Davis balls for
$p=2$ gives the optimal packing, of density $\approx 0.91$, with respect to
the sublattice of index two of the critical lattice $\Lambda_{2}^{(0)}$. The
centers of the balls in this case are at the vertices of this sublattice.
Proof. As in Proposition 3, a critical lattice for $D_{2}$ is a lattice
$\Lambda$ which is $D_{2}$-admissible and which has determinant
$d(\Lambda)=\Delta(D_{2})$. The lattice $\Lambda_{2}^{(0)}$ is
$D_{2}$-admissible. We have $\Delta(D_{2})=\frac{\sqrt{3}}{2}$ and
$d(\Lambda_{2}^{(0)})=\frac{\sqrt{3}}{2}$. So the sublattice of index two of
the lattice $\Lambda_{2}^{(0)}$ is a hexagonal lattice. Next, we use the
following classical result [14, 15]: the optimal sphere packing in dimension
2 is the hexagonal lattice (honeycomb) packing, with density
$\pi/\sqrt{12}\approx 0.91$.
###### Proposition 6
If $\Lambda$ is the critical lattice of the Minkowski ball $D_{p}$, then the
sublattice $\Lambda_{2}$ of index two of the critical lattice is the critical
lattice of $2D_{p}$ (examples for $p=1,2,\infty$ above).
Proof. Since the Minkowski ball $D_{p}$ is symmetric about the origin and
convex, $2D_{p}$ is also convex and symmetric about the origin [9, 10].
When parametrizing admissible lattices $\Lambda$ having three pairs of points
on the boundary of the ball $D_{p}$, the following parametrization is used [7,
17, 18, 19]:
$\Lambda=\\{((1+\tau^{p})^{-\frac{1}{p}},\tau(1+\tau^{p})^{-\frac{1}{p}}),(-(1+\sigma^{p})^{-\frac{1}{p}},\sigma(1+\sigma^{p})^{-\frac{1}{p}})\\}$
(2)
where
$0\leq\tau<\sigma,\;0\leq\tau\leq\tau_{p}.$
$\tau_{p}$ is defined by the equation
$2(1-\tau_{p})^{p}=1+\tau_{p}^{p},\;0\leq\tau_{p}<1.$
$1\leq\sigma\leq\sigma_{p},\;\sigma_{p}=(2^{p}-1)^{\frac{1}{p}}.$
Admissible lattices of the form (2) for the doubled Minkowski balls $2D_{p}$
have a representation of the form
$\Lambda_{2D_{p}}=\\{(2(1+\tau^{p})^{-\frac{1}{p}},2\tau(1+\tau^{p})^{-\frac{1}{p}}),(-2(1+\sigma^{p})^{-\frac{1}{p}},2\sigma(1+\sigma^{p})^{-\frac{1}{p}})\\}$
(3)
Hence the Minkowski-Cohn moduli space for these admissible lattices has the
form
$\Delta(p,\sigma)_{2D_{p}}=4(\tau+\sigma)(1+\tau^{p})^{-\frac{1}{p}}(1+\sigma^{p})^{-\frac{1}{p}},$
(4)
in the same domain
${\mathcal{M}}:\;\infty>p>1,\;1\leq\sigma\leq\sigma_{p}=(2^{p}-1)^{\frac{1}{p}}.$
Consequently, the critical determinants of doubled Minkowski balls have a
representation of the form
${\Delta^{(0)}_{p}}(2D_{p})=\Delta(p,{\sigma_{p}})_{2D_{p}}=2\cdot{\sigma}_{p},\;{\sigma}_{p}=(2^{p}-1)^{1/p},$
${\Delta^{(1)}_{p}}(2D_{p})=\Delta(p,1)_{2D_{p}}=4^{1-\frac{1}{p}}\frac{1+\tau_{p}}{1-\tau_{p}},\;2(1-\tau_{p})^{p}=1+\tau_{p}^{p},\;0\leq\tau_{p}<1.$
And these are the determinants of the sublattices of index 2 of the critical
lattices of the corresponding Minkowski balls.
###### Theorem 2
The optimal lattice packing of the Minkowski, Davis, and Chebyshev-Cohn balls
is realized with respect to the sublattices of index two of the critical
lattices
$(1,0)\in\Lambda_{p}^{(0)},\;(-2^{-1/p},2^{-1/p})\in\Lambda_{p}^{(1)}.$
Proof. By Proposition 6, the critical lattice of $2D_{p}$ is the sublattice of
index two of the critical lattice of the Minkowski ball $D_{p}$. So it is an
admissible lattice for $2D_{p}$, and by Proposition 1 it is a packing lattice
of $D_{p}$. By Proposition 2 the corresponding lattice packing has maximal
density and is therefore optimal.
###### Remark 2
This result concerns the packing of unit balls and spheres in complete normed
(Banach) spaces of dimension 2.
## References
* [1] Viazovska M.S. The sphere packing problem in dimension 8, Ann. of Math. (2) 185, no. 3, 991–1015, (2017).
* [2] H. Cohn, A. Kumar, S. D. Miller, D. Radchenko, and M. Viazovska, The sphere packing problem in dimension 24, Ann. of Math. (2) 185, no. 3, 1017–1033, (2017).
* [3] H. Minkowski, Diophantische Approximationen, Leipzig: Teubner (1907).
* [4] H. Minkowski, Geometrie der Zahlen, Berlin–Leipzig: Teubner (1910).
* [5] L.J. Mordell, Lattice points in the region $|Ax^{4}|+|By^{4}|\geq 1,$ J. London Math. Soc. 16 , 152–156 (1941).
* [6] C. Davis, Note on a conjecture by Minkowski, J. London Math. Soc., 23, 172–175 (1948).
* [7] H. Cohn, Minkowski’s conjectures on critical lattices in the metric $\\{|\xi|^{p}+|\eta|^{p}\\}^{{1}/{p}},$ Annals of Math., 51, (2), 734–738 (1950).
* [8] G. Watson, Minkowski’s conjecture on the critical lattices of the region $|x|^{p}+|y|^{p}\leq 1\;$, (I), (II), Jour. London Math. Soc., 28, (3, 4), 305–309, 402–410 (1953).
* [9] Cassels J. W. S., An Introduction to the Geometry of Numbers, Springer, NY, 1997.
* [10] Lekkerkerker C.G., Geometry of Numbers, NorthHolland, 1969.
* [11] A. Malyshev, Application of computers to the proof of a conjecture of Minkowski’s from geometry of numbers. I, Zap. Nauchn. Semin. LOMI, 71, 163–180 (1977).
* [12] A. Malyshev, Application of computers to the proof of a conjecture of Minkowski’s from geometry of numbers. II, Zap. Nauchn. Semin. LOMI, 82, 29–32 (1979).
* [13] Andersen N., Duke W., On a theorem of Davenport and Schmidt, Acta Arithmetica 198, 37-75, 2021.
* [14] L. Fejes Tóth, Über einen geometrischen Satz, Math. Z., 46, 79–83, 1940.
* [15] A. Thue, Über die dichteste, Norske Vid. Selsk. Skr., no. 1, 1–9, 1910.
* [16] N. Glazunov, A. V. Malyshev, On Minkowski’s critical determinant conjecture, Kibernetika, No. 5, 10–14 (1985).
* [17] N. Glazunov, A. Malyshev, The proof of Minkowski‘s conjecture concerning the critical determinant of the region $|x|^{p}+|y|^{p}<1$ near $p=2$ (in Russian), Doklady Akad. Nauk Ukr.SSR ser. A, 7, 9–12 (1986).
* [18] N. Glazunov, A. Golovanov, A. Malyshev, Proof of Minkowski’s hypothesis about the critical determinant of $|x|^{p}+|y|^{p}<1$ domain, Research in Number Theory 9. Notes of scientific seminars of LOMI. 151 Leningrad: Nauka. 40–53 (1986).
* [19] Glazunov N.M. On A. V. Malyshev’s approach to Minkowski’s conjecture concerning the critical determinant of the region $|x|^{p}+|y|^{p}<1$ for $p>1$, Chebyshevskii Sb., Volume 17, Issue 4, 185–193, 2016.
|
# Robust, fast and accurate mapping of diffusional mean kurtosis
Megan E. Farquhar School of Mathematical Sciences, Faculty of Science,
Queensland University of Technology, Brisbane, Australia Qianqian Yang
School of Mathematical Sciences, Faculty of Science, Queensland University of
Technology, Brisbane, Australia Centre for Data Science, Queensland
University of Technology, Brisbane, Australia Viktor Vegh Centre for
Advanced Imaging, The University of Queensland, Brisbane, Australia ARC
Training Centre for Innovation in Biomedical Imaging Technology, Brisbane,
Australia
###### Abstract
Diffusional kurtosis imaging (DKI) is a methodology for measuring the extent
of non-Gaussian diffusion in biological tissue, which has shown great promise
in clinical diagnosis, treatment planning and monitoring of many neurological
diseases and disorders. However, robust, fast and accurate estimation of
kurtosis from clinically feasible data acquisitions remains a challenge. In
this study, we first outline a new accurate approach of estimating mean
kurtosis via the sub-diffusion mathematical framework. Crucially, this
extension of the conventional DKI overcomes the limitation on the maximum
b-value of the latter. Kurtosis and diffusivity can now be simply computed as
functions of the sub-diffusion model parameters. Second, we propose a new fast
and robust fitting procedure to estimate the sub-diffusion model parameters
using two diffusion times, without increasing the acquisition time relative to
conventional DKI. Third, our sub-diffusion based kurtosis mapping method is
evaluated using both simulations and the Connectome 1.0 human brain data.
Exquisite tissue contrast is achieved even when the diffusion encoded data is
collected in only minutes. In summary, our findings suggest robust, fast and
accurate estimation of mean kurtosis can be realised within a clinically
feasible diffusion weighted magnetic resonance imaging data acquisition time.
## 1 Introduction
Diffusion weighted magnetic resonance imaging (DW-MRI) over a period of more
than 30 years has become synonymous with tissue microstructure imaging.
Measures of how water diffuses in heterogeneous tissues allow indirect
interpretation of changes in tissue microstructure (Le Bihan and Johansen-
Berg, 2012). DW-MRI has predominantly been applied in the brain, where
properties of white matter connections between brain regions are often studied
(Lebel et al., 2019), in addition to mapping tissue microstructural properties
(Tournier, 2019). Applications outside of the brain have clinical importance
as well, and interest is growing rapidly.
Generally, DW-MRI involves the setting of diffusion weightings and direction
over which diffusion is measured. While diffusion tensor imaging (DTI) can be
performed using a single diffusion weighting, a so-called b-shell, and at
least six diffusion encoding directions (Le Bihan et al., 2001), other models
tend to require multiple b-shells each having multiple diffusion encoding
directions. Diffusional kurtosis imaging (DKI) is a primary example of a
multiple b-shell, multiple diffusion encoding direction method (Jensen et al.,
2005). DKI is considered as an extension of DTI (Jensen and Helpern, 2010;
Hansen et al., 2013; Veraart et al., 2011b), where the diffusion process is
assumed to deviate away from standard Brownian motion, and the extent of such
deviation is measured via the kurtosis metric. Essentially, the increased
sampling achieved via DKI data acquisitions allows more complex models to be
applied to data (Van Essen et al., 2013; Shafto et al., 2014), in turn
resulting in metrics of increased utility for clinical decision making.
Recent clinical benefits of using kurtosis metrics over other DW-MRI derived
measures have been demonstrated for grading hepatocellular carcinoma (Li et
al., 2022b), prognosing chronic kidney disease (Liu et al., 2021),
differentiating parotid gland tumours (Huang et al., 2021a), measuring
response to radiotherapy treatment in bone tumour (Guo et al., 2022a) and
glioblastoma (Goryawala et al., 2022), identifying tissue abnormalities in
temporal lobe epilepsy patients with sleep disorders (Guo et al., 2022b) and
brain microstructural changes in mild traumatic brain injury (Wang et al.,
2022), monitoring of renal function and interstitial fibrosis (Li et al.,
2022a), detecting the invasiveness of bladder cancer into muscle (Li et al.,
2022d), aiding management of patients with depression (Maralakunte et al.,
2022), delineating acute infarcts with prognostic value (Hu et al., 2022),
predicting breast cancer metastasis (Zhou et al., 2022), diagnosing
Parkinson’s disease (Li et al., 2022c), amongst others reported prior and not
listed here.
The routine use of DKI in the clinic has nonetheless lagged due to the inability
to robustly estimate the kurtosis metric (Veraart et al., 2011a; Tabesh et
al., 2010; Kuder et al., 2011; Henriques et al., 2021). A known requirement
for estimating kurtosis in DKI is to restrict the maximum b-value to
2000$\text{ s}/\text{mm}^{2}$–3000$\text{ s}/\text{mm}^{2}$ for brain studies
(Jensen et al., 2005; Jensen and Helpern, 2010; Poot et al., 2010), with the
optimal maximum b-value found to be dependent on tissue type (Poot et al.,
2010). This suggests that the traditional kurtosis model is less accurate at
representing the diffusion signal at large b-values. Moreover, multiple
b-shell, multiple direction high quality DW-MRI data can take many minutes to
acquire, which poses challenges for clinical imaging protocols involving a
multitude of MRI contrasts already taking tens of minutes to execute.
Reduction of DKI data acquisition times through parallel imaging, optimisation
of b-shells and directions have been investigated (Zong et al., 2021;
Heidemann et al., 2010; Zelinski et al., 2008), and DW-MRI data necessary for
DKI analysis has been shown to supersede the data required for DTI (Veraart et
al., 2011b). Therefore, an optimised DKI protocol can potentially replace
clinical DTI data acquisitions without adversely affecting the estimation of
DTI metrics.
For DKI to become a routine clinical tool, DW-MRI data acquisition needs to be
fast and to provide a robust estimation of kurtosis. The ideal protocol should
have a minimum number of b-shells and diffusion encoding directions in each
b-shell. The powder averaging over diffusion directions improves the signal-
to-noise ratio of the DW-MRI data used for parameter estimation. Whilst this
approach loses out on directionality of the kurtosis, it nonetheless provides
a robust method of estimating mean kurtosis (Henriques et al., 2021), a metric
of significant clinical value.
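Powder averaging, as referred to above, is simply the arithmetic mean of the diffusion-weighted signal over the encoding directions within each b-shell. A minimal sketch with hypothetical per-shell signals and noise level (the numbers are illustrative, not taken from the study):

```python
import numpy as np

# Hypothetical data for one voxel: shape (n_shells, n_directions).
rng = np.random.default_rng(0)
true_shell_signal = np.array([0.9, 0.6, 0.35])  # illustrative noiseless signal per b-shell
signals = true_shell_signal[:, None] + rng.normal(0, 0.05, size=(3, 64))

# Powder average: mean over diffusion encoding directions within each b-shell.
powder = signals.mean(axis=1)
print(powder)
```

Averaging 64 directions reduces the noise standard deviation by a factor of $\sqrt{64}=8$, which is the mechanism behind the SNR improvement described above, at the cost of discarding directional information.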
Instead of attempting to improve an existing model-based approach for kurtosis
estimation, as has been considered by many others, we considered the problem
from a different perspective. In view of the recent generalisation of the
various models applicable to DW-MRI data (Yang et al., 2022), the sub-diffusion
framework provides new, unexplored opportunities for fast and
robust kurtosis mapping. We report on our investigation into the utility of
the sub-diffusion model for practically useful mapping of mean kurtosis.
## 2 Results
Simulation studies were conducted to establish the requirements on the number
of diffusion times and the separation between them for accurate estimation of
mean kurtosis based on the sub-diffusion model augmented with random Gaussian
noise, following (10). Testing and validation was performed using the human
Connectome 1.0 brain dataset (Tian et al., 2022). The $2\times 2\times
2\text{ mm}^{3}$ resolution data were obtained using two diffusion times
($\Delta=19,49\text{ ms}$) with a pulse duration of $\delta=8\text{ ms}$ and
$G=31,68,105,142,179,216,253,290\text{ mT/m}$, generating b-values of
$50,350,800,1500,2400,3450,4750,6000\text{ s/mm}^{2}$ and
$200,950,2300,4250,6750,9850,13500,17800\text{ s/mm}^{2}$, respectively,
according to $b=(\gamma\delta G)^{2}(\Delta-\delta/3)$. Up to 64 diffusion encoding
directions per b-shell were set. The traditional method for mean kurtosis
estimation was implemented (producing $K_{DKI}$), which is limited to the use
of DW-MRI generated using a single diffusion time (Jensen et al., 2005; Jensen
and Helpern, 2010; Veraart et al., 2011a; Poot et al., 2010), alongside our
implementation based on the sub-diffusion model (3), wherein mean kurtosis
$K^{*}$ is computed as a function of the sub-diffusion model parameter $\beta$
(refer to (9)) using either a single or multiple diffusion times.
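The quoted b-value relation can be checked directly. A small sketch (the gyromagnetic ratio value below is the standard proton constant, an assumption not stated in the text; the quoted shell values appear to be rounded):

```python
import math

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1 (assumed)

def b_value(G_mT_per_m, delta_ms, Delta_ms):
    """Stejskal-Tanner b-value in s/mm^2: b = (gamma * delta * G)^2 * (Delta - delta/3)."""
    G = G_mT_per_m * 1e-3    # T/m
    delta = delta_ms * 1e-3  # s
    Delta = Delta_ms * 1e-3  # s
    b_si = (GAMMA * delta * G) ** 2 * (Delta - delta / 3)  # s/m^2
    return b_si * 1e-6       # s/mm^2

# G = 68 mT/m, delta = 8 ms, Delta = 19 ms gives ~346, vs the quoted (rounded) 350 s/mm^2;
# G = 31 mT/m with Delta = 49 ms gives ~204, vs the quoted 200 s/mm^2.
print(b_value(68, 8, 19), b_value(31, 8, 49))
```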
### 2.1 Multiple diffusion times for robust and accurate mean kurtosis
estimation
In our simulations we tested up to five distinct diffusion times to generate
b-values. Figure 1 illustrates the effects of the number of diffusion times on
the parameter estimation at various SNR levels. We draw attention to a number
of features in the plots. First, as SNR is increased from 5 to 20 (rows 1-3),
the variability in the estimated parameters ($D_{\beta}$, $\beta$, $K^{*}$)
decreases. Second, increasing the number of distinct diffusion times used for
parameter estimation decreases estimation variability, with the most
significant improvement when increasing from one to two diffusion times (rows
1-3). Third, sampling with two distinct diffusion times provides more robust
parameter estimates than sampling twice as many b-values using one diffusion
time (compare $2$ and $2^{\prime}$ violin plots, rows 1-3). Fourth, the last
row (row 4) highlights the improvement in the coefficient of variation (CV)
for each parameter estimate with increasing SNR. This result again confirms
that the most pronounced decline of CV occurs when increasing from one to two
diffusion times, and parameter estimations can be performed more robustly
using DW-MRI data with a relatively high SNR.
Figure 1: Simulation results on the effect of the number of diffusion times
involved in the fitting of the sub-diffusion model (3) parameters
($D_{\beta}$, $\beta$) and computing $K^{*}$ following (9) at various SNR
levels. The ground truth values for ($D_{\beta}$, $\beta$, $K^{*}$) are set to
($3\times 10^{-4}$, 0.75, 0.8125) to represent white matter (blue) and
($5\times 10^{-4}$, 0.85, 0.4733) to represent gray matter (orange). Rows 1-3:
Distributions of fitted parameter values using different numbers of diffusion
times. 2’ represents an additional simulation in which the two diffusion times
are set to be equal, so that the fit uses the same number of data points as
the fit with two different diffusion times. Row 4: Coefficient of variation (CV)
of the parameter values fitted using different numbers of diffusion times.
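Equation (9) itself is not reproduced in this excerpt, but the relation used in the sub-diffusion literature, $K^{*}=6\,\Gamma^{2}(\beta+1)/\Gamma(2\beta+1)-3$, reproduces both ground-truth $(\beta, K^{*})$ pairs quoted in the caption, so it is presumably the intended form:

```python
from math import gamma

def mean_kurtosis_from_beta(beta):
    """K* as a function of the sub-diffusion exponent beta (assumed form of (9))."""
    return 6.0 * gamma(beta + 1) ** 2 / gamma(2 * beta + 1) - 3.0

# Matches the quoted ground-truth pairs to three decimals:
print(mean_kurtosis_from_beta(0.75))  # white matter, quoted K* = 0.8125
print(mean_kurtosis_from_beta(0.85))  # gray matter, quoted K* = 0.4733
print(mean_kurtosis_from_beta(1.0))   # beta = 1 recovers Gaussian diffusion, K* = 0
```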
In Figure 2 we provide simulation results evaluating the choice of the two
distinct diffusion times (assuming $\Delta_{1}<\Delta_{2}$) by measuring the
goodness-of-fit of the model. The smaller of the two diffusion times is stated
along the abscissa, and the difference, i.e., $\Delta_{2}-\Delta_{1}$, is
plotted along the ordinate. The quality of fitting was measured using the
coefficient of determination (the larger the $R^{2}$ value, the better the
goodness-of-fit of the model) for each combination of abscissa and ordinate
values. The conclusion from this figure is that $\Delta_{1}$ should be small,
while $\Delta_{2}$ should be as large as practically feasible. Note, DW-MRI
echo time was not considered in this simulation, but as $\Delta_{2}$
increases, the echo time has to increase proportionally. Because of the
inherent consequence of decreasing SNR with echo time, special attention
should be paid to the level of SNR achievable with the use of a specific
$\Delta_{2}$. Nonetheless, our findings suggest that when $\Delta_{1}=8\text{
ms}$, $\Delta_{2}$ can be set as small as $21\text{ ms}$ to achieve an
$R^{2}>0.90$ with an SNR as low as $5$. If $\Delta_{1}$ is increased past
$15\text{ ms}$, then the separation between $\Delta_{1}$ and $\Delta_{2}$ has
to increase as well, and such choices benefit from an increased SNR in the DW-
MRI data. The Connectome 1.0 DW-MRI data was obtained using
$\Delta_{1}=19\text{ ms}$ and $\Delta_{2}=49\text{ ms}$, leading to a
separation of $30\text{ ms}$. For this data it is expected that with SNR = $5$
an $R^{2}$ around $0.9$ is feasible, and by increasing SNR to $20$, the
$R^{2}$ can increase to a value above $0.99$.
Figure 2: Surface plots of $R^{2}$ values achieved with fitting simulated data
with two diffusion times, $\Delta_{1}$ and $\Delta_{2}$, to the sub-diffusion
model (3) at various SNR levels. $R^{2}$ values were computed by comparing the
estimated mean kurtosis with the ground truth kurtosis. $R^{2}$ contours at
the $0.85$, $0.90$, $0.95$ and $0.99$ levels have been provided for
visualisation purposes.
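The caption states that $R^{2}$ compares estimated with ground-truth mean kurtosis; assuming the standard coefficient of determination (its definition is not spelled out in this excerpt), the computation is:

```python
import numpy as np

def r_squared(k_true, k_est):
    """Coefficient of determination between ground-truth and estimated kurtosis."""
    k_true = np.asarray(k_true, dtype=float)
    k_est = np.asarray(k_est, dtype=float)
    ss_res = np.sum((k_true - k_est) ** 2)          # residual sum of squares
    ss_tot = np.sum((k_true - k_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# A perfect fit gives R^2 = 1.0; any estimation error lowers it.
print(r_squared([0.4, 0.6, 0.8, 1.0], [0.4, 0.6, 0.8, 1.0]))  # 1.0
```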
In Figure 3, we present scatter plots of simulated $K$ values versus
fitted $K$ values using simulated data with different numbers of diffusion
times at various SNR levels. Four cases are provided, including fitting
simulated data generated with $\Delta_{1}=19\text{ ms}$ (row 1) or
$\Delta_{2}=49\text{ ms}$ (row 2), fitting data with both diffusion times (row
3), and fitting data with three diffusion times (row 4). $R^{2}$ values for
each case at each SNR level are provided. This result verifies that sub-
diffusion based kurtosis estimation (blue dots) improves using multiple
diffusion times. The improvement in $R^{2}$ is prominent when moving from
fitting single diffusion time data to two diffusion times data, especially
when the data is noisy (e.g., SNR = 5 and 10). The improvement gained by
moving from two to three distinct diffusion times is marginal (less than
$0.01$ improvement in $R^{2}$ value at SNR = 5 and 10, and no improvements for
SNR = 20 data). Moreover, our simulation findings highlight the deviation
from the ground truth kurtosis $K$ when using the traditional DKI method (orange
dots), especially for kurtosis values larger than 1. Overall, fitting the
sub-diffusion model (3) to data with two adequately separated diffusion times can
lead to robust estimation of mean kurtosis, via (9).
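The full form of model (3) is not reproduced in this excerpt; in the sub-diffusion literature the signal decay is governed by a single-parameter Mittag-Leffler function $E_{\beta}$, which reduces to mono-exponential (Gaussian) decay at $\beta=1$. A truncated-series sketch, valid for small arguments:

```python
from math import gamma, exp

def mittag_leffler(z, beta, n_terms=60):
    """Truncated series E_beta(z) = sum_k z^k / Gamma(beta*k + 1), for small |z|."""
    return sum(z ** k / gamma(beta * k + 1) for k in range(n_terms))

# Sanity check: E_1(z) = exp(z), so beta = 1 recovers mono-exponential decay.
print(abs(mittag_leffler(-1.0, 1.0) - exp(-1.0)) < 1e-12)  # True
# For 0 < beta < 1 the decay is sub-exponential but still bounded by (0, 1]:
print(mittag_leffler(-0.2, 0.75))
```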
Figure 3: Scatter plots of simulated $K$ values vs. fitted $K$ values for
simulated data with different numbers of diffusion times at various SNR levels.
The simulated data is created using the sub-diffusion model with random normal
noise (10). Blue dots represent kurtosis based on fitting the sub-diffusion
model (3). Orange dots represent kurtosis based on fitting the traditional DKI
model (6). The black line is the identity line (corresponding to $R^{2}=1.00$),
on which fitted kurtosis values exactly match the simulated ones.
### 2.2 Time-dependence in DKI metrics
In Figure 4, we show the time-dependence effect of the DKI metrics ($D_{DKI}$
and $K_{DKI}$) after fitting the standard DKI model to our simulated data. In
this fitting, we consider b-values of $0,1000,1400$ and $2500\text{
s}/\text{mm}^{2}$, and vary the diffusion time, as in Jelescu et al. (2022). We depict the parameter
estimates, $D_{DKI}$ and $K_{DKI}$, from simulated data with added noise (SNR
$=20$) in Figure 4(A) and (B) for diffusion times between 10 and 110 ms. In
Figure 4 (C) and (D), using data with no added noise, we illustrate the long-term
fitting results and trends in the parameter estimates. In both (A) and
(C), as diffusion time increases, $D_{DKI}$ decreases as expected for an
effective diffusion coefficient. The mean $D_{DKI}$ (averaged over 1000
simulations) agrees with the ground truth diffusivity $D^{*}$. When it comes
to the kurtosis $K_{DKI}$, in the noiseless data setting (D), we see $K_{DKI}$
is converging to the ground truth kurtosis value $K^{*}$ at large diffusion
time, while in the noisy data setting (B), this trend is not obvious within
the experimental diffusion time window. More explanations of the observed
time-dependence of diffusivity and kurtosis are provided in the Discussion.
Figure 4: Time-dependence in DKI metrics using simulated data at different
diffusion times ($\bar{\Delta}$). The ground truth values for ($D_{\beta}$,
$\beta$, $K^{*}$) used in the simulations are set to ($3\times 10^{-4}$, 0.75,
0.8125) to represent white matter (blue dotted lines) and ($5\times 10^{-4}$,
0.85, 0.4733) to represent gray matter (orange dotted lines). (A) and (B) use
data with added random Gaussian noise (SNR = 20) to estimate the parameters
$D_{DKI}$ and $K_{DKI}$. (C) and (D) use noiseless data to obtain estimates
for large $\bar{\Delta}$ values. Shaded regions in (A) and (B) represent the
95% confidence intervals of the estimates.
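The traditional DKI model (6) is likewise not reproduced here; in its standard form (Jensen et al., 2005) the log-signal is quadratic in b, $\ln(S/S_{0})=-bD+\tfrac{1}{6}b^{2}D^{2}K$, so $D_{DKI}$ and $K_{DKI}$ can be recovered by a polynomial fit. A noiseless sketch using the b-values quoted above:

```python
import numpy as np

# Standard DKI log-signal (Jensen et al., 2005): ln(S/S0) = -b*D + (1/6)*b^2*D^2*K
D_true, K_true = 1.0e-3, 0.8                 # mm^2/s, dimensionless (illustrative values)
b = np.array([0.0, 1000.0, 1400.0, 2500.0])  # s/mm^2, as in the Figure 4 fits
log_signal = -b * D_true + (b ** 2) * (D_true ** 2) * K_true / 6.0

# Fit a quadratic in b and invert the coefficients for D and K.
c2, c1, c0 = np.polyfit(b, log_signal, 2)  # highest degree first
D_fit = -c1
K_fit = 6.0 * c2 / D_fit ** 2
print(D_fit, K_fit)
```

With noise added and only four b-values, this inversion becomes unstable, which is consistent with the estimation-robustness issues discussed above.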
### 2.3 Towards fast DKI data acquisitions
Next, we sought to identify the minimum number and combination of b-values to
use for mean kurtosis estimation based on the sub-diffusion model (3). The
simulation results were generated using the Connectome 1.0 DW-MRI data b-value
setting, which has 16 b-values: 8 from $\Delta=19\text{ ms}$ and 8 from
$\Delta=49\text{ ms}$.
Figure 5: $R^{2}$ values for the b-value sampling optimisation based on DW-MRI
data with SNR = $5$, $10$ and $20$. The specifically investigated b-value
combinations using two, three and four non-zero b-values have been ordered by
the size of the $R^{2}$ value. The colour bar depicts the proportion of
$\Delta=19\text{ ms}$ and $\Delta=49\text{ ms}$ b-values needed to produce the
corresponding $R^{2}$ value. Note, the different b-value combinations were
assigned a unique identifier, and these appear along the abscissa for each of
the three non-zero b-value cases. The b-value combinations achieving the
highest $R^{2}$ values are displayed in the inset pictures and the b-values
are provided in Table 1.
In Figure 5, we computed $R^{2}$ values for all combinations of choices when
the number of non-zero b-values is two, three, and four. We then plotted the
$R^{2}$ values sorted in ascending order for each considered SNR level.
Notably, as the number of non-zero b-values increases, the number of possible
combinations grows as well (i.e., $120$, $560$ and $1820$ for the three
different non-zero b-value cases). The colours illustrate the proportion of
b-values used in the fitting based on $\Delta_{1}$ and $\Delta_{2}$. For
example, $1:0$ means only $\Delta_{1}$ b-values were used, and $2:1$ means two
$\Delta_{1}$ and one $\Delta_{2}$ b-values were used to generate the result.
The b-value combinations achieving highest $R^{2}$ values are displayed in the
inset pictures and the b-values are provided in Table 1.
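The quoted combination counts follow directly from choosing 2, 3 or 4 of the 16 available non-zero b-values:

```python
from math import comb

# Ways to choose k non-zero b-values from the 16 available
# (8 per diffusion time) in the Connectome 1.0 protocol.
print([comb(16, k) for k in (2, 3, 4)])  # [120, 560, 1820]
```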
Our simulation results in Table 1 suggest that increasing the number of non-
zero b-values from two to four improves the quality of the parameter
estimation, also achievable by fitting the sub-diffusion model (3) to higher
SNR data. The gain is larger by using higher SNR data than by using more
b-values. For example, going from two to four non-zero b-values with SNR = 5
data approximately doubles the $R^{2}$, whereas the $R^{2}$ almost triples
when SNR = $5$ data is substituted by SNR = $20$ data. Additionally, the use
of $\Delta_{1}$ or $\Delta_{2}$ alone is not preferred (also see Figure 5),
and preference is towards first using $\Delta_{1}$ and then supplementing with
b-values generated using $\Delta_{2}$. At all SNR levels when only two non-
zero b-values are used, one b-value should be chosen based on the $\Delta_{1}$
set, and the other based on $\Delta_{2}$. Moving to three non-zero b-values
requires the addition of another $\Delta_{1}$ b-value, and when four non-zero
b-values are used then two from each diffusion time are required. If we
consider an $R^{2}=0.90$ to be a reasonable goodness-of-fit for the sub-
diffusion model, then at least three or four non-zero b-values are needed with
an SNR = $20$. If SNR = $10$, then three non-zero b-values will not suffice.
Interestingly, an $R^{2}$ of $0.85$ can still be achieved when SNR = $20$ and
two optimally chosen non-zero b-values are used.
| Two b-values | Three b-values | Four b-values
---|---|---|---
| $\Delta_{1}$ | $\Delta_{2}$ | $R^{2}$ | $\Delta_{1}$ | $\Delta_{2}$ | $R^{2}$ | $\Delta_{1}$ | $\Delta_{2}$ | $R^{2}$
SNR = 5 | 350 | 6750 | 0.01 | 350, 800 | 2300 | 0.46 | 350, 4750 | 950, 4250 | 0.61
800 | 2300 | 0.11 | 350 | 950, 4250 | 0.47 | 350, 6000 | 950, 6750 | 0.62
350 | 4250 | 0.18 | 350 | 2300, 4250 | 0.47 | 350, 4750 | 950, 6750 | 0.62
350 | 950 | 0.26 | 50, 800 | 2300 | 0.48 | 350, 2400 | 950, 6750 | 0.63
350 | 2300 | 0.30 | 350 | 950, 6750 | 0.49 | 350, 2400 | 950, 4250 | 0.63
SNR = 10 | 350 | 4250 | 0.63 | 350, 4750 | 2300 | 0.83 | 350, 2400 | 950, 9850 | 0.91
800 | 4250 | 0.64 | 50, 800 | 2300 | 0.84 | 350, 3450 | 950, 6750 | 0.91
350 | 950 | 0.67 | 350, 800 | 2300 | 0.84 | 350, 6000 | 950, 9850 | 0.91
350 | 2300 | 0.72 | 350, 1500 | 2300 | 0.84 | 350, 4750 | 950, 9850 | 0.91
800 | 2300 | 0.73 | 350, 2400 | 2300 | 0.85 | 350, 3450 | 950, 9850 | 0.91
SNR = 20 | 1500 | 2300 | 0.77 | 350, 6000 | 2300 | 0.89 | 350, 4750 | 2300, 13500 | 0.96
350 | 950 | 0.78 | 50, 1500 | 4250 | 0.90 | 350, 4750 | 2300, 6750 | 0.96
350 | 2300 | 0.80 | 350, 1500 | 2300 | 0.90 | 350, 6000 | 2300, 9850 | 0.96
800 | 4250 | 0.82 | 350, 4750 | 2300 | 0.90 | 350, 4750 | 2300, 9850 | 0.96
800 | 2300 | 0.85 | 50, 2400 | 4250 | 0.90 | 350, 4750 | 950, 6750 | 0.96
| 800 | 2300 | 0.85 | 350, 1500 | 2300 | 0.90 | 350, 1500 | 950, 4250 | 0.92
Table 1: A selection of the best b-value sampling regimes to achieve the
highest $R^{2}$ value in the three cases considered. The various categories
correspond with two, three and four non-zero b-value sampling schemes, with
$\Delta_{1}$ and $\Delta_{2}$ denoting the diffusion time setting used to
generate the b-values. Note, entries are b-values in unit of $\text{
s}/\text{mm}^{2}$, and $\Delta_{1}=19\text{ ms}$ and $\Delta_{2}=49\text{ ms}$
were used to match the Connectome 1.0 DW-MRI data collection protocol. The
entries listed at the bottom row are suggested optimal nonzero b-values for
clinical practice.
Table 1 summarises findings based on having different number of non-zero
b-values with $R^{2}$ values deduced from the SNR = $5$, $10$ and $20$
simulations. We have chosen to depict five b-value combinations producing the
largest $R^{2}$ values for the two, three and four non-zero b-value sampling
cases. We found consistency in b-value combinations across SNR levels. Thus,
we can conclude that a range of b-values can be used to achieve a large
$R^{2}$ value, which is a positive finding, since parameter estimation does
not stringently rely on b-value sampling. For example, using three non-zero
b-values an $R^{2}\geq 0.90$ is achievable based on different b-value
sampling. Importantly, two distinct diffusion times are required, and
preference is towards including a smaller diffusion time b-value first. Hence,
for three non-zero b-values we find two b-values based on $\Delta_{1}$ and one
based on $\Delta_{2}$. This finding suggests one of the $\Delta_{1}$ b-values
can be chosen in the range $50\text{ s}/\text{mm}^{2}$ to $350\text{
s}/\text{mm}^{2}$, and the other in the range $1500\text{ s}/\text{mm}^{2}$ to
$4750\text{ s}/\text{mm}^{2}$. Additionally, the $\Delta_{2}$ b-value can also
be chosen in a range, considering between $2300\text{ s}/\text{mm}^{2}$ to
$4250\text{ s}/\text{mm}^{2}$ based on the Connectome 1.0 b-value settings.
The b-value sampling choices made should nonetheless be in view of the
required $R^{2}$ value. Overall, sampling using two distinct diffusion times
appears to provide quite a range of options for the DW-MRI data to be used to
fit the sub-diffusion model parameters. The suggested optimal b-value sampling
in the last row of Table 1, primarily chosen to minimise b-value size whilst
maintaining a large $R^{2}$ value, may be of use for specific neuroimaging
studies, which will be used to inform our discussion on feasibility for
clinical practice.
### 2.4 Benchmark mean kurtosis in the brain
The benchmark mean kurtosis estimation in the brain is established using the
entire b-value range with all diffusion encoding directions available in the
Connectome 1.0 DW-MRI data. For two subjects in different slices, Figure 6
provides the spatially resolved maps of mean kurtosis computed using the sub-
diffusion method (i.e., $K^{*}$) with one or two diffusion times, and using
the standard method (i.e., $K_{DKI}$) considering the two distinct diffusion
times. First, we notice a degradation in the $K_{DKI}$ image with an increase
in diffusion time. Second, the use of a single diffusion time with the sub-
diffusion model leads to $K^{*}$ values which are larger than either the
$K_{DKI}$ values or $K^{*}$ values generated using two diffusion times. Third,
the quality of the mean kurtosis map appears to visually be best when two
diffusion times are used to estimate $K^{*}$. Superior grey-white matter
tissue contrast (TC) was found for the $K^{*}$ map ($TC=1.73$), compared to
the $K_{DKI}$ maps ($TC=0.80$ for the $\Delta=19\text{ ms}$ dataset and
$TC=1.01$ for the $\Delta=49\text{ ms}$ dataset).
In Figure 7, an error map (measured by root-mean-square error, RMSE) for
Subject 5 slice 74 is presented for the fit of the sub-diffusion model to the
DW-MRI data with two diffusion times. Sample parameter fittings in both
b-space (3) and q-space (4) were provided for four representative white and
grey matter voxels.
Figure 6: Spatially resolved maps of mean kurtosis shown for two example
slices and two different subjects, Subject 3 rescan slice 71 (Panel A) and
Subject 5 slice 74 (Panel B) from the Connectome 1.0 DW-MRI data. Individual
maps were generated using the sub-diffusion model framework ($K^{*}$), as well
as using the traditional approach ($K_{DKI}$). The diffusion times, $\Delta$,
used to generate each plot are provided for each case. We consider the mean
kurtosis maps using two diffusion times ($\Delta=19,49\text{ ms}$) as the
benchmarks.
Figure 7: Representative error
map and sample parameter fits for Subject 5 slice 74. The DW-MRI data with two
diffusion times was fitted to the sub-diffusion model in both q-space (A-D)
and b-space (E-H), following (3) and (4) respectively. The first and second
columns are voxels in white matter (30,20,74) and (45,56,74), respectively.
The third and fourth columns are voxels in grey matter (58,35,74) and
(34,78,74), respectively.
Quantitative findings are provided in Table 2. The analysis was performed for
sub-cortical grey matter (scGM), cortical grey matter (cGM) and white matter
(WM) brain regions. For specifics we refer the reader to the appropriate
methods section. The table entries highlight the differences in mean kurtosis
when computed using the two different approaches. The trend for the
traditional single diffusion time approach is that an increase in $\Delta$
results in a slight decrease in the mean $K_{DKI}$, and an increase in the
coefficient of variation (CV) for any region. For example, the mean $K_{DKI}$
in the thalamus reduces from $0.65$ to $0.58$, while the CV increases from
$30\%$ to $39\%$. An increase in CV of as much as $30\%$ is common for scGM and cGM
regions, and around $10\%$ for WM regions. The CV based on the $K^{*}$ value
for each region is less than the CV for $K_{DKI}$ with either $\Delta=19\text{
ms}$ or $\Delta=49\text{ ms}$.
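The CV reported in parentheses throughout Tables 2 and 3 is the standard deviation divided by the mean; recomputing from the rounded table entries reproduces the quoted percentages to within about a point:

```python
def cv_percent(mean, std):
    """Coefficient of variation as a percentage."""
    return 100.0 * std / mean

# Thalamus K_DKI at Delta = 19 ms: 0.65 +/- 0.19, tabulated as 30%
# (29.2% when recomputed from the rounded table entries).
print(cv_percent(0.65, 0.19))
```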
Figure 8 presents the distributions of the fitted parameters ($D$ and $K$) in
specific brain regions, based on the sub-diffusion model (panel A) and the
standard DKI model with $\Delta=19\text{ ms}$ (panel B). The distributions are
colored by the probability density. Yellow indicates high probability density,
light blue indicates low probability density. In each subplot, the diffusivity
is plotted along the abscissa axis and the kurtosis is along the ordinate
axis. Results for the standard DKI model with $\Delta=49\text{ ms}$ are
qualitatively similar, so are not shown here. From panel (B), we see a
nonlinear relationship of unknown form between the DKI pair, $D_{DKI}$ and
$K_{DKI}$, in all regions considered. By comparison, the sub-diffusion based $K^{*}$ and
$D^{*}$ (panel A) are less correlated with each other, indicating $D^{*}$ and
$K^{*}$ carry distinct information, which will be very valuable for
characterising tissue microstructure.
| $K_{DKI}$ | $K^{*}$
---|---|---
| $\Delta=19\textrm{ ms}$ | $\Delta=49\textrm{ ms}$ | $\Delta=19,49\textrm{ ms}$
scGM | $0.57\pm 0.23(40\%)$ | $0.50\pm 0.24(48\%)$ | $0.60\pm 0.21(35\%)$
Thalamus | $0.65\pm 0.19(30\%)$ | $0.58\pm 0.22(39\%)$ | $0.70\pm 0.17(25\%)$
Caudate | $0.41\pm 0.24(58\%)$ | $0.37\pm 0.23(64\%)$ | $0.39\pm 0.14(35\%)$
Putamen | $0.54\pm 0.21(40\%)$ | $0.45\pm 0.22(49\%)$ | $0.49\pm 0.13(27\%)$
Pallidum | $0.68\pm 0.25(37\%)$ | $0.64\pm 0.26(41\%)$ | $0.93\pm 0.18(19\%)$
cGM | $0.53\pm 0.24(46\%)$ | $0.46\pm 0.23(51\%)$ | $0.40\pm 0.16(39\%)$
Fusiform | $0.55\pm 0.22(40\%)$ | $0.44\pm 0.22(49\%)$ | $0.40\pm 0.15(37\%)$
Lingual | $0.59\pm 0.21(35\%)$ | $0.53\pm 0.22(42\%)$ | $0.47\pm 0.16(34\%)$
WM | $0.78\pm 0.20(25\%)$ | $0.76\pm 0.19(26\%)$ | $0.87\pm 0.22(25\%)$
Cerebral WM | $0.77\pm 0.19(25\%)$ | $0.75\pm 0.19(26\%)$ | $0.85\pm 0.22(26\%)$
Cerebellum WM | $0.99\pm 0.18(18\%)$ | $0.95\pm 0.19(20\%)$ | $1.07\pm 0.22(21\%)$
CC | $0.65\pm 0.22(35\%)$ | $0.65\pm 0.22(34\%)$ | $0.95\pm 0.25(26\%)$
Table 2: Benchmark kurtosis values estimated using the Connectome 1.0 DW-MRI
data for different regions of the human brain. Results are provided for the
traditional mean kurtosis ($K_{DKI}$) at two distinct diffusion times, and
values ($K^{*}$) obtained based on fitting the sub-diffusion model across both
diffusion times. Results are for grey matter (GM) and white matter (WM) brain
regions, in categories of sub-cortical (sc) and cortical (c), and CC stands
for corpus callosum. The pooled means and standard deviations across
participants have been tabulated, along with the coefficient of variation in
parentheses.
Figure 8: Distributions of the estimated parameter pair $(D,K)$
in different regions of the brain of all subjects, colored by the probability
density. Yellow indicates high probability density, light blue indicates low
probability density. Panel A: distributions of $(D^{*},K^{*})$, generated
using the sub-diffusion model (3) with both $\Delta=19,49\text{ms}$. Panel B:
distributions of $(D_{DKI},K_{DKI})$, generated using the standard DKI model
(6) with $\Delta=19\text{ms}$. Kurtosis is dimensionless and diffusivity is in
units of $\times 10^{-3}\text{ mm}^{2}/\text{s}$.
### 2.5 Reduction in DKI data acquisition
Results for reducing the number of diffusion encoding directions, and hence the
SNR, with the purpose of shortening acquisition times are benchmarked against
the $K^{*}$ maps in Figure 6 and the $K^{*}$ values reported in Table 2.
Figure 9 presents the qualitative findings for two subjects generated using
all, optimal and sub-optimal b-value samplings with SNR = $6$ (3 non-collinear
directions, 6 measurements), $10$ (8 non-collinear directions, 16
measurements) and $20$ (32 non-collinear directions, 64 measurements) DW-MRI
data. The quality of the mean kurtosis map improves with increasing SNR, and
also by optimising b-value sampling. Optimal sampling at SNR = $10$ is
qualitatively comparable to the SNR = $20$ optimal sampling result, and to the
benchmark sub-diffusion results in Figure 6.
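The pairing of SNR levels with direction counts above is consistent with SNR scaling as the square root of the number of averaged measurements (our reading; the scaling rule is not stated explicitly in the text):

```python
import math

# SNR level -> measurements per b-shell, as listed for Figure 9.
measurements = {20: 64, 10: 16, 6: 6}

# Assuming SNR is proportional to sqrt(number of averaged measurements),
# and anchoring at SNR = 20 for 64 measurements:
for snr, n in measurements.items():
    print(snr, 20 * math.sqrt(n / 64))  # 64 -> 20.0, 16 -> 10.0, 6 -> ~6.12
```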
Figure 9: Spatially resolved maps of mean kurtosis shown for two example
slices and two different subjects, Subject 3 rescan slice 71 (Panel A) and
Subject 5 slice 74 (Panel B), based on SNR reduction of the Connectome 1.0 DW-
MRI data. Individual maps were generated using the sub-diffusion model
framework ($K^{*}$), considering optimal and sub-optimal four non-zero b-value
sampling schemes. Here, two b-values with $\Delta=19\text{ ms}$ and two
b-values with $\Delta=49\text{ ms}$ were selected for each case. The optimal
b-values were chosen as the best for each SNR shown in Table 1. The sub-
optimal b-values were chosen to have an $R^{2}=0.3,0.45,0.5$ to be about half
of the maximum $R^{2}$, for SNR $=6$ ($b=800,1500,200,2300\text{
s}/\text{mm}^{2}$), SNR $=10$ ($b=1500,3450,6750,13500\text{
s}/\text{mm}^{2}$) and SNR $=20$ ($b=3450,4750,2300,4250\text{
s}/\text{mm}^{2}$), respectively. The benchmark kurtosis map is provided in
Figure 6.
Quantitative verification of the qualitative observations is provided in
Table 3. Significant differences in brain region-specific mean kurtosis values
occur at the SNR $=6$ level, which are not apparent when SNR $=10$ or $20$
data with optimal b-value sampling were used. The average errors are relative
errors compared to the benchmark kurtosis values reported in Table 2. When
using optimal b-values, average errors range from 22% to 43% at SNR = 6, from
13% to 43% at SNR = 10, and from 8% to 20% at SNR = 20, across brain regions.
When using sub-optimal b-values, average errors range from 47% to 57% at SNR =
6, from 24% to 102% at SNR = 10, and from 27% to 72% at SNR = 20. Also, the
brain region-specific CV for mean kurtosis was not found to change markedly
when SNR $=10$ or $20$ data were used to compute $K^{*}$. Reducing the SNR
to $6$, however, approximately doubles the CV for each
brain region. These findings confirm that with optimal b-value sampling, the
data quality can be reduced to around the SNR $=10$ level, without a
significant impact on the region-specific mean kurtosis estimates derived
using the sub-diffusion model.
| SNR = 6 | SNR = 10 | SNR = 20 | Average error for SNR $=6/10/20$
---|---|---|---|---
Optimal b-values |
scGM | $\textit{ 0.47 }\pm 0.27(57\%)$ | $0.65\pm 0.22(33\%)$ | $0.61\pm 0.21(34\%)$ | $37/23/12\%$
Thalamus | $\textit{ 0.57 }\pm 0.26(46\%)$ | $0.75\pm 0.20(26\%)$ | $0.71\pm 0.17(24\%)$ | $33/20/11\%$
Caudate | $\textit{ 0.30 }\pm 0.20(68\%)$ | $0.46\pm 0.17(38\%)$ | $0.40\pm 0.15(38\%)$ | $43/30/15\%$
Putamen | $\textit{ 0.38 }\pm 0.21(56\%)$ | $0.58\pm 0.16(28\%)$ | $0.52\pm 0.14(28\%)$ | $40/28/13\%$
Pallidum | $\textit{ 0.66 }\pm 0.33(49\%)$ | $0.90\pm 0.24(26\%)$ | $0.89\pm 0.20(22\%)$ | $39/19/12\%$
cGM | $0.40\pm 0.22(54\%)$ | $0.51\pm 0.20(39\%)$ | $0.45\pm 0.18(40\%)$ | $30/36/19\%$
Fusiform | $0.37\pm 0.21(57\%)$ | $0.54\pm 0.20(37\%)$ | $0.45\pm 0.17(38\%)$ | $35/43/20\%$
Lingual | $0.45\pm 0.22(50\%)$ | $0.59\pm 0.19(33\%)$ | $0.52\pm 0.18(34\%)$ | $28/34/18\%$
WM | $\textit{ 0.72 }\pm 0.26(36\%)$ | $0.91\pm 0.23(26\%)$ | $0.89\pm 0.22(25\%)$ | $23/13/8\%$
Cerebral WM | $\textit{ 0.71 }\pm 0.25(35\%)$ | $0.89\pm 0.23(25\%)$ | $0.88\pm 0.22(25\%)$ | $22/13/8\%$
Cerebellum WM | $\textit{ 0.83 }\pm 0.29(35\%)$ | $1.17\pm 0.25(21\%)$ | $1.15\pm 0.23(20\%)$ | $27/15/11\%$
CC | $\textit{ 0.88 }\pm 0.36(41\%)$ | $0.98\pm 0.31(31\%)$ | $0.86\pm 0.25(29\%)$ | $26/17/13\%$
Sub-optimal b-values | |
scGM | $0.31\pm 0.22(71\%)$ | $0.82\pm 0.29(36\%)$ | $0.58\pm 0.25(43\%)$ | $55/45/34\%$
Thalamus | $0.36\pm 0.23(65\%)$ | $0.95\pm 0.25(27\%)$ | $0.72\pm 0.19(26\%)$ | $54/43/31\%$
Caudate | $0.23\pm 0.19(81\%)$ | $0.57\pm 0.17(31\%)$ | $0.40\pm 0.21(52\%)$ | $56/57/40\%$
Putamen | $0.25\pm 0.16(67\%)$ | $0.65\pm 0.17(26\%)$ | $0.44\pm 0.20(46\%)$ | $54/42/36\%$
Pallidum | $0.44\pm 0.29(66\%)$ | $1.27\pm 0.32(25\%)$ | $0.76\pm 0.25(33\%)$ | $56/42/32\%$
cGM | $0.25\pm 0.17(67\%)$ | $0.44\pm 0.14(33\%)$ | $0.37\pm 0.17(47\%)$ | $47/30/36\%$
Fusiform | $0.22\pm 0.15(68\%)$ | $0.47\pm 0.16(35\%)$ | $0.33\pm 0.17(52\%)$ | $51/32/38\%$
Lingual | $0.29\pm 0.19(66\%)$ | $0.53\pm 0.16(30\%)$ | $0.39\pm 0.20(51\%)$ | $48/29/40\%$
WM | $0.42\pm 0.18(44\%)$ | $1.16\pm 0.36(31\%)$ | $0.72\pm 0.22(31\%)$ | $53/38/28\%$
Cerebral WM | $0.41\pm 0.18(43\%)$ | $1.15\pm 0.35(31\%)$ | $0.74\pm 0.20(28\%)$ | $53/38/27\%$
Cerebellum WM | $0.47\pm 0.21(45\%)$ | $1.27\pm 0.33(26\%)$ | $0.35\pm 0.26(74\%)$ | $57/24/72\%$
CC | $0.69\pm 0.42(61\%)$ | $1.94\pm 0.44(23\%)$ | $0.92\pm 0.19(20\%)$ | $47/102/28\%$
Table 3: Kurtosis values ($K^{*}$) under the optimal and sub-optimal b-value
sampling regimes for specific brain regions. $K^{*}$ was estimated based on
fitting the sub-diffusion model to the Connectome 1.0 DW-MRI data with two
diffusion times and selected four b-shells. Optimal b-value sampling is
considered to have $R^{2}=0.63$, $0.91$ and $0.96$ for the SNR = $6$, $10$ and
$20$ columns, according to Table 1. Sub-optimal b-values are chosen to have
$R^{2}=0.3$, $0.45$ and $0.5$, respectively, as reported in Figure 9.
Individual entries are for grey matter (GM) and white matter (WM) brain
regions, in categories of sub-cortical (sc) and cortical (c), and CC stands
for corpus callosum. A reduction in SNR level was achieved by reducing the
number of diffusion encoding directions in each b-shell of the DW-MRI data.
The pooled means and standard deviations across participants have been
tabulated, along with the coefficient of variation in parentheses. The entries
identified in italic under the optimal b-value heading were found to be
significantly different from the mean $K^{*}$ reported in Table 2. Sub-optimal
result population means were mostly significantly different from the mean
$K^{*}$, and they are not italicised. The average errors (last column) are
relative errors compared to the benchmark kurtosis values reported in Table 2.
### 2.6 Scan-rescan reproducibility of mean kurtosis
Figure 10 summarises the intraclass correlation coefficient (ICC) distribution
results ($\mu$ for mean; $\sigma$ for standard deviation) for the specific
brain regions analysed. The two sets of ICC values were computed based on all
DW-MRI (i.e., SNR $=20$; subscript A) and the SNR = $10$ optimal b-value
sampling scheme (subscript O). As the value of $\mu$ approaches 1, the inter-
subject variation in mean kurtosis is expected to greatly outweigh the intra-
subject scan-rescan error. The value of $\mu$ should always be above $0.5$,
otherwise parameter estimation cannot be performed robustly and accurately,
and values above $0.75$ are generally accepted as good. The $\mu_{A}$ values
for all regions were in the range $0.76$ (thalamus) to $0.87$ (caudate), and
reduced to the range $0.57$ (thalamus) to $0.80$ (CC) when optimal sampling
with SNR = 10 was used to estimate the $K^{*}$ value. Irrespective of which of
the two DW-MRI data were used for $K^{*}$ estimation, the value of $\mu$ was
greater than or equal to $0.70$ in $20$ out of $24$ cases. The $\mu_{O}$ was
less than $0.70$ for only the thalamus, putamen and pallidum. The loss in ICC
by going to SNR = 10 data with optimal b-value sampling went hand-in-hand with
an increase in $\sigma$, which is not unexpected, since the uncertainty
associated with using less data should be measurable. Overall, $\mu_{A}$,
$\mu_{O}$, and $\sigma_{A}$, $\sigma_{O}$, were fairly consistent across the
brain regions, suggesting the DW-MRI data with SNR = 10 is sufficient for mean
kurtosis estimation based on the sub-diffusion framework.
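The scan-rescan reproducibility figures above rest on the ICC computed from repeated measurements per subject. As a generic reference (the exact ICC variant and software used in the study are not stated in this section), a one-way random-effects ICC for an array of scan-rescan values could be sketched as:

```python
import numpy as np

def icc_one_way(scores: np.ndarray) -> float:
    """One-way random-effects ICC for an (n_subjects, n_sessions) array."""
    n, k = scores.shape
    subject_means = scores.mean(axis=1)
    grand_mean = scores.mean()
    # Between-subject and within-subject mean squares
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical scan-rescan mean kurtosis values for four subjects
scan_rescan = np.array([[0.72, 0.70], [0.85, 0.88], [0.61, 0.64], [0.79, 0.75]])
print(icc_one_way(scan_rescan))  # close to 1 when inter-subject variation dominates
```

When the within-subject (scan-rescan) variance is small relative to the between-subject variance, the ICC approaches 1, which is the regime described in the text.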
Figure 10: Intraclass correlation coefficient (ICC) results for mean kurtosis
are depicted for the 12 brain regions analysed. The mean ($\mu$) and standard
deviation ($\sigma$) computed based on all the Connectome 1.0 DW-MRI data (A),
and the reduced data achieving an SNR = 10 with optimal four non-zero b-value
sampling (O), are provided for each brain region. Histograms were generated
using all data. Mean kurtosis based on the optimised protocol was computed
using the sub-diffusion framework using DW-MRI data with the four non-zero
b-values suggested in Table 1 and diffusion encoding directions down sampled
to achieve an SNR = 10.
## 3 Discussion
DW-MRI allows the measurement of mean kurtosis, a metric for the deviation of
water diffusion in tissue away from standard Brownian motion, which has been
used to infer variations in tissue microstructure. Research on mean kurtosis has shown
benefits in specific applications over other diffusion related measures
derived from DW-MRI data (Li et al., 2022b; Liu et al., 2021; Huang et al.,
2021a; Guo et al., 2022a; Goryawala et al., 2022; Guo et al., 2022b; Wang et
al., 2022; Li et al., 2022a, d; Maralakunte et al., 2022; Hu et al., 2022;
Zhou et al., 2022; Li et al., 2022c). Whilst many efforts have been made to
optimise mean kurtosis imaging for clinical use, the limitations have been
associated with lack of robustness and the time needed to acquire the DW-MRI
data for mean kurtosis estimation. The choice of the biophysical model and how
diffusion encoding is applied are critical to how well kurtosis in the brain
is mapped. Here, we evaluated the mapping of mean kurtosis based on the sub-
diffusion model, which allows different diffusion times to be incorporated
into the data acquisition. Using simulations and the Connectome 1.0 public DW-
MRI dataset, involving a range of diffusion encodings, we showed that mean
kurtosis can be mapped robustly and rapidly provided at least two different
diffusion times are used and care is taken towards how b-values are chosen
given differences in the SNR level of different DW-MRI acquisitions.
### 3.1 Reduction in scan time
Previous attempts have been made in optimising the DW-MRI acquisition protocol
for mean kurtosis estimation based on the traditional, single diffusion time
kurtosis model (Hansen et al., 2013; Hu et al., 2022; Poot et al., 2010;
Hansen et al., 2016). Considerations have been made towards reducing the
number of b-shells, directions per shell, and sub-sampling of DW-MRI for each
direction in each shell. Our findings suggest that robust estimation of
kurtosis cannot be achieved using the classical model for mean kurtosis, as
highlighted previously (Yang et al., 2022; Ingo et al., 2014, 2015; Barrick et
al., 2020). A primary limitation of the traditional method is the use of the
cumulant expansion resulting in sampling below a b-value of around $2500\text{
s}/\text{mm}^{2}$ (Jensen et al., 2005; Jensen and Helpern, 2010), and using
the sub-diffusion model this limitation is removed (Yang et al., 2022). Our
simulation findings and experiments using the Connectome 1.0 data confirm that
mean kurtosis can be mapped robustly and rapidly using the sub-diffusion model
applied to an optimised DW-MRI protocol. Optimisation of the data acquisition
with the use of the sub-diffusion model has not been considered previously.
Mean kurtosis values can be generated from a limited number of diffusion
encoding directions (refer to Figure 9 and Table 3). Since each direction in
each b-shell takes a fixed amount of time, a four b-shell acquisition with six
directions per shell takes $25$ times longer than a single diffusion encoding
acquisition (assuming a single b-value = 0 image is also collected). The total
acquisition time for the diffusion MRI protocol for
the Connectome 1.0 data was 55 minutes, including 50 b $=0\text{
s}/\text{mm}^{2}$ scans plus seven b-values with 32 and nine with 64 diffusion
encoding directions (Tian et al., 2022). This gives a total of 850 scans per
dataset. As such, a single 3D image volume took $3.88\text{ s}$ to acquire.
Conservatively allowing $4\text{ s}$ per scan, and considering SNR $=20$ data
(i.e. 64 directions) over four b-values and a single b-value = 0 scan, DW-MRI
data for mean kurtosis estimation can be completed in $17\text{ min}\,8\text{
s}$ ($R^{2}=0.96$). At SNR $=10$ (i.e. 16 directions), DW-MRI data with the
same number of b-values can be acquired in $4\text{ min}\,20\text{ s}$
($R^{2}=0.91$). If an $R^{2}=0.85$ (SNR $=10$) is deemed adequate, then one
less b-shell is required, saving an additional $64\text{ s}$. We should point
out that even though 2-shell optimised protocols can achieve $R^{2}=0.85$ with
SNR = 20, this is not equivalent in time to using 3-shells with SNR = $10$
(also $R^{2}=0.85$). This is because $4\times$ additional data are required to
double the SNR (equivalent to acquiring an additional 4-shells). However, only
one extra b-shell is required to convert 2-shell data to 3-shells with SNR =
$10$. Our expected DW-MRI data acquisition times are highly feasible
clinically, where generally neuroimaging scans take around $15\text{ min}$
involving numerous different MRI contrasts and often a DTI acquisition.
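The timing arithmetic above can be reproduced with a small helper. The $4\text{ s}$ per volume is the conservative figure quoted in the text, and the direction counts below are inferred from the quoted acquisition times (e.g. 4 min 20 s corresponds to $4\times 16+1=65$ volumes):

```python
def scan_time_seconds(n_shells: int, dirs_per_shell: int,
                      n_b0: int = 1, seconds_per_volume: float = 4.0) -> float:
    """Total DW-MRI acquisition time for a simple multi-shell scheme."""
    n_volumes = n_b0 + n_shells * dirs_per_shell
    return n_volumes * seconds_per_volume

print(scan_time_seconds(4, 64))  # 1028 s = 17 min 8 s (SNR = 20 protocol)
print(scan_time_seconds(4, 16))  # 260 s = 4 min 20 s
print(scan_time_seconds(3, 16))  # 196 s, i.e. under 4 min for 3 shells
```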
An early study on estimating mean kurtosis demonstrated the mapping of a
related metric in less than $1\text{ min}$ over the brain (Hansen et al.,
2013). Clinical adoption of the protocol has nevertheless been lacking,
possibly because b-values are a function of $\delta$, $G$ and $\Delta$. Hence, different b-values can be obtained
using different DW-MRI protocol settings, leading to differences in the
mapping of mean kurtosis based on the data (we showed the $\Delta$ effect in
Figure 6 and Table 2). Our findings suggest this impediment is removed by
sampling and fitting data with b-values across two distinct diffusion times.
Nonetheless, we should consider what might be an acceptable DW-MRI data
acquisition time.
A recent study on nasopharyngeal carcinoma investigated reducing the number of
b-shell signals based on fixing diffusion encoding directions to the three
Cartesian orientations (He et al., 2022). The 3-shell acquisitions took
$3\text{ min }2\text{ s}$ to acquire, while the 5-shell data required $5\text{
min }13\text{ s}$. They also investigated the impact of using partial
Fourier sampling, i.e., reducing the amount of data needed for image
reconstruction by reducing k-space line acquisitions for each diffusion
encoded image. Their benchmark used 5-shells
$(200,400,800,1500,2000\text{ s}/\text{mm}^{2})$, and they found that partial
Fourier sampling with omission of the $1500\text{ s}/\text{mm}^{2}$ b-shell
produced acceptable results. With this
acquisition the scan could be completed in $3\text{ min\,}46\text{ s}$, more
than $2\times$ faster than the benchmark $8\text{ min\,}31\text{ s}$. Our
proposed 3-shells acquisition with an $R^{2}=0.85$ (see SNR = $10$ results in
Table 1), executable in under $4\text{ min}$, is therefore in line with current
expectations. Note, at the $R^{2}=0.85$ level the ICC for the different brain
regions were in the range 0.60 to 0.69, and these were not formally reported
in Figure 10. This level of reproducibility is still acceptable for routine
use. We should additionally point out that we used the Subject 1 segmentation
labels, after having registered each DW-MRI dataset to the first scan of
Subject 1. This approach results in a slight mismatch of the region-specific
segmentations when carried across subjects, inherently resulting in an underestimation of
ICC values.
Less than $4\text{ min\,}$ DW-MRI data acquisitions can potentially replace
existing data acquisitions used to obtain DTI metrics, since even the
estimation of the apparent diffusion coefficient improves by using DW-MRI data
relevant to DKI (Veraart et al., 2011b; Wu and Cheung, 2010). Additionally, it
is increasingly clear that in certain applications the DKI analysis offers a
more comprehensive approach for tissue microstructure analysis (Li et al.,
2022b; Liu et al., 2021; Huang et al., 2021a; Guo et al., 2022a; Goryawala et
al., 2022; Guo et al., 2022b; Wang et al., 2022; Li et al., 2022a, d;
Maralakunte et al., 2022; Hu et al., 2022; Zhou et al., 2022; Li et al.,
2022c). As such, multiple b-shell, multiple diffusion encoding direction DW-
MRI acquisitions should be used for the calculation of both DTI and DKI
metrics.
### 3.2 DW-MRI data acquisition considerations
To achieve $R^{2}>0.92$ for estimating kurtosis $K^{*}$, it is necessary to
have four b-values, e.g., two relatively small b-values ($350\text{
s}/\text{mm}^{2}$ using $\Delta=19\text{ ms}$, and $950\text{
s}/\text{mm}^{2}$ using $\Delta=49\text{ ms}$, both with $G=68\text{
mT}/\text{m}$) and two larger b-values of around $1500\text{ s}/\text{mm}^{2}$
and $4250\text{ s}/\text{mm}^{2}$ (using $\Delta=19\text{ ms}$ and
$\Delta=49\text{ ms}$ respectively, both with $G=142\text{ mT}/\text{m}$) (see
bottom row in Table 1). If two or three non-zero b-values are considered
sufficient (with $R^{2}=0.85$ or $0.90$), then the larger $\Delta$ needs to be
used to set the largest b-value to be $2300\text{ s}/\text{mm}^{2}$, and the
other(s) should be set using the smaller $\Delta$. For the two non-zero
b-values case, the b-value from the smaller $\Delta$ should be around
$800\text{ s}/\text{mm}^{2}$. For the three non-zero b-values case, the
b-values from the smaller $\Delta$ would then need to be $350\text{
s}/\text{mm}^{2}$ and $1500\text{ s}/\text{mm}^{2}$. Interestingly, b-value =
$800\text{ s}/\text{mm}^{2}$ lies around the middle of the $350\text{
s}/\text{mm}^{2}$ to $1500\text{ s}/\text{mm}^{2}$ range. The additional gain
to $R^{2}=0.92$ can be achieved by splitting the b-value with the larger
$\Delta$ into two, again with $2300\text{ s}/\text{mm}^{2}$ near the middle of
the two new b-values set. In addition, the separation between $\Delta_{1}$ and
$\Delta_{2}$ needs to be as large as practicable, as can be deduced from the
simulation result in Figure 2, but attention should be paid to the decrease in
signal-to-noise ratio with increased echo time (He et al., 2022).
A recent study on optimising quasi-diffusion imaging (QDI) considered b-values
up to $5000\text{ s}/\text{mm}^{2}$ (Spilling et al., 2022). QDI is a
non-Gaussian approach that differs from the sub-diffusion model, but it still
uses the Mittag-Leffler function and involves the same number of model
parameters. The DW-MRI data used in their study were acquired with a single
diffusion time. Nevertheless, particular points are worth noting. Their
primary finding was parameter dependence on the maximum b-value used to create
the DW-MRI data. They also showed the accuracy, precision and reliability of
parameter estimation are improved with increased number of b-shells. They
suggested a maximum b-value of $3960\text{ s}/\text{mm}^{2}$ for the 4-shell
parameter estimation. Our results do not suggest a dependence of the parameter
estimates on maximum b-value (note, if a maximum b-value dependence is
present, benchmark versus optimal region specific results in Tables 2 and 3
should show some systematic difference; and we used the real part of the DW-
MRI data and not the magnitude as commonly used). These findings potentially
confirm that $\Delta$ separation is an important component of obtaining a
quality parameter fit from which mean kurtosis is deduced (Figure 2).
### 3.3 Diffusion gradient pulse amplitudes
Commonly available human clinical MRI scanners are capable of $40\text{ mT/m}$
gradient amplitudes. Recently, the increased need to deduce tissue
microstructure metrics from DW-MRI measurements has led to hardware
developments resulting in $80\text{ mT/m}$ gradient strength MRI scanners.
These were initially sought by research centres. The Connectome 1.0 scanners
achieve $300\text{ mT/m}$ gradient amplitudes (Tian et al., 2022), which in
turn allow large b-values within reasonable echo times. The Connectome 2.0
scanner is planned to achieve $500\text{ mT/m}$ gradient amplitudes (Huang et
al., 2021b), providing data for further exploration of the $(q,\Delta)$ space
by providing a mechanism for increasing our knowledge of the relationships
between the micro-, meso- and macro-scales. Whilst there are three Connectome
1.0 scanners available, only one Connectome 2.0 scanner is planned for
production. Hence, robust and fast methods utilising existing $80\text{ mT/m}$
gradient systems are needed, and the Connectome scanners provide opportunities
for testing and validating methods.
The b-value is an intricate combination of $\delta$, $G$, and $\Delta$. An
increase in any of these three parameters increases the b-value (note,
$\delta$ and $G$ increase b-value quadratically, and $\Delta$ linearly). An
increase in $\delta$ may in turn require an increase in $\Delta$, the
consequence of which is a reduction in signal-to-noise ratio as the echo time
has to be adjusted proportionally. Partial Fourier sampling methods aim to
counteract the need to increase the echo time by sub-sampling the DW-MRI data
used to generate an image for each diffusion encoding direction (Zong et al.,
2021; Heidemann et al., 2010). The ideal scenario, therefore, is to increase
$G$, as for the Connectome 1.0 and 2.0 scanners.
Based on our suggested b-value settings in Table 1, a maximum b-value of
around $4250\text{ s}/\text{mm}^{2}$ is required for robust mean kurtosis
estimation (assume $R^{2}>0.90$ is adequate and achieved via four non-zero
b-values and two distinct $\Delta$s, and we should highlight that b-value =
$1500\text{ s}/\text{mm}^{2}$ ($\Delta=19\text{ ms}$) and b-value =
$4250\text{ s}/\text{mm}^{2}$ ($\Delta=49\text{ ms}$) were achieved using
$G=142\text{ mT/m}$). Considering $80\text{ mT/m}$ gradient sets are the new
standard for MRI scanners, an adjustment to $\delta$ to compensate for $G$ is
needed (recall, $q=\gamma\delta G$). Hence, for a constant $q$, $G$ can be
reduced at the cost of proportionally increasing $\delta$. This then
allows MRI systems with lower gradient amplitudes to generate the relevant
b-values. For example, changing the $\delta$ from $8\text{ ms}$ to $16\text{
ms}$ would result in halving of $G$ (using a maximum $G$ of $142\text{ mT/m}$
from the Connectome 1.0 can achieve the optimal b-values; hence halving would
require around $71\text{ mT/m}$ gradient pulse amplitudes). Increasing
$\Delta$ above $49\text{ ms}$ to create larger b-value data is unlikely to be
a viable solution due to longer echo times leading to a loss in SNR. SNR
increases afforded by moving from $3\text{ T}$ to $7\text{ T}$ MRI are most
likely counteracted by an almost proportional decrease in $T_{2}$ times
(Pohmann et al., 2016), in addition to $7\text{ T}$ MRI bringing new
challenges in terms of increased transmit and static field inhomogeneities
leading to signal inconsistencies across an image (Kraff and Quick, 2017).
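The $\delta$-for-$G$ trade-off described above can be checked numerically from the relation b-value $=(\gamma\delta G)^{2}(\Delta-\delta/3)$ used in this paper. A short sketch, assuming the standard proton gyromagnetic ratio (the computed values land near, but not exactly on, the nominal shell values quoted from the dataset):

```python
GAMMA = 2.6752218744e8  # proton gyromagnetic ratio (rad s^-1 T^-1), assumed value

def b_value(delta_s: float, G_T_per_m: float, Delta_s: float) -> float:
    """b-value = (gamma*delta*G)^2 * (Delta - delta/3), returned in s/mm^2."""
    q = GAMMA * delta_s * G_T_per_m          # q-space parameter (rad/m)
    return q**2 * (Delta_s - delta_s / 3.0) * 1e-6

# delta = 8 ms, G = 142 mT/m, Delta = 49 ms: near the nominal 4250 s/mm^2 shell
b_ref = b_value(0.008, 0.142, 0.049)
# Doubling delta and halving G keeps q constant; b drops only slightly via delta/3
b_low_G = b_value(0.016, 0.071, 0.049)
print(b_ref, b_low_G)
```

The second call illustrates the text's point: a $71\text{ mT/m}$ system can reach essentially the same b-value as $142\text{ mT/m}$ by doubling $\delta$, with only a small reduction through the $\Delta-\delta/3$ factor.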
### 3.4 Relationship between sub-diffusion based mean kurtosis $K^{*}$ and
histology
Following Table 2 (last column), white matter regions showed high kurtosis
(0.87$\pm$0.22), consistent with a structured heterogeneous environment
comprising parallel neuronal fibres, as shown in Maiter et al. (2021).
Cortical grey matter showed low kurtosis (0.40$\pm$0.16). Subcortical grey
matter regions showed intermediate kurtosis (0.60$\pm$0.21). In particular,
caudate and putamen showed kurtosis similar to cortical grey matter, while thalamus and
pallidum showed similar kurtosis properties to white matter. Histological
staining results of the subcortical nuclei (Maiter et al., 2021) showed the
subcortical grey matter was permeated by small white matter bundles, which
could account for the similar kurtosis in thalamus and pallidum to white
matter. These results confirmed that our proposed sub-diffusion based mean
kurtosis $K^{*}$ is consistent with published histology of normal human brain.
### 3.5 Time-dependence of diffusivity and kurtosis
The time-dependence of diffusivity and kurtosis has attracted much interest in
the field of tissue microstructure imaging. Although our motivation here is
not to map time-dependent diffusion, we can nonetheless point out that the
assumption of a sub-diffusion model provides an explanation of the observed
time-dependence of diffusivity and kurtosis. From (5), the diffusivity that
arises from the sub-diffusion model is of the form
$D_{SUB}=D_{\beta}\bar{\Delta}^{\beta-1}$, where $\bar{\Delta}$ is the
effective diffusion time, $D_{SUB}$ has the standard units for a diffusion
coefficient $\text{mm}^{2}\text{/s}$, and $D_{\beta}$ is the anomalous
diffusion coefficient (with units $\text{mm}^{2}\text{/s}^{\beta}$) associated
with sub-diffusion. In the sub-diffusion framework (1), $D_{\beta}$ and
$\beta$ are assumed to be constant, and hence $D_{SUB}$ exhibits a fractional
power-law dependence on diffusion time. Then, following (8), $D^{*}$ is
obtained simply by scaling $D_{SUB}$ by a constant $1/\Gamma(1+\beta)$, and
hence also follows a fractional power-law dependence on diffusion time. This
time-dependence effect of diffusivity was illustrated in our simulation
results, Figure 4(A) and (C).
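The power-law dependence in (5) can be evaluated directly. With hypothetical parameter values chosen purely for illustration ($D_{\beta}=10^{-3}\text{ mm}^{2}/\text{s}^{\beta}$, $\beta=0.85$), $D_{SUB}$ decreases with the effective diffusion time:

```python
def d_sub(D_beta: float, beta: float, Delta_bar_s: float) -> float:
    """Apparent diffusivity D_SUB = D_beta * Delta_bar^(beta - 1), Eq. (5)."""
    return D_beta * Delta_bar_s ** (beta - 1.0)

# Effective diffusion times for Delta = 19 and 49 ms with delta = 8 ms
short_t = 0.019 - 0.008 / 3.0
long_t = 0.049 - 0.008 / 3.0
# For beta < 1 the apparent diffusivity decays with diffusion time
print(d_sub(1e-3, 0.85, short_t), d_sub(1e-3, 0.85, long_t))
```

For $\beta=1$ the time dependence vanishes and $D_{SUB}=D_{\beta}$, consistent with ordinary Gaussian diffusion.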
When it comes to kurtosis, the literature on the time-dependence is mixed.
Some work showed kurtosis to be increasing with diffusion time in both white
and gray matter (Aggarwal et al., 2020), and in gray matter (Ianus et al.,
2021), while others showed kurtosis to be decreasing with diffusion time in
gray matter (Lee et al., 2020; Olesen et al., 2022; Jelescu et al., 2022). In
this study, we provide an explanation of these mixed results. We construct a
simulation of diffusion MRI signal data based on the sub-diffusion model (3)
augmented with random Gaussian noise. Then we fit the conventional DKI model
to the synthetic data. As shown in Figure 4(D), when there is no noise,
$K_{DKI}$ increases with diffusion time in white matter, while decreasing with
diffusion time in gray matter. When there is added noise, as shown in Figure
4(B), the time-dependency of kurtosis within the timescale of a usual MR
experiment is not clear. This goes some way to explaining why the results in
the literature on the time-dependence of kurtosis are quite mixed.
Furthermore, we summarise the benefits of using the sub-diffusion based mean
kurtosis measurement $K^{*}$. First, as shown in (9), sub-diffusion based mean
kurtosis $K^{*}$ is not time-dependent, and hence has the potential to become
a tissue-specific imaging biomarker. Second, the fitting of the sub-diffusion
model is straightforward, fast and robust, from which the kurtosis $K^{*}$ is
simply computed as a function of the sub-diffusion model parameter $\beta$,
(9). Third, the kurtosis $K^{*}$ is not subject to any restriction on the
maximum b-value, as in standard DKI. Hence its value truly reflects the
information contained in the full range of b-values.
### 3.6 Extension to directional kurtosis
The direct link between the sub-diffusion model parameter $\beta$ and mean
kurtosis is well established (Yang et al., 2022; Ingo et al., 2014, 2015). An
important aspect to consider is whether mean $\beta$ used to compute the mean
kurtosis is alone sufficient for clinical decision making. While benefits of
using kurtosis metrics over other DW-MRI data derived metrics in certain
applications are clear, the adequacy of mean kurtosis over axial and radial
kurtosis is less apparent. Most studies perform the mapping of mean kurtosis,
probably because the DW-MRI data can be acquired in practically feasible
times. Nonetheless, we can point to a few recent examples where the
measurement of directional kurtosis has clear advantages. A study on mapping
tumour response to radiotherapy treatment found axial kurtosis to provide the
best sensitivity to treatment response (Goryawala et al., 2022). In a
different study a correlation was found between glomerular filtration rate and
axial kurtosis in assessing renal function and interstitial fibrosis (Li et
al., 2022a). Unipolar depression subjects have been shown to have brain region
specific increases in mean and radial kurtosis, while for bipolar depression
subjects axial kurtosis decreased in specific brain regions and decreases in
radial kurtosis were found in other regions (Maralakunte et al., 2022). This
selection of studies highlights future opportunities for extending the methods
to additionally map axial and radial kurtosis.
Notably, estimates for axial and radial kurtosis require directionality of
kurtosis to be resolved, resulting in DW-MRI sampling over a large number of
diffusion encoding directions within each b-shell (Jensen and Helpern, 2010;
Poot et al., 2010). As such, extension to directional kurtosis requires a
larger DW-MRI dataset acquired using an increased number of diffusion encoding
directions. The number of b-shells and directions therein necessary for robust
and accurate mapping of directional kurtosis based on the sub-diffusion model
is an open question.
There are three primary ways of determining mean kurtosis. These include the
powder averaging over diffusion encoding directions in each shell, and then
fitting the model, as in our approach. A different approach is to ensure each
b-shell in the DW-MRI data contains the same diffusion encoding directions,
and then kurtosis can be estimated for each diffusion encoding direction,
after which the average over directions is used to state mean kurtosis.
Lastly, the rank-4 kurtosis tensor is estimated from the DW-MRI data, from
which mean kurtosis is computed directly. The latter two approaches are
potential candidates for extending to axial and radial kurtosis mapping. Note,
in DTI a rank-2 diffusion tensor with six unique entries needs to be
estimated, whilst DKI additionally requires the rank-4 kurtosis tensor with 15
unknowns, resulting in 21 unknowns altogether (Hansen et al., 2016). As such,
DKI analysis for directional kurtosis requires a much greater number of
diffusion encoding directions to be
sampled than DTI. This automatically means that at least 22 DW-MRI data
(including b-value = 0) with different diffusion encoding properties have to
be acquired (Jensen and Helpern, 2010). The traditional approach has been to
set five distinct b-values with 30 diffusion encoding directions within each
b-shell (Poot et al., 2010). Hence, to obtain the entries of the rank-4
kurtosis tensor, much more DW-MRI data is needed in comparison to what is
proposed for mean kurtosis estimation in this study. Estimation of the tensor
entries from this much data is prone to noise, motion and image artifacts in
general (Tabesh et al., 2010), posing challenges on top of long DW-MRI data
acquisition times.
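The unknown counts quoted above follow from the number of unique entries of a fully symmetric tensor, $\binom{d+r-1}{r}$ for rank $r$ in $d$ dimensions. A quick check:

```python
from math import comb

def symmetric_tensor_unknowns(rank: int, dim: int = 3) -> int:
    """Unique entries of a fully symmetric rank-`rank` tensor in `dim` dimensions."""
    return comb(dim + rank - 1, rank)

print(symmetric_tensor_unknowns(2))  # 6  (DTI diffusion tensor)
print(symmetric_tensor_unknowns(4))  # 15 (DKI kurtosis tensor)
```

Together these give the 21 unknowns of a full DKI fit, versus the six of DTI.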
A kurtosis tensor framework based on the sub-diffusion model where separate
diffusion encoding directions are used to fit a direction specific $\beta$ is
potentially an interesting line of investigation for the future, since it can
be used to establish a rank-2 $\beta$ tensor with only six unknowns, requiring
at least six distinct diffusion encoding directions. This type of approach can
reduce the amount of DW-MRI data to be acquired, and potentially serve as a
viable way forward for the combined estimation of mean, axial and radial
kurtosis.
### 3.7 Kurtosis estimation outside of the brain
Although our study has been focusing on mean kurtosis imaging in the human
brain, it is clear that DKI has wide application outside of the brain (Li et
al., 2022b; Liu et al., 2021; Huang et al., 2021a; Guo et al., 2022a; Li et
al., 2022a, d; Zhou et al., 2022). Without having conducted experiments
elsewhere, we cannot provide specific guidelines for mean kurtosis imaging in
the breast, kidney, liver, and other human body regions. We can, however,
point the reader in a specific direction.
The classical mono-exponential model can be recovered from the sub-diffusion
equation by setting $\beta=1$. For this case, the product between the b-value
and fitted diffusivity has been reported to be insightful for b-value sampling
(Yablonskiy and Sukstanskii, 2010), in accordance with a theoretical
perspective (Istratov and Vyvenko, 1999). It was suggested the product should
approximately span the (0, 1) range. Considering our case based on the sub-
diffusion equation, we can investigate the size of $bD_{SUB}$ by analysing the
four non-zero b-value optimal sampling regime ($\Delta_{1}$: $350\text{
s}/\text{mm}^{2}$ and $1500\text{ s}/\text{mm}^{2}$; $\Delta_{2}$: $950\text{
s}/\text{mm}^{2}$ and $4250\text{ s}/\text{mm}^{2}$ from Table 1). Considering
scGM, cGM and WM brain regions alone, the rounded and dimensionless $bD_{SUB}$
values are $(0.09,0.38,0.20,0.88)$, $(0.13,0.55,0.30,1.34)$ and
$(0.05,0.23,0.11,0.48)$, respectively, and note that in each case the first
two effective sampling values are based on $\Delta_{1}$, and the other two are
derived using $\Delta_{2}$. Interestingly, the log-linear sampling proposed in
(Istratov and Vyvenko, 1999) is closely mimicked by the effective sampling
regime (scGM: $-2.42$, $-1.62$, $-0.97$, $-0.13$; cGM: $-2.06$, $-1.20$,
$-0.60$, $0.29$; WM: $-2.94$, $-2.23$, $-1.48$, $-0.73$; by sorting and taking
the natural logarithm). This analysis also informs on why it may be difficult
to obtain a generally large $R^{2}$ across the entire brain, since $\beta$ and
$D_{SUB}$ are brain region specific and the most optimal sampling strategy
should be $\beta$ and $D_{SUB}$ specific. Whilst region specific sampling may
provide further gains in the $R^{2}$ value, and improve ICC values for
specific brain regions, such data would take a long time to acquire and
require extensive post-processing and in-depth analyses.
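The log-linear spacing claim can be verified directly from the quoted $bD_{SUB}$ products; small deviations from the figures in the text arise because the products themselves are rounded:

```python
import numpy as np

# Dimensionless b*D_SUB products for scGM as quoted in the text
b_dsub_scgm = [0.09, 0.38, 0.20, 0.88]
log_sorted = np.log(np.sort(b_dsub_scgm))
print(np.round(log_sorted, 2))  # approximately [-2.41 -1.61 -0.97 -0.13]
```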
## 4 Methods
### 4.1 Theory
#### 4.1.1 Sub-diffusion modelling framework
In biological tissue, the motion of water molecules is hindered by various
microstructures, and hence the diffusion can be considerably slower than
unhindered, unrestricted, free diffusion of water. The continuous time random
walk formalism provides a convenient mathematical framework to model this sub-
diffusive behaviour using fractional calculus (Metzler and Klafter, 2000). The
resulting probability density function $P(x,t)$ of water molecules at location
$x$ (in units of $\mathrm{m}\mathrm{m}$) at time $t$ (in units of
$\mathrm{s}$) satisfies the time fractional diffusion equation:
$\frac{\partial^{\beta}P(x,t)}{\partial t^{\beta}}=D_{\beta}\nabla^{2}P(x,t),\quad 0<\beta\leq 1,$ (1)
where $\frac{\partial^{\beta}}{\partial t^{\beta}}$ is the time fractional
derivative of order $\beta$ ($0<\beta\leq 1$) in the Caputo sense, $D_{\beta}$
is the generalised anomalous diffusion coefficient with unit of $\text{
mm}^{2}/\text{s}^{\beta}$, and the parameter $\beta$ characterises the
distribution of waiting times between two consecutive steps in the continuous
time random walk interpretation. When $\beta=1$, the waiting times have finite
mean; when $0<\beta<1$, the waiting times have infinite mean, leading to sub-
diffusion behaviour. The solution to the time fractional diffusion equation
(1) in Fourier space is:
$p(k,t)=E_{\beta}\left(-D_{\beta}|k|^{2}t^{\beta}\right),$ (2)
where $E_{\beta}(z)=\sum_{n=0}^{\infty}\frac{z^{n}}{\Gamma(1+\beta n)}$ is the
single-parameter Mittag-Leffler function, $\Gamma$ is the standard Gamma
function and by definition $E_{1}(z)=\exp(z)$. In the context of DW-MRI, $k$
in (2) represents the q-space parameter $q=\gamma G\delta$, $t$
represents the effective diffusion time $\bar{\Delta}=\Delta-\delta/3$ and
$p(k,t)$ represents the signal intensity $S(q,\bar{\Delta})$, leading to the
diffusion signal equation (Magin et al., 2020):
$S(q,\bar{\Delta})=S_{0}E_{\beta}\left(-D_{\beta}q^{2}{\bar{\Delta}}^{\beta}\right).$
(3)
Defining $b=q^{2}\bar{\Delta}$, the DW-MRI signal then can be expressed in
terms of b-values:
$S(b)=S_{0}E_{\beta}(-bD_{SUB}),$ (4)
where
$D_{SUB}=D_{\beta}\bar{\Delta}^{\beta-1}$ (5)
has the standard unit for a diffusion coefficient, $\text{mm}^{2}/\text{s}$.
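Equations (3)-(5) can be sketched numerically. The truncated power series below is adequate in double precision for moderate $|bD_{SUB}|$; dedicated algorithms are needed to evaluate the Mittag-Leffler function accurately at large negative arguments:

```python
import math

def mittag_leffler(beta: float, z: float, n_terms: int = 120) -> float:
    """Truncated series E_beta(z) = sum_{n>=0} z^n / Gamma(1 + beta*n)."""
    return sum(z**n / math.gamma(1.0 + beta * n) for n in range(n_terms))

def signal(b: float, D_sub: float, beta: float, S0: float = 1.0) -> float:
    """Sub-diffusion DW-MRI signal, Eq. (4): S(b) = S0 * E_beta(-b * D_SUB)."""
    return S0 * mittag_leffler(beta, -b * D_sub)

# beta = 1 recovers the mono-exponential model, since E_1(-x) = exp(-x)
print(signal(1000.0, 1e-3, 1.0))  # exp(-1), about 0.3679
```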
#### 4.1.2 Diffusional kurtosis imaging
The traditional DKI approach was proposed by Jensen et al. (Jensen et al.,
2005; Jensen and Helpern, 2010) to measure the extent of non-Gaussian
diffusion in biological tissues using DW-MRI data:
$S(b)=S_{0}\exp\left(-bD_{DKI}+\frac{1}{6}b^{2}D_{DKI}^{2}K_{DKI}\right),$ (6)
where $S$ is the signal for a given diffusion weighting $b$ (i.e., b-value),
$S_{0}$ is the signal when $b=0$, $D_{DKI}$ and $K_{DKI}$ are the apparent
diffusivity and kurtosis. A major limitation of (6) is that it was developed
based on the Taylor expansion of the logarithm of the signal at $b=0$, as such
b-values should be chosen in the neighbourhood of b-value = $0$ (Yang et al.,
2022; Kiselev, 2010). Hence, to estimate diffusivity and kurtosis, Jensen and
Helpern (Jensen and Helpern, 2010) suggested the use of three different
b-values (such as $0,1000,2000\text{ s}/\text{mm}^{2}$) and the maximum
b-value should be in the range $2000\text{ s}/\text{mm}^{2}$ to $3000\text{
s}/\text{mm}^{2}$ for brain studies. Subsequently, the optimal maximum b-value
was found to be dependent on the tissue types and specific pathologies, which
makes the experimental design optimal for a whole brain challenging (Chuhutin
et al., 2017). The procedure for fitting kurtosis and diffusivity tensors is
also not trivial, and a variety of fitting procedures are currently in use. We
refer readers to the descriptive and comparative studies for detail on the
implementation and comparison of methods (Veraart et al., 2011b, a; Chuhutin
et al., 2017).
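One common fitting strategy for (6), among the several alluded to above, is a quadratic least-squares fit of $\ln S$ against $b$; a minimal noise-free sketch:

```python
import numpy as np

def fit_dki(b: np.ndarray, S: np.ndarray):
    """Fit ln S = ln S0 - b*D + (1/6) b^2 D^2 K, Eq. (6), by least squares."""
    c2, c1, c0 = np.polyfit(b, np.log(S), 2)  # coefficients, highest power first
    D = -c1
    K = 6.0 * c2 / D**2
    return np.exp(c0), D, K

# Synthetic check with D = 1e-3 mm^2/s and K = 0.8 (illustrative values)
b = np.array([0.0, 500.0, 1000.0, 1500.0, 2000.0])
S = np.exp(-b * 1e-3 + (1.0 / 6.0) * (b * 1e-3) ** 2 * 0.8)
S0, D, K = fit_dki(b, S)
print(S0, D, K)  # recovers 1.0, 1e-3, 0.8
```

Note the b-values stay below the $\sim 2500\text{ s}/\text{mm}^{2}$ ceiling discussed in the text, as required by the cumulant expansion underlying (6).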
#### 4.1.3 Mean kurtosis from the sub-diffusion model
Yang et al. (2022) established that the traditional DKI model corresponds to
the first two terms in the expansion of the sub-diffusion model:
$S(b)=S_{0}E_{\beta}(-bD_{SUB})=S_{0}\exp\left(-bD^{*}+\frac{1}{6}b^{2}D^{*2}K^{*}+O(b^{3})\right),$
(7)
where diffusivity, $D^{*}$, and kurtosis, $K^{*}$, are computed directly via
sub-diffusion parameters $D_{SUB}$ and $\beta$:
$D^{*}=\frac{D_{SUB}}{\Gamma(1+\beta)},$ (8)
$K^{*}=6\frac{\Gamma^{2}(1+\beta)}{\Gamma(1+2\beta)}-3,$ (9)
where $D_{SUB}$ is defined in (5). Note the mean kurtosis expression in (9)
was also derived by Ingo et al. (Ingo et al., 2015) using a different method.
Their derivation follows the definition of kurtosis, $K=\langle
x^{4}\rangle/\langle x^{2}\rangle^{2}-3$, i.e., by computing the fourth moment
$\langle x^{4}\rangle$ and the second moment $\langle x^{2}\rangle$ based on
the sub-diffusion equation (1).
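Equations (8) and (9) translate directly into code; note that $K^{*}$ vanishes at $\beta=1$, recovering Gaussian diffusion:

```python
import math

def kurtosis_star(beta: float) -> float:
    """Mean kurtosis K* = 6*Gamma(1+beta)^2 / Gamma(1+2*beta) - 3, Eq. (9)."""
    return 6.0 * math.gamma(1.0 + beta) ** 2 / math.gamma(1.0 + 2.0 * beta) - 3.0

def diffusivity_star(D_sub: float, beta: float) -> float:
    """Apparent diffusivity D* = D_SUB / Gamma(1+beta), Eq. (8)."""
    return D_sub / math.gamma(1.0 + beta)

print(kurtosis_star(1.0))   # 0.0: Gaussian diffusion has zero excess kurtosis
print(kurtosis_star(0.85))  # positive excess kurtosis for sub-diffusion
```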
### 4.2 Connectome 1.0 human brain DW-MRI data
The DW-MRI dataset provided by Tian et al. (2022) was used in this study. The
publicly available data were collected using the Connectome 1.0 scanner for 26
healthy participants. The first seven subjects had a scan-rescan pair
available. We qualitatively evaluated these seven datasets and chose the six
with consistent diffusion encoding directions; Subject 2 had 60 instead of 64
diffusion encoding directions and was hence omitted from this study. The
$2\times 2\times 2\text{ mm}^{3}$ resolution data were obtained using two
diffusion times ($\Delta=19,49\,\text{ ms}$) with a pulse duration of
$\delta=8\,\text{ ms}$ and $G=31,68,105,142,179,216,253,290\,\text{ mT/m}$,
respectively generating b-values = $50,350,800,1500,2400,$
$3450,4750,6000\,\text{ s}/\text{mm}^{2}$ for $\Delta=19\,\text{ ms}$, and
b-values = $200,950,2300,4250,6750,9850,13500,17800\,\text{ s}/\text{mm}^{2}$
for $\Delta=49\,\text{ ms}$, according to b-value $=(\gamma\delta
G)^{2}(\Delta-\delta/3)$. 32 diffusion encoding directions were uniformly
distributed on a sphere for $b<2400\,\text{ s}/\text{mm}^{2}$ and 64 uniform
directions were used for $b\geq 2400\,\text{ s}/\text{mm}^{2}$.
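The b-value relation above can be evaluated directly; the Python sketch below assumes the standard proton gyromagnetic ratio for $\gamma$. Note that scanner-reported b-values include further corrections (e.g., for finite gradient ramp times), so the rounded nominal values quoted above need not match these idealised numbers exactly:

```python
import math

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def b_value(G, delta, Delta):
    """Idealised pulsed-gradient b-value in s/mm^2:
    b = (gamma * delta * G)^2 * (Delta - delta / 3),
    with G in T/m and delta, Delta in s. The factor 1e-6 converts
    s/m^2 to s/mm^2. Real sequences apply additional pulse-shape
    corrections, so scanner-reported values can differ slightly."""
    return (GAMMA * delta * G) ** 2 * (Delta - delta / 3.0) * 1e-6
```

For instance, $G=290\text{ mT/m}$, $\delta=8\text{ ms}$ and $\Delta=19\text{ ms}$ give a value close to the quoted $6000\text{ s}/\text{mm}^{2}$, and increasing $\Delta$ to $49\text{ ms}$ scales b by $(\Delta-\delta/3)$ as expected.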
The FreeSurfer’s segmentation labels as part of the dataset were used for
brain-region specific analyses. Tian et al. (2022) provided the white matter
averaged group SNR ($23.10\pm 2.46$), computed from 50 interspersed b-value
$=0\text{ s}/\text{mm}^{2}$ images for each subject. Both magnitude and the
real part of the DW-MRI were provided. Based on an in-depth analysis, the use
of the real part of the DW-MRI data was recommended, since physiological
noise, by nature, follows a Gaussian distribution (Gudbjartsson and Patz,
1995).
### 4.3 Simulated DW-MRI data at specific b-values
DW-MRI data were simulated to establish (i) the correspondence between actual
versus fitted mean kurtosis using the traditional DKI and sub-diffusion models
based on various choices for $\Delta$, and (ii) to investigate the impact of
SNR levels and sub-sampling of b-values on the mean kurtosis estimate. The DW-
MRI signal was simulated using the sub-diffusion model (3) with random
Gaussian noise added to every normalised DW-MRI signal instance:
$S(D_{\beta},\beta,q,\bar{\Delta})=E_{\beta}(-D_{\beta}q^{2}\bar{\Delta}^{\beta})+N(0,\sigma^{2}),$
(10)
where $N(0,\sigma^{2})$ is white noise with mean of zero and standard
deviation of $\sigma$ according to the normal distribution.
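The Mittag-Leffler function in (10) has no elementary closed form; a minimal Python sketch using the truncated power series is given below. This is an illustration only (adequate for moderate arguments), not the fitting code used in this work:

```python
import math
import random

def ml_neg(beta, x, n_terms=120):
    """Truncated power series for the Mittag-Leffler function,
    E_beta(-x) = sum_k (-x)^k / Gamma(beta*k + 1). Adequate in double
    precision for moderate x (roughly x < 10); dedicated algorithms
    are required for larger arguments."""
    return sum((-x) ** k / math.gamma(beta * k + 1.0) for k in range(n_terms))

def simulate_signal(D_beta, beta, q, Delta_eff, sigma, rng):
    """One noisy normalised DW-MRI signal instance per Eq. (10)."""
    clean = ml_neg(beta, D_beta * q ** 2 * Delta_eff ** beta)
    return clean + rng.gauss(0.0, sigma)
```

For $\beta=1$ the series reduces to the exponential, $E_{1}(-x)=e^{-x}$, which provides a convenient correctness check.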
Two aspects influence $\sigma$ in the case of real-valued DW-MRI data. These
include the SNR achieved with a single diffusion encoding direction (i.e.,
Connectome 1.0 DW-MRI data SNR was derived using only b-value $=0\text{
s}/\text{mm}^{2}$ data), and the number of diffusion encoding directions in
each b-shell across which the powder average is computed:
$\sigma=\frac{1}{\mathrm{SNR}\sqrt{N_{\mathrm{DIR}}}},$ (11)
where $N_{\mathrm{DIR}}$ is the number of diffusion encoding directions for
each b-shell and assuming it is consistent across b-shells. The $\sigma$ for
the Connectome 1.0 data is approximately $1/(23.10\times 8)=0.0054$ based on
$64$ diffusion encoding directions. SNR levels of 5, 10 and 20 for the
simulation study can therefore be achieved by changing only $\sigma$ while
keeping $N_{\mathrm{DIR}}=64$. As such, $\sigma_{\mathrm{SNR}=5}=0.0250$,
$\sigma_{\mathrm{SNR}=10}=0.0125$ and $\sigma_{\mathrm{SNR}=20}=0.0063$.
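Eq. (11) is straightforward to verify numerically; the sketch below reproduces the $\sigma$ values above (the quoted $\sigma_{\mathrm{SNR}=20}=0.0063$ is the rounded value of $1/160=0.00625$):

```python
import math

def powder_sigma(snr, n_dir=64):
    """Noise standard deviation of the powder-averaged signal,
    Eq. (11): sigma = 1 / (SNR * sqrt(N_DIR))."""
    return 1.0 / (snr * math.sqrt(n_dir))
```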
Three simulation experiments were carried out at various SNR levels. The first
simulation experiment was to examine the effect of the number of diffusion
times on the accuracy of the parameter fitting for idealised grey and white
matter cases. In (10) the choices of $D_{\beta}=3\times 10^{-4}$$\text{
mm}^{2}/\text{s}^{\beta}$, $\beta=0.75$, and $D_{\beta}=5\times
10^{-4}$$\text{ mm}^{2}/\text{s}^{\beta}$, $\beta=0.85$, were made for white
matter and grey matter, respectively. These two distinct $\beta$s led to
$K^{*}$ of $0.8125$ and $0.4733$ using (9). Diffusion times were chosen from
the range $\Delta_{i}\in[\delta,\ \delta+50]\text{ ms}$, where the diffusion
pulse length was set to $\delta=8\text{ ms}$ to match the Connectome 1.0
data. A minimum
required separation between any two $\Delta$s was enforced, namely
$30/(n-1)\text{ ms}$, where $30$ corresponds to the $30\text{ ms}$ difference
between the $\Delta$s for the Connectome 1.0 DW-MRI data, and $n$ is the
number of
distinct diffusion times simulated. We considered as many as five distinct
diffusion times. Simulations were conducted by randomly selecting sets of
$\Delta$s for $1000$ instances, and then generating individual DW-MRI
simulated signals using (10), before fitting for $D_{\beta}$ and $\beta$, from
which $K^{*}$ was computed using (9). For the case of two diffusion times,
suggestions on the separation between them were given based on the goodness-
of-fit of the model.
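The constrained random selection of diffusion times can be sketched as a simple rejection sampler (an illustration under the stated constraints; the exact sampler used in the experiments is not specified in the text):

```python
import random

def sample_deltas(n, delta=8.0, span=50.0, rng=None):
    """Rejection-sample n diffusion times (ms) from [delta, delta + span]
    while enforcing the minimum pairwise separation 30/(n-1) ms used in
    the first simulation experiment."""
    rng = rng or random.Random(0)
    min_sep = 30.0 / (n - 1)
    while True:
        ds = sorted(rng.uniform(delta, delta + span) for _ in range(n))
        if all(b - a >= min_sep for a, b in zip(ds, ds[1:])):
            return ds
```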
The second simulation experiment was to investigate the effect of the number
of diffusion times on the accuracy of parameter fitting using simulated data
with random values of $D_{\beta}$ and $\beta$. The domains were restricted to
$D_{\beta}\in[10^{-4},10^{-3}]$ in the unit of $\text{
mm}^{2}/\text{s}^{\beta}$ and $\beta\in[0.5,1]$, corresponding to
$K^{*}\in[0,1.7124]$. For the case of two diffusion times, suggestions on the
separation between them were given based on the goodness-of-fit of the model.
The third simulation experiment was to study b-value sub-sampling under various
SNR levels (SNR = 5, 10 and 20). We set the b-values in the simulated data the
same as those used to acquire the Connectome 1.0 dataset. At each SNR level,
we selected combinations of two, three and four b-values, irrespective of the
spacing between them and of the diffusion time used to generate each b-value.
Essentially, we explored all possible sets of b-values for the three
regimes, resulting in 120, 560 and 1820 combinations, respectively. A
goodness-of-fit measure for model fitting was used to make comparisons between
the different b-value combinations.
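The combination counts follow directly from choosing subsets of the 16 acquired b-values, which can be enumerated with the standard library:

```python
from math import comb
from itertools import combinations

# The 16 nominal b-values (s/mm^2) acquired across the two diffusion times.
b_values = [50, 350, 800, 1500, 2400, 3450, 4750, 6000,       # Delta = 19 ms
            200, 950, 2300, 4250, 6750, 9850, 13500, 17800]   # Delta = 49 ms

# All subsets of two, three and four b-values, as explored in the
# third simulation experiment; the counts are C(16, k).
subsets = {k: list(combinations(b_values, k)) for k in (2, 3, 4)}
counts = {k: len(v) for k, v in subsets.items()}
```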
### 4.4 SNR reduction by downsampling diffusion encoding directions
We performed SNR reduction of the Connectome 1.0 DW-MRI data by downsampling
of diffusion directions in each b-shell. The multiple-subsets-from-multiple-
sets (P-D-MM) subsampling algorithm provided in DMRITool (Cheng et al., 2018)
was applied to the b-vectors provided with the Connectome 1.0 DW-
MRI data. Note that the b-shells contained $32$ directions if $b<2400\text{
s}/\text{mm}^{2}$, and $64$ directions if $b\geq 2400\text{ s}/\text{mm}^{2}$.
We consider the full dataset to correspond to SNR $=20$. The SNR $=10$ data
were constructed by downsampling to eight non-collinear diffusion encoding
directions in each b-shell, and three directions were required for the SNR
$=6$ data. In
the downsampled data, each diffusion encoding direction was coupled with the
direction of opposite polarity (i.e., SNR = 10 had sixteen measurements for
each b value, and SNR = 6 had six).
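The quoted SNR levels follow from the square-root scaling of the powder average with the number of averaged measurements (cf. Eq. (11)); a minimal check, assuming SNR $=20$ for the full 64-direction data and the 16 and 6 measurements per shell described above:

```python
import math

def effective_snr(snr_full, n_full, n_sub):
    """SNR of the powder-averaged signal scales with the square root
    of the number of averaged measurements."""
    return snr_full * math.sqrt(n_sub / n_full)
```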
### 4.5 Parameter estimation
#### 4.5.1 Standard DKI model
The maximum b-value used to acquire the DW-MRI for standard DKI model fitting
is limited to the range $2000\text{ s}/\text{mm}^{2}$ to $3000\text{
s}/\text{mm}^{2}$ due to the quadratic form of (6). We opted to use DW-MRI
data generated with b-values $=50,350,800,1500,2400\text{ s}/\text{mm}^{2}$
using $\Delta=19\text{ ms}$, and b-values $=200,950,2300\text{
s}/\text{mm}^{2}$ using $\Delta=49\text{ ms}$. Note that the apparent
diffusion coefficient $D_{DKI}$ in (6) is time dependent, as can be deduced
from (5) and (8). Therefore, standard DKI fitting can only be applied to
DW-MRI data generated using a single diffusion time. The model in (6) was
fitted in a
voxelwise manner to the powder averaged (i.e., geometric mean over diffusion
encoding directions, often referred to as trace-weighted) DW-MRI data using
the $\verb'lsqcurvefit'$ function in MATLAB (Mathworks, Version 2022a) using
the trust-region reflective algorithm. Optimisation function specific
parameters were set to $\verb'TolFun'=10^{-4}$ and $\verb'TolX'=10^{-6}$.
Parameters were bounded to the ranges of $D_{DKI}>0$ and $0<K_{DKI}\leq 3$.
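For noiseless data, the quadratic exponent of (6), $\ln S=-bD+\tfrac{1}{6}b^{2}D^{2}K$ (taking $S_{0}=1$), can alternatively be linearised in log-signal space and solved by ordinary least squares. The Python sketch below is a simplified, unconstrained alternative to the bounded nonlinear fit described above, not the implementation used in this work:

```python
import math

def fit_dki_loglinear(b_vals, signals):
    """Fit ln(S) = c1*b + c2*b^2 by least squares (no intercept;
    S0 = 1 assumed), then recover D = -c1 and K = 6*c2 / D^2."""
    y = [math.log(s) for s in signals]
    # 2x2 normal equations for the design columns b and b^2.
    s11 = sum(b * b for b in b_vals)
    s12 = sum(b ** 3 for b in b_vals)
    s22 = sum(b ** 4 for b in b_vals)
    t1 = sum(b * yi for b, yi in zip(b_vals, y))
    t2 = sum(b * b * yi for b, yi in zip(b_vals, y))
    det = s11 * s22 - s12 * s12
    c1 = (t1 * s22 - t2 * s12) / det
    c2 = (s11 * t2 - s12 * t1) / det
    D = -c1
    K = 6.0 * c2 / (D * D)
    return D, K

# Noiseless check: generate signals at D = 1e-3 mm^2/s, K = 0.8.
b_set = [50, 350, 800, 1500, 2400]
sig = [math.exp(-b * 1e-3 + (b ** 2) * 1e-6 * 0.8 / 6.0) for b in b_set]
D_hat, K_hat = fit_dki_loglinear(b_set, sig)
```

With noisy data, log-linearisation biases the estimates, which is one reason bounded nonlinear fitting on the signal itself is preferred in practice.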
#### 4.5.2 Sub-diffusion model
For the single diffusion time case, the sub-diffusion model in (3) was fitted
to the powder averaged DW-MRI data in a voxelwise manner using the same MATLAB
functions as in the previous section. For each subject, spatially resolved
maps of $D_{\beta}$ and $\beta$ were generated. The fitting strategy for
multiple diffusion time data is to solve:
$(D_{\beta},\beta)=\operatorname*{argmin}_{D_{\beta},\beta}\sum_{\begin{subarray}{c}i=1,2,\ldots,n,\\ j=1,2,\ldots,J_{i}\end{subarray}}\left[S_{ij}-SUB\left(\bar{\Delta}_{i},q_{j};D_{\beta},\beta\right)\right]^{2}$
(12)
where $S_{ij}$ is the signal at the $i$th effective diffusion time
$\bar{\Delta}_{i}$ and the $j$th q-space parameter $q_{j}$, $SUB$ is the sub-
diffusion model (3) at $\bar{\Delta}_{i}$ and $q_{j}$ for a given set of
($D_{\beta},\beta$), $n$ is the number of diffusion times, and $J_{i}$ is the
number of $q$-values corresponding to $\bar{\Delta}_{i}$ in data acquisition.
This objective function allows incorporation of an arbitrary number of
diffusion times, each with an arbitrary number of $q$-values. Parameters were
bounded to the ranges of $0<\beta\leq 1$ and $D_{\beta}>0$. Model parameters
were found to be insensitive to the choice of initial values. Parameters
$D^{*}$ and $K^{*}$ were computed analytically using the estimated $D_{\beta}$
and $\beta$ according to (8) and (9).
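The joint objective (12) can be illustrated with a coarse, standard-library-only grid search in Python. This is a sketch with hypothetical $q$ and $\bar{\Delta}$ values, not the bounded nonlinear solver used in this work:

```python
import math

def ml_neg(beta, x, n_terms=120):
    # Truncated Mittag-Leffler series E_beta(-x); adequate for moderate x.
    return sum((-x) ** k / math.gamma(beta * k + 1.0) for k in range(n_terms))

def sub_signal(D_beta, beta, q, Delta_eff):
    # Normalised sub-diffusion signal, Eq. (3): E_beta(-D_beta q^2 Delta^beta).
    return ml_neg(beta, D_beta * q * q * Delta_eff ** beta)

def fit_joint(data, D_grid, beta_grid):
    """Grid search over (D_beta, beta) minimising the joint
    sum-of-squares objective of Eq. (12) across all (Delta, q) pairs."""
    best = None
    for D in D_grid:
        for beta in beta_grid:
            sse = sum((s - sub_signal(D, beta, q, dt)) ** 2
                      for dt, q, s in data)
            if best is None or sse < best[0]:
                best = (sse, D, beta)
    return best[1], best[2]

# Illustrative acquisition: two effective diffusion times (s) and three
# hypothetical q^2 values (mm^-2) per diffusion time.
D_grid = [i * 1e-4 for i in range(1, 11)]        # mm^2 / s^beta
beta_grid = [0.50 + 0.05 * i for i in range(11)]
D_true, beta_true = D_grid[4], beta_grid[7]      # 5e-4 and 0.85
data = [(dt, math.sqrt(q2), sub_signal(D_true, beta_true, math.sqrt(q2), dt))
        for dt in (0.0163, 0.0463) for q2 in (5e3, 1e4, 2e4)]
D_hat, beta_hat = fit_joint(data, D_grid, beta_grid)
```

On noiseless on-grid data the search recovers the generating parameters exactly; in practice a continuous optimiser replaces the grid.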
### 4.6 Goodness-of-fit and region-based statistical analysis
The coefficient of determination, $R^{2}$, was used to assess how well the
mean kurtosis values in the simulation could be fitted. It is generally
accepted that an $R^{2}$ value above $0.5$ should be achieved, while a value
of $1.0$ is unrealistic for data with realistic noise. Negative values imply
the model is a very poor fit. For human data, we computed the
region specific mean and standard deviation for each subject, and reported the
weighted mean and pooled standard deviation along with the coefficient of
variation (CV), defined as the ratio of the standard deviation to the mean.
The weights were the number of voxels in the associated regions in each
subject. Following Barrick et al. (2020), the tissue contrast is computed as
$TC=|\mu_{WM}-\mu_{GM}|/\sqrt{\sigma_{WM}^{2}+\sigma_{GM}^{2}}$, where
$\mu_{WM}$ and $\mu_{GM}$ are the mean parameter values in white and grey
matter, and $\sigma_{WM}$ and $\sigma_{GM}$ are the standard deviations of
the parameter values. Higher TC values indicate greater tissue contrast.
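The tissue contrast measure is a simple function of the region statistics; the sketch below illustrates it with the idealised simulation kurtosis values, using hypothetical standard deviations chosen for illustration only:

```python
import math

def tissue_contrast(mu_wm, sigma_wm, mu_gm, sigma_gm):
    """TC = |mu_WM - mu_GM| / sqrt(sigma_WM^2 + sigma_GM^2)
    (Barrick et al., 2020); higher TC means better WM/GM separation."""
    return abs(mu_wm - mu_gm) / math.sqrt(sigma_wm ** 2 + sigma_gm ** 2)

# Idealised K* values (0.8125 WM, 0.4733 GM); the 0.1 spreads are hypothetical.
tc = tissue_contrast(0.8125, 0.1, 0.4733, 0.1)
```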
Human data were analysed voxelwise, and also based on regions-of-interest. We
considered three categories of brain regions, namely sub-cortical grey matter
(scGM), cortical grey matter (cGM) and white matter (WM). The scGM region
constituted the thalamus (FreeSurfer labels 10 and 49 for left and right
hemisphere), caudate (11, 50), putamen (12, 51) and pallidum (13, 52). The
cGM region comprised all cortical regions (labels 1000 to 2999), with the
fusiform (1007, 2007) and lingual (1013, 2013) regions also analysed
separately, while the WM category comprised white matter fibre regions from
the cerebral (2, 41), cerebellum (7, 46) and corpus
callosum (CC; 251 to 255) areas. The average number of voxels in each region
were 3986 (scGM), 53326 (cGM), 52121 (WM), 1634 (thalamus), 831 (caudate),
1079 (putamen), 443 (pallidum), 2089 (fusiform), 1422 (lingual), 48770
(cerebral WM), 2904 (cerebellum WM) and 447 (CC). For each brain region a
t-test was performed to test for differences in mean kurtosis population
means.
### 4.7 Scan-rescan analysis using intraclass correlation coefficient (ICC)
For each of the six subjects, both the first (scan) and second (rescan) scan
images were registered to the first scan images of Subject 1 using inbuilt
MATLAB (Mathworks, Version 2022a) functions ($\verb'imregtform'$ and
$\verb'imwarp'$). We used 3D affine registration to account for distortions
and warps common in DW-MRI data. Cubic spline interpolation was applied to
resample both the scan and rescan DW-MRI data for each subject onto Subject
1’s first scan data grid. The FreeSurfer labels for Subject 1’s first scan
were used for brain region analysis. The ICC measure was applied to assess
scan-rescan reproducibility of mean kurtosis, as described by Duval et al.
(2017) and Fan et al. (2021). An ICC histogram and
the mean and standard deviation descriptive statistics were generated for all
brain regions analysed.
### 4.8 Data and code availability statements
The Connectome 1.0 human brain DW-MRI data used in this study are part of the
MGH Connectome Diffusion Microstructure Dataset (CDMD) (Tian et al., 2022),
which is publicly available on the figshare repository
https://doi.org/10.6084/m9.figshare.c.5315474. The MATLAB code generated for
the simulation study, parameter fitting, and optimisation of b-value sampling
is openly available at https://github.com/m-farquhar/SubdiffusionDKI.
## 5 Conclusion
The utility of diffusional kurtosis imaging for inferring information on
tissue microstructure was described decades ago. Continued investigations in
the DW-MRI field have led to studies clearly describing the importance of mean
kurtosis mapping to clinical diagnosis, treatment planning and monitoring
across a vast range of diseases and disorders. Our research on robust, fast,
and accurate mapping of mean kurtosis using the sub-diffusion mathematical
framework promises new opportunities for this field by providing a clinically
useful and routinely applicable mechanism for mapping mean kurtosis in the
brain. Future studies may derive value from our suggestions and apply the
methods outside the brain for broader clinical utilisation.
## CRediT authorship contribution statement
Megan E. Farquhar: Methodology; Software; Validation; Formal analysis; Writing
- Original Draft; Writing - Review & Editing; Visualization; Qianqian Yang:
Conceptualization; Methodology; Software; Validation; Formal analysis; Writing
- Original Draft; Writing - Review & Editing; Supervision; Project
administration; Funding acquisition; Viktor Vegh: Conceptualization;
Methodology; Software; Validation; Formal analysis; Writing - Original Draft;
Writing - Review & Editing; Supervision; Project administration; Funding
acquisition;
## Acknowledgement
Qianqian Yang and Viktor Vegh acknowledge the financial support from the
Australian Research Council (ARC) Discovery Project scheme (DP190101889) for
funding a project on mathematical model development and MRI-based
investigations into tissue microstructure in the human brain. Qianqian Yang
also acknowledges the support from the ARC Discovery Early Career Research
Award (DE150101842) for funding a project on new mathematical models for
capturing heterogeneity in human brain tissue. Authors also thank the members
of the Anomalous Relaxation and Diffusion Study (ARDS) group for many
interesting discussions involving diffusion MRI.
## References
* Aggarwal et al. (2020) Aggarwal M, Smith MD, Calabresi PA. Diffusion-time dependence of diffusional kurtosis in the mouse brain. Magnetic Resonance in Medicine. 2020 feb; 84(3). doi: https://doi.org/10.1002/mrm.28189.
* Barrick et al. (2020) Barrick TR, Spilling CA, Ingo C, Madigan J, Isaacs JD, Rich P, Jones TL, Magin RL, Hall MG, Howe FA. Quasi-diffusion magnetic resonance imaging (QDI): A fast, high b-value diffusion imaging technique. NeuroImage. 2020 may; 211:116606. doi: 10.1016/j.neuroimage.2020.116606.
* Cheng et al. (2018) Cheng J, Shen D, Yap PT, Basser PJ. Single- and multiple-shell uniform sampling schemes for diffusion MRI using spherical codes. IEEE Transactions on Medical Imaging. 2018 jan; 37(1):185–199. doi: 10.1109/tmi.2017.2756072.
* Chuhutin et al. (2017) Chuhutin A, Hansen B, Jespersen SN. Precision and accuracy of diffusion kurtosis estimation and the influence of b-value selection. NMR in Biomedicine. 2017 aug; 30(11):e3777. doi: 10.1002/nbm.3777.
* Duval et al. (2017) Duval T, Smith V, Stikov N, Klawiter EC, Cohen-Adad J. Scan–rescan of axcaliber, macromolecular tissue volume, and g-ratio in the spinal cord. Magnetic Resonance in Medicine. 2017 oct; 79(5):2759–2765. doi: 10.1002/mrm.26945.
* Fan et al. (2021) Fan Q, Polackal MN, Tian Q, Ngamsombat C, Nummenmaa A, Witzel T, Klawiter EC, Huang SY. Scan-rescan repeatability of axonal imaging metrics using high-gradient diffusion MRI and statistical implications for study design. NeuroImage. 2021 oct; 240:118323. doi: 10.1016/j.neuroimage.2021.118323.
* Goryawala et al. (2022) Goryawala M, Mellon EA, Shim H, Maudsley AA. Mapping early tumor response to radiotherapy using diffusion kurtosis imaging. The Neuroradiology Journal. 2022 aug; p. 197140092211222. doi: 10.1177/19714009221122204.
* Gudbjartsson and Patz (1995) Gudbjartsson H, Patz S. The Rician distribution of noisy MRI data. Magnetic Resonance in Medicine. 1995 dec; 34(6):910–914. doi: 10.1002/mrm.1910340618.
* Guo et al. (2022a) Guo J, Sun W, Dong C, Wu Z, Li X, Zhou R, Xu W. Intravoxel incoherent motion imaging combined with diffusion kurtosis imaging to assess the response to radiotherapy in a rabbit VX2 malignant bone tumor model. Cancer Imaging. 2022 sep; 22(1). doi: 10.1186/s40644-022-00488-w.
* Guo et al. (2022b) Guo M, Shen B, Li J, Huang X, Hu J, Wei X, Wang S, Yuan R, He C, Li Y. Diffusion abnormality in temporal lobe epilepsy patients with sleep disorders: A diffusion kurtosis imaging study. Frontiers in Psychiatry. 2022 may; 13. doi: 10.3389/fpsyt.2022.885477.
* Hansen et al. (2013) Hansen B, Lund TE, Sangill R, Jespersen SN. Experimentally and computationally fast method for estimation of a mean kurtosis. Magnetic Resonance in Medicine. 2013 apr; 69(6):1754–1760. doi: 10.1002/mrm.24743.
* Hansen et al. (2016) Hansen B, Shemesh N, Jespersen SN. Fast imaging of mean, axial and radial diffusion kurtosis. Neuroimage. 2016 nov; 142:381–393. doi: 10.1016/j.neuroimage.2016.08.022.
* He et al. (2022) He Y, Chen H, Zhang H, Grimm R, Zhao C, Guo X, Liu Y, Yuan Z. Optimization of scan parameters to reduce acquisition time for RESOLVE-based diffusion kurtosis imaging (DKI) in nasopharyngeal carcinoma (NPC). The British Journal of Radiology. 2022 aug; 95(1136). doi: 10.1259/bjr.20210641.
* Heidemann et al. (2010) Heidemann RM, Porter DA, Anwander A, Feiweier T, Heberlein K, Knösche TR, Turner R. Diffusion imaging in humans at 7T using readout-segmented EPI and GRAPPA. Magnetic Resonance in Medicine. 2010 jun; 64(1):9–14. doi: 10.1002/mrm.22480.
* Henriques et al. (2021) Henriques RN, Jespersen SN, Jones DK, Veraart J. Toward more robust and reproducible diffusion kurtosis imaging. Magnetic Resonance in Medicine. 2021 apr; 86(3):1600–1613. doi: 10.1002/mrm.28730.
* Hu et al. (2022) Hu R, Kim H, Kim J, Allen JW, Sun PZ. Fast diffusion kurtosis imaging in acute ischemic stroke shows mean kurtosis-diffusivity mismatch. Journal of Neuroimaging. 2022 apr; 32(5):941–946. doi: 10.1111/jon.13000.
* Huang et al. (2021a) Huang N, Chen Y, She D, Xing Z, Chen T, Cao D. Diffusion kurtosis imaging and dynamic contrast-enhanced MRI for the differentiation of parotid gland tumors. European Radiology. 2021 oct; 32(4):2748–2759. doi: 10.1007/s00330-021-08312-y.
* Huang et al. (2021b) Huang SY, Witzel T, Keil B, Scholz A, Davids M, Dietz P, Rummert E, Ramb R, Kirsch JE, Yendiki A, Fan Q, Tian Q, Ramos-Llordén G, Lee HH, Nummenmaa A, Bilgic B, Setsompop K, Wang F, Avram AV, Komlosh M, et al. Connectome 2.0: Developing the next-generation ultra-high gradient strength human MRI scanner for bridging studies of the micro-, meso- and macro-connectome. NeuroImage. 2021 nov; 243:118530. doi: 10.1016/j.neuroimage.2021.118530.
* Ianus et al. (2021) Ianus A, Alexander DC, Zhang H, Palombo M. Mapping complex cell morphology in the grey matter with double diffusion encoding MR: A simulation study. NeuroImage. 2021; 241. doi: https://doi.org/10.1016/j.neuroimage.2021.118424.
* Ingo et al. (2014) Ingo C, Magin R, Parrish T. New insights into the fractional order diffusion equation using entropy and kurtosis. Entropy. 2014 nov; 16(11):5838–5852. doi: 10.3390/e16115838.
* Ingo et al. (2015) Ingo C, Sui Y, Chen Y, Parrish TB, Webb AG, Ronen I. Parsimonious continuous time random walk models and kurtosis for diffusion in magnetic resonance of biological tissue. Frontiers in Physics. 2015 mar; 3:11. doi: 10.3389/fphy.2015.00011.
* Istratov and Vyvenko (1999) Istratov AA, Vyvenko OF. Exponential analysis in physical phenomena. Review of Scientific Instruments. 1999 feb; 70(2):1233–1257. doi: 10.1063/1.1149581.
* Jelescu et al. (2022) Jelescu IO, de Skowronski A, Geffroy F, Palombo M, Novikov DS. Neurite Exchange Imaging (NEXI): A minimal model of diffusion in gray matter with inter-compartment water exchange. NeuroImage. 2022 aug; 256:119277. doi: 10.1016/j.neuroimage.2022.119277.
* Jensen and Helpern (2010) Jensen JH, Helpern JA. MRI quantification of non-Gaussian water diffusion by kurtosis analysis. NMR in Biomedicine. 2010 may; 23(7):698–710. doi: 10.1002/nbm.1518.
* Jensen et al. (2005) Jensen JH, Helpern JA, Ramani A, Lu H, Kaczynski K. Diffusional kurtosis imaging: The quantification of non-Gaussian water diffusion by means of magnetic resonance imaging. Magnetic Resonance in Medicine. 2005; 53(6):1432–1440. doi: 10.1002/mrm.20508.
* Kiselev (2010) Kiselev VG. The cumulant expansion: An overarching mathematical framework for understanding diffusion NMR. In: _Diffusion MRI_ Oxford University Press; 2010.p. 152–168. doi: 10.1093/med/9780195369779.003.0010.
* Kraff and Quick (2017) Kraff O, Quick HH. 7T: Physics, safety, and potential clinical applications. Journal of Magnetic Resonance Imaging. 2017 apr; 46(6):1573–1589. doi: 10.1002/jmri.25723.
* Kuder et al. (2011) Kuder TA, Stieltjes B, Bachert P, Semmler W, Laun FB. Advanced fit of the diffusion kurtosis tensor by directional weighting and regularization. Magnetic Resonance in Medicine. 2011 aug; 67(5):1401–1411. doi: 10.1002/mrm.23133.
* Le Bihan and Johansen-Berg (2012) Le Bihan D, Johansen-Berg H. Diffusion MRI at 25: Exploring brain tissue structure and function. NeuroImage. 2012 jun; 61(2):324–341. doi: 10.1016/j.neuroimage.2011.11.006.
* Le Bihan et al. (2001) Le Bihan D, Mangin JF, Poupon C, Clark CA, Pappata S, Molko N, Chabriat H. Diffusion tensor imaging: Concepts and applications. Journal of Magnetic Resonance Imaging. 2001; 13(4):534–546. doi: 10.1002/jmri.1076.
* Lebel et al. (2019) Lebel C, Treit S, Beaulieu C. A review of diffusion MRI of typical white matter development from early childhood to young adulthood. NMR in Biomedicine. 2019; 32(4):e3778. doi: 10.1002/nbm.3778.
* Lee et al. (2020) Lee HH, Papaioannou A, Novikov DS, Fieremans E. In vivo observation and biophysical interpretation of time-dependent diffusion in human cortical gray matter. NeuroImage. 2020; 222. doi: https://doi.org/10.1016/j.neuroimage.2020.117054.
* Li et al. (2022a) Li A, Yuan G, Hu Y, Shen Y, Hu X, Hu D, Li Z. Renal functional and interstitial fibrotic assessment with non-Gaussian diffusion kurtosis imaging. Insights into Imaging. 2022 apr; 13(1). doi: 10.1186/s13244-022-01215-6.
* Li et al. (2022b) Li HW, Yan GW, Yang J, Zhuo LH, Bhetuwal A, Long YJ, Feng X, Yao HC, Zou XX, Feng RH, Yang HF, Du Y. Quantitative analysis for detection and grading of hepatocellular carcinoma: Comparison of diffusion kurtosis imaging, intravoxel incoherent motion and conventional diffusion-weighted imaging. Oncology Letters. 2022 sep; 24(5). doi: 10.3892/ol.2022.13523.
* Li et al. (2022c) Li Q, Cao J, Liu X, Luo X, Su G, Wang D, Lin B. The diagnostic value of diffusion kurtosis imaging in Parkinson’s disease: a systematic review and meta-analysis. Annals of Translational Medicine. 2022 apr; 10(8):474–474. doi: 10.21037/atm-22-1461.
* Li et al. (2022d) Li Q, Cao B, Liu K, Sun H, Ding Y, Yan C, Wu PY, Dai C, Rao S, Zeng M, Jiang S, Zhou J. Detecting the muscle invasiveness of bladder cancer: An application of diffusion kurtosis imaging and tumor contact length. European Journal of Radiology. 2022 jun; 151:110329. doi: 10.1016/j.ejrad.2022.110329.
* Liu et al. (2021) Liu Y, Zhang GMY, Peng X, Li X, Sun H, Chen L. Diffusion kurtosis imaging as an imaging biomarker for predicting prognosis in chronic kidney disease patients. Nephrology Dialysis Transplantation. 2021 jul; 37(8):1451–1460. doi: 10.1093/ndt/gfab229.
* Magin et al. (2020) Magin RL, Hall MG, Karaman MM, Vegh V. Fractional calculus models of magnetic resonance phenomena: Relaxation and diffusion. Critical Reviews in Biomedical Engineering. 2020; 48(5):285–326. doi: 10.1615/critrevbiomedeng.2020033925.
* Maiter et al. (2021) Maiter A, Riemer F, Allinson K, Zaccagna F, Crispin-Ortuzar M, Gehrung M, McLean MA, Priest AN, Grist J, Matys T, Graves MJ, Gallagher FA. Investigating the relationship between diffusion kurtosis tensor imaging (DKTI) and histology within the normal human brain. Scientific Reports. 2021; 11. doi: https://doi.org/10.1038/s41598-021-87857-w.
* Maralakunte et al. (2022) Maralakunte M, Gupta V, Grover S, Ahuja CK, Sahoo S, Kishore K, Vyas S, Garg G, Singh P, Govind V. Cross-sectional analysis of whole-brain microstructural changes in adult patients with bipolar and unipolar depression by diffusion kurtosis imaging. The Neuroradiology Journal. 2022 jul; p. 197140092211144. doi: 10.1177/19714009221114446.
* Metzler and Klafter (2000) Metzler R, Klafter J. The random walk's guide to anomalous diffusion: a fractional dynamics approach. Physics Reports. 2000 dec; 339(1):1–77. doi: 10.1016/s0370-1573(00)00070-3.
* Olesen et al. (2022) Olesen JL, Østergaard L, Shemesh N, Jespersen SN. Diffusion time dependence, power-law scaling, and exchange in gray matter. NeuroImage. 2022 may; 251:118976. doi: 10.1016/j.neuroimage.2022.118976.
* Pohmann et al. (2016) Pohmann R, Speck O, Scheffler K. Signal-to-noise ratio and MR tissue parameters in human brain imaging at 3, 7, and 9.4 Tesla using current receive coil arrays. Magnetic Resonance in Medicine. 2016 mar; 75(2):801–809. doi: 10.1002/mrm.25677.
* Poot et al. (2010) Poot DHJ, den Dekker AJ, Achten E, Verhoye M, Sijbers J. Optimal experimental design for diffusion kurtosis imaging. IEEE Transactions on Medical Imaging. 2010 mar; 29(3):819–829. doi: 10.1109/tmi.2009.2037915.
* Shafto et al. (2014) Shafto MA, Tyler LK, Dixon M, Taylor JR, Rowe JB, Cusack R, Calder AJ, Marslen-Wilson WD, Duncan J, Dalgleish T, Henson RN, Brayne C, Matthews FE. The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study protocol: a cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing. BMC Neurology. 2014 oct; 14(1). doi: 10.1186/s12883-014-0204-1.
* Spilling et al. (2022) Spilling CA, Howe FA, Barrick TR. Optimization of quasi-diffusion magnetic resonance imaging for quantitative accuracy and time-efficient acquisition. Magnetic Resonance in Medicine. 2022 aug; 88(6):2532–2547. doi: 10.1002/mrm.29420.
* Tabesh et al. (2010) Tabesh A, Jensen JH, Ardekani BA, Helpern JA. Estimation of tensors and tensor-derived measures in diffusional kurtosis imaging. Magnetic Resonance in Medicine. 2010 oct; 65(3):823–836. doi: 10.1002/mrm.22655.
* Tian et al. (2022) Tian Q, Fan Q, Witzel T, Polackal MN, Ohringer NA, Ngamsombat C, Russo AW, Machado N, Brewer K, Wang F, Setsompop K, Polimeni JR, Keil B, Wald LL, Rosen BR, Klawiter EC, Nummenmaa A, Huang SY. Comprehensive diffusion MRI dataset for in vivo human brain microstructure mapping using 300 mT/m gradients. Scientific Data. 2022 jan; 9(1). doi: 10.1038/s41597-021-01092-6.
* Tournier (2019) Tournier JD. Diffusion MRI in the brain–Theory and concepts. Progress in Nuclear Magnetic Resonance Spectroscopy. 2019 jun; 112-113:1–16. doi: 10.1016/j.pnmrs.2019.03.001.
* Van Essen et al. (2013) Van Essen DC, Smith SM, Barch DM, Behrens TEJ, Yacoub E, Ugurbil K. The WU-Minn Human Connectome Project: An overview. NeuroImage. 2013 oct; 80:62–79. doi: 10.1016/j.neuroimage.2013.05.041.
* Veraart et al. (2011a) Veraart J, Hecke WV, Sijbers J. Constrained maximum likelihood estimation of the diffusion kurtosis tensor using a Rician noise model. Magnetic Resonance in Medicine. 2011 mar; 66(3):678–686. doi: 10.1002/mrm.22835.
* Veraart et al. (2011b) Veraart J, Poot DHJ, Hecke WV, Blockx I, der Linden AV, Verhoye M, Sijbers J. More accurate estimation of diffusion tensor parameters using diffusion kurtosis imaging. Magnetic Resonance in Medicine. 2011; 65(1):138–145. doi: 10.1002/mrm.22603.
* Wang et al. (2022) Wang ML, Wei XE, Yu MM, Li WB. Cognitive impairment in mild traumatic brain injury: a diffusion kurtosis imaging and volumetric study. Acta Radiologica. 2022 feb; 63(4):504–512. doi: 10.1177/0284185121998317.
* Wu and Cheung (2010) Wu EX, Cheung MM. MR diffusion kurtosis imaging for neural tissue characterization. NMR in Biomedicine. 2010 jul; 23(7):836–848. doi: 10.1002/nbm.1506.
* Yablonskiy and Sukstanskii (2010) Yablonskiy DA, Sukstanskii AL. Theoretical models of the diffusion weighted MR signal. NMR in Biomedicine. 2010 jun; 23(7):661–681. doi: 10.1002/nbm.1520.
* Yang et al. (2022) Yang Q, Reutens DC, Vegh V. Generalisation of continuous time random walk to anomalous diffusion MRI models with an age-related evaluation of human corpus callosum. NeuroImage. 2022 jan; p. 118903. doi: 10.1016/j.neuroimage.2022.118903.
* Zelinski et al. (2008) Zelinski AC, Angelone LM, Goyal VK, Bonmassar G, Adalsteinsson E, Wald LL. Specific absorption rate studies of the parallel transmission of inner-volume excitations at 7T. Journal of Magnetic Resonance Imaging. 2008 oct; 28(4):1005–1018. doi: 10.1002/jmri.21548.
* Zhou et al. (2022) Zhou Z, Chen Y, Zhao F, Sun Z, Zhu L, Yu H, Wang W. Predictive value of intravoxel incoherent motion combined with diffusion kurtosis imaging for breast cancer axillary lymph node metastasis: a retrospective study. Acta Radiologica. 2022 jun; p. 028418512211076. doi: 10.1177/02841851221107626.
* Zong et al. (2021) Zong F, Du J, Deng X, Chai X, Zhuo Y, Vegh AV, Xue R. Fast diffusion kurtosis mapping of human brain at 7 Tesla with hybrid principal component analyses. IEEE Access. 2021; 9:107965–107975. doi: 10.1109/access.2021.3100546.
# MVRackLay: Monocular Multi-View Layout Estimation for Warehouse Racks and
Shelves
Pranjali Pathre (corresponding author), Robotics Research Center, IIIT
Hyderabad, India Anurag Sahu Robotics Research Center, IIIT Hyderabad, India
Ashwin Rao Robotics Research Center, IIIT Hyderabad, India Avinash Prabhu
Robotics Research Center, IIIT Hyderabad, India Meher Shashwat Nigam Robotics
Research Center, IIIT Hyderabad, India Tanvi Karandikar Robotics Research
Center, IIIT Hyderabad, India Harit Pandya Toshiba Research, UK K. Madhava
Krishna Robotics Research Center, IIIT Hyderabad, India
###### Abstract
In this paper, we propose and showcase, for the first time, monocular multi-
view layout estimation for warehouse racks and shelves. Unlike typical layout
estimation methods, MVRackLay estimates multi-layered layouts, wherein each
layer corresponds to the layout of a shelf within a rack. Given a sequence of
images of a warehouse scene, a dual-headed Convolutional-LSTM architecture
outputs segmented racks, the front and the top view layout of each shelf
within a rack. With minimal effort, such an output is transformed into a 3D
rendering of all racks, shelves and objects on the shelves, giving an accurate
3D depiction of the entire warehouse scene in terms of racks, shelves and the
number of objects on each shelf. MVRackLay generalizes to a diverse set of
warehouse scenes with a varying number of objects on each shelf, a varying
number of shelves, and the presence of other such racks in the background.
Further,
MVRackLay shows superior performance vis-à-vis its single-view counterpart,
RackLay [1], in layout accuracy, quantified in terms of the mean IoU and mAP
metrics. We also showcase a multi-view stitching of the 3D layouts, resulting
in a representation of the warehouse scene with respect to a global reference
frame akin to a rendering of the scene from a SLAM pipeline. To the best of
our knowledge, this is the first such work to portray a 3D rendering of a
warehouse scene in terms of its semantic components - Racks, Shelves and
Objects - all from a single monocular camera.
## I INTRODUCTION
The need for warehouse automation grows by the day, and in the future, a fleet
of robots could manage an entire warehouse with little to no human
intervention [2]. Yet, almost 30% of warehouses operate without their staple
warehouse management systems [3].
In this paper, we address the thus far untackled problem of multi-view layout
estimation for all the visible racks in the image. We propose a
straightforward and effective network architecture MVRackLay (project page:
https://github.com/pranjali-pathre/vRacklay), which outputs the top-view and
front-view layouts of all shelves making up each rack, partly or wholly
visible in every frame of an input sequence of monocular RGB images of a
warehouse (these could be the frames of a video) (Fig. LABEL:fig:teaser). Note
that a rack may only be partially visible in a frame. The network learns
layouts in the canonical frame centered on the shelf, called the shelf-centric
layout.
An essential point to note is that the problem is not a direct application of
a standard formulation of object recognition, semantic segmentation or layout
estimation. While the above methods can be applied to objects on rack shelves
[4, 5], present methods cannot be adapted directly to localize rack shelves,
as shown in our baseline comparisons in Sec. V-D. While typical layout
formulations estimate layouts with reference to a single dominant plane (such
as the ground plane) [6], warehouse rack shelves take the form of disconnected
planar segments, each present at different heights above the ground plane.
Furthermore, they often appear occluded and diffused in warehouse scenes.
Thus, an important novelty of our formulation is the adaptation of deep
architectures to the problem of multi-view layout estimation over multiple
shelves and racks, as well as shelf contents.
MVRackLay leverages, using Convolutional LSTM layers, the temporal data across
images and extends the layout prediction problem to a wider scope in a
warehouse setting. Unlike RackLay [1], where the model predicts the layouts
only for the rack in focus, we design our model to predict the layouts for all
the racks in the image, whether visible fully or partially in frame.
Additionally, to leverage the temporal information across image sequences, we
propose the multiview and multilayer layout prediction mechanism for
MVRackLay, where layouts are predicted over a sequence of images. Furthermore,
through the downstream application of layout stitching across multiple views,
we demonstrate that an explicit 3D model of the warehouse can be reconstructed
from the predicted layouts.
Real-world public datasets for warehouses are few and not comprehensive. To
alleviate the issue of obtaining sufficient training data, we develop and
open-source a complete synthetic data generation pipeline WareSynth, an
extended and improved version of the pipeline introduced in [1]. Using domain
randomization, WareSynth can generate a huge variety of synthetic warehouses,
suitably emulating any real-world warehouse. The warehouse dataset generation
pipeline allows users to customize scenes as per their needs, automating both
the image-capturing process as well as the generation of ground-truth
annotations for these images. We train and evaluate the performance of
MVRackLay on such synthetic warehouse scenes. The monocular image sequences
obtained from WareSynth involve translation of the camera along a predefined
path, facing a row of racks and at a fixed distance from them. The annotations
for these sequences are fed into the deep MVRackLay network.
Specifically, the paper contributes as follows:
1) It presents a notable improvement over the formerly proposed RackLay [1]
architecture (Sec. III-B), the key feature of which is the use of Convolutional
LSTM layers, which enable the network to train on a monocular image sequence,
rather than discrete images of racks. The network uses this previously-missing
spatial data to improve shelf-centric layout (Sec. III-A) predictions in each
frame, specifically when racks and shelves are only partially visible in the
image.
2) We open-source a flexible data generation pipeline, WareSynth, with domain
randomization capabilities that enable the generation of a huge variety of
synthetic data. We
also release relevant instructions that enable the user to create and
customize their warehouse scenes and generate 2D/3D ground truth annotations
needed for their task automatically, as discussed in Sec. IV.
3) Further, we show several applications using these layout representations.
Layout-enabled multi-view 3D warehouse reconstruction is a novel application
discussed in Sec. V-F. The downstream task of free space estimation can also
be achieved. Most importantly, we show that the use of Convolutional LSTM
layers to predict 3D representations of racks in each frame enables them to be
stitched together into a 3D reconstruction of the warehouse in a global
reference frame, similar to the rendering of a scene from a SLAM pipeline.
4) We demonstrate significant performance gain on popular rubrics compared to
previous methods [7, 5] adapted to the estimation of shelf layouts. Equally
important, we showcase notable improvement in layout estimation over the
single-view estimator [1]. Moreover, we compile and present results of several
ablations involving variations in architecture which establish the superiority
of MVRackLay (Sec. V).
## II RELATED WORK
Object detection methods: A significant portion of our problem deals with
localizing semantic classes like shelves and boxes/cartons in a 3D scene.
There exist several approaches to detect object layouts in 3D. Some of these
[8, 9] combine information from images and LiDAR, while others [6, 10] first
convert images to bird’s eye view representations, followed by object
detection.
Bird’s eye view (BEV) representation: Schulter _et al._[11] proposed one of
the first approaches to estimate an occlusion-reasoned BEV road layout from a
single color image. Wang _et al._[12] build on top of [11] to infer
parameterized road layouts. In contrast, our approach is non-parametric and
hence, more flexible than such parametric models, which may not account for
all possible layouts. We take inspiration from MonoLayout [13] (which can be
trained end to end on color images, reasons beyond occlusion boundaries, and,
being non-parametric, requires no additional inputs) and extend it to multiple
planes.
Single-view layout estimation: RackLay [1] proposed a layout estimation
technique that is able to predict shelf layouts of one rack at a time, which
must be in focus and completely visible in a single monocular image. Our
network MVRackLay is more flexible in that it predicts shelf layouts of all
fully-visible and partially-visible racks in a monocular image sequence and
more accurate due to the incorporation of spatial data of warehouse racks from
the consecutive frames of the input video.
Warehouse Datasets: Publicly available datasets for warehouse settings are few
and far between. Real-world datasets like LOCO [14] exist for scene
understanding in warehouse logistics, but they provide a limited number of
images, along with corresponding 2D annotations. Furthermore, there are only a
handful of general-purpose synthetic data simulators for generating photo-
realistic images, like NVIDIA Isaac [15], which provide warehouse scenes.
However, there is no straightforward way to modify them to generate
annotations needed for the task at hand.
Domain Randomization: We integrate domain randomization techniques [16] in our
dataset generation pipeline, as described in Sec. V-A.
## III Method
### III-A Problem Formulation
Given a sequence of RGB images $\mathcal{I}_{1}$, $\mathcal{I}_{2}$,…,
$\mathcal{I}_{n}$ of racks in warehouses in perspective view, we aim to
predict the top-view (bird’s eye view) and front-view layout for each rack
present in each frame of the input video sequence.
We consider the region of interest $\Omega$ to be a rectangular area in a
top-down orthographic view of the scene. The camera is placed at the mid-point
of the lower side of the rectangle, directly facing the racks such that the
image plane is orthogonal to the ground plane (Fig. 2). Concretely, we want our
network to generate top-view and front-view layouts for all the racks visible
in each frame $\mathcal{I}_{t}$ within $\Omega$. Our network predicts
shelf-centric layouts, in which $\Omega$ is positioned such that its center
coincides with the center of the shelves spanning all the racks visible in the
image, as shown in Fig. 2. This layout is hence defined with respect to the
rack and is viewpoint agnostic.
Figure 2: Top-view representation of the shelf-centric (ii) layout for a given
position of a shelf (i), and the reference coordinate frames for the same.
Figure 3: Front-view representation of the shelf-centric (ii) layout for a
given position of a shelf (i), and the reference coordinate frames for the
same. Figure 4: Architecture: The figure shows the architecture diagram for
MVRackLay-disc. It comprises a context encoder, a Convolutional LSTM for
encoding temporal information, and multi-channel decoders and adversarial
discriminators. (Refer to Sec. III-B).
Our model predicts top-view and front-view layout representations for each
frame in the sequence. Top-view layouts predict the bird’s eye view occupancy
of each shelf on the rack. Each pixel in the layout can either be classified
as occupied, unoccupied, or background. A pixel is said to be occupied when it
is a part of the object on the shelf, unoccupied when it represents the empty
space on the rack, and background when it denotes the region which is not
occupied by the shelf.
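As a toy illustration of this three-class encoding (the integer labels, grid size, and box placement below are our own illustrative choices, not values from the paper):

```python
import numpy as np

# Illustrative class labels for each pixel of a top-view layout grid.
BACKGROUND, UNOCCUPIED, OCCUPIED = 0, 1, 2

# A toy 6x6 top-view layout: the shelf spans a 4x4 footprint,
# with one box occupying a 2x2 patch of the shelf.
layout = np.full((6, 6), BACKGROUND)
layout[1:5, 1:5] = UNOCCUPIED          # shelf footprint (empty space)
layout[2:4, 2:4] = OCCUPIED            # a box on the shelf

shelf_pixels = np.isin(layout, (UNOCCUPIED, OCCUPIED)).sum()
occupied_fraction = (layout == OCCUPIED).sum() / shelf_pixels
print(occupied_fraction)  # 4 / 16 = 0.25
```

Such a grid is what each output channel of the network represents, one channel per shelf.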
Consider a right-handed coordinate frame where the X axis points to the right,
the Y axis points downwards, and the Z axis points into the plane. For top-view
layouts, the coordinate frame is positioned at the center of the shelves
spanning across all racks. Hence, the top-view layouts are parallel to the X-Z
plane (and the ground plane), as in Fig. 2, at corresponding shelf heights. In
the case of front-view layouts, the center of the coordinate frame is
positioned at the center of the shelves in the X and Y directions. Front-view
layouts are, therefore, orthogonal to the ground plane, as in Fig. 3.
As an additional task, we demonstrate Multi-view Stitching: combining the
representation from the top-view and front-view layouts to obtain a 3D
reconstruction of all racks in the warehouse in the global shelf frame (Fig.
9). This can be further used for 3D spatial reasoning tasks. We infer the X
and Z coordinates from the top-view and the Y coordinate from the front-view.
We further explore these applications in Sec. V-F.
### III-B MVRackLay Architecture
We build a double-decoder MVRackLay architecture (Fig. 4) which takes as input
a sequence of RGB images $\mathcal{I}_{1}$, $\mathcal{I}_{2}$,…,
$\mathcal{I}_{n}$ and predicts the top-view and front-view layouts. The
components of the model are described in detail below.
1) A context encoder which uses a ResNet-18 backbone pre-trained on ImageNet
[17] to retrieve relevant 3D scene and semantic components from the monocular
input $\mathcal{I}_{t}$. This feature extractor learns low-level features
$\mathcal{C}_{1}$, $\mathcal{C}_{2}$,…, $\mathcal{C}_{n}$ that help reason
about the occupancy of the scene points.
2) A stacked Convolutional LSTM submodule uses the encoder-extracted features
$\mathcal{C}_{1}$, $\mathcal{C}_{2}$,…, $\mathcal{C}_{t}$ and in turn encodes
a temporal representation to capture motion across input frames. We use this
spatio-temporal prediction to estimate consistent layouts by consolidating the
information from the past frames to predict the current frame. The number of
previous frames used in this prediction is a hyperparameter, the value of
which is varied in our ablation studies (Refer to Sec. V-E). The output of
this block is an encoded representation that better reasons the scene points
as occupied, unoccupied, and background.
3) A top-view decoder and a front-view decoder that generate layouts for the
top-view and front-view, respectively, from the temporal representation
learned by the Convolutional LSTM submodule. Each consists of downsampling
layers to output an $\mathcal{R}$ × $\mathcal{D}$ × $\mathcal{D}$ grid which
represents the layout, where $\mathcal{R}$ is the number of output channels
and $\mathcal{D}$ is the resolution of the output layouts.
4) Identical discriminators following top-view and front-view decoders,
respectively, are adversarial regularizers that rectify the layouts further by
homogenizing their distributions to be similar to the true distribution of
plausible layouts. The layout predicted by the decoder is the input to this
submodule which outputs the final refined predictions.
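The data flow through these four components can be traced at the level of tensor shapes with simple stand-ins (the input resolution, feature dimensions, sequence length, and output resolution below are illustrative assumptions, not the paper's exact configuration; the real modules are learned networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def context_encoder(frame):
    """Stand-in for the ResNet-18 backbone: (H, W, 3) -> (h, w, C) features.
    ResNet-18 downsamples spatially by 32x; 512 is its final channel count."""
    H, W, _ = frame.shape
    return rng.standard_normal((H // 32, W // 32, 512))

def conv_lstm(feature_seq):
    """Stand-in for the stacked ConvLSTM: fold a sequence of feature maps
    into one temporal encoding of the same spatial shape (the real module
    learns this aggregation; a mean is used here only to show shapes)."""
    return feature_seq.mean(axis=0)

def decoder(temporal_code, num_channels=3, out_res=128):
    """Stand-in for a layout decoder: -> (R, D, D) layout grid."""
    return rng.standard_normal((num_channels, out_res, out_res))

frames = [rng.standard_normal((256, 256, 3)) for _ in range(4)]  # seq. len 4
features = np.stack([context_encoder(f) for f in frames])  # (4, 8, 8, 512)
code = conv_lstm(features)                                 # (8, 8, 512)
top_view, front_view = decoder(code), decoder(code)        # (3, 128, 128) each
print(top_view.shape)  # (3, 128, 128): R=3 shelves, D=128 resolution
```

The discriminators would then consume `top_view` and `front_view` during training; they are omitted here since they do not change the shapes at inference time.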
### III-C Loss function
We describe here the loss function of our MVRackLay architecture for top-view
layout estimation. We use stochastic gradient descent to optimize over the
network parameters $\phi$, $\psi$, $\theta$ of the context encoder,
convolutional LSTM, and the decoder.
${L}_{sup}(\widehat{\mathcal{T}};\phi,\psi)=\sum_{j=1}^{N}\sum_{i=1}^{\mathcal{R}}f\left(\widehat{\mathcal{T}}_{i}^{j},\mathcal{T}_{i}^{j}\right)$
${L}_{adv}(\widehat{\mathcal{T}};\phi,\psi,\theta)=\operatorname{\mathbb{E}}_{\theta\sim p_{fake}}[(\widehat{\mathcal{T}}(\theta)-1)^{2}]$
${L}_{short}(\widehat{\mathcal{T}};\phi,\psi)=\sum_{j=1}^{N}\sum_{i=1}^{seqlen-1}f\left(\widehat{\mathcal{T}}_{i}^{j},\widehat{\mathcal{T}}_{i+1}^{j}\right)$
${L}_{long}(\widehat{\mathcal{T}};\phi,\psi)=\sum_{j=1}^{N}\sum_{i=1}^{seqlen-1}\sum_{k=i+2}^{seqlen}f\left(\widehat{\mathcal{T}}_{i}^{j},\widehat{\mathcal{T}}_{k}^{j}\right)$
${L}_{discr}(\widehat{\mathcal{T}};\theta)=\operatorname{\mathbb{E}}_{\theta\sim p_{true}}[(\widehat{\mathcal{T}}(\theta)-1)^{2}]+\operatorname{\mathbb{E}}_{\theta\sim p_{fake}}[(\widehat{\mathcal{T}}(\theta))^{2}]$
${L}_{total}={L}_{sup}+{L}_{short}+{L}_{long}+{L}_{adv}+{L}_{discr}$
Here $\widehat{\mathcal{T}}$ is the predicted top-view layout of each shelf,
$\mathcal{T}$ is the ground truth top-view layout of each shelf, $\mathcal{R}$
is the maximum number of shelves considered, and $N$ is the mini-batch size.
${L}_{sup}$ is the per-pixel cross-entropy loss which penalizes variation of
the predicted output labels ($\widehat{\mathcal{T}}$) from corresponding
ground-truth values ($\mathcal{T}$). The adversarial loss ${L}_{adv}$ enables
the distribution of layout estimates from the top-view decoder ($p_{fake}$) to
be similar to the actual data distribution ($p_{true}$). ${L}_{discr}$ trains
the discriminator to accurately distinguish the network-generated top-view
layouts from samples of the true data distribution [18]. ${L}_{short}$ is
the short-range consistency loss, and ${L}_{long}$ is the long-range
consistency loss. Finally, we minimize the total loss over the network
parameters $\phi$, $\psi$, $\theta$ and use it to back-propagate gradients
through the network. Equivalent expressions are defined for front-view layout
prediction as well.
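A minimal numpy sketch of how the supervised and consistency terms combine over a predicted sequence (here the per-pixel discrepancy $f$ is simplified to a squared error rather than the cross-entropy used in the paper, and the adversarial terms ${L}_{adv}$, ${L}_{discr}$ are omitted since they require a discriminator):

```python
import numpy as np

def f(a, b):
    """Stand-in per-pixel discrepancy (the paper uses cross-entropy)."""
    return np.mean((a - b) ** 2)

def consistency_losses(pred_seq):
    """Short-range: adjacent frames; long-range: pairs >= 2 frames apart."""
    n = len(pred_seq)
    l_short = sum(f(pred_seq[i], pred_seq[i + 1]) for i in range(n - 1))
    l_long = sum(f(pred_seq[i], pred_seq[k])
                 for i in range(n - 1) for k in range(i + 2, n))
    return l_short, l_long

rng = np.random.default_rng(0)
preds = [rng.standard_normal((3, 8, 8)) for _ in range(4)]  # seqlen=4 layouts
l_short, l_long = consistency_losses(preds)
l_sup = f(preds[-1], np.zeros_like(preds[-1]))   # vs. an all-zero ground truth
total = l_sup + l_short + l_long                 # + adversarial terms in full
```

The consistency terms penalize layouts that jump between consecutive (and near-consecutive) frames, which is what encourages temporally stable predictions.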
Method | Top View: Rack (mIoU / mAP) | Top View: Box (mIoU / mAP) | Front View: Rack (mIoU / mAP) | Front View: Box (mIoU / mAP)
MVRackLay-Disc-4 (ours) | 96.44 / 98.01 | 86.89 / 92.70 | 94.98 / 97.14 | 88.02 / 93.49
MVRackLay-Disc-8 (ours) | 95.84 / 98.12 | 85.98 / 88.35 | 93.78 / 96.98 | 86.49 / 88.76
MVRackLay-4 (ours) | 95.94 / 97.94 | 86.66 / 89.31 | 93.73 / 96.58 | 86.72 / 89.16
RackLay-D-disc | 93.44 / 94.98 | 82.80 / 85.47 | 91.75 / 93.10 | 83.28 / 85.49
PseudoLidar-PointRCNN [7] | 73.28 / 77.40 | 55.77 / 81.26 | - / - | 63.05 / 89.45
MaskRCNN-GTdepth [5] | - / - | 35.57 / 47.44 | - / - | 76.48 / 82.48
Table I: Quantitative results: We benchmark the three versions of our network
(MVRackLay-Disc-4, MVRackLay-Disc-8 and MVRackLay-4) along with three
baselines: RackLay-D-disc [1], PseudoLidar-PointRCNN [10, 7] and
MaskRCNN-GTdepth [5] (as described in Sec. V-B).
## IV Dataset Generation Pipeline
In this section, we introduce WareSynth (for more information and code, see
https://pranjali-pathre.github.io/MVRackLay/), a robust pipeline for
synthetic data generation, inspired by the more simplistic pipeline in [1].
WareSynth is an end-to-end pipeline that can be used to generate 3D warehouse
scenes, capture the relevant data and output the annotations. It is developed
in the 3D graphics framework Unity [19] in order to generate more realistic
images compared to the original pipeline created in Blender [20]. By using the
WareSynth pipeline on an NVIDIA RTX 2080Ti, we are able to generate 500 images
per minute. Our pipeline is characterized by the following:
1. 1.
Highly customizable as per user requirements.
2. 2.
Ability to export to popular annotation formats such as COCO, YOLO, Pix3D,
KITTI and BOP.
3. 3.
Completely free and open source.
4. 4.
Extensive variation and domain randomization.
Although only a handful of warehouse datasets/simulators are currently
available such as LOCO and NVIDIA Isaac (discussed in Sec. II), to the best of
our knowledge, there is no such pipeline that satisfies all four of the above
criteria. WareSynth allows users to customize the number of racks and boxes in
the scene and also the box and rack models used. The ability to export to not
only our annotation format but also popular annotation formats is important
for additional benchmarks against new approaches being developed in these
settings. Section V-A enumerates the randomizations enabled by WareSynth. Our
data generation and capture methods are efficient, flexible, and easily
customizable based on user requirements. Hence, WareSynth can prove very
useful for large-scale data annotation generation.
## V Experiments and Analysis
### V-A MVRackLay Dataset
For training and testing our network, we generated a diverse dataset with 20k
images spanning 400 sequences with 50 images per sequence, using
WareSynth333Download RackLay dataset: https://tinyurl.com/yxmu5t64. It is
split into 360/20/20 for train/test/validation. All the results discussed are
on the test set of this dataset. The dataset is highly varied and spans
multiple warehouse scenes. The variety demonstrates the generality of
MVRackLay and is useful to evaluate the performance of our model in varied
warehouse settings. The video sequences captured using WareSynth resemble the
data captured by a manual camera movement or a mechanized system performing
the task in an actual warehouse.
Various scene elements were diversified during data generation to impart
assortment in generated scenes so that the synthetically developed warehouses
mimic their real-world counterparts. We describe these randomizations below.
Domain Randomization: We show the randomizations we introduce using 3 randomly
selected images from our dataset through Fig. 5 (referred throughout this
section):
* •
Boxes have random sizes, textures, rotation about the vertical axis, colors,
and reflective properties.
* •
Box placement varies from dense to moderate to sparse.
* •
Color and texture of racks are randomized.
* •
Height to which boxes are stacked vertically is randomized.
* •
Background is either a wall or a busy warehouse.
* •
Color and textures of floors and walls are randomized.
* •
The camera’s position with respect to the rack varies within a range to capture
different numbers of shelves in the sequence. For our dataset, we set
$\mathcal{R}$=3.
* •
The camera is moved such that different numbers of racks are visible across frames.
We find that this large diversity in the dataset has enabled the network to
not overfit on the domain of the synthetic data, but rather learn features
that emulate real-world scenes.
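Such randomizations can be driven by a simple per-scene parameter sampler; the sketch below illustrates the idea (all attribute names and value ranges are our own illustrative assumptions, not WareSynth's actual interface):

```python
import random

def sample_scene_params(rng):
    """Draw one randomized warehouse scene configuration (illustrative)."""
    return {
        "num_shelves": rng.randint(1, 3),            # R <= 3 in the dataset
        "boxes_per_shelf": rng.randint(0, 8),        # sparse .. dense packing
        "box_yaw_deg": rng.uniform(-15.0, 15.0),     # rotation about vertical
        "rack_texture": rng.choice(["steel", "painted", "rusty"]),
        "background": rng.choice(["wall", "busy_warehouse"]),
        "camera_offset": rng.uniform(-0.5, 0.5),     # position w.r.t. rack
    }

rng = random.Random(0)
params = sample_scene_params(rng)  # one such draw per generated scene
```

Drawing an independent configuration per scene is what prevents the network from latching onto any one texture, packing density, or camera pose.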
### V-B Evaluated Methods and Metrics
We compare the following variants of MVRackLay:
* •
MVRackLay-Disc-4: Double decoder architecture with discriminators for both
front-view and top-view, with a sequence length of 4 used in the ConvLSTM
module.
* •
MVRackLay-Disc-8: Double decoder architecture with discriminators for both
front-view and top-view, with a sequence length of 8 used in the ConvLSTM
module.
* •
MVRackLay-4: Double decoder architecture for both front-view and top-view
without discriminators, with a sequence length of 4 used in the ConvLSTM
module.
We report Mean Intersection-Over-Union (mIoU) and Mean Average-Precision (mAP)
scores in this task as the previously proposed methods also evaluate the model
on the same criterion.
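For concreteness, per-class IoU and its mean over classes can be computed on label grids as follows (a minimal sketch; the class indices here are illustrative):

```python
import numpy as np

def iou(pred, gt, cls):
    """Intersection-over-union of one class between two label grids."""
    p, g = (pred == cls), (gt == cls)
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else 1.0

def mean_iou(pred, gt, classes=(1, 2)):   # e.g. shelf and box classes
    return np.mean([iou(pred, gt, c) for c in classes])

gt = np.array([[1, 1], [2, 2]])
pred = np.array([[1, 1], [2, 1]])
# class 1: inter 2, union 3 -> 2/3; class 2: inter 1, union 2 -> 1/2
print(mean_iou(pred, gt))  # (2/3 + 1/2) / 2 = 7/12 ≈ 0.583
```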
### V-C Results
(More results: https://pranjali-pathre.github.io/MVRackLay/)
We first trained MVRackLay-4 for top-view and front-view. Having achieved
superior results compared to baselines, we further trained MVRackLay-Disc-4
for both front-view and top-view to capture the distribution of our layouts,
which led to the best results. We observed performance gains as discussed in
Sec. V-E. We further trained MVRackLay-Disc-8 to capture the performance of
the model for higher sequence length (discussed in Sec. V-E). Overall,
MVRackLay-Disc-4 showed the best overall performance (refer Table I).
Fig. 5 summarizes the results of MVRackLay-Disc-4 tested on domain randomized
data. Our best network is able to predict all the racks present in the image
with clean boundaries separating them and precisely estimate the space between
two racks and two objects on the shelf. MVRackLay can rightly predict the
layouts for the racks in the foreground, even in the presence of background
clutter in the image. It can also reason for the narrow spaces in densely
packed shelves.
The results show that MVRackLay can significantly benefit downstream warehouse
tasks. Sec. V-F demonstrates one such task of 3D warehouse reconstruction
using a multiview layout predicted by the network. The generality and
superiority of the results show that our model can easily be transferred to
warehouses with diverse arrangements of racks and objects on shelves.
### V-D Comparison with baselines
RackLay: RackLay [1] was proposed to solve the problem of layout prediction
for the dominant rack present in the input RGB image. We trained it for the
task of layout prediction for all the racks present over the video sequence.
From Table I it is clear that MVRackLay-Disc-4 performs significantly better
than RackLay-D-disc quantitatively. Comparing qualitatively, Fig. 6 summarizes
the improvements of our network over RackLay. RackLay often fails to demarcate
an exact boundary between two racks present in an image (row 1). In row 2,
observe that RackLay is unable to predict a sharp boundary of both box and
shelf as in the corresponding output of MVRackLay-Disc-4. RackLay often
incorrectly predicts the presence of the box on racks (rows 1, 2) or suffers
from noisy predictions of boxes in regions with no objects as well as of the
shelf (row 2). Our model not only extends RackLay’s functionality to image
sequences but also improves upon its liabilities. We predict sharper and more
accurate shelf and object boundaries and reduce false box predictions.
PseudoLidar-PointRCNN: PointRCNN [7] is a 3D object detector that uses the raw
point cloud as input. Hence we use the PseudoLidar[10] information to detect
3D objects using PointRCNN. As this method considers a single dominant plane
and is employed for bird's-eye view prediction, we report metrics for the
bottommost shelf (refer to Table I) and the top-view prediction only.
Accounting for the single-dominant layer assumption, it is clear from Table I
that our model performs better as it performs multilayer, front-view, and top-
view layout predictions that can also reason about the inter-shelf distance.
Mask R-CNN: We select Mask R-CNN[5] as one of our baselines to test the
instance segmentation method for multi-layer layout prediction in the
warehouse setting for the sequential data. We subsequently integrate the Mask
R-CNN segmented instances with the depth maps and project on a horizontal
plane to segment the boxes shelf-wise, using the fact that a set of boxes on a
particular shelf will be situated on the same plane located at some elevation
from the ground plane. From the experiment, it is observed that Mask R-CNN
fails to detect the precise boundary of the rack due to its thin structures.
If multiple racks are present in the image, Mask R-CNN also fails to mark a
clear distinction between them. The results summarized in Table I prove that
our model performs better quantitatively too. Since Mask R-CNN can only reason
about the points visible in the image, it is evident that our model achieves
better results through amodal estimation, reasoning beyond the visible
elements that structurally characterize racks and objects.
Figure 5: MVRackLay-Disc-4 Results: Here, we present the results of our
network tested on domain randomized data. The bottom-most shelf layout is
shown in the left-most column, followed by the middle and top shelf (if
visible). Observe the diversity of warehouse scenes captured (detailed in V-A)
and the top-view and front-view layouts predicted for the same.
### V-E Ablation studies
To thoroughly comprehend the underlying effect of different components, we
perform ablation studies over the model’s architecture and examine its effect
on performance.
Figure 6: RackLay vs. MVRackLay-Disc-4: Above, we compare qualitatively the
results of RackLay and our MVRackLay-Disc-4. The shelf in focus is highlighted
with a red border. Observe that our model removes the false positive in row 1,
removes noise in row 2, and increases the sharpness of both box boundaries
(both rows) and shelf edges (row 2).
Figure 7: MVRackLay-Disc-4 vs. MVRackLay-Disc-8: The shelf in focus is
highlighted with a red border. Better demarcations between adjoining boxes and
less joining of abreast layouts are observed in the output of MVRackLay-Disc-4
compared to its counterpart. Figure 8: MVRackLay-4 vs. MVRackLay-Disc-4: The
shelf in focus is highlighted with a red border. Observe how using a
discriminator fixes the false negative in row 1 and improves predicted box
boundaries and shelf boundaries in row 2.
1) Convolutional LSTM sequence length: We varied the time steps used in the
stacked Convolutional LSTM submodule. We observed that MVRackLay-Disc-4 was
able to converge faster and improve qualitatively over MVRackLay-Disc-8.
Although quantitatively MVRackLay-Disc-4 and MVRackLay-Disc-8 perform alike
(refer to Table I), from Fig. 7 considerable qualitative improvements can be
observed. MVRackLay-Disc-4 performs better in identifying precise object
boundaries and distinguishing between the closely spaced objects on the rack.
A lower sequence length enabled the model to compile only relevant details
from past frames and avoid spurious noise.
2) Adversarial learning: In MVRackLay-Disc-4, we add discriminators after
decoders in MVRackLay-4 to capture the distribution of plausible layouts. We
observed a substantial improvement both quantitatively (refer to Table I) and
qualitatively (Fig. 8). Layouts have become sharper, and most notably, using a
discriminator diminished the stray pixels wrongly categorized as boxes. The
actual distance between the boxes positioned near the end of the shelf is
difficult to estimate as they are imaged obliquely. In such cases, MVRackLay-
Disc-4 remarkably improved the prediction of the boxes and generated cleaner
layouts.
### V-F Applications
Multi View Stitching: From the layout prediction of a particular frame
$\mathcal{I}_{t}$, we first obtain the 2D bounding boxes of all shelves and
boxes detected in the front-view and top-view layouts. The detections from the
top-view and front-view layouts are then matched against each other. Once we
have this mapping, we use the dimension information from the front-view and
top-view layout predictions to generate the 3D bounding box for all the mapped
racks and objects. Finally, we combine these representations of each shelf to
get a 3D reconstruction ${f}_{t}$ of all the racks in the frame. This process
is repeated for all the frames in the sequence.
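A minimal sketch of this per-object lifting, following the stated convention that X and Z come from the top view and Y from the front view (the flat box-tuple format is our own assumption, not the paper's data structure):

```python
def lift_to_3d(top_box, front_box):
    """Combine a matched top-view box (x/z extents) and front-view box
    (x/y extents) into one axis-aligned 3D box. X and Z are taken from the
    top view, Y from the front view; the x extent is shared by both views."""
    (x0, z0, x1, z1) = top_box
    (_, y0, _, y1) = front_box        # x extent already given by the top view
    return (x0, y0, z0, x1, y1, z1)

box3d = lift_to_3d(top_box=(0.2, 1.0, 0.8, 1.5),
                   front_box=(0.2, 0.4, 0.8, 0.9))
print(box3d)  # (0.2, 0.4, 1.0, 0.8, 0.9, 1.5)
```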
Given two consecutive 3D reconstructions ${f}_{t}$ and ${f}_{t+1}$, we
initially find all the corresponding matching boxes. We then calculate the
shift between them; using this shift, we discern the direction of the motion.
Finally, we consider the last box in frame ${f}_{t}$ in the direction of
motion and check its corresponding box in frame ${f}_{t+1}$. If the size of
the box in ${f}_{t+1}$ is larger, we increase the size of the shelf and boxes
in ${f}_{t}$ accordingly. If there is an addition of a new box or shelf in
${f}_{t+1}$, the same is included in the ${f}_{t}$ reconstruction. Eventually,
we obtain the merged layouts of ${f}_{t}$ and ${f}_{t+1}$ in ${f}_{t}$ frame.
If ${F}_{t}$ denotes the merged 3D representation from ${f}_{1}$ to ${f}_{t}$,
${f}_{t+1}$ is merged into ${F}_{t}$ using the method described above. Fig. 9
shows the 3D reconstruction of a single warehouse with 4 racks, obtained from
the multi-view stitching of predicted layouts of 4 sequences with 70 frames
per sequence.
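The incremental merge step can be sketched in one dimension along the direction of motion (boxes reduced to intervals purely to illustrate the shift-and-union logic; this hypothetical `merge` function is not the paper's implementation):

```python
def merge(F_t, f_next, shift):
    """Merge frame f_{t+1} into the running reconstruction F_t.
    Boxes are (start, end) intervals in their own frame; `shift` is the
    camera translation between frames, so f_{t+1} boxes are re-expressed
    in the global frame before taking the union."""
    shifted = [(s + shift, e + shift) for (s, e) in f_next]
    merged = list(F_t)
    for box in shifted:
        # grow an overlapping existing box, otherwise append the new one
        for i, (s, e) in enumerate(merged):
            if box[0] <= e and box[1] >= s:
                merged[i] = (min(s, box[0]), max(e, box[1]))
                break
        else:
            merged.append(box)
    return merged

F = [(0.0, 1.0)]                                  # reconstruction so far
F = merge(F, [(0.0, 1.2), (2.0, 3.0)], shift=0.5) # next frame, camera moved
print(F)  # [(0.0, 1.7), (2.5, 3.5)]
```

The first interval grows because the same box is seen more completely in the new frame; the second is a newly visible box and is simply appended, mirroring the grow-or-add rule described above.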
Figure 9: Multi View Reconstruction of the entire warehouse using four
sequences covering 280 frames (70 frames each), using the layouts predicted by
MVRackLay-Disc-4.
## VI Conclusion
In this paper, we present MVRackLay to perform multi-view and multi-layered
layout estimation for all racks partly or fully visible in each frame of the
input monocular image sequence. Distinct from existing methods, it utilizes
temporal information across the frames of a image sequence to enhance the
quality of layouts. We also present a pipeline to 3D reconstruct the entire
warehouse from the predicted shelf-centric layouts. Further, we introduce a
versatile synthetic data generation pipeline, WareSynth, that is capable of
producing domain randomized data which can emulate a wide variety of warehouse
scenes. MVRackLay’s versatility is demonstrated by the experimental results
over diverse warehouse scenes, where it proves vastly superior to previous
baselines adapted for the same task.
## References
* [1] M. S. Nigam, A. Prabhu, _et al._ , _Monocular Multi-Layer Layout Estimation for Warehouse Racks_. New York, NY, USA: Association for Computing Machinery, 2021. [Online]. Available: https://doi.org/10.1145/3490035.3490263
* [2] P. Buxbaum, “Many warehouses don’t have wms,” Aug 2018. [Online]. Available: https://www.globaltrademag.com/many-warehouses-dont-have-wms/
* [3] “Trends in warehouse automation-2021,” Oct 2020. [Online]. Available: https://www.nitco-lift.com/blog/warehouse-automation-trends/
* [4] O. Ronneberger and Fischer, “U-net: Convolutional networks for biomedical image segmentation,” in _International Conference on Medical image computing and computer-assisted intervention_ , 2015.
* [5] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2961–2969.
* [6] T. Roddick, A. Kendall, and R. Cipolla, “Orthographic feature transform for monocular 3d object detection,” _arXiv preprint_ , 2018.
* [7] S. Shi, X. Wang, and H. Li, “Pointrcnn: 3d object proposal generation and detection from point cloud,” in _CVPR_ , 2019.
* [8] J. Ku, M. Mozifian, _et al._ , “Joint 3d proposal generation and object detection from view aggregation,” in _IROS_ , 2018.
* [9] M. Liang, B. Yang, S. Wang, and R. Urtasun, “Deep continuous fusion for multi-sensor 3d object detection,” in _ECCV_ , 2018.
* [10] Y. Wang, W.-L. Chao, _et al._ , “Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving,” in _CVPR_ , 2019.
* [11] S. Schulter, M. Zhai, N. Jacobs, and M. Chandraker, “Learning to look around objects for top-view representations of outdoor scenes,” in _ECCV_ , 2018.
* [12] Z. Wang, B. Liu, S. Schulter, and M. Chandraker, “A parametric top-view representation of complex road scenes,” in _CVPR_ , 2019.
* [13] K. Mani, S. Daga, _et al._ , “Monolayout: Amodal layout estimation from a single image,” _IEEE Winter Conference on Applications of Computer Vision (WACV)_ , 2020.
* [14] C. Mayershofer, D. M. Holm, B. Molter, and J. Fottner, “Loco: Logistics objects in context,” in _2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA)_ , 2020, pp. 612–617.
* [15] “NVIDIA Isaac SDK,” 2019. [Online]. Available: https://developer.nvidia.com/isaac-sdk
* [16] J. Tremblay, A. Prakash, _et al._ , “Training deep networks with synthetic data: Bridging the reality gap by domain randomization,” in _Proceedings of the IEEE conference on computer vision and pattern recognition workshops_ , 2018, pp. 969–977.
* [17] J. Deng, W. Dong, _et al._ , “Imagenet: A large-scale hierarchical image database,” in _CVPR_ , 2009.
* [18] I. Goodfellow, J. Pouget-Abadie, _et al._ , “Generative adversarial nets,” in _Advances in Neural Information Processing Systems 27_ , 2014.
* [19] “Unity real-time development platform | 3d, 2d vr & ar engine,” 2021, accessed: 13-09-2021. [Online]. Available: https://unity.com/
* [20] B. O. Community, “Blender - a 3d modelling and rendering package,” Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018. [Online]. Available: http://www.blender.org
|
# A Unifying Theory of Distance from Calibration
Jarosław Błasiok (Columbia University), Parikshit Gopalan (Apple), Lunjia Hu (Stanford University), Preetum Nakkiran (Apple)
###### Abstract
We study the fundamental question of how to define and measure the distance
from calibration for probabilistic predictors. While the notion of _perfect
calibration_ is well-understood, there is no consensus on how to quantify the
distance from perfect calibration. Numerous calibration measures have been
proposed in the literature, but it is unclear how they compare to each other,
and many popular measures such as Expected Calibration Error (ECE) fail to
satisfy basic properties like continuity.
We present a rigorous framework for analyzing calibration measures, inspired
by the literature on property testing. We propose a ground-truth notion of
distance from calibration: the $\ell_{1}$ distance to the nearest perfectly
calibrated predictor. We define a _consistent calibration measure_ as one that
is polynomially related to this distance. Applying our framework, we identify
three calibration measures that are _consistent_ and can be estimated
efficiently: smooth calibration, interval calibration, and Laplace kernel
calibration. The former two give quadratic approximations to the ground truth
distance, which we show is information-theoretically optimal in a natural
model for measuring calibration which we term the _prediction-only access_
model. Our work thus establishes fundamental lower and upper bounds on
measuring the distance to calibration, and also provides theoretical
justification for preferring certain metrics (like Laplace kernel calibration)
in practice.
## 1 Introduction
Probabilistic predictions are central to many domains which involve
categorical, even deterministic, outcomes. Whether it is a doctor predicting a
certain incidence probability of heart disease, a meteorologist predicting a
certain chance of rain, or an autonomous vehicle system predicting a
probability of road obstruction, probabilistic prediction allows the predictor
to incorporate and convey epistemic and aleatoric uncertainty in their
predictions.
In order for predicted probabilities to be operationally meaningful, and not
just arbitrary numbers, they must be accompanied by some form of formal
probabilistic guarantee. The most basic requirement of this form is
_calibration_ [Daw82b]. Given a distribution $\mathcal{D}$ on
$\mathcal{X}\times\\{0,1\\}$ representing points with binary labels, a
predictor $f:\mathcal{X}\to[0,1]$ which predicts the probability of the label
being $1$ is calibrated if for every $v\in\operatorname{Im}(f)$, we have
$\mathop{\mathbb{E}}[y|f(x)=v]=v$. Calibration requires that, for example,
among the set of patients which are predicted to have a 10% incidence of heart
disease, the true incidence of heart disease is exactly 10%. Calibration is
recognized as a crucial aspect of probabilistic predictions in many
applications, from their original development in meteorological forecasting
[DF83, MW84, Hal20, Mur98], to models of risk and diagnosis in medicine
[JOKOM12, LAS+04, Doi07, MSR+10, KSB21, VCV15, CAT16], to image classification
settings in computer vision [MDR+21, MLF+21]. There is a large body of
theoretical work on it in forecasting, for example [Daw85, FV98, KF08, FH18].
More recently, the work of [HKRR18] on multicalibration as a notion of group
fairness (see also [KMR17, KNRW17]), and connections to indistinguishability
[DKR+21] and loss minimization [GKR+22, GHK+23] have spurred renewed interest
in calibration from theoretical computer science.
In practice, it is of course rare to encounter _perfectly_ calibrated
predictors, and thus it is important to quantify their _distance from
calibration_. However, while there is consensus across domains on what it
means for a predictor to be _perfectly calibrated_ , there is no consensus
even within a domain on how to measure this distance. This is because the
commonly-used metrics of calibration have fundamental theoretical flaws, which
manifest as practical frustrations. Consider the Expected Calibration Error
(ECE), which is the de-facto standard metric in the machine learning community
(e.g. [NCH14, GPSW17, MDR+21, R+21]).
###### Definition 1.1.
For a predictor $f:\mathcal{X}\to[0,1]$ and distribution $\mathcal{D}$ over
$(x,y)\in\mathcal{X}\times\\{0,1\\}$, the expected calibration error
$\mathsf{ECE}_{\mathcal{D}}(f)$ is defined as
$\mathsf{ECE}_{\mathcal{D}}(f)=\mathop{\mathbb{E}}_{\mathcal{D}}\left[\left|\mathop{\mathbb{E}}_{\mathcal{D}}[y\mid
f(x)]-f(x)\right|\right].$
The ECE has a couple of flaws: First, it is impossible to estimate in general
from finite samples (e.g. [LHHD22, Proposition 5.1] and [AIGT+22]). This is
partly because estimating the conditional expectation
$\mathop{\mathbb{E}}[y|f(x)=v]$ requires multiple examples with the exact same
prediction $f(x)=v$, which could happen with arbitrarily low probability in a
fixed and finite number of examples over a large domain $\mathcal{X}$. Second,
the ECE is _discontinuous_ as a function of the predictor $f$, as noted by
[KF08, FH18]. That is, arbitrarily small perturbations to the predictor $f$
can cause large fluctuations in $\mathsf{ECE}_{\mathcal{D}}(f)$. We illustrate
this with a simple example below.
Consider the uniform distribution $\mathcal{D}$ over a two-point space
$X=\\{a,b\\}$, where the label for $a$ is $0$ and the label for $b$ is $1$. The predictor
$\overline{f}$ which always predicts $1/2$ is perfectly calibrated under
$\mathcal{D}$, so $\mathsf{ECE}(\overline{f})=0$. In contrast, the related
predictor where $f(a)=(1/2-\varepsilon)$ and $f(b)=(1/2+\varepsilon)$, for
arbitrarily small $\varepsilon>0$, has $\mathsf{ECE}(f)=1/2-\varepsilon$. Thus
the infinitesimal change from $\overline{f}$ to $f$ causes a jump of almost
$1/2$ in $\mathsf{ECE}$.
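The jump in this example is easy to verify numerically. Below is a minimal sketch (our illustration, not code from the paper) that computes the empirical $\mathsf{ECE}$ by grouping samples on their exact predicted value:

```python
from collections import defaultdict

def ece(pairs):
    """Empirical ECE over (prediction, label) pairs: group samples by their
    exact predicted value v, then average |E[y | f(x)=v] - v| over samples."""
    groups = defaultdict(list)
    for v, y in pairs:
        groups[v].append(y)
    n = len(pairs)
    return sum(len(ys) / n * abs(sum(ys) / len(ys) - v)
               for v, ys in groups.items())

eps = 1e-3
# Uniform distribution on {a, b}; label(a) = 0, label(b) = 1.
calibrated = [(0.5, 0), (0.5, 1)]             # f-bar: always predict 1/2
perturbed = [(0.5 - eps, 0), (0.5 + eps, 1)]  # f(a) = 1/2-eps, f(b) = 1/2+eps

print(ece(calibrated))  # 0.0
print(ece(perturbed))   # ~0.499, i.e. 1/2 - eps
```

The infinitesimal perturbation splits the single level set of $\overline{f}$ into two singleton level sets, which is exactly why the conditional expectation inside the ECE jumps.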
This discontinuity also presents a barrier to popular heuristics for
estimating the $\mathsf{ECE}$. For example, the estimation problem is usually
handled by discretizing the range of $f$, yielding an alternate quantity — the
“binned-ECE”— that can be estimated from samples [NCH15]. However, the choice
of discretization turns out to matter significantly both in theory [KLM19,
Example 3.2] and in practice [NDZ+19]. For example, both [MDR+21, Section 5]
and [NDZ+19, Section 5.3] found that changing the number of bins used in
binned-ECE can change conclusions about which of two models is better
calibrated. In the simple two-point example above, if we choose $m$ bins of
equal width, then we observe a binned-ECE of either $0$ or $\approx 1/2$,
depending on whether $m$ is odd or even!
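The parity effect can be reproduced directly. The sketch below (our illustration; `binned_ece` is a hypothetical helper, not an API from the paper) computes the standard binned estimate with $m$ equal-width bins:

```python
def binned_ece(pairs, m):
    """Binned-ECE over m equal-width bins: sum over bins j of
    |E[(f - y) * 1(f in I_j)]| for the empirical (prediction, label) pairs."""
    n = len(pairs)
    total = 0.0
    for j in range(m):
        lo, hi = j / m, (j + 1) / m
        s = sum(v - y for v, y in pairs
                if lo <= v < hi or (j == m - 1 and v == 1.0))
        total += abs(s) / n
    return total

eps = 1e-3
pairs = [(0.5 - eps, 0), (0.5 + eps, 1)]
for m in (2, 3, 4, 5):
    print(m, binned_ece(pairs, m))
# Odd m: both predictions share the middle bin and their errors cancel -> ~0.
# Even m: the bin boundary at 1/2 separates them -> ~1/2 - eps.
```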
To address the shortcomings of the ECE, a long line of works have proposed
alternate metrics of miscalibration. These metrics take a diversity of forms:
some are based on modifications to the ECE (e.g. alternate binning schemes,
debiased estimators, or smoothing) [ZKH20, KLM19, RCSM22, KCT+21, LHHD22],
some use proper scoring rules, some rely on distributional tests such as
Kolmogorov–Smirnov [GRA+20], Kernel MMD [KSJ18], or other nonparametric tests
[AIGT+22]. Yet it is not clear what to make of this smorgasbord of calibration
metrics: whether these different metrics are at all related to each other, and
whether they satisfy desirable properties (such as continuity). For a
practitioner training a model, if their model is calibrated under some of
these notions, but not others, what are they to make of it? Should they report
the most optimistic metrics, or should they strive to be calibrated for all of
them? Or is there some inherent but undiscovered reason why all these metrics
should paint a similar picture?
Underlying this confusion is a foundational question: what is the ground truth
distance of a predictor from calibration? To our knowledge, this question has
not been answered or even asked in the prior literature. Without a clearly
articulated ground truth and a set of desiderata that a calibration measure
must satisfy, we cannot hope to have meaningful comparisons among metrics.
At best one can say that $\mathsf{ECE}$ and (certain but not all) binning
based variants give an upper bound on the true distance to calibration; we
prove this formally for $\mathsf{ECE}$ in Section 4.3. Thus if a predictor can
be guaranteed to have small $\mathsf{ECE}$, then it is indeed close to being
calibrated in a formal sense (see for instance [HKRR18, Claim 2.10]). But
small $\mathsf{ECE}$ might be an unnecessarily strong (or even impossible)
constraint to satisfy in many realistic settings, especially when dealing with
predictors which are allowed to produce real-valued outputs. For example,
consider the standard setting of a deep neural network trained from random
initialization for binary classification. The predicted value $f(x)\in[0,1]$
is likely to be different for every individual $x$ in the population, which
could result in a similar situation to our example. The $\mathsf{ECE}$ is
likely to greatly overstate the true distance from calibration in such a
setting.
This brings us to the main motivations behind this work. We aim to:
* •
Formulate desiderata for good calibration measures, based on a rigorous notion
of ground truth distance from calibration.
* •
Use our desiderata to compare existing calibration measures, identifying
measures that are good approximations to the ground truth distance.
* •
Apply theoretical insights to inform practical measurements of calibration in
machine learning, addressing known shortcomings of existing methods.
### 1.1 Summary of Our Contributions
We summarize the main contributions of our work:
* •
Framework for measuring the distance to calibration (Section 4). We propose a
ground truth notion of distance to calibration which is the $\ell_{1}$
distance to the closest perfectly calibrated predictor, inspired by the
property testing literature [BLR90, GGR98]. We define the set of _consistent
calibration measures_ to be those that provide polynomial upper and lower
bounds on the true distance.
* •
Consistent calibration measures. We identify three calibration measures that
are in fact consistent: two have been proposed previously [KF08, KSJ18] and
the third is new. Interestingly, the two prior measures (smooth and kernel
calibration) were proposed with other motivations in mind, and not as
standalone calibration measures. We consider it surprising that they turn out
to be intimately related to the ground truth $\ell_{1}$ distance.
1. 1.
Interval calibration error (Section 6). This is a new measure which is
reminiscent of the binning estimate that is popular in practice [NCH14,
GPSW17, MDR+21]. We show that by randomizing both the bin widths and the bin
offsets, and by adding the average bin width to the resulting calibration
error, one obtains a consistent calibration measure: it is always an upper
bound on the true distance, and never more than a constant times the square
root of the true distance.
2. 2.
Smooth calibration error (Section 7) was proposed in the work of [KF08]. We
show using LP duality that it is a constant factor approximation of the
_lower_ distance to calibration, which we define to be, roughly speaking, a
particular Wasserstein distance to perfect calibration. The lower distance to
calibration is always at most the true distance to calibration and is always
at least a constant times the true distance squared.
3. 3.
Laplace-kernel calibration error (Section 8). This is a calibration measure
that was proposed in the work of [KSJ18]. While they did not recommend a
particular choice of kernel, we show that using the Laplace kernel happens to
yield a consistent measure, while using the Gaussian kernel does not.
In contrast to these measures, other commonly used heuristics ($\mathsf{ECE}$
and binning based) do not meet our criteria for being consistent calibration
measures. Our work thus provides a firm theoretical foundation on which to
base evaluations and comparisons of various calibration measures; such a
foundation was arguably lacking in the literature.
* •
Matching lower bounds. Smooth calibration and interval calibration provide
quadratic approximations to the true distance from calibration. We prove that
this is the best possible approximation, by showing an information-theoretic
barrier: for calibration measures depending only on the labels $y$ and
predictions $f$, which are oblivious to the points $x$ themselves (as most
calibration measures are), it is impossible to obtain better than a quadratic
approximation to the true distance from calibration. Thus, the measures above
are in fact optimal in this sense.
* •
Better efficiency in samples and run time. We present improved algorithms and
sample complexity bounds for computing some calibration measures. We present
the first efficient algorithm for computing smooth calibration error using a
linear program. We also observe that the techniques of [RR07] yield an
alternate algorithm to computing kernel calibration error which is (somewhat
surprisingly) reminiscent of randomized binning.
* •
Insights for Practice. Our results point to concrete takeaways for practical
measurements of calibration. First, we recommend using either Laplace kernel
calibration or Interval calibration, as calibration measures that are
theoretically consistent, computationally efficient, and simple to implement.
Second, if Binned-ECE must be used, we recommend randomly shifting the bin
boundaries together, and adding the average width of the bins to the
calibration estimate. These modifications turn binning into an upper bound on
calibration distance, and bring it closer to interval calibration error which
is a consistent calibration measure (Section 2.3). Finally, in Section 10 we
experimentally evaluate our calibration measures on a family of synthetic data
distributions, to demonstrate their behavior in more natural settings (beyond
worst-case guarantees).
##### Organization of this paper.
The rest of this paper is organized as follows. In Section 2 we present an
informal overview of our main results, highlighting the definitions and key
conceptual ideas. We discuss related works in Section 3. Section 4 sets up our
notion of true distance from calibration and the desiderata that we seek from
calibration measures. We also explain how $\mathsf{ECE}$ and some other
measures fail these desiderata. Section 5 defines the upper and lower distance
to calibration. Section 6 analyzes Interval Calibration error, Section 7
analyzes Smooth calibration error and Section 8 analyzes the Laplace kernel
calibration error. In Section 9 we give sample complexity bounds and efficient
algorithms for estimating various calibration measures using random samples.
And in Section 10, we experimentally evaluate our calibration measures on a
representative family of synthetic data distributions. Throughout, we defer
technical proofs to Appendix B.
For the reader primarily interested in using calibration measures in practice,
we suggest the following fast-track: read Section 2 followed by Section 8.4
for a practical implementation note, and Section 10 for example experiments.
Users of Binned-ECE may be interested in the relevant parts of Section 2.3,
which describes how to “fix” the standard binning procedure to be more closely
related to calibration distance.
## 2 Overview of Our Results
We start by setting up some notation for calibration in the binary
classification setting. Let $\mathcal{X}$ be a discrete domain, defining the
input space. We are given samples $(x,y)$ drawn from a distribution
$\mathcal{D}$ on $\mathcal{X}\times\\{0,1\\}$. A _predictor_ is a function
$f:\mathcal{X}\rightarrow[0,1]$, where $f(x)$ is interpreted as an estimate of
$\Pr[y=1\mid x]$. For a predictor $f$ and distribution $\mathcal{D}$, we often
consider the induced joint distribution of prediction-label pairs
$(f(x),y)\in[0,1]\times\\{0,1\\}$, which we denote $\mathcal{D}_{f}$. We say a
prediction-label distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$ is
_perfectly calibrated_ if $\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[y\mid v]=v$.
For a distribution $\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$, we say a
predictor $f:\mathcal{X}\to[0,1]$ is _perfectly calibrated w.r.t.
$\mathcal{D}$_ if the induced distribution $\mathcal{D}_{f}$ is perfectly
calibrated. Finally, a _calibration measure_ $\mu$ is a function that maps a
distribution $\mathcal{D}$ and a predictor $f:\mathcal{X}\to[0,1]$ to a value
$\mu_{\mathcal{D}}(f)\in[0,1]$.
### 2.1 Framework for Measuring Distance from Calibration
The primary conceptual contribution of this work is a formal framework in
which we can reason about and compare various measures of calibration. We
elaborate upon the key ingredients of this framework.
##### The true distance to calibration.
We define the ground truth distance from calibration as the distance to the
closest calibrated predictor. Measuring distance requires a metric on the
space of all predictors. A natural metric is the $\ell_{1}$ metric given by
$\ell_{1}(f,g)=\mathop{\mathbb{E}}_{\mathcal{D}}|f(x)-g(x)|$. Accordingly we
define the true distance from calibration as
$\mathsf{dCE}_{\mathcal{D}}(f):=\inf\limits_{g\in\mathsf{cal}(\mathcal{D})}\mathop{\mathbb{E}}_{\mathcal{D}}|f(x)-g(x)|,$
(1)
where $\mathsf{cal}(\mathcal{D})$ denotes the set of predictors that are
perfectly calibrated w.r.t. $\mathcal{D}$. This definition is intuitive, and
natural from a property testing point of view [BLR90, GGR98, PRR06], but has
not been proposed before to our knowledge. Note that it is not clear how to
compute this distance efficiently: the set $\mathsf{cal}(\mathcal{D})$ is non-
convex, and in fact it is discrete when the domain $\mathcal{X}$ is discrete.
A more subtle issue is that it depends on knowing the domain $\mathcal{X}$,
whereas traditionally calibration measures only depend on the joint
distribution $\mathcal{D}_{f}$ of predictions and labels.
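On a small discrete domain, $\mathsf{dCE}$ can be computed by brute force, which makes the contrast with $\mathsf{ECE}$ concrete. The sketch below (ours, not from the paper) grid-searches candidate predictors on the two-point example; the only calibrated candidates are $g\equiv 1/2$ and $g=(0,1)$, so the distance is $\varepsilon$ even though $\mathsf{ECE}(f)\approx 1/2$:

```python
from itertools import product

# Two-point domain X = {a, b}, uniform; label(a) = 0, label(b) = 1.
labels = [0, 1]
eps = 0.01
f = [0.5 - eps, 0.5 + eps]

def is_calibrated(g, ys):
    """Check E[y | g(x) = v] = v on every level set of g (exactly, on a grid)."""
    for v in set(g):
        ys_v = [y for gi, y in zip(g, ys) if gi == v]
        if abs(sum(ys_v) / len(ys_v) - v) > 1e-12:
            return False
    return True

grid = [k / 100 for k in range(101)]
d_ce = min(
    sum(abs(fi - gi) for fi, gi in zip(f, g)) / len(f)
    for g in product(grid, repeat=2)
    if is_calibrated(g, labels)
)
print(d_ce)  # ~0.01: the nearest calibrated predictor is g == 1/2 everywhere
```

This brute force is exponential in the domain size, which illustrates why the definition, while natural, is not directly usable as an efficient measure.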
##### Access model.
Calibration measures $\mu_{\mathcal{D}}(f)$ can depend on the entire
distribution $\mathcal{D}$, as well as on the predictor
$f:\mathcal{X}\to[0,1]$. However, we would prefer measures which only depend
on the prediction-label joint distribution $\mathcal{D}_{f}$, similar to
standard loss functions in machine learning and classic calibration measures
[Daw84, Daw85, FV98]. This distinction has important consequences for the
power of calibration measures, which we describe shortly. We delineate the two
levels of access as follows:
1. 1.
Sample access (SA). In the SA model, $\mu_{\mathcal{D}}(f)$ is allowed to
depend on the full joint distribution $(x,f(x),y)$ for $(x,y)\sim\mathcal{D}$.
This terminology follows [DKR+21].
2. 2.
Prediction-only access (PA). In the PA model, $\mu_{\mathcal{D}}(f)$ is only
allowed to depend on $\mathcal{D}_{f}$, the joint distribution $(f(x),y)$ for
$(x,y)\sim\mathcal{D}$. In particular, $\mu$ cannot depend on the input domain
$\mathcal{X}$.
Observe that the ground truth distance ($\mathsf{dCE}$) is defined in the
sample access model, since Equation (1) depends on the domain and distribution
of $x$. On the other hand, we often desire measures that can be computed in
the prediction access model.
##### Robust completeness and soundness.
We propose two desiderata for any calibration measure $\mu$: robust
completeness and robust soundness, in analogy to completeness and soundness in
proof systems.
1. 1.
Robust completeness requires
$\mu_{\mathcal{D}}(f)\leq\mathcal{O}(\mathsf{dCE}_{\mathcal{D}}(f)^{c})$ for
some constant $c$. This guarantees that any predictor which is close to a
perfectly calibrated predictor (in $\ell_{1}$) has small calibration error
under $\mu$. This is a more robust guarantee than standard completeness, which
in this setting would mean just that $\mathsf{dCE}_{\mathcal{D}}(f)=0$ implies
$\mu_{\mathcal{D}}(f)=0$, but would not give any guarantees when
$\mathsf{dCE}_{\mathcal{D}}(f)$ is non-zero but small.
2. 2.
Robust soundness requires
$\mu_{\mathcal{D}}(f)\geq\Omega(\mathsf{dCE}_{\mathcal{D}}(f)^{s})$ for some
constant $s$. That is, if $\mathsf{dCE}_{\mathcal{D}}(f)$ is large then so is
$\mu_{\mathcal{D}}(f)$.
We call a calibration measure _consistent_ (or more precisely, _$(c,s)$
-consistent_) if it satisfies both robust completeness and robust soundness,
for some parameters $c,s>0$:
$\Omega(\mathsf{dCE}_{\mathcal{D}}(f)^{s})\leq\mu_{\mathcal{D}}(f)\leq\mathcal{O}(\mathsf{dCE}_{\mathcal{D}}(f)^{c}).$
(2)
Consistent measures are exactly those that are polynomially-related to the
true distance from calibration, $\mathsf{dCE}_{\mathcal{D}}(f)$. The reader
might wonder if, in our definition of consistent calibration measures
(Equation 2), we could require _constant factor_ approximations to
$\mathsf{dCE}_{\mathcal{D}}(f)$ rather than polynomial factors. It turns out
that there are information-theoretic barriers to such approximations in the
prediction-access model. The core obstacle is that the true distance
$\mathsf{dCE}$ is defined in the SA model, and one cannot compute it exactly
in the prediction-only access model or approximate it within a constant
factor. Indeed, we show that any calibration measure computable in the
prediction-access must satisfy $s/c\geq 2$: information theoretically, a
quadratic approximation is the best possible.
Another nice property of our definition is that the set of all consistent
measures stays the same, even if we define distances between predictors using
the $\ell_{p}$ metric for $p>1$ in place of $\ell_{1}$, since all $\ell_{p}$
measures are polynomially related.
##### Desiderata for calibration measures.
Given the discussion so far, we can now stipulate three desiderata that we
would like calibration measures $\mu$ to satisfy:
1. 1.
Access: $\mu$ is well-defined in the Prediction-only access model (PA).
2. 2.
Consistency: $\mu$ is $(c,s)$-consistent – that is, $\mu$ is polynomially
related to the true distance from calibration $\mathsf{dCE}$. Ideally we have
$s/c=2$, which is optimal in the PA model (Corollary 4.6).
3. 3.
Efficiency: $\mu_{\mathcal{D}}(f)$ can be computed within accuracy
$\varepsilon$ in time $\mathrm{poly}(1/\varepsilon)$ using
$\mathrm{poly}(1/\varepsilon)$ random samples from $\mathcal{D}_{f}$.
Various notions that have been proposed in the literature fail one or more of
these desiderata; $\mathsf{ECE}$ for instance fails robust completeness since
an arbitrarily small perturbation of a perfectly calibrated predictor could
result in high $\mathsf{ECE}$. We refer the reader to Table 1 for a more
complete treatment of such notions.
### 2.2 Information-theoretic Limitations of the Prediction-access Model
##### Upper and Lower Distances.
We start with the following question, which formalizes how well one can
approximate the true distance to calibration in the prediction-only access
(PA) model.
For a given distribution $\mathcal{D}$ and predictor $f$, how large or small
can $\mathsf{dCE}_{\mathcal{D}^{\prime}}(f^{\prime})$ be, among all other
$(\mathcal{D}^{\prime},f^{\prime})$ which have the same prediction-label
distribution ($\mathcal{D}^{\prime}_{f^{\prime}}=\mathcal{D}_{f}$)?
We denote the minimum and maximum by
$\underline{\mathsf{dCE}}_{\mathcal{D}}(f)$ and
$\overline{\mathsf{dCE}}_{\mathcal{D}}(f)$ respectively, which we call the
lower and upper distance to calibration respectively. Hence
$\displaystyle\underline{\mathsf{dCE}}_{\mathcal{D}}(f)\leq\mathsf{dCE}_{\mathcal{D}}(f)\leq\overline{\mathsf{dCE}}_{\mathcal{D}}(f).$
(3)
Both these quantities are defined in the PA model, in which they represent the
tightest lower and upper bounds respectively that one can prove on
$\mathsf{dCE}$. As framed, they involve considering all possible domains and
distributions $\mathcal{D}^{\prime}$ over them. But we can give simpler
characterizations of these notions.
The upper distance $\overline{\mathsf{dCE}}$ can be alternatively viewed as
the minimum distance to calibration via post-processing: it is the distance to
the closest calibrated predictor $g\in\mathsf{cal}(\mathcal{D})$ such that
$g=\kappa(f)$ can be obtained from $f$ by post-processing its predictions. For
the lower distance $\underline{\mathsf{dCE}}$, we can abstract away the domain
and ask only for a coupling between $f$ and a perfectly calibrated predictor:
Consider all joint distributions $\Pi$ of $(u,v,y)$ over
$[0,1]\times[0,1]\times\\{0,1\\}$ where $(v,y)\sim\mathcal{D}_{f}$ and the
distribution of $(u,y)$ is perfectly calibrated. How small can
$\mathop{\mathbb{E}}|u-v|$ be?
Limiting ourselves to couplings of the form $(g(x),f(x),y)\sim\mathcal{D}$
where $g\in\mathsf{cal}(\mathcal{D})$ would recover $\mathsf{dCE}$. Our
definition also permits couplings that may not be realizable on the domain
$\mathcal{X}$, giving a lower bound.
An equivalent view of these distances is that the upper distance only
considers those calibrated predictors whose level sets are obtained by a
coarsening of the level sets of $f$. The lower distance allows calibrated
predictors that are obtained by a finer partitioning of the level sets of $f$.
Theorem 5.5 proves the equivalence of these various formulations of
$\underline{\mathsf{dCE}}$ and $\overline{\mathsf{dCE}}$.
##### A Quadratic Barrier.
How tight are the lower and upper bounds in Equation (3)? That is, how tightly
can $\mathsf{dCE}$ be determined in the Prediction-only access model? In Lemma
4.5, we show that there can be at least a quadratic gap in between any two
adjacent terms in Equation (3). We construct two distributions
$\mathcal{D}^{1}$ and $\mathcal{D}^{2}$ and a predictor $f$ such that
* •
$\mathcal{D}^{1}_{f}=\mathcal{D}^{2}_{f}$, so the upper and the lower distance
are equal for both distributions, but they are well-separated from each other;
$\underline{\mathsf{dCE}}_{\mathcal{D}^{i}}(f)=\Theta(\alpha^{2})$ whereas
$\overline{\mathsf{dCE}}_{\mathcal{D}^{i}}(f)=\Theta(\alpha)$ for
$i\in\\{1,2\\}$.
* •
$\mathsf{dCE}_{\mathcal{D}^{i}}(f)$ equals either $\underline{\mathsf{dCE}}$
or $\overline{\mathsf{dCE}}$ depending on whether $i=1$ or $2$.
This example raises the question of whether an even bigger gap can exist,
which we answer next.
### 2.3 Consistent Calibration Measures
We describe three consistent calibration measures, and their relation to the
true distance from calibration.
##### Interval calibration (Section 6)
Interval calibration error is a subtle modification to the heuristic of
binning predictions into buckets and computing the expected calibration error.
Formally, given a partition $\mathcal{I}=\\{I_{1},\ldots,I_{m}\\}$ of $[0,1]$
into intervals of width bounded by $w(\mathcal{I})$, we first consider the
standard quantity
$\mathsf{binnedECE}_{\mathcal{D}}(f,\mathcal{I})=\sum_{j\in[m]}|\mathop{\mathbb{E}}[(f-y){\mathbf{1}}(f\in
I_{j})]|.$ (4)
This quantity, as the name suggests, is exactly the Binned-ECE for the bins
defined by the partition $\mathcal{I}$. We then define our notion of _Interval
calibration error_ ($\mathsf{intCE}$) as the minimum of this Binned-ECE over
all partitions $\mathcal{I}$, when “regularized” by maximum bin width
$w(\mathcal{I})$:
$\mathsf{intCE}_{\mathcal{D}}(f):=\inf\limits_{\mathcal{I}:~{}\textrm{Interval
partition}}\left(\mathsf{binnedECE}_{\mathcal{D}}(f,\mathcal{I})+w(\mathcal{I})\right).$
In Theorem 6.2, we show that $\mathsf{intCE}$ satisfies the following bounds.
$\overline{\mathsf{dCE}}_{\mathcal{D}}(f)\leq\mathsf{intCE}_{\mathcal{D}}(f)\leq
4\sqrt{\underline{\mathsf{dCE}}_{\mathcal{D}}(f)}.$
This shows that the measure $\mathsf{intCE}$ is $(1/2,1)$-consistent, and
gives the best possible (quadratic) approximation to the true distance to
calibration. The outer inequality implies that the gap between the lower and
upper distance is no more than quadratic, hence the gap exhibited in Lemma 4.5
is tight.
We now address the computational complexity. While the definition of interval
calibration minimizes over all possible interval partitions, in Section 6.2,
we show that it suffices to consider a geometrically decreasing set of values
for the width $w$, with a random shift, to get the desired upper bound on
$\mathsf{dCE}$.
Our result suggests an additional practical takeaway: if the standard binning
algorithm must be used to measure calibration, then the bin width should be
added to the binnedECE. This yields a quantity which is at least an _upper
bound_ on the true distance to calibration, which is not true without adding
the bin widths. Specifically, for _any_ interval partition $\mathcal{I}$, we
have:
$\displaystyle\overline{\mathsf{dCE}}_{\mathcal{D}}(f)\leq\mathsf{binnedECE}(f,\mathcal{I})+w(\mathcal{I}).$
(5)
Thus, if we add the bin width, then $\mathsf{binnedECE}$ can at least be used
to certify closeness to calibration. The extreme case of width $0$ buckets
corresponds to $\mathsf{ECE}$, while the case when the bucket has width $1$
corresponds to the weaker condition of accuracy in expectation [HKRR18]. It is
natural to penalize larger width buckets which allow cancellations between
calibration errors for widely separated values of $f$. The idea of using
bucket width as a penalty, so that calibration errors computed with buckets of
differing widths can be compared, is intuitive in hindsight, but to our
knowledge was not done in prior work (e.g. [MDR+21]).
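Inequality (5) suggests a simple practical recipe. The following sketch (our illustration; the fixed equal-width partition is a simplification of the randomized scheme the paper actually analyzes) reports binnedECE plus bin width on a synthetically calibrated sample:

```python
import random

def width_adjusted_binned_ece(pairs, m):
    """binnedECE over m equal-width bins plus the bin width w(I) = 1/m,
    so that the result upper-bounds the distance to calibration (Eq. (5))."""
    n = len(pairs)
    err = 0.0
    for j in range(m):
        lo, hi = j / m, (j + 1) / m
        s = sum(v - y for v, y in pairs
                if lo <= v < hi or (j == m - 1 and v == 1.0))
        err += abs(s) / n
    return err + 1.0 / m

# A perfectly calibrated synthetic sample: v uniform on [0,1], y ~ Bernoulli(v).
rng = random.Random(0)
pairs = []
for _ in range(20000):
    v = rng.random()
    pairs.append((v, int(rng.random() < v)))

for m in (5, 10, 20, 40):
    print(m, round(width_adjusted_binned_ece(pairs, m), 4))
# The width term 1/m dominates at first: values shrink as bins narrow,
# until sampling noise in the binnedECE term takes over.
```

Because the width term is explicit, values computed with different numbers of bins become comparable, avoiding the odd/even pathology of plain binned-ECE.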
##### Smooth Calibration (Section 7)
Smooth calibration is a calibration measure first defined by [KF08], see also
[FH18, GKSZ22]. Smooth calibration error is defined as the following
maximization over the family $L$ of all bounded $1$-Lipschitz functions
$w:[0,1]\to[-1,1]$:
$\mathsf{smCE}_{\mathcal{D}}(f):=\mathsf{smCE}(\mathcal{D}_{f})=\sup\limits_{w\in
L}\mathop{\mathbb{E}}_{(v,y)\sim\mathcal{D}_{f}}[w(v)(y-v)].$
Without the Lipschitz condition on functions $w$, this definition would be
equivalent to $\mathsf{ECE}(f)$. Adding the Lipschitz condition smooths out
the contribution from each neighborhood of $v$ and results in a calibration
measure that is Lipschitz in $f$ with respect to the $\ell_{1}$ distance. This
notion has found applications in game theory and leaky forecasting [KF08,
FH18]. Our main result is that the smooth calibration error captures the lower
distance from calibration up to constant factors:
$\frac{1}{2}\,\underline{\mathsf{dCE}}_{\mathcal{D}}(f)\leq\mathsf{smCE}_{\mathcal{D}}(f)\leq
2\,\underline{\mathsf{dCE}}_{\mathcal{D}}(f).$
We find this tight connection to be somewhat surprising, since $\mathsf{smCE}$
(as a maximization over weight functions $w$) and $\underline{\mathsf{dCE}}$
(as a minimization over couplings) have a priori very different definitions.
They turn out to be related via LP duality, in a way analogous to Kantorovich-
Rubinstein duality of Wasserstein distances. We present a high-level overview
of the proof at the start of Section 7. Along the way, we give an efficient
polynomial time algorithm for estimating the smooth calibration error, the
first such algorithm to our knowledge. To summarize the relations between
notions discussed so far, we have
$\displaystyle\boxed{\mathsf{smCE}\approx\underline{\mathsf{dCE}}\leq\mathsf{dCE}\leq\overline{\mathsf{dCE}}\leq\mathsf{intCE}\leq
4\sqrt{\underline{\mathsf{dCE}}}}$ (6)
For each of the first three inequalities, we show that the gap can be
quadratic. The final inequality shows that these gaps are _at most_ quadratic.
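The linear-programming estimator for $\mathsf{smCE}$ mentioned above can be sketched as follows, assuming SciPy's `linprog`. On the real line, enforcing the $1$-Lipschitz constraint between consecutive sorted predictions implies it for all pairs, so $\mathcal{O}(n)$ constraints suffice.

```python
import numpy as np
from scipy.optimize import linprog

def smooth_ce(v, y):
    """Empirical smooth calibration error: maximize E[w(v)(y - v)] over
    1-Lipschitz w: [0,1] -> [-1,1], as an LP in the values w_i = w(v_i)."""
    v, y = np.asarray(v, float), np.asarray(y, float)
    order = np.argsort(v)
    v, y = v[order], y[order]
    n = len(v)
    c = -(y - v) / n              # linprog minimizes, so negate the objective
    A_ub, b_ub = [], []
    for i in range(n - 1):        # |w_{i+1} - w_i| <= v_{i+1} - v_i
        row = np.zeros(n)
        row[i], row[i + 1] = -1.0, 1.0
        A_ub += [row, -row]
        b_ub += [v[i + 1] - v[i]] * 2
    res = linprog(c, A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  bounds=[(-1, 1)] * n)
    return -res.fun
```

For example, with $v=(0.2,0.8)$ and $y=(1,0)$ the optimum is $0.4\,(w_{1}-w_{2})$ subject to $w_{1}-w_{2}\leq 0.6$, giving $0.24$; the Lipschitz constraint, not the $[-1,1]$ bound, is what binds.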
##### Kernel Calibration (Section 8)
The notion of kernel calibration error was introduced in [KSJ18] as _Maximum
Mean Calibration Error_ (MMCE). Kernel calibration can be viewed as a variant
of smooth calibration error, where we use as weight functions
$w:[0,1]\to\mathbb{R}$ which are bounded with respect to a norm
$\|\cdot\|_{K}$ on the _Reproducing Kernel Hilbert Space_ associated with some
positive-definite kernel $K$:
$\mathsf{kCE}^{K}_{\mathcal{D}}(f):=\sup\limits_{w:\|w\|_{K}\leq
1}~{}\mathop{\mathbb{E}}_{(v,y)\sim\mathcal{D}_{f}}[w(v)(y-v)].$
When $\mathcal{D}_{f}$ is an empirical distribution over samples
$\\{(v_{1},y_{1}),\ldots,(v_{n},y_{n})\\}$, this can be computed as
$\mathsf{kCE}^{K}_{\mathcal{D}}(f)=\sqrt{\frac{1}{n^{2}}\sum_{i,j}(y_{i}-v_{i})(y_{j}-v_{j})K(v_{i},v_{j})}.$
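On an empirical sample, the quadratic form above can be evaluated directly; a minimal sketch with the Laplace kernel $K(u,w)=e^{-|u-w|}$:

```python
import numpy as np

def kernel_ce(v, y):
    """Empirical Laplace-kernel calibration error:
    sqrt( (1/n^2) sum_{i,j} r_i r_j K(v_i, v_j) ), residuals r_i = y_i - v_i."""
    v, y = np.asarray(v, float), np.asarray(y, float)
    r = y - v
    K = np.exp(-np.abs(v[:, None] - v[None, :]))
    q = r @ K @ r / len(v) ** 2
    return float(np.sqrt(max(q, 0.0)))  # clip tiny negative round-off
```

A single sample $(v,y)=(0.5,1)$ gives $\sqrt{0.5^{2}\,K(0.5,0.5)}=0.5$, while two samples with the same prediction and opposite residuals cancel exactly and give $0$.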
The original motivation for introducing the kernel calibration error was to
provide a differentiable proxy for ECE, allowing the calibration error to be
explicitly penalized during the training of a neural network. However,
[KSJ18] does not discuss how the choice of the kernel affects the resulting
measure, although they used the Laplace kernel in their experiments. We prove
here that this choice has strong theoretical justification: the kernel
calibration error with respect to the Laplace kernel is a consistent
calibration measure;
specifically for some positive absolute constants $c_{1},c_{2}>0$,
$c_{1}\underline{\mathsf{dCE}}(f)\leq\mathsf{kCE}^{\mathsf{Lap}}(f)\leq
c_{2}\sqrt{\underline{\mathsf{dCE}}(f)}.$
This says that we can view kernel calibration with respect to the Laplace
kernel as a fundamental measure in its own right, as opposed to a proxy for (the
otherwise flawed) $\mathsf{ECE}$. We also show that the choice of kernel is in
fact crucial: for the Gaussian kernel, another commonly used kernel across
machine learning, the resulting measure is not robustly sound anymore (Theorem
8.6).
### 2.4 Better Algorithms and Sample Complexity
For many of the measures discussed in the paper, we provide efficient
algorithms yielding an $\varepsilon$ additive approximation to the measure in
question, using samples from the distribution $\mathcal{D}_{f}$. In most
cases, these results follow a two-step paradigm: we give an algorithm that
approximates the measure on a finite sample, followed by a generalization
bound. Our generalization bounds follow from essentially standard bounds on
Rademacher complexity of the function families involved in defining our
measures (e.g. bounding the Rademacher complexity of 1-Lipschitz functions for
$\mathsf{smCE}$). On the algorithmic side, we prove that the $\mathsf{smCE}$
on the empirical distribution over a sample of size $n$ can be computed by
solving a linear program with $\mathcal{O}(n)$ variables and constraints.
Similarly, the $\underline{\mathsf{dCE}}$ can be approximated up to an error
$\varepsilon$ by linear-time preprocessing followed by solving a linear
program with $\mathcal{O}(\varepsilon^{-1})$ variables and constraints.
We provide an alternate algorithm for estimating the kernel calibration error
with Laplace kernel, using the _Random Features Sampling_ technique from
[RR07]. This algorithm does not improve on naive estimators in worst-case
guarantees, but it reveals an intriguing connection. After unwrapping the
random features abstraction, the final algorithm is similar to the popular
interval binning calibration estimator, where we choose the length of the
interval at random from a specific distribution, and introduce a uniformly
random shift. We find it surprising that an estimate of this type is _exactly_
equal to $(\mathsf{kCE}^{\mathsf{Lap}})^{2}$ in expectation.
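A direct Monte Carlo form of the random-features idea can be sketched as follows (this is the raw estimator, not the binning-based rewriting described above). By Bochner's theorem, $e^{-|t|}$ is the characteristic function of the standard Cauchy distribution, so cosine/sine features at Cauchy-distributed frequencies give an unbiased estimate of the empirical $(\mathsf{kCE}^{\mathsf{Lap}})^{2}$; the sample values below are illustrative.

```python
import numpy as np

def kce_lap_sq_rf(v, y, m=50_000, rng=None):
    """Monte Carlo estimate of the squared empirical Laplace-kernel
    calibration error via random Fourier features: for w ~ Cauchy(0,1),
    E_w[cos(w t)] = exp(-|t|), the Laplace kernel."""
    rng = np.random.default_rng(rng)
    v, y = np.asarray(v, float), np.asarray(y, float)
    r = (y - v) / len(v)
    w = rng.standard_cauchy(m)          # spectral sample for exp(-|t|)
    zc = np.cos(np.outer(w, v)) @ r     # one cosine/sine feature per frequency
    zs = np.sin(np.outer(w, v)) @ r
    # E[zc^2 + zs^2] = (1/n^2) sum_{i,j} r_i r_j exp(-|v_i - v_j|)
    return float(np.mean(zc ** 2 + zs ** 2))

v, y = np.array([0.1, 0.9]), np.array([1.0, 0.0])
r = y - v
direct = r @ np.exp(-np.abs(v[:, None] - v[None, :])) @ r / len(v) ** 2
est = kce_lap_sq_rf(v, y, rng=0)        # agrees with `direct` up to ~1/sqrt(m)
```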
## 3 Related Work
Metric | Continuity | Completeness | Soundness
---|---|---|---
($\ell_{p}$-)ECE | ✗ | ✓ | ✓
Binned-ECE | ✗ | ✓ | ✗
Proper Scoring Rules (Brier, NLL) | ✓ | ✗ | ✓
NCE [SGR97] | ✓ | ✗ | ✓
ECCE [AIGT+22] | ✗ | ✓ | ✓
MMCE [KSJ18] | ✓ | ✓ | ✓
smCE [KF08] | ✓ | ✓ | ✓
Table 1: Calibration measures proposed in, or based on, prior works.
We start by discussing the high-level relation between our work and other
areas of theoretical computer science, and then discuss work on calibration in
machine learning and forecasting.
##### Property testing.
Our framework for defining the distance to calibration is inspired by the
elegant body of literature on property testing [BLR90, RS96, GGR98]. Indeed,
the notions of ground truth distance to calibration, robust completeness and
robust soundness are in direct correspondence to notions in the literature on
tolerant property testing and distance estimation [PRR06]. Like in property
testing, algorithms for estimating calibration measures operate under
stringent resource constraints, although the constraints are different. In
property testing, the algorithm only has a local view of the object based on a
few queries. In our setting, the constraint comes from having to operate in
the prediction-only access model whereas the ground truth distance is defined
in the sample-access model.
##### Multicalibration.
Recent interest in calibration and its variants in theoretical computer
science has been spurred by the work of [HKRR18] introducing multicalibration
as a group fairness notion (see also [KMR17, KNRW18]). This notion has proved
to be unexpectedly rich even beyond the context of fairness, with connections
to indistinguishability [DKR+21] and loss minimization [GKR+22]. Motivated by
the goal of finding more computationally efficient notions of
multicalibration, notions such as low-degree multicalibration [GKSZ22] and
calibrated multiaccuracy have been analyzed in the literature [GHK+23], some
of these propose new calibration measures.
##### Level of access to the distribution.
The sample access model is considered in the work of [DKR+21], who relate it
to the notion of multicalibration [HKRR18]. Prediction-only access is a
restriction of sample access which is natural in the context of calibration,
and is incomparable to the no-access model of [DKR+21] where one gets access to
point-label pairs. This model is not considered explicitly in [DKR+21], and
the name prediction-only access for it is new. But the model itself is well-
studied in the literature on calibration [Daw85, Daw82a, FV98], indeed all
existing notions of calibration that we are aware of are defined in the PA
model, as are the commonly used losses in machine learning.
##### Prior work on calibration measures.
Several prior works have proposed alternate measures of calibration (Table 1
lists a few of them). Most focus on the how: they give a formula or procedure
for computing the calibration measure from a finite set of samples,
sometimes accompanied by a generalization guarantee that connects it to some
property of the population. There is typically not much justification for why
the population quantity is a good measure of calibration error, or discussion
of its merits relative to other notions in the literature (notable exceptions
are the works of [KF08, FH18]). The key distinction in our work is that we
start from a clear and intuitive ground truth notion and desiderata for
calibration measures, we analyze measures based on how well they satisfy these
desiderata, and then give efficient estimators and generalization guarantees
for consistent calibration measures.
Our desiderata reveal important distinctions between measures that were
proposed previously; Table 1 summarizes how well calibration measures
suggested in prior works satisfy our desiderata. It shows that a host of
calibration measures based on variants of $\mathsf{ECE}$, binning and proper
scoring rules fail to give basic guarantees. We present these guarantees
formally in Section 4.3. Briefly, many ECE variants suffer from the same flaws
as ECE itself, and proper scoring rules suffer different issues we describe
below. We also discuss other notions of calibration from the literature in
Appendix A.
Proper Scoring Rules. Proper scoring rules such as the Brier Score [Bri50] or
Negative-Log-Loss (NLL) are popular proxies for miscalibration. Every proper
scoring rule satisfies soundness, since if the score is $0$, the function $f$
is perfectly calibrated. However, such rules violate completeness: there are
perfectly-calibrated functions for which the score is non-zero. For example,
if the true distribution on labels is $p(y|x)=\textrm{Bernoulli}(0.5)$, then the
constant function $f(x)=0.5$ is perfectly calibrated but has non-zero Brier
score. This is because proper scoring rules measure predictive quality, not
just calibration. The same holds for Normalized Cross Entropy (NCE), which is
sound but not complete.
Smooth and Kernel Calibration. We show that some definitions in the literature
do satisfy our desiderata, namely the notions of weak calibration (smooth
calibration in our terminology) introduced in [KF08], and MMCE (calibration
with a Laplace kernel in our terminology) introduced in [KSJ18]. Smooth
calibration was introduced under the name “weak calibration” in [KF08], the
terminology of smooth calibration is from [GKSZ22]111[FH18] introduced a
notion of “smooth calibration” with an unrelated definition, but thankfully,
they proved that their “smooth calibration” is in fact polynomially related to
the [GKSZ22] notion — therefore in our framework it is also a consistent
calibration measure.. Interestingly, these were developed with different
motivations. MMCE was proposed by [KSJ18] for practical reasons: as a
differentiable proxy for ECE, to allow optimizing for calibration via
backpropagation. One of the motivations behind smooth calibration, discussed
in both [KF08, FH18] was to address the discontinuity of the standard binning
measures of calibration and $\mathsf{ECE}$. But its main application was as a
weakening of perfect calibration, to study the power of deterministic
forecasts in the online setting and derandomize the classical result of [FV98]
on calibrated forecasters.
Our work establishes that these measures are not just good ways to measure
calibration, they are more fundamental than previously known. Smooth
calibration is within constant factors of the lower distance to calibration,
and yields the best possible quadratic approximation to the true distance to
calibration.
## 4 A Framework for Calibration Measures
In this section, we will present our framework for calibration measures. We
start by characterizing the set of perfectly calibrated predictors. We then
propose our ground truth notion of distance from calibration, in analogy to
the distance from a code in property testing. Building on this, we formulate
robust completeness and soundness guarantees that we want calibration measures
to satisfy. Finally, we show information theoretic reasons why any calibration
measure can only hope to give a quadratic approximation to the ground truth
distance. We would like to emphasize the new definitions and the rationale
behind them, hence most proofs are deferred to Appendix B.1 to streamline the
flow. We start with some notation.
##### Notation.
Let $\mathcal{X}$ be a discrete domain.222We will assume that the domain
$\mathcal{X}$ is discrete but possibly very large. As a consequence,
$\operatorname{Im}(f)$ is discrete, and events such as $f(x)=v$ for
$v\in\operatorname{Im}(f)$ are well defined. We can think of the finiteness
assumption reflecting the fact that inputs to any model have finite precision.
We do this to avoid measure-theoretic intricacies, but assuming
$f:\mathcal{X}\to[0,1]$ is measurable should suffice when $\mathcal{X}$ is
infinite. Let $\mathcal{D}$ be a distribution on
$\mathcal{X}\times\\{0,1\\}$; we denote a sample from $\mathcal{D}$ by
$(x,y)\sim\mathcal{D}$ where $x\in\mathcal{X},y\in\\{0,1\\}$. A predictor is a
function $f:\mathcal{X}\rightarrow[0,1]$, where $f(x)$ is an estimate of
$\Pr[y=1|x]$. We define the Bayes optimal predictor $f^{*}$ as
$f^{*}(x)=\mathop{\mathbb{E}}[y|x]$. Note that $\mathcal{D}$ is completely
specified by the marginal distribution $\mathcal{D}_{\mathcal{X}}$ on
$\mathcal{X}$, and the conditional expectations $f^{*}$. We let
$\mathcal{F}_{\mathcal{X}}$ denote the set of all predictors
$f:\mathcal{X}\rightarrow[0,1]$. We define the $\ell_{1}$ distance in
$\mathcal{F}_{\mathcal{X}}$ as
$\ell_{1}(f,g)=\mathop{\mathbb{E}}_{\mathcal{D}}|f(x)-g(x)|.$
For a distribution $\mathcal{D}$ and predictor $f$, we use $\mathcal{D}_{f}$
to denote the distribution over $\operatorname{Im}(f)\times\\{0,1\\}$ of
$(f(x),y)$ where $(x,y)\sim\mathcal{D}$. Two predictors $f$ and $g$ might be
far apart in $\ell_{1}$, yet $\mathcal{D}_{f}$ and $\mathcal{D}_{g}$ can be identical.
333Consider the uniform distribution on $\mathcal{X}=\\{0,1\\}$ and let
$f^{*}(x)=1/2$ so labels are drawn uniformly. Consider the predictors $f(x)=x$
and $g(x)=1-x$. While $\ell_{1}(f,g)=1$, the distributions $\mathcal{D}_{f}$
and $\mathcal{D}_{g}$ are identical, since each of $f$ and $g$ is uniform on
$\\{0,1\\}$, and the labels are uniform conditioned on either predictor.
A calibration measure $\mu$ is a function that maps a distribution
$\mathcal{D}$ and a predictor $f:\mathcal{X}\to[0,1]$ to a value
$\mu_{\mathcal{D}}(f)\in[0,1]$. A crucial question is the level of access to
the underlying distribution that a procedure for computing $\mu$ has. We refer
to the setting where an algorithm has access to $(x,f(x),y)$ for
$(x,y)\sim\mathcal{D}$ as the sample-access model or SA model for short
following [DKR+21]. Calibration measures are typically defined in the more
restricted prediction-only access model or PA model for short, where we only
get access to the joint distribution $\mathcal{D}_{f}$ of prediction-label
pairs $(f,y)$. Such calibration measures $\mu$ can be defined as follows: we
first define $\mu(\Gamma)\in[0,1]$ for every distribution $\Gamma$ over
$[0,1]\times\\{0,1\\}$, and then for a distribution $\mathcal{D}$ and a
predictor $f$, we define $\mu_{\mathcal{D}}(f)$ to be $\mu(\mathcal{D}_{f})$.
We say a distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$ is _perfectly
calibrated_ if $\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[y|v]=v$. For a
distribution $\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$, we say a
predictor $f:\mathcal{X}\to[0,1]$ is _perfectly calibrated_ w.r.t.
$\mathcal{D}$ if $\mathcal{D}_{f}$ is perfectly calibrated. We use
$\mathsf{cal}(\mathcal{D})$ to denote the set of predictors $f$ that are
perfectly calibrated w.r.t. $\mathcal{D}$.
There is an injection from $\mathsf{cal}(\mathcal{D})$ to the set of
partitions of the domain $\mathcal{X}$. A consequence is that when
$\mathcal{X}$ is finite, so is $\mathsf{cal}(\mathcal{D})$. In particular,
$\mathsf{cal}(\mathcal{D})$ is not a convex subset of
$\mathcal{F}_{\mathcal{X}}$. We describe the injection below for completeness,
although it is not crucial for our results. For a partition
$\mathcal{S}=\\{S_{i}\\}_{i=1}^{m}$, we define
$g_{\mathcal{S}}(x)=\mathop{\mathbb{E}}[y|x\in S_{i}]$ for all $x\in S_{i}$. It is clear that
$g_{\mathcal{S}}\in\mathsf{cal}(\mathcal{D})$. For a predictor $f$, let
$\mathrm{level}(f)$ be the partition of the domain $\mathcal{X}$ given by its
level sets. By the definition of calibration, $f\in\mathsf{cal}(\mathcal{D})$
iff it is equal to $g_{\mathrm{level}(f)}$, which establishes the injection.
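The construction of $g_{\mathcal{S}}$ and its calibration can be checked mechanically on a toy distribution; `partitions` below is a generic set-partition enumerator and the numerical values are illustrative, not from the paper.

```python
def partitions(xs):
    """Yield all set partitions of the list xs."""
    if not xs:
        yield []
        return
    for part in partitions(xs[1:]):
        for i in range(len(part)):
            # add xs[0] to an existing cell
            yield part[:i] + [[xs[0]] + part[i]] + part[i + 1:]
        # or put xs[0] in its own cell
        yield [[xs[0]]] + part

# Toy distribution: D_X uniform on X, f_star(x) = E[y | x] (illustrative values)
X = [0, 1, 2]
f_star = {0: 0.1, 1: 0.6, 2: 0.9}

for part in partitions(X):
    # g_S(x) = E[y | x in S_i]: constant on each cell of the partition
    g = {x: sum(f_star[z] for z in S) / len(S) for S in part for x in S}
    # calibration: E[y | g_S = v] = v on every level set of g_S
    for val in set(g.values()):
        level = [x for x in X if g[x] == val]
        assert abs(sum(f_star[x] for x in level) / len(level) - val) < 1e-9
```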
### 4.1 Desiderata for Calibration Measures
A calibration measure $\mu$ is a function that for a given distribution
$\mathcal{D}$, maps predictors $f$ in $\mathcal{F}_{\mathcal{X}}$ to values in
$[0,1]$. We denote this value as $\mu_{\mathcal{D}}(f)$. At the bare minimum,
we want $\mu_{\mathcal{D}}$ to satisfy completeness and soundness, meaning
that for all $\mathcal{D},f$,
$\displaystyle\mu_{\mathcal{D}}(f)=0\ $ $\displaystyle\text{if}\
f\in\mathsf{cal}(\mathcal{D})\ \ $ (Completeness)
$\displaystyle\mu_{\mathcal{D}}(f)>0\ $ $\displaystyle\text{if}\
f\not\in\mathsf{cal}(\mathcal{D})\ \ $ (Soundness)
Ideally, we want these guarantees to be robust: $\mu(f)$ is small if $f$ is
close to calibrated, and large if $f$ is far from calibrated. Formalizing this
requires us to specify how we wish to measure the distance from calibration. A
family of metrics $m$ is a collection of metrics $m_{\mathcal{D}}$ on
$\mathcal{F}_{\mathcal{X}}$ for every distribution $\mathcal{D}$ on
$\mathcal{X}$. For instance, the $\ell_{p}$ distance on
$\mathcal{F}_{\mathcal{X}}$ under distribution $\mathcal{D}$ for $p\geq 1$ is
given by
$\ell_{p,\mathcal{D}}(f,g)=\mathop{\mathbb{E}}_{\mathcal{D}}[|f(x)-g(x)|^{p}]^{1/p}$
We note that $m_{\mathcal{D}}$ ought to depend only on the marginal
$\mathcal{D}_{\mathcal{X}}$ on $\mathcal{X}$. When the distribution
$\mathcal{D}$ is clear, we will sometimes suppress the dependence on the
distribution and refer to $m$ as a metric rather than a family. Indeed, it is
common to refer to the above distance as $\ell_{p}$ distance, ignoring the
dependence on $\mathcal{D}$.
###### Definition 4.1 (True distance to calibration).
Given a metric family $m$ on $\mathcal{F}_{\mathcal{X}}$, we define the true
$m$-distance to calibration under $\mathcal{D}$ as
$\mathsf{dCE}^{m}_{\mathcal{D}}(f)=\min_{g\in\mathsf{cal}(\mathcal{D})}m_{\mathcal{D}}(f,g).$
With this definition in place, we define consistent calibration measures with
respect to $m$.
###### Definition 4.2 (Consistent calibration measures).
For $c,s\geq 0$, we say that $\mu$ satisfies $c$-robust completeness w.r.t.
$m$ if there exists a constant $a\geq 0$ such that for every distribution
$\mathcal{D}$ on $\mathcal{X}\times\\{0,1\\}$, and predictor
$f\in\mathcal{F}_{\mathcal{X}}$
$\displaystyle\mu_{\mathcal{D}}(f)\leq
a(\mathsf{dCE}^{m}_{\mathcal{D}}(f))^{c}$ (Robust completeness)
and $s$-robust soundness w.r.t. $m$ if there exists $b\geq 0$ such that for
every distribution $\mathcal{D}$ on $\mathcal{X}\times\\{0,1\\}$, and
predictor $f\in\mathcal{F}_{\mathcal{X}}$
$\displaystyle\mu_{\mathcal{D}}(f)\geq
b(\mathsf{dCE}_{\mathcal{D}}^{m}(f))^{s}.$ (Robust soundness)
We say that $\mu$ is a $(c,s)$-consistent calibration measure w.r.t. $m$ if
both these conditions hold, and we define its approximation degree to be
$s/c$. 444For metrics that can take on arbitrarily small values (such as the
$\ell_{p}$ metrics), it follows that $s>c$. We say $\mu$ is a consistent
calibration measure w.r.t. $m$ if there exist $c,s\geq 0$ for which
$(c,s)$-consistency for $m$ holds.
To see that these names indeed make sense, observe that if $f$ is
$\varepsilon$-close to being perfectly calibrated, then robust completeness
ensures that $\mu_{\mathcal{D}}(f)$ is $O(\varepsilon^{c})$ and hence goes to
$0$ with $\varepsilon$. Robust soundness ensures that if
$\mu_{\mathcal{D}}(f)=\varepsilon\to 0$, then
$\mathsf{dCE}^{m}_{\mathcal{D}}(f)=O(\varepsilon^{1/s})\to 0$. Conversely,
when the true $m$-distance to calibration for $f$ is $\eta\gg 0$, robust
soundness ensures that $\mu_{\mathcal{D}}(f)=\Omega(\eta^{s})$ is also bounded
away from $0$.
Given a sequence of predictors $\\{f_{n}\\}$, we say that the sequence
converges to $f\in\mathcal{F}_{\mathcal{X}}$, denoted $f_{n}\to f$, if
$\lim_{n\to\infty}m_{\mathcal{D}}(f_{n},f)=0.$
Robust soundness ensures that if $f_{n}\to g\in\mathsf{cal}(\mathcal{D})$,
then $\mu(f_{n})\to 0$. $\mathsf{dCE}^{m}_{\mathcal{D}}$ satisfies a stronger
continuity property, namely that it is $1$-Lipschitz with respect to
$m_{\mathcal{D}}$:
$|\mathsf{dCE}^{m}_{\mathcal{D}}(f)-\mathsf{dCE}^{m}_{\mathcal{D}}(f^{\prime})|\leq
m_{\mathcal{D}}(f,f^{\prime}).$
This property is easy to verify from the definition. It implies that for any
$f\in\mathcal{F}_{\mathcal{X}}$ not necessarily calibrated, if $f_{n}\to f$,
$\mu(f_{n})\to\mu(f)$. Not every $\ell_{1}$-consistent calibration measure
has this stronger property of convergence everywhere, although some do.
Indeed, the following lemma implies that among all calibration measures that
satisfy completeness and are $1$-Lipschitz with respect to $m_{\mathcal{D}}$,
$\mathsf{dCE}^{m}_{\mathcal{D}}$ is the largest. Thus any consistent
calibration measure that can grow as $\omega(\mathsf{dCE}^{m})$ cannot be
Lipschitz.
###### Lemma 4.3.
Any calibration measure $\mu_{\mathcal{D}}$ which satisfies completeness and
is $L$-Lipschitz w.r.t $m_{\mathcal{D}}$ must satisfy
$\mu_{\mathcal{D}}(f)\leq L\ \mathsf{dCE}_{\mathcal{D}}(f)$ for all
$f\in\mathcal{F}_{\mathcal{X}}$.
Metric families that are particularly important to us are the $\ell_{p}$
metrics. Since all $\ell_{p}$ metrics are polynomially related, the set of
$\ell_{p}$-consistent calibration measures is independent of $p$ for bounded
$p$.
###### Lemma 4.4.
The set of $\ell_{p}$-consistent calibration measures is identical for all
$p\in[1,\infty)$.
Given this result, we will focus on the $\ell_{1}$ metric and define the true
distance to calibration by
$\displaystyle\mathsf{dCE}_{\mathcal{D}}(f):=\mathsf{dCE}_{\mathcal{D}}^{\ell_{1}}(f)=\min_{g\in\mathsf{cal}(\mathcal{D})}\ell_{1,\mathcal{D}}(f,g).$
(True distance from calibration)
It has good continuity properties, and the resulting set of consistent
calibration measures does not depend on the choice of the $\ell_{p}$ metric.
Henceforth when we refer to $(c,s)$-consistent calibration metrics without
making $m$ explicit, it is assumed that we mean the $\ell_{1}$ distance. We
note that there might be settings (not considered in this work) where other
metrics on $\mathcal{F}_{\mathcal{X}}$ are suitable.
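Since every calibrated predictor arises as $g_{\mathcal{S}}$ for some partition $\mathcal{S}$ (the correspondence described after the definition of $\mathsf{cal}(\mathcal{D})$), the true distance to calibration can be computed exactly on a tiny domain by brute force over set partitions. A sketch with illustrative numbers:

```python
def partitions(xs):
    """Yield all set partitions of the list xs."""
    if not xs:
        yield []
        return
    for part in partitions(xs[1:]):
        for i in range(len(part)):
            yield part[:i] + [[xs[0]] + part[i]] + part[i + 1:]
        yield [[xs[0]]] + part

def dce(px, f_star, f):
    """min over partitions S of E_x |f(x) - g_S(x)|, with
    g_S(x) = E[y | x in S_i]; px maps x -> D_X(x), f_star maps x -> E[y|x]."""
    best = float("inf")
    for part in partitions(list(px)):
        err = 0.0
        for S in part:
            mass = sum(px[x] for x in S)
            mean = sum(px[x] * f_star[x] for x in S) / mass
            err += sum(px[x] * abs(f[x] - mean) for x in S)
        best = min(best, err)
    return best

# Two-point example: f* is the identity on {0,1}, f_eps is nearly constant 1/2.
eps = 0.1
px = {0: 0.5, 1: 0.5}
f_star = {0: 0.0, 1: 1.0}
f_eps = {0: 0.5 - eps, 1: 0.5 + eps}
# merging both points gives the calibrated g = 1/2 at l1 distance eps,
# which beats the singleton partition (distance 1/2 - eps) for small eps
```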
### 4.2 Approximation Limits in the PA Model
Given the desirable properties of $\mathsf{dCE}$, one might wonder: why not
use $\mathsf{dCE}$ as a calibration measure in itself? The main barrier to
this is that $\mathsf{dCE}$ cannot be computed (or even defined) in the
prediction-only access model. Indeed, if it were, there would be no need to
look for alternative notions of approximate calibration.
###### Lemma 4.5.
Let $\alpha\in(0,1/2]$. There exists a domain $\mathcal{X}$, a predictor
$f\in\mathcal{F}_{\mathcal{X}}$, and distributions
$\mathcal{D}^{1},\mathcal{D}^{2}$ on $\mathcal{X}\times\\{0,1\\}$ such that
* •
The distributions $\mathcal{D}^{1}_{f}$ and $\mathcal{D}^{2}_{f}$ are
identical.
* •
$\mathsf{dCE}_{\mathcal{D}^{1}}(f)\leq 4\alpha^{2}$, while
$\mathsf{dCE}_{\mathcal{D}^{2}}(f)\geq\alpha$.
###### Proof.
We consider the space $\mathcal{X}=\\{00,01,10,11\\}$. $\mathcal{D}^{1}$ and
$\mathcal{D}^{2}$ will share the same marginal distribution
$\mathcal{D}_{\mathcal{X}}$ on $\mathcal{X}$ given by
$\displaystyle\mathcal{D}_{\mathcal{X}}(x)=\begin{cases}\alpha\ \text{if}\
x\in\\{00,11\\}\\\ \frac{1}{2}-\alpha\ \ \text{if}\
x\in\\{01,10\\}\end{cases}$
The predictor $f:\mathcal{X}\to[0,1]$ is given by
$\displaystyle f(x)=\begin{cases}\frac{1}{2}-\alpha\ \text{if}\ x_{1}=1\\\
\frac{1}{2}+\alpha\ \text{if}\ x_{1}=0\end{cases}$
For the distribution $\mathcal{D}^{1}$, the conditional probabilities are
given by $f^{*}_{1}(x)=(x_{1}+x_{2})/2$. It is easy to check that the
predictor $g_{1}$ defined as
$\displaystyle g_{1}(x)=\begin{cases}\frac{1}{2}-\alpha\ \text{if}\ x_{2}=0\\\
\frac{1}{2}+\alpha\ \ \text{if}\ x_{2}=1\end{cases}$
lies in $\mathsf{cal}(\mathcal{D}^{1})$ and
$\displaystyle|f(x)-g_{1}(x)|=\begin{cases}0\ \text{for}\ x\in\\{01,10\\},\\\
2\alpha\ \text{for}\ x\in\\{00,11\\},\end{cases}$
Since the points $00$ and $11$ carry total mass $2\alpha$, it follows that
$\ell_{1}(f,g_{1})=4\alpha^{2}$, hence
$\mathsf{dCE}_{\mathcal{D}^{1}}(f)\leq 4\alpha^{2}$.
The conditional probabilities for the distribution $\mathcal{D}^{2}$ are given
by
$\displaystyle f^{*}_{2}(x)=\begin{cases}\frac{1}{2}+\alpha\ \text{if}\
x_{1}=1\\\ \frac{1}{2}-\alpha\ \text{if}\ x_{1}=0\end{cases}$
One can verify that the closest calibrated predictor is the constant $1/2$
predictor, so that $\mathsf{dCE}_{\mathcal{D}^{2}}(f)\geq\alpha.$
To verify that $\mathcal{D}^{1}_{f}=\mathcal{D}^{2}_{f}$, we observe that
under either distribution
$\displaystyle\mathop{\mathbb{E}}\left[y|f=\frac{1}{2}-\alpha\right]=\frac{1}{2}+\alpha,\
\ \mathop{\mathbb{E}}\left[y|f=\frac{1}{2}+\alpha\right]=\frac{1}{2}-\alpha.$
∎
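The key step of the proof, $\mathcal{D}^{1}_{f}=\mathcal{D}^{2}_{f}$, can be verified mechanically with exact rational arithmetic; the value $\alpha=1/5$ below is an illustrative choice.

```python
from fractions import Fraction as F

alpha = F(1, 5)                         # any alpha in (0, 1/2] works
X = ["00", "01", "10", "11"]
px = {x: (alpha if x in ("00", "11") else F(1, 2) - alpha) for x in X}
f = {x: (F(1, 2) - alpha if x[0] == "1" else F(1, 2) + alpha) for x in X}
fstar1 = {x: F(int(x[0]) + int(x[1]), 2) for x in X}     # D^1: (x1 + x2)/2
fstar2 = {x: (F(1, 2) + alpha if x[0] == "1"             # D^2: depends on x1
              else F(1, 2) - alpha) for x in X}

def induced(fstar):
    """D_f: for each prediction value v, its mass and E[y | f = v]."""
    out = {}
    for v in set(f.values()):
        level = [x for x in X if f[x] == v]
        mass = sum(px[x] for x in level)
        out[v] = (mass, sum(px[x] * fstar[x] for x in level) / mass)
    return out

# both distributions induce the same prediction-label distribution
assert induced(fstar1) == induced(fstar2)
```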
Lemma 4.5 leads us to the quest for approximations that can be computed
(efficiently) in the prediction-only access model. It also implies that one
can at best hope for a degree-$2$ approximation to $\mathsf{dCE}$.
###### Corollary 4.6.
Let $\mu(f)$ be a $(c,s)$-consistent calibration measure w.r.t. $\ell_{1}$
that is computable in the prediction-only access model. Then $s\geq 2c$.
###### Proof.
Using $c$-robust completeness for $\mathcal{D}^{1}$ we have
$\mu_{\mathcal{D}^{1}}(f)\leq a(4\alpha^{2})^{c}.$
Using $s$-robust soundness for $\mathcal{D}^{2}$ we have
$\mu_{\mathcal{D}^{2}}(f)\geq b\alpha^{s}.$
Since $\mathcal{D}^{1}_{f}=\mathcal{D}^{2}_{f}$ and $\mu$ is computable in the
PA model, $\mu_{\mathcal{D}^{1}}(f)=\mu_{\mathcal{D}^{2}}(f)$. Hence
$b\alpha^{s}\leq a(4\alpha^{2})^{c}$, which gives
$\alpha^{s}\leq\frac{4^{c}a}{b}\,\alpha^{2c}.$
If $s<2c$, the left-hand side dominates as $\alpha\to 0$, a contradiction. ∎
Given this setup, we can now state our desiderata for an ideal calibration
measure $\mu$.
1. 1.
Access: $\mu_{\mathcal{D}}(f)=\mu(\mathcal{D}_{f})$ is well defined in the
Prediction-only access model.
2. 2.
Consistency: It is $(c,s)$-consistent. Ideally, it has degree $s/c=2$.
3. 3.
Efficiency: It can be computed within accuracy $\varepsilon$ in time
$\mathrm{poly}(1/\varepsilon)$ using $\mathrm{poly}(1/\varepsilon)$ random
samples from $\mathcal{D}_{f}$.
### 4.3 On $\mathsf{ECE}$ and Other Measures
Recall that for a predictor $f$, we define its expected calibration error
$\mathsf{ECE}(f)$ as
$\mathsf{ECE}(f)=\mathop{\mathbb{E}}[|\mathop{\mathbb{E}}[y|f]-f|].$
Clearly, $\mathsf{ECE}$ is well defined in the PA model. We analyze
$\mathsf{ECE}$ in our framework and show that it satisfies $1$-robust
soundness, but not robust completeness. For the former, we present an
alternate view of $\mathsf{ECE}$ in terms of $\ell_{1}$ distance. Recall that
$\mathrm{level}(f)$ is the partition of $\mathcal{X}$ into the level sets of
$f$, and that for a partition $\mathcal{S}=\\{S_{i}\\}$, the predictor
$g_{\mathcal{S}}$ maps each $x\in S_{i}$ to $\mathop{\mathbb{E}}[y|S_{i}]$.
###### Lemma 4.7.
Let $\mathcal{S}=\mathrm{level}(f)$. We have
$\mathsf{ECE}(f)=\ell_{1}(f,g_{\mathcal{S}})\geq\mathsf{dCE}_{\mathcal{D}}(f)$.
###### Proof.
We claim that
$\displaystyle\mathop{\mathbb{E}}[|f-\mathop{\mathbb{E}}[y|f]|]=\mathop{\mathbb{E}}[|f(x)-g_{\mathcal{S}}(x)|]$
which uses the observation that for all $x\in f^{-1}(v)$, $f(x)=v$,
$g(x)=\mathop{\mathbb{E}}[y|v]$. The LHS is $\mathsf{ECE}(f)$, while the RHS
is $\ell_{1}(f,g_{\mathcal{S}})$. Clearly this is larger than
$\mathsf{dCE}_{\mathcal{D}}(f)$ which minimizes the $\ell_{1}$ distance over
all $g\in\mathsf{cal}(\mathcal{D})$. ∎
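On an empirical sample, the $\ell_{1}$ view of Lemma 4.7 amounts to grouping on the prediction value; a minimal sketch:

```python
import numpy as np

def ece(v, y):
    """Empirical ECE = E[ |E[y | f] - f| ]: group samples by prediction
    value and take the mass-weighted gaps, i.e. l1(f, g_S) for the
    level-set partition S = level(f)."""
    v, y = np.asarray(v, float), np.asarray(y, float)
    total = 0.0
    for val in np.unique(v):
        mask = v == val
        total += mask.mean() * abs(y[mask].mean() - val)
    return total
```

For example, predictions $(0.2, 0.2, 0.8)$ with labels $(0, 1, 1)$ give $\tfrac{2}{3}\cdot 0.3 + \tfrac{1}{3}\cdot 0.2 = \tfrac{4}{15}$.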
The main drawbacks of $\mathsf{ECE}$ are that it does not satisfy robust
completeness, and is discontinuous at $0$.
###### Lemma 4.8.
$\mathsf{ECE}_{\mathcal{D}}(f)$ does not satisfy $c$-robust completeness for
any $c>0$. It can be discontinuous at $0$.
###### Proof.
Let $X=\\{0,1\\}$. $\mathcal{D}$ is specified by the uniform distribution on
$X$ with $f^{*}(0)=0,f^{*}(1)=1$. Consider the predictor
$f_{\varepsilon}(0)=1/2-\varepsilon,f_{\varepsilon}(1)=1/2+\varepsilon$. Let
$\overline{f}=1/2\in\mathsf{cal}(\mathcal{D})$. Clearly
$\mathsf{dCE}_{\mathcal{D}}(f_{\varepsilon})\leq\varepsilon$ whereas
$\mathsf{ECE}(f_{\varepsilon})=1/2-\varepsilon$.
This also shows that $\mathsf{ECE}$ is discontinuous at $0$ since
$\mathsf{ECE}(f_{\varepsilon})\to 1/2$ as $\varepsilon\to 0$, whereas
$f_{0}=\overline{f}$ has $\mathsf{ECE}(f_{0})=0$. ∎
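The construction in this proof can be tabulated exactly; the helper below is illustrative.

```python
# X = {0, 1} uniform, f*(0) = 0, f*(1) = 1;
# f_eps(0) = 1/2 - eps, f_eps(1) = 1/2 + eps as in the proof.
def ece_two_point(eps):
    # the level sets {0} and {1} each carry mass 1/2;
    # E[y | f = 1/2 - eps] = 0 and E[y | f = 1/2 + eps] = 1
    return 0.5 * abs(0.0 - (0.5 - eps)) + 0.5 * abs(1.0 - (0.5 + eps))

# ece_two_point(eps) = 1/2 - eps -> 1/2 as eps -> 0, while the
# calibrated constant predictor 1/2 is at l1 distance only eps
```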
Table 1 summarizes how other calibration measures that have been studied in the
literature fare under our desiderata. Further discussion of these measures can
be found in Appendix A.
## 5 Distance Based Measures in the PA Model
We start by defining upper and lower bounds to the true distance to
calibration in the PA model. Our main result in this section is Theorem 5.5,
showing that these are the best possible bounds one can have on $\mathsf{dCE}$
in the PA model. To define and analyze these distances, we need some auxiliary
notions.
###### Definition 5.1.
Let $\Gamma$ be a distribution over $[0,1]\times\\{0,1\\}$. Define the set
${\mathsf{ext}}(\Gamma)$ to consist of all joint distributions $\Pi$ of
triples $(u,v,y)\in[0,1]\times[0,1]\times\\{0,1\\}$, such that
* •
the marginal distribution of $(v,y)$ is $\Gamma$;
* •
the marginal distribution $(u,y)$ is perfectly calibrated:
$\mathop{\mathbb{E}}_{\Pi}[y|u]=u$.
We define $\mathrm{lift}(\Gamma)$ to be all pairs $(\mathcal{D},f)$ where
* •
$\mathcal{D}$ is a distribution over $\mathcal{X}\times\\{0,1\\}$ for some
domain $\mathcal{X}$.
* •
$f:\mathcal{X}\to[0,1]$ is a predictor so that $\mathcal{D}_{f}=\Gamma$.
We first define the upper distance to calibration.
###### Definition 5.2 (Upper distance to calibration).
For a distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$, let $K(\Gamma)$ denote
the set of transformations $\kappa:[0,1]\to[0,1]$ such that the distribution
of $(\kappa(v),y)$ for $(v,y)\sim\Gamma$ is perfectly calibrated. We define
the upper distance from calibration $\overline{\mathsf{dCE}}(\Gamma)$ as
$\overline{\mathsf{dCE}}(\Gamma)=\inf\limits_{\kappa\in
K(\Gamma)}\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[|v-\kappa(v)|].$
For a distribution $\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$ and a
predictor $f:\mathcal{X}\to[0,1]$, we define the upper distance from
calibration $\overline{\mathsf{dCE}}_{\mathcal{D}}(f)$ to be
$\overline{\mathsf{dCE}}(\mathcal{D}_{f})$, or equivalently,
$\overline{\mathsf{dCE}}_{\mathcal{D}}(f):=\inf\limits_{\begin{subarray}{c}\kappa:[0,1]\to[0,1]\\\
\kappa\circ
f\in\mathsf{cal}(\mathcal{D})\end{subarray}}\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[|f(x)-\kappa(f(x))|].$
We call this the upper distance since we only compare $f$ with a calibrated
predictor $\kappa\circ f$ that can be obtained by applying a postprocessing
$\kappa$ to $f$. It follows immediately that
$\overline{\mathsf{dCE}}_{\mathcal{D}}(f)\geq\mathsf{dCE}_{\mathcal{D}}(f)$.
We next define the lower distance to calibration.
###### Definition 5.3 (Lower distance to calibration).
We define the _lower distance to calibration_ denoted
$\underline{\mathsf{dCE}}(\Gamma)$ as
$\underline{\mathsf{dCE}}(\Gamma):=\inf\limits_{\Pi\in{\mathsf{ext}}(\Gamma)}\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|.$
(7)
For a distribution $\mathcal{D}$ and a predictor $f$, we define
$\underline{\mathsf{dCE}}_{\mathcal{D}}(f):=\underline{\mathsf{dCE}}(\mathcal{D}_{f})$.
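One way to approximate the infimum over couplings, assuming SciPy: discretize $u$ to a grid and solve the resulting linear program, in the spirit of the $\mathcal{O}(\varepsilon^{-1})$-sized LP mentioned in Section 2.4. The grid version only approximates the infimum, with error on the order of the grid spacing.

```python
import numpy as np
from scipy.optimize import linprog

def lower_dce(support, grid_size=101):
    """Approximate lower distance to calibration of a finite Gamma over
    [0,1] x {0,1}.  support: list of ((v, y), mass), masses summing to 1.
    Variables pi[k, s]: mass coupled to grid point u_k and support point s.
      minimize   sum pi[k, s] * |u_k - v_s|
      subject to sum_k pi[k, s] = mass_s            (marginal of (v,y) is Gamma)
                 sum_s pi[k, s] * (y_s - u_k) = 0   (E[y | u = u_k] = u_k)
                 pi >= 0
    """
    u = np.linspace(0.0, 1.0, grid_size)
    S, K = len(support), grid_size
    c = np.array([abs(u[k] - v) for k in range(K) for (v, y), m in support])
    A_eq = np.zeros((S + K, K * S))
    b_eq = np.zeros(S + K)
    for s, ((v, y), m) in enumerate(support):
        A_eq[s, s::S] = 1.0                      # marginal constraint for s
        b_eq[s] = m
    for k in range(K):
        for s, ((v, y), m) in enumerate(support):
            A_eq[S + k, k * S + s] = y - u[k]    # calibration at grid point u_k
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (K * S))
    return res.fun
```

On a perfectly calibrated $\Gamma$ this returns (numerically) $0$; if the prediction is always $0.5$ but the label is always $1$, the only calibrated value of $u$ is $1$ and the LP returns $0.5$.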
The following lemma justifies the terminology of upper and lower distance.
###### Lemma 5.4.
We have
$\underline{\mathsf{dCE}}_{\mathcal{D}}(f)\leq\mathsf{dCE}_{\mathcal{D}}(f)\leq\overline{\mathsf{dCE}}_{\mathcal{D}}(f).$
###### Proof.
Every calibrated predictor $g\in\mathsf{cal}(\mathcal{D})$ gives a
distribution $\Pi\in{\mathsf{ext}}(\mathcal{D}_{f})$ where we sample
$(x,y)\sim\mathcal{D}$ and return $(g(x),f(x),y)$. Note that
$\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[|u-v|]=\ell_{1}(f,g)$. Minimizing over
$g\in\mathsf{cal}(\mathcal{D})$ gives the first inequality. The second follows
because in the definition of $\overline{\mathsf{dCE}}$ we minimize over a
subset of $\mathsf{cal}(\mathcal{D})$, namely only those $g=\kappa\circ f$
that can be obtained from $f$ via a postprocessing $\kappa$. ∎
We now show that these are the best possible bounds one can have on
$\mathsf{dCE}$ in the PA model.
###### Theorem 5.5.
The following identities hold
$\displaystyle\underline{\mathsf{dCE}}(\Gamma)$
$\displaystyle=\inf\limits_{(\mathcal{D},f)\in\mathrm{lift}(\Gamma)}\mathsf{dCE}_{\mathcal{D}}(f)$
$\displaystyle\overline{\mathsf{dCE}}(\Gamma)$
$\displaystyle=\sup\limits_{(\mathcal{D},f)\in\mathrm{lift}(\Gamma)}\mathsf{dCE}_{\mathcal{D}}(f).$
###### Proof.
By Lemma 5.4, for every $(\mathcal{D},f)\in\mathrm{lift}(\Gamma)$,
$\underline{\mathsf{dCE}}(\Gamma)=\underline{\mathsf{dCE}}_{\mathcal{D}}(f)\leq\mathsf{dCE}_{\mathcal{D}}(f)\leq\overline{\mathsf{dCE}}_{\mathcal{D}}(f)=\overline{\mathsf{dCE}}(\Gamma).$
This implies that
$\displaystyle\underline{\mathsf{dCE}}(\Gamma)\leq\inf\limits_{(\mathcal{D},f)\in\mathrm{lift}(\Gamma)}\mathsf{dCE}_{\mathcal{D}}(f)$
$\displaystyle\overline{\mathsf{dCE}}(\Gamma)\geq\sup\limits_{(\mathcal{D},f)\in\mathrm{lift}(\Gamma)}\mathsf{dCE}_{\mathcal{D}}(f).$
It remains to show the reverse inequalities
$\displaystyle\underline{\mathsf{dCE}}(\Gamma)$
$\displaystyle\geq\inf\limits_{(\mathcal{D},f)\in\mathrm{lift}(\Gamma)}\mathsf{dCE}_{\mathcal{D}}(f),$
(8) $\displaystyle\overline{\mathsf{dCE}}(\Gamma)$
$\displaystyle\leq\sup\limits_{(\mathcal{D},f)\in\mathrm{lift}(\Gamma)}\mathsf{dCE}_{\mathcal{D}}(f).$
(9)
To prove Equation (8), we choose $\mathcal{X}:=[0,1]\times[0,1]$ and define
predictors $f,g:\mathcal{X}\to[0,1]$ such that for any
$x=(u,v)\in\mathcal{X}$, it holds that $f(x)=v$ and $g(x)=u$. For a fixed
$\Pi\in{\mathsf{ext}}(\Gamma)$, we define a distribution $\mathcal{D}$ over
$\mathcal{X}\times\\{0,1\\}$ such that $(x,y)\sim\mathcal{D}$ is drawn by
first drawing $(u,v,y)\sim\Pi$ and then setting $x:=(u,v)$. By the definition
of $\Pi\in{\mathsf{ext}}(\Gamma)$, it holds that $\mathcal{D}_{f}=\Gamma$ and
that $g\in\mathsf{cal}(\mathcal{D})$. Therefore,
$\mathsf{dCE}_{\mathcal{D}}(f)\leq\ell_{1}(f,g)=\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|.$
The fact that $\mathcal{D}_{f}=\Gamma$ implies that
$(\mathcal{D},f)\in\mathrm{lift}(\Gamma)$, and thus
$\inf\limits_{(\mathcal{D},f)\in\mathrm{lift}(\Gamma)}\mathsf{dCE}_{\mathcal{D}}(f)\leq\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|.$
Taking the infimum over $\Pi\in{\mathsf{ext}}(\Gamma)$ on the RHS, which by definition yields $\underline{\mathsf{dCE}}(\Gamma)$, proves Equation (8).
To prove (9), we choose $\mathcal{X}:=[0,1]$ and define predictor
$f:\mathcal{X}\to[0,1]$ such that $f(x)=x$ for every $x\in\mathcal{X}$. We
define the distribution $\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$ to be
the distribution $\Gamma$. It is clear that $\mathcal{D}_{f}=\Gamma$, so
$(\mathcal{D},f)\in\mathrm{lift}(\Gamma)$. For any perfectly calibrated
predictor $g\in\mathsf{cal}(\mathcal{D})$, the distribution $(g(x),y)$ for
$(x,y)\sim\mathcal{D}$ is perfectly calibrated, which implies that the same
distribution $(g(v),y)$ for $(v,y)\sim\Gamma$ is also perfectly calibrated.
This implies that
$\overline{\mathsf{dCE}}(\Gamma)\leq\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}|v-g(v)|=\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}|f(x)-g(x)|=\ell_{1}(f,g).$
Taking infimum over $g\in\mathsf{cal}(\mathcal{D})$ proves that
$\overline{\mathsf{dCE}}(\Gamma)\leq\mathsf{dCE}_{\mathcal{D}}(f)$, which
gives Equation (9). ∎
The gap between each of these quantities can be at least quadratic, as the
distributions $\mathcal{D}^{1}$ and $\mathcal{D}^{2}$ in Lemma 4.5 show.
Under both distributions $\mathcal{D}^{1}$ and $\mathcal{D}^{2}$, we have
$\underline{\mathsf{dCE}}(f)\leq 2\alpha^{2}$ and
$\overline{\mathsf{dCE}}(f)=\alpha$. But
$\mathsf{dCE}_{\mathcal{D}^{1}}(f)=2\alpha^{2}$ while
$\mathsf{dCE}_{\mathcal{D}^{2}}(f)=\alpha$. In the next section, using the
notion of interval calibration, we show that this quadratic gap is indeed tight.
## 6 Interval Calibration
In this section, we introduce the notion of interval calibration. Our main
result is Theorem 6.2, which shows that it is quadratically related to the true
distance from calibration. Since $\mathsf{intCE}$ is defined in the PA model,
this implies in particular that the gap between $\underline{\mathsf{dCE}}$ and
$\overline{\mathsf{dCE}}$ is at most quadratic. We exhibit a gap instance
showing that this is tight (Lemma 6.8). As defined, it is unclear whether
interval calibration can be efficiently estimated. In Section 6.2, we propose a
surrogate version of interval calibration which enjoys similar bounds and can
be efficiently estimated from samples.
An interval partition $\mathcal{I}$ of $[0,1]$ is a partition of $[0,1]$ into
disjoint intervals $\\{I_{j}\\}_{j\in[m]}$. Let $w(I)$ denote the width of
interval $I$.
###### Definition 6.1 (Interval Calibration Error).
For a distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$ and interval partition
$\mathcal{I}$ define
$\mathsf{binnedECE}(\Gamma,\mathcal{I}):=\sum_{j\in[m]}|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(v-y){\mathbf{1}}(v\in
I_{j})]|.$
We define the _average interval width_
$w_{\Gamma}(\mathcal{I}):=\sum_{j\in[m]}\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[{\mathbf{1}}(v\in
I_{j})w(I_{j})].$
For a fixed partition $\mathcal{I}$, we write
$\mathsf{intCE}(\Gamma,\mathcal{I}):=\mathsf{binnedECE}(\Gamma,\mathcal{I})+w_{\Gamma}(\mathcal{I}).$
The interval calibration error $\mathsf{intCE}(\Gamma)$ is then the minimum of
this quantity over all interval partitions $\mathcal{I}$:
$\mathsf{intCE}(\Gamma):=\min_{\mathcal{I}:~{}\textrm{Interval
partition}}\left(\mathsf{binnedECE}(\Gamma,\mathcal{I})+w_{\Gamma}(\mathcal{I})\right).$
For a distribution $\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$ and a
predictor $f:\mathcal{X}\to[0,1]$, we define
$\mathsf{intCE}_{\mathcal{D}}(f,\mathcal{I}):=\mathsf{intCE}(\mathcal{D}_{f},\mathcal{I})$
and $\mathsf{intCE}_{\mathcal{D}}(f):=\mathsf{intCE}(\mathcal{D}_{f})$.
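On a finite sample, $\mathsf{binnedECE}$ and the average width for a fixed partition are straightforward to evaluate; minimizing over partitions would then give $\mathsf{intCE}$. A minimal numpy sketch for a fixed partition (the function name and the edge-list convention are ours):

```python
import numpy as np

def interval_cal_error(v, y, edges):
    """binnedECE(Gamma, I) + w_Gamma(I) on an empirical sample, for the
    interval partition whose endpoints are given by the increasing list
    `edges` = [0, t_1, ..., t_{m-1}, 1].  Intervals are half-open except
    the last, which is closed so that v = 1 is covered."""
    v, y = np.asarray(v, dtype=float), np.asarray(y, dtype=float)
    binned_ece, avg_width = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (v >= lo) & ((v < hi) if hi < 1 else (v <= hi))
        binned_ece += abs(np.mean((v - y) * mask))   # |E[(v - y) 1(v in I_j)]|
        avg_width += mask.mean() * (hi - lo)         # E[1(v in I_j) w(I_j)]
    return binned_ece + avg_width
```

For example, a sample with all values at $1/2$ and all labels $1$, under the partition $[0,0.4)\cup[0.4,0.6)\cup[0.6,1]$, has binned ECE $0.5$ and average width $0.2$, for a total of $0.7$.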
Our main theorem about interval calibration is the following.
###### Theorem 6.2.
We have $\overline{\mathsf{dCE}}(\Gamma)\leq\mathsf{intCE}(\Gamma)\leq
4\sqrt{\underline{\mathsf{dCE}}(\Gamma)}$.
Combining this with Lemma 5.4, we conclude that $\mathsf{intCE}$ is indeed a
quadratic approximation to the true distance from calibration, which is the
best achievable in the PA model by Corollary 4.6.
###### Corollary 6.3.
$\mathsf{intCE}$ is a $(1/2,1)$-consistent calibration measure. We have
$\displaystyle\mathsf{dCE}_{\mathcal{D}}(f)\leq\mathsf{intCE}_{\mathcal{D}}(f)\leq
4\sqrt{\mathsf{dCE}_{\mathcal{D}}(f)}.$
Another corollary is the following pair of bounds for the distance measures,
showing that the gaps presented in Lemma 4.5 are the largest possible.
###### Corollary 6.4.
We have
$\displaystyle\underline{\mathsf{dCE}}_{\mathcal{D}}(f)$
$\displaystyle\leq\mathsf{dCE}_{\mathcal{D}}(f)\leq
4\sqrt{\underline{\mathsf{dCE}}_{\mathcal{D}}(f)},$
$\displaystyle\frac{1}{16}\overline{\mathsf{dCE}}_{\mathcal{D}}(f)^{2}$
$\displaystyle\leq\mathsf{dCE}_{\mathcal{D}}(f)\leq\overline{\mathsf{dCE}}_{\mathcal{D}}(f).$
We first prove the lower bound on $\mathsf{intCE}$, which is the easier direction.
###### Proof of Theorem 6.2 (Part 1).
It suffices to prove the following statement: for any interval partition
$\mathcal{I}=\\{I_{j}\\}_{j\in[m]}$,
$\overline{\mathsf{dCE}}(\Gamma)\leq\mathsf{binnedECE}(\Gamma,\mathcal{I})$.
For $v\in I_{j}$, define $\kappa(v)=\mathop{\mathbb{E}}[y|v\in I_{j}]$. The
distribution of $(\kappa(v),y)$ is perfectly calibrated. Hence
$\displaystyle\overline{\mathsf{dCE}}(\Gamma)$
$\displaystyle\leq\mathop{\mathbb{E}}[|v-\kappa(v)|]$
$\displaystyle\leq\sum_{j\in[m]}\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})|v-\mathop{\mathbb{E}}[y|v\in I_{j}]|]$
$\displaystyle\leq\sum_{j\in[m]}\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})(|v-\mathop{\mathbb{E}}[v|v\in I_{j}]|+|\mathop{\mathbb{E}}[y|v\in
I_{j}]-\mathop{\mathbb{E}}[v|v\in I_{j}]|)]$
$\displaystyle\leq\sum_{j\in[m]}\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})w(I_{j})]+\sum_{j\in[m]}\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})|\mathop{\mathbb{E}}[y|v\in I_{j}]-\mathop{\mathbb{E}}[v|v\in I_{j}]|]$
where we use the fact that conditioned on $v\in I_{j}$, $v$ differs from its
expectation by at most $w(I_{j})$. The first sum is exactly the average width
$w_{\Gamma}(\mathcal{I})$. For the second, by the definition of conditional
expectations
$\displaystyle\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})\mathop{\mathbb{E}}[y|v\in I_{j}]]$
$\displaystyle=\mathop{\mathbb{E}}[{\mathbf{1}}(v\in I_{j})y]$
$\displaystyle\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})\mathop{\mathbb{E}}[v|v\in I_{j}]]$
$\displaystyle=\mathop{\mathbb{E}}[{\mathbf{1}}(v\in I_{j})v]$
Hence
$\displaystyle\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})|\mathop{\mathbb{E}}[y|v\in I_{j}]-\mathop{\mathbb{E}}[v|v\in I_{j}]|]$
$\displaystyle=|\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})\mathop{\mathbb{E}}[y|v\in I_{j}]-{\mathbf{1}}(v\in
I_{j})\mathop{\mathbb{E}}[v|v\in I_{j}]]|$
$\displaystyle=|\mathop{\mathbb{E}}[{\mathbf{1}}(v\in I_{j})(y-v)]|$
We conclude that
$\displaystyle\overline{\mathsf{dCE}}(\Gamma)\leq
w_{\Gamma}(\mathcal{I})+\sum_{j\in[m]}|\mathop{\mathbb{E}}[{\mathbf{1}}(v\in
I_{j})(y-v)]|=w_{\Gamma}(\mathcal{I})+\mathsf{binnedECE}(\Gamma,\mathcal{I}).$
The claimed lower bound follows by minimizing over all interval partitions. ∎
To prove the upper bound on $\mathsf{intCE}$ in Theorem 6.2, we consider a
variant of the interval calibration error where we focus on intervals of a
fixed width $\varepsilon$ and take the expectation after shifting the intervals
randomly:
###### Definition 6.5 (Random Interval Calibration Error).
For a distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$ and an interval width
parameter $\varepsilon>0$, we define
$\mathsf{RintCE}(\Gamma,\varepsilon)=\mathop{\mathbb{E}}_{r}\Big{[}\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]|\Big{]},$
where the outer expectation is over $r$ drawn uniformly from $[0,\varepsilon)$
and $I_{r,j}^{\varepsilon}$ is the interval
$[r+j\varepsilon,r+(j+1)\varepsilon)$.
Note that although the summation is over $j\in\mathbb{Z}$, only finitely many
$j$'s contribute to the sum (namely those $j$ satisfying
$I_{r,j}^{\varepsilon}\cap[0,1]\neq\emptyset$). The following claim follows by
an averaging argument from the definitions of $\mathsf{intCE}$ and
$\mathsf{RintCE}$:
###### Claim 6.6.
$\mathsf{intCE}(\Gamma)\leq\mathsf{RintCE}(\Gamma,\varepsilon)+\varepsilon$.
The key to proving the upper bound on $\mathsf{intCE}(\Gamma)$ in Theorem 6.2
is the following lemma.
###### Lemma 6.7.
$\mathsf{RintCE}(\Gamma,\varepsilon)\leq(1+\frac{2}{\varepsilon})\,\underline{\mathsf{dCE}}(\Gamma)$.
We first complete the proof of Theorem 6.2 using Lemma 6.7.
###### Proof of Theorem 6.2 (Part 2).
We prove the upper bound on $\mathsf{intCE}(\Gamma)$ in Theorem 6.2. Combining
Claim 6.6 and Lemma 6.7, for every $\varepsilon>0$, we have
$\mathsf{intCE}(\Gamma)\leq\left(1+\frac{2}{\varepsilon}\right)\,\underline{\mathsf{dCE}}(\Gamma)+\varepsilon.$
Choosing $\varepsilon=\sqrt{2\,\underline{\mathsf{dCE}}(\Gamma)}$ to minimize
the right-hand side completes the proof. ∎
###### Proof of Lemma 6.7.
Let $\Pi$ be a distribution over $[0,1]\times[0,1]\times\\{0,1\\}$ in
${\mathsf{ext}}(\Gamma)$. Our goal is to prove that
$\mathsf{RintCE}(\Gamma,\varepsilon)\leq(1+2/\varepsilon)\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|.$
By the definition of $\Pi\in{\mathsf{ext}}(\Gamma)$, for $(u,v,y)\sim\Pi$, the
distribution of $(v,y)$ is $\Gamma$, and the distribution of $(u,y)$ is
perfectly calibrated. The latter implies that
$\mathop{\mathbb{E}}[(u-y)w(u)]=0\quad\text{for any function
}w:[0,1]\to[-1,1].$ (10)
For every $r\in[0,\varepsilon)$, we have the following inequality, where all
expectations and probabilities are over $(u,v,y)\sim\Pi$:
$\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}[(v-y){\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]|\leq\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}[(v-u){\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]|+\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}[(u-y){\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]|.$ (11)
The following inequality bounds the first term in the RHS of (11):
$\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}[(v-u){\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]|\leq\sum_{j\in\mathbb{Z}}\mathop{\mathbb{E}}[|v-u|{\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]=\mathop{\mathbb{E}}|v-u|.$ (12)
The following inequality bounds the second term in the RHS of (11):
$\displaystyle\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}[(u-y){\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]|$ $\displaystyle\leq{}$
$\displaystyle\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}[(u-y){\mathbf{1}}(u\in
I_{r,j}^{\varepsilon})]|+\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}[(u-y)({\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})-{\mathbf{1}}(u\in I_{r,j}^{\varepsilon}))]|$
$\displaystyle={}$
$\displaystyle\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}[(u-y)({\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})-{\mathbf{1}}(u\in I_{r,j}^{\varepsilon}))]|$ (by (10))
$\displaystyle\leq{}$
$\displaystyle\sum_{j\in\mathbb{Z}}\mathop{\mathbb{E}}|{\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})-{\mathbf{1}}(u\in I_{r,j}^{\varepsilon})|$ (by
$|u-y|\leq 1$) $\displaystyle={}$ $\displaystyle 2\Pr[j_{r}(u)\neq j_{r}(v)],$
(13)
where $j_{r}(u)\in\mathbb{Z}$ is the $j\in\mathbb{Z}$ such that $u\in
I_{r,j}^{\varepsilon}$ and $j_{r}(v)$ is defined similarly. Plugging (12) and
(13) into (11) and taking expectation over $r$ drawn uniformly from
$[0,\varepsilon)$, we have
$\displaystyle\mathsf{RintCE}(\Gamma,\varepsilon)$
$\displaystyle\leq\mathop{\mathbb{E}}|v-u|+2\mathop{\mathbb{E}}_{r}\Pr_{(u,v,y)\sim\Pi}[j_{r}(u)\neq
j_{r}(v)]$
$\displaystyle=\mathop{\mathbb{E}}|v-u|+2\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}\Pr_{r}[j_{r}(u)\neq
j_{r}(v)].$
It is easy to check that for fixed $u,v\in[0,1]$, we have
$\Pr_{r}[j_{r}(u)\neq j_{r}(v)]\leq\frac{1}{\varepsilon}|u-v|.$
Plugging this into the inequality above, we get
$\mathsf{RintCE}(\Gamma,\varepsilon)\leq\mathop{\mathbb{E}}|v-u|+\frac{2}{\varepsilon}\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|.\qed$
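The final bound $\Pr_{r}[j_{r}(u)\neq j_{r}(v)]\leq|u-v|/\varepsilon$ is easy to sanity-check numerically: for $u\leq v$, the shifted grid separates $u$ and $v$ exactly when some grid point $r+j\varepsilon$ falls in $(u,v]$, so the probability is in fact $\min(1,|u-v|/\varepsilon)$. A small Monte Carlo sketch (the function name is ours):

```python
import numpy as np

def crossing_prob(u, v, eps, trials=200_000, seed=0):
    """Monte Carlo estimate of Pr_r[j_r(u) != j_r(v)] for r ~ U[0, eps):
    u lies in I_{r,j} = [r + j*eps, r + (j+1)*eps) iff j = floor((u-r)/eps),
    so the two points fall in different intervals iff these floors differ."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(0.0, eps, size=trials)
    return np.mean(np.floor((u - r) / eps) != np.floor((v - r) / eps))
```

For $u=0.31$, $v=0.38$, $\varepsilon=0.1$, the exact probability is $0.07/0.1=0.7$, matching the bound with equality.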
### 6.1 Quadratic Gap between Interval Calibration and Upper Calibration
Distance
For a distribution $\mathcal{D}$ and a predictor $f$, our results in previous
subsections imply the following chain of inequalities (omitting $\mathcal{D}$
in the subscript for brevity):
$\underline{\mathsf{dCE}}(f)\leq\overline{\mathsf{dCE}}(f)\leq\mathsf{intCE}(f)\leq
4\sqrt{\underline{\mathsf{dCE}}(f)}.$ (14)
These inequalities completely characterize the relationship between
$\underline{\mathsf{dCE}}(f)$ and $\overline{\mathsf{dCE}}(f)$ and also the
relationship between $\underline{\mathsf{dCE}}(f)$ and $\mathsf{intCE}(f)$ for
the following reason. By Lemma 4.5, we know that $\overline{\mathsf{dCE}}(f)$
can be as large as $\Omega(\sqrt{\underline{\mathsf{dCE}}(f)})$, which implies
that $\mathsf{intCE}(f)$ can be as large as
$\Omega(\sqrt{\underline{\mathsf{dCE}}(f)})$. Also, it is easy to show that
$\mathsf{intCE}(f)$ can be as small as $O(\underline{\mathsf{dCE}}(f))$ by
choosing $f$ to be a constant function, which implies that
$\overline{\mathsf{dCE}}(f)$ can be as small as
$O(\underline{\mathsf{dCE}}(f))$.
The remaining question is whether (14) completely characterizes the
relationship between $\overline{\mathsf{dCE}}(f)$ and $\mathsf{intCE}(f)$. We
show that the answer is yes by the following lemma (Lemma 6.8) which gives
examples where $\mathsf{intCE}(f)=\Omega((\overline{\mathsf{dCE}}(f))^{1/2})$.
We also show that $\mathsf{intCE}(f)$ can be discontinuous as a function of
$f$ in Lemma 6.9.
###### Lemma 6.8.
For any $\alpha\in(0,1/4)$, there exist a distribution $\mathcal{D}$ and a
predictor $f$ such that
$\overline{\mathsf{dCE}}_{\mathcal{D}}(f)\leq
5\alpha^{2}\quad\text{and}\quad\mathsf{intCE}_{\mathcal{D}}(f)\geq\alpha.$
###### Proof.
We use an example similar to the one in the proof of Lemma 4.5. Specifically,
we choose $\mathcal{X}=\\{00,01,10,11\\}$, and choose the distribution
$\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$ such that the marginal
distribution $\mathcal{D}_{\mathcal{X}}$ over $\mathcal{X}$ is given by the
following probability mass function:
$\mathcal{D}_{\mathcal{X}}(x)=\begin{cases}\alpha,&\text{if
}x\in\\{00,11\\};\\\ 1/2-\alpha,&\text{if }x\in\\{01,10\\}.\end{cases}$
We choose the conditional distribution of $y$ given $x$ such that
$\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[y|x]=(x_{1}+x_{2})/2$, where
$x_{1}$ and $x_{2}$ are the two coordinates of $x$. Note that this
distribution $\mathcal{D}$ is the distribution $\mathcal{D}^{1}$ in the proof
of Lemma 4.5.
Defining $\beta=\alpha/2$, we choose the predictor $f$ slightly differently
from the proof of Lemma 4.5 as follows:
$\displaystyle f(00)$ $\displaystyle=1/2+\alpha+\beta$ $\displaystyle f(01)$
$\displaystyle=1/2+\alpha$ $\displaystyle f(10)$ $\displaystyle=1/2-\alpha$
$\displaystyle f(11)$ $\displaystyle=1/2-\alpha-\beta.$
Note that the function $f$ in the proof of Lemma 4.5 corresponds to choosing
$\beta=0$ instead. We define the perfectly calibrated predictor
$g\in\mathsf{cal}(\mathcal{D})$ as in Lemma 4.5. That is,
$g(x)=\begin{cases}1/2-\alpha,&\text{if }x_{2}=0;\\\ 1/2+\alpha,&\text{if
}x_{2}=1.\end{cases}$
It is easy to check that $\ell_{1}(f,g)=2\alpha(2\alpha+\beta)\leq
5\alpha^{2}$, which implies that $\overline{\mathsf{dCE}}_{\mathcal{D}}(f)\leq
5\alpha^{2}$.
Now we show a lower bound for $\mathsf{intCE}_{\mathcal{D}}(f)$. Consider an
interval partition $\mathcal{I}$ of $[0,1]$. If $1/2-\alpha$ and $1/2+\alpha$
are in different intervals, we have
$\mathsf{intCE}_{\mathcal{D}}(f,\mathcal{I})\geq|\mathop{\mathbb{E}}[(y-f(x)){\mathbf{1}}(f(x)\leq
1/2)]|+|\mathop{\mathbb{E}}[(y-f(x)){\mathbf{1}}(f(x)>1/2)]|\geq 2\alpha.$
If $1/2-\alpha$ and $1/2+\alpha$ are in the same interval, we have
$w_{\mathcal{D}_{f}}(\mathcal{I})\geq\Pr[x\in\\{10,01\\}]\cdot
2\alpha\geq\alpha.$
This implies that $\mathsf{intCE}_{\mathcal{D}}(f)\geq\alpha$. ∎
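The arithmetic in this proof is easy to verify numerically. The sketch below (names are ours) builds the four-point instance with $\beta=\alpha/2$, checks $\ell_{1}(f,g)=2\alpha(2\alpha+\beta)=5\alpha^{2}$, and evaluates the two-sided binned error that lower-bounds $\mathsf{intCE}_{\mathcal{D}}(f)$ when $1/2-\alpha$ and $1/2+\alpha$ fall in different intervals:

```python
def gap_instance(alpha):
    """The four-point instance of Lemma 6.8 with beta = alpha/2.
    Returns (ell_1(f, g), two-sided binned error), where the second value
    is |E[(y - f)1(f <= 1/2)]| + |E[(y - f)1(f > 1/2)]| >= 2*alpha."""
    beta = alpha / 2
    xs = ["00", "01", "10", "11"]
    p = {"00": alpha, "01": 0.5 - alpha, "10": 0.5 - alpha, "11": alpha}
    ey = {x: (int(x[0]) + int(x[1])) / 2 for x in xs}          # E[y | x]
    f = {"00": 0.5 + alpha + beta, "01": 0.5 + alpha,
         "10": 0.5 - alpha, "11": 0.5 - alpha - beta}
    g = {x: 0.5 - alpha if x[1] == "0" else 0.5 + alpha for x in xs}
    ell1 = sum(p[x] * abs(f[x] - g[x]) for x in xs)
    low = abs(sum(p[x] * (ey[x] - f[x]) for x in xs if f[x] <= 0.5)) \
        + abs(sum(p[x] * (ey[x] - f[x]) for x in xs if f[x] > 0.5))
    return ell1, low
```

For $\alpha=0.1$ this gives $\ell_{1}(f,g)=0.05=5\alpha^{2}$ and a two-sided binned error of $2\alpha(1+\beta)=0.21\geq 2\alpha$.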
###### Lemma 6.9.
There exist a distribution $\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$ and
a sequence of predictors $f_{n}:\mathcal{X}\to[0,1]$ converging uniformly to a
predictor $f:\mathcal{X}\to[0,1]$ as $n\to\infty$ such that
$\lim_{n\to\infty}\mathsf{intCE}_{\mathcal{D}}(f_{n})\neq\mathsf{intCE}_{\mathcal{D}}(f)$.
We defer the proof of Lemma 6.9 to Section B.2.
### 6.2 Efficient Estimation of Surrogate Interval Calibration
From the definition of interval calibration error ($\mathsf{intCE}(\Gamma)$),
it is unclear how one can efficiently estimate this notion from examples
$(v,y)$ drawn from $\Gamma$, partly because the definition involves a
minimization over interval partitions. In this subsection, we give an
efficient algorithm for estimating a surrogate version of interval calibration
which we define below. We show in Theorem 6.11 that this surrogate notion
enjoys the same quadratic relationship with $\overline{\mathsf{dCE}}(\Gamma)$
and $\underline{\mathsf{dCE}}(\Gamma)$ as in Theorem 6.2.
###### Definition 6.10 (Surrogate Interval Calibration Error).
For a distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$, we define
$\mathsf{SintCE}(\Gamma)=\inf\limits_{k\in\mathbb{Z}_{\geq
0}}\left(\mathsf{RintCE}(\Gamma,2^{-k})+2^{-k}\right).$
###### Theorem 6.11.
$\overline{\mathsf{dCE}}(\Gamma)\leq\mathsf{intCE}(\Gamma)\leq\mathsf{SintCE}(\Gamma)\leq
6\sqrt{\underline{\mathsf{dCE}}(\Gamma)}$
###### Proof.
The inequality $\overline{\mathsf{dCE}}(\Gamma)\leq\mathsf{intCE}(\Gamma)$ is
already shown in Theorem 6.2. The inequality
$\mathsf{intCE}(\Gamma)\leq\mathsf{SintCE}(\Gamma)$ is an immediate
consequence of Claim 6.6. It remains to show that $\mathsf{SintCE}(\Gamma)\leq
6\sqrt{\underline{\mathsf{dCE}}(\Gamma)}$. For every $k\in\mathbb{Z}_{\geq
0}$, by Lemma 6.7,
$\mathsf{SintCE}(\Gamma)\leq\mathsf{RintCE}(\Gamma,2^{-k})+2^{-k}\leq(1+2\times
2^{k})\,\underline{\mathsf{dCE}}(\Gamma)+2^{-k}.$
Choosing $k$ such that
$\sqrt{\frac{1}{2}\,\underline{\mathsf{dCE}}(\Gamma)}\leq
2^{-k}\leq\sqrt{2\,\underline{\mathsf{dCE}}(\Gamma)}$ completes the proof. ∎
Now we give an efficient estimator for $\mathsf{SintCE}(\Gamma)$ up to error
$\varepsilon$. Let $k^{*}\in\mathbb{Z}_{\geq 0}$ be the integer satisfying
$\varepsilon/4<2^{-k^{*}}\leq\varepsilon/2$. We collect examples
$(v_{1},y_{1}),\ldots,(v_{n},y_{n})$ drawn i.i.d. from $\Gamma$. For
$k=0,\ldots,k^{*}$, we draw $r_{1},\ldots,r_{m}$ independently and uniformly
from $[0,2^{-k})$. We compute
$\widehat{\mathsf{RintCE}}(\Gamma,2^{-k}):=\frac{1}{m}\sum_{s=1}^{m}\sum_{j\in\mathbb{Z}}\left|\frac{1}{n}\sum_{\ell=1}^{n}(y_{\ell}-v_{\ell}){\mathbf{1}}(v_{\ell}\in
I_{r_{s},j}^{2^{-k}})\right|$
and then output
$\widehat{\mathsf{SintCE}}(\Gamma):=\min_{k=0,\ldots,k^{*}}\left(\widehat{\mathsf{RintCE}}(\Gamma,2^{-k})+2^{-k}\right).$
The following theorem ensures that $\widehat{\mathsf{SintCE}}(\Gamma)$ is an
accurate and efficient estimator for $\mathsf{SintCE}(\Gamma)$:
###### Theorem 6.12.
There exists an absolute constant $C>0$ with the following property. For
$\varepsilon,\delta\in(0,1/4)$, let $k^{*}\in\mathbb{Z}_{\geq 0}$ be the
integer satisfying $\varepsilon/4<2^{-k^{*}}\leq\varepsilon/2$. Assume that
$n\geq C\varepsilon^{-3}+C\varepsilon^{-2}\log(1/\delta)$, $m\geq
C\varepsilon^{-2}\log(k^{*}/\delta)$ and we compute
$\widehat{\mathsf{SintCE}}(\Gamma)$ as above for a distribution $\Gamma$ over
$[0,1]\times\\{0,1\\}$. Then with probability at least $1-\delta$,
$|\widehat{\mathsf{SintCE}}(\Gamma)-\mathsf{SintCE}(\Gamma)|\leq\varepsilon.$
In Section B.2, we prove Theorem 6.12 using standard uniform convergence
bounds.
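The estimator above is direct to implement. The sketch below (function names are ours) computes $\widehat{\mathsf{RintCE}}$ by bucketing values on a randomly shifted width-$2^{-k}$ grid, and then takes the minimum of $\widehat{\mathsf{RintCE}}(\Gamma,2^{-k})+2^{-k}$ over $k=0,\ldots,k^{*}$:

```python
import numpy as np

def rint_ce_hat(v, y, eps, m=50, seed=0):
    """Monte Carlo estimate of RintCE(Gamma, eps): average over m random
    shifts r ~ U[0, eps) of the binned ECE on the shifted width-eps grid.
    A value v lies in I_{r,j} iff j = floor((v - r) / eps)."""
    rng = np.random.default_rng(seed)
    v, y = np.asarray(v, dtype=float), np.asarray(y, dtype=float)
    total = 0.0
    for r in rng.uniform(0.0, eps, size=m):
        bins = np.floor((v - r) / eps).astype(int)
        for j in np.unique(bins):
            total += abs(np.mean((y - v) * (bins == j)))
    return total / m

def sint_ce_hat(v, y, eps=0.05, m=50):
    """Estimate of SintCE: minimize RintCE-hat(2^-k) + 2^-k over
    k = 0, ..., k*, where 2^-k* lies in (eps/4, eps/2]."""
    k_star = int(np.ceil(np.log2(2.0 / eps)))
    return min(rint_ce_hat(v, y, 2.0 ** -k, m=m) + 2.0 ** -k
               for k in range(k_star + 1))
```

On a calibrated sample the binned terms vanish and the estimate is just $2^{-k^{*}}$, while the constant predictor $v\equiv 1/2$ with $y\equiv 1$ yields roughly $1/2$.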
## 7 Smooth Calibration and the Lower Distance to Calibration
In this section, we define and analyze the notion of smooth calibration. The
main result of this section is that the smooth calibration error
$\mathsf{smCE}$ is equivalent, up to a constant factor, to
$\underline{\mathsf{dCE}}$. We also give algorithms that can compute both
these quantities to within an additive $\varepsilon$ in time
$\mathrm{poly}(1/\varepsilon)$ on an empirical distribution.
At a high level, the proof that $\underline{\mathsf{dCE}}$ and $\mathsf{smCE}$
are related proceeds as follows.
1. 1.
For a distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$, our definition of
$\underline{\mathsf{dCE}}(\Gamma)$ (Definition 5.3) is based on couplings
$\Pi\in{\mathsf{ext}}(\Gamma)$ that connect $\Gamma$ to calibrated
distributions $\Gamma^{\prime}$. For a given distribution $\mathcal{D}$, the
space of predictors $f:\mathcal{X}\to[0,1]$ which are calibrated is non-convex
(for finite $\mathcal{X}$, it is a finite set). But when we move to the space
of distributions $\Gamma^{\prime}$ over $[0,1]\times\\{0,1\\}$, then the space
of perfectly calibrated distributions is convex. This is because for
$(v,b)\in[0,1]\times\\{0,1\\}$ if $\Gamma^{\prime}(v,b)$ denotes the
probability assigned to it, then the calibration constraint states that for
every $v$,
$\frac{\Gamma^{\prime}(v,1)}{\Gamma^{\prime}(v,0)+\Gamma^{\prime}(v,1)}=v,\quad\text{equivalently}\quad(1-v)\,\Gamma^{\prime}(v,1)=v\,\Gamma^{\prime}(v,0),$
which is a linear constraint in $\Gamma^{\prime}$ for every $v$. This allows us to view the problem
of computing $\underline{\mathsf{dCE}}$ as optimization over couplings $\Pi$
connecting $\Gamma$ to some $\Gamma^{\prime}$ satisfying such linear
constraints.
2. 2.
We show that by suitably discretizing $[0,1]$, we can write the problem of
computing $\underline{\mathsf{dCE}}$ as a linear program. The dual of this
program (after some manipulation) asks for a $2$-Lipschitz function
$w:[0,1]\to[-1,1]$ which witnesses the lack of calibration of $f$, by showing
that $\mathop{\mathbb{E}}[w(v)(y-v)]$ is large. Rescaling gives a
$1$-Lipschitz function which proves that
$\mathsf{smCE}_{\mathcal{D}}(f)\geq\underline{\mathsf{dCE}}_{\mathcal{D}}(f)/2$.
The other direction, which corresponds to weak duality, is easy to show.
We now proceed with the formal definitions and proof. We start by defining a
general family of calibration measures called _weighted calibration error_
from [GKSZ22].
###### Definition 7.1 (Weighted calibration).
[GKSZ22] Let $W$ be a family of functions $w:[0,1]\to\mathbb{R}$. The
_weighted calibration error_ of a distribution $\Gamma$ over
$[0,1]\times\\{0,1\\}$ is defined as
$\mathsf{wCE}^{W}(\Gamma):=\sup\limits_{w\in
W}\left|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v)w(v)]\right|.$
Given a distribution $\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$ and
predictor $f:\mathcal{X}\to[0,1]$, we denote the weighted calibration error of
$f$ under $\mathcal{D}$ as
$\mathsf{wCE}^{W}_{\mathcal{D}}(f):=\mathsf{wCE}^{W}(\mathcal{D}_{f})=\sup\limits_{w\in
W}\left|\mathop{\mathbb{E}}_{(x,y)\sim\mathcal{D}}[(y-f(x))w(f(x))]\right|.$
Clearly, any weighted calibration error notion is well defined in the PA
model. Moreover, every such notion satisfies completeness: if
$\Gamma$ is perfectly calibrated, then for any $w$, we have
$\mathop{\mathbb{E}}_{\Gamma}[(y-v)w(v)]=\mathop{\mathbb{E}}[\mathop{\mathbb{E}}[(y-v)w(v)|v]],$
and since $\mathop{\mathbb{E}}[y|v]=v$ for a perfectly calibrated predictor,
this latter quantity is zero.
A particularly important calibration measure among those is the _smooth
calibration_ where $W$ is the family of all $1$-Lipschitz, bounded functions.
This notion was introduced in the work of [KF08], who termed it weak
calibration; the terminology of smooth calibration is from [GKSZ22].
###### Definition 7.2 (Smooth calibration).
Let $L$ be the family of all $1$-Lipschitz functions $w:[0,1]\to[-1,1]$. The
_smooth calibration error_ of a distribution $\Gamma$ over
$[0,1]\times\\{0,1\\}$ is defined as weighted calibration error with respect
to the family of all $1$-Lipschitz functions
$\mathsf{smCE}(\Gamma):=\mathsf{wCE}^{L}(\Gamma).$ (15)
Accordingly, for a distribution $\mathcal{D}$ and a predictor $f$, we define
$\mathsf{smCE}_{\mathcal{D}}(f):=\mathsf{smCE}(\mathcal{D}_{f})=\mathsf{wCE}^{L}(\mathcal{D}_{f})=\mathsf{wCE}^{L}_{\mathcal{D}}(f).$
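On an empirical sample, the supremum over $1$-Lipschitz witnesses is attained by a piecewise-linear function with knots at the observed values, so $\mathsf{smCE}$ reduces to a small linear program: variables $w_{k}=w(v_{k})$ at the sorted unique values, box bounds $[-1,1]$, and Lipschitz constraints on adjacent pairs. Since $L$ is closed under negation, the supremum of the absolute value equals the supremum of the expectation itself. A sketch assuming scipy is available (the function name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def smooth_ce(v, y):
    """Empirical smCE: maximize E[(y - v) w(v)] over 1-Lipschitz
    w: [0,1] -> [-1,1].  Only the values of w at the observed points
    matter, and 1-Lipschitzness reduces to adjacent-difference
    constraints on the sorted unique values."""
    v, y = np.asarray(v, dtype=float), np.asarray(y, dtype=float)
    vals = np.unique(v)                       # sorted unique predictions
    # c[k] = E[(y - v) 1(v = vals[k])]; objective is sum_k c[k] * w_k.
    c = np.array([np.mean((y - v) * (v == t)) for t in vals])
    n = len(vals)
    if n == 1:
        return abs(c[0])                      # w = +/-1 is optimal
    # Encode |w_{k+1} - w_k| <= vals[k+1] - vals[k] as two rows each.
    A, b = [], []
    for k in range(n - 1):
        row = np.zeros(n)
        row[k], row[k + 1] = -1.0, 1.0
        A.append(row);  b.append(vals[k + 1] - vals[k])
        A.append(-row); b.append(vals[k + 1] - vals[k])
    res = linprog(-c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(-1, 1)] * n, method="highs")
    return -res.fun
```

For instance, values $0.2$ with $y=1$ and $0.8$ with $y=0$ (half the mass each) give $\mathsf{smCE}=0.4\cdot 0.6=0.24$, attained at $w(0.2)=1$, $w(0.8)=0.4$.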
Our main result on the smooth calibration error is the following.
###### Theorem 7.3.
For any distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$, we have
$\frac{1}{2}\underline{\mathsf{dCE}}(\Gamma)\leq\mathsf{smCE}(\Gamma)\leq
2\underline{\mathsf{dCE}}(\Gamma).$
Combining this with Corollary 6.4, we conclude that $\mathsf{smCE}$ is a
$(1,2)$-consistent calibration measure, and yields an optimal degree-$2$
approximation to $\mathsf{dCE}$. Along the way, we will find an efficient
algorithm for computing $\underline{\mathsf{dCE}}(\Gamma)$ (see Remark 7.10).
As is often the case, the inequality $\mathsf{smCE}(\Gamma)\leq
2\underline{\mathsf{dCE}}(\Gamma)$, which corresponds to weak duality, is
significantly easier to prove. We start with this easier
direction. The following lemma is a strengthening of the fact that
$\mathsf{smCE}$ is Lipschitz continuous in the predictor $f$ (i.e. for two
predictors $f,g$ defined on the same set $\mathcal{X}$, we have
$|\mathsf{smCE}_{\mathcal{D}}(f)-\mathsf{smCE}_{\mathcal{D}}(g)|\leq
2\ell_{1}(f,g)$).
###### Lemma 7.4.
Let $\Pi$ be a distribution over $[0,1]\times[0,1]\times\\{0,1\\}$. For any
$1$-Lipschitz function $w:[0,1]\to[-1,1]$, we have
$\left|\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[(y-u)w(u)]-\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[(y-v)w(v)]\right|\leq
2\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|.$
###### Proof.
We have
$\displaystyle\mathop{\mathbb{E}}[(y-u)w(u)-(y-v)w(v)]\leq{}$
$\displaystyle\mathop{\mathbb{E}}|(y-u)(w(u)-w(v))|+\mathop{\mathbb{E}}|(u-v)w(v)|$
$\displaystyle\leq{}$ $\displaystyle 2\mathop{\mathbb{E}}|u-v|.$ (since
$|y-u|\leq 1$ and $|w(u)-w(v)|\leq|u-v|$)
∎
We use this to prove the upper bound on $\mathsf{smCE}(\Gamma)$ in Theorem
7.3.
###### Proof of Theorem 7.3 (upper bound).
By the definition of ${\mathsf{ext}}(\Gamma)$, for any distribution
$\Pi\in{\mathsf{ext}}(\Gamma)$, the distribution of $(u,y)$ for
$(u,v,y)\sim\Pi$ is perfectly calibrated. Therefore, for any $w\in L$,
$\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[(y-u)w(u)]=0.$
By Lemma 7.4,
$\displaystyle\left|\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[(y-v)w(v)]\right|$
$\displaystyle\leq
2\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[|u-v|]+\left|\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[(y-u)w(u)]\right|$
$\displaystyle\leq 2\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[|u-v|].$
Taking the supremum over $w\in L$ and then the infimum over $\Pi\in{\mathsf{ext}}(\Gamma)$ completes the proof. ∎
To prove Theorem 7.3, it remains to prove the lower bound on
$\mathsf{smCE}(\Gamma)$. We prove that in the rest of the section.
### 7.1 Linear Program Formulation of Lower Calibration Distance
For a distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$, we show that a
discretized version of $\underline{\mathsf{dCE}}(\Gamma)$ can be formulated as
the optimal value of a linear program, and the error caused by the
discretization can be made arbitrarily small. We then use the strong duality
theorem of linear programming to prove the lower bound of
$\mathsf{smCE}(\Gamma)$ in Theorem 7.3. The linear programming formulation
also allows us to give an alternative proof of the upper bound in Theorem 7.3
using the weak duality theorem. Moreover, the linear program formulation gives
us an efficient algorithm for estimating $\underline{\mathsf{dCE}}(\Gamma)$.
Our first step is to assume that $\Gamma$ is a distribution over
$V\times\\{0,1\\}$ for some _finite_ set $V\subseteq[0,1]$. This is mostly
without loss of generality because for $\varepsilon>0$ we can round every
value $v\in[0,1]$ in $(v,y)\sim\Gamma$ to the closest value in
$\\{0,\varepsilon,2\varepsilon,\ldots\\}\cap[0,1]$ without changing
$\underline{\mathsf{dCE}}(\Gamma)$ by more than $\varepsilon$. The following
definition allows us to further define discretized versions of
${\mathsf{ext}}(\Gamma)$ and $\underline{\mathsf{dCE}}(\Gamma)$:
###### Definition 7.5.
Let $U,V\subseteq[0,1]$ be finite sets. Let $\Gamma$ be a distribution over
$V\times\\{0,1\\}$. Define the set ${\mathsf{ext}}^{U}(\Gamma)$ to consist of
all joint distributions $\Pi$ of triples $(u,v,y)\in U\times
V\times\\{0,1\\}$, such that
* •
the marginal distribution of $(v,y)$ is $\Gamma$;
* •
the marginal distribution $(u,y)$ is perfectly calibrated:
$\mathop{\mathbb{E}}_{\Pi}[y|u]=u$.
We define $\underline{\mathsf{dCE}}^{U}(\Gamma)$ to be
$\underline{\mathsf{dCE}}^{U}(\Gamma):=\inf\limits_{\Pi\in{\mathsf{ext}}^{U}(\Gamma)}\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|.$
(16)
Later in Lemma 7.11 we will show that $\underline{\mathsf{dCE}}^{U}(\Gamma)$
is close to $\underline{\mathsf{dCE}}(\Gamma)$ as long as $U$ is a suitably
rich class. For now, we show how to formulate
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ as the optimal value of a linear
program:
###### Lemma 7.6.
Let $U,V,\Gamma$ be defined as in Definition 7.5 and assume
$\\{0,1\\}\subseteq U$. By a slight abuse of notation, we define $\Gamma(v,y)$
to be the probability mass of $\Gamma$ on $(v,y)\in V\times\\{0,1\\}$. Then
the following linear program with variables $\Pi(u,v,y)$ for $(u,v,y)\in
U\times V\times\\{0,1\\}$ is feasible and its optimal value equals
$\underline{\mathsf{dCE}}^{U}(\Gamma)$:
$\displaystyle\mathrm{minimize}\quad$ $\displaystyle\sum_{(u,v,y)\in U\times
V\times\\{0,1\\}}|u-v|\,\Pi(u,v,y)$ (17) $\displaystyle\mathrm{s.t.}\quad$
$\displaystyle\sum_{u\in U}\Pi(u,v,y)=\Gamma(v,y),$ $\displaystyle\text{for
every }(v,y)\in V\times\\{0,1\\};$ ($r(v,y)$) $\displaystyle(1-u)\sum_{v\in
V}\Pi(u,v,1)=u\sum_{v\in V}\Pi(u,v,0),$ $\displaystyle\text{for every }u\in
U;$ ($s(u)$) $\displaystyle\Pi(u,v,y)\geq 0,$ $\displaystyle\text{for every
}(u,v,y)\in U\times V\times\\{0,1\\}.$
Moreover, the dual of the linear program (17) is the following linear program
(18) with variables $r(v,y)$ and $s(u)$ for $u\in U,v\in V$ and
$y\in\\{0,1\\}$. By the duality theorem, the optimal value of (18) is also
$\underline{\mathsf{dCE}}^{U}(\Gamma)$.
$\displaystyle\mathrm{maximize}\quad$ $\displaystyle\sum_{(v,y)\in
V\times\\{0,1\\}}r(v,y)\Gamma(v,y)$ (18) $\displaystyle\mathrm{s.t.}\quad$
$\displaystyle r(v,y)\leq|u-v|+(y-u)s(u),\quad\text{for every }(u,v,y)\in
U\times V\times\\{0,1\\}.$ ($\Pi(u,v,y)$)
###### Proof.
Any distribution $\Pi$ over $U\times V\times\\{0,1\\}$ corresponds to a
function $\Pi:U\times V\times\\{0,1\\}\to\mathbb{R}$ where $\Pi(u,v,y)$ is the
probability mass on $(u,v,y)\in U\times V\times\\{0,1\\}$. It is easy to check
that if the distribution $\Pi$ belongs to ${\mathsf{ext}}^{U}(\Gamma)$, then the
function $\Pi$ satisfies the constraints of (17), and conversely, any function
$\Pi$ satisfying the constraints also corresponds to a distribution
$\Pi\in{\mathsf{ext}}^{U}(\Gamma)$. In particular, for $(u,v,y)\sim\Pi$, the first
constraint ensures that the marginal distribution of $(v,y)$ is $\Gamma$, and
the second constraint ensures that the marginal distribution of $(u,y)$ is
calibrated. Moreover, the objective of (17) corresponds to the expectation
$\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|$ in (16). This proves that
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ is equal to the optimal value of the
linear program (17). To show that the linear program (17) is feasible,
consider setting $\Pi(u,v,y)=\Gamma(v,y)$ if $u=y$, and setting $\Pi(u,v,y)=0$
if $u\neq y$. It is easy to check that this choice of $\Pi$ satisfies the
constraints of (17) using our assumption that $\\{0,1\\}\subseteq U$. ∎
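For concreteness, the primal linear program (17) is small enough to hand to an off-the-shelf LP solver. The following Python sketch is an illustration on our part (not part of the formal development); it assumes SciPy's `linprog` and represents $\Gamma$ as a dictionary mapping pairs $(v,y)$ to probability masses.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def lower_dce(U, V, Gamma):
    """Solve the primal LP (17) for the lower distance to calibration dCE^U.

    Gamma: dict mapping (v, y) with y in {0, 1} to probability mass.
    Feasibility requires {0, 1} to be contained in U.
    """
    # One LP variable Pi(u, v, y) per triple.
    triples = list(itertools.product(U, V, (0, 1)))
    col = {t: j for j, t in enumerate(triples)}
    # Objective: sum over triples of |u - v| * Pi(u, v, y).
    c = np.array([abs(u - v) for (u, v, _y) in triples])

    A_eq, b_eq = [], []
    # Marginal constraints: sum_u Pi(u, v, y) = Gamma(v, y).
    for v in V:
        for y in (0, 1):
            row = np.zeros(len(triples))
            for u in U:
                row[col[(u, v, y)]] = 1.0
            A_eq.append(row)
            b_eq.append(Gamma.get((v, y), 0.0))
    # Calibration constraints: (1 - u) sum_v Pi(u, v, 1) = u sum_v Pi(u, v, 0).
    for u in U:
        row = np.zeros(len(triples))
        for v in V:
            row[col[(u, v, 1)]] = 1.0 - u
            row[col[(u, v, 0)]] = -u
        A_eq.append(row)
        b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return res.fun
```

For instance, a predictor that always outputs $0.3$ on examples whose label is always $1$ has lower distance to calibration $0.7$: the calibration constraints force all mass with $y=1$ onto $u=1$.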
###### Claim 7.7.
Let $U,V\subseteq[0,1]$ be finite sets and assume $\\{0,1\\}\subseteq U$. The
optimal value of the dual linear program (18) does not change even if we add
the additional constraints $-1\leq s(u)\leq 1$ for every $u\in U$.
###### Proof.
Let $r,s$ be a feasible solution to (18). Setting $u=y$ in the constraint of
(18), we have
$r(v,y)\leq|v-y|\quad\text{for every }(v,y)\in V\times\\{0,1\\}.$ (19)
Consider any $u\in U$ for which $s(u)>1$. For every $v\in V$,
$\displaystyle r(v,0)$
$\displaystyle\leq|u-v|+(0-u)s(u)\leq|u-v|+(0-u),\quad\text{and}$
$\displaystyle r(v,1)$ $\displaystyle\leq|v-1|\leq|u-v|+|1-u|=|u-v|+(1-u).$
(by (19))
Therefore, changing $s(u)$ to $1$ does not violate the constraint of (18).
Similarly when $s(u)<-1$, we can change $s(u)$ to $-1$ without violating any
constraint. ∎
###### Remark 7.8.
Since $\Gamma(v,y)$ in the objective of (18) is nonnegative, it is always
without loss of generality to assume that $r(v,y)$ is as large as possible,
i.e.,
$r(v,y)=\min_{u\in U}\big{(}|u-v|+(y-u)s(u)\big{)}.$ (20)
Assuming (20), it is easy to check that $r(v,y)$ is $1$-Lipschitz in $v$,
i.e., $|r(v_{1},y)-r(v_{2},y)|\leq|v_{1}-v_{2}|$ for every $v_{1},v_{2}\in V$
and $y\in\\{0,1\\}$. When $\\{0,1\\}\subseteq U$, Claim 7.7 allows us to assume that
$-1\leq s(u)\leq 1$ without loss of generality. When this assumption and (20)
are both satisfied, it is easy to verify that
$r(v,y)\in[-|v-y|,|v-y|]\subseteq[-1,1]$ and $r(v,1)-r(v,0)\in[-1,1]$ for
every $v\in V$ and $y\in\\{0,1\\}$. Indeed, in (20) we have
$|u-v|+(y-u)s(u)\geq|u-v|-|y-u|\geq-|v-y|$ and thus $r(v,y)\geq-|v-y|$. The
upper bound $r(v,y)\leq|v-y|$ has been shown in (19).
###### Remark 7.9.
When $U,V$ are finite sets satisfying $\\{0,1\\}\subseteq U=V\subseteq[0,1]$,
using Remark 7.8 one can verify that the dual linear program (18) has the same
optimal value as the following linear program:
$\displaystyle\mathrm{maximize}\quad$ $\displaystyle\sum_{(v,y)\in
V\times\\{0,1\\}}r(v,y)\Gamma(v,y)$ (21) $\displaystyle\mathrm{s.t.}\quad$
$\displaystyle|r(v_{1},y)-r(v_{2},y)|\leq|v_{1}-v_{2}|,$
$\displaystyle\text{for every }(v_{1},v_{2},y)\in V\times V\times\\{0,1\\};$
(22) $\displaystyle r(v,y)\leq(y-v)s(v),$ $\displaystyle\text{for every
}(v,y)\in V\times\\{0,1\\}.$
The constraints (22) can be enforced simply by checking neighboring pairs
$(v_{1},v_{2})$ when the values in $V$ are sorted. Thus the effective number
of constraints in (22) is $O(|V|)$.
###### Remark 7.10.
Let $U\subseteq[0,1]$ be a finite set satisfying $\\{0,1\\}\subseteq U$. Given
a distribution $\Gamma$ over $V\times\\{0,1\\}$ for a finite
$V\subseteq[0,1]$, Lemma 7.6 allows us to efficiently compute
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ by solving either the primal linear
program (17) or the dual linear program (18). When $U=V$, it may be more
efficient to solve the equivalent linear program (21) which effectively has
only $O(|V|)$ constraints as we mention in Remark 7.9. Moreover, given two
distributions $\Gamma$ and $\Gamma^{\prime}$ that are close in a certain
Wasserstein distance, using the dual linear program (18) we can show that
$\underline{\mathsf{dCE}}^{U}(\Gamma^{\prime})$ and
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ are close (we make this formal in Lemma
9.11). This allows us to estimate $\underline{\mathsf{dCE}}^{U}(\Gamma)$ only
using examples drawn from $\Gamma$ (see Section 9.2). In Lemma 7.11 below we
show that choosing $|U|=O(1/\varepsilon)$ suffices to ensure that
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ approximates
$\underline{\mathsf{dCE}}(\Gamma)$ up to an additive error $\varepsilon$.
The following lemma relates $\underline{\mathsf{dCE}}^{U}(\Gamma)$ and
$\underline{\mathsf{dCE}}(\Gamma)$:
###### Lemma 7.11.
Let $\Gamma$ be a distribution over $V\times\\{0,1\\}$ for a finite
$V\subseteq[0,1]$. Let $U$ be a finite $\varepsilon$-covering of $[0,1]$
satisfying $\\{0,1\\}\subseteq U$. That is, there exists $\sigma:[0,1]\to U$
such that $|u-\sigma(u)|\leq\varepsilon$ for every $u\in[0,1]$. Then we have
$\underline{\mathsf{dCE}}(\Gamma)\leq\underline{\mathsf{dCE}}^{U}(\Gamma)\leq\underline{\mathsf{dCE}}(\Gamma)+2\varepsilon.$
###### Proof.
It is clear from the definitions that
$\underline{\mathsf{dCE}}(\Gamma)\leq\underline{\mathsf{dCE}}^{U}(\Gamma)$. It
remains to prove
$\underline{\mathsf{dCE}}^{U}(\Gamma)\leq\underline{\mathsf{dCE}}(\Gamma)+2\varepsilon$.
It suffices to prove the following for any arbitrary distribution
$\Pi\in{\mathsf{ext}}(\Gamma)$:
$\underline{\mathsf{dCE}}^{U}(\Gamma)\leq\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|+2\varepsilon.$
(23)
For a positive integer $m$, partition the interval $[0,1]$ into
$I_{1},\ldots,I_{m}$, where $I_{1}=[0,1/m]$ and $I_{j}=((j-1)/m,j/m]$ for
$j=2,\ldots,m$. For every $j=1,\ldots,m$, define
$u_{j}:=\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}[u|u\in I_{j}]$. For
$(u,v,y)\sim\Pi$, we define random variable $u^{\prime}:=u_{j}$ where $j$ is
chosen such that $u\in I_{j}$. Define $\Pi^{\prime}$ to be the joint
distribution of $(u^{\prime},v,y)$. It is easy to check that
$\Pi^{\prime}\in{\mathsf{ext}}^{U^{\prime}}(\Gamma)$, where
$U^{\prime}=\\{u_{1},\ldots,u_{m}\\}$. Therefore,
$\underline{\mathsf{dCE}}^{U^{\prime}}(\Gamma)\leq\mathop{\mathbb{E}}_{(u^{\prime},v,y)\sim\Pi^{\prime}}|u^{\prime}-v|\leq\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|+O(1/m).$
(24)
Now we show that
$\underline{\mathsf{dCE}}^{U}(\Gamma)\leq\underline{\mathsf{dCE}}^{U^{\prime}}(\Gamma)+2\varepsilon.$
(25)
Consider any feasible solution $r:V\times\\{0,1\\}\to\mathbb{R}$ and
$s:U\to\mathbb{R}$ to the dual linear program (18). To prove (25), it suffices
to construct $r^{\prime}:V\times\\{0,1\\}\to\mathbb{R}$ and
$s^{\prime}:U^{\prime}\to\mathbb{R}$ such that $r^{\prime}$ and $s^{\prime}$
are a feasible solution to (18) after we replace $U$ with $U^{\prime}$ in
(18), and
$\sum_{(v,y)\in V\times\\{0,1\\}}r^{\prime}(v,y)\Gamma(v,y)\geq\sum_{(v,y)\in
V\times\\{0,1\\}}r(v,y)\Gamma(v,y)-2\varepsilon.$ (26)
By Remark 7.8, it is without loss of generality to assume that
$s(u)\in[-1,1]$. We choose $r^{\prime}(v,y)=r(v,y)-2\varepsilon$ and
$s^{\prime}(u)=s(\sigma(u))$. This immediately guarantees (26). The following
calculation verifies that $r^{\prime}$ and $s^{\prime}$ are feasible
solutions: for $(u,v,y)\in U^{\prime}\times V\times\\{0,1\\}$,
$\displaystyle|u-v|+(y-u)s^{\prime}(u)$ $\displaystyle={}$
$\displaystyle|u-v|+(y-u)s(\sigma(u))$ $\displaystyle\geq{}$
$\displaystyle|\sigma(u)-v|+(y-\sigma(u))s(\sigma(u))-2\varepsilon$ (by
$|u-\sigma(u)|\leq\varepsilon$ and $|s(\sigma(u))|\leq 1$)
$\displaystyle\geq{}$ $\displaystyle r(v,y)-2\varepsilon$ $\displaystyle={}$
$\displaystyle r^{\prime}(v,y).$
Combining (24) and (25),
$\underline{\mathsf{dCE}}^{U}(\Gamma)\leq\mathop{\mathbb{E}}_{(u,v,y)\sim\Pi}|u-v|+2\varepsilon+O(1/m).$
Taking $m$ to infinity proves (23). ∎
We prove lower and upper bounds for $\mathsf{smCE}(\Gamma)$ using
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ in the two lemmas below.
###### Lemma 7.12.
Let $\Gamma$ be a distribution over $V\times\\{0,1\\}$ for a finite
$V\subseteq[0,1]$. Define $U=V\cup\\{0,1\\}$. Then
$\underline{\mathsf{dCE}}^{U}(\Gamma)\leq 2\mathsf{smCE}(\Gamma)$.
###### Proof.
It suffices to prove that for any feasible solution $r$ and $s$ to the dual
linear program (18), it holds that
$\sum_{(v,y)\in V\times\\{0,1\\}}r(v,y)\Gamma(v,y)\leq
2\mathsf{smCE}(\Gamma).$
By Remark 7.8, we can assume without loss of generality that
$|r(v_{1},y)-r(v_{2},y)|\leq|v_{1}-v_{2}|$ and $r(v,1)-r(v,0)\in[-1,1]$.
Define $w(v):=r(v,1)-r(v,0)$. Then $w$ is $2$-Lipschitz and $w(v)\in[-1,1]$,
which implies that
$2\mathsf{smCE}(\Gamma)\geq\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v)w(v)].$
Moreover, setting $u=v$ in the constraint of (18), we have
$(1-v)r(v,0)+vr(v,1)\leq-(1-v)vs(v)+v(1-v)s(v)=0,$
which implies that
$-vw(v)\geq r(v,0).$
Therefore,
$(y-v)w(v)=yw(v)-vw(v)\geq y(r(v,1)-r(v,0))+r(v,0)=r(v,y).$
This implies that
$2\mathsf{smCE}(\Gamma)\geq\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v)w(v)]\geq\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}r(v,y)=\sum_{(v,y)\in
V\times\\{0,1\\}}r(v,y)\Gamma(v,y).\qed$
###### Lemma 7.13.
Let $\Gamma$ be a distribution over $V\times\\{0,1\\}$ for a finite
$V\subseteq[0,1]$. For any finite $U\subseteq[0,1]$, we have
$\mathsf{smCE}(\Gamma)\leq 2\underline{\mathsf{dCE}}^{U}(\Gamma)$.
###### Proof.
Let $w:[0,1]\to[-1,1]$ be a $1$-Lipschitz function. We choose
$r(v,y)=(y-v)w(v)/2$ and $s(u)=w(u)/2$. The following calculation verifies
that this choice of $r$ and $s$ satisfies the constraints in (18):
$\displaystyle r(v,y)-(y-u)s(u)$
$\displaystyle=\frac{1}{2}((y-v)w(v)-(y-u)w(u))$
$\displaystyle=\frac{1}{2}((u-v)w(v)+(y-u)(w(v)-w(u)))$ (by $|w(v)|\leq 1$,
$|y-u|\leq 1$, and $|w(v)-w(u)|\leq|u-v|$) $\displaystyle\leq|u-v|.$
Therefore,
$\underline{\mathsf{dCE}}^{U}(\Gamma)\geq\sum_{(v,y)\in
V\times\\{0,1\\}}r(v,y)\Gamma(v,y)=\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[r(v,y)]=\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v)w(v)/2].$
Taking supremum over $w$ completes the proof. ∎
In the proofs of Lemmas 7.12 and 7.13 above, we use the fact that
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ is equal to the optimal value of the
dual linear program (18). However, for Lemma 7.12 we only need the fact that
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ is _at most_ the optimal value, whereas
for Lemma 7.13 we only need the fact that
$\underline{\mathsf{dCE}}^{U}(\Gamma)$ is _at least_ the optimal value. That
is, our proof of Lemma 7.12 is based on the strong duality theorem, whereas
the proof of Lemma 7.13 is based on the weak duality theorem. Below we apply
Lemma 7.12 and Lemma 7.13 to prove the lower and upper bounds of
$\mathsf{smCE}(\Gamma)$ in Theorem 7.3, respectively.
###### Proof of Theorem 7.3.
For $\varepsilon_{1}>0$, we round the value $v\in[0,1]$ in $(v,y)\sim\Gamma$
to the closest value
$v^{\prime}\in\\{0,\varepsilon_{1},2\varepsilon_{1},\ldots\\}\cap[0,1]$. Let
$\Gamma^{\prime}$ be the distribution of $(v^{\prime},y)$. It is clear that
$|\underline{\mathsf{dCE}}(\Gamma^{\prime})-\underline{\mathsf{dCE}}(\Gamma)|\leq\varepsilon_{1}$,
and by Lemma 7.4 we have
$|\mathsf{smCE}(\Gamma^{\prime})-\mathsf{smCE}(\Gamma)|\leq 2\varepsilon_{1}$.
By Lemma 7.11, for any $\varepsilon_{2}>0$, there exists a finite set
$U\subseteq[0,1]$ such that
$\underline{\mathsf{dCE}}(\Gamma^{\prime})\leq\underline{\mathsf{dCE}}^{U}(\Gamma^{\prime})\leq\underline{\mathsf{dCE}}(\Gamma^{\prime})+\varepsilon_{2}$.
Moreover, we can always choose $U$ so that $\\{0,1\\}\cup V\subseteq U$. Now
by Lemma 7.12,
$\displaystyle\underline{\mathsf{dCE}}(\Gamma)-\varepsilon_{1}\leq\underline{\mathsf{dCE}}(\Gamma^{\prime})\leq\underline{\mathsf{dCE}}^{U}(\Gamma^{\prime})\leq
2\mathsf{smCE}(\Gamma^{\prime})\leq 2\mathsf{smCE}(\Gamma)+4\varepsilon_{1}.$
By Lemma 7.13,
$\mathsf{smCE}(\Gamma)-2\varepsilon_{1}\leq\mathsf{smCE}(\Gamma^{\prime})\leq
2\underline{\mathsf{dCE}}^{U}(\Gamma^{\prime})\leq
2\underline{\mathsf{dCE}}(\Gamma^{\prime})+2\varepsilon_{2}\leq
2\underline{\mathsf{dCE}}(\Gamma)+2\varepsilon_{1}+2\varepsilon_{2}.$
Taking $\varepsilon_{1},\varepsilon_{2}\to 0$ completes the proof. ∎
We conclude with an efficient algorithm for smooth calibration error. The
generalization bound to accompany it will be proved in Corollary 9.9 in
Section 9.
###### Theorem 7.14.
For the empirical distribution $\Gamma$ over a sample
$S=((v_{1},y_{1}),\ldots(v_{n},y_{n}))\in([0,1]\times\\{0,1\\})^{n}$ we can
calculate
$\mathsf{smCE}(\Gamma):=\sup\limits_{w\in
L}\frac{1}{n}\sum_{i}(y_{i}-v_{i})w(v_{i})$
in time $\mathrm{poly}(n)$, where $L$ is the family of all $1$-Lipschitz
functions $w:[0,1]\to[-1,1]$.
###### Proof.
For a sample $S$, the supremum above can be computed using the following
linear maximization problem in variables $z_{i}$ (that are intended to be
equal to $w(v_{i})$).
$\displaystyle\max\frac{1}{n}\sum_{i}(y_{i}-v_{i})z_{i}$
$\displaystyle\mathrm{s.t.}\quad$ $\displaystyle-1\leq z_{i}\leq
1,\quad~{}\forall i;$
$\displaystyle|z_{i}-z_{j}|\leq|v_{i}-v_{j}|,\quad~{}\forall i,j.$
Indeed, on one hand any Lipschitz function $w$ yields a feasible solution to
this linear program, by setting $z_{i}:=w(v_{i})$. On the other hand, for any
feasible solution to this program, we can find a $1$-Lipschitz function $w$
satisfying $w(v_{i})=z_{i}$ for all $i$, using a piecewise linear extension. ∎
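The linear program in the proof of Theorem 7.14 can also be solved with a standard LP solver; as Remark 7.9 suggests, it suffices to impose the Lipschitz constraints on neighboring pairs in sorted order. The sketch below is illustrative (it assumes SciPy's `linprog`; the function name is ours).

```python
import numpy as np
from scipy.optimize import linprog

def smce(vs, ys):
    """smCE of the empirical distribution over pairs (v_i, y_i).

    Variables z_i stand for w(v_i); Lipschitz constraints are imposed
    only on neighbors in sorted order, which is equivalent.
    """
    n = len(vs)
    v = np.asarray(vs, float)
    y = np.asarray(ys, float)
    c = -(y - v) / n                  # linprog minimizes, so negate
    order = np.argsort(v)
    A_ub, b_ub = [], []
    for a, b in zip(order[:-1], order[1:]):
        # |z_a - z_b| <= |v_a - v_b| as two one-sided constraints.
        gap = abs(v[b] - v[a])
        row = np.zeros(n)
        row[a], row[b] = 1.0, -1.0
        A_ub.append(row.copy()); b_ub.append(gap)
        A_ub.append(-row);       b_ub.append(gap)
    res = linprog(c,
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  bounds=(-1, 1))
    return -res.fun
```

For example, a single sample $(v,y)=(0.3,1)$ gives $\mathsf{smCE}=0.7$ (take $w\equiv 1$), while two samples $(0.5,0)$ and $(0.5,1)$ give $0$.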
## 8 Kernel Calibration Error
We now consider kernel calibration ($\mathsf{kCE}^{K}$), which is a special
case of weighted calibration (Definition 7.1) where the family of weight
functions lies in a Reproducing Kernel Hilbert Space $\mathcal{H}$. This
notion was previously defined in [KSJ18] (called “MMCE”), motivated as a
differentiable proxy for ECE.
We advance the theory of kernel calibration in several ways. First, we show
that the kernel calibration error for the _Laplace_ kernel is in fact a
consistent calibration measure. This provides strong theoretical justification
for measuring kernel calibration, and also gives a reason to use the _Laplace_
kernel specifically, among other choices of kernel. Indeed, we complement the
Laplace kernel with a negative result: using the Gaussian kernel does not
yield a consistent calibration measure.
Finally, as a curiosity, we observe that the techniques of [RR07] yield an
alternate estimator for Laplace kernel calibration error, which bears
similarity to the randomized-binning estimator of interval calibration error.
### 8.1 Preliminaries
We consider a _Reproducing Kernel Hilbert Space_ of functions on a real line
$\mathbb{R}$, i.e. a Hilbert space $\mathcal{H}$ of functions
$h:\mathbb{R}\to\mathbb{R}$, with the associated norm
$\|\cdot\|_{\mathcal{H}}$. This space is equipped with the feature map
$\phi:\mathbb{R}\to\mathcal{H}$, satisfying $\langle
h,\phi(v)\rangle_{\mathcal{H}}=h(v)$. The associated kernel
$K:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ is now defined as
$K(u,v)=\langle\phi(u),\phi(v)\rangle_{\mathcal{H}}$.
###### Definition 8.1 (Kernel Calibration Error [KSJ18]).
Given a RKHS $\mathcal{H}$ with the norm $\|\cdot\|_{\mathcal{H}}$, we can
consider a class of functions bounded by $1$ with respect to this norm
$B_{\mathcal{H}}:=\\{h\in\mathcal{H}:\|h\|_{\mathcal{H}}\leq 1\\}$, and we can
study the associated weighted calibration error
$\mathsf{wCE}^{B_{\mathcal{H}}}$ (as in Definition 7.1).
The _kernel calibration error_ of a distribution $\Gamma$ over
$[0,1]\times\\{0,1\\}$ associated with the kernel $K$ is defined as weighted
calibration error with respect to the family of weight functions
$B_{\mathcal{H}}$
$\mathsf{kCE}^{K}(\Gamma):=\mathsf{wCE}^{B_{\mathcal{H}}}(\Gamma).$ (27)
Accordingly, for a distribution $\mathcal{D}$ and a predictor $f$, we define
$\mathsf{kCE}^{K,\mathcal{D}}(f):=\mathsf{kCE}^{K}(\mathcal{D}_{f})$.
The following results are standard, from [KSJ18]. First, $\mathsf{kCE}^{K}$
can be written as the $\mathcal{H}$-norm of a certain function, without explicitly
maximizing over weight functions $h\in\mathcal{H}$.
###### Lemma 8.2 ([KSJ18]).
For any kernel $K$ and the associated RKHS $\mathcal{H}$, and any distribution
$\Gamma$ over $[0,1]\times\\{0,1\\}$,
$\mathsf{kCE}^{K}(\Gamma)=\|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v)\phi(v)]\|_{\mathcal{H}}.$
We reproduce the proof of this for completeness in Appendix B.3. This
expression can be efficiently evaluated for the empirical distribution over a
sample $S=\\{(v_{1},y_{1}),\ldots,(v_{k},y_{k})\\}$.
###### Claim 8.3 ([KSJ18]).
Let $\Gamma$ be the empirical distribution over a given sample
$\\{(v_{1},y_{1}),\ldots,(v_{n},y_{n})\\}$. We can compute
$\mathsf{kCE}^{K}(\Gamma)$ in time $\mathcal{O}(n^{2})$ using
$\mathcal{O}(n^{2})$ evaluations of the kernel function:
$\mathsf{kCE}^{K}(\Gamma)^{2}=\frac{1}{n^{2}}\sum_{i,j}(y_{i}-v_{i})(y_{j}-v_{j})K(v_{i},v_{j}).$
(28)
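Equation (28) translates directly into a few lines of NumPy. The sketch below is our illustration (the function name and interface are not from [KSJ18]); the default kernel is the Laplace kernel used later in this section.

```python
import numpy as np

def kce_hat(vs, ys, kernel=lambda d: np.exp(-np.abs(d))):
    """Empirical kernel calibration error of Claim 8.3, in O(n^2) time.

    The default kernel is the Laplace kernel exp(-|u - v|).
    """
    v = np.asarray(vs, float)
    y = np.asarray(ys, float)
    r = y - v                              # residuals y_i - v_i
    K = kernel(v[:, None] - v[None, :])    # Gram matrix K(v_i, v_j)
    q = float(r @ K @ r)                   # quadratic form of Eq. (28)
    return np.sqrt(max(q, 0.0)) / len(v)   # clamp tiny negative roundoff
```

A single sample $(0.3,1)$ gives error $0.7$; two samples $(0.5,0),(0.5,1)$ give $0$, since the residuals cancel inside the quadratic form.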
In Section 9 we discuss the convergence of the kernel calibration error of
the empirical distribution over the sample to the kernel calibration error of
the entire distribution — this convergence, together with Claim 8.3, gives an
efficient way to estimate the kernel calibration error of a given predictor
from a bounded number of samples from the underlying distribution.
##### The Laplace Kernel.
We recall standard facts about the Laplace kernel
$K_{\mathsf{Lap}}(u,v):=\exp(-|u-v|)$, and its associated RKHS. It turns out
that the norm of the associated RKHS has a simple explicit
expression — the corresponding space is a Sobolev space.
###### Fact 8.4 ([BTA11]).
For the Laplace kernel $K_{\mathsf{Lap}}(u,v)=\exp(-|u-v|)$, we have
associated RKHS
$\mathcal{H}_{\mathsf{Lap}}=\\{h:\mathbb{R}\to\mathbb{R}:\int\hat{h}(\omega)^{2}(1+\omega^{2})\,\mathrm{d}\omega<\infty\\}$,
where $\hat{h}$ denotes the Fourier transform of $h$. The associated inner
product is given by
$\langle
h_{1},h_{2}\rangle_{K_{\mathsf{Lap}}}=\int_{-\infty}^{\infty}\hat{h}_{1}(\omega)\hat{h}_{2}(\omega)(1+\omega^{2})\,\mathrm{d}\omega,$
in particular, for function $h:\mathbb{R}\to\mathbb{R}$,
$\|h\|_{K_{\mathsf{Lap}}}^{2}=\int_{-\infty}^{\infty}\hat{h}(\omega)^{2}(1+\omega^{2})\,\mathrm{d}\omega=\|h\|_{2}^{2}+\|h^{\prime}\|_{2}^{2}.$
### 8.2 Laplace Kernel Calibration Error is a Consistent Calibration Measure
We now ask whether there is a kernel $K$ for which $\mathsf{kCE}^{K}$ is a
consistent calibration measure. The main result in this section is to show
that this is the case for the _Laplace_ kernel. Specifically, we prove that:
###### Theorem 8.5.
The Laplace kernel calibration error
$\mathsf{kCE}^{\mathsf{Lap}}:=\mathsf{kCE}^{K_{\mathsf{Lap}}}$ satisfies the
following inequalities
$\frac{1}{3}\mathsf{smCE}(\Gamma)\leq\mathsf{kCE}^{\mathsf{Lap}}(\Gamma)\leq\sqrt{\underline{\mathsf{dCE}}(\Gamma)}.$
By Corollary 6.4 and Theorem 7.3 it follows that $\mathsf{kCE}^{\mathsf{Lap}}$
is a $(1/2,2)$-consistent calibration measure. Interestingly, the choice of
kernel is crucial: we show that for the Gaussian kernel, the resulting measure
does not satisfy robust soundness anymore. Specifically in Appendix C, we
prove the following theorem.
###### Theorem 8.6.
For every $\varepsilon$, there is a distribution $\Gamma_{\varepsilon}$ over
$[0,1]\times\\{0,1\\}$, such that
$\mathsf{smCE}(\Gamma_{\varepsilon})\geq\Omega(\varepsilon^{\mathcal{O}(1)})$,
and
$\mathsf{kCE}^{\mathsf{Gauss}}(\Gamma_{\varepsilon})\leq\mathcal{O}(\exp(-1/\varepsilon))$,
where $\mathsf{kCE}^{\mathsf{Gauss}}:=\mathsf{kCE}^{K_{\mathsf{Gauss}}}$ is
the Gaussian kernel calibration error with
$K_{\mathsf{Gauss}}(u,v)=\exp(-(u-v)^{2})$.
We will start by proving continuity of the Laplace kernel calibration error. The
following lemma is a strengthening of the upper bound in Theorem 8.5.
###### Lemma 8.7.
Let $\Pi$ be a distribution over $[0,1]\times[0,1]\times\\{0,1\\}$. For
$(u,v,y)\sim\Pi$, let $\Gamma_{1}$ be the distribution of $(u,y)$ and
$\Gamma_{2}$ be the distribution of $(v,y)$. Assume
$\mathop{\mathbb{E}}|u-v|\leq\varepsilon$. Then
$|\mathsf{kCE}^{\mathsf{Lap}}(\Gamma_{1})-\mathsf{kCE}^{\mathsf{Lap}}(\Gamma_{2})|\leq
2\sqrt{2\varepsilon}.$
###### Proof.
Using Lemma 8.2 and a triangle inequality, we need to show that
$\|\mathop{\mathbb{E}}[(y-u)\phi(u)-(y-v)\phi(v)]\|_{\mathcal{H}}\leq\mathcal{O}(\sqrt{\varepsilon}).$
Writing $(y-u)\phi(u)-(y-v)\phi(v)=y(\phi(u)-\phi(v))-(u\phi(u)-v\phi(v))$ and using the triangle inequality, convexity of the norm, and $|y|\leq 1$, we have
$\|\mathop{\mathbb{E}}[(y-u)\phi(u)-(y-v)\phi(v)]\|_{\mathcal{H}}\leq\mathop{\mathbb{E}}\|u\phi(u)-v\phi(v)\|_{\mathcal{H}}+\mathop{\mathbb{E}}\|\phi(u)-\phi(v)\|_{\mathcal{H}}.$
We will bound both of those terms separately. By Jensen's inequality, we have
$\mathop{\mathbb{E}}\|\phi(u)-\phi(v)\|_{\mathcal{H}}\leq\sqrt{\mathop{\mathbb{E}}\|\phi(u)-\phi(v)\|_{\mathcal{H}}^{2}},$
and similarly for $\|u\phi(u)-v\phi(v)\|_{\mathcal{H}}$. Now
$\mathop{\mathbb{E}}\|\phi(u)-\phi(v)\|_{\mathcal{H}}^{2}=2-\mathop{\mathbb{E}}2K(u,v)=2-\mathop{\mathbb{E}}2\exp(-|u-v|)\leq
2\varepsilon,$
where the inequality follows from $\exp(-x)\geq 1-x$. Similarly,
$\mathop{\mathbb{E}}\|u\phi(u)-v\phi(v)\|_{\mathcal{H}}^{2}=\mathop{\mathbb{E}}[u^{2}+v^{2}-2uvK(u,v)]\leq\mathop{\mathbb{E}}\big[(u-v)^{2}+2uv|u-v|\big]\leq
2\mathop{\mathbb{E}}|u-v|\leq 2\varepsilon,$
where we used $K(u,v)\geq 1-|u-v|$ and then $(u-v)^{2}+2uv|u-v|=|u-v|(|u-v|+2uv)\leq 2|u-v|$ for $u,v\in[0,1]$.
Adding those two together, we get the final bound
$\|\mathop{\mathbb{E}}[(y-u)\phi(u)-(y-v)\phi(v)]\|_{\mathcal{H}}\leq
2\sqrt{2\varepsilon}.\qed$
###### Lemma 8.8.
Let $B_{\mathsf{Lap}}$ be a set of functions $w:\mathbb{R}\to\mathbb{R}$
bounded by one with respect to the norm induced by the Laplace kernel
$\exp(-|u-v|)$ on the associated RKHS. Then for any $1$-Lipschitz function
$w:[0,1]\to[-1,1]$ and any $\varepsilon>0$, there is $\tilde{w}_{\varepsilon}$ with
$\|\tilde{w}_{\varepsilon}\|_{\mathcal{H}_{\mathsf{Lap}}}\leq 3$, such that
$|w(x)-\tilde{w}_{\varepsilon}(x)|<\varepsilon$ for all $x\in[0,1]$.
###### Proof.
For a $1$-Lipschitz function $w:[0,1]\to[-1,1]$, we will first construct the
$1$-Lipschitz extension of $w$, say $\tilde{w}:\mathbb{R}\to[-1,1]$, by taking
$\tilde{w}(t)=\left\\{\begin{array}[]{ll}0&\textrm{ for }t\leq-1\\\
w(0)(1+t)&\textrm{ for }t\in[-1,0]\\\ w(t)&\textrm{ for }t\in[0,1]\\\
w(1)(2-t)&\textrm{ for }t\in[1,2]\\\ 0&\textrm{ otherwise.}\end{array}\right.$
It is a standard fact that if we take the convolution
$\tilde{w}_{\varepsilon}:=\tilde{w}*g_{\varepsilon}$, where $g_{\varepsilon}$
is a smooth non-negative function supported on $[-\varepsilon,\varepsilon]$
with $\int_{\mathbb{R}}g_{\varepsilon}=1$, then $\tilde{w}_{\varepsilon}$
satisfies the following conditions [hss].
1. 1.
$~{}\forall v,\,|\tilde{w}(v)-\tilde{w}_{\varepsilon}(v)|\leq\varepsilon$,
2. 2.
$\operatorname{supp}(\tilde{w}_{\varepsilon})\subset[-1-\varepsilon,2+\varepsilon]$,
3. 3.
$\tilde{w}_{\varepsilon}$ is smooth, and moreover $~{}\forall
v,\,|\tilde{w}_{\varepsilon}^{\prime}(v)|\leq 1$.
Combining those properties, for sufficiently small $\varepsilon$ we get
$\|\tilde{w}_{\varepsilon}\|_{\mathcal{H}_{\mathsf{Lap}}}^{2}=\|\tilde{w}_{\varepsilon}\|_{2}^{2}+\|\tilde{w}_{\varepsilon}^{\prime}\|_{2}^{2}\leq(3+2\varepsilon)(1+\varepsilon)^{2}+(3+2\varepsilon)\leq
9.$
∎
We are now ready to prove Theorem 8.5.
###### Proof of Theorem 8.5.
By Lemma 8.8, we have
$\mathsf{smCE}(\Gamma)\leq 3\,\mathsf{kCE}^{\mathsf{Lap}}(\Gamma)$. Indeed, let us
take a $1$-Lipschitz weight function $w$ such that
$\mathsf{smCE}(\Gamma)=\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}(y-v)w(v).$ Now we
can take $\tilde{w}\in 3B_{\mathsf{Lap}}$ as in Lemma 8.8, such that
$\mathop{\mathbb{E}}(y-v)\tilde{w}(v)\geq\mathsf{smCE}(\Gamma)-\varepsilon$,
and therefore
$\mathsf{kCE}^{\mathsf{Lap}}(\Gamma)\geq\frac{1}{3}(\mathsf{smCE}(\Gamma)-\varepsilon)$.
Taking $\varepsilon\to 0$ proves the desired fact.
The other inequality
$\mathsf{kCE}^{\mathsf{Lap}}(\Gamma)\leq\underline{\mathsf{dCE}}(\Gamma)^{1/2}$
follows directly from Lemma 8.7: by definition of $\underline{\mathsf{dCE}}$
we can find distribution $\Pi\in{\mathsf{ext}}(\Gamma)$ of $(u,v,y)$ s.t.
$(v,y)$ is distributed according to $\Gamma$, and the distribution
$\Gamma^{\prime}$ of $(u,y)$ is perfectly calibrated, and
$\mathop{\mathbb{E}}|u-v|=\underline{\mathsf{dCE}}(\Gamma)$. Since $\Gamma^{\prime}$ is perfectly calibrated, $\mathsf{kCE}^{\mathsf{Lap}}(\Gamma^{\prime})=0$, and Lemma 8.7 yields
$\mathsf{kCE}^{\mathsf{Lap}}(\Gamma)\leq\mathsf{kCE}^{\mathsf{Lap}}(\Gamma^{\prime})+\mathcal{O}(\sqrt{\underline{\mathsf{dCE}}(\Gamma)})=\mathcal{O}(\sqrt{\underline{\mathsf{dCE}}(\Gamma)}).\qed$
### 8.3 Alternate Estimation Algorithms
Computing the exact kernel calibration error from $n$ samples requires
$O(n^{2})$ time, by the computation in Claim 8.3. However, we can approximate this
quantity efficiently, by simply sub-sampling terms independently from the
$n^{2}$ terms in Equation (28). By standard arguments, sub-sampling
$\mathcal{O}(\varepsilon^{-2}\log(\delta^{-1}))$ terms yields an estimator
accurate within $\pm\varepsilon$, with probability $1-\delta$.
In this section we will describe two alternate algorithms for estimating
Laplace kernel calibration specifically. These algorithms do not improve over
the naive sub-sampling estimator in worst-case guarantees, and are not
algorithmically novel — the algorithms are corollaries of [RR07]. Nevertheless,
we include them to highlight an intriguing connection: one of these algorithms
involves a randomized binning-like estimator, which is suggestive of (though
not formally equivalent to) our notion of $\mathsf{intCE}$. We consider it
somewhat surprising that the formal connection between $\mathsf{intCE}$ and
$\mathsf{kCE}$ extends to an _algorithmic_ similarity between their respective
estimators.
We present the algorithms as constructing an unbiased and bounded estimator
for the following quantity.
###### Definition 8.9 (Empirical Kernel Calibration Error).
For a sample $S=((v_{1},y_{1}),(v_{2},y_{2}),\ldots,(v_{k},y_{k}))$ we define
the empirical kernel calibration error as
$\widehat{\mathsf{kCE}}^{K}(S)^{2}=\frac{1}{k^{2}}\sum_{i,j}(y_{i}-v_{i})(y_{j}-v_{j})K(v_{i},v_{j}).$
At an abstract level, Rahimi and Recht [RR07] provide an efficient way of finding a low-rank
approximation of the kernel matrix $M\in\mathbb{R}^{k\times k}$ given by
$M_{ij}=K(v_{i},v_{j})$. Specifically, a version of Claim 1 in their paper
can be stated as follows.
###### Theorem 8.10.
[RR07] Let $v_{1},\ldots,v_{k}\in[0,1]$, and let us consider the kernel matrix
$M\in\mathbb{R}^{k\times k}$, $M_{i,j}=K(v_{i},v_{j})$, where $K(u,v)=K(|u-v|)$
is a shift-invariant positive definite kernel. There is a random
$z\in\mathbb{C}^{k}$, such that if we take
$\tilde{M}:=zz^{*}\in\mathbb{C}^{k\times k}$, then
$\mathop{\mathbb{E}}[\tilde{M}]=M$, and moreover $\|z\|_{\infty}\leq 1$.
For the specific case of the Laplace kernel, the random vector
$z\in\mathbb{C}^{k}$ guaranteed by Theorem 8.10 can be sampled as follows. The
proof of Claim 8.11 is standard, and included for completeness in Appendix B.3.
###### Claim 8.11.
For any sequence of points $v_{1},v_{2},\ldots,v_{k}\in\mathbb{R}$, if we
choose $\omega\sim\mathrm{Cauchy}(1)$, and a vector $z\in\mathbb{C}^{k}$ given
by $z_{j}:=\exp(-i\omega v_{j})$, then $\|z\|_{\infty}\leq 1$ and
$\mathop{\mathbb{E}}[zz^{*}]=M\in\mathbb{R}^{k\times k}$, where
$M_{i,j}=\exp(-|v_{i}-v_{j}|)$.
###### Remark 8.12.
Rahimi-Recht [RR07] actually prove a significantly stronger version of Theorem
8.10, showing that under certain additional conditions on the kernel $K$ the
average of independent rank one matrices
$M^{\prime}:=\frac{1}{s}\sum_{i}\tilde{M}_{i}$ as above uniformly converges to
$M$. That is, for $s=\mathcal{O}(\varepsilon^{-2}\log(\delta^{-1}))$, with
probability $1-\delta$ they provide a bound $~{}\forall
i,j\,|M_{ij}-M^{\prime}_{ij}|\leq\varepsilon$. These additional conditions are
not satisfied by the Laplace kernel, but we will not need this stronger,
uniform bound.
Algorithm 1 Random Fourier features algorithm for Laplace kernel calibration
error estimation
1:function LaplaceFourierEstimate($(v_{1},y_{1}),\ldots,(v_{k},y_{k})$)
2: $\omega$ $\leftarrow$ $\mathrm{Cauchy}(1)$
3: $R$ $\leftarrow$ $\sum_{j}(v_{j}-y_{j})\exp(-i\omega v_{j})$
4: return $\frac{|R|^{2}}{k^{2}}$
5:end function
With Theorem 8.10 we can find in linear time an unbiased and bounded estimate
of the quantity $\widehat{\mathsf{kCE}}(S)^{2}$. Indeed, we have
$\widehat{\mathsf{kCE}}(S)^{2}=\frac{1}{k^{2}}\sum_{i,j}(y_{i}-v_{i})(y_{j}-v_{j})K(v_{i},v_{j})=\frac{1}{k^{2}}r^{T}Mr$
where $r_{i}:=(v_{i}-y_{i})$ and $M$ is the kernel matrix defined above. Now,
on one hand for a random vector $z$ as in Theorem 8.10 we have
$\frac{1}{k^{2}}\mathop{\mathbb{E}}[r^{T}zz^{*}r]=\frac{1}{k^{2}}r^{T}(\mathop{\mathbb{E}}[zz^{*}])r=\frac{r^{T}Mr}{k^{2}}=\widehat{\mathsf{kCE}}(S)^{2}.$
On the other hand
$\frac{r^{T}zz^{*}r}{k^{2}}=\frac{|\langle z,r\rangle|^{2}}{k^{2}}\leq 1,$
since both $z$ and $r$ have entries bounded by $1$. Therefore the quantity
$\frac{|\langle z,r\rangle|^{2}}{k^{2}}$ is an unbiased and bounded by one
estimate of $\widehat{\mathsf{kCE}}(S)^{2}$.
Algorithm 1 describes an implementation of the above estimator for the Laplace
kernel, which runs in linear time. The argument described above (directly
applying the results of [RR07]) yields the following guarantee.
###### Theorem 8.13.
The algorithm LaplaceFourierEstimate provides an unbiased and bounded by $1$
estimate for $\widehat{\mathsf{kCE}}(S)^{2}$ and runs on $n$ samples in time
$\mathcal{O}(n)$.
If we want to translate this unbiased estimator into a confidence interval, it
is enough to run LaplaceFourierEstimate independently
$\mathcal{O}(\varepsilon^{-2}\log\delta^{-1})$ times, and average the results,
to get an estimate of $\widehat{\mathsf{kCE}}_{\mathsf{\mathsf{Lap}}}(S)^{2}$
that is accurate within $\pm\varepsilon$ with probability $1-\delta$, via
standard Chernoff bound arguments.
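A minimal Python sketch of one run of Algorithm 1 (an illustration on our part; the function name and interface are ours) follows directly from the pseudocode.

```python
import numpy as np

def laplace_fourier_estimate(vs, ys, rng=None):
    """One run of Algorithm 1: an unbiased, [0, 1]-bounded estimate of
    the squared empirical Laplace-kernel calibration error."""
    if rng is None:
        rng = np.random.default_rng()
    v = np.asarray(vs, float)
    y = np.asarray(ys, float)
    omega = rng.standard_cauchy()                  # omega ~ Cauchy(1)
    R = np.sum((v - y) * np.exp(-1j * omega * v))  # random Fourier feature
    return abs(R) ** 2 / len(v) ** 2
```

Averaging $\mathcal{O}(\varepsilon^{-2}\log\delta^{-1})$ independent runs gives a $\pm\varepsilon$ estimate with probability $1-\delta$, as discussed above. (On a single sample, the random phase cancels in the modulus, so the output is deterministic.)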
Interestingly, in Claim 2 of the same work, Rahimi and Recht [RR07] provide a
different randomized low-rank approximation to the kernel matrix $M$, the
_Random Binning Features_ algorithm. It is insightful to write down explicitly
our kernel calibration estimator with this different low-rank approximation.
As it turns out, by unrolling the abstraction, we can uncover a new and
“natural” unbiased and bounded estimate of $\widehat{\mathsf{kCE}}(S)^{2}$,
described in Algorithm 2. The final algorithm is similar in spirit to the
binning algorithms estimating $\mathsf{intCE}$ (see Section 6), but here we
are using both random bin size (from a carefully chosen distribution) and
random shifts.
Algorithm 2 Binning algorithm for Laplace kernel calibration error estimation
1:function LaplaceBinningEstimate($(v_{1},y_{1}),\ldots,(v_{k},y_{k})$)
2: $\delta$ $\leftarrow$ $\mathrm{Gamma}(k=2,\theta=1)$
3: $\tau$ $\leftarrow$ $\mathrm{Unif}(0,\delta)$
4: $B[0,\ldots,\lfloor 1/\delta\rfloor+2]$ $\leftarrow$ 0
5: for $i\leftarrow 1\textrm{ to }k$ do
6: $t$ $\leftarrow$ $\lfloor{\frac{v_{i}+\tau}{\delta}}\rfloor$
7: $B[t]$ $\leftarrow$ $B[t]+(v_{i}-y_{i})$
8: end for
9: return $\frac{1}{k^{2}}\sum_{i}B[i]^{2}$
10:end function
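A Python sketch of one run of the binning estimator (illustrative; the function name is ours) is below. We normalize the sum of squared bin totals by $k^{2}$, which is what makes the estimate bounded by $1$ as claimed in Theorem 8.14.

```python
import numpy as np

def laplace_binning_estimate(vs, ys, rng=None):
    """One run of the random-binning estimator for the squared empirical
    Laplace-kernel calibration error (normalized by k^2, matching the
    boundedness claim of Theorem 8.14)."""
    if rng is None:
        rng = np.random.default_rng()
    v = np.asarray(vs, float)
    y = np.asarray(ys, float)
    k = len(v)
    delta = rng.gamma(shape=2.0, scale=1.0)    # random bin width ~ Gamma(2, 1)
    tau = rng.uniform(0.0, delta)              # random shift of the grid
    bins = np.floor((v + tau) / delta).astype(int)
    B = np.zeros(bins.max() + 1)
    np.add.at(B, bins, v - y)                  # accumulate residuals per bin
    return float(np.sum(B ** 2)) / k ** 2
```

Note that two samples with the same prediction $v$ always land in the same bin, so their residuals cancel exactly, regardless of the random bin width and shift.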
The following theorem is essentially implicit in [RR07] and follows directly
from their proof method. Since this exact statement is not present in their
paper, we repeat the proof in Appendix B.3.
###### Theorem 8.14.
The algorithm LaplaceBinningEstimate provides an unbiased and bounded by $1$
estimate for $\widehat{\mathsf{kCE}}(S)^{2}$ and runs on $n$ samples in time
$\mathcal{O}(n)$.
### 8.4 A Practical Note
For practitioners measuring Laplace kernel calibration, we emphasize that
while it requires $\mathcal{O}(n^{2})$ time to compute _exactly_ from $n$
samples, it can be computed _approximately_ much more efficiently.
Specifically, for a sample of prediction-label pairs
$S=((v_{1},y_{1}),(v_{2},y_{2}),\ldots,(v_{n},y_{n}))$, the exact kernel error
computation is given by Equation (28), reproduced below for the Laplace
kernel:
$\mathsf{kCE}^{\mathsf{Lap}}(\Gamma)^{2}=\frac{1}{n^{2}}\sum_{i,j}(y_{i}-v_{i})(y_{j}-v_{j})\exp(-|v_{i}-v_{j}|)$
(29)
This is an average of $n^{2}$ terms. To approximate this, we can instead
average $M$ of these terms, chosen independently at random with replacement.
This will yield an estimate of $(\mathsf{kCE}^{\mathsf{Lap}})^{2}$ accurate to within
$\pm\widetilde{\mathcal{O}}(1/\sqrt{M})$. For example, a reasonable setting of
parameters is $M=10n$, which yields a linear-time estimator that is accurate
to within $\pm\widetilde{\mathcal{O}}(1/\sqrt{n})$. Note, finally, that the
above computation computes the _square_ of the kernel calibration error
$\mathsf{kCE}$.
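The sub-sampling scheme described above takes only a few lines; the sketch below is our illustration of it (function name and interface ours).

```python
import numpy as np

def kce_laplace_subsampled(vs, ys, m, rng=None):
    """Estimate the squared Laplace kernel calibration error, Eq. (29),
    by averaging m of the n^2 terms sampled uniformly with replacement."""
    if rng is None:
        rng = np.random.default_rng()
    v = np.asarray(vs, float)
    y = np.asarray(ys, float)
    n = len(v)
    i = rng.integers(n, size=m)                # random row indices
    j = rng.integers(n, size=m)                # random column indices
    terms = (y[i] - v[i]) * (y[j] - v[j]) * np.exp(-np.abs(v[i] - v[j]))
    return float(terms.mean())                 # accurate to ~O(1/sqrt(m))
```

With $m=10n$ this is a linear-time estimator of the *squared* error, accurate to within $\pm\widetilde{\mathcal{O}}(1/\sqrt{n})$ as stated above.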
## 9 Estimating Calibration Measures Using Random Samples
When the distribution $\Gamma$ is the uniform distribution over data points
$(v_{1},y_{1}),\ldots,(v_{n},y_{n})\in[0,1]\times\\{0,1\\}$, we can compute
$\underline{\mathsf{dCE}}(\Gamma)$, $\mathsf{smCE}(\Gamma)$, and
$\mathsf{kCE}^{K}(\Gamma)$ efficiently by Remark 7.10, Theorem 7.14, and
Sections 8.3 and 8.4. In this section, we show that we can efficiently
estimate these quantities for general $\Gamma$ using i.i.d. data drawn from
$\Gamma$.
### 9.1 Estimating $\mathsf{wCE},\mathsf{kCE}$ and $\mathsf{smCE}$
In this subsection we bound the number of samples from the distribution
$\Gamma$ needed to estimate $\mathsf{wCE}^{W}(\Gamma)$ in terms of the
_Rademacher complexity_ of the function class $W$. Using known bounds on the
Rademacher complexity of the class of all $1$-Lipschitz functions, we prove
that $\mathsf{smCE}(\Gamma)$ can be efficiently estimated using
$\mathcal{O}(\varepsilon^{-2}\log\delta^{-1})$ samples, in polynomial time.
Together with known bounds on the Rademacher complexity of the unit ball in
the RKHS associated with a kernel $K$, we also give an alternative proof of
the result in [KSJ18] that the $\mathsf{kCE}$ can be estimated efficiently up
to an error $\varepsilon$ using
$\mathcal{O}(\varepsilon^{-2}\log\delta^{-1})$ samples.
###### Definition 9.1 (Empirical weighted calibration error).
For a sample
$S=\\{(v_{1},y_{1}),\ldots,(v_{k},y_{k})\\}\subset[0,1]\times\\{0,1\\}$ we
define the _empirical weighted calibration error_ of $S$ with respect to the
family $W$ as
$\widehat{\mathsf{wCE}}^{W}(S):=\sup\limits_{w\in
W}\frac{1}{k}\sum_{i}(y_{i}-v_{i})w(v_{i}).$
Note that this definition coincides with $\mathsf{wCE}$ applied to the
empirical distribution over the sample $S$.
We will show that for function classes with small Rademacher complexity, the
empirical weighted calibration error is with high probability close to the
weighted calibration error.
Let us first briefly reintroduce the relevant notions from the theory of
Rademacher complexity. We refer the interested reader to [MRT18] for a more
detailed exposition.
###### Definition 9.2 (Rademacher complexity).
For a set $A\subset\mathbb{R}^{k}$ we define its Rademacher complexity
$\mathcal{R}(A)$ as
$\mathcal{R}(A):=\mathop{\mathbb{E}}_{\sigma}\left[\sup\limits_{a\in
A}\sum_{i=1}^{k}\sigma_{i}a_{i}\right],$
where the expectation is taken over $\sigma_{i}\sim\\{\pm 1\\}$ independent
Rademacher random variables.
For a function class $\mathcal{F}$ from $\mathcal{X}\to\mathbb{R}$ we define
the Rademacher complexity of $\mathcal{F}$ with respect to the sample
$S\in\mathcal{X}^{n}$, $S=(s_{1},s_{2},\ldots s_{n})$ as
$\mathcal{R}_{S}(\mathcal{F}):=\mathcal{R}(\\{(f(s_{1}),f(s_{2}),\ldots,f(s_{n})):f\in\mathcal{F}\\}).$
Finally, for a function class $\mathcal{F}$ and a distribution $\mathcal{D}$
over $\mathcal{X}$ we define the Rademacher complexity of the function class
$\mathcal{F}$ with respect to distribution $\mathcal{D}$ with sample size $n$
as
$\mathcal{R}_{\mathcal{D},n}(\mathcal{F})=\mathop{\mathbb{E}}_{S\sim\mathcal{D}^{n}}\mathcal{R}_{S}(\mathcal{F}).$
In what follows we will skip the subscript $\mathcal{D}$ to simplify the
notation.
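For intuition, $\mathcal{R}_{S}(\mathcal{F})$ for a small finite class can be estimated by straightforward Monte Carlo over random sign vectors. The sketch below is an illustration of Definition 9.2 as stated (without a $1/n$ normalization), not part of the paper's algorithms; the function name is ours.

```python
import numpy as np

def empirical_rademacher(values, n_sigma=2000, rng=None):
    """Monte Carlo estimate of R(A) = E_sigma[ sup_{a in A} sum_i sigma_i a_i ].
    `values` is a (|F|, k) array whose rows are (f(s_1), ..., f(s_k)), f in F."""
    rng = np.random.default_rng(rng)
    values = np.asarray(values, dtype=float)
    sigma = rng.choice([-1.0, 1.0], size=(n_sigma, values.shape[1]))
    # for each sign vector, take the best function in the class, then average
    return float((sigma @ values.T).max(axis=1).mean())
```

For example, for the two-element class $\\{f,-f\\}$ with values $f=(1,0,0)$, the supremum equals $|\sigma_{1}|=1$ for every sign vector, so the estimate is exactly $1$.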
The main application of the notion of Rademacher complexity is the following
bound on the _generalization error_. This theorem will be crucial in our
proof that for function classes $W$ with small Rademacher complexity, we can
estimate $\mathsf{wCE}^{W}(\Gamma)$ using a small number of samples.
###### Theorem 9.3 ([MRT18]).
For a random sample $S\sim\mathcal{D}^{n}$, with probability at least
$1-\delta$ we have
$\sup\limits_{w\in
W}\left|\mathop{\mathbb{E}}_{x\sim\mathcal{D}}w(x)-\frac{1}{n}\sum_{i}w(s_{i})\right|\leq\mathcal{R}_{\mathcal{D},n}(W)+\mathcal{O}\left(\sqrt{\frac{\log\delta^{-1}}{n}}\right).$
We need also the following technical statement about Rademacher complexity.
###### Theorem 9.4 ([MZ03]).
Let $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ be a contraction (i.e.
$\|T(x)-T(y)\|_{2}\leq\|x-y\|_{2}$ for all $x,y\in\mathbb{R}^{n}$). Then
$\mathcal{R}(T(A))\leq\mathcal{R}(A).$
With these results in hand we will prove the following theorem.
###### Theorem 9.5.
Let $W$ be a family of functions from $[0,1]\to\mathbb{R}$ with bounded
Rademacher complexity. Let $\Gamma$ be a distribution over
$[0,1]\times\\{0,1\\}$. Then with probability $1-\delta$ over a random sample
$S\sim\Gamma^{n}$, we have
$|\mathsf{wCE}^{W}(\Gamma)-\widehat{\mathsf{wCE}}^{W}(S)|\leq\mathcal{R}_{n}(W)+\mathcal{O}\left(\sqrt{\frac{\log\delta^{-1}}{n}}\right),$
where the Rademacher complexity $\mathcal{R}_{n}(W)$ is with respect to the
marginal distribution of $v$, with $(v,y)\sim\Gamma$.
In particular if $\mathcal{R}_{n}(W)\leq\sqrt{n^{-1}R_{0}}$, it is possible to
estimate $\mathsf{wCE}(\Gamma)$ up to additive error $\varepsilon$ with
probability $1-\delta$, using
$\mathcal{O}(\varepsilon^{-2}(R_{0}+\log\delta^{-1}))$ samples.
This theorem follows almost immediately from the standard generalization
bounds using Rademacher complexity (Theorem 9.3); the only minor technical
step is showing that if a family of functions $W$ has small Rademacher
complexity, the same is true for the family $\hat{W}$ of functions of the
form $\hat{w}(v,y):=w(v)(y-v)$ used in the definition of $\mathsf{wCE}$.
###### Lemma 9.6.
Let $W$ be a family of functions from $[0,1]$ to $\mathbb{R}$, and let
$\hat{W}$ be a family of functions $[0,1]\times\\{0,1\\}\to\mathbb{R}$,
consisting of functions $\hat{w}(v,y)=w(v)(y-v)$ for each $w\in W$.
Then for a distribution $\Gamma$ over $[0,1]\times\\{0,1\\}$, such that the
marginal distribution of the projection onto the first coordinate is
$\Gamma_{1}$, we have
$\mathcal{R}_{\Gamma,n}(\hat{W})\leq\mathcal{R}_{\Gamma_{1},n}(W).$ (30)
###### Proof.
It is enough to prove the corresponding inequality for a fixed sample
$S=\\{(v_{1},y_{1}),\ldots,(v_{n},y_{n})\\}$. We wish to show that
$\mathop{\mathbb{E}}_{\sigma}\sup\limits_{w}\sum_{i}\sigma_{i}(v_{i}-y_{i})w(v_{i})\leq\mathop{\mathbb{E}}_{\sigma}\sup\limits_{w}\sum_{i}\sigma_{i}w(v_{i}).$
(31)
Indeed, since the sample $S$ is fixed, we can consider a set
$A\subset\mathbb{R}^{n}$ given by $A=\\{(w(v_{1}),\ldots,w(v_{n})):w\in W\\}$,
and note that a map $T:\mathbb{R}^{n}\to\mathbb{R}^{n}$ which maps a vector
$(w_{1},\ldots w_{n})\mapsto((v_{1}-y_{1})w_{1},\ldots(v_{n}-y_{n})w_{n})$ is
a linear contraction (i.e. for any $w,w^{\prime}$
$\|T(w)-T(w^{\prime})\|\leq\|w-w^{\prime}\|$), since
$~{}\forall_{i}|v_{i}-y_{i}|\leq 1$.
By Theorem 9.4, the Rademacher complexity of any set $A$ does not increase
under contraction, i.e. $\mathcal{R}(T(A))\leq\mathcal{R}(A)$, which, by
definition of $\mathcal{R}(A)$, $T$ and $A$ is exactly (31). ∎
With this lemma in hand, the proof of Theorem 9.5 follows directly from the
generalization bound (Theorem 9.3).
###### Proof of Theorem 9.5.
Applying Theorem 9.3 to the family $\hat{W}$ defined in Lemma 9.6, we see
that
$|\mathsf{wCE}^{W}(\Gamma)-\widehat{\mathsf{wCE}}^{W}(S)|=\sup\limits_{\hat{w}}\left|\frac{1}{n}\sum\hat{w}(v_{i},y_{i})-\mathop{\mathbb{E}}\hat{w}(v,y)\right|\leq\mathcal{R}_{n}(\hat{W})+\mathcal{O}\left(\sqrt{\frac{\log\delta^{-1}}{n}}\right).$
Since by Lemma 9.6 we have $\mathcal{R}_{n}(\hat{W})\leq\mathcal{R}_{n}(W)$,
the statement of the theorem follows. ∎
Finally, for classes of functions given by the unit ball in some RKHS
associated with the kernel $K$, the Rademacher complexity has been explicitly
bounded.
###### Theorem 9.7 ([Men02]).
Let $K:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be a kernel associated with
RKHS $\mathcal{H}$. Let $B_{K}=\\{h:\|h\|_{\mathcal{H}}\leq 1\\}$. Then for
any distribution $\Gamma$, we have
$\mathcal{R}_{n}(B_{K})\leq\mathcal{O}(\frac{B}{\sqrt{n}})\ \text{where}\
B=\sup\limits_{v\in\mathbb{R}}\sqrt{K(v,v)}.$
In particular, together with Theorem 9.5, this implies the following bound on
the number of samples needed to estimate $\mathsf{kCE}^{K}(\Gamma)$ for any
kernel $K$ with $\sup\limits_{v}\sqrt{K(v,v)}\leq B$. This theorem was
already proven directly in [KSJ18]; we provide an alternative proof by
composing Theorem 9.5 and Theorem 9.7.
###### Theorem 9.8.
Let $K:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be a kernel associated with
RKHS $\mathcal{H}$. Let $B_{K}=\\{h:\|h\|_{\mathcal{H}}\leq 1\\}$. Then for
any distribution $\Gamma$ we can estimate the $\mathsf{kCE}^{K}(\Gamma)$ with
probability at least $1-\delta$, with additive error at most $\varepsilon$
using $n=\mathcal{O}((B^{2}+\log(1/\delta))\varepsilon^{-2})$ samples.
Since the class of $1$-Lipschitz functions also has bounded Rademacher
complexity, it follows that we can estimate $\mathsf{smCE}(\Gamma)$ using
$n=\mathcal{O}(\ln(1/\delta)/\varepsilon^{2})$ samples in time
$\mathrm{poly}(n)$.
###### Corollary 9.9.
We can estimate $\mathsf{smCE}(\Gamma)$ using
$n=\mathcal{O}((\ln(1/\delta))/\varepsilon^{2})$ samples from the distribution
$D_{f}$ in time $\mathrm{poly}(n)$.
###### Proof.
The class of $[-1,1]$ bounded $1$-Lipschitz functions $L_{1}$ has bounded
Rademacher complexity $\mathcal{R}_{n}(L_{1})\leq\mathcal{O}(1/\sqrt{n})$
[AST04]. Therefore, by Theorem 9.5, with probability $1-\delta$, for a random
sample of size $n=C\log(1/\delta)/{\varepsilon^{2}}$, we have
$\mathsf{wCE}^{L_{1}}(\Gamma)=\widehat{\mathsf{wCE}}^{L_{1}}(S)\pm\varepsilon$.
The claim about efficient computation follows from Theorem 7.14. ∎
### 9.2 Estimating $\underline{\mathsf{dCE}}$
We will prove that using $\mathcal{O}(\varepsilon^{-2})$ samples from the
distribution $\Gamma$, we can estimate $\underline{\mathsf{dCE}}(\Gamma)$ up
to an error $\varepsilon$ with probability $2/3$. An efficient algorithm for
estimating $\underline{\mathsf{dCE}}$ of an empirical distribution on a
sample was described in Section 7; here we show that
$\underline{\mathsf{dCE}}$ of the empirical distribution over a sample $S$
drawn independently from $\Gamma$ approximates
$\underline{\mathsf{dCE}}(\Gamma)$.
###### Theorem 9.10.
Let $S=((v_{1},y_{1}),\ldots(v_{n},y_{n}))$ be a sample drawn from
$\Gamma^{n}$, where $n=\Omega(\varepsilon^{-2})$. Then with probability $2/3$
we have
$\underline{\mathsf{dCE}}(S)=\underline{\mathsf{dCE}}(\Gamma)\pm\varepsilon,$
where $\underline{\mathsf{dCE}}(S)$ is the $\underline{\mathsf{dCE}}$ for the
empirical distribution over the sample $S$.
In order to prove this, we will need the following lemma, where we use
$W_{1}(P_{1},P_{2})$ to denote the Wasserstein $1$-distance of two
distributions $P_{1},P_{2}$ over $[0,1]$:
$W_{1}(P_{1},P_{2})=\sup\limits_{f}\left(\mathop{\mathbb{E}}_{v\sim
P_{1}}f(v)-\mathop{\mathbb{E}}_{v\sim P_{2}}f(v)\right),$
where the supremum is over all $1$-Lipschitz functions $f:[0,1]\to\mathbb{R}$.
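Since our distributions live on $[0,1]$, the $W_{1}$ distance between two empirical distributions reduces to the area between their step CDFs, $W_{1}=\int|F_{1}(t)-F_{2}(t)|\,dt$. A small illustrative sketch (the function name is ours):

```python
import numpy as np

def w1_empirical(xs, ys):
    """Wasserstein-1 distance between two empirical distributions on [0,1],
    computed as the integral of |F_X(t) - F_Y(t)| over t for the step CDFs."""
    xs = np.sort(np.asarray(xs, dtype=float))
    ys = np.sort(np.asarray(ys, dtype=float))
    grid = np.sort(np.concatenate([xs, ys]))  # all jump points of either CDF
    cdf_x = np.searchsorted(xs, grid, side="right") / len(xs)
    cdf_y = np.searchsorted(ys, grid, side="right") / len(ys)
    # both CDFs are constant between consecutive grid points
    return float(np.sum(np.abs(cdf_x - cdf_y)[:-1] * np.diff(grid)))
```

For equal-size samples this agrees with the sorted-matching formula $\frac{1}{n}\sum_{i}|x_{(i)}-y_{(i)}|$.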
###### Lemma 9.11.
For two distributions $\Gamma^{1},\Gamma^{2}$ over $[0,1]\times\\{0,1\\}$, let
us denote $\lambda_{b}:=\Pr_{\Gamma^{2}}(y=b)$. If the following conditions
are satisfied,
1. $W_{1}((v|y=b)_{\Gamma^{1}},(v|y=b)_{\Gamma^{2}})\leq\frac{\varepsilon}{\lambda_{b}}$ for $b\in\\{0,1\\}$,
2. $|\Pr_{\Gamma^{1}}(y=1)-\lambda_{1}|\leq\varepsilon$,
then we have
$\underline{\mathsf{dCE}}(\Gamma^{1})=\underline{\mathsf{dCE}}(\Gamma^{2})\pm\mathcal{O}(\varepsilon).$
###### Proof.
Similarly to the proof of Theorem 7.3 in Section 7.1, by discretization it
suffices to consider the case where $\Gamma^{1}$ and $\Gamma^{2}$ are both
distributions over $V\times\\{0,1\\}$ for a finite $V\subseteq[0,1]$ and show
that for every finite $U\subseteq[0,1]$ satisfying $\\{0,1\\}\subseteq U$, it
holds that
$\underline{\mathsf{dCE}}^{U}(\Gamma^{1})=\underline{\mathsf{dCE}}^{U}(\Gamma^{2})\pm\mathcal{O}(\varepsilon).$
By Remark 7.8, it suffices to show that for every function
$r:V\times\\{0,1\\}\to[-1,1]$ satisfying
$|r(v_{1},y)-r(v_{2},y)|\leq|v_{1}-v_{2}|$ for every $v_{1},v_{2}\in V$ and
$y\in\\{0,1\\}$, it holds that
$|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma_{1}}[r(v,y)]-\mathop{\mathbb{E}}_{(v,y)\sim\Gamma_{2}}[r(v,y)]|\leq\mathcal{O}(\varepsilon).$
(32)
By our assumption in Item 1 and the definition of $W_{1}$, we have
$|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma_{1}}[r(v,y)|y=b]-\mathop{\mathbb{E}}_{(v,y)\sim\Gamma_{2}}[r(v,y)|y=b]|\leq\varepsilon/\lambda_{b}.$
(33)
Therefore,
$\displaystyle|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma_{1}}[r(v,y),y=b]-\mathop{\mathbb{E}}_{(v,y)\sim\Gamma_{2}}[r(v,y),y=b]|$
$\displaystyle={}$
$\displaystyle|\mathop{\mathbb{E}}_{\Gamma_{1}}[r(v,y)|y=b]\Pr_{\Gamma_{1}}[y=b]-\mathop{\mathbb{E}}_{\Gamma_{2}}[r(v,y)|y=b]\Pr_{\Gamma_{2}}[y=b]|$
$\displaystyle\leq{}$
$\displaystyle\left|\mathop{\mathbb{E}}_{\Gamma_{1}}[r(v,y)|y=b]-\mathop{\mathbb{E}}_{\Gamma_{2}}[r(v,y)|y=b]\right|\Pr_{\Gamma_{2}}[y=b]$
$\displaystyle+\mathop{\mathbb{E}}_{\Gamma_{1}}[r(v,y)|y=b]\left|\Pr_{\Gamma_{1}}[y=b]-\Pr_{\Gamma_{2}}[y=b]\right|$
$\displaystyle\leq{}$
$\displaystyle(\varepsilon/\lambda_{b})\lambda_{b}+\varepsilon$ (By (33) and
Item 2) $\displaystyle={}$ $\displaystyle\mathcal{O}(\varepsilon).$
Summing up over $b=0,1$ proves (32). ∎
In order to prove that the conditions of the lemma above are indeed satisfied
when $\Gamma^{2}$ is an empirical distribution over a sample from
$\Gamma^{1}$, we will need the following concentration theorem.
###### Theorem 9.12 ([FG15]).
Let $\mu$ be a distribution over $[0,1]$. If we sample
$n=\Omega(\varepsilon^{-2}\log\delta^{-1})$ points from the distribution
$(x_{1},x_{2},\ldots x_{n})\sim\mu^{n}$, then the empirical distribution
$\hat{\mu}_{S}$ on the sample $S$ satisfies
$\Pr(W_{1}(\hat{\mu}_{S},\mu)>\varepsilon)\leq\delta.$
We are now ready to prove the following lemma.
###### Lemma 9.13.
If we draw $n=\Omega(\varepsilon^{-2})$ samples from a distribution
$\Gamma^{1}$, and denote by $\Gamma^{2}$ the empirical distribution on the
sample $S=((v_{1},y_{1}),\ldots,(v_{n},y_{n}))$, then with probability at
least $2/3$ the pair of distributions $(\Gamma^{1},\Gamma^{2})$ satisfies the
conditions of Lemma 9.11.
###### Proof.
First of all, except with probability $1/10$ we have
$|\Pr_{\Gamma^{1}}(y=1)-\Pr_{\Gamma^{2}}(y=1)|\leq\varepsilon$ by the standard
Chernoff bound argument.
If this is the case, for $b\in\\{0,1\\}$ we get $n_{b}$ independent samples
from the conditional distribution $(v|y=b)_{\Gamma^{1}}$, where
$n_{b}=\Pr_{\Gamma^{2}}(y=b)n=\Omega\left(\Pr_{\Gamma^{2}}(y=b)\varepsilon^{-2}\right).$
Hence, by Theorem 9.12, with sufficiently high probability, for
$b\in\\{0,1\\}$,
$W_{1}((v|y=b)_{\Gamma^{1}},(v|y=b)_{\Gamma^{2}})\leq\mathcal{O}\left(\frac{1}{\sqrt{n_{b}}}\right)\leq\mathcal{O}\left(\frac{\varepsilon}{\sqrt{\Pr_{\Gamma^{2}}(y=b)}}\right)\leq\mathcal{O}\left(\frac{\varepsilon}{\Pr_{\Gamma^{2}}(y=b)}\right).\qed$
###### Proof of Theorem 9.10.
Follows by composition of Lemma 9.11 and Lemma 9.13. ∎
## 10 Experiments
We now conduct several experiments evaluating the calibration measures
discussed in this work on synthetic datasets. The purpose of this is twofold:
First, to explore how our measures behave (and compare to each other) on
reasonable families of distributions, beyond our worst-case theoretical
bounds. And second, to demonstrate that these measures can be computed
efficiently, some even in linear time.
### 10.1 Synthetic Datasets
Figure 1: Synthetic Dataset Family. Reliability diagrams for the family of
synthetic distributions $\mathcal{D}_{\beta}$, as the inverse-temperature
$\beta$ is varied. The center plot $\mathcal{D}_{1}$, at unit temperature, is
perfectly calibrated. Temperatures greater than $1$ are underconfident, while
temperatures less than $1$ are overconfident. The bottom row shows density
plots for the conditional distributions $p(f\mid y=0)$ and $p(f\mid y=1)$.
We consider evaluating calibration on the following family of distributions.
Define the “baseline” distribution $\mathcal{D}_{1}$ as the following joint
distribution $(f,y)\sim\mathcal{D}_{1}$:
$\displaystyle f\sim\textrm{Unif}[0,1]$ $\displaystyle
y\sim\textrm{Bernoulli}(f)$
This is a perfectly calibrated distribution, where predictions $f$ are uniform
over $[0,1]$. Now consider modifying the predicted outputs, by changing their
“temperature.” Formally, for any inverse-temperature parameter
$\beta=1/T\in\mathbb{R}_{>0}$, define the distribution $\mathcal{D}_{\beta}$
as:
$\mathcal{D}_{\beta}:=\left\\{\left(\frac{f^{\beta}}{f^{\beta}+(1-f)^{\beta}}~{},~{}y\right)\right\\}_{(f,y)\sim\mathcal{D}_{1}}$
(34)
If the original predictions in $\mathcal{D}_{1}$ were the output of a softmax,
the distribution $\mathcal{D}_{\beta}$ corresponds exactly to changing the
softmax temperature to $(1/\beta)$. For small temperatures, this pushes
predictions towards the endpoints of the interval $[0,1]$, yielding an
overconfident classifier. And for large temperatures, this pushes predictions
towards $0.5$, yielding an underconfident classifier. This family of
distributions $\\{\mathcal{D}_{\beta}\\}$ is illustrated in Figure 1.
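A sketch of sampling from $\mathcal{D}_{\beta}$, directly following Equation (34); the function name is ours.

```python
import numpy as np

def sample_D_beta(beta, n, rng=None):
    """Draw n pairs (f, y) from D_beta of Equation (34): sample from the
    perfectly calibrated D_1, then temperature-scale the predictions."""
    rng = np.random.default_rng(rng)
    f = rng.uniform(size=n)                         # f ~ Unif[0, 1]
    y = (rng.uniform(size=n) < f).astype(float)     # y ~ Bernoulli(f)
    f_beta = f**beta / (f**beta + (1.0 - f)**beta)  # inverse-temperature beta
    return f_beta, y
```

For $\beta=1$ the predictions are unchanged; for $\beta>1$ (low temperature) they are pushed toward the endpoints of $[0,1]$, and for $\beta<1$ toward $0.5$.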
### 10.2 Implementation Details
We will evaluate calibration measures on the family of distributions
$\\{\mathcal{D}_{\beta}\\}$. We draw $N=10000$ samples from each distribution
$\mathcal{D}_{\beta}$, and evaluate the calibration measures as described
below. All of the measures here are computed in linear time $\mathcal{O}(N)$,
except for smooth calibration which requires solving a linear program.
##### binnedECE.
This is the standard measurement of binnedECE (Equation 4), using 20
equal-width bins.
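Equation (4) itself appears earlier in the paper; as a sketch of this measurement, the code below uses the standard equal-width binned ECE formula, with names of our choosing.

```python
import numpy as np

def binned_ece(v, y, n_bins=20):
    """Equal-width binned ECE: sum over bins of
    (fraction of points in the bin) * |mean(y) - mean(v)| within the bin."""
    v, y = np.asarray(v, dtype=float), np.asarray(y, dtype=float)
    idx = np.minimum((v * n_bins).astype(int), n_bins - 1)  # bin of each v_i
    ece = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            ece += mask.mean() * abs(y[mask].mean() - v[mask].mean())
    return float(ece)
```

Adding the average bin width to this value gives the binnedECEw variant described next.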
##### binnedECEw.
This is simply binnedECE plus the average bin width (which is $1/20$ in our
case). As noted in Equation (5), adding the bin width penalty turns binnedECE
into a provable upper-bound on $\mathsf{dCE}$.
##### kCE.
Kernel calibration error using the Laplace kernel. We compute the approximate
kernel calibration using $M=10N$ terms, as described in Section 8.4. This runs
in $\mathcal{O}(N)$ time.
##### intCE.
Interval calibration error. We approximate this via the surrogate described in
Section 6.2. Specifically, we compute $\widehat{\mathsf{SintCE}}$ with choice
of precision $\varepsilon=0.01$.
##### smCE.
Smooth calibration error. We compute this from samples via the linear program
described in Theorem 7.14.
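A minimal sketch of this linear program, assuming the formulation used in the proof of Corollary 9.9: maximize $\frac{1}{n}\sum_{i}(y_{i}-v_{i})w_{i}$ over $w_{i}\in[-1,1]$, with the $1$-Lipschitz constraint enforced only between consecutive sorted predictions (which suffices in one dimension). It uses `scipy.optimize.linprog`; the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def smooth_ce(v, y):
    """smCE of a sample: max of (1/n) * sum_i (y_i - v_i) w(v_i) over
    weight functions w that are 1-Lipschitz and bounded in [-1, 1]."""
    v, y = np.asarray(v, dtype=float), np.asarray(y, dtype=float)
    order = np.argsort(v)
    v, y = v[order], y[order]
    n = len(v)
    c = -(y - v) / n             # linprog minimizes, so negate the objective
    d = np.diff(v)               # gaps between consecutive predictions
    A = np.zeros((2 * (n - 1), n))
    for i in range(n - 1):       # |w_{i+1} - w_i| <= v_{i+1} - v_i
        A[2 * i, i], A[2 * i, i + 1] = -1.0, 1.0
        A[2 * i + 1, i], A[2 * i + 1, i + 1] = 1.0, -1.0
    b = np.repeat(d, 2)
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(-1.0, 1.0)] * n, method="highs")
    return float(-res.fun)
```

Restricting the Lipschitz constraints to consecutive points keeps the LP at $\mathcal{O}(n)$ constraints rather than $\mathcal{O}(n^{2})$.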
### 10.3 Evaluation and Discussion
Figure 2: Experimental Evaluation. We experimentally evaluate calibration
measures on the family of distributions $\\{\mathcal{D}_{\beta}\\}$ of varying
temperature. Measures are computed over the empirical distribution of
$N=10000$ independent samples (top figure), and $N=1000$ samples (bottom
figure). We plot mean and standard deviations over $50$ independent trials.
Note that these measures concentrate well around their mean, even for $N=1000$
samples.
In Figure 2, we numerically evaluate calibration measures as we vary the
temperature of the distribution $\mathcal{D}_{\beta}$. The true calibration
distance $\mathsf{dCE}$ lies between $\mathsf{smCE}$ and $\mathsf{intCE}$, as
per Equation (6), but its true value cannot be determined in the
prediction-access model.
We observe that while all metrics are correlated at temperatures $T\leq 1$
(when the classifier is overconfident), the behavior at $T\gg 1$ is more
subtle. At moderately large temperatures ($T\in[1,10]$),
$\\{\mathsf{intCE},\mathsf{binnedECE}\\}$ are much higher than
$\\{\mathsf{kCE},\mathsf{smCE}\\}$. This is because as predictions concentrate
around $0.5$, the distribution $\mathcal{D}_{\beta}$ becomes similar to the
construction of Lemma 6.8, which exhibits a quadratic gap between
$\mathsf{intCE}$ and $\mathsf{smCE}$. As the temperature $T\to\infty$, the
true calibration distance $\mathsf{dCE}\to 0$, and so all consistent measures
($\mathsf{kCE},\mathsf{smCE},\mathsf{intCE}$) will also go to $0$. However
$\mathsf{binnedECE}$, which is not consistent, does not go to $0$.
Finally, we note that $\mathsf{smCE}$ is numerically close to $\mathsf{kCE}$
for the range of distributions tested here.
#### Acknowledgements
Part of this work was performed while LH was interning at Apple. LH is also
supported by Moses Charikar’s and Omer Reingold’s Simons Investigators awards,
Omer Reingold’s NSF Award IIS-1908774, and the Simons Foundation Collaboration
on the Theory of Algorithmic Fairness.
JB is supported by a Junior Fellowship from the Simons Society of Fellows.
PN and JB acknowledge the city of New Orleans, for providing an environment
conducive to both research and recreation in the early stages of this project.
PN also acknowledges his partner Shreya Shankar for their invaluable support.
## References
* [AIGT+22] Imanol Arrieta-Ibarra, Paman Gujral, Jonathan Tannen, Mark Tygert, and Cherie Xu. Metrics of calibration for probabilistic predictions. arXiv preprint arXiv:2205.09680, 2022.
* [AST04] Amiran Ambroladze and John Shawe-Taylor. Complexity of pattern classes and lipschitz property. In International Conference on Algorithmic Learning Theory, pages 181–193. Springer, 2004.
* [BdW02] Harry Buhrman and Ronald de Wolf. Complexity measures and decision tree complexity: a survey. Theor. Comput. Sci., 288(1):21–43, 2002.
* [BLR90] Manuel Blum, Michael Luby, and Ronitt Rubinfeld. Self-testing/correcting with applications to numerical problems. In Harriet Ortiz, editor, Proceedings of the 22nd Annual ACM Symposium on Theory of Computing, May 13-17, 1990, Baltimore, Maryland, USA, pages 73–83. ACM, 1990.
* [Bri50] Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1):1–3, 1950.
* [BTA11] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer US, 2011.
  * [CAT16] Cynthia S Crowson, Elizabeth J Atkinson, and Terry M Therneau. Assessing calibration of prognostic risk scores. Statistical methods in medical research, 25(4):1692–1706, 2016.
* [Daw82a] A. P. Dawid. Objective probability forecasts. University College London, Dept. of Statistical Science. Research Report 14, 1982.
* [Daw82b] A Philip Dawid. The well-calibrated bayesian. Journal of the American Statistical Association, 77(379):605–610, 1982.
* [Daw84] A Philip Dawid. Present position and potential developments: Some personal views statistical theory the prequential approach. Journal of the Royal Statistical Society: Series A (General), 147(2):278–290, 1984.
* [Daw85] A Philip Dawid. Calibration-based empirical probability. The Annals of Statistics, pages 1251–1274, 1985.
* [DF83] Morris H DeGroot and Stephen E Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society: Series D (The Statistician), 32(1-2):12–22, 1983.
* [DKR+21] Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, and Gal Yona. Outcome indistinguishability. In ACM Symposium on Theory of Computing (STOC’21), 2021.
  * [Doi07] Kunio Doi. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Computerized medical imaging and graphics, 31(4-5):198–211, 2007.
* [FG15] Nicolas Fournier and Arnaud Guillin. On the rate of convergence in wasserstein distance of the empirical measure. Probability Theory and Related Fields, 162(3):707–738, 2015.
* [FH18] Dean P. Foster and Sergiu Hart. Smooth calibration, leaky forecasts, finite recall, and nash dynamics. Games Econ. Behav., 109:271–293, 2018.
* [FH21] Dean P Foster and Sergiu Hart. Forecast hedging and calibration. Journal of Political Economy, 129(12):3447–3490, 2021.
* [FV98] Dean P. Foster and Rakesh V. Vohra. Asymptotic calibration. Biometrika, 85(2):379–390, 1998.
* [GGR98] Oded Goldreich, Shafi Goldwasser, and Dana Ron. Property testing and its connection to learning and approximation. J. ACM, 45(4):653–750, 1998.
* [GHK+23] Parikshit Gopalan, Lunjia Hu, Michael P. Kim, Omer Reingold, and Udi Wieder. Loss Minimization Through the Lens Of Outcome Indistinguishability. In Yael Tauman Kalai, editor, 14th Innovations in Theoretical Computer Science Conference (ITCS 2023), volume 251 of Leibniz International Proceedings in Informatics (LIPIcs), pages 60:1–60:20, Dagstuhl, Germany, 2023. Schloss Dagstuhl – Leibniz-Zentrum für Informatik.
  * [GKR+22] Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, and Udi Wieder. Omnipredictors. In Innovations in Theoretical Computer Science (ITCS’2022), 2022.
* [GKSZ22] Parikshit Gopalan, Michael P. Kim, Mihir Singhal, and Shengjia Zhao. Low-degree multicalibration. In Conference on Learning Theory, 2-5 July 2022, London, UK, volume 178 of Proceedings of Machine Learning Research, pages 3193–3234. PMLR, 2022.
* [GPSW17] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, pages 1321–1330. PMLR, 2017.
* [GRA+20] Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, and Richard Hartley. Calibration of neural networks using splines. arXiv preprint arXiv:2006.12800, 2020.
* [Hal20] Cleve Hallenbeck. Forecasting precipitation in percentages of probability. Monthly Weather Review, 48(11):645–647, 1920.
* [HKRR18] Úrsula Hébert-Johnson, Michael P. Kim, Omer Reingold, and Guy N. Rothblum. Multicalibration: Calibration for the (computationally-identifiable) masses. In Proceedings of the 35th International Conference on Machine Learning, ICML, 2018.
  * [hss] Yiorgos S. Smyrlis. How to approximate a globally Lipschitz function by differentiable functions with bounded derivatives? Mathematics Stack Exchange. URL: https://math.stackexchange.com/q/938390 (version: 2014-09-19).
* [Hua19] Hao Huang. Induced subgraphs of hypercubes and a proof of the sensitivity conjecture. Annals of Mathematics, 190:949–955, 2019.
* [JOKOM12] Xiaoqian Jiang, Melanie Osl, Jihoon Kim, and Lucila Ohno-Machado. Calibrating predictive model estimates to support personalized medicine. Journal of the American Medical Informatics Association, 19(2):263–274, 2012.
* [KCT+21] Archit Karandikar, Nicholas Cain, Dustin Tran, Balaji Lakshminarayanan, Jonathon Shlens, Michael C Mozer, and Becca Roelofs. Soft calibration objectives for neural networks. Advances in Neural Information Processing Systems, 34:29768–29779, 2021.
* [KF08] Sham Kakade and Dean Foster. Deterministic calibration and nash equilibrium. Journal of Computer and System Sciences, 74(1):115–130, 2008.
* [KLM19] Ananya Kumar, Percy S Liang, and Tengyu Ma. Verified uncertainty calibration. In Advances in Neural Information Processing Systems, pages 3792–3803, 2019.
* [KMR17] Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair determination of risk scores. In 8th Innovations in Theoretical Computer Science Conference, ITCS, 2017.
* [KNRW17] Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. arXiv preprint arXiv:1711.05144, 2017.
* [KNRW18] Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In International Conference on Machine Learning, pages 2564–2572, 2018.
* [KSB21] Benjamin Kompa, Jasper Snoek, and Andrew L Beam. Second opinion needed: communicating uncertainty in medical machine learning. NPJ Digital Medicine, 4(1):1–6, 2021.
  * [KSJ18] Aviral Kumar, Sunita Sarawagi, and Ujjwal Jain. Trainable calibration measures for neural networks from kernel mean embeddings. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2805–2814. PMLR, 10–15 Jul 2018.
* [LAS+04] Feng Li, Masahito Aoyama, Junji Shiraishi, Hiroyuki Abe, Qiang Li, Kenji Suzuki, Roger Engelmann, Shusuke Sone, Heber MacMahon, and Kunio Doi. Radiologists’ performance for differentiating benign from malignant lung nodules on high-resolution ct using computer-estimated likelihood of malignancy. American Journal of Roentgenology, 183(5):1209–1215, 2004.
* [LHHD22] Donghwan Lee, Xinmeng Huang, Hamed Hassani, and Edgar Dobriban. T-cal: An optimal test for the calibration of predictive models. arXiv preprint arXiv:2203.01850, 2022.
* [LLS00] Yi Li, Philip M. Long, and Aravind Srinivasan. Improved bounds on the sample complexity of learning. J. Comput. Syst. Sci., 62:516–527, 2000.
* [MDR+21] Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. Advances in Neural Information Processing Systems, 34:15682–15694, 2021.
* [Men02] Shahar Mendelson. Geometric parameters of kernel machines. In International conference on computational learning theory, pages 29–43. Springer, 2002.
* [MLF+21] Basil Mustafa, Aaron Loh, Jan Freyberg, Patricia MacWilliams, Megan Wilson, Scott Mayer McKinney, Marcin Sieniek, Jim Winkens, Yuan Liu, Peggy Bui, et al. Supervised transfer learning at scale for medical imaging. arXiv preprint arXiv:2101.05913, 2021.
* [MRT18] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning, second edition. Adaptive Computation and Machine Learning series. MIT Press, 2018.
* [MSR+10] Matthew Mealiffe, Renee Stokowski, Brian Rhees, Ross Prentice, Mary Pettinger, and David Hinds. Clinical validity assessment of a breast cancer risk model combining genetic and clinical information. Nature Precedings, pages 1–1, 2010.
* [Mur98] Allan H Murphy. The early history of probability forecasts: Some extensions and clarifications. Weather and forecasting, 13(1):5–15, 1998.
* [MW84] Allan H Murphy and Robert L Winkler. Probability forecasting in meteorology. Journal of the American Statistical Association, 79(387):489–500, 1984.
* [MZ03] Ron Meir and Tong Zhang. Generalization error bounds for bayesian mixture algorithms. Journal of Machine Learning Research, 4(Oct):839–860, 2003.
* [NCH14] Mahdi Pakdaman Naeini, Gregory F Cooper, and Milos Hauskrecht. Binary classifier calibration: Non-parametric approach. arXiv preprint arXiv:1401.3390, 2014.
  * [NCH15] Mahdi Pakdaman Naeini, Gregory F Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 2015, page 2901. NIH Public Access, 2015.
* [NDZ+19] Jeremy Nixon, Michael W. Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019.
* [NS94] Noam Nisan and Mario Szegedy. On the degree of boolean functions as real polynomials. Comput. Complex., 4:301–313, 1994.
* [PRR06] Michal Parnas, Dana Ron, and Ronitt Rubinfeld. Tolerant property testing and distance approximation. J. Comput. Syst. Sci., 72(6):1012–1042, 2006.
* [R+21] Rahul Rahaman et al. Uncertainty quantification and deep ensembles. Advances in Neural Information Processing Systems, 34:20063–20075, 2021.
* [RCSM22] Rebecca Roelofs, Nicholas Cain, Jonathon Shlens, and Michael C Mozer. Mitigating bias in calibration error estimation. In International Conference on Artificial Intelligence and Statistics, pages 4036–4054. PMLR, 2022.
* [RR07] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems, volume 20. Curran Associates, Inc., 2007.
* [RS96] Ronitt Rubinfeld and Madhu Sudan. Robust characterizations of polynomials with applications to program testing. SIAM J. Comput., 25(2):252–271, 1996.
* [SGR97] Man-hung Siu, Herbert Gish, and Fred Richardson. Improved estimation, evaluation and applications of confidence measures for speech recognition. In Fifth European Conference on Speech Communication and Technology, 1997.
* [VCV15] Ben Van Calster and Andrew J Vickers. Calibration of risk prediction models: impact on decision-analytic performance. Medical decision making, 35(2):162–169, 2015.
* [ZKH20] Jize Zhang, Bhavya Kailkhura, and T Yong-Jin Han. Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning. In International conference on machine learning, pages 11117–11128. PMLR, 2020.
## Appendix A Other Related Works
##### Boolean function complexity.
A number of low-level complexity measures for Boolean functions have been
studied in the literature [BdW02]. Starting from the seminal work of Nisan and
Szegedy [NS94], it has been shown that several of these measures are in fact
polynomially related; the latest addition being sensitivity [Hua19]. One can
view polynomial degree or decision tree depth as the central notion in this
web of reductions. Our work suggests a similar structure in the world of
calibration measures, where all consistent calibration measures for a
particular notion of ground truth distance are polynomially related. We show
that the $\ell_{p}$ metrics give rise to one family of consistent measures,
and our framework allows for other families which are consistent with other
metrics.
##### ECE Variants.
Many metrics are essentially variants of the $\mathsf{ECE}$, and also inherit
flaws of the $\mathsf{ECE}$. We use the term $\mathsf{ECE}$ to refer to
Definition 1.1, which does not satisfy continuity. _Binned-ECE_ refers to the
family of metrics which discretize the range of $f$ into a constant number of
bins, and then compute ECE of this rounded function. Binned-ECE, and in
general most “binned” estimators, do not satisfy soundness: some functions
which are not perfectly calibrated will have Binned-ECE of 0. This is
intuitively because binning allows calibration error to cancel out within
bins. Moreover, Binned-ECE is not continuous, due to discontinuities at bin
boundaries. Among proposed modifications to the Binned-ECE, “debiasing”
adjustments (such as [RCSM22, KLM19]) unfortunately remain discontinuous and
unsound. To enforce continuity, a smoothed version of Binned-ECE was proposed
in [KCT+21], but it is still unsound due to the finite number of
(smoothed-)bins. Finally, the recently-introduced T-Cal [LHHD22] is an
estimator of the population ECE that applies to distributions satisfying
certain smoothness conditions. Such smoothness assumptions are necessary in
T-Cal, since without them, the ECE is both discontinuous and impossible to
estimate from finite samples.
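As a concrete illustration of the cancellation issue (the code and the two-point example are ours, not taken from any of the cited works), here is a miscalibrated predictor with positive ECE whose Binned-ECE is exactly zero:

```python
from collections import defaultdict

def ece(preds):
    # Population ECE: E_v |E[y | f(x)=v] - v| over equally likely (v, E[y|v]) pairs.
    groups = defaultdict(list)
    for v, mu in preds:
        groups[v].append(mu)
    n = len(preds)
    return sum(len(ms) / n * abs(sum(ms) / len(ms) - v) for v, ms in groups.items())

def binned_ece(preds, num_bins):
    # Binned-ECE: discretize predictions into equal-width bins, compare bin averages.
    bins = defaultdict(lambda: ([], []))
    for v, mu in preds:
        b = min(int(v * num_bins), num_bins - 1)
        bins[b][0].append(v)
        bins[b][1].append(mu)
    n = len(preds)
    return sum(len(vs) / n * abs(sum(ms) / len(ms) - sum(vs) / len(vs))
               for vs, ms in bins.values())

# At v = 0.3 the true rate is 0.4; at v = 0.4 it is 0.3.
# Both points land in the bin [0, 0.5), so the two errors cancel.
preds = [(0.3, 0.4), (0.4, 0.3)]
print(ece(preds))            # positive: the predictor is miscalibrated
print(binned_ece(preds, 2))  # 0.0
```

With finer bins the cancellation pattern can always be recreated by moving the two points closer together, which is why no fixed binning yields a sound measure.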
##### Continuous Calibration.
The notion of _Continuous Calibration_ was introduced in [FH21]. Continuous
calibration is not directly a calibration measure in our sense; instead, it is
a way of deciding, for a sequence of probability distributions $\Gamma_{t}$
over $[0,1]\times\\{0,1\\}$, whether $\Gamma_{t}$ is asymptotically
calibrated. Their notion of continuous calibration of a sequence $\Gamma_{t}$
turns out to be equivalent to requiring that $\mu(\Gamma_{t})\to 0$ as
$t\to\infty$ for some consistent calibration measure $\mu$; since all
consistent calibration measures are polynomially related, this is equivalent
to requiring that $\mu(\Gamma_{t})\to 0$ for _all_ of them.
On the other hand, the notion of _Continuous Calibration_ cannot be used to
say anything quantitative about the miscalibration of one given classifier.
## Appendix B Proofs
### B.1 Proofs from Section 4
###### Proof of Lemma 4.3.
Consider any calibration measure $\mu$ as in the statement. For any
$g\in\mathsf{cal}(\mathcal{D})$,
$\displaystyle\mu_{\mathcal{D}}(f)=\mu_{\mathcal{D}}(f)-\mu_{\mathcal{D}}(g)\leq
L\ m_{\mathcal{D}}(f,g)$
where the equality is because $\mu_{\mathcal{D}}(g)=0$ by Soundness, and the
inequality by Lipschitz continuity. Hence
$\displaystyle\mu_{\mathcal{D}}(f)\leq\min_{g\in\mathsf{cal}(\mathcal{D})}L\
m_{\mathcal{D}}(f,g)=L\ \mathsf{dCE}_{\mathcal{D}}(f).$ (35)
∎
###### Proof of Lemma 4.4.
Since for every $\mathcal{D}$ we have the relation
$(\ell_{p}^{\mathcal{D}}(f,g))^{p}\leq\ell_{1}^{\mathcal{D}}(f,g)\leq\ell_{p}^{\mathcal{D}}(f,g)$
it follows that
$(\mathsf{dCE}^{\ell_{p}}_{\mathcal{D}}(f))^{p}\leq\mathsf{dCE}^{\ell_{1}}_{\mathcal{D}}(f)\leq\mathsf{dCE}^{\ell_{p}}_{\mathcal{D}}(f).$
Assume that $\mu$ is $(\ell_{1},c,s)$-robust. This implies
$b(\mathsf{dCE}^{\ell_{p}}_{\mathcal{D}}(f))^{ps}\leq
b(\mathsf{dCE}^{\ell_{1}}_{\mathcal{D}}(f))^{s}\leq\mu(f)\leq
a(\mathsf{dCE}^{\ell_{1}}_{\mathcal{D}}(f))^{c}\leq
a(\mathsf{dCE}^{\ell_{p}}_{\mathcal{D}}(f))^{c}$
hence $\mu$ is $(\ell_{p},c,ps)$-robust. Conversely, if $\mu$ is
$(\ell_{p},c^{\prime},s^{\prime})$-robust, then
$b^{\prime}(\mathsf{dCE}^{\ell_{1}}_{\mathcal{D}}(f))^{s^{\prime}}\leq
b^{\prime}(\mathsf{dCE}^{\ell_{p}}_{\mathcal{D}}(f))^{s^{\prime}}\leq\mu(f)\leq
a^{\prime}(\mathsf{dCE}^{\ell_{p}}_{\mathcal{D}}(f))^{c^{\prime}}\leq
a^{\prime}(\mathsf{dCE}^{\ell_{1}}_{\mathcal{D}}(f))^{c^{\prime}/p}$
hence $\mu$ is $(\ell_{1},c^{\prime}/p,s^{\prime})$-robust. Note that in
either case, the translation loses a factor of $p$ in the approximation
degree. ∎
### B.2 Proofs from Section 6
We first prove Theorem 6.12 using Claim B.1 and Lemma B.2 below. We prove Lemma 6.9
after that.
###### Claim B.1.
$\mathsf{RintCE}(\Gamma,2\varepsilon)\leq\mathsf{RintCE}(\Gamma,\varepsilon)$.
###### Proof.
The claim is proved by the following chain, where the distribution of $r$ is
the uniform distribution over $[0,2\varepsilon)$.
$\displaystyle\mathsf{RintCE}(\Gamma,\varepsilon)$
$\displaystyle=\mathop{\mathbb{E}}_{r}\Big{[}\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]|\Big{]}$
$\displaystyle=\mathop{\mathbb{E}}_{r}\Big{[}\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r,2j}^{\varepsilon})]|+|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r,2j+1}^{\varepsilon})]|\Big{]}$
$\displaystyle\geq\mathop{\mathbb{E}}_{r}\Big{[}\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r,2j}^{\varepsilon}\cup I_{r,2j+1}^{\varepsilon})]|\Big{]}$
$\displaystyle=\mathop{\mathbb{E}}_{r}\Big{[}\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r,j}^{2\varepsilon})]|\Big{]}$
$\displaystyle=\mathsf{RintCE}(\Gamma,2\varepsilon).\qed$
###### Lemma B.2.
There exists an absolute constant $C>0$ with the following property. For
$\varepsilon_{0},\varepsilon_{1},\delta\in(0,1/2)$, let $n\in\mathbb{Z}_{>0}$
satisfy $n\geq C\varepsilon_{0}^{-2}(\log(1/\delta)+\varepsilon_{1}^{-1})$.
Let $(v_{1},y_{1}),\ldots,(v_{n},y_{n})$ be drawn i.i.d. from a distribution
$\Gamma$ over $[0,1]\times\\{0,1\\}$. Then with probability at least
$1-\delta$, for every $\varepsilon\geq\varepsilon_{1}$ and every
$r\in[0,\varepsilon)$, we have
$\left|\sum_{j\in\mathbb{Z}}\left|\frac{1}{n}\sum_{\ell=1}^{n}(y_{\ell}-v_{\ell}){\mathbf{1}}(v_{\ell}\in
I_{r,j}^{\varepsilon})\right|-\sum_{j\in\mathbb{Z}}\left|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r,j}^{\varepsilon})]\right|\right|\leq\varepsilon_{0}.$ (36)
###### Proof of Lemma B.2.
For every $\varepsilon\geq\varepsilon_{1}$ and $r\in[0,\varepsilon)$, define a
class $B_{r}^{\varepsilon}$ consisting of functions $b:[0,1]\to\\{-1,1\\}$
such that for every $j\in\mathbb{Z}$, there exists $b_{j}\in\\{-1,1\\}$ such
that $b(v)=b_{j}$ for every $v\in[0,1]\cap I_{r,j}^{\varepsilon}$. It is now
clear that
$\displaystyle\sum_{j\in\mathbb{Z}}\left|\frac{1}{n}\sum_{\ell=1}^{n}(y_{\ell}-v_{\ell}){\mathbf{1}}(v_{\ell}\in
I_{r,j})\right|$
$\displaystyle=\sum_{j\in\mathbb{Z}}\sup\limits_{b_{j}\in\\{-1,1\\}}\frac{1}{n}\sum_{\ell=1}^{n}(y_{\ell}-v_{\ell}){\mathbf{1}}(v_{\ell}\in
I_{r,j})b_{j}$ $\displaystyle=\sup\limits_{b\in
B_{r}^{\varepsilon}}\frac{1}{n}\sum_{\ell=1}^{n}(y_{\ell}-v_{\ell})b(v_{\ell}),$
(37)
and
$\displaystyle\sum_{j\in\mathbb{Z}}\left|\mathop{\mathbb{E}}[(y-v){\mathbf{1}}(v\in
I_{r,j})]\right|$
$\displaystyle=\sum_{j\in\mathbb{Z}}\sup\limits_{b_{j}\in\\{-1,1\\}}\mathop{\mathbb{E}}[(y-v){\mathbf{1}}(v\in
I_{r,j})b_{j}]$ $\displaystyle=\sup\limits_{b\in
B_{r}^{\varepsilon}}\mathop{\mathbb{E}}[(y-v)b(v)].$ (38)
It is clear that the function class
$B:=\bigcup_{\varepsilon\geq\varepsilon_{1}}\bigcup_{r\in[0,\varepsilon)}B_{r}^{\varepsilon}$
has VC dimension $O(\varepsilon_{1}^{-1})$. Let $G$ be the function class
consisting of functions $g:\\{0,1\\}\times[0,1]\to[-1,1]$ such that there
exists $b\in B$ satisfying
$g(y,v)=(y-v)b(v)\quad\text{for every }(y,v)\in\\{0,1\\}\times[0,1].$
The fact that $B$ has VC dimension $O(\varepsilon_{1}^{-1})$ implies that $G$
has pseudo-dimension $O(\varepsilon_{1}^{-1})$. By standard uniform
convergence results [LLS00], with probability at least $1-\delta$,
$\left|\frac{1}{n}\sum_{\ell=1}^{n}(y_{\ell}-v_{\ell})b(v_{\ell})-\mathop{\mathbb{E}}[(y-v)b(v)]\right|\leq\varepsilon_{0}\quad\text{for
every }b\in B.$ (39)
By (37) and (38), the inequality (39) implies that (36) holds for every
$\varepsilon\geq\varepsilon_{1}$ and $r\in[0,\varepsilon)$. ∎
###### Proof of Theorem 6.12.
By Lemma B.2, with probability at least $1-\delta/2$, for every
$k=0,\ldots,k^{*}$,
$\left|\widehat{\mathsf{RintCE}}(\Gamma,2^{-k})-\frac{1}{m}\sum_{s=1}^{m}\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r_{s},j}^{2^{-k}})]|\right|\leq\varepsilon/4.$
By the Chernoff bound and the union bound, with probability at least
$1-\delta/2$, for every $k=0,\ldots,k^{*}$,
$\left|\frac{1}{m}\sum_{s=1}^{m}\sum_{j\in\mathbb{Z}}|\mathop{\mathbb{E}}_{(v,y)\sim\Gamma}[(y-v){\mathbf{1}}(v\in
I_{r_{s},j}^{2^{-k}})]|-\mathsf{RintCE}(\Gamma,2^{-k})\right|\leq\varepsilon/4.$
Combining the above inequalities using the union bound, with probability at
least $1-\delta$, for every $k=0,\ldots,k^{*}$,
$|\widehat{\mathsf{RintCE}}(\Gamma,2^{-k})-\mathsf{RintCE}(\Gamma,2^{-k})|\leq\varepsilon/2.$
This implies that
$\displaystyle\mathsf{SintCE}(\Gamma)$
$\displaystyle\leq\min_{k=0,\ldots,k^{*}}\left(\mathsf{RintCE}(\Gamma,2^{-k})+2^{-k}\right)$
$\displaystyle\leq\min_{k=0,\ldots,k^{*}}\left(\widehat{\mathsf{RintCE}}(\Gamma,2^{-k})+2^{-k}\right)+\varepsilon/2$
$\displaystyle=\widehat{\mathsf{SintCE}}(\Gamma)+\varepsilon/2.$
It remains to prove that
$\mathsf{SintCE}(\Gamma)\geq\widehat{\mathsf{SintCE}}(\Gamma)-\varepsilon$.
For every $k\in\mathbb{Z}_{\geq 0}$, if $k\leq k^{*}$, we have
$\mathsf{RintCE}(\Gamma,2^{-k})+2^{-k}\geq\widehat{\mathsf{RintCE}}(\Gamma,2^{-k})+2^{-k}-\varepsilon/2\geq\widehat{\mathsf{SintCE}}(\Gamma)-\varepsilon/2.$
If $k\geq k^{*}$, we have
$\displaystyle\mathsf{RintCE}(\Gamma,2^{-k})+2^{-k}$
$\displaystyle\geq\mathsf{RintCE}(\Gamma,2^{-k^{*}})$ (by B.1)
$\displaystyle\geq\widehat{\mathsf{RintCE}}(\Gamma,2^{-k^{*}})-\varepsilon/2$
$\displaystyle\geq\widehat{\mathsf{SintCE}}(\Gamma)-2^{-k^{*}}-\varepsilon/2$
$\displaystyle\geq\widehat{\mathsf{SintCE}}(\Gamma)-\varepsilon.$
Therefore, we have
$\mathsf{RintCE}(\Gamma,2^{-k})+2^{-k}\geq\widehat{\mathsf{SintCE}}(\Gamma)-\varepsilon$
for every $k\in\mathbb{Z}_{\geq 0}$, which implies that
$\mathsf{SintCE}(\Gamma)\geq\widehat{\mathsf{SintCE}}(\Gamma)-\varepsilon$, as
desired. ∎
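For concreteness, the estimator $\widehat{\mathsf{SintCE}}$ analyzed above can be sketched in a few lines. This is an illustrative reimplementation under our reading of the definitions (we take $I_{r,j}^{\varepsilon}$ to be the set of points $v$ with $\lfloor(v+r)/\varepsilon\rfloor=j$; the function names and parameter defaults are ours):

```python
import math
import random

def rint_ce_hat(samples, eps, shifts):
    # \widehat{RintCE}: average over random shifts r of
    #   sum_j | (1/n) sum_l (y_l - v_l) 1(v_l in I_{r,j}^eps) |.
    n = len(samples)
    total = 0.0
    for r in shifts:
        bin_sums = {}
        for v, y in samples:
            j = math.floor((v + r) / eps)
            bin_sums[j] = bin_sums.get(j, 0.0) + (y - v)
        total += sum(abs(s) for s in bin_sums.values()) / n
    return total / len(shifts)

def sint_ce_hat(samples, k_max=10, num_shifts=50, seed=0):
    # \widehat{SintCE}: minimum over scales eps = 2^{-k} of \widehat{RintCE} + eps.
    rng = random.Random(seed)
    best = float("inf")
    for k in range(k_max + 1):
        eps = 2.0 ** -k
        shifts = [rng.uniform(0.0, eps) for _ in range(num_shifts)]
        best = min(best, rint_ce_hat(samples, eps, shifts) + eps)
    return best

# A single maximally miscalibrated sample: every scale reports error 0.5,
# so the minimum is attained by taking the smallest interval width.
print(sint_ce_hat([(0.5, 1.0)]))
```

The additive $2^{-k}$ term penalizes coarse scales, matching the $\mathsf{RintCE}(\Gamma,2^{-k})+2^{-k}$ trade-off in the proof.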
We prove Lemma 6.9 using the following example. We choose
$\mathcal{X}=\\{a,b,c,d\\}$. Define $\alpha=1/6$ and $\beta=1/48$ and let
$\varepsilon\in(0,\beta)$ be a parameter. We choose the distribution
$\mathcal{D}$ over $\mathcal{X}\times\\{0,1\\}$ such that the marginal
distribution of $x$ in $(x,y)\sim\mathcal{D}$ is the uniform distribution over
$\mathcal{X}$, and the conditional distribution of $y$ given $x$ is defined
according to the $\mathop{\mathbb{E}}[y|x]$ values in Table 2. We consider two
predictors $f_{1},f_{2}:\mathcal{X}\to[0,1]$ also defined in Table 2.
$x$ | $a$ | $b$ | $c$ | $d$
---|---|---|---|---
$f_{1}(x)$ | $1/2-\beta$ | $1/2$ | $1/2$ | $1/2+\beta$
$f_{2}(x)$ | $1/2-\beta$ | $1/2-\varepsilon$ | $1/2+\varepsilon$ | $1/2+\beta$
$\mathop{\mathbb{E}}_{\mathcal{D}}[y|x]$ | $1/2-\beta+\alpha$ | $1/2-\varepsilon-\alpha$ | $1/2+\varepsilon+2\alpha$ | $1/2+\beta-2\alpha$
Table 2: Discontinuity Example for $\mathsf{intCE}$
The following claim follows immediately from the definitions of $f_{1}$ and
$f_{2}$:
###### Claim B.3.
As $\varepsilon\to 0$, $f_{2}$ converges to $f_{1}$ uniformly.
###### Claim B.4.
$\mathsf{intCE}_{\mathcal{D}}(f_{2})\leq\beta$.
###### Proof.
Any interval partition $\mathcal{I}$ that contains the two intervals
$[1/2-\beta,1/2)$ and $[1/2,1/2+\beta)$ satisfies
$\mathsf{intCE}_{\mathcal{D}}(f_{2},\mathcal{I})=0$ and
$w_{\mathcal{D}_{f_{2}}}(\mathcal{I})=\beta$. ∎
###### Claim B.5.
$\mathsf{intCE}_{\mathcal{D}}(f_{1})\geq 2\beta$.
###### Proof.
Consider any interval partition $\mathcal{I}$. If $1/2-\beta$ and $1/2+\beta$
are not in the same interval,
$\mathsf{intCE}_{\mathcal{D}}(f_{1},\mathcal{I})\geq\alpha/4=2\beta$. If
$1/2-\beta$ and $1/2+\beta$ are in the same interval, then
$w_{\mathcal{D}_{f_{1}}}(\mathcal{I})\geq 2\beta$. ∎
###### Proof of Lemma 6.9.
The lemma follows immediately from combining the three claims above. ∎
### B.3 Proofs from Section 8
###### Proof of Lemma 8.2.
By duality:
$\displaystyle\|\mathop{\mathbb{E}}(y-v)\phi(v)\|_{\mathcal{H}}$
$\displaystyle=\sup\limits_{w\in
B_{\mathcal{H}}}\langle\mathop{\mathbb{E}}(y-v)\phi(v),w\rangle_{\mathcal{H}}$
$\displaystyle=\sup\limits_{w\in
B_{\mathcal{H}}}\mathop{\mathbb{E}}(y-v)\langle\phi(v),w\rangle_{\mathcal{H}}$
$\displaystyle=\sup\limits_{w\in B_{\mathcal{H}}}\mathop{\mathbb{E}}(y-v)w(v)$
$\displaystyle=\mathsf{wCE}^{B_{\mathcal{H}}}(\Gamma).\qed$
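As a concrete illustration of the quantity appearing here, a plug-in estimate of the kernel calibration error with the Laplace kernel $K(u,v)=\exp(-|u-v|)$ can be computed directly from a sample (a quadratic-time sketch; the function name is ours):

```python
import math

def kce_laplace_hat(samples):
    # sqrt( (1/n^2) sum_{i,j} (y_i - v_i)(y_j - v_j) exp(-|v_i - v_j|) ).
    # The double sum is a PSD quadratic form in the residuals, hence nonnegative;
    # max(., 0.0) only guards against floating-point round-off.
    n = len(samples)
    total = sum((yi - vi) * (yj - vj) * math.exp(-abs(vi - vj))
                for vi, yi in samples for vj, yj in samples)
    return math.sqrt(max(total, 0.0)) / n

print(kce_laplace_hat([(0.5, 0.5), (0.3, 0.3)]))  # 0.0: all residuals vanish
print(kce_laplace_hat([(0.5, 1.0)]))              # 0.5
```

The quadratic cost motivates the linear-time randomized-binning estimator analyzed later in this section.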
###### Proof of 8.11.
The Fourier transform (or characteristic function, in probabilistic
terminology) of $h(v)=\exp(-|v|)$ is, up to normalization, the probability
density function of the Cauchy distribution: $\hat{h}(\omega)=\frac{1}{1+\omega^{2}}$.
Using the inverse Fourier transform formula, we get
$\exp(-|v|)=\int\exp(-i\omega
v)\frac{1}{1+\omega^{2}}\,\,\mathrm{d}\omega=\mathop{\mathbb{E}}_{\omega\sim\mathrm{Cauchy}(1)}\exp(-i\omega
v).$
We can now calculate $\mathop{\mathbb{E}}zz^{*}$ explicitly. The expectation
of the $(i,j)$ entry is
$\mathop{\mathbb{E}}(zz^{*})_{ij}=\mathop{\mathbb{E}}z_{i}\overline{z_{j}}=\mathop{\mathbb{E}}_{\omega\sim\mathrm{Cauchy}}\exp(-i\omega(v_{i}-v_{j}))=\exp(-|v_{i}-v_{j}|)=M_{ij}.\qed$
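The identity above can be checked numerically: averaging the random features $z_{i}=\exp(-i\omega v_{i})$ over standard Cauchy draws of $\omega$ recovers the Laplace kernel Gram matrix. A Monte Carlo sketch (the sample values and sample size are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.array([0.1, 0.4, 0.9])                 # illustrative prediction values
omega = rng.standard_cauchy(200_000)          # omega ~ Cauchy(1)

# Random features z_i = exp(-1j * omega * v_i); averaging z z* over omega gives
# E[(z z*)_{ij}] = E exp(-1j * omega (v_i - v_j)) = exp(-|v_i - v_j|) = M_{ij}.
Z = np.exp(-1j * np.outer(omega, v))          # one row of features per omega draw
M_hat = (Z.conj().T @ Z).real / len(omega)    # empirical Gram matrix
M = np.exp(-np.abs(v[:, None] - v[None, :]))  # exact Laplace kernel matrix
print(np.max(np.abs(M_hat - M)))              # small Monte Carlo error
```

Note that although the Cauchy distribution has heavy tails, each feature is bounded, so the empirical average concentrates at the usual $1/\sqrt{N}$ rate.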
###### Proof of Theorem 8.14.
For fixed $\delta\in(0,1),\tau\in[0,\delta)$, let
$t_{\tau,\delta}(i):=\lfloor\frac{v_{i}+\tau}{\delta}\rfloor$ be the index of
the bin into which the sample $(v_{i},y_{i})$ is assigned (with bin sizes
$\delta$, and random shift $\tau$).
The LaplaceBinningEstimate algorithm takes random $\delta$ sampled from the
distribution $\mathrm{Gamma}(k=2,\theta=1)$, random
$\tau\sim\mathrm{Unif}(0,\delta)$ and returns
$T_{\delta,\tau}:=\frac{1}{n^{2}}\sum_{k}\left(\sum_{i\in
t^{-1}_{\tau,\delta}(k)}(v_{i}-y_{i})\right)^{2}.$
We can expand the square and write it as
$T_{\delta,\tau}=\frac{1}{n^{2}}\sum_{i\leq n,j\leq
n}(v_{i}-y_{i})(v_{j}-y_{j})\mathbf{1}[t_{\tau,\delta}(i)=t_{\tau,\delta}(j)].$
First, let us prove that in fact $T_{\delta,\tau}\leq 1$. Indeed, clearly
$|v_{i}-y_{i}|\leq 1$, and
$\mathbf{1}[t_{\tau,\delta}(i)=t_{\tau,\delta}(j)]\leq 1$, and therefore
$T_{\delta,\tau}\leq\frac{1}{n^{2}}\sum_{i\leq n,j\leq n}1\leq 1$.
We will now prove that the quantity $T_{\delta,\tau}$ is in fact an unbiased
estimator of $\widehat{\mathsf{kCE}}(S)^{2}$.
The expression
$\mathbf{E}_{\delta,\tau}\mathbf{1}[t_{\tau,\delta}(i)=t_{\tau,\delta}(j)]$ is
just the probability that $v_{i}$ and $v_{j}$ will land in the same bin, when
we sample random bin-size $\delta$ and random shift $\tau$. We wish to show
that this probability satisfies
$\mathop{\mathbb{E}}_{\delta,\tau}[\mathbf{1}[t_{\tau,\delta}(i)=t_{\tau,\delta}(j)]]=\exp(-|v_{i}-v_{j}|).$
Indeed, if this is the case, we will have
$\mathop{\mathbb{E}}_{\tau,\delta}T_{\delta,\tau}=\frac{1}{n^{2}}\sum_{i,j}(v_{i}-y_{i})(v_{j}-y_{j})\mathop{\mathbb{E}}\mathbf{1}[t_{\tau,\delta}(i)=t_{\tau,\delta}(j)]=\frac{1}{n^{2}}\sum_{i,j}(v_{i}-y_{i})(v_{j}-y_{j})\exp(-|v_{i}-v_{j}|)=\widehat{\mathsf{kCE}}_{Lap}(S)^{2}.$
For a fixed size of the bin $\delta$, we have
$\mathop{\mathbb{E}}_{\tau\sim[0,\delta]}\mathbf{1}[t(i)=t(j)]=\max(1-\frac{|v_{i}-v_{j}|}{\delta},0).$
The probability density function of the Gamma distribution with the shape
parameter $k=2$ and scale parameter $\theta=1$ is
$p(v)=v\exp(-v)\mathbf{1}[v\geq 0]$. We now have
$\displaystyle\mathop{\mathbb{E}}_{\delta}\max(1-\frac{|u-v|}{\delta},0)$
$\displaystyle=\int_{0}^{\infty}t\exp(-t)\max(1-\frac{|u-v|}{t},0)\,\mathrm{d}t$
$\displaystyle=\int_{|u-v|}^{\infty}t\exp(-t)(1-\frac{|u-v|}{t})\,\,\mathrm{d}t$
$\displaystyle=\int_{|u-v|}^{\infty}t\exp(-t)\,\,\mathrm{d}t-\int_{|u-v|}^{\infty}|u-v|\exp(-t)\,\,\mathrm{d}t$
(integrating first term by parts)
$\displaystyle=[-t\exp(-t)]_{|u-v|}^{\infty}+\int_{|u-v|}^{\infty}\exp(-t)\,\,\mathrm{d}t-\int_{|u-v|}^{\infty}|u-v|\exp(-t)\,\,\mathrm{d}t$
$\displaystyle=\exp(-|u-v|).\qed$
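The distributional identity at the heart of this proof, $\mathop{\mathbb{E}}_{\delta,\tau}\mathbf{1}[t_{\tau,\delta} \text{ assigns } u,v \text{ to the same bin}]=\exp(-|u-v|)$, can be verified by direct Monte Carlo simulation (an illustrative sketch; the trial count and test points are our choices):

```python
import math
import random

def same_bin_prob(u, v, trials=200_000, seed=0):
    # P[ floor((u + tau)/delta) == floor((v + tau)/delta) ] under
    # delta ~ Gamma(shape=2, scale=1) and tau ~ Uniform(0, delta).
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        delta = rng.gammavariate(2.0, 1.0)
        tau = rng.uniform(0.0, delta)
        if math.floor((u + tau) / delta) == math.floor((v + tau) / delta):
            hits += 1
    return hits / trials

for u, v in [(0.2, 0.5), (0.1, 0.9)]:
    print(same_bin_prob(u, v), math.exp(-abs(u - v)))  # the two columns agree closely
```

Intuitively, for a fixed bin width the shift makes the same-bin probability the "triangle" function $\max(1-|u-v|/\delta,0)$, and the Gamma-distributed width then averages the triangle into the Laplace kernel, exactly as in the computation above.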
## Appendix C Gaussian Kernel Appendix
In this appendix we will discuss the $\mathsf{kCE}$ measure with respect to
the Gaussian kernel $K(u,v)=\exp(-(u-v)^{2}).$ The associated RKHS has the
following explicit description.
###### Fact C.1 ([BTA11]).
For the Gaussian kernel $K_{\mathsf{Gauss}}(u,v):=\exp(-(u-v)^{2})$, we have
associated RKHS
$\mathcal{H}_{\mathsf{Gauss}}=\\{h:\mathbb{R}\to\mathbb{R}:\int\hat{h}(\omega)^{2}\exp(\omega^{2})\,\mathrm{d}\omega<\infty\\}$,
where $\hat{h}$ denotes the Fourier transform of $h$. The associated inner
product is given by
$\langle
h_{1},h_{2}\rangle_{K_{\mathsf{Gauss}}}=\int_{-\infty}^{\infty}\hat{h}_{1}(\omega)\hat{h}_{2}(\omega)\exp(\omega^{2})\,\mathrm{d}\omega,$
in particular, for function $h:\mathbb{R}\to\mathbb{R}$,
$\|h\|_{K_{\mathsf{Gauss}}}^{2}=\int_{-\infty}^{\infty}\hat{h}(\omega)^{2}\exp(\omega^{2})\,\mathrm{d}\omega=\sum_{k}\frac{\|h^{(k)}\|_{2}^{2}}{k!}.$
We will show that if we choose the Gaussian kernel, the associated calibration
measure $\mathsf{kCE}^{\mathsf{Gauss}}$ does not satisfy robust soundness.
###### Theorem C.2.
For every $\varepsilon$, there is a distribution $\Gamma_{\varepsilon}$ over
$[0,1]\times\\{0,1\\}$, such that
$\mathsf{smCE}(\Gamma_{\varepsilon})\geq\Omega(\varepsilon^{5/2})$, and
$\mathsf{kCE}^{\mathsf{Gauss}}(\Gamma_{\varepsilon})\leq\mathcal{O}(\exp(-\varepsilon^{-1}))$.
On the other hand, it satisfies a significantly weaker continuity-at-$0$
property.
###### Theorem C.3.
For any $f$, if $\mathsf{smCE}(f)=\delta$, then
$\exp(-\mathcal{O}(\delta^{-4}))\leq\mathsf{kCE}^{\mathsf{Gauss}}(f)\leq\mathcal{O}(\sqrt{\delta}).$
### C.1 Lower Bound
In order to prove Theorem C.2, we will use the following lemma.
###### Lemma C.4.
For any $\varepsilon$, there exists an $\mathcal{O}(\varepsilon^{-1})$-Lipschitz
function $h_{\varepsilon}:\mathbb{R}\to[-1,1]$, satisfying the following three
properties
$\displaystyle\int\hat{h}_{\varepsilon}(\omega)^{2}\exp(-\omega^{2})$
$\displaystyle\leq\exp(-\varepsilon^{-1})$ (40)
$\displaystyle\|h_{\varepsilon}-h_{\varepsilon}\mathbf{1}_{[-1/4,1/4]}\|_{2}$
$\displaystyle\leq\exp(-\Omega(\varepsilon^{-1}))$ (41)
$\displaystyle\int_{-\sqrt{\varepsilon}}^{\sqrt{\varepsilon}}h_{\varepsilon}^{2}$
$\displaystyle\geq\Omega(\sqrt{\varepsilon}).$ (42)
###### Proof.
Let us denote $\phi_{\gamma}(t):=\exp(-t^{2}/\gamma^{2})$. We define
$h_{\varepsilon}(t):=\cos(t/\varepsilon)\phi_{\sqrt{\varepsilon}}(t).$
This function is indeed $\mathcal{O}(\varepsilon^{-1})$-Lipschitz, since
$|h_{\varepsilon}^{\prime}(t)|=\left|\frac{1}{\varepsilon}(\sin(t/\varepsilon))\phi_{\sqrt{\varepsilon}}-2\frac{t}{\varepsilon}\cos(t/\varepsilon)\phi_{\sqrt{\varepsilon}}(t)\right|\leq\frac{3}{\varepsilon}.$
Let us denote $\omega_{0}:=\frac{1}{\varepsilon}$. By standard properties of
the Fourier transform (i.e. $\widehat{fg}=\widehat{f}*\widehat{g}$,
$\widehat{\cos(\omega_{0}t)}=\delta_{-\omega_{0}}+\delta_{\omega_{0}}$ and
$\widehat{\phi_{\varepsilon}}=\phi_{\varepsilon^{-1}}$), we have
$\widehat{h_{\varepsilon}}(\omega)=\exp(-\frac{(\omega-\omega_{0})^{2}}{\omega_{0}})+\exp(-\frac{(\omega+\omega_{0})^{2}}{\omega_{0}}).$
We will show that
$\int\exp(-2\frac{(\omega-\omega_{0})^{2}}{\omega_{0}}-\omega^{2})\,\,\mathrm{d}\omega\leq
2\exp(-\omega_{0}).$
Indeed, we can rewrite
$2\frac{(\omega-\omega_{0})^{2}}{\omega_{0}}+\omega^{2}=(\omega-\alpha)^{2}-\beta+2\omega_{0},$
where $\alpha\in[0,2],\beta\in[0,1]$ depend on $\omega_{0}$. Therefore
$\int\exp(-2\frac{(\omega-\omega_{0})^{2}}{\omega_{0}}-\omega^{2})\,\,\mathrm{d}\omega=\exp(-2\omega_{0}+1)\int\exp(-(\omega-\alpha)^{2})\leq\mathcal{O}(\exp(-2\omega_{0})).$
This implies
$\int\widehat{h_{\varepsilon}}(\omega)^{2}\exp(-\omega^{2})\,\,\mathrm{d}\omega\leq\mathcal{O}(\exp(-\Omega(\omega_{0}))),$
proving (40).
In order to prove (41), we have
$|h_{\varepsilon}(t)|\leq\exp(-\omega_{0}t^{2})$, implying
$\int_{1/4}^{\infty}h_{\varepsilon}(t)^{2}\leq\exp(-\omega_{0}/8),$
and similarly for the other tail.
Finally, for (42), since $\phi_{\sqrt{\varepsilon}}(t)\geq\Omega(1)$ for
$t\in[-\sqrt{\varepsilon},\sqrt{\varepsilon}]$ we have
$\displaystyle\int_{-{\sqrt{\varepsilon}}}^{\sqrt{\varepsilon}}h_{\varepsilon}(t)^{2}$
$\displaystyle\gtrsim\int_{-\sqrt{\varepsilon}}^{\sqrt{\varepsilon}}\cos^{2}(t/\varepsilon)$
$\displaystyle\gtrsim\Omega(\sqrt{\varepsilon}).\qed$
###### Proof of Theorem C.2.
Distribution $\Gamma_{\varepsilon}$ of $(v,y)\in[0,1]\times\\{0,1\\}$ will be
given in the following way: we will sample $v$ uniformly from $[1/4,3/4]$, and
sample $y|v$ such that $\mathop{\mathbb{E}}[y-v|v]=h_{\varepsilon}(v-1/2)/4$,
where $h_{\varepsilon}$ is a function as in Lemma C.4.
Let us define $r(v):=\mathop{\mathbb{E}}[y-v|v]$. First, we need to check that
this provides a well-specified distribution, i.e. that
$\mathop{\mathbb{E}}[y|v]\in[0,1]$. Indeed, since $v\in[1/4,3/4]$ and
$h_{\varepsilon}\in[-1,1]$, we have $v+h_{\varepsilon}/4\in[0,1]$.
Now, we wish to upper bound
$\mathsf{kCE}^{\mathsf{Gauss}}(\Gamma_{\varepsilon}).$ For any $w\in
B_{\mathcal{H}_{\mathsf{Gauss}}}$, we have
$\displaystyle\int_{-1/4}^{1/4}r(v)w(v)\,\mathrm{d}v$
$\displaystyle\leq\int_{-\infty}^{\infty}r(v)w(v)\,\mathrm{d}v+\exp(-\Omega(\omega_{0})),$
using (41).
We will focus on bounding this latter integral.
$\displaystyle\int_{-\infty}^{\infty}r(v)w(v)$
$\displaystyle=\int\hat{r}(\omega)\hat{w}(\omega)\,\mathrm{d}\omega$
$\displaystyle=\int\hat{r}(\omega)\exp(-\omega^{2}/2)\hat{w}(\omega)\exp(\omega^{2}/2)\,\mathrm{d}\omega$
$\displaystyle\leq\left(\int\hat{r}(\omega)^{2}\exp(-\omega^{2})\,\,\mathrm{d}\omega\right)^{1/2}\left(\int\hat{w}(\omega)^{2}\exp(\omega^{2})\,\,\mathrm{d}\omega\right)^{1/2}$
$\displaystyle\leq C\exp(-\Omega(\omega_{0}))$
where the last inequality follow from (40) in Lemma C.4 and the fact that
$\|w\|_{\mathcal{H}_{\mathsf{Gauss}}}\leq 1$.
Finally, to show that $\mathsf{smCE}(\Gamma_{\varepsilon})\geq\varepsilon^{5/2}$, we take a test
function $w(v):=\varepsilon r(v)$, which is $\mathcal{O}(1)$-Lipschitz, and observe that
$\mathop{\mathbb{E}}(y-v)w(v)\gtrsim\varepsilon\int_{1/4}^{3/4}r(v)^{2}\,\mathrm{d}v\geq\varepsilon^{5/2}$
by (42). ∎
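The separation in Theorem C.2 can also be observed numerically: for the high-frequency residual $r(t)=\cos(t/\varepsilon)e^{-t^{2}/\varepsilon}$, the $L_{2}$ mass (which a Lipschitz test function can extract) is of order $\sqrt{\varepsilon}$, while the Gaussian-kernel quadratic form is vanishingly small. A grid-based sketch (the value of $\varepsilon$ and the discretization are our choices):

```python
import numpy as np

eps = 0.02
t = np.linspace(-0.5, 0.5, 2001)
dt = t[1] - t[0]
r = np.cos(t / eps) * np.exp(-t**2 / eps)   # high-frequency residual h_eps

l2 = np.sum(r**2) * dt                      # ~ sqrt(eps)/2: visible to Lipschitz tests
K = np.exp(-(t[:, None] - t[None, :])**2)   # Gaussian kernel on the grid
gauss = r @ (K @ r) * dt * dt               # squared Gaussian-kernel CE proxy
print(l2, gauss)                            # gauss is many orders of magnitude smaller
```

In Fourier terms, the spectrum of $r$ sits near frequency $1/\varepsilon$, where the Gaussian kernel's spectral weight $e^{-\omega^{2}/4}$ has already decayed past any polynomial scale, which is exactly the mechanism exploited in the proof.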
### C.2 Weak Continuity at Zero
###### Lemma C.5.
For any $1$-Lipschitz function $w:[0,1]\to[-1,1]$, and any $\varepsilon$,
there is a function $\tilde{w}:\mathbb{R}\to\mathbb{R}$ satisfying
$\|\tilde{w}\|_{\mathcal{H}_{\mathsf{Gauss}}}\leq\exp(\mathcal{O}(\varepsilon^{-4}))$
and
$~{}\forall t\in[0,1],|w(t)-\tilde{w}(t)|\leq\varepsilon.$
###### Proof.
Let us take a $1$-Lipschitz extension of $w$ to the entire $\mathbb{R}$, such
that $\operatorname{supp}(w)\subset[-1,2]$. We can assume without loss of
generality that $w$ is differentiable (indeed, otherwise, as in the proof of
Lemma 8.8, we can convolve it with a mollifier $g_{\varepsilon}$ to get a
$1$-Lipschitz and differentiable approximation of $w$ up to error
$\varepsilon/2$).
Note that
$\int\hat{w}(\omega)^{2}\omega^{2}\,\,\mathrm{d}\omega=\|w^{\prime}\|_{2}^{2}\leq
3.$ (43)
Let us now fix some cut-off frequency $\omega_{0}$, and define
$\hat{\tilde{w}}(\omega)=\hat{w}(\omega)\mathbf{1}_{[-\omega_{0},\omega_{0}]}(\omega).$
We have $\tilde{w}(t)$ given by the inverse Fourier transform
$\tilde{w}(t)=\int_{-\infty}^{\infty}\exp(-i\omega
t)\hat{\tilde{w}}(\omega)\,\,\mathrm{d}\omega.$
We wish to show that $|\tilde{w}(t)-w(t)|\leq\varepsilon$. We have
$\displaystyle w(t)-\tilde{w}(t)$
$\displaystyle=\int\mathbf{1}[|\omega|\geq\omega_{0}]\hat{w}(\omega)\exp(-i\omega t)\,\,\mathrm{d}\omega$
$\displaystyle=\int\hat{w}(\omega)\omega\exp(-i\omega t)\omega^{-1}\mathbf{1}[|\omega|\geq\omega_{0}]\,\,\mathrm{d}\omega$
$\displaystyle\leq\left(\int\hat{w}(\omega)^{2}\omega^{2}\,\,\mathrm{d}\omega\right)^{1/2}\left(\int\omega^{-2}\mathbf{1}[|\omega|>\omega_{0}]\,\,\mathrm{d}\omega\right)^{1/2}.$
The first term is bounded by (43); for the second,
$\int_{|\omega|>\omega_{0}}\omega^{-2}\,\,\mathrm{d}\omega=\frac{2}{\omega_{0}}.$
Therefore $|w(t)-\tilde{w}(t)|\leq\frac{6}{\sqrt{\omega_{0}}}$.
Taking $\omega_{0}=36\varepsilon^{-2}$, we obtain the desired approximation.
Finally, we need to show the bound on
$\|\tilde{w}\|_{\mathcal{H}_{\mathsf{Gauss}}}$. We have
$\displaystyle\|\tilde{w}\|_{\mathcal{H}_{\mathsf{Gauss}}}^{2}$
$\displaystyle=\int\hat{\tilde{w}}(\omega)^{2}\exp(\omega^{2})\,\,\mathrm{d}\omega$
$\displaystyle\leq\exp(\omega_{0}^{2})\int\hat{\tilde{w}}(\omega)^{2}\,\,\mathrm{d}\omega$
$\displaystyle\leq 3\|w\|_{2}^{2}\exp(\omega_{0}^{2})\leq
9\exp(\omega_{0}^{2}).\qed$
###### Lemma C.6.
For any two predictors $f,g$ on the same space $\mathcal{X}$ satisfying
$\mathop{\mathbb{E}}|f-g|\leq\varepsilon$, we have
$|\mathsf{kCE}^{\mathsf{Gauss}}(f)-\mathsf{kCE}^{\mathsf{Gauss}}(g)|\leq\mathcal{O}(\sqrt{\varepsilon}).$
###### Proof.
The proof is identical to Lemma 8.7.
We need to show
$\|\mathop{\mathbb{E}}[(y-f)\phi(f)-(y-g)\phi(g)]\|_{\mathcal{H}}\leq\mathcal{O}(\sqrt{\varepsilon});$
by the triangle inequality and Jensen's inequality, it suffices to show
$\left(\mathop{\mathbb{E}}\|\phi(f)-\phi(g)\|_{\mathcal{H}}^{2}\right)^{1/2}+\left(\mathop{\mathbb{E}}\|f\phi(f)-g\phi(g)\|_{\mathcal{H}}^{2}\right)^{1/2}\leq\mathcal{O}(\sqrt{\varepsilon}).$
Indeed,
$\mathop{\mathbb{E}}\|\phi(f)-\phi(g)\|_{\mathcal{H}}^{2}=2-\mathop{\mathbb{E}}2K(f,g)\leq\mathop{\mathbb{E}}|f-g|^{2}\leq\mathop{\mathbb{E}}|f-g|\leq\varepsilon,$
and similarly
$\mathop{\mathbb{E}}\|f\phi(f)-g\phi(g)\|_{\mathcal{H}}^{2}=\mathop{\mathbb{E}}[f^{2}+g^{2}-2fgK(f,g)]\leq\mathop{\mathbb{E}}[f^{2}+g^{2}-2fg(1-|f-g|^{2})]\leq\mathop{\mathbb{E}}[3(f-g)^{2}]\leq
3\varepsilon.$
∎
###### Proof of Theorem C.3.
The statement follows from Lemma C.6 and Lemma C.5 in exactly the same way as
Theorem 8.5. ∎
PREPRINT - accepted at IEEE Latin American Symposium on Circuits and Systems (LASCAS), 2023.
# Quantitative Information Flow for Hardware: Advancing the Attack Landscape
Lennart M. Reimann, Sarp Erdönmez, Dominik Sisejkovic and Rainer Leupers RWTH
Aachen University, Germany
{lennart.reimann, erdonmez, sisejkovic<EMAIL_ADDRESS>
###### Abstract
Security still remains an afterthought in modern Electronic Design Automation
(EDA) tools, which solely focus on enhancing performance and reducing the chip
size. Typically, the security analysis is conducted by hand, leading to
vulnerabilities in the design remaining unnoticed. Security-aware EDA tools
assist the designer in the identification and removal of security threats
while keeping performance and area in mind. State-of-the-art approaches
utilize information flow analysis to spot unintended information leakages in
design structures. However, the classification of such threats is binary,
resulting in negligible leakages being listed as well. A novel quantitative
analysis allows the application of a metric to determine a numeric value for a
leakage. Nonetheless, current approximations to quantify the leakage are still
prone to overlooking leakages. The mathematical model 2D-QModel introduced in
this work aims to overcome this shortcoming. Additionally, as previous work
only includes a limited threat model, multiple threat models can be applied
using the provided approach. Open-source benchmarks are used to show the
capabilities of 2D-QModel to identify hardware Trojans in the design while
ignoring insignificant leakages.
###### Index Terms:
confidentiality, hardware security, quantitative information flow
## I Introduction
Due to the high complexity of modern hardware designs, developers rely more
and more on electronic design automation (EDA) tools. The tools optimize the
description in terms of area and performance while not altering the
functionality. Afterward, functional tests and formal methods are used to
check for functional mistakes. Most designers rely on a manual inspection of
security features. Incorporating security as a metric into EDA tools would
reduce the number of mistakenly implemented or overlooked security vulnerabilities [1].
The field of information flow analysis is broadly seen as a solid methodology
to prove security properties such as confidentiality and integrity [2]. The
analysis yields whether sensitive data can be transferred from sensitive data
to untrusted parts in the hardware. The undesired leakages can be detected
statically for a hardware, or dynamically by simulating with a set of test
cases. Static approaches do not rely on the test coverage to yield a complete
assurance of a signal’s confidentiality. However, both approaches work with
the non-interference property and thus cannot differentiate between negligible
leakages and major threats to data security [3].
A quantitative analysis of information flow provides a metric that allows a
designer to put a threat into context [4]. This allows an EDA tool to neglect
minor vulnerabilities if their removal has a significant impact on the
design’s performance or size. QFlow [5] represents a user-friendly framework
that allows a quantification of leakage for every secret data bit.
Additionally, the tool outputs a leakage path that can be analyzed to
circumvent the threat. Nonetheless, the quantification results in an
intolerable computational complexity. Thus, tools like QFlow use
approximations to form a metric. In this work, the disadvantages of QFlow’s
quantification and the limitations in the choice of the assumed attack model
are presented. Additionally, a new mathematical model is introduced that
addresses the shortcomings of state-of-the-art quantification tools.
The major contributions of this paper are: (1) A new mathematical model that
quantifies the leakage of different Boolean functions more accurately. (2)
Two-dimensional quantification that allows the designer to understand the type
of obfuscation applied to the data. (3) The possibility to elaborate different
attack models on the design’s secret data.
## II Background
Figure 1: Toolflow of QFlow.
### II-A Information Flow Analysis
In the field of Information Flow Analysis (IFA), a hardware design or program
is separated into areas with different security classes [6]: a trusted area
and untrusted components. In the remainder of this work, sensitive signals are
referred to as ’high signals’ and untrusted or observable signals as ’low
signals’. IFA relies on proving the non-interference property. Hardware with
this property does not allow high signals to influence low signals. As this
binary property limits the expressiveness of a threat analysis, quantitative
metrics are elaborated in recent research.
### II-B Quantitative Information Flow
Quantitative Information Flow (QIF) [7] allows classifying minor information
leakages as negligible. It utilizes information theory to quantify the threat
to a secret that is processed by a system. The probability distribution of the
inputs and the system’s functionality are utilized to determine how much
information about the secret is leaked to an output at most. This value
represents the leakage of that secret bit.
A digital system can be represented using an abstract channel description.
Once the secret passes through the system and the attacker can observe
outputs, information can be gathered. For every output, the intruder will
guess the most likely input. It can be differentiated between multiple kinds
of channels. A channel that depends on additional inputs observable by the
adversary, so-called low inputs, introduces obfuscation. Low inputs determine
whether information can be observed at all. A simple example is a multiplexer
controlled by an observable input, which decides whether a secret is
forwarded or not. This scenario describes an external fixed
probability choice [7]. For a low input, the adversary will have all
information about the channel and knows what information about the secret is
forwarded.
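The multiplexer scenario can be made concrete with a few lines of Python (an illustrative sketch; the function and names are ours, not part of QFlow):

```python
# Illustrative sketch: a 2-to-1 multiplexer as an external fixed-probability
# channel. The low (observable) select input decides whether the secret or a
# constant reaches the output.

def mux(select_low: int, secret: int, other_low: int) -> int:
    """Forward `secret` when select_low is 1, otherwise `other_low`."""
    return secret if select_low else other_low

# When the adversary observes select_low = 1, the output reveals the secret
# exactly; when select_low = 0, nothing about the secret is observable.
leak_when_selected = [mux(1, s, 0) for s in (0, 1)]    # [0, 1] -> full leak
leak_when_deselected = [mux(0, s, 0) for s in (0, 1)]  # [0, 0] -> no leak
print(leak_when_selected, leak_when_deselected)
```

Because the adversary knows the select value, the leakage of the secret is determined entirely by the probability of the forwarding low-input pattern.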
Figure 2: A second secret enables the channel that the investigated secret
passes. The adversary cannot know which channel is used. Confusion is
introduced.
Additionally, other bits of the secret introduce confusion. If an
additional secret bit determines the channel's outcome, the adversary has to
guess the outcome based on the information gathered about the respective
secret. This scenario is illustrated in Fig. 2. Such a channel is described as
an internal fixed probability choice, as the attacker cannot know which of the
two channels is currently in use.
In this work, a leakage for each kind of channel is computed so that the
designer can differentiate between confusion and obfuscation. Encryption
should introduce a high confusion; otherwise, user data might be available for
a low input pattern.
## III Related Work
QIF-Verilog [8] forms a timing-independent data flow graph from a Verilog
description to quantify the flow of information from a signal marked as
’highly’ sensitive. The framework quantifies how much uncertainty is
introduced by the operations being performed on the secret before reaching an
output of the top module. A higher uncertainty indicates a higher
obfuscation. However, due to the many assumptions made for QIF-Verilog,
vulnerabilities may be overlooked.
A bitwise analysis using the Posterior Bayes Vulnerability as a metric is
introduced in the framework QFlow, as illustrated in Fig. 1.
Operations are not further analyzed one by one, but clusters are formed so
that inter-signal dependencies can be considered. This reduces the
computational error caused by the approximation.
The Posterior Bayes Vulnerabilities [7] are used as multiplication factors,
covering the range $[0.5,1]$. As leakages can drop below 0.5, a new
mathematical model is needed to approximate such low-leakage channels
securely. By replacing the quantification of the vulnerabilities and the
leakage (Fig. 1), these shortcomings are removed. Furthermore, QFlow only
supports a single, limited attack model.
QFlow’s Attack Model and Assumptions: The attacker knows the complete hardware
design. The adversary may not set or observe any sensitive data directly, but
observable (low) inputs and outputs are accessible and may be used to gather
information about the secret. QFlow determines how much information is
available through those signals.
QFlow’s attack model is extended in this work to evaluate the threat for
additional scenarios in which the adversary may set certain inputs or values
in the design.
## IV Model
In the proposed model, two leakages are computed for every labeled secret bit.
As treating the design as a single channel leads to immense computational
complexity, the design is split into smaller abstract channels. This
approximates the computation so that leakages for all smaller channels are
determined and combined to compute the overall leakage. For this
concatenation, a multiplication factor is defined that avoids introducing a
negative error (an underestimation) while keeping the positive error small.
We define two leakages, the common and the advanced leakage. The common
leakage represents the obfuscation introduced by external fixed probability
channels [7]. Low observable inputs determine the likelihood of information
being observable at all. Obfuscation is introduced by one-way functions.
Multiple inputs might result in the same output so that the adversary must
guess the more likely bit. If both inputs (0 and 1) have an equal chance of
being true, no leakage is present for the channel. The output does not depend
on that secret bit.
The second parameter, the advanced leakage, responds to the internal fixed
probability channels [7]. It illustrates how much information is present in
the output if it is present (common leakage) at all. If secrets are mixed with
other secrets, information gets lost as well. The attacker cannot know what
operation is being applied to the secret under analysis, as this decision
depends on another unknown secret. This means that the advanced leakage is
related to the “confusion” property defined by Shannon [9]. Cryptographic
algorithms with high confusion will have a lower advanced leakage. But, a
circuit with a high advanced leakage and a common leakage of 0 will still not
leak any information at all. Then, the information that is merely confused is
not observable.
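The distinction between the two leakage types can be made concrete in a few lines of Python (an illustrative sketch; the helper names are ours, not the paper's tooling):

```python
# A low-controlled MUX obfuscates: the secret may not reach the output at
# all (external fixed probability choice). XOR with a second secret
# confuses: the output always depends on the secret, but the adversary
# cannot tell a buffer from a not behavior (internal fixed probability
# choice).

def mux_channel(low: int, secret: int) -> int:
    return secret if low else 0       # common leakage: depends on `low`

def xor_channel(other_secret: int, secret: int) -> int:
    return secret ^ other_secret      # advanced leakage: confusion

# For the XOR channel, each secret value can produce both output values,
# so without knowledge of other_secret the best guess is a coin flip.
outputs_for_secret0 = {xor_channel(k, 0) for k in (0, 1)}
outputs_for_secret1 = {xor_channel(k, 1) for k in (0, 1)}
print(outputs_for_secret0 == outputs_for_secret1)  # True: fully confused
```

The MUX channel reduces the common leakage but leaves the advanced leakage untouched; the XOR channel does the opposite.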
### IV-A Channel Compositions
The channel can implement four operations or behaviors for a binary input:
buffer, not, stuck-at-0, and stuck-at-1. Only buffer and not operations make
the value of the secret affect the output; in other words, they propagate the
secret value. The remaining two operations halt the secret and obfuscate it
for the output, as a change in the secret cannot be observed at the output.
Multiple channels constitute a channel composition and behave similarly. The
low inputs and outputs are clustered, and likewise the high ones. A Boolean
equation represents such a channel composition. Depending on the inputs, the
channel composition implements one of the four mentioned behaviors for that
single secret bit.
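The behavior classification can be carried out mechanically: fix the remaining inputs, toggle the secret bit, and compare the two outputs. A minimal illustration (our own helper, not the paper's implementation):

```python
# Classify how a Boolean channel treats one secret bit for a fixed
# assignment of the remaining inputs. The channel is any callable taking a
# `secret` keyword argument plus the fixed inputs.

def classify(channel, fixed_inputs: dict) -> str:
    """Return 'buffer', 'not', 'stuck-at-0' or 'stuck-at-1' for the secret."""
    out0 = channel(secret=0, **fixed_inputs)
    out1 = channel(secret=1, **fixed_inputs)
    if (out0, out1) == (0, 1):
        return "buffer"
    if (out0, out1) == (1, 0):
        return "not"
    return "stuck-at-0" if out0 == 0 else "stuck-at-1"

# Example composition: out = secret AND low (a gated forward).
gate = lambda secret, low: secret & low
print(classify(gate, {"low": 1}))  # buffer: the secret propagates
print(classify(gate, {"low": 0}))  # stuck-at-0: the secret is halted
```

Enumerating the fixed-input assignments in this way yields the behavior probabilities from which the channel multipliers below are computed.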
#### Low Channels
For low channel compositions, only common leakage is introduced. The channel
consists of an observable output, a secret input, and multiple or no
additional low inputs. Low inputs determine which channel in the channel
composition is active for the secret input. The obfuscation is introduced by
the low likelihood of the value being forwarded. Advanced leakage for
different low channels is either 1 or 0.
#### High channels
High channels introduce multiple secret bits into a channel description but no
low inputs. High inputs determine which channel in the composition is active
for the secret input. These channels introduce confusion, as they map the
content of multiple secrets onto a single bit. For the high channels, the
common leakage multiplier is always 1.
#### Mixed channel compositions
If the active channel depends on both high and low inputs, both common and
advanced leakage multipliers have to be computed for the secret under
observation. Both values can get any value between 0 and 1 depending on the
channel composition.
### IV-B Leakage computation
QModel [5] uses the Bayes Posterior Vulnerability for quantifying the change
of leakage caused by the channels. The Posterior vulnerability is a suitable
quantifier for understanding channels, and it is proposed to be used in more
abstract systems. However, as a channel multiplier, the Posterior
Vulnerability is not appropriate, as it overapproximates the leakage
significantly. The Posterior Vulnerability may never be lower than the Prior
Vulnerability, since a lower value would imply that the attacker has lost
prior information she had about the secret. Thus, it can never represent
low-leaking channels accurately. The leakage for such a multiplier is not
reduced enough, leading to a higher number of false positives.
Hence, a new channel multiplier is required. As the new model comprises two
respective leakages, the common and the advanced leakage, two channel
multipliers are required. For every observable input combination, one abstract channel is
defined. This abstract channel consists of one or multiple high channels. The
channel multipliers are computed in two steps. Firstly, the highest
probability value among the leaking channels (buffer or not) is determined.
This probability represents $p_{max}$. Secondly, we sum the high channel’s
probabilities for the not and buffer behavior separately, $p_{sum-not}$ and
$p_{sum-buf}$. The maximum of those two represents the higher threat and is
stored in $p_{threat}$. The leakage factor for the respective abstract channel
is computed with Eq. 1.
$p_{leak}=p_{threat}-\big((1-p_{threat})-|p_{1}-p_{0}|\big).$ (1)
$p_{1}$ and $p_{0}$ represent the probability of a stuck-at-1 and stuck-at-0
channel, respectively. The subtrahend in this equation represents the
confusion introduced by the underlying channel. A higher confusion results in
a reduced leakage.
Furthermore, it is checked whether the most probable leaking channel $p_{max}$
results in a higher multiplier than the computed value $p_{leak}$, with the
intent that an overestimation is guaranteed and no vulnerability is
overlooked:
$p_{C^{\prime}}=\max(p_{leak},p_{max}).$ (2)
$p_{C^{\prime}}$ needs to be computed for every observable input-output
combination, thus for every abstract channel. Afterward, the abstract channel
with the highest $p_{C^{\prime}}$ is determined as the highest threat and
chosen as the advanced leakage multiplier for that channel composition. If the
probability for that low input combination, which specifies that channel, is
0, the channel is ignored in that decision, as the secret would not be
forwarded anyway: the input combination is impossible. The probability of the
observable input-output combination is the low multiplier.
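Under these assumptions, Eqs. (1) and (2) can be sketched as follows (the helper names and input representation are ours; one abstract channel is described by the behavior probabilities of its high channels):

```python
# Sketch of the channel-multiplier computation of Eqs. (1)-(2). Each
# abstract channel is given as lists of buffer / not probabilities for its
# individual high channels, plus the total stuck-at probabilities.

def channel_multiplier(p_bufs, p_nots, p_stuck0, p_stuck1):
    # Step 1: highest probability among the leaking channels (buffer, not).
    p_max = max(p_bufs + p_nots)
    # Step 2: sum the buffer and not probabilities separately; the larger
    # sum (p_sum-buf vs. p_sum-not) is the bigger threat.
    p_threat = max(sum(p_bufs), sum(p_nots))
    # Eq. (1): the subtrahend is the confusion of the underlying channels.
    p_leak = p_threat - ((1.0 - p_threat) - abs(p_stuck1 - p_stuck0))
    # Eq. (2): take the maximum so the leakage is never underestimated.
    return max(p_leak, p_max)

# A pure buffer leaks fully:
print(channel_multiplier([1.0], [], 0.0, 0.0))      # 1.0
# An even buffer/not mix is confused; Eq. (2) keeps the bound at p_max:
print(channel_multiplier([0.5], [0.5], 0.0, 0.0))   # 0.5
```

The second example illustrates the intended overestimation: Eq. (1) alone would give 0 for a maximally confused channel, but Eq. (2) floors the multiplier at $p_{max}$.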
After the individual leakage multipliers for every channel composition in the
design are determined, the secret's leakage value can be propagated through
the system. For every secret bit, the common and advanced leakage is
initialized with [1.0, 1.0]. For every channel the secret passes, its leakage
vector is multiplied with that channel's computed multipliers. Once an output
bit is reached, the final leakage for that secret bit and output is
determined.
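The propagation step amounts to an element-wise product of the per-channel multipliers along the secret's path; a minimal sketch (names are ours):

```python
# Each secret bit starts with the leakage vector [1.0, 1.0]
# (common, advanced); every channel it passes scales both entries by that
# channel's two multipliers.

def propagate(channel_multipliers):
    common, advanced = 1.0, 1.0
    for m_common, m_advanced in channel_multipliers:
        common *= m_common
        advanced *= m_advanced
    return [common, advanced]

# A secret crossing a rarely selected MUX (common multiplier 0.25) and then
# an XOR with another key bit (advanced multiplier 0.5):
print(propagate([(0.25, 1.0), (1.0, 0.5)]))  # [0.25, 0.5]
```
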
TABLE I: 2D-QModel AES results with the Observe attack model. The first three result columns refer to QModel [5], the remaining columns to 2D-QModel.

Benchmark | #Detected /#Actual | Avg. Det. Leakage | Time (s) | #Detected /#Actual | Avg. Det. Leakage | #Unleaked /#Actual | Avg. Sec. Leakage | Det.? | Time (s)
---|---|---|---|---|---|---|---|---|---
AES-T100 | 8/8 | 1 | 246 | 8/8 | [1, 1] | 120/120 | [1, $1.31\cdot 10^{-4}$] | Y | 230
AES-T200 | 8/8 | 1 | 214 | 8/8 | [1, 1] | 120/120 | [1, $1.31\cdot 10^{-4}$] | Y | 236
AES-T400 | 128/128 | 0.183 | 245 | 128/128 | [$5.02\cdot 10^{-41}$, 1] | 0/0 | - | Y | 235
AES-T700 | 8/8 | 1 | 236 | 8/8 | [1, 1] | 120/120 | [1, $1.31\cdot 10^{-4}$] | Y | 236
AES-T800 | 8/8 | 1 | 232 | 8/8 | [1, 1] | 120/120 | [1, $1.31\cdot 10^{-4}$] | Y | 237
AES-T900 | 8/8 | 1 | 231 | 8/8 | [1, 1] | 120/120 | [1, $1.31\cdot 10^{-4}$] | Y | 233
AES-T1000 | 8/8 | 1 | 232 | 8/8 | [1, 1] | 120/120 | [1, $1.31\cdot 10^{-4}$] | Y | 232
AES-T1100 | 8/8 | 1 | 238 | 8/8 | [1, 1] | 120/120 | [1, $1.31\cdot 10^{-4}$] | Y | 233
AES-T1200 | 8/8 | 1 | 233 | 8/8 | [1, 1] | 120/120 | [1, $1.31\cdot 10^{-4}$] | Y | 235
AES-T1600 | 128/128 | 0.222 | 234 | 128/128 | [$2.00\cdot 10^{-4}$, 1] | 0/0 | - | Y | 237
AES-T1700 | 128/128 | 0.295 | 148 | 128/128 | [$8.04\cdot 10^{-41}$, 1] | 0/0 | - | Y | 244
The leakage values for all outputs are collected. The highest leakage value is
assigned to be the leakage value of the respective secret bit.
TABLE II: 2D-QModel AES results with different attack models

Benchmark | Avg. Det. Leakage (Observe) | Avg. Det. Leakage (Set-Inputs) | Avg. Det. Leakage (Set-Conds)
---|---|---|---
AES-T400 | $1.93\cdot 10^{-6}$, 1 | $2.47\cdot 10^{-4}$, 1 | $2.47\cdot 10^{-4}$, 1
AES-T1600 | $1.92\cdot 10^{-6}$, 1 | $2.47\cdot 10^{-4}$, 1 | $2.47\cdot 10^{-4}$, 1
AES-T1700 | $1.10\cdot 10^{-80}$, 1 | $1.10\cdot 10^{-80}$, 1 | $2.54\cdot 10^{-3}$, 1
### IV-C Supported Attack Models
An additional advantage of separating the leakage value into two respective
metrics is that multiple attack models can be evaluated. By analyzing which
channels pose the highest threat in terms of the advanced leakage, it can be
determined whether the observable inputs can be fixed to a value that
increases the likelihood of that leakage occurring.
Observe: Input probabilities for the high and low inputs are provided, and the
attacker can observe the low inputs and outputs. The same model is used for
QModel in QFlow.
Set-Inputs: The design is analyzed to determine the branch conditions that
depend solely on low inputs. Trigger conditions that depend on internal
state, such as counters, are excluded.
Set-Conds: Each condition in the design is elaborated to check whether it can
be used to forward the secret data. In that case, the condition is set
accordingly.
Some of the provided attack models are overly sensitive, as it is not checked
whether conditions exclude each other.
## V Evaluation
Open-source Trojan-infested benchmarks are used to evaluate the capabilities
of the new mathematical model. Trust-Hub [10] provides design descriptions of
cryptographic accelerators that include Trojans leaking the encryption keys.
The channels are clustered using at most 5 input bits. The detection threshold
is set to 0.3 and the warning threshold to 0.01539. The thresholds were
derived from an empirical analysis for a set of benchmarks including multiple
obfuscation schemes.
### V-A Results
Figure 3: Advanced leakage of the AES-T100 key bits.
The computed advanced leakage of the AES-T100 benchmark is illustrated in Fig.
3. The first 8 bits of the key are leaked by the Trojan, which can be observed
in the illustration. The advanced leakages for those bits clearly exceed
those of the remaining bits, which are confused within the structure of the
design before reaching an output. Table I illustrates the detection results of the novel
model’s computation compared to QFlow’s QModel for Trust-Hub’s Trojan-infested
AES accelerators. Both models allow the detection of all threats while
neglecting the intended information flows caused by the encryption itself.
While the runtimes remain similar, the two-dimensional analysis allows a more
detailed analysis of the threats. As indicated by the second entry of
2D-QModel’s leakage vector, the leaked secrets do not experience confusion,
which they should in an encryption scheme. The secrets are only obfuscated
using low signals, which reduces the probability that the secret bit can be
observed entirely. For eight of the Trojans the respective secret bits are
observable at all times, while for AES-T400, AES-T1600, and AES-T1700 branch
conditions apply so that their likelihood (common leakage) is reduced.
Those Trojans can be further elaborated using the additional attack models
that can be set for 2D-QModel. The computed leakages for the different attack
models are shown in Table II.
The Trojans in the T400 and T1600 benchmarks are input-triggered, while T1700
is triggered by an internal counter mechanism. For the Set-Inputs attack
model, the basic leakage for the input-triggered Trojans is increased, while
the threat of the remaining Trojan stays unchanged. The corresponding trigger
values and the leakage path are returned by the tool, which allows a more
detailed evaluation of the threat than QFlow. Using the remaining attack
model, Set-Conds, the counter is set automatically by the tool, which
increases the likelihood of the data being leaked, reflected by the increased
basic leakage for the T1700 benchmark. With these attack models, the designer
can evaluate the threat under multiple threat models and identify trigger
conditions that lead to an increased leakage.
## VI Conclusion
This work presented a new mathematical model to quantify information flow in
digital circuits for different attack models. Such a model facilitates a
security-aware design process on RTL. In comparison to the state of the art,
the quantification was improved, multiple attack models can be set as a
parameter, and the type of obfuscation can be differentiated while not showing
an increase in the computational complexity. The capabilities of the
2D-QModel were evaluated using open-source hardware benchmarks. In future work, the
attack models will be elaborated further to examine whether certain branch
conditions exclude each other.
## References
* [1] P. Kocher _et al._ , “Spectre attacks: Exploiting speculative execution,” in _SP 2019_ , 2019, pp. 1–19.
* [2] W. Hu _et al._ , “Hardware information flow tracking,” _ACM Comput. Surv._ , vol. 54, no. 4, may 2021.
* [3] P. Ryan _et al._ , “Non-interference, who needs it?” in _Proceedings. 14th IEEE Computer Security Foundations Workshop, 2001._ , 2001\.
* [4] B. Köpf _et al._ , _Automation of Quantitative Information-Flow Analysis_. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 1–28.
* [5] L. M. Reimann _et al._ , “QFlow: Quantitative Information Flow for Security-Aware Hardware Design in Verilog,” _ICCD 2021_ , Oct 2021.
* [6] A. Ferraiuolo _et al._ , “Verification of a practical hardware security architecture through static information flow analysis,” _ASPLOS_ , 2017.
* [7] G. Smith, “On the Foundations of Quantitative Information Flow,” in _Foundations of Software Science and Computational Structures_. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 288–302.
* [8] X. Guo _et al._ , “QIF-Verilog: Quantitative Information-Flow based Hardware Description Languages for Pre-Silicon Security Assessment,” _HOST 2019_.
* [9] C. E. Shannon, “A Mathematical Theory of Cryptography,” Bell System Technical Memo MM 45-110-02, Sep. 1945.
* [10] National Science Foundation, “Trust-Hub,” 2021. [Online]. Available: https://trust-hub.org/#/home
Large-scale geometry of the Universe
Yassir Awwad♠ and Tomislav Prokopec♢
♢ Institute for Theoretical Physics, Spinoza Institute & EMME$\Phi$
Utrecht University, Princetonplein 5, 3584 CC Utrecht, The Netherlands
Abstract
The large scale geometry of the late Universe can be decomposed as
${\mathds{R}}\times{\Sigma}_{3}$, where ${\mathds{R}}$ stands for cosmic time
and ${\Sigma}_{3}$ is the three dimensional spatial manifold. We conjecture
that the spatial geometry of the Universe’s spatial section ${\Sigma}_{3}$
conforms with the Thurston-Perelman theorem, according to which the geometry
of $\Sigma_{3}$ is either one of the eight geometries from the Thurston
geometrization conjecture, or a combination of Thurston geometries smoothly
sewn together. We assume that topology of individual geometries plays no
observational role, i.e. the size of individual geometries is much larger than
the Hubble radius today. We investigate the dynamics of each of the individual
geometries by making use of the simplifying assumption that our local Hubble
patch consists of only one such geometry, which is approximately homogeneous
on very large scales, but spatial isotropy is generally violated.
Spatial anisotropies grow in time in decelerating universes, but they decay in
accelerating universes. The thus-created anisotropy problem can be solved by a
period of primordial inflation, akin to how the flatness problem is solved.
Therefore, as regards Universe’s large scale geometry, any of the Thurston’s
geometries should be considered on a par with Friedmann’s geometries.
We consider two observational methods that can be used to test our conjecture:
one based on luminosity distance and one on angular diameter distance
measurements, but leave for the future their detailed forecasting
implementations.
♠ e-mail<EMAIL_ADDRESS>
♢ e-mail<EMAIL_ADDRESS>
###### Contents
1. 1 Introduction & Motivation
1. 1.1 The assumption of isotropy
2. 1.2 Thurston-Perelman’s geometrization theorem
2. 2 Thurston Space-Times
1. 2.1 The FLRW geometries, $\mathds{R}^{3}$, $\mathds{H}^{3}$ and $\mathbf{S}^{3}$
2. 2.2 $\mathds{R}\times\mathds{H}^{2}$ and $\mathds{R}\times\mathbf{S}^{2}$
3. 2.3 $\widetilde{\text{U}(\mathds{H}^{2})}$
1. 2.3.1 A sidenote on $\widetilde{\text{SL}(2,\mathds{R})}$
4. 2.4 Nil and Solv
3. 3 Background Evolution
1. 3.1 General solution
2. 3.2 Friedmann Equations
3. 3.3 Length Scales
4. 4 Distance Measures
1. 4.1 Distance measures in an isotropic universe
1. 4.1.1 Angular diameter distance
2. 4.1.2 Luminosity distance
3. 4.1.3 Etherington’s reciprocity theorem
2. 4.2 Distance measures in an anisotropic universe
1. 4.2.1 Angular diameter distance
2. 4.2.2 Luminosity distance
3. 4.2.3 The anisotropic reciprocity theorem
5. 5 Distance Measures Visualised
1. 5.1 Angular diameter distance
2. 5.2 Luminosity distance
6. 6 Anisotropic Scale Factors
1. 6.1 Growth of anisotropies in an epoch with matter and cosmological constant
2. 6.2 The Anisotropy problem
7. 7 Conclusion and Discussion
8. A Geodesics of the $\mathds{R}\times\mathds{H}^{2}$ and $\mathds{R}\times\mathbf{S}^{2}$ geometries
9. B Geodesics of the $\widetilde{\text{U}(\mathds{H}^{2})}$ geometry
10. C Geodesics of the Nil geometry
11. D Geodesics of the Solv geometry
## 1 Introduction & Motivation
### 1.1 The assumption of isotropy
Much of modern cosmology, in particular the $\Lambda$CDM model, is based on
the Cosmological Principle. This Principle is succinctly summarised by Milne,
[1], according to whom “Two observers in uniform relative motion have
identical views of the Universe’, i.e. that each sees the same evolving
sequence of world-pictures.’ In its modern rendition, the Principle states
that statistical properties of the Universe are the same to all local
observers 111 The principle applies only to inertial observers in the rest
frame of the Cosmic Microwave Background (CMB) photons, which defines the rest
frame of the Universe. Any observer moving with respect to that frame
perceives a CMB dipole and a large scale motion of the Universe’s Large Scale
Structure (LSS). and, in particular, that it is spatially homogeneous and
isotropic at large enough scales.
The Cosmological Principle naturally leads one to also use a spatially
homogeneous and isotropic metric – that is to say, the Friedmann-Lemaître-
Robertson-Walker (FLRW) metric – to describe the space-time background of the
Universe. By placing small metric perturbations on top of background that
break these symmetries, one can account for the formation of structures such
as galaxies and clusters of galaxies. However, there is no strong a priori
reason to believe that the symmetries of the background metric ought to be
exact. In fact, the observational evidence for spatial isotropy and
homogeneity is rather weak, if not controversial.
Based on the WMAP data, the assumption of spatial isotropy was questioned in
Ref. [2], where the authors pointed at the anomalous alignment of the CMB
quadrupole and octopole (at the level $1/60$) (for a recent review of CMB
anomalies, see Ref. [3]). The authors of Ref. [2] pointed out that the
Universe with toroidal topology could explain such an alignment (albeit other
features of this model were absent in the data); but they did not attempt
geometric explanations. Later Refs. [4, 5] found no evidence for toroidal
topology in the Planck satellite data. Land and Magueijo [7, 6, 8] further
worked out the ideas in Ref. [2], and pointed out that the Universe may have a
preferred axis, which approximately corresponds to that of the CMB dipole,
suggesting that on very large scales the Universe violates spatial isotropy.
Given that the WMAP observations of low multipoles were precise enough, the
Planck data have not brought deep new insights into this question, see e.g.
Refs. [9] and [10]. 222The Planck team is cautious regarding whether
statistical anomalies in the CMB are real or just a statistical fluke: “The
existence of these features is uncontested, but, given the modest
significances at which they deviate from the standard $\Lambda$CDM
cosmological model, and the a posteriori nature of their detection, the extent
to which they provide evidence for a violation of isotropy in the CMB remains
unclear. It is plausible that they are indeed simply statistical
fluctuations.” However, recent LSS data tend to corroborate the CMB data. For
example, in his recent essay, Peebles [11] takes a more positive view on the
anomalies, regarding them as tantalizing hints for new, as-yet-undisclosed
physics. In particular, in section 3, Peebles considers anomalies in large
scale structures, and remarks: “The measured dipoles in the distribution of
quasars and in the distributions of radio galaxies cataloged at several radio
frequencies are in about the predicted direction, but the dipole amplitudes
are too large, an anomaly,” for details see Refs. [12, 13, 14, 15]. Peebles
also mentions some other (more local) anomalies, including the local void. For
a more comprehensive overview of the existing anomalies in $\Lambda$CDM (which
include the Hubble and $\sigma_{8}$ tensions) see Refs. [16], [3] and [17].
When combined, these observations and remarks present a well-grounded
motivation for considering more general cosmological models that do not make
the assumption of spatial isotropy, but are nonetheless capable of accounting
what we observe in the night sky.
Another important question to which we do not have an unambiguous answer is:
“do we live in a spatially flat or in a spatially curved universe?” When CMB
data from the Planck mission [18] are combined with those from the Atacama
Cosmology Telescope (ACT), and assuming a FLRW geometry, one obtains a Universe which
is consistent with flat spatial sections, i.e. when LSS BAO data are also
included one obtains $\Omega_{\kappa}=-\kappa/H_{0}^{2}=0.001\pm 0.002$, where
$H_{0}$ is the expansion rate of the Universe today and $\kappa$ Gauss’
curvature of the spatial sections. However, Planck’s data alone show evidence
for a positively curved universe ($\kappa>0$), and depending on the type of
analysis used, one obtains
$\Omega_{\kappa}=-0.044^{+0.018}_{-0.015}\;(3.4\sigma)$ when the baseline Plik
likelihood is used and $\Omega_{\kappa}=-0.035^{+0.018}_{-0.013}\;(\gtrsim
2\sigma)$ when the CamSpec likelihood analysis is used. At this moment it is
unclear whether that is yet another anomaly, or a calibration problem. While
the Planck collaboration analyses are based on the whole sky data, the ACT
covers a fraction of the sky, possibly indicating a directional dependence in
spatial curvature. A definite answer of this intriguing question will have to
wait for EUCLID and SKAO, whose observations will break Planck data
degeneracies with regard to $\Omega_{\kappa}$ and allow for a highly accurate
measurement of spatial curvature, with an error of the order
$\Delta\Omega_{\kappa}\sim 10^{-3}$ [19, 20].
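As a rough sanity check on these numbers (a sketch using only the definition $\Omega_{\kappa}=-\kappa/H_{0}^{2}$ quoted above; the helper name is ours), the comoving curvature radius in units of the Hubble radius is $1/\sqrt{|\Omega_{\kappa}|}$:

```python
# Curvature radius implied by a given Omega_k, in units of the Hubble
# radius R_H = 1/H0: R_curv / R_H = 1 / sqrt(|Omega_k|).
import math

def curvature_radius_in_hubble_units(omega_k: float) -> float:
    return 1.0 / math.sqrt(abs(omega_k))

# Planck+BAO bound, central value plus one sigma: |Omega_k| ~ 0.003.
print(round(curvature_radius_in_hubble_units(0.003), 1))  # 18.3 Hubble radii
# Planck-alone closed-universe value Omega_k = -0.044:
print(round(curvature_radius_in_hubble_units(0.044), 1))  # 4.8 Hubble radii
```

Even the Planck-alone closed-universe value keeps the curvature radius several times the Hubble radius, consistent with the statement in section 3.3 that curvature radii exceed the diameter of the observable universe.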
In this paper, we will explore the consequences of dropping the assumption of
spatial isotropy from the outset and considering a broader set of possible
spatial geometries. We are specifically interested in space-times that
decompose into one time-like dimension and a three-dimensional spatial section
as,
$\displaystyle\mathcal{M}=\mathds{R}\times\Sigma_{3}\,,$ (1.1)
$\displaystyle{\rm d}s^{2}=-{\rm d}t^{2}+a(t)^{2}\Big{(}\gamma_{ij,\Sigma}\,{\rm d}x^{i}{\rm d}x^{j}\Big{)}\,.$ (1.2)
We retain the assumption that spatial sections expand isotropically in (1.2)
for now, 333In section 6 we drop the assumption of spatial isotropy of the
scale factor, and present a detailed analysis of its dynamics. but place no
spatial isotropy requirement on the spatial section ($\Sigma_{3}$,
$\gamma_{ij,\Sigma}$) itself.
The decomposition in equations (1.1)–(1.2) hinges on the assumption that
there exists a spacelike three-dimensional hypersurface on which fluctuations
in the matter density vanish or are very small; such a hypersurface exists if
the Cosmological Principle holds approximately. A large body of observations
supports this approximate version; as discussed above, the data suggest a
small violation of spatial isotropy.
Here we consider the geometry of the Universe on large scales, and leave the
question of topology for a future investigation. Namely, to each geometry one
can associate different topologies. 444For example, if the Universe’s spatial
sections are flat, then their geometry is ${\mathds{R}}^{3}$, which can be
considered as the covering space of various topologies, the simplest ones
being toroidal (${\mathds{T}}^{3}$), cuboidal
(${\mathds{T}}^{2}\times{\mathds{R}}$) and slab
(${\mathds{T}}\times{\mathds{R}}^{2}\simeq S^{1}\times{\mathds{R}}^{2}$).
There is a large body of work devoted to investigating topology of the
Universe, for reviews of existing attempts see Refs. [21, 22, 23, 4, 5]. Early
attempts include the works of Starobinsky [24] and de Oliveira-Costa and Smoot
[25], where the authors use the COBE data to investigate whether the Universe
has toroidal topology, and find no evidence for it. Refs. [26, 27] look for
circles in the CMB sky that would be an important sign of topology, but find
no compelling evidence in favor of such circles.
It is interesting to note that topology can affect the amplitude of CMB
fluctuations on large angular scales. Thus Ref. [28] showed that dodecahedral
space topology in a spatially flat universe can suppress the amplitude of CMB
fluctuations on the largest angular scales, thus explaining some of the large
scale CMB anomalies. Similar results were found in Ref. [29], where the
authors used ${\mathds{R}}^{2}\times S^{1}$ topology to explain the deficit in
low CMB multipoles. 555 These findings are to be contrasted with the Planck
2013 [5], where one reads: “we consider flat spaces with cubic toroidal (T3),
equal-sided chimney (T2) and slab (T1) topologies, three multi-connected
spaces of constant positive curvature (dodecahedral, truncated cube and
octahedral) and two compact negative-curvature spaces. These searches yield no
detection of the compact topology with the scale below the diameter of the
last scattering surface.”
The search for candidates for $\Sigma_{3}$ leads us quite naturally to
consider classification schemes for three-dimensional manifolds. For such a
scheme we will look towards the eight model geometries
described in William Thurston’s geometrization conjecture [30], proven by
Grigori Perelman in the early 2000s [31, 32, 33]. In particular, we will
consider the effect of the large-scale anisotropy inherent in these model
geometries on the evolution of the Universe and how it deforms objects in the
night sky.
### 1.2 Thurston-Perelman’s geometrization theorem
Thurston-Perelman’s geometrization theorem 666This theorem was initially
formulated as a conjecture by Thurston and was later proven by Perelman. We
therefore prefer to term it a theorem, as opposed to the conjecture moniker
it commonly retains in the literature. is a partial classification of three-
dimensional manifolds analogous to the uniformization theorem that classifies
the possible geometries of Riemann surfaces. The main difference lies in the
fact that not every 3-manifold can be endowed with a unique geometry, but
rather every 3-manifold can be cut into pieces that can. Many formulations of
the conjecture exist; we will present the wording of Thurston’s original
publication.
Thurston-Perelman’s geometrization theorem [30], [31, 32, 33]
The interior of every compact 3-manifold has a canonical decomposition into
pieces which have geometric structures.
Central to this theorem are the concepts of a geometry and a geometric
structure. Briefly put, a _Geometry_ is a pair $(X,\text{Isom(}X\text{)})$
consisting of a simply connected, complete and homogeneous Riemannian manifold
$X$ and its isometry group. A geometry $(Y,\text{Isom(}Y\text{)})$ is said to
have a _Geometric Structure_ based on $X$ if there is a subgroup
$A\subset\text{Isom(}X\text{)}$ such that $Y$ is isometric to $X/A$ and
$\text{Isom(}Y\text{)}$ is homeomorphic to $\text{Isom(}X\text{)}/A$.
To give an example, the manifold $\mathds{H}^{2}\times S^{1}$ has a geometry
based on $\mathds{H}^{2}\times\mathds{R}$, since one can obtain the former
from the latter by taking a quotient with the subgroup
$\mathds{1}_{\mathds{H}^{2}}\times\mathds{Z}$ of
$\text{Isom(}\mathds{H}^{2}\times\mathds{R}\text{)}$.
This notion can be used to define an ordering within the set of geometries,
where geometry $A$ is said to be of lower order than $B$ if
$\text{Isom(}A\text{)}$ is properly contained in $\text{Isom(}B\text{)}$. A
natural follow-up question is whether there are maximal geometries with
respect to this ordering: are there geometries $(X,\text{Isom(}X\text{)})$
for which $\text{Isom(}X\text{)}$ is not properly contained in the isometry
group of any other manifold? Thurston provides an answer to this exact
question in his paper, which we will paraphrase here.
Any maximal, simply connected, three-dimensional geometry $X$ that admits a
compact quotient is equivalent to one of the eight geometries below.
* •
$\mathds{R}^{3}$
* •
$\widetilde{\text{U}(\mathds{H}^{2})}$
* •
$\mathds{H}^{3}$
* •
$\mathds{H}^{2}\times\mathds{R}$
* •
$S^{3}$
* •
$S^{2}\times\mathds{R}$
* •
Nil
* •
Solv .
These eight maximal geometries can be said to form the building blocks of all
compact 3-manifolds. This means that if we want to investigate space-time
manifolds that decompose as in equation (1.1), then we can make a good start
by modeling $\Sigma_{3}$ as one of the eight geometries of the above
classification.
The fact that this classification applies only to compact manifolds should not
worry us too much. We will see in section 3.3 that the curvature radius of any
of these geometries is larger than the diameter of the observable universe.
Therefore any notion of closedness or periodicity of the spatial sections is
largely irrelevant to our local patch of the observable universe. 777 Because
inflation exponentially enlarges the spatial sections of the Universe, it
is highly likely that any complex geometric structure of the very early
Universe is hidden beyond the observable patch of the Universe, if we assume
that cosmic inflation occurred. It is therefore reasonable to assume that the
observable patch of the Universe corresponds to one of the Thurston
geometries, and that any subtleties arising due to superimposed topology are
not observable. Nevertheless, from the phenomenological point of view, it is
worth investigating the signatures of the topology of the Universe and
confronting them with the data. In fact, a large body of work mentioned above
does precisely that; for reviews see Refs. [21, 22, 23, 4, 5]. All of these
works assume that the geometry of the spatial section is flat, i.e.
$\Sigma_{3}={\mathds{R}}^{3}$. We will not focus here on how topology may
affect the analyses performed in this work.
Instead, we will concentrate on the consequences of introducing large-scale
spatial anisotropy. Can we build a consistent model of the Universe once the
assumption of isotropy is relaxed, and what effects would this have on what we
see in the sky?
## 2 Thurston Space-Times
In this section we will present explicit coordinate representations of space-
times based on Thurston geometries. Following the approach outlined in the
previous section, we will decompose space-time at large scales as
$\displaystyle\mathcal{M}$ $\displaystyle=\mathds{R}\times\Sigma_{3}$ (2.1)
$\displaystyle{\rm d}s^{2}$ $\displaystyle=-{\rm
d}t^{2}+a(t)^{2}\bigg{(}\gamma_{ij,\text{Thurston}}\ {\rm d}x^{i}{\rm
d}x^{j}\bigg{)},$ (2.2)
where $\gamma_{ij,\text{Thurston}}$ is specific to each of the eight
geometries of this theorem.
The spatial part contains a curvature parameter $\kappa\in\mathds{R}$ that
distinguishes between positive ($\kappa>0$), zero ($\kappa=0$) and negative
curvature ($\kappa<0$). This parameter also defines a length scale, the radius
of curvature: $L=1/\sqrt{\kappa}$ for geometries with positive curvature and
$L=1/\sqrt{-\kappa}$ for those with negative curvature, while in the flat case
$L\rightarrow\infty$ as $\kappa$ approaches zero. We will use the curvature
parameter $\kappa$ and the curvature radius $L$ interchangeably in this text.
### 2.1 The FLRW geometries, $\mathds{R}^{3}$, $\mathds{H}^{3}$ and
$\mathbf{S}^{3}$
The first three Thurston geometries, $\mathds{R}^{3}$, $\mathds{H}^{3}$ and
$\mathbf{S}^{3}$, are exactly the spatial slices of the familiar FLRW space-
time. We can parameterise all three simultaneously in hyperspherical
coordinates as follows.
$\displaystyle{\rm d}s^{2}$ $\displaystyle=-{\rm d}t^{2}+a(t)^{2}\bigg{(}{\rm
d}\chi^{2}+S_{\kappa}(\chi)^{2}{\rm d}\mathbf{\Omega}^{2}\bigg{)}=-{\rm
d}t^{2}+a(t)^{2}\bigg{(}{\rm d}\chi^{2}+S_{\kappa}(\chi)^{2}\big{(}{\rm
d}\theta^{2}+\sin(\theta)^{2}{\rm d}\phi^{2}\big{)}\bigg{)}$ (2.3)
$\displaystyle S_{\kappa}(\chi)$
$\displaystyle=\begin{cases}\sin(\chi\sqrt{\kappa})/\sqrt{\kappa}&\text{ if
}\kappa>0\\\ \chi&\text{ if }\kappa=0\\\
\sinh(\chi\sqrt{-\kappa})/\sqrt{-\kappa}&\text{ if }\kappa<0\end{cases}$ (2.4)
$\displaystyle=\begin{cases}L\sin(\chi/L)&\text{ if }\kappa>0\\\ \chi&\text{
if }\kappa=0\\\ L\sinh(\chi/L)&\text{ if }\kappa<0\end{cases}$ (2.5)
where $\chi\in[0,\infty)$, $\theta\in[0,\pi)$ and $\phi\in[0,2\pi)$. The
coordinate $\chi$ measures comoving distance along a radial geodesic and the
angles $\theta$ and $\phi$ are the _polar_ and _azimuthal_ angles
respectively.
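All three branches of $S_{\kappa}$ reduce smoothly to the flat result $\chi$
as $\kappa\rightarrow 0$. A minimal numerical sketch of Eq. (2.4) (function
and variable names are ours, not from this paper) makes the limit explicit:

```python
import numpy as np

def S_kappa(chi, kappa):
    """Metric function S_kappa(chi) of Eq. (2.4): comoving angular
    radius for positive, zero and negative spatial curvature."""
    if kappa > 0:
        return np.sin(chi * np.sqrt(kappa)) / np.sqrt(kappa)
    if kappa < 0:
        return np.sinh(chi * np.sqrt(-kappa)) / np.sqrt(-kappa)
    return chi  # flat case, L -> infinity

# As kappa -> 0 from either side, both curved branches approach chi:
chi = 1.3
print(S_kappa(chi, 1e-8), S_kappa(chi, 0.0), S_kappa(chi, -1e-8))
```

For finite curvature the branches bracket the flat value: positive curvature
focuses geodesics ($S_{\kappa}<\chi$), negative curvature defocuses them
($S_{\kappa}>\chi$).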
### 2.2 $\mathds{R}\times\mathds{H}^{2}$ and $\mathds{R}\times\mathbf{S}^{2}$
The next two geometries, $\mathds{R}\times\mathds{H}^{2}$ and
$\mathds{R}\times\mathbf{S}^{2}$, are two-dimensional analogues of the
ordinary FLRW spatial sections. We will present them using hyperspherical
coordinates of one dimension lower.
${\rm d}s^{2}=-{\rm d}t^{2}+a(t)^{2}\bigg{(}{\rm d}z^{2}+{\rm
d}\chi^{2}+S_{\kappa}(\chi)^{2}{\rm d}\phi^{2}\bigg{)},$ (2.6)
Here $\chi\in[0,\infty)$ and $\phi\in[0,2\pi)$ as before; but rather than
including a polar angle $\theta$, we instead have a real coordinate
$z\in\mathds{R}$ orthogonal to the $(\chi,\phi)$ plane. The parameter $\kappa$
again distinguishes between positive and negative curvature through
$S_{\kappa}$, as defined in (2.4).
### 2.3 $\widetilde{\text{U}(\mathds{H}^{2})}$
The sixth Thurston geometry, $\widetilde{\text{U}(\mathds{H}^{2})}$, is the
universal cover of the unit tangent bundle of the hyperbolic plane. To derive
its metric we will mostly follow the derivation of Fagundes in [36] and begin
with the following metric of $\mathds{H}^{2}$.
${\rm d}s^{2}={\rm d}x^{2}+\cosh^{2}(x){\rm d}y^{2}$ (2.7)
A unit tangent vector $\hat{u}_{p}\in\text{U}_{p}(\mathds{H}^{2})$ at any
point $p=(x,y)\in\mathds{H}^{2}$ must satisfy
$\hat{u}_{p}\cdot\hat{u}_{p}=1$. This means we can write
$\hat{u}_{p}=\big{(}\cos(\phi),\ \tfrac{\sin(\phi)}{\cosh(x)}\big{)}$, with
$0\leq\phi<2\pi$. For a small displacement ${\rm d}p^{i}$ we can calculate the
total differential,
${\rm D}u^{i}=\left(\frac{\partial u^{i}}{\partial
p^{k}}+\Gamma^{i}_{jk}u^{j}\right){\rm d}p^{k}+\frac{\partial
u^{i}}{\partial\phi}{\rm d}\phi,$ (2.8)
where $i,\ j,\ k\in\\{1,2\\}$, $p^{1}=x$ and $p^{2}=y$. The nonzero
Christoffel symbols (calculated from (2.7)) are
$\Gamma^{1}_{22}=-\sinh(x)\cosh(x)$ and
$\Gamma^{2}_{12}=\Gamma^{2}_{21}=\tanh(x)$. Therefore we get
$\displaystyle{\rm D}\hat{u}^{1}$ $\displaystyle=-\sinh(x)\sin(\phi){\rm
d}y-\sin(\phi){\rm d}\phi$ (2.9) $\displaystyle{\rm D}\hat{u}^{2}$
$\displaystyle=-\tanh(x)\cos(\phi){\rm d}y-\frac{\cos(\phi)}{\cosh(x)}{\rm
d}\phi.$ (2.10)
The squared length of ${\rm D}\hat{u}$ is then given by $\big{(}{\rm
D}\hat{u}^{1}\big{)}^{2}+\big{(}{\rm D}\hat{u}^{2}\big{)}^{2}\cosh^{2}(x)$, so
that the metric on $\text{U}(\mathds{H}^{2})$ can be written as:
$\displaystyle{\rm d}s_{\Sigma}^{2}$ $\displaystyle={\rm
d}x^{2}+\cosh^{2}(x){\rm d}y^{2}+\big{(}{\rm D}\hat{u}^{1}\big{)}^{2}+({\rm
D}\hat{u}^{2})^{2}\cosh^{2}(x)$ (2.11) $\displaystyle={\rm
d}x^{2}+\cosh^{2}(x){\rm d}y^{2}+\big{(}{\rm d}\phi+\sinh(x){\rm
d}y\big{)}^{2}.$ (2.12)
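The simplification leading from (2.11) to (2.12) can be checked symbolically,
treating the differentials ${\rm d}y$ and ${\rm d}\phi$ as formal symbols (a
sketch written for this check, not code from the paper; the overall signs of
(2.9)–(2.10) only enter quadratically):

```python
import sympy as sp

x, phi, dy, dphi = sp.symbols('x phi dy dphi', real=True)

# Components of D u-hat, with the angle differential written as dphi
# (the angle is only promoted to a real coordinate z further below).
Du1 = -sp.sinh(x) * sp.sin(phi) * dy - sp.sin(phi) * dphi
Du2 = -sp.tanh(x) * sp.cos(phi) * dy - sp.cos(phi) / sp.cosh(x) * dphi

# Squared length entering Eq. (2.11) ...
length_sq = Du1**2 + Du2**2 * sp.cosh(x)**2
# ... which should collapse to the last term of Eq. (2.12):
target = (dphi + sp.sinh(x) * dy)**2

print(sp.simplify(length_sq - target))  # 0
```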
Note that the topology of $\widetilde{\text{U}(\mathds{H}^{2})}$ is
homeomorphic to the Cartesian product of the 2-plane with the circle. This
means that this space is path-connected, but not simply-connected: there are
loops that wind around $\phi$ that cannot be shrunk to a point. Taking the
universal cover of $\text{U}(\mathds{H}^{2})$ means that we must ‘unroll’ the
circle $\mathbf{S}$ to a line by promoting the angle $\phi$ to a real variable
$z$. Since the metric in (2.11–2.12) does not contain a length scale, we will
introduce it by setting $x\rightarrow x\sqrt{-\kappa}=x/L$ in the argument of
the hyperbolic functions.
The (identity component of the) $\widetilde{\text{U}(\mathds{H}^{2})}$ space-
time can then be presented as $\mathds{R}^{1,3}$ with the following metric:
$\displaystyle{\rm d}s_{s}^{2}$ $\displaystyle=-{\rm d}t^{2}+a(t)^{2}\left({\rm
d}x^{2}+\cosh^{2}(x\sqrt{-\kappa}){\rm d}y^{2}+\big{(}{\rm
d}z+\sinh(x\sqrt{-\kappa}){\rm d}y\big{)}^{2}\right)$ (2.13)
$\displaystyle=-{\rm d}t^{2}+a(t)^{2}\left({\rm d}x^{2}+\cosh^{2}(x/L){\rm
d}y^{2}+\big{(}{\rm d}z+\sinh(x/L){\rm d}y\big{)}^{2}\right).$ (2.14)
#### 2.3.1 A sidenote on $\widetilde{\text{SL}(2,\mathds{R})}$
In the literature, $\widetilde{\text{U}(\mathds{H}^{2})}$ is often used
interchangeably with $\widetilde{\text{SL}(2,\mathds{R})}$ in the context of
the geometrization theorem. This identification is sensible in a topology or
differential geometry context, as $\text{SL}(2,\mathds{R})$ acts naturally on
$\mathds{H}^{2}$ by way of a Möbius transformation. This action can be
extended to $\text{U}(\mathds{H}^{2})$ through the tangent map, which induces
a diffeomorphism between the two manifolds. This diffeomorphism is not an
isometry, however. To see this, we note that $\text{SL}(2,\mathds{R})$ can be
constructed as a unit sphere in $\mathds{R}^{2,2}$.
$g=\begin{pmatrix}a&b\\\
c&d\end{pmatrix}=\begin{pmatrix}X_{1}+X_{3}&X_{4}+X_{2}\\\
X_{4}-X_{2}&X_{1}-X_{3}\end{pmatrix}\in\text{SL}(2,\mathds{R}).$ (2.15)
The defining equation for $\text{SL}(2,\mathds{R})$, $\det(g)=1$ now becomes
$X_{1}^{2}+X_{2}^{2}-X_{3}^{2}-X_{4}^{2}=1$. The metric on
$\widetilde{\text{SL}(2,\mathds{R})}$ can then be induced from
$\mathds{R}^{2,2}$ by using the Iwasawa decomposition and then taking similar
steps to the above to pass to the universal cover and to restore a length
scale $L$. We then get the following metric,
$\displaystyle{\rm d}s_{\Sigma}^{2}$ $\displaystyle={\rm d}x^{2}+{\rm
d}y^{2}-{\rm d}z^{2}+2\sinh(2x\sqrt{-\kappa}){\rm d}y{\rm d}z$ (2.16)
$\displaystyle={\rm d}x^{2}+{\rm d}y^{2}-{\rm d}z^{2}+2\sinh(2x/L){\rm d}y{\rm
d}z.$ (2.17)
Note the sign of ${\rm d}z^{2}$, which shows that this metric on
$\text{SL}(2,\mathds{R})$ is not locally Euclidean and cannot be isometric to
$\text{U}(\mathds{H}^{2})$. For the purposes of this paper we will therefore
refrain from identifying these two spaces.
### 2.4 Nil and Solv
The last two geometries, Nil and Solv, are negatively curved ($\kappa<0$)
geometries modeled on Lie groups: Nil is the geometry of the Heisenberg group,
while Solv is the geometry of the identity component of the two-dimensional
Poincaré group. There are standard ways of presenting these manifolds as
$\mathds{R}^{3}$ endowed with a special metric.
In the case of Nil, the space-time can be presented as
$\displaystyle{\rm d}s^{2}$ $\displaystyle=-{\rm d}t^{2}+a(t)^{2}\Bigg{(}{\rm
d}x^{2}+\Big{(}1-\kappa x^{2}\Big{)}{\rm d}y^{2}+{\rm
d}z^{2}-2x\sqrt{-\kappa}\ {\rm d}y{\rm d}z\Bigg{)}$ (2.18)
$\displaystyle=-{\rm d}t^{2}+a(t)^{2}\Bigg{(}{\rm
d}x^{2}+\Big{(}1+x^{2}/L^{2}\Big{)}{\rm d}y^{2}+{\rm d}z^{2}-2x/L\ {\rm
d}y{\rm d}z\Bigg{)}.$ (2.19)
The space-time based on Solv can be presented as
$\displaystyle{\rm d}s^{2}$ $\displaystyle=-{\rm d}t^{2}+a(t)^{2}\Bigg{(}{\rm
e}^{z\sqrt{-\kappa}}{\rm d}x^{2}+{\rm e}^{-z\sqrt{-\kappa}}{\rm d}y^{2}+{\rm
d}z^{2}\Bigg{)}$ (2.20) $\displaystyle=-{\rm d}t^{2}+a(t)^{2}\Bigg{(}{\rm
e}^{z/L}{\rm d}x^{2}+{\rm e}^{-z/L}{\rm d}y^{2}+{\rm d}z^{2}\Bigg{)}.$ (2.21)
## 3 Background Evolution
In this section we will derive the evolution of the cosmological background
for each of the Thurston space-times. It will turn out that the evolution of
these space-times is very similar to the ordinary FLRW case. This is because
the metrics in equations (2.3), (2.6), (2.13), (2.18) and (2.20) presented in
the previous section all admit a very similar Einstein tensor:
$G^{\mu}_{\hskip 3.01389pt\nu}=-\text{diag}\left(3\frac{\dot{a}^{2}}{a^{2}},\hskip
4.30554pt\frac{\dot{a}^{2}+2\ddot{a}a}{a^{2}},\hskip
4.30554pt\frac{\dot{a}^{2}+2\ddot{a}a}{a^{2}},\hskip
4.30554pt\frac{\dot{a}^{2}+2\ddot{a}a}{a^{2}}\right)+\frac{\kappa}{a^{2}}\text{diag}\left(K^{(0)},\hskip
4.30554ptK^{(1)},\hskip 4.30554ptK^{(2)},\hskip 4.30554ptK^{(3)}\right)\,.$
(3.1)
Here $K^{(0)}$, $K^{(1)}$, $K^{(2)}$ and $K^{(3)}$ are a set of 4
parameters888We have used round brackets around the indices to indicate that
these are a set of parameters and not a vector. specific to each Thurston
geometry that determine the strength with which terms in the energy-momentum
tensor are coupled to the curvature parameter $\kappa$. This makes the
calculation fairly straightforward, as we can leave these parameters implicit
and solve for all geometries simultaneously.
The Nil and $\widetilde{\text{U}(\mathds{H}^{2})}$ geometries are slightly
more complicated in that their Einstein tensor has an additional nonzero off-
diagonal term.
$\displaystyle G^{3}_{\hskip 3.01389pt2,\text{Nil}}$
$\displaystyle=x\sqrt{-\kappa}\frac{\kappa}{a^{2}}$ (3.2) $\displaystyle
G^{3}_{\hskip 3.01389pt2,\widetilde{\text{U}(\mathds{H}^{2})}}$
$\displaystyle=2\sinh(x\sqrt{-\kappa})\frac{\kappa}{a^{2}}.$ (3.3)
For all geometries, the Ricci scalar takes the form
$R=6\,\frac{a\ddot{a}+\dot{a}^{2}}{a^{2}}-2K^{(0)}\frac{\kappa}{a^{2}}.$ (3.4)
This expression is devoid of any coordinates, which confirms that the Thurston
space-times are indeed homogeneous – despite this not being immediately
manifest from their metric representation. This means that we are free to
choose the origin of our coordinate system. We will exploit this in later
sections to simplify calculations.
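Homogeneity can also be verified directly for the less familiar geometries.
As an illustrative check (a generic Ricci-scalar routine written for this
check, not code from the paper), the spatial Ricci scalar of the Nil metric
(2.19) with $L=1$, i.e. $\kappa=-1$, comes out as the coordinate-free constant
$-1/2$, in agreement with the curvature term $-2K^{(0)}\kappa$ of Eq. (3.4)
for $K^{(0)}=-1/4$ (Table 1):

```python
import sympy as sp

def ricci_scalar(g, coords):
    """Ricci scalar of a Riemannian metric g (sympy Matrix) in the
    given coordinates, via Levi-Civita Christoffel symbols."""
    n = len(coords)
    ginv = g.inv()
    Gamma = [[[sum(ginv[i, l] * (sp.diff(g[l, j], coords[k])
                                 + sp.diff(g[l, k], coords[j])
                                 - sp.diff(g[j, k], coords[l]))
                   for l in range(n)) / 2
               for k in range(n)]
              for j in range(n)]
             for i in range(n)]

    def ricci(j, k):
        # R_jk = d_i G^i_jk - d_j G^i_ik + G^i_ip G^p_jk - G^i_jp G^p_ik
        return sum(sp.diff(Gamma[i][j][k], coords[i])
                   - sp.diff(Gamma[i][i][k], coords[j])
                   + sum(Gamma[i][i][p] * Gamma[p][j][k]
                         - Gamma[i][j][p] * Gamma[p][i][k]
                         for p in range(n))
                   for i in range(n))

    return sp.simplify(sum(ginv[j, k] * ricci(j, k)
                           for j in range(n) for k in range(n)))

x, y, z = sp.symbols('x y z', real=True)
# Spatial Nil metric of Eq. (2.19) with L = 1:
g_nil = sp.Matrix([[1, 0,         0],
                   [0, 1 + x**2, -x],
                   [0, -x,        1]])
R_nil = ricci_scalar(g_nil, [x, y, z])
print(R_nil)  # constant, with no coordinate dependence
```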
### 3.1 General solution
The most general fluid solution compatible with this Einstein tensor is:
$T^{\mu}_{\hskip 3.01389pt\nu}=\rho\;u^{\mu}u_{\nu}+p\;\delta^{\mu}_{\hskip
3.01389pt\nu}+\pi^{\mu}_{\hskip 3.01389pt\nu}\,,$ (3.5)
where $\rho=\rho(t)$ and $p=p(t)$ denote the fluid energy density and
pressure, respectively, $u^{\mu}=u^{\mu}(t)$ is the velocity vector of the
fluid and $\pi^{\hskip 3.01389pt\mu}_{\nu}$ is the shear tensor, which is
symmetric, $\pi^{\mu}_{\hskip 3.01389pt\nu}=\pi^{\hskip 3.01389pt\mu}_{\nu}$,
traceless, $\pi^{\mu}_{\hskip 3.01389pt\mu}=0$, and transverse,
$\pi^{\mu}_{\hskip 3.01389pt\nu}u^{\nu}=0$. Clearly the Ansatz (3.5) goes
beyond the perfect fluid Ansatz of the standard Friedmann geometries, for
which the shear tensor vanishes. In section 6 we consider an alternative
approach where we set $\pi^{\mu}_{\hskip 3.01389pt\nu}=0$, but allow for
anisotropic expansion rates.
We now consider the Einstein equation in the rest frame of this fluid, where
$u^{\mu}=(1,0,0,0)$. This yields a slight modification of the familiar
Friedmann equations,
$\displaystyle^{0}_{\hskip 3.01389pt0}$ equation: $\displaystyle H^{2}$
$\displaystyle=\frac{8\pi G}{3}\rho+\frac{\Lambda}{3}+\frac{\kappa
K^{(0)}}{3a^{2}}$ (3.6) $\displaystyle^{\hat{i}}_{\hskip 3.01389pt\hat{i}}$
equation: $\displaystyle\frac{\ddot{a}}{a}$ $\displaystyle=-\frac{4\pi
G}{3}(\rho+3p)+\frac{\Lambda}{3}+\frac{\kappa}{a^{2}}\left(\frac{K^{(\hat{i})}}{2}-\frac{K^{(0)}}{6}\right)-4\pi
G\pi^{\hat{i}}_{\hskip 3.01389pt\hat{i}}.$ (3.7)
There is no summation over repeated indices carrying hats in the equations in
this section. We have omitted the $^{\hat{i}}_{\hskip 3.01389pt\hat{j}}$
equation for $\hat{i}\neq\hat{j}$; for all geometries except Nil and
$\widetilde{\text{U}(\mathds{H}^{2})}$ this reads $\pi^{\hat{i}}_{\hskip
3.01389pt\hat{j}}=0$ for $\hat{i}\neq\hat{j}$, but for these two exceptions we
get
$\displaystyle\pi^{3}_{\hskip 3.01389pt2,\text{Nil}}$
$\displaystyle=x\sqrt{-\kappa}\frac{\kappa}{8\pi Ga^{2}}$ (3.8)
$\displaystyle\pi^{3}_{\hskip
3.01389pt2,\widetilde{\text{U}(\mathds{H}^{2})}}$
$\displaystyle=2\sinh(x\sqrt{-\kappa})\frac{\kappa}{8\pi Ga^{2}}$ (3.9)
The right-hand side of (3.7) has terms containing the (fixed) spatial index
$\hat{i}$, however the left-hand side does not. This means that all index-
carrying terms on the right-hand side must be the same independent of the
choice of index $\hat{i}$. Therefore for all $\hat{i}$, $\hat{j}$ it must hold
that:
$\displaystyle 8\pi Ga^{2}\pi^{\hat{i}}_{\hskip 3.01389pt\hat{i}}-\kappa
K^{(\hat{i})}=8\pi Ga^{2}\pi^{\hat{j}}_{\hskip 3.01389pt\hat{j}}-\kappa
K^{(\hat{j})}$ (3.10)
Since $\pi$ is traceless, we can solve for its diagonal elements,
$\pi^{\hat{i}}_{\hskip 3.01389pt\hat{i}}=\frac{\kappa}{24\pi
Ga^{2}}\left(2K^{(\hat{i})}-\sum_{\hat{j}\neq\hat{i}}K^{(\hat{j})}\right).$
(3.11)
This can be used to rewrite the second Friedmann-like equation (3.7) into a
more complete form,
$\displaystyle\frac{\ddot{a}}{a}$ $\displaystyle=-\frac{4\pi
G}{3}(\rho+3p)+\frac{\Lambda}{3}+\frac{\kappa}{6a^{2}}\left(-K^{(0)}+K^{(1)}+K^{(2)}+K^{(3)}\right).$
(3.12)
When we plug in the values for the $K$-parameters, it will turn out that the
last term in this equation, $-K^{(0)}+K^{(1)}+K^{(2)}+K^{(3)}$, vanishes for
all of the Thurston space-times and so (3.12) reduces to the ordinary form of
the second Friedmann equation. We obtain the evolution of $\rho$ directly by
differentiating (3.6) with respect to time
$2H\left(\frac{\ddot{a}}{a}-H^{2}\right)=\frac{8\pi
G}{3}\dot{\rho}-2H\frac{K^{(0)}\kappa}{3a^{2}}.\\\ $ (3.13)
Using (3.6) and (3.12), we can rewrite this as,
$\dot{\rho}+3H(\rho+p)=\frac{\kappa H}{8\pi
Ga^{2}}\,\left(-K^{(0)}+K^{(1)}+K^{(2)}+K^{(3)}\right)=0\,.\\\ $ (3.14)
### 3.2 Friedmann Equations
Let’s now consolidate equations (3.6), (3.12) and (3.14) into the following
set:
Friedmann I: $\displaystyle H^{2}\ $ $\displaystyle=\ \frac{8\pi
G}{3}\rho+\frac{\Lambda}{3}+\frac{\kappa K^{(0)}}{3a^{2}}$ (3.15) Friedmann
II: $\displaystyle\frac{\ddot{a}}{a}\ $ $\displaystyle=\ -\frac{4\pi
G}{3}(\rho+3p)+\frac{\Lambda}{3}$ (3.16) Energy Evolution: $\displaystyle 0\ $
$\displaystyle=\ \dot{\rho}+3H(\rho+p)$ (3.17)
In this anisotropic context, we thus obtain a set of Friedmann equations very
similar to those of the isotropic case. Importantly, space-times based on
Thurston geometries admit the usual matter, radiation and dark energy
(cosmological constant) contributions to the constituents of the universe that
we know from ordinary FLRW space-time.
There are however two main differences between these equations and their
isotropic equivalent. First, the strength of the curvature contribution to the
energy density is determined through the parameter $K^{(0)}$, which varies
between geometries, and secondly, the energy-momentum tensor picks up shear
terms $\pi^{i}_{\hskip 3.01389ptj}$. The relevant values are given in Table 1.
Space-Time | $K^{(0)}$ | $K^{(1)}$ | $K^{(2)}$ | $K^{(3)}$ | $8\pi Ga^{2}\pi^{1}_{\hskip 3.01389pt1}/\kappa$ | $8\pi Ga^{2}\pi^{2}_{\hskip 3.01389pt2}/\kappa$ | $8\pi Ga^{2}\pi^{3}_{\hskip 3.01389pt3}/\kappa$ | $8\pi Ga^{2}\pi^{3}_{\hskip 3.01389pt2}/\kappa$
---|---|---|---|---|---|---|---|---
FLRW | -3 | -1 | -1 | -1 | 0 | 0 | 0 | 0
$\mathbf{\mathds{R}\times\mathds{H}^{2}/S^{2}}$ | -1 | 0 | 0 | -1 | 1/3 | 1/3 | -2/3 | 0
$\widetilde{\text{U}(\mathds{H}^{2})}$ | -5/4 | 1/4 | 1/4 | -7/4 | 2/3 | 2/3 | -4/3 | $-2\sinh(x\sqrt{-\kappa})$
Nil | -1/4 | 1/4 | 1/4 | -3/4 | 1/3 | 1/3 | -2/3 | $x\sqrt{-\kappa}$
Solv | -1 | -1 | -1 | 1 | -2/3 | -2/3 | 4/3 | 0
Table 1: Several parameters for the Thurston space-times.
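The diagonal shear entries of Table 1 follow from Eq. (3.11), and for every
geometry the combination $-K^{(0)}+K^{(1)}+K^{(2)}+K^{(3)}$ vanishes, as
claimed below Eq. (3.12). Both facts are easy to verify with exact arithmetic
(a sketch; the $K$-parameters are transcribed from Table 1, dictionary keys
are ours):

```python
from fractions import Fraction as F

# (K0, K1, K2, K3) per geometry, transcribed from Table 1.
K = {
    'FLRW':     (F(-3),    F(-1),   F(-1),   F(-1)),
    'RxH2/S2':  (F(-1),    F(0),    F(0),    F(-1)),
    'U(H2)~':   (F(-5, 4), F(1, 4), F(1, 4), F(-7, 4)),
    'Nil':      (F(-1, 4), F(1, 4), F(1, 4), F(-3, 4)),
    'Solv':     (F(-1),    F(-1),   F(-1),   F(1)),
}

for name, (K0, K1, K2, K3) in K.items():
    spatial = (K1, K2, K3)
    # Diagonal shear from Eq. (3.11), in units of kappa / (8 pi G a^2):
    pi_diag = [F(1, 3) * (2 * Ki - (sum(spatial) - Ki)) for Ki in spatial]
    assert sum(pi_diag) == 0          # shear is traceless
    assert -K0 + K1 + K2 + K3 == 0    # curvature term in (3.12) drops out
    print(name, pi_diag)
```

The printed fractions reproduce the diagonal shear columns of Table 1, e.g.
$(1/3,\,1/3,\,-2/3)$ for $\mathds{R}\times\mathds{H}^{2}/S^{2}$ and Nil.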
The second difference deserves further attention. The shear contributions
$\pi^{i}_{\hskip 3.01389ptj}$ scale as $\propto a^{-2}$ instead of $\propto
a^{-3}$ for matter, $\propto a^{-4}$ for radiation or $\propto a^{0}$ for dark
energy. In other words, the presence of large-scale anisotropic curvature
requires us to cook up some exotic fluid with these scaling properties that
can sustain the large-scale anisotropy on the spatial slices it inhabits.
In section 6 we will show that this can be remedied by introducing
anisotropies in the scale factor $a$. This alternative approach allows us to
solve the Friedmann equations using a perfect fluid Ansatz at the cost of
picking up anisotropies in expansion of the space-time. However, we will show
that these remain small compared to leading-order contributions and can
therefore be neglected. In the rest of this work we will therefore focus on
leading-order contributions.
The Nil and $\widetilde{\text{U}(\mathds{H}^{2})}$ space-times remain an
exception, as their $\pi^{3}_{\hskip 3.01389pt2}$ terms cannot be absorbed
into anisotropies in the scale factor. However, we note that this term is
additionally suppressed by $x/L$, which is much less than unity on sub-Hubble
scales. Since this will be observationally small compared to other curvature
terms in the energy-momentum tensor, we will also ignore this term in further
calculations.
### 3.3 Length Scales
As a last topic in this section, we will calculate several length scales;
these will be used to generate the plots in section 5. In particular, we are
interested in the inherent curvature radius $L$ of the geometry and in
$\eta_{*}$, the conformal time until the last scattering surface.
To this end, we introduce $\rho_{\Lambda}=\Lambda/(8\pi G)$ and
$\rho_{\kappa}=K^{(0)}\kappa/(8\pi Ga^{2})$ to put the cosmological constant
and curvature on the same footing as the other contents of the Universe. We
can use this to rewrite (3.15) in terms of energy densities,
$H^{2}=\frac{8\pi
G}{3}\big{(}\rho_{\text{contents}}+\rho_{\Lambda}+\rho_{\kappa})\,.$ (3.18)
By setting $\rho_{i}(t)=\rho_{i}(0)a(t)^{-2\epsilon_{i}}$, where
$\epsilon_{i}$ denotes the slow roll parameter associated with the fluid $i$
and is defined as $\epsilon_{i}=-\dot{H}/H^{2}$ if that fluid would dominate
the energy content of the Universe, we can re-express Eq. (3.18) in terms of
fractions of critical energy density $\Omega_{i,0}=\rho_{i,0}/\rho_{\rm
crit,0}$ as,
$H^{2}(t)=\frac{8\pi
G}{3}\sum_{i}\frac{\rho_{i,0}}{a(t)^{2\epsilon_{i}}}=H_{0}^{2}\sum_{i}\frac{\Omega_{i,0}}{a(t)^{2\epsilon_{i}}}=H_{0}^{2}\sum_{i}\Omega_{i,0}\big{(}1+z(t)\big{)}^{2\epsilon_{i}}\,,$
(3.19)
where $z(t)$ is the redshift, $H_{0}^{2}=\frac{8\pi G}{3}\rho_{\text{crit},0}$
denotes the expansion rate today and $\rho_{\text{crit},0}$ is the energy
density supporting it. Assuming the Universe has a matter, radiation,
curvature and cosmological constant (dark energy) component, we can derive an
integral expression for the (conformal) time between two events,
$\displaystyle\eta$ $\displaystyle=\int^{\eta_{2}}_{\eta_{1}}{\rm
d}\eta^{\prime}\ =\int^{t_{2}}_{t_{1}}\frac{{\rm d}t}{a}\
=\int^{a_{2}}_{a_{1}}\frac{{\rm d}a}{a^{2}H(a)}\
=\frac{1}{H_{0}}\int^{a_{2}}_{a_{1}}\frac{{\rm
d}a}{\big{(}\sum_{i}\Omega_{i,0}a^{4-2\epsilon_{i}}\big{)}^{1/2}}\,.$ (3.20)
This is an elliptic integral that does not admit an easy analytic solution in
terms of elementary functions [34, 35]. Fortunately, we will not need a full
analytic solution for (3.20) and we can instead compute the integral
numerically by plugging in the relevant literature values.
Next, we calculate the effective curvature length scale from equations (3.15)
and (3.19),
$\Omega_{\kappa}=\frac{K^{(0)}\kappa}{3H^{2}a^{2}}\ \ \Leftrightarrow\ \
\kappa=\frac{3H^{2}a^{2}\Omega_{\kappa}}{K^{(0)}}\ \ \Leftrightarrow\ \
L=\frac{1}{aH}\sqrt{\left|\frac{K^{(0)}}{3\Omega_{\kappa}}\right|}.$ (3.21)
To get a sense of the largest scale of observable effects in the next chapter,
we will calculate $\eta_{*}$, the conformal time to the surface of last
scattering at $z_{*}\simeq 1091$, $a_{*}\simeq 1/1092$, in terms of the
inherent curvature length scale $L$ of the underlying 3-manifolds 999$L$
coincides with the physical curvature radius at the present time. For earlier
times, the physical curvature radius can be expressed as the invariant spatial
distance $a(t)L$, but this will not be used in this paper.,
$\eta_{*}=L\sqrt{\left|\frac{3\Omega_{\kappa,0}}{K^{(0)}}\right|}\int^{1}_{a_{*}}\frac{1}{\sqrt{\Omega_{\Lambda,0}\
a^{4}+\Omega_{\kappa,0}\ a^{2}+\Omega_{\rm m,0}\ a+\Omega_{R,0}}}{\rm d}a.$
(3.22)
We now evaluate the integral using numerical values from the 2018 Planck
Results [18] in Table 2.
$\Omega_{\Lambda,0}$ | $\Omega_{\kappa,0}$ | $\Omega_{R,0}$ | $\Omega_{\rm m,0}$
---|---|---|---
$0.6889\pm 0.0056$ | $0.0007\pm 0.0019$ | $(9.18\pm 0.17)\cdot 10^{-5}$ | $0.3111\pm 0.0056$
Table 2: Cosmological parameters from [18].
We will use the average values for $\Omega_{\Lambda,0}$, $\Omega_{R,0}$ and
$\Omega_{\rm m,0}$ and we will take the 2-$\sigma$ bounds of
$\Omega_{\kappa,0}$, $0.0007+2\times 0.0019=0.0045$ for negatively curved
geometries and $0.0007-2\times 0.0019=-0.0031$ for positively curved
geometries. The resulting values are shown in Table 3.
Space-Time | $\Omega_{\kappa}$ | $\eta_{*}H_{0}$ | $LH_{0}$ | $\eta_{*}/L$
---|---|---|---|---
$\mathbf{\mathds{R}^{3}}$ | 0 | 3.133 | $\infty$ | 0
$\mathbf{\mathds{H}^{3}}$ | 0.0045 | 3.136 | 17.96 | 0.175
$\mathbf{S^{3}}$ | -0.0031 | 3.138 | 14.91 | 0.210
$\mathbf{\mathds{R}\times\mathds{H}^{2}}$ | 0.0045 | 3.136 | 10.37 | 0.302
$\mathbf{\mathds{R}\times S^{2}}$ | -0.0031 | 3.128 | 8.61 | 0.363
$\widetilde{\text{U}(\mathds{H}^{2})}$ | 0.0045 | 3.136 | 11.59 | 0.271
Nil | 0.0045 | 3.136 | 5.18 | 0.605
Solv | 0.0045 | 3.136 | 10.37 | 0.302
Table 3: Length Scales conforming to the 2-$\sigma$ observational bounds,
$-0.0031\leq\Omega_{\kappa}\leq 0.0045$.
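As a cross-check on Table 3, the flat-case entry $\eta_{*}H_{0}\simeq 3.133$
is reproduced by integrating Eq. (3.22) with the mean Planck values of
Table 2 (a numerical sketch using a composite Simpson rule; variable names are
ours):

```python
import numpy as np

# Mean Planck 2018 density parameters from Table 2 (flat case, Omega_k = 0).
O_L, O_m, O_R, O_k = 0.6889, 0.3111, 9.18e-5, 0.0

a_star = 1.0 / 1092.0  # scale factor at last scattering

# Integrand of Eq. (3.22) in units of 1/H_0, sampled on a fine grid.
a = np.linspace(a_star, 1.0, 20001)  # odd point count for Simpson's rule
f = 1.0 / np.sqrt(O_L * a**4 + O_k * a**2 + O_m * a + O_R)

h = a[1] - a[0]
eta_star = h / 3 * (f[0] + f[-1] + 4 * f[1:-1:2].sum() + 2 * f[2:-2:2].sum())
print(f"eta_* H_0 = {eta_star:.3f}")  # ~3.133, the flat entry of Table 3
```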
This table tells us two important things. Firstly, it tells us that $L$ is
larger than $\eta_{*}$ (by about half an order of magnitude for most
geometries), which means that we can expand complicated expressions in powers
of $\eta_{*}/L$ to obtain approximate results. This will be very useful in
section 5, as we are not able to find exact expressions for all geometries.
Secondly, and perhaps more importantly, the curvature radius $L$ is larger
than the Hubble radius $H_{0}^{-1}$ for all Thurston geometries. This allows
us to substantiate the following assumption: if the spatial sections of our
Universe are well described by a patchwork of Thurston geometries, then the
Hubble sphere – and even more so Earth’s low-redshift surroundings – is very
likely to belong to only a _single_ one of these patches. As a result, we
argue that any detectable effects of anisotropy are similarly likely to be
sourced by a _single_ geometry.
Under this assumption we can start to make predictions about how our Universe
would look if we took any one of the eight Thurston-Perelman geometries as our
local spatial section and our Universe were indeed curved and anisotropic. In
the next section we will discuss two ways in which these effects may become
manifest.
## 4 Distance Measures
Since our current observations put the curvature radius as significantly
larger than the size of the observable universe, we likely stand no chance of
devising a local experiment that is sensitive enough to detect curvature
anisotropy. Rather, we must look for the effects of anisotropy over very long
distances or in very large-scale phenomena. We propose to use photon
trajectories for this purpose, as they are bent by the presence of curvature
and thus directly affected by the anisotropies in the metric. The farther a
photon travels, the more apparent any deflection in its trajectory will be, so
we expect any effects to be magnified for higher redshifts.
Anisotropic curvature will behave like a large lens that deforms the image of
distant objects. The magnitude and shape of this deformation will depend on
the position of the object in the night sky in a way that is unique to each of
our geometries. This pattern of deformations can serve as a fingerprint by
which we can identify a geometry.
In this section, we will derive expressions for these deformations by
considering the angular diameter distance $d_{A}$, which is affected by the
deformations directly, and the luminosity distance $d_{L}$, which is affected
by the net change in apparent luminosity. In the next section, we will present
several figures to visualise this effect for each geometry.
Plotting these distance measures will require us to derive null geodesics for
each of the Thurston space-times. Fortunately, this becomes a very tractable
problem when we transform Eq. (1.2) to conformal time:
$\displaystyle 0={\rm d}s^{2}=-{\rm d}t^{2}+a(t)^{2}{\rm
d}\Sigma_{3}^{2}=a(t)^{2}\Big{(}\\!-{\rm d}\eta^{2}+{\rm
d}\Sigma_{3}^{2}\Big{)}\,,\qquad\big{(}{\rm
d}\Sigma_{3}^{2}=\gamma_{ij,\Sigma}\ {\rm d}x^{i}{\rm d}x^{j}\big{)}\,.$ (4.1)
This equation has the implicit solution ${\rm d}\eta^{2}={\rm
d}\Sigma_{3}^{2}$, 101010 Given that ${\rm d}\Sigma_{3}^{2}$ depends on
spatial variables only, Eq. (4.1) implies that conformal time is a good affine
parameter along geodesics, and can be used to parametrize geodesic distances
on $\Sigma_{3}$. Rigorously speaking this ceases to be the case when
anisotropic expansion rates are considered, in which case $d\Sigma_{3}^{2}$
becomes dynamical (see section 6 for more details). which reduces the problem
to a 3-dimensional one. That is to say, if we can find some spatial geodesic
$x^{i}(\lambda)$ parameterised by proper distance $\lambda$ on $\Sigma_{3}$,
then $(\eta,x^{i}(\eta))$ is a geodesic in the full Thurston space-time
parameterised by conformal time $\eta$. Finding these spatial geodesics is
further simplified by the fact that the manifolds in Thurston’s theorem are
homogeneous and, as explained in Section 3, we are at liberty to place the
observer at the origin of the coordinate system and to consider only radial
geodesics. We will leave the derivation of such geodesics to the appendices
and assume for the rest of this section that they are known.
Rigorously speaking the analysis in this section applies to rigid geometries.
To capture the dynamical aspects of various Thurston geometries discussed in
detail in section 6, one would have to suitably modify the analysis of this
section. However, since geometries evolve rather slowly, the results derived
in this section apply to most of the situations of interest.
### 4.1 Distance measures in an isotropic universe
In FLRW space-time, calculating both distance measures is straightforward due
to spherical symmetry on spatial slices.
#### 4.1.1 Angular diameter distance
Angular diameter distance $d_{A}$ can be defined as the ratio of an object’s
physical size $h$ at the time of emission, to the object’s apparent angular
size $\delta\omega$ as viewed from the observer. In essence, it is the answer
to the question, ‘in a flat universe, how far away would an object of a known
size need to be in order to appear as large as it does?’
Due to spatial isotropy, we may choose a spherical coordinate system so that
the object, which we will assume to be spherical itself, 111111The assumption
of sphericity is convenient, but not necessary, for the derivation of Eq.
(4.2). More generally, the source size can be characterised by an arc, see
Figure 1, and we will adopt a more general approach in the anisotropic case.
lies along the equator at coordinate-distance $r$. Again exploiting isotropy,
we can decide to measure the angular diameter along an arc that lies in the
$(r,\theta)$-plane, as in Figure 1.
Figure 1: Angular Diameter Distance in FLRW space-time
Assuming that the angular size $\delta\omega$ of the object is small, we may
write
$d_{A,\text{
FLRW}}=\frac{h}{\delta\omega}=\frac{a\,r\,\tan(\delta\omega)}{\delta\omega}\approx\frac{a\,r\,\delta\omega}{\delta\omega}=ar=aS_{\kappa}(\chi)\,,$
(4.2)
where the $a$ in this equation and in the rest of this section is taken at the
time of emission of the photon.
#### 4.1.2 Luminosity distance
Luminosity distance $d_{L}$ can be defined through a relationship between the
intrinsic luminosity $\mathcal{L}$ (in Js${}^{\text{-}1}$) of a distant source
and the observed flux $f$ (in Jm${}^{\text{-}2}$s${}^{\text{-}1}$) as measured
by an observer on Earth,
$f=\frac{\mathcal{L}}{4\pi d_{L}^{2}}\hskip
20.00003pt\longleftrightarrow\hskip
20.00003ptd_{L}=\sqrt{\frac{\mathcal{L}}{4\pi f}}\,.$ (4.3)
In essence, it answers a question similar to the angular diameter distance:
‘in a flat universe, how far away would an object of known luminosity need to
be to appear as bright as it does?’
In FLRW space-time, the calculation is again fairly straightforward. Since the
space-time is homogeneous and isotropic, light emitted from a source spreads
evenly in all directions. The flux $f$ measured by an observer at proper
distance $\chi$ is simply the luminosity $\mathcal{L}$ of the source divided
by the area $4\pi S_{\kappa}(\chi)^{2}$ of a hypersphere with the same radius.
We also need to take into account the expansion of the Universe, which will
multiply the flux by a factor of $a$ due to red-shifting of photons and
another factor of $a$ due to the expansion slowing down the rate of incoming
photons,
$f=\frac{\mathcal{L}a^{2}}{A_{\text{hypersphere}}}=\frac{\mathcal{L}a^{2}}{4\pi
S_{\kappa}(\chi)^{2}}\,.$ (4.4)
It follows directly from (4.3) and (4.4) that
$d_{L,\text{ FLRW}}=\frac{S_{\kappa}(\chi)}{a}\,.$ (4.5)
#### 4.1.3 Etherington’s reciprocity theorem
By comparing equations (4.2) and (4.5) we see that there is a relationship
between angular diameter distance and luminosity distance known as
Etherington’s reciprocity theorem. Sometimes also referred to as the distance
duality relation, it states simply that
$d_{L}=(1+z)^{2}d_{A}=\frac{d_{A}}{a^{2}}\,.$ (4.6)
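In the spatially flat case, where $S_{\kappa}(\chi)=\chi$, the duality relation (4.6) can be verified directly from Eqs. (4.2) and (4.5). A small sketch with illustrative names, using $a=1/(1+z)$ at emission:

```python
def distance_duality_residual(a_emit, chi):
    """Check Eq. (4.6), d_L = (1+z)^2 d_A, in flat FLRW where S_kappa(chi) = chi."""
    d_A = a_emit * chi        # Eq. (4.2)
    d_L = chi / a_emit        # Eq. (4.5)
    z = 1.0 / a_emit - 1.0    # redshift corresponding to the emission scale factor
    return abs(d_L - (1.0 + z) ** 2 * d_A)
```

The residual vanishes identically, reflecting that Etherington’s theorem here is an algebraic identity between Eqs. (4.2) and (4.5).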
### 4.2 Distance measures in an anisotropic universe
In an anisotropic setting, spherical symmetry is broken. This means that the
angular diameter distance and luminosity distance become explicit functions
of the direction in which an observer is looking. Furthermore, if axial
symmetry is also broken along this direction, the angular diameter distance
will additionally depend on the orientation of the arc along which the
observer measures the object’s angular size. In this more general setting,
the two distance measures become quantities assigned to a particular choice
of arc or solid angle at a given proper distance.
#### 4.2.1 Angular diameter distance
In what follows we first exploit homogeneity of the Thurston-Perelman
geometries to put the observer at the origin of the coordinate system. We then
introduce some additional notation to parameterize arcs on the unit sphere
around the observer. For a given direction $\hat{P}=(P_{x},P_{y},P_{z})$ on
the unit sphere in which we point a telescope, we can define two orthogonal
unit vectors,
$\displaystyle\hat{\theta}$
$\displaystyle=\left(\frac{P_{x}P_{z}}{\sqrt{1-P_{z}^{2}}},\frac{P_{y}P_{z}}{\sqrt{1-P_{z}^{2}}},-\sqrt{1-P_{z}^{2}}\right)$
(4.7) $\displaystyle\hat{\phi}$
$\displaystyle=\left(-\frac{P_{y}}{\sqrt{1-P_{z}^{2}}},\frac{P_{x}}{\sqrt{1-P_{z}^{2}}},0\right)$
(4.8)
so that the triple ($\hat{P}$, $\hat{\theta}$, $\hat{\phi}$) is an orthonormal
basis on $\mathds{R}^{3}$. We can now define any arc $\mathcal{A}$ through
$\hat{P}$ by picking two angles, $\zeta$ and $\delta\omega$, and then writing
$\mathcal{A}({\hat{P}},\zeta,\delta\omega):=\bigl\{\mathcal{G}({\hat{P}},\zeta,s)\,|\,\zeta={\rm fixed}\;\&\;s\in[-\delta\omega/2,\delta\omega/2)\bigr\}\,,$ (4.9)
where $\mathcal{G}$ characterises a point on the unit sphere (even though the
coordinate frame (4.7)-(4.8) is singular when $P_{z}=\pm 1$, it is more
convenient for our purposes than a nonsingular frame one would obtain by e.g.
a Gram-Schmidt procedure; the results of this section are not affected by the
choice of approach) through
$\mathcal{G}({\hat{P}},\zeta,s)=\cos(s)\hat{P}+\sin(s)\left(\cos(\zeta)\hat{\theta}+\sin(\zeta)\hat{\phi}\right).$
(4.10)
Here $\delta\omega\in(0,2\pi)$ determines the angular size of the arc and the
parameter $\zeta$ specifies the orientation of the arc around the vector
$\hat{P}$. For instance, if $\hat{P}$ lies along the equator, then setting
$\zeta=0$ means the arc lies orthogonal to the equator, while if we set
$\zeta=\pi/2$ then it lies parallel to the equator.
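A short sketch of the frame (4.7)-(4.8) and the arc parameterisation (4.10) (function names illustrative); the properties one would check numerically are that the frame is orthonormal and that $\mathcal{G}$ stays on the unit sphere:

```python
import math

def frame(P):
    """Unit vectors theta_hat, phi_hat of Eqs. (4.7)-(4.8);
    singular at P_z = +/-1, as noted in the text."""
    Px, Py, Pz = P
    w = math.sqrt(1.0 - Pz * Pz)
    theta_hat = (Px * Pz / w, Py * Pz / w, -w)
    phi_hat = (-Py / w, Px / w, 0.0)
    return theta_hat, phi_hat

def G(P, zeta, s):
    """Point on the unit sphere along the arc through P, Eq. (4.10)."""
    th, ph = frame(P)
    return tuple(
        math.cos(s) * P[i]
        + math.sin(s) * (math.cos(zeta) * th[i] + math.sin(zeta) * ph[i])
        for i in range(3)
    )
```

At $s=0$ the arc passes through $\hat{P}$ itself, and varying $\zeta$ rotates the great circle traced by $\mathcal{G}$ around $\hat{P}$.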
Now suppose that we have chosen $\hat{P}$ and $\mathcal{A}$ so that our
telescope points at a distant object and so that $\delta\omega$ coincides with
the object’s apparent (angular) size in a given direction. In order to derive
the angular diameter distance, we then want to know how the angular size
$\delta\omega$ relates to the object’s proper size $h$ along this direction.
In order to do this, we need to trace the rays of incoming light back to their
source along this arc. However, since we can no longer assume isotropy, we
cannot simply extend the initial directions specified by $\mathcal{A}$ to a
set of straight lines. Instead, we must account for the fact that the
curvature of space may curve photon trajectories, as shown in Figure 2.
Figure 2: Angular Diameter Distance in an anisotropic space-time.
This means we must solve the geodesic equation for light-like (radial)
geodesics with an arbitrary initial direction, which we will do explicitly in
the appendices. For now, we assume that we know a family of geodesics
$\{\gamma(\hat{P},\lambda)\}$, characterised by the initial direction
$\hat{P}$ and dependent on the geodesic proper distance $\lambda$ along the
spatial 3-manifold $\Sigma_{3}$ (or, equivalently, conformal time, see
(4.1)). The geodesics within this set additionally satisfy
$\gamma(\hat{P},0)=0\,,\qquad\frac{{\rm d}\gamma(\hat{P},\lambda)}{{\rm d}\lambda}\Big{|}_{\lambda=0}=\hat{P}\,.$ (4.11)
For each choice of $\hat{P}$ and $\mathcal{A}$, we can define a 1-parameter
sub-family $\gamma_{\mathcal{A}}$ as
$\gamma_{\mathcal{A}}(s,\lambda)=\gamma\left(\mathcal{G}(\hat{P},\zeta,s),\lambda\right)\,,$
(4.12)
where $\zeta$ is the (fixed) orientation of the arc and the angle
$s\in[-\delta\omega/2,\delta\omega/2]$ parameterizes the internal angle
between the initial direction of a given geodesic and the midpoint of the arc
$\hat{P}$. By construction, the set of initial directions of this sub-family
corresponds exactly to the arc $\mathcal{A}$, that is to say,
$\left\{\frac{{\rm d}\gamma_{\mathcal{A}}(s,\lambda)}{{\rm d}\lambda}\,\Bigg|\,s\in[-\delta\omega/2,\delta\omega/2]\;\&\;\lambda=0\right\}=\mathcal{A}\,.$ (4.13)
If we now fix $\lambda$ to the proper distance $\lambda_{0}$ of a faraway
object, the angle $s$ traces a distant arc between opposite sides of this
object as we let $s$ vary from $-\delta\omega/2$ to
$\delta\omega/2$. Under the assumption that $\delta\omega$ is small compared
to unity and $\lambda_{0}$ is small compared to $L$, the arc length $\ell$ of
this distant arc multiplied by $a$ coincides with the proper size $h$ of the
object. This means we can write,
$h\simeq a\ell=a\int^{\delta\omega/2}_{-\delta\omega/2}{\rm
d}s\sqrt{\gamma_{ij,\Sigma}\,\frac{{\rm
d}\gamma^{i}_{\mathcal{A}}(s,\lambda_{0})}{{\rm d}s}\,\frac{{\rm
d}\gamma^{j}_{\mathcal{A}}(s,\lambda_{0})}{{\rm d}s}}\,,$ (4.14)
where $a$ in this equation is again understood to be the scale factor at the
time of emission of the photons. Hence, for any of the Thurston space-times,
the angular diameter distance of an object visible in the direction $\hat{P}$
that sits at proper distance $\lambda_{0}$ measured along an arc $\mathcal{A}$
of apparent size $\delta\omega$ and orientation $\zeta$ can be expressed as,
$d_{A}(\hat{P},\lambda_{0},\delta\omega,\zeta):=a\frac{\ell}{\delta\omega}=\frac{a}{\delta\omega}\int^{\delta\omega/2}_{-\delta\omega/2}{\rm d}s\sqrt{\gamma_{ij,\Sigma}\,\frac{{\rm d}}{{\rm d}s}\gamma^{i}\left(\mathcal{A}(\hat{P},\zeta,s),\lambda_{0}\right)\,\frac{{\rm d}}{{\rm d}s}\gamma^{j}\left(\mathcal{A}(\hat{P},\zeta,s),\lambda_{0}\right)}\,.$ (4.15)
In the regime where $L\gg\lambda_{0}\gg h$, the arc is small and far enough away
that $\delta\omega$ is small compared to unity, but the curvature effects are
not so strong that small deviations from the initial angle will lead to
extreme differences at distance $\lambda_{0}$. In this regime, the integrand
in the previous equation is approximately the same for all
$s\in[-\delta\omega/2,\delta\omega/2]$ and so can be approximated as constant.
This means we can approximate the expressions for $\ell$ and $d_{A}$ to a high
degree of accuracy as,
$\ell\simeq\delta\omega\sqrt{\gamma_{ij,\Sigma}\,\frac{{\rm d}}{{\rm
d}s}\gamma^{i}\left(\mathcal{A}(\hat{P},\zeta,s),\lambda_{0}\right)\,\frac{{\rm
d}}{{\rm
d}s}\gamma^{j}\left(\mathcal{A}(\hat{P},\zeta,s),\lambda_{0}\right)}\Bigg{|}_{s=0},$
(4.16)
such that
$\boxed{d_{A}(\hat{P},\lambda_{0},\zeta)\simeq
a\sqrt{\gamma_{ij,\Sigma}\,\frac{{\rm d}}{{\rm
d}s}\gamma^{i}\left(\mathcal{A}(\hat{P},\zeta,0),\lambda_{0}\right)\,\frac{{\rm
d}}{{\rm
d}s}\gamma^{j}\left(\mathcal{A}(\hat{P},\zeta,0),\lambda_{0}\right)}\Bigg{|}_{s=0}}\,,$
(4.17)
and the expression is no longer dependent on $\delta\omega$; this is the
anisotropic equivalent of approximating $\ell=r\delta\omega$ for sufficiently
small $\delta\omega$. Since $\delta\omega$ drops out, we are left with a more
manageable expression that depends solely on the direction $\hat{P}$, the
distance $\lambda_{0}$, and the orientation $\zeta$ of the arc.
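As a consistency sketch of this small-$\delta\omega$ limit, one can evaluate the arc integral (4.15) numerically in the flat isotropic case, where geodesics are straight lines $\gamma(\hat{P},\lambda)=\lambda\hat{P}$ and $\gamma_{ij,\Sigma}=\delta_{ij}$, and compare against $d_{A}=a\lambda_{0}$ from Eq. (4.17). All names here are illustrative; ${\rm d}\gamma/{\rm d}s$ is taken by central differences and the integral by the trapezoidal rule:

```python
import math

def frame(P):  # Eqs. (4.7)-(4.8)
    Px, Py, Pz = P
    w = math.sqrt(1.0 - Pz * Pz)
    return (Px * Pz / w, Py * Pz / w, -w), (-Py / w, Px / w, 0.0)

def G(P, zeta, s):  # Eq. (4.10)
    th, ph = frame(P)
    return tuple(math.cos(s) * P[i]
                 + math.sin(s) * (math.cos(zeta) * th[i] + math.sin(zeta) * ph[i])
                 for i in range(3))

def d_A_flat(a, lam0, P, zeta, d_omega, n=100, eps=1e-6):
    """Eq. (4.15) in flat space: gamma(P, lam) = lam * P, gamma_ij = delta_ij."""
    total = 0.0
    ds = d_omega / n
    for k in range(n + 1):
        s = -d_omega / 2 + k * ds
        p_hi, p_lo = G(P, zeta, s + eps), G(P, zeta, s - eps)
        # d gamma^i / ds at fixed lam0, by central differences
        deriv = [lam0 * (p_hi[i] - p_lo[i]) / (2 * eps) for i in range(3)]
        integrand = math.sqrt(sum(d * d for d in deriv))
        total += (0.5 if k in (0, n) else 1.0) * integrand * ds
    return a * total / d_omega
```

In this flat limit the integrand equals $\lambda_{0}$ exactly, so the quadrature should reproduce $a\lambda_{0}$ to high accuracy, mirroring the isotropic check in Eq. (4.18).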
Lastly, we check the isotropic limit of this derivation. In FLRW space-time,
equation (4.15) reduces to a familiar form given in Eq. (4.2),
$d_{A,\text{FLRW}}=\frac{h}{\delta\omega}\simeq\frac{a}{\delta\omega}\int^{\delta\omega/2}_{-\delta\omega/2}{\rm d}s\sqrt{r^{2}\big(\sin^{2}(s)+\cos^{2}(s)\big)}=\frac{ar\delta\omega}{\delta\omega}=aS_{\kappa}(\chi)\,.$ (4.18)
#### 4.2.2 Luminosity distance
To study the general case for luminosity distance, consider a beam of light
emitted over a small solid angle $\delta\Omega$ from the source, which we will
approximate as point-like. If the source radiates equally in every direction,
then the power through this solid angle can be written as
$\mathcal{L}_{\text{Beam}}=\mathcal{L}_{\text{Source}}\times\frac{\delta\Omega}{4\pi}\,.$
(4.19)
Now suppose that this beam of light terminates on a photosensitive plate of a
detector (e.g. a telescope) at some proper distance $\lambda$ as in Figure 3.
Conservation of energy implies that the power flowing through this plate must
be equal to the power flowing through the initial solid angle $\delta\Omega$,
up to powers of $a$ to compensate for the expansion of the universe.
(Note that we make the additional implicit assumption here that $\lambda$ is
small compared to $L$, so that no extreme lensing effects occur that might
cause beams emitted in disparate directions to intersect.)
Figure 3: Luminosity distance in an anisotropic space-time
The flux through this plate can thus be written as
$f_{\text{Detector}}=\frac{\mathcal{L}_{\text{Beam}}a^{2}}{\delta
A}=\frac{\mathcal{L}_{\text{Source}}}{4\pi}\times\frac{\delta\Omega
a^{2}}{\delta A}\,,$ (4.20)
where $\delta A$ is the beam’s cross-sectional area at distance $\lambda$.
Comparing this to the definition of $d_{L}$ in equation (4.3), we can write a
general (geometric) expression for the luminosity distance as,
$d_{L}=\frac{1}{a}\sqrt{\frac{\delta A}{\delta\Omega}}\,.$ (4.21)
Following the approach in section 4.2.1, we use homogeneity of spatial
sections to place the source at the origin and pick $\hat{\overline{P}}$ to be
the initial direction of an emitted photon. For convenience, we will assume
that the solid angle $\delta\Omega$ around this initial direction is convex;
the situation is sketched in Figure 4.
Figure 4: Luminosity Distance in an anisotropic space-time.
This assumption of convexity means we can describe this solid angle
$\delta\Omega$ by specifying the angular distance $\delta\omega(\zeta)/2$ from
the vector $\hat{\overline{P}}$ to the edge of $\delta\Omega$ along every
direction $\zeta$. This means we can write
$\delta\Omega(\hat{P},\delta\omega(\zeta)):=\bigl\{\mathcal{G}({\hat{P}},\zeta,s)\,|\,\zeta\in[0,2\pi)\;\&\;s\in[0,\delta\omega(\zeta)/2]\bigr\}\,,$ (4.22)
where $\mathcal{G}$ is the same as in (4.10). As in the previous subsection,
for each $\delta\Omega$ described this way, we can now find a 2-parameter
family of geodesics
$\gamma_{\delta\Omega}(\zeta,s,\lambda)=\gamma\left(\mathcal{G}(\hat{\overline{P}},\zeta,s),\lambda\right),$
(4.23)
characterized by $\zeta$ and $s\in[0,\delta\omega(\zeta)/2]$, that terminates
on the plate of the detector at $\lambda=\lambda_{0}$. From here, it is not
difficult to write out the solid angle $\delta\Omega$ and the area $\delta A$
in integral form:
$\displaystyle\delta A$
$\displaystyle=\frac{1}{2}\int_{0}^{2\pi}\left(\frac{\ell(\zeta)}{2}\right)^{2}{\rm
d}\zeta=\frac{1}{2}\int_{0}^{2\pi}\left(\int^{\delta\omega(\zeta)/2}_{0}{\rm
d}s\sqrt{\gamma_{ij,\Sigma}\,\frac{{\rm
d}\gamma^{i}_{\delta\Omega}(\zeta,s,\lambda_{0})}{{\rm d}s}\,\frac{{\rm
d}\gamma^{j}_{\delta\Omega}(\zeta,s,\lambda_{0})}{{\rm d}s}}\right)^{2}{\rm
d}\zeta$ (4.24) $\displaystyle\delta\Omega$
$\displaystyle=\frac{1}{2}\int_{0}^{2\pi}\left(\frac{\delta\omega(\zeta)}{2}\right)^{2}{\rm
d}\zeta.$ (4.25)
If we divide the first integral by the second, we get the ratio we are looking
for. However, there is one subtlety to be addressed. We have derived an
expression for the luminosity distance dependent on $\hat{\overline{P}}$, the
direction in which the luminous source emitted the light that reaches the
observer in the local inertial frame of the star. Observationally, we are more
interested in an expression dependent on $\hat{P}$, see figure 5, the
direction in which an observer views a luminous source as measured in the
local inertial frame of the observer. So what remains to be done is to find
some way of connecting these vectors and translating them from one frame to
another.
Figure 5: Relationship between $\hat{P}$ and $\hat{\bar{P}}$.
Homogeneity of spatial slices and reversibility of solutions to the geodesic
equation make this relatively straightforward. In the frame of the observer,
light is incoming along some direction $\hat{P}$ and, since geodesic distance
is invariant, we can simply find the reverse geodesic
$\gamma(\hat{P},\lambda)$ from the observer back to the source, where the
source sits at $\lambda=\lambda_{0}$. We can find the initial direction of the
emitted photons by taking a derivative and reversing the sign (see figure 5):
$\hat{\overline{P}}_{o}=-\frac{{\rm d}\gamma(\hat{P}_{o},\lambda)}{{\rm
d}\lambda}\Big{|}_{\lambda=\lambda_{0}}\,.$ (4.26)
However, this is $\hat{\overline{P}}_{o}$ expressed in the coordinates of the
observer, not those of the source. To obtain $\hat{\overline{P}}_{s}$ in the
coordinates of the source, we must construct the transformation $T$ from the
frame of the observer to the frame of the source and then apply the tangent
map of this transformation, $dT$, to the equation above,
$\boxed{\hat{\overline{P}}_{s}(\hat{P}_{o}):={\rm d}T\left(-\frac{{\rm
d}\gamma(\hat{P}_{o},\lambda)}{{\rm
d}\lambda}\Big{|}_{\lambda=\lambda_{0}}\right)}\,.$ (4.27)
We will work out the Solv geometry case explicitly. Suppose that we have two
local inertial frames, $O$ and $O^{\prime}$, and the second frame has its
origin at $(a,b,c)$ with respect to the coordinates of the first, then we can
express the metric of both frames in terms of the coordinates of the first:
$\displaystyle{\rm d}s^{2}$ $\displaystyle={\rm e}^{2z/L}{\rm d}x^{2}+{\rm
e}^{-2z/L}{\rm d}y^{2}+{\rm d}z^{2}$ (4.28) $\displaystyle{\rm d}s^{\prime 2}$
$\displaystyle={\rm e}^{2z^{\prime}/L}{\rm d}x^{\prime 2}+{\rm
e}^{-2z^{\prime}/L}{\rm d}y^{\prime 2}+{\rm d}z^{\prime 2}$ (4.29)
$\displaystyle={\rm e}^{2(z+c)/L}{\rm d}x^{\prime 2}+{\rm e}^{-2(z+c)/L}{\rm
d}y^{\prime 2}+{\rm d}z^{\prime 2}$ (4.30)
Since both frames must agree about the geometry of the spatial section, we can
equate $ds^{2}$ and $ds^{\prime 2}$ and then easily relate the two frames by
the transformation
$\displaystyle T(x,y,z)$
$\displaystyle=(xe^{c/L}+a,ye^{-c/L}+b,z+c)=(x^{\prime},y^{\prime},z^{\prime})$
(4.31)
Therefore, for the Solv geometry this means that we can relate the direction
of emission of light from the source, $\hat{\overline{P}}_{s}$, to the
direction in which the observer receives this light, $\hat{P}_{o}$, by
$\hat{\overline{P}}_{s}(\hat{P}_{o})=-\left(\frac{{\rm
d}\gamma^{1}(\hat{P}_{o},\lambda)}{{\rm d}\lambda}e^{c/L},\,\frac{{\rm
d}\gamma^{2}(\hat{P}_{o},\lambda)}{{\rm d}\lambda}e^{-c/L},\,\frac{{\rm
d}\gamma^{3}(\hat{P}_{o},\lambda)}{{\rm
d}\lambda}\right)_{\lambda=\lambda_{0}}\,.$ (4.32)
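A small sketch of the Solv frame change (4.31) and its tangent map as used in (4.32) (names illustrative; `offset` is the origin $(a,b,c)$ of the second frame in the coordinates of the first):

```python
import math

def T(point, offset, L):
    """Solv frame transformation of Eq. (4.31):
    (x, y, z) -> (x e^{c/L} + a, y e^{-c/L} + b, z + c)."""
    x, y, z = point
    a, b, c = offset
    return (x * math.exp(c / L) + a, y * math.exp(-c / L) + b, z + c)

def dT(v, c, L):
    """Tangent map of T: rescales the x- and y-components of a
    direction vector, as applied to Eq. (4.32)."""
    return (v[0] * math.exp(c / L), v[1] * math.exp(-c / L), v[2])
```

Since $T$ is affine, its tangent map is just its constant linear part, so ${\rm d}T$ acts on a direction vector independently of the base point.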
As a result, the only change that we need to make to equation (4.24) is to
change the $\hat{P}$ that is implicit therein into $\hat{\overline{P}}$. So,
for any of the Thurston space-times, we can express the luminosity distance of
an object visible in the direction $\hat{P}$ that sits at proper distance
$\lambda_{0}$ as
$\left(d_{L}(\hat{P},\lambda_{0},\delta\Omega)\right)^{2}\ :=\
\frac{1}{a^{2}}\frac{\delta
A}{\delta\Omega}=\frac{1}{a^{2}}\frac{\int_{0}^{2\pi}\left(\int^{\delta\omega(\zeta)/2}_{0}{\rm
d}s\sqrt{\gamma_{ij,\Sigma}\,\frac{{\rm d}}{{\rm
d}s}\gamma^{i}\left(\mathcal{A}(\hat{\overline{P}},\zeta,s),\lambda_{0}\right)\,\frac{{\rm
d}}{{\rm
d}s}\gamma^{j}\left(\mathcal{A}(\hat{\overline{P}},\zeta,s),\lambda_{0}\right)}\right)^{2}{\rm
d}\zeta}{\int_{0}^{2\pi}\left(\frac{\delta\omega(\zeta)}{2}\right)^{2}{\rm
d}\zeta}\,.$ (4.33)
This expression is not immediately useful, though, as we do not know a priori
what the solid angle $\delta\Omega$ looks like. One would have to define
the shape of the detector plate by setting $\ell(\zeta)$ and then find
$\delta\omega(\zeta)$ so that the family $\gamma_{\delta\Omega}$ maps
$\delta\Omega$ exactly onto $\delta A$. This is, however, a hard problem to
solve in general.
A more tractable approach can be taken by restricting to the regime where
$L\gg\lambda_{0}\gg\sqrt{\delta A}$, as we did for the angular diameter distance.
In this regime, the detector is small and far away enough from the source that
$\delta\Omega$ is small compared to unity, but the curvature effects are not
so strong that small deviations from the initial angle will lead to extreme
differences at distance $\lambda_{0}$. We can then make use of the
approximation in (4.16) to make the following simplification,
$\displaystyle\delta A$
$\displaystyle\simeq\frac{1}{8}\int_{0}^{2\pi}\ell^{2}(\zeta){\rm d}\zeta$
(4.34) $\displaystyle\delta\Omega$
$\displaystyle\simeq\frac{1}{8}\int_{0}^{2\pi}\frac{\ell^{2}(\zeta)}{\Big{[}\gamma_{ij,\Sigma}\,\frac{{\rm
d}}{{\rm
d}s}\gamma^{i}\left(\mathcal{A}(\hat{\overline{P}},\zeta,s),\lambda_{0}\right)\frac{{\rm
d}}{{\rm
d}s}\gamma^{j}\left(\mathcal{A}(\hat{\overline{P}},\zeta,s),\lambda_{0}\right)\Big{]}\Big{|}_{s=0}}\,{\rm
d}\zeta\,.$ (4.35)
This allows us to express the luminosity distance directly as a function of
the shape of the detector without making any reference to $\delta\omega$.
Within this same regime, the incoming photon flux is approximately constant
across the detector plate. Hence we can make the additional simplifying
assumption that the detector is disk-shaped, i.e. $\ell(\zeta)=\ell$, as it is
the area of the detector plate that matters at first order; the effects of its
shape only contribute at higher order. This means that the
dependence on $\ell$ also drops out of the expression entirely and $d_{L}$
becomes a purely geometric quantity depending only on $\hat{P}$ and
$\lambda_{0}$,
$\boxed{\left(d_{L}(\hat{P},\lambda_{0})\right)^{2}\simeq\frac{2\pi}{a^{2}}\left(\int_{0}^{2\pi}\frac{1}{\Big{[}\gamma_{ij,\Sigma}\,\frac{{\rm
d}}{{\rm
d}s}\gamma^{i}\left(\mathcal{A}(\hat{\overline{P}},\zeta,s),\lambda_{0}\right)\frac{{\rm
d}}{{\rm
d}s}\gamma^{j}\left(\mathcal{A}(\hat{\overline{P}},\zeta,s),\lambda_{0}\right)\Big{]}\Big{|}_{s=0}}{\rm
d}\zeta\right)^{-1}}\,.$ (4.36)
Lastly, we check the isotropic limit of our derivation. In FLRW space-time,
equation (4.33) reduces to a familiar form,
$(d_{L,\text{ FLRW}})^{2}=\frac{1}{a^{2}}\frac{\delta
A}{\delta\Omega}=\frac{1}{a^{2}}\frac{\int_{0}^{2\pi}\left(\int^{\delta\omega/2}_{0}{\rm
d}s\sqrt{r^{2}\left(\sin^{2}(s)+\cos^{2}(s)\right)}\right)^{2}{\rm
d}\zeta}{\int_{0}^{2\pi}\left(\frac{\delta\omega(\zeta)}{2}\right)^{2}{\rm
d}\zeta}=\frac{1}{a^{2}}\frac{2\pi
r^{2}\left(\tfrac{\delta\omega}{2}\right)^{2}}{2\pi\left(\tfrac{\delta\omega}{2}\right)^{2}}=\frac{S_{\kappa}(\chi)^{2}}{a^{2}}\,,\quad$
(4.37)
which agrees with Eq. (4.5).
#### 4.2.3 The anisotropic reciprocity theorem
The statement of Etherington’s theorem can be amended to hold in a more
general anisotropic context. With the notational machinery we have developed
in the previous section, this is not a difficult task. We will work explicitly
in the $L\gg\lambda\gg h$ and $L\gg\lambda\gg\sqrt{\delta A}$ regimes so that
we can start from equation (4.36). Next, we recognise the term in the denominator
of the integrand in this equation as the term on the right-hand side of (4.17),
and we write
$\displaystyle\left(d_{L}(\hat{P},\lambda)\right)^{-2}$
$\displaystyle=\frac{a^{2}}{2\pi}\int_{0}^{2\pi}\frac{1}{\gamma_{ij,\Sigma}\,\frac{{\rm
d}}{{\rm
d}s}\gamma^{i}\left(\mathcal{A}(\hat{\overline{P}},\zeta,s),\lambda\right)\frac{{\rm
d}}{{\rm
d}s}\gamma^{j}\left(\mathcal{A}(\hat{\overline{P}},\zeta,s),\lambda\right)\Big{|}_{s=0}}{\rm
d}\zeta$ (4.38)
$\displaystyle=\frac{a^{2}}{2\pi}\int_{0}^{2\pi}\frac{a^{2}}{\left(d_{A}(\hat{\overline{P}},\lambda,\zeta)\right)^{2}}{\rm
d}\zeta$ (4.39)
$\displaystyle\Longrightarrow\;\boxed{\frac{1}{d_{L}^{2}(\hat{P},\lambda)}=\frac{a^{4}}{2\pi}\int_{0}^{2\pi}\frac{1}{d_{A}^{2}(\hat{\overline{P}},\lambda,\zeta)}{\rm
d}\zeta\,}$ (4.40)
In the isotropic case where $d_{A}$ is not dependent on $\zeta$, this of
course reduces to the familiar isotropic form of Etherington’s theorem in
(4.6). Its generalization to anisotropic spaces in Eq. (4.40) is one of the
principal results of this paper.
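The reciprocity relation (4.40) lends itself to a direct numerical check: averaging $d_{A}^{-2}$ over the orientation $\zeta$ should return $d_{L}^{-2}/a^{4}$. A sketch with illustrative names, using a simple Riemann sum over $\zeta$; for a $\zeta$-independent $d_{A}$ it reproduces the isotropic relation $d_{L}=d_{A}/a^{2}$ of Eq. (4.6):

```python
import math

def d_L_from_d_A(a, d_A_of_zeta, n=1000):
    """Eq. (4.40): 1/d_L^2 = (a^4 / 2 pi) * integral over zeta of d_A(zeta)^{-2},
    evaluated with a midpoint Riemann sum over [0, 2 pi)."""
    integral = sum(
        1.0 / d_A_of_zeta(2 * math.pi * (k + 0.5) / n) ** 2 for k in range(n)
    ) * (2 * math.pi / n)
    return 1.0 / math.sqrt(a ** 4 / (2 * math.pi) * integral)
```

When $d_{A}$ does depend on $\zeta$, the relation shows that the luminosity distance is controlled by the orientation-averaged inverse-square angular diameter distance rather than by any single arc.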
## 5 Distance Measures Visualised
With the machinery we have developed in the previous section, we are only a
small step away from visualizing the effect of large-scale curvature on
angular diameter distance and luminosity distance. To show the maximal
potential extent of these effects, we have opted to generate plots for the
largest redshift that we can conceivably optically measure, $z_{*}\simeq
1091$, or the redshift at the time of recombination.
The plots in this section were generated by taking the expressions for
$d_{A}$ and $d_{L}$, (4.17) and (4.36), from the previous section and setting
$\lambda_{0}$ to $\eta_{*}$ as shown in Table 3. (Mathematica notebooks are
available upon request.) Where necessary for computational speed,
expressions were expanded to order $L^{-6}$, or, equivalently, $\kappa^{3}$.
The resulting length scales are plotted on the $(\phi,\theta)$-plane relative
to the flat scenario. We have used red to signify a distance that is shorter
than the spatially flat case and vice versa for blue.
### 5.1 Angular diameter distance
The figures below show the angular diameter distance relative to a flat
geometry for a large triangle whose far side lies at a redshift of $z_{*}\simeq
1091$. Two bars are drawn for each point on the figure: one representing the
direction in which the angular diameter distance is smallest and one the
direction in which it is largest.
The length of the bar represents the (absolute) magnitude of the effect, and
the color represents magnitude and sign: the bar is colored red if the angular
diameter distance is shorter than in the flat case and vice versa for blue.
Thus a long red bar indicates that the angular diameter distance is much
smaller than in the flat case (relative to other points on the figure) when
measured along that direction, while a short blue bar indicates that the
distance is a little larger than in the flat case when measured along that
direction. These graphs can
also be read as indicating the axes along which objects would be maximally
observationally deformed by anisotropic curvature compared to a flat scenario.
Note that every figure is plotted on a separate color gradient, so the scale
varies between plots. The Nil geometry shows the strongest effect on any one
single arc, followed by the other anisotropic geometries and then the
isotropic geometries.
Figure 6: $d_{A}$ for the $\mathds{H}^{3}$ geometry.
Figure 7: $d_{A}$ for the $S^{3}$ geometry.
As one would expect in the isotropic case, the angular diameter distance shows
no dependence on the direction or orientation of the arc for the
$\mathds{H}^{3}$ and $S^{3}$ geometries. In the hyperbolic case, a given solid
angle in Earth’s night sky corresponds to a greater physical area at a given
distance from the origin than in a flat model. This area will be populated by
proportionally more objects, each of which will appear smaller and thus seem
to be further away (blue), see figure 6. The opposite is true of the spherical
case: objects will appear larger and therefore closer (red), because a
smaller cross-section of space gets projected onto the same solid angle, see
figure 7.
Figure 8: $d_{A}$ for the $\mathds{R}\times\mathds{H}^{2}$ geometry.
Figure 9: $d_{A}$ for the $\mathds{R}\times S^{2}$ geometry.
The $\mathds{R}\times\mathds{H}^{2}$ and $\mathds{R}\times S^{2}$ geometries
show the first signs of anisotropy. We see again that the hyperbolic geometry
tends to make objects appear further and spherical geometry makes them appear
closer. While axial symmetry around the $z$-axis is preserved, we see that
arcs in different directions and orientations are affected differently.
Arcs along the $\phi$-direction are maximally stretched or compressed, while
arcs along the $\theta$-direction are unaffected. The effect is greatest at
the equator, which indeed is the plane in which $\mathds{H}^{2}$ and $S^{2}$
lie, and vanishes as we rotate the arc towards the poles. This pattern
exactly matches the set-up of the spatial sections, which are flat in the
$z$-direction and curved along the $(x,y)$-plane.
Figure 10: $d_{A}$ for the $\widetilde{\text{U}(\mathds{H}^{2}})$ geometry.
Figure 11: $d_{A}$ for the Nil geometry.
The figure for the $\widetilde{\text{U}(\mathds{H}^{2})}$ geometry bears some
resemblance to that of the $\mathds{R}\times\mathds{H}^{2}$ geometry in that
arcs along the equator are contracted. This is not entirely surprising, as
both geometries use the hyperbolic 2-plane $\mathds{H}^{2}$ as part of their
definition. There are some major differences, however.
Most notably, although rotational symmetry around the $z$-axis is preserved,
the arcs that are maximally contracted no longer lie along the
$\phi$-direction but are tilted slightly out of the $(x,y)$-plane. This means
that we lose mirror symmetry along the cardinal directions, which was present
for the previous geometries. Additionally, we see that there is some slight
stretching happening just off from the $\theta$-direction, and objects also
become stretched in the $\phi$-direction towards the poles.
Nil looks very similar to a more extreme version of
$\widetilde{\text{U}(\mathds{H}^{2})}$: the overall behaviour is similar
(although mirrored), but the magnitudes are more severe. The scale of the
graph has increased by more than a factor of $2$; the degree of stretching is
more significant than the degree of compression; and the orientation of the
arcs most affected by the anisotropy is tilted out of the cardinal planes to a
greater degree.
Figure 12: $d_{A}$ for the Solv geometry.
The Solv geometry behaves like an inverse of the
$\widetilde{\text{U}(\mathds{H}^{2})}$ geometry: objects are stretched rather
than compressed along approximately the $\phi$-direction at the equator, and
vice versa for the $\theta$-direction. However, there are also significant
differences, as the graph for Solv shows a richer behavior.
The angular diameter distance loses its axial symmetry entirely: the two bars
are oriented differently at each point on the plot. In addition, the chiefly
vertically-oriented arcs are always a deep blue, while the more
horizontally-oriented arcs go from a deep blue at the poles to a deep red at
the equator. This means that objects are very compressed at the poles, whereas
they get deformed significantly at the equator – being elongated in one
direction and flattened in another.
### 5.2 Luminosity distance
The figures below show the luminosity distance relative to a flat geometry at
a redshift of $z_{*}\simeq 1091$. Since this measure does not depend on a
choice of arc, we can use a heat map to show the behavior in greater
resolution. Red again indicates that the distance is shorter than the flat
case and vice versa for blue. Alternatively, these figures can be read as
indicating the relative change of apparent luminosity due to anisotropic
curvature when compared to a flat geometry.
As before, every figure has its own color gradient, and the scale of the
gradient gets larger for each subsequent plot. The growth of the scale in
these plots is typically slower than for the angular diameter distance, as the
image of an object may be very compressed in one direction but slightly
stretched in another; this averages out to a moderate decrease in luminosity
distance. We see that the anisotropic geometries generate ‘hot’ (brighter) and
‘cold’ (darker) spots at the poles relative to the equator.
Figure 13: $d_{L}$ for the $\mathds{H}^{3}$ geometry.
Figure 14: $d_{L}$ for the $S^{3}$ geometry.
The heat maps for the isotropic geometries are again exactly what we expect.
They are constant with respect to $\theta$ and $\phi$, with luminosity
distances being longer (blue) in the hyperbolic geometry and shorter (red) in
the spherical one.
Figure 15: $d_{L}$ for the $\mathds{R}\times\mathds{H}^{2}$ geometry.
Figure 16: $d_{L}$ for the $\mathds{R}\times S^{2}$ geometry.
The heat maps for the $\mathds{R}\times\mathds{H}^{2}$ and $\mathds{R}\times
S^{2}$ geometries display the same behaviour as their barred counterparts
above. Since objects get deformed in the $\phi$-direction – compressed (blue)
for $\mathds{H}^{2}$ and elongated (red) for $S^{2}$ – the net apparent
luminosity changes accordingly. This creates the pattern that we see: the
distances are maximally elongated or shortened along the equator of the
geometry, and the effect vanishes towards the poles. Note also that the scale
on these luminosity plots differs by about a factor of two from the scale in
the angular diameter distance plots, reflecting the fact that the anisotropy
only affects one direction.
Figure 17: $d_{L}$ for the $\widetilde{\text{U}(\mathds{H}^{2})}$ geometry.
Figure 18: $d_{L}$ for the Nil geometry.
The heat maps for the $\widetilde{\text{U}(\mathds{H}^{2})}$ and Nil geometries
approximately match what we would expect from their barred plots: an overall
decrease in the apparent luminosity along the equator and an increase towards
the poles. However, these heat maps do not show the neat, axially symmetric
pattern that we observed for the angular diameter distance, but are instead
deformed into a wave-like pattern. This deformation is caused by the
transformation between $\hat{P}$ and $\hat{\overline{P}}$ discussed in the
previous section, and is the direct result of geodesic trajectories being
curved by the geometry.
Figure 19: $d_{L}$ for the Solv geometry
Something similar happens for the Solv geometry, as the equatorial band gets
deformed into a wavelike pattern. Note that the scale for the luminosity
distance plot of this geometry is similar to that of its angular diameter
distance plot, owing to the fact that objects get compressed maximally in both
the $\phi$ and $\theta$ directions close to the poles.
Anticipating the use of astronomical data for testing these geometries, we
point out an interesting feature of these geometries, namely that, when
expanded in powers of $\lambda/L$, the coefficient of $(\lambda/L)^{\ell}$,
with $\ell\in\mathbb{N}$ and $\ell\geq 2$, is a linear combination of spherical
harmonics $Y_{\ell m}$ of order $\ell$. A visual illustration of this feature
for the Nil geometry is given in figures 20-23. (The heat maps of harmonics
for other geometries can be obtained upon request.)
Figure 20: The $\ell=2$ modes for the Nil geometry.
Figure 21: The $\ell=3$ modes for the Nil geometry.
Figure 22: The $\ell=4$ modes for the Nil geometry.
Figure 23: The $\ell=5$ modes for the Nil geometry.
## 6 Anisotropic Scale Factors
In section 3 we found that solving the Friedmann equations for anisotropic
geometries required the introduction of a rather peculiar fluid with a
nonvanishing shear tensor $\pi^{i}_{\hskip 3.01389ptj}$ with an $\propto
a^{-2}$ scaling property. In this section we will ask the question whether we
can get rid of this shear tensor by dropping the assumption of isotropy for
the scale factor $a$. In particular, we are interested in figuring out whether
this approach is compatible with a perfect fluid solution and in studying the
time dependence of anisotropies in the expansion.
We will make a slight alteration to the metric equation (2.2) by splitting the
scale factor $a$ into three components, $A_{1}$, $A_{2}$, and $A_{3}$, that
are a priori independent:
${\rm d}s^{2}=-{\rm d}t^{2}+\gamma_{ab,\text{Thurston}}\begin{pmatrix}A_{1}(t)&0&0\\\ 0&A_{2}(t)&0\\\
0&0&A_{3}(t)\end{pmatrix}^{a}_{\hskip
3.01389pti}\begin{pmatrix}A_{1}(t)&0&0\\\ 0&A_{2}(t)&0\\\
0&0&A_{3}(t)\end{pmatrix}^{b}_{\hskip 3.01389ptj}{\rm d}x^{i}{\rm
d}x^{j}\,.\quad$ (6.1)
The Friedmann equations in an expanding universe in the metric (6.1) (cf. e.g.
Ref. [37]) depend on the Thurston geometry under consideration, but will admit
a generic perfect fluid solution after some work. By splitting $a$, we
introduce non-zero terms in the off-diagonal components of the Einstein tensor
that depend on (derivatives of) $A_{1}$, $A_{2}$, and $A_{3}$. If we insist on
working in a perfect fluid solution, we must require that these off-diagonal
terms vanish. This means that we must solve one or more differential equations
that constrain $A_{1}$, $A_{2}$, and $A_{3}$.
For instance, in the Solv geometry, $G^{0}_{\hskip 3.01389pt3}$ picks up a
term dependent on $A_{1}$ and $A_{2}$, which must vanish if we wish to obtain
a perfect fluid solution. This means that we must require
$G^{0}_{\hskip
3.01389pt3,\text{Solv}}=\frac{\sqrt{-\kappa}}{A_{3}^{2}}\left(\frac{\dot{A_{2}}}{A_{2}}-\frac{\dot{A_{1}}}{A_{1}}\right)=0.$
(6.2)
One can easily satisfy this equation by setting $A_{1}$ proportional to
$A_{2}$; their relative (constant) factor $A_{2}/A_{1}$ can be absorbed by a
suitable rescaling of the $y$-coordinate in the metric. The constraints are
shown in Table 4.
Space-time | Constraints | $A_{\rm dom}$
---|---|---
$\mathbf{\mathds{R}^{3}}$ | $\kappa=0$ | n/a
$\mathbf{\mathds{H}^{3}}/\mathbf{S^{3}}$ | $A_{1}=A_{2}=A_{3}$ | $a$
$\mathbf{\mathds{R}\times\mathds{H}^{2}/S^{2}}$ | $A_{1}=A_{2}$ | $A_{1}$
$\widetilde{\text{U}(\mathds{H}^{2})}$ | $A_{1}=A_{2}=A_{3}$ | $a$
Nil | $A_{2}=A_{3}$ | $A_{1}$
Solv | $A_{1}=A_{2}$ | $A_{3}$
Table 4: Anisotropic scale factor constraints and the dominant contribution to
the EFE.
The $\widetilde{\text{U}(\mathds{H}^{2})}$ and Nil geometries are again
somewhat exceptional here, as $G^{3}_{\hskip
3.01389pt2}=x\sqrt{-\kappa}\kappa/A_{1}(t)^{2}$ cannot be made to vanish by
equating scale factors. However, as discussed in section 3, this term is
suppressed by $x/L\ll 1$ with respect to the leading curvature contributions,
and therefore can be neglected at the leading order in $\kappa$.
An additional noteworthy feature of $\widetilde{\text{U}(\mathds{H}^{2})}$ is
that the constraint equations force the scale factors for all three directions
to be the same. For this geometry it is therefore not possible to get rid of
the special $\propto a^{-2}$ terms in the Einstein tensor by introducing
anisotropic expansion while leaving the underlying geometric structure static.
Consequently, we do not know how a universe with this geometry evolves, as its
spatial sections evolve dynamically; however, for sufficiently large $L$ the
anisotropies remain small, and over the periods of time and the scales at
which we can measure, this effect is likely to be negligible. We will not
spend additional time on this, and leave it as an open problem to devise a
peculiar fluid with the appropriate scaling properties, or to understand how
this geometry can be supported dynamically.
With these constraints satisfied, the diagonal elements of the Einstein Field
Equations take the following form:
${}^{0}_{\hskip 3.01389pt0}\hskip
10.00002pt\text{equation:}\qquad\frac{\dot{A}_{1}}{A_{1}}\frac{\dot{A}_{2}}{A_{2}}+\frac{\dot{A}_{2}}{A_{2}}\frac{\dot{A}_{3}}{A_{3}}+\frac{\dot{A}_{3}}{A_{3}}\frac{\dot{A}_{1}}{A_{1}}$
$\displaystyle=$ $\displaystyle 8\pi G\rho+\Lambda+\frac{K^{(0)}\kappa}{A_{\rm
dom}^{2}}$ (6.3) ${}^{1}_{\hskip 3.01389pt1}\hskip
10.00002pt\text{equation:}\qquad\qquad\,\frac{\ddot{A}_{2}}{A_{2}}+\frac{\ddot{A}_{3}}{A_{3}}+\frac{\dot{A}_{2}}{A_{2}}\frac{\dot{A}_{3}}{A_{3}}$
$\displaystyle=$ $\displaystyle 8\pi G(-{\cal
P})+\Lambda+\frac{K^{(1)}\kappa}{A_{\rm dom}^{2}}$ (6.4) ${}^{2}_{\hskip
3.01389pt2}\hskip
10.00002pt\text{equation:}\qquad\qquad\;\frac{\ddot{A}_{3}}{A_{3}}+\frac{\ddot{A}_{1}}{A_{1}}+\frac{\dot{A}_{3}}{A_{3}}\frac{\dot{A}_{1}}{A_{1}}$
$\displaystyle=$ $\displaystyle 8\pi G(-{\cal
P})+\Lambda+\frac{K^{(2)}\kappa}{A_{\rm dom}^{2}}$ (6.5) ${}^{3}_{\hskip
3.01389pt3}\hskip
10.00002pt\text{equation:}\qquad\qquad\;\frac{\ddot{A}_{1}}{A_{1}}+\frac{\ddot{A}_{2}}{A_{2}}+\frac{\dot{A}_{1}}{A_{1}}\frac{\dot{A}_{2}}{A_{2}}$
$\displaystyle=$ $\displaystyle 8\pi G(-{\cal
P})+\Lambda+\frac{K^{(3)}\kappa}{A_{\rm dom}^{2}}\,,\quad$ (6.6)
where ${\cal P}$ denotes the (isotropic) pressure, $K^{(0)}$, $K^{(1)}$,
$K^{(2)}$ and $K^{(3)}$ are the curvature coupling parameters from section 3,
and $A_{\rm dom}$ is the one of the three anisotropic scale factors that
dominates in the Einstein Field Equations (see Table 4 above).
It is instructive to introduce the Universe’s volume, $V=A_{1}A_{2}A_{3}$, in
terms of which one can define the average expansion rate $H(t)$ and the scale
factor $a(t)$ as,
$H(t)\equiv\frac{\dot{a}}{a}=\frac{1}{3}\left(\frac{\dot{A}_{1}}{A_{1}}+\frac{\dot{A}_{2}}{A_{2}}+\frac{\dot{A}_{3}}{A_{3}}\right)\,,\qquad
a(t)\equiv V^{1/3}=[A_{1}A_{2}A_{3}]^{1/3}\,.$ (6.7)
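As a quick symbolic cross-check (a SymPy sketch added here for illustration,
not part of the original derivation), the average rate defined via
$a=V^{1/3}$ in Eq. (6.7) is indeed the arithmetic mean of the three
directional expansion rates:

```python
import sympy as sp

t = sp.symbols('t')
A1, A2, A3 = (sp.Function(f'A{i}', positive=True)(t) for i in (1, 2, 3))

# Average scale factor a = V^(1/3) with V = A1*A2*A3, cf. Eq. (6.7)
a = (A1 * A2 * A3) ** sp.Rational(1, 3)

# H = a'/a should equal the mean of the three directional expansion rates
H_avg = sp.diff(a, t) / a
H_mean = sp.Rational(1, 3) * sum(sp.diff(A, t) / A for A in (A1, A2, A3))

assert sp.simplify(H_avg - H_mean) == 0
```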
By pairwise subtraction of equations (6.4–6.6) one easily obtains,
$\displaystyle{{}^{2}_{\hskip 3.01389pt2}}-{{}^{1}_{\hskip 3.01389pt1}}\hskip
10.00002pt\rightarrow\hskip 10.00002pt$ $\displaystyle\frac{{\rm d}}{{\rm
d}t}\left[\frac{\dot{A}_{1}}{A_{1}}-\frac{\dot{A}_{2}}{A_{2}}\right]+3H\left[\frac{\dot{A}_{1}}{A_{1}}-\frac{\dot{A}_{2}}{A_{2}}\right]=\kappa\left[\frac{K^{(2)}-K^{(1)}}{A_{\rm
dom}^{2}}\right]$ (6.8) $\displaystyle{{}^{3}_{\hskip
3.01389pt3}}-{{}^{2}_{\hskip 3.01389pt2}}\hskip 10.00002pt\rightarrow\hskip
10.00002pt$ $\displaystyle\frac{{\rm d}}{{\rm
d}t}\left[\frac{\dot{A}_{2}}{A_{2}}-\frac{\dot{A}_{3}}{A_{3}}\right]+3H\left[\frac{\dot{A}_{2}}{A_{2}}-\frac{\dot{A}_{3}}{A_{3}}\right]=\kappa\left[\frac{K^{(3)}-K^{(2)}}{A_{\rm
dom}^{2}}\right]$ (6.9) $\displaystyle{{}^{1}_{\hskip
3.01389pt1}}-{{}^{3}_{\hskip 3.01389pt3}}\hskip 10.00002pt\rightarrow\hskip
10.00002pt$ $\displaystyle\frac{{\rm d}}{{\rm
d}t}\left[\frac{\dot{A}_{3}}{A_{3}}-\frac{\dot{A}_{1}}{A_{1}}\right]+3H\left[\frac{\dot{A}_{3}}{A_{3}}-\frac{\dot{A}_{1}}{A_{1}}\right]=\kappa\left[\frac{K^{(1)}-K^{(3)}}{A_{\rm
dom}^{2}}\right]\,.\quad$ (6.10)
It is convenient to introduce differential expansion rates,
$\Delta
H_{12}=\frac{\dot{A}_{1}}{A_{1}}-\frac{\dot{A}_{2}}{A_{2}}\,,\qquad\Delta
H_{23}=\frac{\dot{A}_{2}}{A_{2}}-\frac{\dot{A}_{3}}{A_{3}}\,,\qquad\Delta
H_{31}=\frac{\dot{A}_{3}}{A_{3}}-\frac{\dot{A}_{1}}{A_{1}}\,,\qquad$ (6.11)
in terms of which equations (6.8–6.10) become,
$\displaystyle\frac{1}{a^{3}}\frac{{\rm d}}{{\rm d}t}\left[a^{3}\Delta
H_{12}(t)\right]\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\kappa\left[\frac{K^{(2)}-K^{(1)}}{A_{\rm
dom}^{2}}\right]$ (6.12) $\displaystyle\frac{1}{a^{3}}\frac{{\rm d}}{{\rm
d}t}\left[a^{3}\Delta H_{23}(t)\right]\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\kappa\left[\frac{K^{(3)}-K^{(2)}}{A_{\rm
dom}^{2}}\right]$ (6.13) $\displaystyle\frac{1}{a^{3}}\frac{{\rm d}}{{\rm
d}t}\left[a^{3}\Delta H_{31}(t)\right]\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\kappa\left[\frac{K^{(1)}-K^{(3)}}{A_{\rm
dom}^{2}}\right]\,.\quad$ (6.14)
It is now clear that the anisotropic expansion is generated by $\kappa$, i.e.
when $\kappa=0$ there exist isotropic Universe solutions. Therefore, when
$\kappa$ is small, one can study anisotropies in powers of $\kappa$. With this
in mind, equations (6.12–6.14) can be simplified to,
$\displaystyle\frac{1}{a^{3}}\frac{{\rm d}}{{\rm d}t}\left[a^{3}\Delta
H_{12}(t)\right]\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa}{a^{2}}\left[K^{(2)}-K^{(1)}\right]+{\cal
O}(\kappa^{2})$ (6.15) $\displaystyle\frac{1}{a^{3}}\frac{{\rm d}}{{\rm
d}t}\left[a^{3}\Delta H_{23}(t)\right]\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa}{a^{2}}\left[K^{(3)}-K^{(2)}\right]+{\cal
O}(\kappa^{2})$ (6.16) $\displaystyle\frac{1}{a^{3}}\frac{{\rm d}}{{\rm
d}t}\left[a^{3}\Delta H_{31}(t)\right]\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa}{a^{2}}\left[K^{(1)}-K^{(3)}\right]+{\cal
O}(\kappa^{2})\,.\quad$ (6.17)
Note that $A_{\rm dom}$ drops out from these equations, as any deviation from
$a$ is absorbed into the ${\cal O}(\kappa^{2})$ terms. This means that the
results from here on apply generically to every Thurston geometry.
Equations (6.15–6.17) can be integrated upon introducing the following time
variable, ${\rm d}\tau=a{\rm d}t$,
$\displaystyle\Delta H_{12}(t)\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa\tau}{a^{3}}\left[K^{(2)}-K^{(1)}\right]+{\cal
O}(\kappa^{2})$ (6.18) $\displaystyle\Delta H_{23}(t)\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa\tau}{a^{3}}\left[K^{(3)}-K^{(2)}\right]+{\cal
O}(\kappa^{2})$ (6.19) $\displaystyle\Delta H_{31}(t)\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa\tau}{a^{3}}\left[K^{(1)}-K^{(3)}\right]+{\cal
O}(\kappa^{2})\,,\quad$ (6.20)
where
$\tau(t)=\int^{t}a(t^{\prime}){\rm d}t^{\prime}\,,$ (6.21)
and we assumed that the initial anisotropies (incorporated in the integration
constants) are negligibly small. Notice that the individual expansion rates
can be expressed in terms of the average expansion rate $H(t)$ and
differential expansion rates,
$\displaystyle H_{1}\equiv\frac{\dot{A}_{1}}{A_{1}}\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!H+\frac{\Delta H_{12}-\Delta H_{31}}{3}$ (6.22)
$\displaystyle H_{2}\equiv\frac{\dot{A}_{2}}{A_{2}}\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!H+\frac{\Delta H_{23}-\Delta H_{12}}{3}$ (6.23)
$\displaystyle H_{3}\equiv\frac{\dot{A}_{3}}{A_{3}}\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!H+\frac{\Delta H_{31}-\Delta H_{23}}{3}\,.\quad$ (6.24)
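A short SymPy sketch (illustrative only, using plain symbols for the
directional rates) confirms that the differential rates (6.11) sum to zero
identically, and that Eqs. (6.22)–(6.24) consistently recover the individual
rates from $H$ and the $\Delta H$'s:

```python
import sympy as sp

H1, H2, H3 = sp.symbols('H1 H2 H3')

# Differential expansion rates, Eq. (6.11)
dH12, dH23, dH31 = H1 - H2, H2 - H3, H3 - H1

# They sum to zero identically
assert sp.simplify(dH12 + dH23 + dH31) == 0

# With H = (H1 + H2 + H3)/3, Eqs. (6.22)-(6.24) recover the individual rates
Havg = (H1 + H2 + H3) / 3
assert sp.simplify(Havg + (dH12 - dH31) / 3 - H1) == 0
assert sp.simplify(Havg + (dH23 - dH12) / 3 - H2) == 0
assert sp.simplify(Havg + (dH31 - dH23) / 3 - H3) == 0
```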
Inserting this into the Friedmann equation (6.3) gives,
$\displaystyle\\!\\!3H^{2}+\frac{1}{9}\begin{bmatrix}(\Delta
H_{12}\\!-\\!\Delta H_{31})(\Delta H_{23}\\!-\\!\Delta H_{12})\\\ +(\Delta
H_{23}\\!-\\!\Delta H_{12})(\Delta H_{31}\\!-\\!\Delta H_{23})\\\ +(\Delta
H_{31}\\!-\\!\Delta H_{23})(\Delta H_{12}\\!-\\!\Delta
H_{31})\end{bmatrix}=8\pi G\rho+\Lambda+\frac{\kappa K^{(0)}}{a^{2}}\,,\quad$
(6.25)
Inserting the solutions (6.18–6.20) into this and moving the resulting terms
to the right-hand side gives,
$\displaystyle H^{2}\\!\\!$ $\displaystyle=$ $\displaystyle\\!\\!\frac{8\pi
G}{3}\rho+\frac{\Lambda}{3}+\frac{\kappa
K^{(0)}}{3a^{2}}-\frac{\kappa^{2}\tau^{2}}{27a^{6}}\begin{bmatrix}(K^{(1)}\\!+\\!K^{(2)}\\!-\\!2K^{(3)})(K^{(2)}\\!+\\!K^{(3)}\\!-\\!2K^{(1)})\\\
+(K^{(2)}\\!+\\!K^{(3)}\\!-\\!2K^{(1)})(K^{(3)}\\!+\\!K^{(1)}\\!-\\!2K^{(2)})\\\
+(K^{(3)}\\!+\\!K^{(1)}\\!-\\!2K^{(2)})(K^{(1)}\\!+\\!K^{(2)}\\!-\\!2K^{(3)})\end{bmatrix}$
$\displaystyle=$ $\displaystyle\\!\\!\frac{8\pi
G}{3}\rho+\frac{\Lambda}{3}+\frac{\kappa
K^{(0)}}{3a^{2}}-\frac{\kappa^{2}\tau^{2}}{27a^{6}}\times
3\big{[}K^{(1)}K^{(2)}+K^{(2)}K^{(3)}+K^{(3)}K^{(1)}-(K^{(1)})^{2}-(K^{(2)})^{2}-(K^{(3)})^{2}\big{]}\,,\qquad\;\;$ (6.26)
where the last term is of second order in $\kappa$ and represents the
backreaction of the anisotropic expansion on the average (isotropic) expansion
rate. The next step is to determine how the individual scale factors deviate
from the isotropic expansion. Upon inserting equations (6.18–6.20) into
(6.22–6.24) one obtains,
$\displaystyle\frac{{\rm d}}{{\rm
d}t}\ln\left(\frac{A_{1}(t)}{a(t)}\right)\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa\tau(t)}{3a^{3}(t)}\left(K^{(2)}+K^{(3)}-2K^{(1)}\right)$
(6.27) $\displaystyle\frac{{\rm d}}{{\rm
d}t}\ln\left(\frac{A_{2}(t)}{a(t)}\right)\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa\tau(t)}{3a^{3}(t)}\left(K^{(3)}+K^{(1)}-2K^{(2)}\right)$
(6.28) $\displaystyle\frac{{\rm d}}{{\rm
d}t}\ln\left(\frac{A_{3}(t)}{a(t)}\right)\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{\kappa\tau(t)}{3a^{3}(t)}\left(K^{(1)}+K^{(2)}-2K^{(3)}\right)\,.\quad$
(6.29)
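Two pieces of the algebra above can be confirmed with a short SymPy sketch
(an illustration added here, using plain symbols for $K^{(i)}$, $\kappa$,
$\tau$ and $a$): the combination on the right-hand side of Eq. (6.27) follows
from the leading-order solutions (6.18)–(6.20) together with Eq. (6.22), and
the quadratic combination of the $K^{(i)}$ in the backreaction equation for
$H^{2}$ above indeed simplifies as stated:

```python
import sympy as sp

K1, K2, K3, kappa, tau, a = sp.symbols('K1 K2 K3 kappa tau a', positive=True)
c = kappa * tau / a**3  # common prefactor of Eqs. (6.18)-(6.20)

# Leading-order differential expansion rates, Eqs. (6.18)-(6.20)
dH12, dH23, dH31 = c * (K2 - K1), c * (K3 - K2), c * (K1 - K3)

# H1 - H = (dH12 - dH31)/3 reproduces the combination in Eq. (6.27)
assert sp.simplify((dH12 - dH31) / 3 - c * (K2 + K3 - 2 * K1) / 3) == 0

# Quadratic identity used to simplify the backreaction term for H^2
P, Q, R = K1 + K2 - 2 * K3, K2 + K3 - 2 * K1, K3 + K1 - 2 * K2
assert sp.expand(P * Q + Q * R + R * P) == sp.expand(
    3 * (K1 * K2 + K2 * K3 + K3 * K1 - K1**2 - K2**2 - K3**2))
```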
By introducing a compact notation, $\Delta
K_{(1)}=(K^{(2)}+K^{(3)}-2K^{(1)})/K^{(0)}$ (plus cyclic permutations for
$\Delta K_{(2)}$ and $\Delta K_{(3)}$), these equations can be written as a
single equation for $A_{i}(t)=A_{1}(t)$, $A_{2}(t)$, or $A_{3}(t)$:
$\ln\left(\frac{A_{i}(t)}{a(t)}\right)=\int^{t}\frac{\kappa\tau(t^{\prime})\Delta
K_{(i)}K^{(0)}}{3a(t^{\prime})^{3}}\,{\rm d}t^{\prime}\,.$ (6.30)
### 6.1 Growth of anisotropies in an epoch with matter and cosmological
constant
It is well known that relatively recently (at a redshift $z=z_{\rm DE}\simeq
0.7$) the Universe entered a dark energy dominated epoch, during which it
exhibits an accelerated expansion. It is therefore essential to include dark
energy when studying the dynamics of anisotropies.
In this section we study the growth of anisotropies in an era dominated by
nonrelativistic matter ($\rho_{\rm m}(t)=\rho_{\rm m,0}/a^{3}$) and
cosmological constant $\Lambda$, which we take to represent dark energy. This
is an excellent approximation for the observed Universe from the decoupling at
$z_{*}\simeq 1091$ up to today ($z=0$). Neglecting for now the curvature
contribution $\propto\kappa$ (which we know is small), the Friedmann equation
(3.15) (cf. also the anisotropic Friedmann equation (6.26)) simplifies to,
$H^{2}=\frac{\dot{a}^{2}}{a^{2}}=\frac{8\pi G}{3}\frac{\rho_{\rm
m,0}}{a^{3}}+\frac{\Lambda}{3}\,,$ (6.31)
whose solution is,
$a(t)=a_{\text{eq}}\sinh^{\frac{2}{3}}\left(\frac{\sqrt{3\Lambda}}{2}t\right)\,,\qquad
a_{\text{eq}}=\left(\frac{8\pi G\rho_{\rm
m,0}}{\Lambda}\right)^{\frac{1}{3}}=\left(\frac{\Omega_{\text{m,0}}}{\Omega_{\Lambda,0}}\right)^{1/3}\,,$
(6.32)
where $\Omega_{\rm m,0}=8\pi G\rho_{\rm m,0}/(3H_{0}^{2})$,
$\Omega_{\Lambda,0}=\Lambda/(3H_{0}^{2})$, and $t$ is cosmological time, with
$t=0$ corresponding to the Big Bang singularity. The first step towards
understanding the dynamics of anisotropies dictated by Eq. (6.30) is to
evaluate the time variable $\tau(t)$ defined in Eq. (6.21), which for
the problem at hand reduces to the following integral,
$\displaystyle\tau(t)=\int^{t}a(\tilde{t}\,){\rm
d}\tilde{t}=\sqrt{\frac{3}{\Lambda}}\frac{1}{(a_{\text{eq}})^{\frac{3}{2}}}\int^{a}\frac{\tilde{a}^{\frac{3}{2}}}{\sqrt{1+\left(\frac{\tilde{a}}{a_{\text{eq}}}\right)^{3}}}{\rm
d}\tilde{a}=\frac{2}{\sqrt{3\Lambda}}a_{\text{eq}}\int^{(a/a_{\text{eq}})^{3/2}}\frac{\tilde{x}^{\frac{2}{3}}}{\sqrt{1+\tilde{x}^{2}}}{\rm
d}\tilde{x}\,,\quad$ (6.33)
where $\tilde{x}=(\tilde{a}/a_{\text{eq}})^{\frac{3}{2}}$ and we made use of,
$\frac{{\rm d}a}{{\rm
d}t}=\sqrt{\frac{\Lambda}{3}}a_{\text{eq}}\frac{\cosh\left(\frac{\sqrt{3\Lambda}}{2}t\right)}{\sinh^{\frac{1}{3}}\left(\frac{\sqrt{3\Lambda}}{2}t\right)}=\sqrt{\frac{\Lambda}{3}}\frac{(a_{\text{eq}})^{\frac{3}{2}}}{a(t)^{\frac{1}{2}}}\sqrt{1+\left(\frac{a}{a_{\text{eq}}}\right)^{3}}\,.$
(6.34)
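As a sanity check (a SymPy sketch added here; we write $8\pi G\rho_{\rm
m,0}=\Lambda a_{\text{eq}}^{3}$ following Eq. (6.32)), one can verify that
the solution (6.32) satisfies the Friedmann equation (6.31) and reproduces
the square-root form of $\dot{a}$ used in Eq. (6.34):

```python
import sympy as sp

t, Lam, aeq = sp.symbols('t Lambda a_eq', positive=True)

# Matter + cosmological-constant solution, Eq. (6.32)
a = aeq * sp.sinh(sp.sqrt(3 * Lam) / 2 * t) ** sp.Rational(2, 3)
adot = sp.diff(a, t)

# a(t) solves the Friedmann equation (6.31): H^2 = (Lam/3)*(aeq^3/a^3 + 1)
assert sp.simplify((adot / a) ** 2 - Lam / 3 * (aeq**3 / a**3 + 1)) == 0

# adot agrees (both sides positive, so compare squares) with Eq. (6.34)
adot_sqrt = (sp.sqrt(Lam / 3) * aeq ** sp.Rational(3, 2) / sp.sqrt(a)
             * sp.sqrt(1 + (a / aeq) ** 3))
assert sp.simplify(adot**2 - adot_sqrt**2) == 0
```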
Next, by expanding
$(1+\tilde{x}^{2})^{-\frac{1}{2}}=\sum_{n=0}^{\infty}\left(\frac{1}{2}\right)_{n}(-\tilde{x}^{2})^{n}/n!$,
where $(z)_{n}=\Gamma(z+n)/\Gamma(z)$ denotes the Pochhammer symbol, and
exchanging the sum and the integral, the integral in equation (6.33) can be
evaluated,
$\displaystyle\tau(t)$ $\displaystyle=$
$\displaystyle\int^{t}a(\tilde{t}\,){\rm
d}\tilde{t}=\frac{2a_{\text{eq}}}{\sqrt{3\Lambda}}\sum_{n=0}^{\infty}\frac{\left(\frac{1}{2}\right)_{n}(-1)^{n}}{n!}\int^{x=(a/a_{\text{eq}})^{3/2}}\tilde{x}^{\frac{2}{3}+2n}{\rm
d}\tilde{x}=\frac{a_{\text{eq}}}{\sqrt{3\Lambda}}\sum_{n=0}^{\infty}\frac{\left(\frac{1}{2}\right)_{n}(-1)^{n}}{n!}\frac{x^{\frac{5}{3}+2n}}{\frac{5}{6}+n}$
(6.35) $\displaystyle=$
$\displaystyle\frac{a_{\text{eq}}}{\sqrt{3\Lambda}}x^{\frac{5}{3}}\frac{\Gamma\left(\frac{5}{6}\right)}{\Gamma\left(\frac{11}{6}\right)}\sum_{n=0}^{\infty}\frac{\left(\frac{1}{2}\right)_{n}\left(\frac{5}{6}\right)_{n}}{\left(\frac{11}{6}\right)_{n}}\frac{(-x^{2})^{n}}{n!}=\frac{a_{\text{eq}}}{\sqrt{3\Lambda}}\frac{6}{5}\left(\frac{a}{a_{\text{eq}}}\right)^{\frac{5}{2}}\times_{2}\\!F_{1}\left(\frac{1}{2},\frac{5}{6};\frac{11}{6};-\left(\frac{a}{a_{\text{eq}}}\right)^{3}\right)\,,\quad$
where ${}_{2}F_{1}(\alpha,\beta;\gamma;z)$ denotes Gauss’ hypergeometric
function. Upon inserting this result into Eqs. (6.27–6.29) one obtains,
$\displaystyle\ln\left[\frac{A_{1}(t)}{a(t)}\right]\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\big{(}K^{(2)}\\!+\\!K^{(3)}\\!-\\!2K^{(1)}\big{)}\,\frac{2\kappa}{5\Lambda
a_{\text{eq}}^{3}}\int^{a(t)}\\!\\!\\!\frac{{\rm
d}a^{\prime}}{\sqrt{1+\left(\frac{a^{\prime}}{a_{\text{eq}}}\right)^{3}}}\times_{2}\\!F_{1}\left(\frac{1}{2},\frac{5}{6};\frac{11}{6};-\left(\frac{a^{\prime}}{a_{\text{eq}}}\right)^{3}\right)\,\qquad\;\,$
(6.36) $\displaystyle\ln\left[\frac{A_{2}(t)}{a(t)}\right]\\!\\!$
$\displaystyle=$
$\displaystyle\\!\\!\big{(}K^{(3)}\\!+\\!K^{(1)}\\!-\\!2K^{(2)}\big{)}\,\frac{2\kappa}{5\Lambda
a_{\text{eq}}^{3}}\int^{a(t)}\\!\\!\\!\frac{{\rm
d}a^{\prime}}{\sqrt{1+\left(\frac{a^{\prime}}{a_{\text{eq}}}\right)^{3}}}\times_{2}\\!F_{1}\left(\frac{1}{2},\frac{5}{6};\frac{11}{6};-\left(\frac{a^{\prime}}{a_{\text{eq}}}\right)^{3}\right)\,\qquad\;\,$
(6.37) $\displaystyle\ln\left[\frac{A_{3}(t)}{a(t)}\right]\\!\\!$
$\displaystyle=$
$\displaystyle\\!\\!\big{(}K^{(1)}\\!+\\!K^{(2)}\\!-\\!2K^{(3)}\big{)}\,\frac{2\kappa}{5\Lambda
a_{\text{eq}}^{3}}\int^{a(t)}\\!\\!\\!\frac{{\rm
d}a^{\prime}}{\sqrt{1+\left(\frac{a^{\prime}}{a_{\text{eq}}}\right)^{3}}}\times_{2}\\!F_{1}\left(\frac{1}{2},\frac{5}{6};\frac{11}{6};-\left(\frac{a^{\prime}}{a_{\text{eq}}}\right)^{3}\right)\,.\qquad\;\,$
(6.38)
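The closed form (6.35) for $\tau$ can be spot-checked numerically (a Python
sketch in illustrative units $\Lambda=3$, $a_{\text{eq}}=1$, chosen only for
convenience and not taken from the paper):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

# With Lambda = 3 and a_eq = 1, Eq. (6.33) reduces to
# tau(a) = integral_0^a a'^{3/2} / sqrt(1 + a'^3) da'
a_final = 2.0

tau_quad, _ = quad(lambda ap: ap ** 1.5 / np.sqrt(1.0 + ap ** 3), 0.0, a_final)

# Closed form (6.35): (6/5) * (a_eq/sqrt(3*Lambda)) * a^{5/2}
#                     * 2F1(1/2, 5/6; 11/6; -(a/a_eq)^3)
tau_closed = (6.0 / 5.0) / 3.0 * a_final ** 2.5 \
    * hyp2f1(0.5, 5.0 / 6.0, 11.0 / 6.0, -a_final ** 3)

assert np.isclose(tau_quad, tau_closed, rtol=1e-6)
```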
By making a few substitutions and using the compact notation from Eq. (6.30),
Eqs. (6.36–6.38) can be rewritten as
$\displaystyle\ln\left[\frac{A_{i}(t)}{a(t)}\right]$ $\displaystyle=\Delta
K_{(i)}\Omega_{\kappa,0}\times\,\frac{2}{5\Omega_{\rm
m,0}}\int^{a}da^{\prime}\;\frac{{}_{2}F_{1}\left(\frac{1}{2},\frac{5}{6};\frac{11}{6};-a^{\prime
3}\frac{\Omega_{\Lambda,0}}{\Omega_{\rm m,0}}\right)}{\sqrt{1+a^{\prime
3}\frac{\Omega_{\Lambda,0}}{\Omega_{\rm m,0}}}}\,.$ (6.39)
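The behaviour described below and plotted in Figures 24 and 25 can be
reproduced with a short numerical sketch of Eq. (6.39); the density parameters
$\Omega_{\rm m,0}=0.3$ and $\Omega_{\Lambda,0}=0.7$ are illustrative values
assumed here, not taken from Table 2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

# Illustrative (assumed) density parameters
Om, OL = 0.3, 0.7

def integrand(ap):
    # Integrand of Eq. (6.39)
    z = -ap**3 * OL / Om
    return hyp2f1(0.5, 5.0 / 6.0, 11.0 / 6.0, z) / np.sqrt(1.0 - z)

def ln_ratio(a):
    """ln(A_i/a) in units of Delta K_(i) * Omega_kappa,0, integrated from a ~ 0."""
    val, _ = quad(integrand, 0.0, a)
    return 2.0 / (5.0 * Om) * val

# Anisotropies grow in the matter era ...
assert ln_ratio(0.5) < ln_ratio(1.0)
# ... but saturate once the cosmological constant dominates:
assert abs(ln_ratio(100.0) - ln_ratio(50.0)) < 0.01 * ln_ratio(1.0)
```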
Using the values of $\Omega_{\Lambda,0}$ and $\Omega_{\rm m,0}$ from Table 2
we can put constraints on (the growth of) this ratio by constraining each term
in turn. In figures 24 and 25 we plot the evolution of anisotropies
$\ln(A_{i}/a)$ (in units of $\Delta K_{(i)}\Omega_{\kappa,0}$) from early in
the matter era up to roughly today (left panel) and up to the distant future
(right panel). We see that $\ln(A_{i}/a)$ grows linearly with the scale factor
in the matter era, but the growth slows down in the dark energy dominated
epoch, asymptoting to a constant in the distant future.
Figure 24: The evolution of anisotropy from equation (6.39); the integration
constant is chosen such that the ratio vanishes today (when $a=1$). The
evolution of $\ln(A_{i}/a)$ is linear in $a$ in the matter-dominated epoch and
slows down in the dark energy-dominated epoch.
Figure 25: The same graph as in Figure 24, but with a log scale on the
horizontal axis to show the late-time behaviour, and with the integration
constant chosen such that the ratio vanishes as $a\rightarrow\infty$. Whereas
in the matter-dominated era anisotropies diverge over time, in the dark energy
epoch anisotropies asymptote to a constant value.
In order to understand whether the growth of anisotropies (6.36–6.38) can be
neglected in the analysis presented in earlier sections, note first that in
the predominantly matter-dominated era ($z\gg z_{\rm DE}\simeq 0.7$) the
argument of the hypergeometric function in Eq. (6.39) is small, justifying its
Taylor expansion (cf. Eq. (6.35)). Keeping the two leading terms, the
integrals in (6.36–6.38) evaluate to,
$\ln\left[\frac{A_{i}(t)}{a(t)}\right]=\Delta
K_{(i)}\Omega_{\kappa,0}\times\,\frac{2}{5\Omega_{\rm
m,0}}a\left(1-\frac{2}{11}\frac{\Omega_{\Lambda,0}}{\Omega_{\rm
m,0}}a^{3}\right)+{\cal O}\big{(}a^{7}\big{)}\,,$ (6.40)
where we assumed that at the initial time there is no anisotropy in the scale
factors and on the right hand side we neglected the contributions of the
initial scale factor. This expansion is valid for times earlier than today,
i.e. when $a=1/(1+z)<1$ ($z>0$).
To get an expansion valid at late times ($a\gg 1$), one can use Eq. (9.132.2)
in Ref. [38] 171717According to Eq. (9.132.2) in Ref. [38],
${}_{2}F_{1}\left(\frac{1}{2},\frac{5}{6};\frac{11}{6};z\right)=\frac{5}{2}\big{(}\\!-z\big{)}^{-\frac{1}{2}}\times_{2}\\!F_{1}\left(\frac{1}{2},-\frac{1}{3};\frac{2}{3};\frac{1}{z}\right)+\frac{\Gamma\big{(}\frac{11}{6}\big{)}\Gamma\big{(}-\frac{1}{3}\big{)}}{\sqrt{\pi}}\big{(}\\!-z\big{)}^{-\frac{5}{6}}\,,\qquad\left(z=-\frac{\Omega_{\Lambda,0}}{\Omega_{\rm
m,0}}{a^{\prime}}^{3}\right)\,.$ (6.41) to obtain,
$\displaystyle\ln\left[\frac{A_{i}(t)}{a(t)}\right]\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!-\ln\left[\frac{A_{i}(t)}{a(t)}\right]_{\infty}-\Delta
K_{(i)}\Omega_{\kappa,0}\times\,\frac{1}{2\Omega_{\Lambda,0}}\frac{1}{a^{2}}\left[1-\frac{2}{3}\frac{\Gamma\big{(}\frac{5}{6}\big{)}\Gamma\big{(}\frac{2}{3}\big{)}}{\sqrt{\pi}}\left(\frac{\Omega_{\rm
m,0}}{\Omega_{\Lambda,0}}\right)^{\frac{1}{3}}\frac{1}{a}-\frac{1}{10}\frac{\Omega_{\rm
m,0}}{\Omega_{\Lambda,0}}\frac{1}{a^{3}}\right]\\!+\\!{\cal
O}\big{(}a^{-6}\big{)}\,,\qquad\qquad$ (6.42)
where we made use of,
$\frac{{}_{2}F_{1}\left(\frac{1}{2},-\frac{1}{3};\frac{2}{3};-a^{\prime
3}\frac{\Omega_{\Lambda,0}}{\Omega_{\rm m,0}}\right)}{\sqrt{1+a^{\prime
3}\frac{\Omega_{\Lambda,0}}{\Omega_{\rm
m,0}}}}\simeq\frac{5}{2}\frac{\Omega_{\rm
m,0}}{\Omega_{\Lambda,0}}\frac{1-\frac{1}{4}\frac{1}{a^{\prime
3}}\frac{\Omega_{\rm
m,0}}{\Omega_{\Lambda,0}}-\frac{\Gamma\big{(}\frac{5}{6}\big{)}\Gamma\big{(}\frac{2}{3}\big{)}}{\sqrt{\pi}}\Big{(}\\!a^{\prime
3}\frac{\Omega_{\Lambda,0}}{\Omega_{\rm m,0}}\Big{)}^{-\frac{1}{3}}}{a^{\prime
3}}+{\cal O}\big{(}a^{-7}\big{)}\,.$ (6.43)
where $\ln\left[\frac{A_{i}(t)}{a(t)}\right]_{\infty}$ is a constant which
can be evaluated numerically (or read off from the graph in Figure 25). This
constant has no relevance for the dynamics of anisotropies, as it can be
subtracted by a suitable rescaling of the spatial coordinates, implying that,
in a cosmological constant dominated era (such as inflation), anisotropies in
the scale factors $\ln\left[\frac{A_{i}(t)}{a(t)}\right]$ decay rapidly
(exponentially fast) in time as $\propto 1/a^{2}\propto{\rm
e}^{-2H_{\Lambda}t}$, where
$H_{\Lambda}^{2}=\Omega_{\Lambda,0}H_{0}^{2}=\Lambda/3$.
Next we comment on the implications of the above findings. From observations
we know that spatial anisotropies today are small, or more quantitatively,
$\Omega_{\kappa,0}\ll 1$ and $\Delta K_{(i)}$ are at most of the order of
unity today. Combining this with the observation that, from the beginning of
matter era, $\ln\left[\frac{A_{i}(t)}{a(t)}\right]/[\Delta
K_{(i)}\Omega_{\kappa,0}]\simeq 1$ (which can be read off from Figure 25), one
can infer from Eq. (6.40) that the change in anisotropies $\Delta(A_{i}/a)$ from
the beginning of matter era up to today ($a=1$) can be estimated as,
$\Delta\ln\left[\frac{A_{i}}{a}\right]\simeq\Delta\Big{(}\frac{A_{i}}{a}\Big{)}=\frac{A_{i}}{a}-1\simeq\Delta
K_{(i)}\Omega_{\kappa,0}\ll 1\,,$ (6.44)
thus justifying our treatment in Sections 2, 3, 4 and 5, where we neglected
the dynamics of anisotropies. 181818 Even though it is true that spatial
anisotropies are small today and were even smaller in the past, they do grow
rapidly in matter era, $\Delta\big{(}A_{i}(t)/a(t)\big{)}\sim\Delta
K_{(i)}\Omega_{\kappa,0}a(t)\propto 1/(1+z)$, which seems to put into question
our estimates of the luminosity distance and angular diameter distance at
large redshifts. One can argue that this is not a legitimate concern as follows.
The question boils down to whether in the distance measures $d_{L}$ and
$d_{A}$ one can replace a directional scale factor $A_{i}(t)$ by the average
scale factor $a(t)$. The relative error one makes in this way (when estimating
the effects of spatial anisotropies on $d_{L}$ and $d_{A}$) is at most of the
order $\Delta\big{(}A_{i}(t)/a(t)\big{)}\sim\Delta K_{(i)}\Omega_{\kappa,0}$,
which is $\propto\kappa$, impacting the distance measures at a higher (second)
order in $\kappa$, such that this effect can be consistently neglected.
### 6.2 The Anisotropy problem
With these results in mind, we can make the following remarks regarding
whether there is an anisotropy problem in the Universe, which can be
formulated as follows:
* Given that anisotropies grow in the radiation and matter eras, and that the observed anisotropies are small today, the initial geometry of the Universe must be fine-tuned to be isotropic to very high precision.
In order to better understand this problem, let us briefly recall the flatness
problem. In an expanding universe with $\epsilon=-\dot{H}/H^{2}={\rm
constant}$, $\rho_{\rm m}\propto a^{-2\epsilon}$ ($\epsilon\simeq 0$ in
inflation, $\epsilon\simeq 2$ in radiation and $\epsilon=3/2$ in matter), and
therefore,
$\Omega_{\kappa}=\frac{\rho_{\kappa}}{\rho_{\rm m}}\sim
a^{2(\epsilon-1)}\sim{\rm
e}^{2(\epsilon-1)N}\,,\qquad\big{(}N=\ln(a)\big{)}\,,$ (6.45)
meaning that $\Omega_{\kappa}$ decays in inflation as $\Omega_{\kappa}\sim{\rm
e}^{-2N_{I}}$, and it grows in radiation and matter eras as
$\Omega_{\kappa}\sim{\rm e}^{2N_{R}}$ and $\Omega_{\kappa}\sim{\rm
e}^{N_{M}}$, respectively, where $N_{I},N_{R}$ and $N_{M}$ denote the number
of e-foldings in the respective epoch. From this one easily infers that the
flatness problem is solved if inflation lasts at least,
${\tt Flatness\;\;problem:}\qquad N_{I}>(N_{I})_{\rm
min}=N_{R}+\frac{1}{2}N_{M}\,.$ (6.46)
Let us now consider the growth of anisotropies in an $\epsilon={\rm constant}$
epoch. From Eq. (6.26) we see that anisotropies scale as $\tau^{2}/a^{6}\sim
a^{2\epsilon-4}$, which when compared with $\rho\sim a^{-2\epsilon}$ yields,
$\Omega_{\rm anis}\sim a^{4(\epsilon-1)}\sim{\rm e}^{4(\epsilon-1)N}$ and
$\Omega_{\rm anis}/\Omega_{\kappa}\sim a^{2(\epsilon-1)}\sim{\rm
e}^{2(\epsilon-1)N}$, implying that both $\Omega_{\rm anis}=\rho_{\rm
anis}/\rho$ and $\Omega_{\rm anis}/\Omega_{\kappa}$ decay (grow) in
accelerating (decelerating) space-times. From this we infer in inflation,
$\Omega_{\rm anis}\sim{\rm e}^{-4N_{I}}$, and in radiation and matter era,
$\Omega_{\rm anis}\sim{\rm e}^{4N_{R}}$ and $\Omega_{\rm anis}\sim{\rm
e}^{2N_{M}}$, respectively. From these observations we see that the anisotropy
problem is solved when $-4N_{I}+4N_{R}+2N_{M}<0$, or equivalently,
${\tt Anisotropy\;\;problem:}\qquad N_{I}>(N_{I})_{\rm
min}=N_{R}+\frac{1}{2}N_{M}\,,$ (6.47)
which is the same as the condition in Eq. (6.46). Curiously (but not
surprisingly), the same condition is obtained if one requires $\Omega_{\rm
anis}/\Omega_{\kappa}<1$. Notice that the result (6.47) could have been
guessed from Eqs. (6.18–6.24), according to which the relative differences in
the Hubble rates scale as, $\Delta H/H\propto a^{2(\epsilon-1)}\sim{\rm
e}^{2(\epsilon-1)N}$.
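The e-folding bookkeeping above can be sketched numerically; the e-folding
numbers used below are hypothetical, chosen only to illustrate that the two
conditions coincide:

```python
# Log-growth of Omega_kappa and Omega_anis across epochs, cf. Eqs. (6.45)-(6.47).
# Omega_kappa scales as e^{2(eps-1)N}, Omega_anis as e^{4(eps-1)N}, with
# eps ~ 0 in inflation, eps ~ 2 in radiation and eps = 3/2 in matter era.

def log_Omega_kappa(N_I, N_R, N_M):
    return -2.0 * N_I + 2.0 * N_R + 1.0 * N_M

def log_Omega_anis(N_I, N_R, N_M):
    return -4.0 * N_I + 4.0 * N_R + 2.0 * N_M

# Hypothetical numbers: 60 e-folds of inflation, 50 of radiation, 8 of matter
N_I, N_R, N_M = 60.0, 50.0, 8.0

# Both problems are solved by the same condition, N_I > N_R + N_M/2:
assert (log_Omega_kappa(N_I, N_R, N_M) < 0) == (N_I > N_R + N_M / 2)
assert (log_Omega_anis(N_I, N_R, N_M) < 0) == (N_I > N_R + N_M / 2)
```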
For our two exceptional cases, the Nil and $\widetilde{\text{U}(\mathds{H}^{2})}$
geometries, we note that the off-diagonal contributions to the energy-momentum
tensor in Eqs. (3.8–3.9) scale as $\propto 1/a^{2}$. Even though we do not
know the precise dynamical geometry of these space-times, it is reasonable to
assume that, given this scaling property, the growth of these off-diagonal
contributions will follow the same time patterns as in the other geometries.
Then, by the same logic as in footnote 18, we can trust that the relative
error is small as long as equation (6.47) holds.
The principal result of this section is that inflation solves the anisotropy
problem of the Universe in precisely the same way as it solves the flatness
problem of standard Friedmann cosmologies. This implies that, from a first-
principles point of view, any of the eight Thurston geometries is equally well
motivated to be the geometry of the Universe. Which one of these is the
preferred description of the Universe is a matter to be settled
observationally.
## 7 Conclusion and Discussion
In this paper we advance the hypothesis that the large scale geometry of
spatial sections of the Universe (see equations (1.1–1.2)) can be described as
a patchwork of one or more of the eight Thurston-Perelman geometries that are
sewn together smoothly. The first three of these eight geometries, as
described in section 2, are the well-known homogeneous and isotropic
Friedmann-Lemaître-Robertson-Walker (FLRW) geometries. Since FLRW space-times
are widely studied and well-understood, we have focused our attention on the
remaining five geometries, which are all homogeneous, but violate spatial
isotropy. To our knowledge, these geometries have not been given serious
consideration in the literature.
In section 3 we have shown that these geometries give rise to a set of
Friedmann equations (3.15–3.17), compatible with usual cosmological inventory:
matter, radiation and dark energy. However, satisfying these equations
requires the introduction of a field with peculiar scaling properties to
support the anisotropy of the underlying geometry, which we have assumed in
sections 3, 4 and 5 to make the analysis simpler, and thus more pedagogical.
An important question is how one could observationally test the large scale
geometry of the Universe and, in particular, how one could distinguish whether
we live in a spatially isotropic universe characterised by one of the three
FLRW geometries, or in a universe based on one of the five anisotropic
Thurston geometries. To answer this question, in Appendices A, B, C and D we
have worked out how light propagates in all of the anisotropic Thurston
geometries, which are then used in section 4 to calculate angular diameter
distance and luminosity distance for all of the geometries. Because of the
spatial anisotropy of these geometries, we have derived general relations
which show how the angular diameter distance and luminosity distance depend on
the underlying geometry. To improve clarity we provide visual representations for all
Thurston geometries for both angular diameter distance and luminosity distance
in Section 5. These figures show how distant objects would be deformed and
dimmed or brightened by the presence of large-scale anisotropies. In
particular, one can infer from Figures 18–19 that, when expanded in powers of
$\lambda/L$, where $\lambda$ is the geodesic distance and $L$ the curvature
scale, the geometries $\widetilde{\text{U}(\mathds{H}^{2})}$, Nil and Solv show a
specific angular dependence described by spherical harmonics $Y_{\ell m}$
($m\in[-\ell,\ell]$), which can be a useful feature when confronting these
geometries against data.
The validity and limitations of the assumption of spatial isotropy are
discussed in detail in section 6. In particular, we have shown that in the
matter era most of the Thurston geometries 191919The exceptions are
$\widetilde{\text{U}(\mathds{H}^{2})}$ and Nil. Even though we do not know the
precise dynamics of these geometries, we give a simple argument in favour of
long time stability of these geometries. can be supported by a standard,
spatially isotropic cosmological (perfect) fluid, provided one introduces
anisotropic scale factors, which allow for different expansion rates in
different spatial directions. Even though anisotropies in the scale factors
grow rapidly in the matter era (see Eqs. (6.39–6.42) and Figures 24 and 25),
one can show that the Universe’s geometry can remain stable over large periods
of time (many e-foldings), and moreover that the corrections in the distance
measures induced by the dynamics of anisotropies are of a higher order in
curvature $\kappa$, cf. Eq. (6), and therefore can be consistently neglected.
Furthermore, even though Thurston geometries pose an anisotropy problem in
standard Big Bang cosmologies, it can be solved by a sufficiently long period
of cosmic inflation (see Eq. (6.47)) in a way analogous to the flatness
problem of standard FLRW space-times.
The next natural step is to make use of the results of this paper to
investigate whether the current data contain evidence to single out any one of
the Thurston geometries as a preferred candidate, and make forecasts for the
upcoming observations. The methods developed in the recent study [39] can aid
such investigations.
Further theoretical work could also be done to study the effects of anisotropy
on the polarization of light. As both axial symmetry and mirror symmetry can
be broken in the Thurston geometries, one might expect that photons with
different polarizations will also be affected differently. Finding such a
difference may open up additional avenues for probing curvature anisotropies.
## Acknowledgements
This work is part of the Delta ITP consortium, a program of the Netherlands
Organisation for Scientific Research (NWO) that is funded by the Dutch
Ministry of Education, Culture and Science (OCW).
## Appendices
## Appendix A Geodesics of the $\mathds{R}\times\mathds{H}^{2}$ and
$\mathds{R}\times\mathbf{S}^{2}$ geometries
These product geometries are the first nontrivial Thurston cases. We start
from the spatial sections of the metric (2.6) presented in Section 2,
${\rm d}\Sigma_{3}^{2}={\rm d}z^{2}+{\rm d}\chi^{2}+S_{\kappa}(\chi)^{2}{\rm
d}\phi^{2}.$ (A.1)
Since this metric is anisotropic, we cannot rely on three-dimensional
rotational symmetry to place geodesics along convenient axes. We must
therefore solve the geodesic equations explicitly, a task considerably
simplified by the isometries of the metric (A.1). If $K^{i}$ is a
Killing vector we can construct a conserved charge,
$\displaystyle Q_{K}=K_{i}(x)\frac{{\rm d}x^{i}}{{\rm d}\lambda}\,,$ (A.2)
that is constant along geodesic trajectories. With enough Killing vectors, we
can solve a set of first-order equations instead. The metric (A.1) has two
Killing vectors, $\partial_{z}$ and $\partial_{\phi}$, which lead to two
conserved quantities,
$\displaystyle P_{z}$ $\displaystyle=\dot{z}\,,$ (A.3) $\displaystyle
L_{\phi}$ $\displaystyle=S_{\kappa}(\chi)^{2}\dot{\phi}\,.$ (A.4)
We can use these conserved charges together with the requirement that the
geodesics are space-like,
$\displaystyle\epsilon=-g_{ij}\frac{{\rm d}x^{i}}{{\rm d}\lambda}\frac{{\rm
d}x^{j}}{{\rm d}\lambda}=-1,$ (A.5)
to write a general (implicit) expression for geodesics of the metric (A.1) as,
$\displaystyle z(\lambda)$ $\displaystyle=P_{z}\lambda+z_{0}$ (A.6)
$\displaystyle\dot{\chi}(\lambda)$
$\displaystyle=\pm\sqrt{1-P_{z}^{2}-L^{2}_{\phi}/S_{\kappa}(\chi(\lambda))^{2}}$
(A.7) $\displaystyle\dot{\phi}(\lambda)$
$\displaystyle=L_{\phi}/S_{\kappa}(\chi(\lambda))^{2}.$ (A.8)
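This first-order system can be sanity-checked numerically: integrating the second-order geodesic equations of (A.1) directly, the two charges (A.3)–(A.4) and the space-like constraint (A.5) should stay constant along the curve. The sketch below is plain Python with a hand-rolled RK4 stepper; it assumes the hyperbolic case $S_{\kappa}(\chi)=\sinh(\chi)$ at unit curvature scale, and the initial data are purely illustrative.

```python
import math

# Geodesics of the spatial metric (A.1),
#   dSigma^2 = dz^2 + dchi^2 + S(chi)^2 dphi^2,
# taking S(chi) = sinh(chi) (hyperbolic case, unit curvature scale;
# an illustrative assumption). Second-order geodesic equations:
#   z'' = 0,
#   chi'' = sinh(chi) cosh(chi) phi'^2,
#   phi'' = -2 coth(chi) chi' phi'.

def deriv(s):
    z, chi, phi, vz, vchi, vphi = s
    sh, ch = math.sinh(chi), math.cosh(chi)
    return (vz, vchi, vphi,
            0.0,
            sh * ch * vphi ** 2,
            -2.0 * (ch / sh) * vchi * vphi)

def rk4_step(s, h):
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

# Initial data built to satisfy the constraint (A.5), via (A.4) and (A.7).
Pz, Lphi, chi0 = 0.5, 0.3, 1.0
vphi0 = Lphi / math.sinh(chi0) ** 2
vchi0 = math.sqrt(1.0 - Pz ** 2 - Lphi ** 2 / math.sinh(chi0) ** 2)

state = (0.0, chi0, 0.0, Pz, vchi0, vphi0)
for _ in range(2000):                 # integrate to lambda = 2
    state = rk4_step(state, 1e-3)
z, chi, phi, vz, vchi, vphi = state

# The charges (A.3)-(A.4) and the unit-speed constraint are conserved:
print(abs(vz - Pz))                                                    # ~ 0
print(abs(math.sinh(chi) ** 2 * vphi - Lphi))                          # ~ 0
print(abs(vz ** 2 + vchi ** 2 + math.sinh(chi) ** 2 * vphi ** 2 - 1))  # ~ 0
```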
Radial Geodesics.
Setting $L_{\phi}=0$ restricts us to just the spatial radial geodesics, and by
further requiring that $x^{i}(\lambda=0)=\vec{0}$, they will start at the
origin of our coordinate system. These geodesics are then given by
$\displaystyle z(\lambda)$ $\displaystyle=P_{z}\lambda$ (A.9)
$\displaystyle\chi(\lambda)$ $\displaystyle=P_{\chi}\lambda$ (A.10)
$\displaystyle\phi(\lambda)$ $\displaystyle=\phi_{0}\,,$ (A.11)
under the constraint that $P_{z}^{2}$ $+$ $P_{\chi}^{2}=1$. This strongly
motivates a change of coordinates $z=\rho\cos(\theta)$,
$\chi=\rho\sin(\theta)$, under which the metric (A.1) changes to,
${\rm d}\Sigma_{3}^{2}={\rm d}\rho^{2}+\rho^{2}{\rm
d}\theta^{2}+S_{\kappa}(\rho\sin(\theta))^{2}{\rm d}\phi^{2}\,.$ (A.12)
In these coordinates, the geodesics look like
$\displaystyle\rho(\lambda)$ $\displaystyle=\lambda$ (A.13)
$\displaystyle\theta(\lambda)$ $\displaystyle=\theta_{0}$ (A.14)
$\displaystyle\phi(\lambda)$ $\displaystyle=\phi_{0}\,.$ (A.15)
Hence (radial) proper distance to the origin is simply given by
$\ell_{\text{rad}}=\lambda_{f}\equiv\rho_{0}$.
## Appendix B Geodesics of the $\widetilde{\text{U}(\mathds{H}^{2})}$
geometry
We start with the metric that we derived in Section 2,
${\rm d}\Sigma_{3}^{2}={\rm d}x^{2}+{\rm d}y^{2}\cosh^{2}(x/L)+\big{(}{\rm
d}y\sinh(x/L)+{\rm d}z\big{)}^{2}={\rm d}x^{2}+{\rm
d}y^{2}\cosh(2x/L)+{\rm d}z^{2}+2{\rm d}y{\rm d}z\sinh(x/L),$ (B.1)
where we have written $L=1/\sqrt{\kappa}$ for notational convenience. This
metric has two obvious Killing vectors, $\partial_{y}$ and $\partial_{z}$,
which lead to two conserved quantities and therefore to two first-order
equations.
$\displaystyle Q_{1}$ $\displaystyle=\dot{y}\cosh(2x/L)+\dot{z}\sinh(x/L)$
$\displaystyle\longrightarrow$ $\displaystyle\dot{y}$
$\displaystyle=\frac{Q_{1}-Q_{2}\sinh(x/L)}{\cosh^{2}(x/L)}.$ (B.2)
$\displaystyle Q_{2}$ $\displaystyle=\dot{y}\sinh(x/L)+\dot{z}$
$\displaystyle\longrightarrow$ $\displaystyle\dot{z}$
$\displaystyle=\frac{-Q_{1}\sinh(x/L)+Q_{2}\cosh(2x/L)}{\cosh^{2}(x/L)}.$
(B.3)
We can again use the space-like nature of the geodesics to solve for $x$.
$\displaystyle 1=g_{ij}\frac{{\rm d}x^{i}}{{\rm d}\lambda}\frac{{\rm
d}x^{j}}{{\rm
d}\lambda}=\dot{x}^{2}+\dot{y}^{2}\cosh^{2}(x/L)+\Big{(}\dot{y}\sinh(x/L)+\dot{z}\Big{)}^{2},$
(B.4)
Plugging in the conserved charges from (B.2) and (B.3) gives us
$\displaystyle\dot{x}=\pm\sqrt{1-Q_{2}^{2}-\frac{\Big{(}Q_{1}-Q_{2}\sinh(x/L)\Big{)}^{2}}{\cosh^{2}(x/L)}}.$
(B.5)
Note that at the origin $(x,y,z)=\vec{0}$ the first-order equations reduce to
$\displaystyle\dot{x}$ $\displaystyle=\pm\sqrt{1-Q_{1}^{2}-Q_{2}^{2}}$ (B.6)
$\displaystyle\dot{y}$ $\displaystyle=Q_{1}$ (B.7) $\displaystyle\dot{z}$
$\displaystyle=Q_{2}.$ (B.8)
This means we can reformulate the conserved charges for geodesics crossing the
origin in terms of initial momenta, $P_{x}=\pm\sqrt{1-Q_{1}^{2}-Q_{2}^{2}}$,
$P_{y}=Q_{1}$ and $P_{z}=Q_{2}$, subject to the constraint that
$P_{x}^{2}+P_{y}^{2}+P_{z}^{2}=1$. Since $P_{x}$ must be real-valued, this
tells us that $Q_{1}^{2}+Q_{2}^{2}\leq 1$.
We can now rewrite equation (B.5) as an integral equation by using separation
of variables,
$\displaystyle\int{\rm d}\lambda=\int{\rm
d}x\left[1-Q_{2}^{2}-\frac{\Big{(}Q_{1}-Q_{2}\sinh(x/L)\Big{)}^{2}}{\cosh^{2}(x/L)}\right]^{-\frac{1}{2}}.$
(B.9)
We now substitute $w=\sinh(x/L)$, so that ${\rm d}w=\tfrac{{\rm
d}x}{L}\cosh(x/L)=\tfrac{{\rm d}x}{L}\sqrt{1+w^{2}}$. With this substitution
we can rewrite the integral equation as,
$\displaystyle\int{\rm d}\lambda$ $\displaystyle=L\int\frac{{\rm
d}w}{\sqrt{1+w^{2}}}\left[1-Q_{2}^{2}-\tfrac{\big{(}Q_{1}-Q_{2}w\big{)}^{2}}{1+w^{2}}\right]^{-\frac{1}{2}}$
(B.10) $\displaystyle\int{\rm d}\lambda$ $\displaystyle=L\int\frac{{\rm
d}w}{\sqrt{\smash[b]{\underbrace{\Big{(}1-2Q_{2}^{2}\Big{)}}_{A}}w^{2}+\smash[b]{\underbrace{\Big{(}2Q_{1}Q_{2}\Big{)}}_{B}}w+\smash[b]{\underbrace{\Big{(}1-Q_{1}^{2}-Q_{2}^{2}\Big{)}}_{C}}}}.$
(B.11)
We have now reduced solving the geodesic equation to solving the following
integral:
$\displaystyle\mathcal{I}=\int\frac{{\rm d}w}{\sqrt{Aw^{2}+Bw+C}}.$ (B.12)
Since we are interested in real-valued solutions, we will restrict ourselves
to the region where $p\equiv Aw^{2}+Bw+C>0$. Note that $A$ and $B$ take values
in the interval $[-1,1]$ depending on $Q_{1}$ and $Q_{2}$ while $C=P_{x}^{2}$
is restricted to $[0,1]$. This means we must proceed carefully as both $A$ and
the discriminant $\Delta\equiv B^{2}-4AC$ have the potential to take positive
and negative values.
To more easily study the different regions, we parameterise the initial
velocities as
$\displaystyle P_{x}$ $\displaystyle=\sin(\theta)\cos(\phi)$ (B.13)
$\displaystyle P_{y}$ $\displaystyle=\sin(\theta)\sin(\phi)$ (B.14)
$\displaystyle P_{z}$ $\displaystyle=\cos(\theta).$ (B.15)
We can then plot the sign of $A$ and $\Delta$ on the $(\phi,\theta)$ plane in
Figure 26 below to visualise the relevant regions.
Figure 26: An overview of the signs of $A$ and $\Delta$ in the $(\phi,\theta)$ plane.
Case 1: $\mathbf{A>0}$.
The first big region we consider is the region where $A$ is positive. In this
case, we can complete the square and substitute $u=w+B/2A$ to get,
$\displaystyle\mathcal{I}=\frac{1}{\sqrt{A}}\int\frac{{\rm
d}w}{\sqrt{\left(w+\frac{B}{2A}\right)^{2}-\frac{\Delta}{4A^{2}}}}=\frac{1}{\sqrt{A}}\int\frac{{\rm
d}u}{\sqrt{u^{2}-\frac{\Delta}{4A^{2}}}}\,.$ (B.16)
If $\Delta\neq 0$ then we can define $a=\sqrt{|\Delta|}/2A$ and write
equation (B.16) as
$\displaystyle\mathcal{I}=\begin{dcases}\frac{1}{\sqrt{A}}\int\frac{{\rm
d}u}{\sqrt{u^{2}+a^{2}}}=\frac{1}{\sqrt{A}}\ln\left|\frac{u+\sqrt{u^{2}+a^{2}}}{a}\right|=\frac{1}{\sqrt{A}}\ln\left|\frac{2Aw+B+2\sqrt{A}\sqrt{p}}{\sqrt{-\Delta}}\right|\,,&\text{if
}\Delta<0\\\ \frac{1}{\sqrt{A}}\int\frac{{\rm
d}u}{\sqrt{u^{2}-a^{2}}}=\frac{1}{\sqrt{A}}\ln\left|\frac{u+\sqrt{u^{2}-a^{2}}}{a}\right|=\frac{1}{\sqrt{A}}\ln\left|\frac{2Aw+B+2\sqrt{A}\sqrt{p}}{\sqrt{\Delta}}\right|\,,&\text{if
}\Delta>0\,.\end{dcases}$ (B.17)
Let’s investigate the sign of the expression inside the absolute value
brackets. Since $A>0$ and $p>0$ by assumption, we can only have an overall
minus sign inside the logarithm if $|2Aw+B|>2\sqrt{A}\sqrt{Aw^{2}+Bw+C}$ and
$2Aw+B<0$. The first condition is equivalent to $\Delta>0$ (square both sides
and rearrange), and the second one to $w<-B/2A$. This means that we never get
an overall minus sign in the $\Delta<0$ case and only sometimes for the
$\Delta>0$ case.
Since the first condition is met for the $\Delta>0$ case by assumption, we
know that the polynomial $p$ has two distinct real-valued roots,
$w_{\pm}=-B/2A\pm\sqrt{\Delta}/2A$. Since $A>0$ we know that $p$ takes non-
positive values for $w\in[w_{-},w_{+}]$ and so $\mathcal{I}$ will only have
real-valued solutions for $w\in(-\infty,w_{-})\cup(w_{+},\infty)$. Since
$2Aw+B$ is negative if $w<-B/2A$, this means that the argument of the
logarithm in the $\Delta>0$ case is negative when $w<w_{-}$ and positive when
$w>w_{+}$.
In the third case, $\Delta=0$, $p$ has exactly one root $w_{0}=-B/2A$ and we
can rewrite equation (B.16) as
$\displaystyle\mathcal{I}=\frac{1}{\sqrt{A}}\int\frac{{\rm
d}u}{\sqrt{u^{2}}}=\frac{\text{sign}(u)}{\sqrt{A}}\ln|u|=\begin{dcases}-\frac{1}{\sqrt{A}}\ln\left(-w-B/2A\right)\,,&\text{if
}\Delta=0\text{ and }w<w_{0}\\\
+\frac{1}{\sqrt{A}}\ln\left(+w+B/2A\right)\,,&\text{if }\Delta=0\text{ and
}w>w_{0}\,.\end{dcases}$ (B.18)
So far we have left the integration constant implicit. We can pick and choose
convenient values for this integration constant to write a complete solution
to equation (B.12) when $A>0$:
$\displaystyle\mathcal{I}=\begin{dcases}+\frac{1}{\sqrt{A}}\ln\left(-2Aw-B-2\sqrt{A}\sqrt{Aw^{2}+Bw+C}\right)\,,&\text{if
}\Delta>0\text{ and }w<w_{-}\\\
-\frac{1}{\sqrt{A}}\ln\left(-2Aw-B-2\sqrt{A}\sqrt{Aw^{2}+Bw+C}\right)\,,&\text{if
}\Delta=0\text{ and }w<w_{0}\\\
+\frac{1}{\sqrt{A}}\ln\left(+2Aw+B+2\sqrt{A}\sqrt{Aw^{2}+Bw+C}\right)\,,&\text{otherwise}\,.\end{dcases}$
(B.19)
Figure 27 below visualises the different regions of this solution.
Figure 27: An overview of the domain of $\mathcal{I}$ in the case where $A>0$.
The three solutions above have their domain on the three regions specified,
the first being above the dashed line, the second being on the dashed line and
the third being on the other side of the dashed line. $p=0$ on the dotted
lines and at (0,0), and the grey area indicates that $p<0$; there are no real-
valued solutions to $\mathcal{I}$ in these regions.
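The antiderivative (B.19) can be spot-checked numerically: differentiating it with respect to $w$ must return the integrand $1/\sqrt{p}$ of (B.12), where $p=Aw^{2}+Bw+C$ appears under every radical. A plain-Python sketch with illustrative values of $Q_{1}$, $Q_{2}$, covering both relevant branches:

```python
import math

def p(w, A, B, C):
    # the quadratic under the square root in (B.12)
    return A * w * w + B * w + C

def I_of(w, A, B, C):
    # antiderivative (B.19); p = A w^2 + B w + C under every radical
    disc = B * B - 4.0 * A * C
    rA = math.sqrt(A)
    if disc > 0 and w < (-B - math.sqrt(disc)) / (2.0 * A):
        return (1.0 / rA) * math.log(-2.0 * A * w - B
                                     - 2.0 * rA * math.sqrt(p(w, A, B, C)))
    return (1.0 / rA) * math.log(2.0 * A * w + B
                                 + 2.0 * rA * math.sqrt(p(w, A, B, C)))

def check(Q1, Q2, w):
    # |dI/dw - 1/sqrt(p)| via a central difference
    A, B, C = 1.0 - 2.0 * Q2 ** 2, 2.0 * Q1 * Q2, 1.0 - Q1 ** 2 - Q2 ** 2
    h = 1e-6
    dI = (I_of(w + h, A, B, C) - I_of(w - h, A, B, C)) / (2.0 * h)
    return abs(dI - 1.0 / math.sqrt(p(w, A, B, C)))

print(check(0.5, 0.4, 0.3))      # Delta < 0 ("otherwise") branch, ~ 0
print(check(0.71, 0.7, -60.0))   # Delta > 0, w < w_- branch, ~ 0
```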
We now plug this result into equation (B.11) to get an equation for the
geodesic,
$\displaystyle\begin{cases}\chi{\rm
e}^{+\sqrt{A}\frac{\lambda}{L}}=-2Aw-B-2\sqrt{A}\sqrt{Aw^{2}+Bw+C}\,,&\text{if
}\Delta>0\text{ and }w<w_{-}\\\ \tfrac{1}{\chi}{\rm
e}^{-\sqrt{A}\frac{\lambda}{L}}=-2Aw-B-2\sqrt{A}\sqrt{Aw^{2}+Bw+C}\,,&\text{if
}\Delta=0\text{ and }w<w_{0}\\\ \chi{\rm
e}^{+\sqrt{A}\frac{\lambda}{L}}=+2Aw+B+2\sqrt{A}\sqrt{Aw^{2}+Bw+C}\,,&\text{otherwise}\,.\end{cases}$
(B.20)
Here $\chi>0$ is an integration constant coming from the ${\rm d}\lambda$-integral.
We can straightforwardly invert these relations to get $w$ as a function of
$\lambda$,
$\displaystyle
w(\lambda)=\begin{dcases}+\frac{B^{2}-4AC-\chi^{2}}{4A\chi}\sinh\left(\sqrt{A}\frac{\lambda}{L}\right)-\frac{B^{2}-4AC+\chi^{2}}{4A\chi}\cosh\left(\sqrt{A}\frac{\lambda}{L}\right)-\frac{B}{2A}\,,&\text{if
}\Delta>0\text{ and }w<w_{-}\\\
-\frac{B^{2}-4AC-\chi^{2}}{4A\chi}\sinh\left(\sqrt{A}\frac{\lambda}{L}\right)-\frac{B^{2}-4AC+\chi^{2}}{4A\chi}\cosh\left(\sqrt{A}\frac{\lambda}{L}\right)-\frac{B}{2A}\,,&\text{if
}\Delta=0\text{ and }w<w_{0}\\\
-\frac{B^{2}-4AC-\chi^{2}}{4A\chi}\sinh\left(\sqrt{A}\frac{\lambda}{L}\right)+\frac{B^{2}-4AC+\chi^{2}}{4A\chi}\cosh\left(\sqrt{A}\frac{\lambda}{L}\right)-\frac{B}{2A}\,,&\text{otherwise}\,.\end{dcases}$
(B.21)
The requirement that these geodesics are radial translates to $x(0)=w(0)=0$,
which fixes $\chi$.
$\displaystyle\chi=\begin{dcases}-B\pm 2\sqrt{A}\sqrt{C}\,,&\text{if
}\Delta>0\text{ and }w<w_{-}\\\ -B\pm 2\sqrt{A}\sqrt{C}\,,&\text{if
}\Delta=0\text{ and }w<w_{0}\\\ +B\pm
2\sqrt{A}\sqrt{C}\,,&\text{otherwise}\,.\end{dcases}$ (B.22)
If we plug these values for $\chi$ into the expressions for $w(\lambda)$ they
collapse to a single expression that is valid for all cases:
$\displaystyle w(\lambda)$
$\displaystyle=\pm\frac{\sqrt{C}}{\sqrt{A}}\sinh\left(\sqrt{A}\frac{\lambda}{L}\right)+\frac{B}{2A}\left[\cosh\left(\sqrt{A}\frac{\lambda}{L}\right)-1\right]$
(B.23) $\displaystyle w(\lambda)$
$\displaystyle=\frac{P_{x}}{\sqrt{1-2P_{z}^{2}}}\sinh\left(\sqrt{1-2P_{z}^{2}}\frac{\lambda}{L}\right)+\frac{P_{y}P_{z}}{1-2P_{z}^{2}}\left[\cosh\left(\sqrt{1-2P_{z}^{2}}\frac{\lambda}{L}\right)-1\right]$
(B.24) $\displaystyle x(\lambda)$
$\displaystyle=L\sinh^{-1}\left[\frac{P_{x}}{\sqrt{1-2P_{z}^{2}}}\sinh\left(\sqrt{1-2P_{z}^{2}}\frac{\lambda}{L}\right)+\frac{P_{y}P_{z}}{1-2P_{z}^{2}}\left[\cosh\left(\sqrt{1-2P_{z}^{2}}\frac{\lambda}{L}\right)-1\right]\right].$
(B.25)
This expression is a solution to (B.4) and has the correct large-$L$
behaviour, $\lim_{L\rightarrow\infty}x(\lambda)=P_{x}\lambda$.
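Both claims are easy to verify numerically: the closed form (B.24)–(B.25) should satisfy the first-order equation (B.5) with $Q_{1}=P_{y}$, $Q_{2}=P_{z}$, and reduce to $P_{x}\lambda$ for large $L$. A plain-Python sketch with an illustrative initial direction:

```python
import math

# Initial direction with P_z^2 < 1/2, so the A > 0 branch applies.
Px, Py = 0.6, 0.5
Pz = math.sqrt(1.0 - Px ** 2 - Py ** 2)   # P_z^2 ~ 0.39
A = 1.0 - 2.0 * Pz ** 2

def x_of(lam, L):
    # closed form (B.24)-(B.25)
    rA = math.sqrt(A)
    w = (Px / rA) * math.sinh(rA * lam / L) \
        + (Py * Pz / A) * (math.cosh(rA * lam / L) - 1.0)
    return L * math.asinh(w)

def xdot_rhs(x, L):
    # right-hand side of (B.5) with Q1 = P_y, Q2 = P_z (plus branch)
    return math.sqrt(1.0 - Pz ** 2
                     - (Py - Pz * math.sinh(x / L)) ** 2 / math.cosh(x / L) ** 2)

# 1) the closed form satisfies the first-order equation (B.5):
lam, h, L = 0.4, 1e-5, 1.0
xdot_num = (x_of(lam + h, L) - x_of(lam - h, L)) / (2 * h)
print(abs(xdot_num - xdot_rhs(x_of(lam, L), L)))   # ~ 0

# 2) flat-space limit: x(lambda) -> P_x lambda as L -> infinity
print(abs(x_of(lam, 1e6) - Px * lam))              # ~ 0
```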
Case 2: $\mathbf{A=0}$.
If $A=0$ then $\Delta=B^{2}$ and we only need to consider the $\Delta=0$ and
$\Delta>0$ scenarios. If $\Delta>0$ then $p$ reduces to a first-order
polynomial $Bw+C$ and if $\Delta=0$ then $p$ reduces to a constant $C$. We can
then simply write,
$\displaystyle\lambda-\lambda_{0}=\begin{dcases}L\frac{2\sqrt{Bw+C}}{B}\,,&\text{if
}\Delta>0\\\ L\frac{w}{\sqrt{C}}\,,&\text{if }\Delta=0\,.\end{dcases}$ (B.26)
We can invert these relationships and require that $x(0)=0$ to get
$\displaystyle w(\lambda)$
$\displaystyle=\pm\sqrt{C}\frac{\lambda}{L}+\frac{B}{4}\frac{\lambda^{2}}{L^{2}}$
(B.27) $\displaystyle w(\lambda)$
$\displaystyle=P_{x}\frac{\lambda}{L}+\frac{P_{y}P_{z}}{2}\frac{\lambda^{2}}{L^{2}}$
(B.28) $\displaystyle x(\lambda)$
$\displaystyle=L\sinh^{-1}\left[P_{x}\frac{\lambda}{L}+\frac{P_{y}P_{z}}{2}\frac{\lambda^{2}}{L^{2}}\right].$
(B.29)
This expression again covers both the $\Delta=0$ and $\Delta>0$ cases and
solves (B.4).
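A quick numerical check of this branch: with $P_{z}^{2}=1/2$ the closed form (B.29), combined with $\dot{y}$ and $\dot{z}$ from (B.2)–(B.3), should satisfy the space-like constraint (B.4). Plain Python, with the illustrative choice $P_{x}=P_{y}=1/2$:

```python
import math

# The A = 0 case: P_z^2 = 1/2, with P_x = P_y = 1/2 so that |P| = 1.
L = 1.0
Pz = math.sqrt(0.5)
Px, Py = 0.5, 0.5

def x_of(lam):
    # closed form (B.29)
    w = Px * lam / L + 0.5 * Py * Pz * (lam / L) ** 2
    return L * math.asinh(w)

lam, h = 0.8, 1e-5
xd = (x_of(lam + h) - x_of(lam - h)) / (2 * h)
x = x_of(lam)
sh, ch = math.sinh(x / L), math.cosh(x / L)
yd = (Py - Pz * sh) / ch ** 2                            # (B.2), Q1 = Py
zd = (-Py * sh + Pz * math.cosh(2.0 * x / L)) / ch ** 2  # (B.3), Q2 = Pz
# the space-like constraint (B.4) holds along the curve:
norm = xd ** 2 + yd ** 2 * ch ** 2 + (yd * sh + zd) ** 2
print(abs(norm - 1.0))   # ~ 0
```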
Case 3: $\mathbf{A<0}$.
When $A<0$ we know that $\Delta=B^{2}-4AC\geq 0$ since $C$ is non-negative. If
$\Delta=0$ then $P_{x}=0$, $P_{y}=0$ and $P_{z}=\pm 1$ so that
$\vec{x}=(0,0,P_{z}\lambda)$ is a solution to the geodesic equation.
If $\Delta>0$ then we proceed similarly to the $A>0$ case. We now only find
real-valued solutions for $\mathcal{I}$ if $\Delta>0$ and $w\in(w_{-},w_{+})$.
This means we can again complete the square and substitute $u=w+B/2A$ and
$a=\sqrt{\Delta}/2A$ to get
$\displaystyle\mathcal{I}=\frac{1}{\sqrt{-A}}\int\frac{{\rm
d}w}{\sqrt{-\left(w+\frac{B}{2A}\right)^{2}+\frac{\Delta}{4A^{2}}}}=\frac{1}{\sqrt{-A}}\int\frac{{\rm
d}u}{\sqrt{-u^{2}+a^{2}}}\,.$ (B.30)
This is a standard integral and we can therefore find a complete solution for
(B.11) in the $A<0$ case as
$\displaystyle\lambda-\lambda_{0}=-\frac{L}{\sqrt{-A}}\arcsin\left(\frac{u}{a}\right)=-\frac{L}{\sqrt{-A}}\arcsin\left(\frac{2Aw+B}{\sqrt{\Delta}}\right)\,,$
$\displaystyle\quad\text{where }|2Aw+B|<\sqrt{\Delta}\,.$ (B.31)
We again invert these relationships and require that $x(0)=0$ to get
$\displaystyle w(\lambda)$
$\displaystyle=\frac{\sqrt{\Delta}}{2(-A)}\sin\left(\sqrt{-A}\frac{\lambda}{L}-\arcsin\left(\frac{B}{\sqrt{\Delta}}\right)\right)-\frac{B}{2A}$
(B.32) $\displaystyle w(\lambda)$
$\displaystyle=\frac{\sqrt{P_{y}^{2}P_{z}^{2}+P_{x}^{2}(2P_{z}^{2}-1)}}{2P_{z}^{2}-1}\sin\left[\sqrt{2P_{z}^{2}-1}\frac{\lambda}{L}-\arcsin\left(\frac{P_{y}P_{z}}{\sqrt{P_{y}^{2}P_{z}^{2}+P_{x}^{2}(2P_{z}^{2}-1)}}\right)\right]+\frac{P_{y}P_{z}}{2P_{z}^{2}-1}$
(B.33) $\displaystyle x(\lambda)$
$\displaystyle=L\sinh^{-1}\left[w(\lambda)\right]\,.$ (B.34)
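The $A<0$ branch can be checked the same way as the $A>0$ one: $x(\lambda)$ built from (B.33)–(B.34) should satisfy the first-order equation (B.5). Note that in the sketch below the constant term is taken with a plus sign, $+P_{y}P_{z}/(2P_{z}^{2}-1)$, which is what the initial condition $w(0)=0$ requires. Illustrative values in plain Python:

```python
import math

# An initial direction with 1/2 < P_z^2 < 1, so A < 0.
L = 1.0
Px = 0.3
Pz = math.sqrt(0.7)
Py = math.sqrt(1.0 - Px ** 2 - Pz ** 2)
d = 2.0 * Pz ** 2 - 1.0
Q = math.sqrt(Py ** 2 * Pz ** 2 + Px ** 2 * d)

def x_of(lam):
    # closed form (B.33)-(B.34); the constant term carries a plus sign,
    # as required by x(0) = 0
    w = (Q / d) * math.sin(math.sqrt(d) * lam / L - math.asin(Py * Pz / Q)) \
        + Py * Pz / d
    return L * math.asinh(w)

def xdot_rhs(x):
    # right-hand side of (B.5) with Q1 = P_y, Q2 = P_z (plus branch)
    return math.sqrt(1.0 - Pz ** 2
                     - (Py - Pz * math.sinh(x / L)) ** 2 / math.cosh(x / L) ** 2)

lam, h = 0.5, 1e-5
xdot_num = (x_of(lam + h) - x_of(lam - h)) / (2 * h)
print(abs(x_of(0.0)))                        # ~ 0: starts at the origin
print(abs(xdot_num - xdot_rhs(x_of(lam))))   # ~ 0: solves (B.5)
```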
Final geodesics.
We can now write the geodesics for the $\widetilde{\text{U}(\mathds{H}^{2})}$
geometry in a compact form.
$\displaystyle x(\lambda)$ $\displaystyle=L\sinh^{-1}\left[w(\lambda)\right]$
(B.35) $\displaystyle y(\lambda)$ $\displaystyle=\int{\rm
d}\lambda\;\frac{-P_{z}w+P_{y}}{1+w^{2}}$ (B.36) $\displaystyle z(\lambda)$
$\displaystyle=\int{\rm
d}\lambda\;\frac{2P_{z}w^{2}-P_{y}w+P_{z}}{1+w^{2}}\,,$ (B.37)
where $w$ is determined by the value of $P_{z}$,
$\displaystyle\\!\\!\\!w(\lambda)=\begin{cases}\frac{P_{x}}{\sqrt{1-2P_{z}^{2}}}\sinh\left(\sqrt{1-2P_{z}^{2}}\frac{\lambda}{L}\right)+\frac{P_{y}P_{z}}{1-2P_{z}^{2}}\left[\cosh\left(\sqrt{1-2P_{z}^{2}}\frac{\lambda}{L}\right)-1\right]\,,&\text{if
}P_{z}^{2}<1/2\\\
P_{x}\frac{\lambda}{L}+\frac{P_{y}P_{z}}{2}\frac{\lambda^{2}}{L^{2}}\,,&\text{if
}P_{z}^{2}=1/2\\\
\frac{\sqrt{P_{y}^{2}P_{z}^{2}+P_{x}^{2}(2P_{z}^{2}-1)}}{2P_{z}^{2}-1}\sin\\!\left[\sqrt{2P_{z}^{2}-1}\frac{\lambda}{L}-\arcsin\left(\frac{P_{y}P_{z}}{\sqrt{P_{y}^{2}P_{z}^{2}+P_{x}^{2}(2P_{z}^{2}-1)}}\right)\right]+\frac{P_{y}P_{z}}{2P_{z}^{2}-1}\,,&\text{if
}1/2<P_{z}^{2}<1,\end{cases}$ (B.38)
and we have an additional constraint that $P_{x}^{2}+P_{y}^{2}+P_{z}^{2}=1$.
The fact that these expressions are not smooth with respect to the initial
velocity $\hat{P}$ should not worry us. This feature exists in many other
dynamical systems, cf. elliptic, parabolic and hyperbolic trajectories in
orbital mechanics.
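The matching of the three branches of (B.38) across $P_{z}^{2}=1/2$ can also be seen numerically: evaluating $w(\lambda)$ for $P_{z}^{2}$ slightly below, at, and slightly above $1/2$ gives values differing only at the order of the offset. A plain-Python sketch (illustrative $\lambda$ and azimuth; the case-3 constant term is taken with the sign fixed by $w(0)=0$):

```python
import math

L, lam, phi = 1.0, 0.7, 0.8   # illustrative distance and azimuth

def w_of(Pz2):
    # w(lambda) from (B.38); P_x, P_y chosen so that |P| = 1
    s = math.sqrt(1.0 - Pz2)
    Px, Py, Pz = s * math.cos(phi), s * math.sin(phi), math.sqrt(Pz2)
    u = lam / L
    if Pz2 < 0.5:
        a2 = 1.0 - 2.0 * Pz2
        a = math.sqrt(a2)
        return (Px / a) * math.sinh(a * u) \
            + (Py * Pz / a2) * (math.cosh(a * u) - 1.0)
    if Pz2 == 0.5:
        return Px * u + 0.5 * Py * Pz * u * u
    d = 2.0 * Pz2 - 1.0
    Q = math.sqrt(Py ** 2 * Pz ** 2 + Px ** 2 * d)
    # constant term +PyPz/(2Pz^2-1): this is what w(0) = 0 requires
    return (Q / d) * math.sin(math.sqrt(d) * u - math.asin(Py * Pz / Q)) \
        + Py * Pz / d

eps = 1e-6
print(abs(w_of(0.5 - eps) - w_of(0.5)))   # O(eps): branches join smoothly
print(abs(w_of(0.5 + eps) - w_of(0.5)))   # O(eps)
```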
To visualise these geodesics, Figures 28, 29 and 30 show the trajectories of
the above geodesics along selected directions emanating from the origin. The
curved red, green and blue surfaces are traced by the sets of geodesics with
an initial direction orthogonal to the $x$-, $y$- and $z$-direction
respectively. The magenta, cyan and yellow surfaces are traced by sets of
geodesics with an initial direction in a small cone around the $x$-, $y$- and
$z$-direction respectively. The grey surfaces are similar, but are cones
around initial directions with $|P_{x}|=|P_{y}|=|P_{z}|$. To give a clearer
visualisation of the effects of curvature, we have exaggerated them by
drawing the surfaces from distance $\lambda=0$ to distance
$\lambda=5\eta_{*}$.
Figure 28: Geodesics of the $\widetilde{\text{U}(\mathds{H}^{2})}$ geometry.
Note that the cyan cone is stretched horizontally, the magenta cone is
stretched more harshly vertically, while the yellow cone is largely
unaffected.
Figure 29: The same visualisation, viewed from a higher angle to show a
clockwise twisting effect of geodesics about the $z$-axis of the geometry; a
similar counter-clockwise twist can be observed around the $y$-axis in the
first plot.
Figure 30: However, around the $x$-axis of the geometry geodesics are skewed
towards a diagonal direction rather than twisted.
## Appendix C Geodesics of the Nil geometry
The Nil case proceeds along the same general lines, starting from the spatial
sections of the Nil metric (2.18),
$\displaystyle{\rm d}\Sigma_{3}^{2}={\rm
d}x^{2}+\left(1+\frac{x^{2}}{L^{2}}\right){\rm d}y^{2}+{\rm
d}z^{2}-\frac{2x}{L}\ {\rm d}y{\rm d}z\,.$ (C.1)
Note that we have written $L=1/\sqrt{-\kappa}$ for notational convenience. Nil
has two obvious Killing vectors, $\partial_{y}$ and $\partial_{z}$, which lead
to two conserved quantities and therefore to two first-order equations,
$\displaystyle Q_{1}$
$\displaystyle=\dot{y}\left(1+\frac{x^{2}}{L^{2}}\right)-x\dot{z}/L$
$\displaystyle\longrightarrow$ $\displaystyle\dot{y}$
$\displaystyle=Q_{1}+Q_{2}x/L.$ (C.2) $\displaystyle Q_{2}$
$\displaystyle=\dot{z}-x\dot{y}/L$ $\displaystyle\longrightarrow$
$\displaystyle\dot{z}$
$\displaystyle=Q_{2}\left(1+\frac{x^{2}}{L^{2}}\right)+Q_{1}x/L\,.$ (C.3)
Unfortunately, the Nil equivalent of (A.5) does not allow for separation of
variables and we must solve the geodesic equation for $x$ directly,
$\displaystyle\ddot{x}(\lambda)+\frac{Q_{2}^{2}}{L^{2}}\left(x(\lambda)+\frac{Q_{1}L}{Q_{2}}\right)=0\,.$
(C.4)
This is the equation for a (shifted) simple harmonic oscillator with angular
frequency $\omega=Q_{2}/L$ and it is solved by,
$\displaystyle
x(\lambda)+\frac{Q_{1}L}{Q_{2}}=C_{1}\cos\left(\omega\lambda\right)+C_{2}\sin\left(\omega\lambda\right)\,.$
(C.5)
From this result it is a straightforward, if tedious, exercise to derive a
full solution for $y$ and $z$. We will omit the details of this calculation
and skip directly to the solution in a convenient form,
$\displaystyle x(\lambda)$
$\displaystyle=x_{0}+\frac{P_{x}}{\omega}\sin(\omega\lambda)-\frac{P_{y}}{\omega}\Big{[}1-\cos(\omega\lambda)\Big{]}$
(C.6) $\displaystyle y(\lambda)$
$\displaystyle=y_{0}+\frac{P_{y}}{\omega}\sin(\omega\lambda)+\frac{P_{x}}{\omega}\Big{[}1-\cos(\omega\lambda)\Big{]}$
(C.7) $\displaystyle z(\lambda)$
$\displaystyle=z_{0}+L\omega\lambda+\frac{x_{0}P_{y}}{L\omega}\sin(\omega\lambda)+\frac{P_{x}^{2}}{4L\omega^{2}}\Big{[}2\omega\lambda-\sin(2\omega\lambda)\Big{]}+\frac{P_{y}^{2}}{4L\omega^{2}}\Big{[}2\omega\lambda-4\sin(\omega\lambda)+\sin(2\omega\lambda)\Big{]}$
(C.8)
$\displaystyle+\frac{2P_{x}}{L\omega^{2}}\Big{[}x_{0}\omega+P_{y}\cos(\omega\lambda)\Big{]}\sin^{2}\left(\frac{\omega\lambda}{2}\right)\,,$
(C.9)
where we have chosen the $Q$s, $C$s and the constants arising from integrating
(C.2) and (C.3) so that $x^{i}(0)=x^{i}_{0}$ and $\dot{x}^{i}(0)=P_{i}$, and
the angular frequency is defined as,
$\omega=\dfrac{LP_{z}-P_{y}x_{0}}{L^{2}}$.
Radial geodesics.
To make these geodesics radial, we set $x_{0}=y_{0}=z_{0}=0$. The angular
frequency now simplifies to $\omega=P_{z}/L$ and we may write exact
expressions for the geodesics as
$\displaystyle x(\lambda)$
$\displaystyle=\frac{LP_{x}}{P_{z}}\sin\left(P_{z}\frac{\lambda}{L}\right)-\frac{LP_{y}}{P_{z}}\left[1-\cos\left(P_{z}\frac{\lambda}{L}\right)\right]$
(C.10) $\displaystyle y(\lambda)$
$\displaystyle=\frac{LP_{y}}{P_{z}}\sin\left(P_{z}\frac{\lambda}{L}\right)+\frac{LP_{x}}{P_{z}}\left[1-\cos\left(P_{z}\frac{\lambda}{L}\right)\right]$
(C.11) $\displaystyle z(\lambda)$
$\displaystyle=P_{z}\lambda+\frac{P_{x}^{2}}{4P_{z}^{2}}\left[2P_{z}\lambda-L\sin\left(\frac{2P_{z}\lambda}{L}\right)\right]+\frac{P_{y}^{2}}{4P_{z}^{2}}\left[2P_{z}\lambda-4L\sin\left(\frac{P_{z}\lambda}{L}\right)+L\sin\left(\frac{2P_{z}\lambda}{L}\right)\right]$
(C.12)
$\displaystyle+\frac{2LP_{x}P_{y}}{P_{z}^{2}}\cos\left(\frac{P_{z}\lambda}{L}\right)\sin^{2}\left(\frac{P_{z}\lambda}{2L}\right)\,.$
(C.13)
Given these expressions, it is productive to reconsider equation (A.5), which
now reads
$\displaystyle 1=-\epsilon=P_{x}^{2}+P_{y}^{2}+P_{z}^{2}.$ (C.14)
In effect, this tells us that if we choose the initial velocity vector
$\vec{P}=(P_{x},P_{y},P_{z})$ to be of unit length, then $\lambda$
parameterises geodesic distance along the curve. Hence, proper (radial)
distance is again given by $\ell_{\text{rad}}=\lambda_{f}\equiv\rho_{0}$.
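The radial solution (C.10)–(C.13) can be verified numerically: finite-differencing the closed-form trajectory should reproduce the unit-speed constraint for the Nil metric (C.1), as well as the conserved charges (C.2)–(C.3) with $Q_{1}=P_{y}$, $Q_{2}=P_{z}$. A plain-Python sketch with an illustrative initial direction:

```python
import math

L = 1.0
Px, Py = 0.5, 0.3
Pz = math.sqrt(1.0 - Px ** 2 - Py ** 2)   # unit-length initial velocity

def traj(lam):
    # radial Nil geodesics (C.10)-(C.13)
    s = math.sin(Pz * lam / L)
    c = math.cos(Pz * lam / L)
    s2 = math.sin(2.0 * Pz * lam / L)
    x = (L * Px / Pz) * s - (L * Py / Pz) * (1.0 - c)
    y = (L * Py / Pz) * s + (L * Px / Pz) * (1.0 - c)
    z = (Pz * lam
         + (Px ** 2 / (4.0 * Pz ** 2)) * (2.0 * Pz * lam - L * s2)
         + (Py ** 2 / (4.0 * Pz ** 2)) * (2.0 * Pz * lam - 4.0 * L * s + L * s2)
         + (2.0 * L * Px * Py / Pz ** 2) * c * math.sin(Pz * lam / (2.0 * L)) ** 2)
    return x, y, z

lam, h = 0.9, 1e-5
xm, ym, zm = traj(lam - h)
xp, yp, zp = traj(lam + h)
x, y, z = traj(lam)
vx, vy, vz = (xp - xm) / (2 * h), (yp - ym) / (2 * h), (zp - zm) / (2 * h)

# unit-speed constraint for the Nil metric (C.1):
norm = vx ** 2 + (1.0 + x ** 2 / L ** 2) * vy ** 2 + vz ** 2 \
    - (2.0 * x / L) * vy * vz
print(abs(norm - 1.0))                                      # ~ 0
# the charges (C.2)-(C.3) keep their initial values Q1 = Py, Q2 = Pz:
print(abs(vy * (1.0 + x ** 2 / L ** 2) - x * vz / L - Py))  # ~ 0
print(abs(vz - x * vy / L - Pz))                            # ~ 0
```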
To visualise these geodesics we have again plotted select geodesics of this
geometry in Figures 31, 32 and 33. For a description of the surfaces, please
refer to the previous appendix. As in Appendix B, we have augmented the
effects of curvature for a clearer visualisation. For this geometry it was
sufficient to draw the surfaces from distance $\lambda=0$ to distance
$\lambda=2.5\eta_{*}$.
Figure 31: Geodesics of the Nil geometry. Like the
$\widetilde{\text{U}(\mathds{H}^{2})}$ geometry, the cyan cone is stretched
horizontally, the magenta cone is stretched more harshly vertically, while the
yellow cone is largely unaffected.
Figure 32: The same visualisation, viewed from a higher angle to show a
counter-clockwise twisting effect of geodesics about the $z$-axis of the
geometry; a similar twist can be observed around the $y$-axis in the first
plot.
Figure 33: However, around the $x$-axis of the geometry geodesics are skewed
towards a diagonal direction rather than twisted. Note that the diagonal
points the other way when compared to $\widetilde{\text{U}(\mathds{H}^{2})}$.
## Appendix D Geodesics of the Solv geometry
We again start from the metric introduced in (2.20) and look at its spatial sections,
${\rm d}\Sigma_{3}^{2}={\rm e}^{2z/L}\ {\rm d}x^{2}+{\rm e}^{-2z/L}\ {\rm
d}y^{2}+{\rm d}z^{2}\,.$ (D.1)
This metric admits three Killing vectors, $\partial_{x}$, $\partial_{y}$ and
$-\tfrac{x}{L}\partial_{x}+\tfrac{y}{L}\partial_{y}+\partial_{z}$; lowering
their indices with the metric gives the one-forms $K_{1}={\rm e}^{2z/L}{\rm
d}x$, $K_{2}={\rm e}^{-2z/L}{\rm d}y$ and $K_{3}=-\tfrac{x}{L}{\rm
e}^{2z/L}{\rm d}x+\tfrac{y}{L}{\rm e}^{-2z/L}{\rm d}y+{\rm d}z$. These lead
to the following first-order equations,
$\displaystyle\dot{x}$ $\displaystyle=$ $\displaystyle P_{x}{\rm e}^{-2z/L}$
(D.2) $\displaystyle\dot{y}$ $\displaystyle=$ $\displaystyle P_{y}{\rm
e}^{2z/L}$ (D.3) $\displaystyle\dot{z}$ $\displaystyle=$ $\displaystyle
P_{z}+P_{x}\frac{x}{L}-P_{y}\frac{y}{L}\,.$ (D.4)
Unlike in the previous cases, it is hard to solve this system exactly, and so
we will opt for a perturbative approach. Upon inserting equations (D.2) and
(D.3) into the space-like constraint (A.5) for the metric (D.1) one obtains,
$1=\frac{P_{x}^{2}}{{\rm e}^{2z/L}}+\frac{P_{y}^{2}}{{\rm
e}^{-2z/L}}+\dot{z}^{2}\;\Longrightarrow\int\frac{{\rm d}z}{\sqrt{1-{\rm
e}^{-2z/L}P_{x}^{2}-{\rm e}^{2z/L}P_{y}^{2}}}=\lambda\,.$ (D.5)
A convenient variable is $w={\rm e}^{z/L}$, in terms of which this equation
simplifies to,
$\int\frac{{\rm
d}w}{\sqrt{w^{2}-P_{x}^{2}-w^{4}P_{y}^{2}}}=\frac{\lambda}{L}\,.$ (D.6)
This is an elliptic integral, and so its solution can be expressed in terms of
elliptic functions. Instead of studying these general solutions, we opt for a
much simpler expansion in powers of $1/L$, with the results given in equations
(D.10)–(D.12).
Radial geodesics.
We can write $\vec{x}=\sum_{i=0}^{n}\vec{x}_{i}L^{-i}$ and expand the system
above to a desired order $L^{-n}$. To linear order, for example, this would
look like
$\displaystyle\dot{x}$ $\displaystyle=P_{x}\big{(}1-2z/L\big{)}$ (D.7)
$\displaystyle\dot{y}$ $\displaystyle=P_{y}\big{(}1+2z/L\big{)}$ (D.8)
$\displaystyle\dot{z}$
$\displaystyle=P_{z}+P_{x}\frac{x}{L}-P_{y}\frac{y}{L}.$ (D.9)
We can then require $x^{i}(0)=0$ and equate terms of equal power of $L$ to
solve the system. This gives the following result,
$\displaystyle x(\lambda)$ $\displaystyle=\lambda
P_{x}\left(1-P_{z}\frac{\lambda}{L}-\frac{1}{3}(1-2P_{y}^{2}-3P_{z}^{2})\frac{\lambda^{2}}{L^{2}}+...\right)$
(D.10) $\displaystyle y(\lambda)$ $\displaystyle=\lambda
P_{y}\left(1+P_{z}\frac{\lambda}{L}+\frac{1}{3}(1-2P_{y}^{2}+P_{z}^{2})\frac{\lambda^{2}}{L^{2}}+...\right)$
(D.11) $\displaystyle z(\lambda)$
$\displaystyle=P_{z}\lambda+\frac{P_{x}^{2}-P_{y}^{2}}{2}\frac{\lambda^{2}}{L}+\frac{P_{z}^{3}-P_{z}}{3}\frac{\lambda^{3}}{L^{2}}+...\,.$
(D.12)
We can evaluate the constraint equation (A.5) at $\lambda=0$, which tells us
that $P_{x}^{2}+P_{y}^{2}+P_{z}^{2}=1$ holds as before. Hence (radial) proper
distance to the origin is again given by
$\ell_{\text{rad}}=\lambda_{f}\equiv\rho_{0}$.
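As a cross-check, the exact first-order system (D.2)–(D.4) can be integrated numerically and compared against the leading terms of the expansion; the residuals should be of relative order $(\lambda/L)^{2}$. The plain-Python sketch below uses a hand-rolled RK4 stepper and illustrative parameters, and takes the first correction to $z$ at order $\lambda^{2}/L$, as follows directly from integrating (D.4):

```python
import math

L = 10.0
Px, Py = 0.6, 0.3
Pz = math.sqrt(1.0 - Px ** 2 - Py ** 2)

def deriv(s):
    # exact first-order system (D.2)-(D.4)
    x, y, z = s
    return (Px * math.exp(-2.0 * z / L),
            Py * math.exp(2.0 * z / L),
            Pz + Px * x / L - Py * y / L)

def rk4(lam_f, n=2000):
    h = lam_f / n
    s = (0.0, 0.0, 0.0)               # radial geodesic: start at the origin
    for _ in range(n):
        k1 = deriv(s)
        k2 = deriv(tuple(v + 0.5 * h * k for v, k in zip(s, k1)))
        k3 = deriv(tuple(v + 0.5 * h * k for v, k in zip(s, k2)))
        k4 = deriv(tuple(v + h * k for v, k in zip(s, k3)))
        s = tuple(v + h / 6.0 * (a + 2 * b + 2 * c + d)
                  for v, a, b, c, d in zip(s, k1, k2, k3, k4))
    return s

lam = 1.0
x, y, z = rk4(lam)
# residuals against the leading terms are of relative order (lam/L)^2:
print(abs(x - Px * lam * (1.0 - Pz * lam / L)))                        # small
print(abs(y - Py * lam * (1.0 + Pz * lam / L)))                        # small
print(abs(z - (Pz * lam + 0.5 * (Px ** 2 - Py ** 2) * lam ** 2 / L)))  # small
```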
To visualise these geodesics we have again plotted select geodesics of this
geometry in Figures 34, 35 and 36. For a description of the surfaces, please
refer to Appendix B. As before, for a clearer visualisation we have
augmented the effects of curvature by drawing the surfaces from distance
$\lambda=0$ to distance $\lambda=2.5\eta_{*}$.
Figure 34: Geodesics of the Solv geometry. The geometry is characterized by
geodesics being pushed up- or downward respectively depending on whether $|x|$
is larger or smaller than $|y|$.
Figure 35: The same visualisation, viewed along the $y$-direction and with
some elements removed to show more clearly how geodesics along the $x$-$z$
plane are deflected upwards along a distinctive guitar pick shape.
Figure 36: The same visualisation, viewed along the $x$-direction and with
some elements removed to show more clearly how geodesics along the $y$-$z$
plane are deflected downwards.
## References
* [1] E. A. Milne, “World-Structure and the Expansion of the Universe.” Zeitschrift fur Astrophysik 6 (1933) 1.
* [2] A. de Oliveira-Costa, M. Tegmark, M. Zaldarriaga and A. Hamilton, “The Significance of the largest scale CMB fluctuations in WMAP,” Phys. Rev. D 69 (2004), 063516 doi:10.1103/PhysRevD.69.063516 [arXiv:astro-ph/0307282 [astro-ph]].
* [3] E. Abdalla, G. Franco Abellán, A. Aboubrahim, A. Agnello, O. Akarsu, Y. Akrami, G. Alestas, D. Aloni, L. Amendola and L. A. Anchordoqui, et al. “Cosmology intertwined: A review of the particle physics, astrophysics, and cosmology associated with the cosmological tensions and anomalies,” JHEAp 34 (2022), 49-211 doi:10.1016/j.jheap.2022.04.002 [arXiv:2203.06142 [astro-ph.CO]].
* [4] P. A. R. Ade et al. [Planck], “Planck 2015 results - XVIII. Background geometry and topology of the Universe,” Astron. Astrophys. 594 (2016), A18 doi:10.1051/0004-6361/201525829 [arXiv:1502.01593 [astro-ph.CO]].
* [5] P. A. R. Ade et al. [Planck], “Planck 2013 results. XXVI. Background geometry and topology of the Universe,” Astron. Astrophys. 571 (2014), A26 doi:10.1051/0004-6361/201321546 [arXiv:1303.5086 [astro-ph.CO]].
* [6] K. Land and J. Magueijo, “The Axis of evil,” Phys. Rev. Lett. 95 (2005), 071301 doi:10.1103/PhysRevLett.95.071301 [arXiv:astro-ph/0502237 [astro-ph]].
# On the properties of affine solutions of cold plasma equations
Olga S. Rozanova Mathematics and Mechanics Department, Lomonosov Moscow State
University, Leninskie Gory, Moscow, 119991, Russia and Marko K. Turzynsky
Russian University of Transport, Obraztsova, 9, Moscow, 127055 and Higher
School of Economics, Pokrovskiy Blvd, 11, Moscow, 109028, Russia
###### Abstract.
We study the affine solutions of the equations of plane oscillations of cold
plasma, which, under the assumption of electrostaticity, correspond to the
Euler-Poisson equations in the repulsive case. It is proved that the zero
equilibrium state of the cold plasma equations, both with and without the
assumption of electrostaticity, is unstable in the class of all affine
solutions. It is also shown that an arbitrary perturbation of an axially
symmetric electrostatic solution leads to a finite time blow-up.
###### Key words and phrases:
cold plasma, Euler-Poisson equations, quasilinear system, affine solutions,
blow up
###### 1991 Mathematics Subject Classification:
Primary 35Q60; Secondary 35L60, 35L67, 34M10
## 1\. Introduction
The equations that describe the cold plasma oscillations in the Euler
coordinates have the form [1], [6]:
(1) $\partial_{t}n+{\rm div}(n{\bf V})=0,\quad\partial_{t}{\bf V}+({\bf
V}\cdot\nabla){\bf V}=-({\bf E}+[{\bf V}\times{\bf B}]),$ (2)
$\partial_{t}{\bf E}=n\\!{\bf V}+{\mbox{curl}\,}\,{\bf
B},\quad\partial_{t}{\bf B}=-{\mbox{curl}\,}\,{\bf E},\quad{\rm div}\,{\bf
B}=0,$
where ${\bf V}(t,x)$ and $n(t,x)>0$ are the velocity and density of electrons,
${\bf B}(t,x)$ and ${\bf E}(t,x)$ are the magnetic and electric fields,
$x\in{\mathbb{R}}^{3},$ $t\geq 0$, and $\nabla$, $\rm div$, curl denote the
gradient, divergence and curl with respect to the spatial variables.
System (1), (2) has a class of solutions
(3) ${\bf V}=Q(t){\bf x},\qquad{\bf E}=R(t){\bf x},$
where $Q$ and $R$ are $3\times 3$ matrices with coefficients depending on $t$,
and $\bf x$ is the radius vector of the point $x\in\mathbb{R}^{3}$. These
solutions are called affine. Affine solutions have been known since the time
of Kirchhoff and Dirichlet. They play an important role in various models of
continuous media [3].
It follows from (3) that $n=n(t)$ and ${\bf B}={\bf B}(t)$. Since
${\mbox{curl}\,}\,{\bf B}(t)=0$, the first equations of systems (1) and (2)
imply, under the assumption that the solution is sufficiently smooth and the
steady-state density equals $1$, that $n=1-{\rm div}\,{\bf E}>0$. Thus, we
can exclude $n$ from the system and obtain
(4) $\partial_{t}{\bf V}+({\bf V}\cdot\nabla){\bf V}=-({\bf E}+[{\bf
V}\times{\bf B}(t)]),\quad\partial_{t}{\bf E}+{\bf V}{\rm div}\,{\bf E}={\bf
V},\quad\dot{\bf B}(t)+{\mbox{curl}\,}\,{\bf E}=0.$
We will consider system (4) together with the initial data
$({\bf V},\,{\bf E},\,{\bf B})|_{t=0}=(Q_{0}{\bf x},R_{0}{\bf x},{\bf
B}_{0}),$
with constant matrices $Q_{0}$ and $R_{0}$.
Further, we assume that oscillations occur in a plane perpendicular to the
coordinate vector $e_{3}$. Thus, we restrict the class of solutions under
consideration to
(5) ${\bf V}=Q{\bf x}=\left(\begin{array}[]{ccr}a(t)&b(t)&0\\\ c(t)&d(t)&0\\\
0&0&0\end{array}\right){\bf x},\quad{\bf E}=R{\bf
x}=\left(\begin{array}[]{ccr}A(t)&B(t)&0\\\ C(t)&D(t)&0\\\
0&0&0\end{array}\right){\bf x},\quad{\bf B}=(0,0,\mathcal{B}(t)).$
System (4) for solutions of the form (5) reduces to a matrix system of
differential equations for the matrices $Q$ and $R$ and the scalar function
$\mathcal{B}$:
(6) $\dot{Q}+Q^{2}-\mathcal{B}(t)\mathcal{L}Q+R=0,\quad\dot{R}-(1-{\rm
tr}\,R)Q=0,\quad\dot{\mathcal{B}}(t)-{\rm tr}\,(\mathcal{L}R)=0,$
which consists of 9 differential equations, here
$\mathcal{L}=\left(\begin{array}[]{ccr}0&-1&0\\\ 1&0&0\\\
0&0&0\end{array}\right)$. The initial data for (6) are
$(Q,\,R,\,{\mathcal{B}})|_{t=0}=(Q_{0},R_{0},{\mathcal{B}}_{0}).$
An important class of oscillations is distinguished by the condition
${\mbox{curl}\,}\,{\bf E}=0$; such oscillations are called electrostatic.
Under this condition, the magnetic field does not change with time and has the
form ${\bf B}=(0,0,\mathcal{B}_{0})$. Another consequence of this assumption
is the condition ${\mbox{curl}\,}\,{\bf V}=0$. Thus, system (4) can be
rewritten as
(7) $\partial_{t}n+{\rm div}(n{\bf V})=0,\quad\partial_{t}{\bf V}+({\bf
V}\cdot\nabla){\bf V}=-({\bf E}+[{\bf V}\times{\bf
B}_{0}]),\quad\partial_{t}{\bf E}=n{\bf V},\quad{\mbox{curl}\,}\,{\bf E}=0.$
It is easy to check that this situation is realized only in the case
$\mathcal{B}_{0}=0$, $b=c=B=C=0$; the number of equations in system (6) is
then reduced to four.
If we introduce a potential $\Phi$ such that $\nabla\Phi=-{\bf E}$, then (7)
can be rewritten as a system of Euler-Poisson equations (e.g. [5])
(8) $\displaystyle\displaystyle{\partial n\over\partial t}+\mbox{div}\,(n{\bf
V})=0,\quad\displaystyle{\partial{\bf V}\over\partial t}+\left({\bf
V}\cdot\nabla\right){\bf V}=\,k\,\nabla\Phi,\quad\Delta\Phi=n-n_{0},$
for $k=n_{0}=1$. Thus, the results obtained for system (7) in the
electrostatic case can be reformulated in terms of solutions of the Euler-
Poisson equations.
Solutions of the form (5) also contain a subclass of solutions with radial
symmetry in the plane $x_{3}=0$, for which
(9) ${\bf V}=a\,{\bf r}+c\,{\bf r}_{\bot},\quad{\bf E}=A\,{\bf r}+C\,{\bf
r}_{\bot},\quad{\bf r}=(x_{1},x_{2},0),\quad{\bf r}_{\bot}=(x_{2},-x_{1},0).$
For such solutions, the number of equations in system (6) is reduced to five.
Such solutions are electrostatic only if $c=C={\mathcal{B}}=0$, i.e. the
radially symmetric electrostatic solution is axisymmetric.
It was recently proved [8] that affine solutions play an exceptional role in
the class of axisymmetric solutions of multidimensional Euler-Poisson
equations (8) depending on $(t,r)$, where $r=\sqrt{x_{1}^{2}+x_{2}^{2}}$.
Namely, if some solution preserves global smoothness in time, then it is
either affine or tends to affine as $t\to\infty$ uniformly on each interval in
$r$. In addition, the zero equilibrium state turns out to be unstable with
respect to axisymmetric perturbations of an arbitrary form, but stable with
respect to affine axisymmetric perturbations. As shown above, the axisymmetric
solutions of the Euler-Poisson equations correspond to the axisymmetric
solutions of the cold plasma equations with the condition of electrostaticity.
In this regard, a natural question arises: will the zero equilibrium of the
cold plasma equations be stable in the class of affine solutions without the
assumption of axial symmetry or electrostaticity?
We show that the answer to this question is negative. Moreover, it turns out
that a general perturbation of an affine axially symmetric solution leads
to a blow-up of the solution in a finite time. Since plane oscillations are a
subclass of spatial oscillations, the result on the instability of the zero
equilibrium state is also valid for the three-dimensional case.
The paper has the following structure. In Section 2 for the electrostatic case
$\mathcal{B}_{0}=0$ we construct a globally smooth solution of system (10)
with axial symmetry and show that it is Lyapunov stable in the class of
electrostatic solutions (5) with axial symmetry. In particular, the
equilibrium ${\bf V}={\bf E}=0$ of system (10) is stable. Further, relying on
the Floquet theory, we show that the equilibrium ${\bf V}={\bf E}=0$ is
unstable with respect to small perturbations in the class of affine
electrostatic solutions, and thus also in the class of arbitrary affine
perturbations. Moreover, we show that in the general case (without a special
choice of initial data) such a perturbation grows with time and leads to a
blow-up of the solution in a finite time. Further, by numerical computation of
the characteristic multipliers of the system of ordinary differential
equations, we show that a similar result is valid for an arbitrary deviation
from the globally smooth solution with axial symmetry constructed in Section
2.
In Section 3 we consider the non-electrostatic case and show that the zero
equilibrium ${\bf V}={\bf E}=\mathcal{B}_{0}=0$ is unstable in the class of
radially symmetric non-electrostatic solutions. We show that a general
perturbation of a globally smooth axisymmetric electrostatic solution in the
class of radially symmetric non-electrostatic solutions also leads to a
blow-up of the solution in a finite time. Also in Section 3 we discuss the
difference between deviations from the zero equilibrium ${\bf V}={\bf
E}=\mathcal{B}_{0}=0$ and the equilibrium ${\bf V}={\bf E}=0$,
$\mathcal{B}_{0}\neq 0$.
## 2\. Electrostatic oscillations
### 2.1. Solution with axial symmetry
In the case (9), under the electrostatic condition $c=C=\mathcal{B}_{0}=0$,
the system (6) takes the form
(10) $\dot{a}=-A-a^{2},\quad\dot{A}=a-2Aa.$
The equilibrium $a=A=0$ of the system is a center, since the eigenvalues of
the linearization matrix at this point are $\lambda_{1,2}=\pm i$ and the
phase curves are symmetric under the replacement of $a$ by $-a$.
Let us find a first integral of system (10). The system implies
$\frac{a\,da}{dA}-\frac{a^{2}}{2A-1}=\frac{A}{2A-1};$ after the substitution
$a^{2}=u$ we obtain a linear equation, whose solution is
(11) $a=\pm\sqrt{(\tfrac{1}{2}\ln|2A-1|+K)(2A-1)-\tfrac{1}{2}},\quad K=\rm
const.$
This integral was also obtained in [8]. The curve on the phase plane given by
expression (11) is bounded for all values of $K$ corresponding to
$A(0)<\frac{1}{2}$ (see [8]), so the components of the solution, $a$ and $A$,
remain bounded for all $t>0$. The explicit form of the bounded phase curve
(11) also implies that the equilibrium $a=A=0$, corresponding to the zero rest
state, is Lyapunov stable. The solutions corresponding to any fixed phase
curve (11) are also Lyapunov stable if the perturbation occurs within the
class of affine solutions governed by system (10).
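The conservation of the integral (11) along trajectories of (10) is easy to verify numerically. The following self-contained Python sketch (purely illustrative; the initial data $a(0)=0$, $A(0)=0.3$ and the step size are chosen arbitrarily) integrates (10) with a classical fourth-order Runge-Kutta scheme and monitors the equivalent conserved quantity $K=(a^{2}+\frac{1}{2})/(2A-1)-\frac{1}{2}\ln|2A-1|$, obtained by solving (11) for $K$:

```python
import math

def rhs(state):
    # System (10):  a' = -A - a^2,  A' = a - 2*A*a
    a, A = state
    return (-A - a * a, a - 2.0 * A * a)

def rk4_step(state, h):
    # One classical fourth-order Runge-Kutta step
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h * (p + 2 * q + 2 * r + w) / 6.0
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

def first_integral(state):
    # K = (a^2 + 1/2)/(2A - 1) - (1/2) ln|2A - 1|, a rearrangement of (11)
    a, A = state
    return (a * a + 0.5) / (2.0 * A - 1.0) - 0.5 * math.log(abs(2.0 * A - 1.0))

state = (0.0, 0.3)            # a(0) = 0, A(0) = 0.3 < 1/2
h = 1.0e-3
K0 = first_integral(state)
drift = 0.0
for _ in range(20000):        # integrate up to t = 20, i.e. several periods
    state = rk4_step(state, h)
    drift = max(drift, abs(first_integral(state) - K0))
print("max drift of K:", drift)
```

The drift of $K$ stays at the level of the discretization error, confirming that the phase curves are the level sets of $K$.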
Thus, $a(t)$ and $A(t)$ are periodic with period
$T=2\int\limits_{A_{-}}^{A_{+}}\frac{d\eta}{(1-2\eta)a(\eta)},$
where $a$ is given by (11), and $A_{-}<0$ and $A_{+}>0$ are the smaller and
larger roots of the equation $a(A)=0$. Moreover,
$\int\limits_{0}^{T}a(\tau)\,d\tau=0$.
The period $T$ was studied in [8]. As a function of $A(0)=\varepsilon$,
$\varepsilon\in(0,\frac{1}{2})$, it decreases monotonically from $2\pi$ to
$\sqrt{2}\pi$, and the asymptotic formula
(12)
$T=2\pi(1-\frac{1}{12}\varepsilon^{2}+o(\varepsilon^{2})),\quad\varepsilon\to
0,$
holds, see [8], Lemma 4.
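The asymptotics (12) can be checked by direct integration. The sketch below (illustrative only; $\varepsilon=0.1$ and the step size are arbitrary choices, and the period is detected as the return time of the downward zero crossing of $a$, located by linear interpolation) compares the numerically observed period with $2\pi(1-\varepsilon^{2}/12)$:

```python
import math

def rhs(state):
    # System (10):  a' = -A - a^2,  A' = a - 2*A*a
    a, A = state
    return (-A - a * a, a - 2.0 * A * a)

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h * (p + 2 * q + 2 * r + w) / 6.0
                 for s, p, q, r, w in zip(state, k1, k2, k3, k4))

eps = 0.1
state = (0.0, eps)        # start at a = 0, A = A_+ = eps, where a' = -eps < 0
h, t = 1.0e-4, 0.0
prev_a, period = 0.0, None
while period is None:
    state = rk4_step(state, h)
    t += h
    a = state[0]
    if prev_a > 0.0 and a <= 0.0:
        # downward zero crossing of a: the orbit has returned to A_+,
        # i.e. one full period has elapsed; locate it by linear interpolation
        period = t - h + h * prev_a / (prev_a - a)
    prev_a = a

print(period, 2.0 * math.pi * (1.0 - eps**2 / 12.0))
```

The two printed values agree to within $o(\varepsilon^{2})$, and the observed period lies strictly between $\sqrt{2}\pi$ and $2\pi$, consistent with the monotonicity stated above.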
### 2.2. Arbitrary electrostatic oscillations of form (5)
We formulate two similar theorems: the first will be proved analytically,
while the second is a semi-analytical result whose proof relies on numerical
methods.
The proof of all theorems is based on the Floquet theory for systems of linear
equations with periodic coefficients (see, for example, [4], Section 2.4).
According to this theory, for the fundamental matrix $\Psi(t)$
($\Psi(0)=\mathbb{E}$) there exists a constant matrix $M$, possibly with
complex coefficients, such that $\Psi(T)=e^{TM}$, where $T$ is the period of
the coefficients. The eigenvalues of the monodromy matrix $e^{TM}$ are called
the characteristic multipliers of the system. If among the characteristic
multipliers there is one whose absolute value is greater than one, then the
zero solution of the linear system under study is Lyapunov unstable ([4],
Theorem 2.53).
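In practice the monodromy matrix is obtained by integrating the system over one period with the columns of the identity matrix as initial data. The following sketch illustrates this machinery for a toy Hill system (chosen arbitrarily; it is not one of the systems studied below): since ${\rm tr}\,\mathcal{M}\equiv 0$ here, the Liouville formula forces $\det\Psi(T)=1$, which serves as a consistency check, and the multipliers follow from the trace and determinant of the $2\times 2$ monodromy matrix:

```python
import math

T = 2.0 * math.pi   # period of the coefficients

def M(t):
    # Toy 2*pi-periodic system x'' + (1 + 0.3 cos t) x = 0 written as x' = M(t) x;
    # tr M = 0, so det Psi(T) must equal exp(0) = 1
    return ((0.0, 1.0), (-(1.0 + 0.3 * math.cos(t)), 0.0))

def rhs(t, x):
    m = M(t)
    return (m[0][0] * x[0] + m[0][1] * x[1],
            m[1][0] * x[0] + m[1][1] * x[1])

def rk4(t, x, h):
    k1 = rhs(t, x)
    k2 = rhs(t + 0.5 * h, tuple(s + 0.5 * h * k for s, k in zip(x, k1)))
    k3 = rhs(t + 0.5 * h, tuple(s + 0.5 * h * k for s, k in zip(x, k2)))
    k4 = rhs(t + h, tuple(s + h * k for s, k in zip(x, k3)))
    return tuple(s + h * (p + 2 * q + 2 * r + w) / 6.0
                 for s, p, q, r, w in zip(x, k1, k2, k3, k4))

def column(x0, n=20000):
    # integrate one column of the fundamental matrix over [0, T]
    t, h, x = 0.0, T / n, x0
    for _ in range(n):
        x = rk4(t, x, h)
        t += h
    return x

c1, c2 = column((1.0, 0.0)), column((0.0, 1.0))   # columns of Psi(T)
det = c1[0] * c2[1] - c1[1] * c2[0]
trace = c1[0] + c2[1]
# multipliers solve lam^2 - trace*lam + det = 0
disc = trace * trace - 4.0 * det
if disc >= 0.0:
    lams = [abs((trace + math.sqrt(disc)) / 2), abs((trace - math.sqrt(disc)) / 2)]
else:
    lams = [math.sqrt(det)] * 2   # complex pair with |lam| = sqrt(det)
print("det Psi(T) =", det, " multiplier moduli:", lams)
```

The determinant check is a useful guard against integration errors in any monodromy computation, including those carried out below.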
###### Theorem 1.
1\. The zero equilibrium of system (6) in the class ${\mathcal{B}}(t)\equiv 0$
(corresponding to electrostatic oscillations) is Lyapunov unstable.
2\. A general small non-axisymmetric perturbation of the equilibrium blows up
in a finite time.
Proof of Theorem 1. The system (6) in the case of ${\mathcal{B}}\equiv 0$ has
the form
(13)
$\dot{A}=(1-A-D)a,\quad\dot{D}=(1-A-D)d,\,\quad\dot{a}+a^{2}+A=0,\quad\dot{d}+d^{2}+D=0.$
To study the effect of deviation from symmetry, we make the substitution
$d=a+\sigma$, $D=A+\delta$, which corresponds to the axisymmetric case for
$\sigma=\delta=0$. Then (13) reduces to
$\dot{A}=(1-2A)a-\delta
a,\quad\dot{a}=-a^{2}-A,\,\quad\dot{\delta}=(1-2A-\delta)\sigma,\quad\dot{\sigma}=-\sigma^{2}-2a\sigma-\delta.$
We choose a small parameter $\varepsilon$ and set
$\displaystyle A(t)=A_{0}(t)+\varepsilon^{2}A_{1}(t)+o(\varepsilon^{2}),\quad a(t)=a_{0}(t)+\varepsilon^{2}a_{1}(t)+o(\varepsilon^{2}),$
$\displaystyle\delta(t)=\varepsilon^{2}\delta_{1}(t)+o(\varepsilon^{2}),\quad\sigma(t)=\varepsilon^{2}\sigma_{1}(t)+o(\varepsilon^{2}).$
For $\varepsilon=0$ we obtain a globally smooth solution $A_{0}(t),a_{0}(t)$,
which is a solution to system (10). For the functions
$A_{1},\,a_{1},\,\delta_{1},\,\sigma_{1}$, discarding terms of order
$o(\varepsilon^{2})$, we obtain the linear system
(14) $\dot{A}_{1}=-2a_{0}A_{1}+(1-2A_{0})a_{1}-a_{0}\delta_{1},\quad\dot{a}_{1}=-2a_{0}a_{1}-A_{1},$
(15) $\dot{\delta}_{1}=(1-2A_{0})\sigma_{1},\quad\dot{\sigma}_{1}=-2a_{0}\sigma_{1}-\delta_{1}.$
Let us show that the zero solution of system (14), (15) is unstable.
Note that if $\delta_{1}(0)=\sigma_{1}(0)=0$, then system (14), (15) reduces
to two equations (14), $\delta_{1}\equiv 0$, and then the equilibrium
$A_{1}=a_{1}=0$ turns out to be stable. This follows from the fact that the
perturbed solution remains axisymmetric and the integral (11) holds for it.
Thus, we will consider a perturbation of the solution $A_{0}(t),a_{0}(t)$, for
which $\delta_{1}(0)\sigma_{1}(0)\neq 0$.
1\. Let $A_{0}(t),a_{0}(t)$ itself be a small deviation from the zero
equilibrium position. We take $A_{0}(0)=\varepsilon$ as a small parameter.
Then
(16) $\displaystyle\qquad A_{0}(t)=\varepsilon\cos
t+A_{01}(t)\varepsilon^{2}+o(\varepsilon^{2}),\quad a_{0}(t)=-\varepsilon\sin
t+a_{01}(t)\varepsilon^{2}+o(\varepsilon^{2}),$
all subsequent expansion terms are found sequentially from (10).
To obtain an asymptotic representation of the components of the fundamental
matrix, we set
(17) $A_{1}(t)=A_{10}(t)+A_{11}(t)\varepsilon+A_{12}(t)\varepsilon^{2}+o(\varepsilon^{2}),\quad a_{1}(t)=a_{10}(t)+a_{11}(t)\varepsilon+a_{12}(t)\varepsilon^{2}+o(\varepsilon^{2}),$
$\delta_{1}(t)=\delta_{10}(t)+\delta_{11}(t)\varepsilon+\delta_{12}(t)\varepsilon^{2}+o(\varepsilon^{2}),$
(18) $\sigma_{1}(t)=\sigma_{10}(t)+\sigma_{11}(t)\varepsilon+\sigma_{12}(t)\varepsilon^{2}+o(\varepsilon^{2}).$
The fundamental matrix has the form
$\Psi(t)=\Psi_{0}(t)+\Psi_{1}(t)\varepsilon+\Psi_{2}(t)\varepsilon^{2}+o(\varepsilon^{2}),$
$\Psi_{0}(0)=\mathbb{E}$, $\Psi_{i}(0)=0$, $i\in\mathbb{N}$. The calculations
required to find the matrices $\Psi_{i}(t)$ are cumbersome but standard: we
substitute (16), (17)–(18) into (14), (15) and equate the coefficients of
like powers of $\varepsilon$. At each stage, one has to solve a linear
inhomogeneous system with constant coefficients.
Denote the eigenvalues of the matrix $\Psi(T)$ by $\lambda_{i}$, and the
eigenvalues of the truncated matrix
$\bar{\Psi}_{k}(T)=\Psi_{0}(T_{0})+\Psi_{1}(T_{1})\varepsilon+\cdots+\Psi_{k}(T_{k})\varepsilon^{k}$,
$k\in\mathbb{N}$, by $\bar{\lambda}_{ki}$, $i=1,\dots,4$. Here $T_{j}$,
$j=0,\dots,k$, denotes the period $T$ (see (12)) calculated to accuracy
$O(\varepsilon^{j})$. To prove the instability, we have to find an expansion
up to an order $k$ in $\varepsilon$ such that among the $\bar{\lambda}_{ki}$
there is one whose absolute value is greater than one.
As follows from (12), $T_{0}=T_{1}=2\pi$,
$T_{2}=2\pi-\frac{\pi}{6}\varepsilon^{2}$. Calculations performed using the
computer algebra package MAPLE show that
$\bar{\lambda}_{0i}=\bar{\lambda}_{1i}=1,$ $i=1,\dots,4$,
(19)
$\displaystyle\qquad\bar{\lambda}_{2i}=1\pm\frac{\sqrt{3}\pi}{6}\varepsilon^{2}+o(\varepsilon^{2}),\,i=1,2,\quad\bar{\lambda}_{2i}=1\pm\frac{\sqrt{3}\pi}{2}\varepsilon^{2}+o(\varepsilon^{2}),\,i=3,4,$
and further terms of the expansion cannot affect the coefficients of powers of
$\varepsilon$ lower than two. Thus, already for $k=2$ we can conclude that
there is a pair of eigenvalues with $|\lambda|>1$, and the instability of the
zero equilibrium is proved.
Note that, according to the Liouville theorem on the conservation of phase
volume,
$\displaystyle{\rm det}\,\Psi(T)=\exp\left(\int\limits_{0}^{T}{\rm
tr}\,{\mathcal{M}}(\tau)d\tau\right)\,{\rm det}\,\Psi(0),$
where $\mathcal{M}$ is the matrix corresponding to the linear system (14),
(15). It is easy to see that ${\rm tr}\,{\mathcal{M}}=-6a_{0}(t)$, and
$\int\limits_{0}^{T}a_{0}(\tau)d\tau=0$; therefore
$\prod\limits_{i=1}^{4}\,|\lambda_{i}|\,=1.$ However,
$\prod\limits_{i=1}^{4}\,|\bar{\lambda}_{ki}|$ must equal one only up to
terms $o(\varepsilon^{k})$, which is what we see in (19).
2\. The fact that the linear system (14), (15) has at least one characteristic
multiplier whose absolute value is greater than one implies that any
component of its solution, including $A_{1}(t)$, contains a term
$\mathcal{C}P(t)\exp(\mu t)$ with characteristic exponent $\mu>0$, where
$P(t)$ is a bounded periodic function and $\mathcal{C}$ is a constant
depending on the initial data. For some special choices of initial data the
constant $\mathcal{C}$ vanishes; this is certainly the case if
$\sigma(0)=\delta(0)=0$, that is, when the initial perturbation is
axisymmetric. However, if the perturbation is chosen arbitrarily, an
exponentially growing term is necessarily present. Therefore, if we assume
that $A(t)$, and hence $A_{1}(t)$, remains bounded for all finite times $t$,
then there exists a point $t_{*}$ for which $A(t_{*})=\frac{1}{2}$. Since
$A(t)\equiv\frac{1}{2}$ is part of a solution to system (6), this would
contradict the uniqueness theorem. Therefore, there is a finite time
$t_{c}<t_{*}$ at which $A(t)$ becomes infinite, which corresponds to a
blow-up of the solution. $\square$
###### Theorem 2.
A globally smooth axisymmetric solution of system (6) is Lyapunov unstable
in the class ${\mathcal{B}}(t)\equiv 0$, and any general non-axisymmetric
perturbation of it blows up in a finite time.
Proof of Theorem 2 is completely analogous to the proof of the previous
theorem, but in order to investigate the stability of a perturbation of the
solution $A_{0}(t),a_{0}(t)$, one has to apply numerical methods. This is not
surprising, since even for a much simpler situation, the Mathieu equation,
the stability region can only be found numerically when the periodic
coefficient is not assumed small [7]. Therefore, for each fixed
$A_{0}(0)=A_{*}$, we solve system (10), (14), (15) numerically using the
fourth-fifth order Runge–Kutta–Fehlberg method (RKF45), and then find the
absolute values of the eigenvalues of the monodromy matrix at the point
$T(A_{*})$.
Figure 1 shows the dependence $|\lambda_{i}(A_{*})|$ for different ranges of
$A_{*}$. It is easy to see that the maximum of the eigenvalues is greater
than 1 in absolute value, which indicates instability.
If we introduce the measure of instability
$S(A_{*})=\max\limits_{i}|\lambda_{i}(A_{*})|-1$, then we see that this
quantity varies nonmonotonically on the interval $(0,\frac{1}{2})$ while
remaining positive. Namely, it initially increases for small $A_{*}$, as
indicated by the asymptotic representation of the eigenvalues (19), but
decreases sharply near the point $A_{*}\approx 0.125$, remains very small up
to the point $A_{*}\approx 0.32$, and then increases sharply as $A_{*}$
approaches the boundary value $A_{*}\approx 0.5$. This, in particular,
indicates that it is very difficult to detect the instability by direct
numerical methods without resorting to the Floquet theory, since the
deviation from the stationary solution $A_{0}(t),a_{0}(t)$ grows very slowly.
Indeed, for example, in the vicinity of the point $A_{*}=0.25$ we have
$S(A_{*})\sim 10^{-7}$.
We also note the properties of the eigenvalues themselves. On the interval
$A_{*}\in(0,\approx 0.125)$ there is a pair of complex conjugate eigenvalues
and a pair of real eigenvalues, one of which is greater than one. On the
interval $A_{*}\in(\approx 0.125,\approx 0.32)$ there are two pairs of
complex conjugate eigenvalues, which also indicates a softer loss of
stability. On the interval $A_{*}\in(\approx 0.32,0.5)$ a pair of real
eigenvalues arises again, one of which exceeds one and grows rapidly as
$A_{*}$ approaches the right boundary of the interval.
The blow-up of an arbitrary non-axisymmetric perturbation of a globally smooth
axisymmetric solution is proved in the same way as in Theorem 1. $\square$
Figure 1. Values of characteristic multipliers for the case of electrostatic
non-axisymmetric oscillations depending on $A(0)$. The figure in the center
shows that the curves that appear to match in the figure on the left are
actually different.
###### Corollary 1.
The zero equilibrium of system (4) is unstable by Lyapunov in the class of all
affine solutions (3).
The proof of Corollary 1 follows directly from the observation that plane
deviations from the equilibrium in the class (5) with ${\mathcal{B}}(t)\equiv
0$ form a subclass among all possible affine deviations. $\square$
###### Corollary 2.
1\. The zero equilibrium state of the Euler-Poisson equations is unstable in
the class of affine solutions.
2\. A general small non-axisymmetric affine perturbation of a globally smooth
axisymmetric affine solution of the Euler-Poisson equations (8) blows up in a
finite time.
Proof. Corollary 2 is a reformulation of Theorem 1 for the case of the
Euler-Poisson equations. $\square$
###### Remark 1.
The electrostatic solutions of the cold plasma equations, generally speaking,
blow up in a finite time, this time can be estimated from below, see [9].
## 3\. Non-electrostatic oscillations
The equilibria of system (6) corresponding to nonzero density have the form
$Q=R=0$ (matrices with zero components), ${\bf B}={\bf B}_{0}=\rm const$.
The linearization matrix at this equilibrium has the eigenvalues
$\lambda=\pm\frac{1}{2}\sqrt{-4-2\mathcal{B}_{0}^{2}\pm
2\sqrt{\mathcal{B}_{0}^{4}+4\mathcal{B}_{0}^{2}}},$ each of double
multiplicity, and $\lambda=0$. It is easy to check that
$-4-2\mathcal{B}_{0}^{2}+2\sqrt{\mathcal{B}_{0}^{4}+4\mathcal{B}_{0}^{2}}<0$
for all real $\mathcal{B}_{0}$. Hence the real parts of all eigenvalues are
zero, and the linear approximation is not applicable to the study of the
stability of the equilibrium.
Since we are interested in the deviation from the electrostatic condition when
$\mathcal{B}_{0}=0$, we will study the stability of the equilibrium with
$\mathcal{B}_{0}=0$ in the class of non-electrostatic perturbations.
###### Theorem 3.
1\. The zero equilibrium of system (6) is unstable in the sense of Lyapunov in
the class of affine non-electrostatic solutions.
2\. A general small radially symmetric affine non-electrostatic perturbation
of a globally smooth axisymmetric affine solution of system (6) blows up in a
finite time.
Proof of Theorem 3. 1\. The proof is completely analogous to the proof of
Theorem 1 and is based on the Floquet theory described above. It suffices to
show that the zero equilibrium is unstable in the class of affine solutions
with radial symmetry (9).
System (6) in this case has the form
(20) $\dot{A}-(1-2\,A)a=0,\quad\dot{C}-(1-2\,A)c=0,\quad\dot{\mathcal{B}}-2\,C=0,$
(21) $\dot{a}+a^{2}-c^{2}+A-\mathcal{B}c=0,\quad\dot{c}+2\,ca+C+\mathcal{B}a=0.$
Let us set
(22) $A(t)=A_{0}(t)+A_{1}(t)\varepsilon^{2}+o(\varepsilon^{2}),\quad a(t)=a_{0}(t)+a_{1}(t)\varepsilon^{2}+o(\varepsilon^{2}),$
$c(t)=c_{1}(t)\varepsilon^{2}+o(\varepsilon^{2}),\quad C(t)=C_{1}(t)\varepsilon^{2}+o(\varepsilon^{2}),\quad\mathcal{B}(t)=\mathcal{B}_{1}(t)\varepsilon^{2}+o(\varepsilon^{2}),$
where $\varepsilon$ is some small parameter. Substituting these series into
(20), (21) and remembering that (9) implies $A=D$, $a=d$, $C=-B$, $c=-b$, we
get the following system:
$\dot{a}_{0}+A_{0}+a_{0}^{2}=0,\quad\dot{A}_{0}-(1-2A_{0})a_{0}=0,$
(23) $\dot{a}_{1}+A_{1}+2a_{0}a_{1}=0,\quad\dot{A}_{1}-(1-2A_{0})a_{1}+2a_{0}A_{1}=0,$
(24) $\dot{C}_{1}-(1-2A_{0})c_{1}=0,\quad\dot{c}_{1}+a_{0}{\mathcal{B}}_{1}+2a_{0}c_{1}+C_{1}=0,\quad\dot{\mathcal{B}}_{1}-2C_{1}=0.$
We see that the zero term of the series, $A_{0}(t),\,a_{0}(t)$, is a solution
of (10). For the next terms of the expansion, we obtain the linear system of
equations (23), (24) with known periodic coefficients. The equations (23) for
$A_{1}(t),\,a_{1}(t)$ split off, and the three equations (24) can be
considered separately.
We choose $A_{0}(0)=\varepsilon\ll 1$, $a_{0}(0)=0$, so the zero terms of the
series themselves turn out to be small, and the expansion (16) is valid.
To obtain an asymptotic representation of the components of the fundamental
matrix, we set
$c_{1}(t)=c_{10}(t)+c_{11}(t)\varepsilon+c_{12}(t)\varepsilon^{2}+o(\varepsilon^{2}),\quad C_{1}(t)=C_{10}(t)+C_{11}(t)\varepsilon+C_{12}(t)\varepsilon^{2}+o(\varepsilon^{2}),$
$\mathcal{B}_{1}(t)=\mathcal{B}_{10}(t)+\mathcal{B}_{11}(t)\varepsilon+\mathcal{B}_{12}(t)\varepsilon^{2}+o(\varepsilon^{2}).$
We use the same notation and methods as in the proof of Theorem 1.
Computations show that $\bar{\lambda}_{0i}=\bar{\lambda}_{1i}=1,$ $i=1,2,3$, and
$\bar{\lambda}_{2i}=1\pm\frac{\sqrt{5}}{3}\pi\varepsilon^{2}+o(\varepsilon^{2}),\,i=1,2,\quad\bar{\lambda}_{23}=1,$
and further terms of the expansion cannot affect the coefficients of powers of
$\varepsilon$ lower than two. Thus, there is a pair of eigenvalues with
$|\lambda|>1$, and the instability of the zero equilibrium is proved.
2\. In order to prove the blow-up of an arbitrary non-electrostatic
perturbation of a globally smooth axisymmetric solution, we cannot directly
use the arguments of Theorem 1. Indeed, our conclusions concern the
components $C,c,\mathcal{B}$, whereas it was the restriction on the component
$A$ that led to a contradiction. Therefore, we note that the terms of the
expansion (22) of $a$ and $A$ in $\varepsilon$ starting from the fourth
power, that is, $A_{3}$ and $a_{3}$, are no longer separated from
$C,c,\mathcal{B}$. Namely, as follows from (20), (21), they obey the
inhomogeneous system of linear equations
$\dot{A}_{3}-a_{3}=-2A_{0}a_{2}-2A_{1}a_{1}-2a_{0}A_{2},\quad\dot{a}_{3}+A_{3}=-2a_{0}a_{2}-a_{1}^{2}+{\mathcal{B}}_{1}c_{1}+c_{1}^{2}.$
Moreover, $A_{0},A_{1},A_{2}$ and $a_{0},a_{1},a_{2}$ are periodic and
bounded (which follows from (11), since only the previous components of $a$
and $A$ are used to calculate the first expansion components), while
${\mathcal{B}}_{1}$ and $c_{1}$ generally contain a term
$\mathcal{C}P(t)\exp(\mu t)$ with characteristic exponent $\mu>0$, where
$P(t)$ is a bounded periodic function and $\mathcal{C}$ is a constant
depending on the initial data. For some special choices of initial data the
constant $\mathcal{C}$ vanishes, but for an arbitrary non-electrostatic
perturbation of the zero rest state an exponentially growing component of
the solution is necessarily present.
Therefore, as follows from the standard formula for representing the solution
of a linear inhomogeneous equation, $A_{3}$ and $a_{3}$ also have this
property, so if we assume that the solution is defined for all $t>0$, we get a
contradiction with the condition $A<\frac{1}{2}$. $\square$
###### Theorem 4.
A globally smooth axisymmetric solution of system (6) is unstable in the sense
of Lyapunov in the class of affine non-electrostatic solutions with radial
symmetry (9), and any general radially symmetric non-electrostatic
perturbation of it blows up in a finite time.
For the proof, we repeat the procedure similar to the proof of Theorem 2. For
each fixed $A_{0}(0)=A_{*}$, we solve system (10), (24) numerically using the
fourth-fifth order Runge–Kutta–Fehlberg method (RKF45), and then find the
absolute values of eigenvalues at the point $T(A_{*})$.
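Since system (10), (24) is not reproduced in this excerpt, the procedure can be illustrated on a stand-in periodic system, the Mathieu equation of [7], using SciPy's adaptive RK45 in place of RKF45; the parameter values and function names below are chosen only for this sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

def fundamental_rhs(t, Y, a, q):
    # Mathieu equation y'' + (a - 2q cos 2t) y = 0 as a first-order system;
    # Y stores the 2x2 fundamental matrix, row-stacked.
    A = np.array([[0.0, 1.0],
                  [-(a - 2.0 * q * np.cos(2.0 * t)), 0.0]])
    return (A @ Y.reshape(2, 2)).ravel()

def floquet_multipliers(a, q):
    T = np.pi  # period of the coefficient
    sol = solve_ivp(fundamental_rhs, (0.0, T), np.eye(2).ravel(),
                    args=(a, q), method="RK45", rtol=1e-10, atol=1e-12)
    M = sol.y[:, -1].reshape(2, 2)  # monodromy matrix at t = T
    return np.linalg.eigvals(M)     # characteristic multipliers

lam = floquet_multipliers(a=1.0, q=0.2)   # inside the first instability tongue
unstable = max(abs(lam)) > 1.0            # a multiplier with |lambda| > 1
```

As in the theorems, instability is read off from $\max_i|\lambda_i|>1$; the product of the multipliers equals $\det M=1$ here because the system is trace-free.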
Figure 2 shows the dependence $|\lambda_{i}(A_{*})|$ for different ranges of
$A_{*}$. It is easy to see that the maximum of the eigenvalues exceeds 1 in
absolute value, which indicates instability. We see that the quantity
$\max\limits_{i}|\lambda_{i}(A_{*})|-1$, which can be called the measure of
instability, changes nonmonotonically on the interval $(0,0.5)$. However, up
to the value of $A_{*}\approx 0.15$, it is approximately at the same level,
being, nevertheless, significantly (two orders of magnitude) larger than the
analogous value in the electrostatic case, and then it increases, but not as
sharply as in the electrostatic case. Among the eigenvalues $\lambda_{i}$,
$i=1,2,3$, there is necessarily a pair of complex conjugate ones: first the
real eigenvalue exceeds one in absolute value, then the complex conjugate pair
is the largest in absolute value (for $A_{*}\in(\approx 0.07,\approx 0.14)$),
and then the real eigenvalue again becomes the largest in absolute value.
The result on the blow-up of a radially symmetric non-electrostatic
perturbation of a general form follows from the same reasoning as in Theorem
3. $\square$
Figure 2. Values of the characteristic multipliers for the case of non-
electrostatic radially symmetric oscillations as a function of $A(0)$, the
figure on the left shows the details of the change in the characteristic
multipliers with high resolution.
###### Remark 2.
The proof that a general perturbation of the electrostatic axisymmetric solution
in the class of arbitrary affine non-electrostatic solutions (not radially
symmetric) collapses in a finite time is carried out in exactly the same way
as Theorems 2 and 4, but is more cumbersome, since it requires solving a
system of equations of the 9th order and examining 9 eigenvalues. We do not
present the results of these calculations.
###### Remark 3.
The deviation from the equilibrium with $\mathcal{B}_{0}\neq 0$ behaves quite
differently. Indeed, if in (22) we replace the representation for
$\mathcal{B}$ with
$\mathcal{B}(t)=\mathcal{B}_{0}+\mathcal{B}_{1}(t)\varepsilon^{2}+o(\varepsilon^{2})$,
then we get $A_{0}(t)=a_{0}(t)=0$, and the next terms of expansion are subject
to a linear homogeneous system of equations with constant coefficients with a
matrix having purely imaginary eigenvalues
$\pm\frac{1}{2}i\sqrt{4+2\mathcal{B}_{0}^{2}\pm
2\sqrt{\mathcal{B}_{0}^{4}+4\mathcal{B}_{0}^{2}}}$ and zero. Thus, in the
first approximation in $\varepsilon$, the solution is a superposition of two
periodic motions with different periods. In order to construct the next
approximation, one has to solve a linear inhomogeneous equation with constant
coefficients. When solving, secular terms arise, but this does not mean that
the equilibrium position is unstable (see an example in [2]). Moreover, the
numerical results indicate that small deviations from the equilibrium at
$\mathcal{B}_{0}\neq 0$ are bounded. However, we do not know an analytical
proof of this fact. In this case, it is not possible to apply the method used
in the proof of the previous theorems.
Figure 3. The difference in the behavior of the solution upon deviation from
the equilibrium at $\mathcal{B}_{0}=0$ and $\mathcal{B}_{0}\neq 0$. Initial
deviation is chosen as $a(0)=c(0)=0,A(0)=0.1,C(0)=0.1$, $\mathcal{B}_{0}=0$
(left) and $\mathcal{B}_{0}=0.04$ (right). The calculations are done for
$t=220$. For $\mathcal{B}_{0}=0$ the solution goes to infinity in finite time.
The hypothesis is that the larger $\mathcal{B}_{0}$, the wider the
neighborhood of the equilibrium within which the solution remains bounded and
globally smooth.
Note that the magnetic field also plays a stabilizing role in other problems
related to the description of cold plasma [10].
Figure 3 illustrates the difference in the behavior of the magnetic field
component for $\mathcal{B}_{0}=0$ and $\mathcal{B}_{0}\neq 0$ for the same
initial data for the remaining components of the solution.
###### Remark 4.
The fact that, for some choice of initial data, the time-dependent
coefficients of the second and lower powers of $\varepsilon$ remain bounded
for all $t>0$ does not mean that all other coefficients in the expansion of
the solution in $\varepsilon$ have the same property. The Floquet theory can
be successfully used to prove instability, but it is difficult to apply it to
prove stability.
## Acknowledgments
Supported by the Moscow Center for Fundamental and Applied Mathematics under
the agreement №075-15-2019-1621. The authors thank V.V. Bykov and A.V.
Borovskikh for discussions.
## References
* [1] Alexandrov A.F., Bogdankevich L.S., Rukhadze A.A. Principles of plasma electrodynamics. Springer series in electronics and photonics, Springer: Berlin Heidelberg, 1984.
* [2] Bogoliubov N. N., Mitropolski Y. A. Asymptotic Methods in the Theory of Nonlinear Oscillations, Gordon and Breach: New York, 1961.
* [3] Borisov A.V., Mamaev I.S., Kilin A.A., Hamiltonian dynamics of liquid and gas self-gravitating ellipsoids, Nonlinear Dynamics, 4(4), 363-407 (2008).
* [4] Chicone C. Ordinary Differential Equations with Applications. Springer-Verlag: New York, 1999.
* [5] Engelberg S., Liu H., Tadmor E., Critical thresholds in Euler-Poisson equations, Indiana University Mathematics Journal, 50, 109-157 (2001).
* [6] Ginzburg V.L. Propagation of electromagnetic waves in plasma. Pergamon: New York, 1970.
* [7] McLachlan N. W., Theory and applications of Mathieu functions, Oxford University Press, 1947.
* [8] Rozanova O.S. On the behavior of multidimensional radially symmetric solutions of the repulsive Euler-Poisson equations, Physica D: Nonlinear Phenomena, 133578 (2022).
* [9] Rozanova O.S. On the properties of multidimensional electrostatic oscillations of an electron plasma (2022) arxiv.org: 2201.03619v2.
* [10] Rozanova O.S., Chizhonkov E.V. The influence of an external magnetic field on cold plasma oscillations. Z. Angew. Math. Phys.73 249 (2022).
Paderborn University, Zukunftsmeile 2, 33102 Paderborn, Germany
email: <EMAIL_ADDRESS>
# Self-Adaptive Digital Assistance Systems for Work 4.0
Enes Yigitbas[0000-0002-5967-833X] Stefan Sauer[0000-0003-3084-0409]
Gregor Engels[0000-0001-5397-9548]
###### Abstract
In the era of digital transformation, new technological foundations and
possibilities for collaboration, production as well as organization open up
many opportunities to work differently in the future. The digitization of
workflows results in new forms of working which is denoted by the term Work
4.0. In the context of Work 4.0, digital assistance systems play an important
role as they give users additional situation-specific information about a
workflow or a product via displays, mobile devices such as tablets and
smartphones, or data glasses.
Furthermore, such digital assistance systems can be used to provide
instructions and technical support in the working process as well as for
training purposes. However, existing digital assistance systems are mostly
created focusing on the “design for all” paradigm neglecting the situation-
specific tasks, skills, preferences, or environments of an individual human
worker. To overcome this issue, we present a monitoring and adaptation
framework for supporting self-adaptive digital assistance systems for Work
4.0. Our framework supports context monitoring as well as UI adaptation for
augmented (AR) and virtual reality (VR)-based digital assistance systems. The
benefit of our framework is shown based on exemplary case studies from
different domains, e.g. context-aware maintenance application in AR or
warehouse management training in VR.
###### Keywords:
digital assistance systems · Work 4.0 · Industry 4.0 · self-adaptive · situation-aware
## Section 1 Introduction
Nowadays we are witnessing a rising trend of digital transformation which is
shaping our everyday life, value creation processes, and the way we are
working. Especially in the context of production processes, the increasing
amount of digitization and interconnection of production systems is sometimes
referred to as Industry 4.0. As a result of this industrial (r)evolution, the
role of human work changes significantly, which is often denoted with the term
Work 4.0 [2]. This means that, in the context of Work 4.0, the digitization
of workflows confronts each individual worker with a variety of challenges and
problems to solve, mostly related to cognitively demanding activities.
To overcome this problem, digital assistance systems play a crucial role to
support human workers to execute their tasks in an efficient, effective, and
pleasant manner. For this purpose, digital assistance systems assist human
workers by providing additional situation-specific information about a
workflow or a product via displays, mobile devices such as tablets and
smartphones, or data glasses. Such digital assistance systems can be used to
provide instructions and technical support in the working process as well as
for training purposes.
In the last decades, various digital assistance systems have been proposed in
different application domains such as manufacturing [14], assembly [9], or
maintenance [16]. However, existing digital assistance systems are created
focusing on the “design for all” paradigm neglecting the situation-specific
tasks, skills, preferences, or environments of an individual human worker. In
most cases, the existing digital assistance systems are system-centred in a
way that they primarily focus on the industrial task they are supporting.
Consequently, the context-of-use, which is crucial for the user’s interaction
with the production system, is not considered.
To overcome this issue, we present a monitoring and adaptation framework for
supporting self-adaptive digital assistance systems (SADAS) for Work 4.0.
According to Laddaga et al., a ”Self-adaptive Software System evaluates its
own behavior and changes behavior when the evaluation indicates that it is not
accomplishing what the software is intended to do, or when better
functionality or performance is possible” [18]. We make use of this definition
and transfer the idea of self-adaptive software systems to self-adaptive
digital assistance systems (SADAS). For this purpose, our framework supports
context monitoring and UI adaptation for AR/VR-based SADAS. The benefit of our
framework is shown based on example case studies from different domains, e.g.
context-aware maintenance application in augmented reality or warehouse
management training in virtual reality.
The remainder of this book chapter is structured as follows: In Section 2, we
present background information on Industry & Work 4.0 as well as digital
assistance systems. In Section 3, we discuss the main challenges in developing
self-adaptive digital assistance systems. Based on these challenges, in
Section 4, we describe and discuss related approaches. Section 5 is dedicated
to presenting our monitoring and adaptation framework for SADAS. In Section 6,
we present case studies to show the applicability of our framework. Finally,
Section 7 concludes our work with an outlook on future work.
## Section 2 Background
In this section, we introduce basic concepts of Industry 4.0 and Work 4.0 as
well as the main idea behind Digital Assistance Systems.
### Section 2.1 Industry and Work 4.0
Since the beginning of industrialization, technological advancements have led
to paradigm shifts which today are named ”industrial revolutions”: in the
field of mechanization (the so-called 1st industrial revolution), of the
intensive use of electrical energy (the so-called 2nd industrial revolution),
and of the widespread digitization (the so-called 3rd industrial revolution)
[19]. On the basis of an advanced digitization within factories, the
combination of internet technologies and future-oriented technologies in the
field of “smart” objects (machines and products) the term “Industry 4.0” was
established for a planned “4th industrial revolution” [19].
According to [25], the term Industry 4.0 stands for the fourth industrial
revolution which is defined as a new level of organization and control over
the entire value chain of the life cycle of products. For realizing the future
of productivity and growth in manufacturing industries, Industry 4.0 includes
several enabling technologies such as cyber physical systems (CPS), internet
of things (IoT), cloud computing, or novel forms of human-computer
interaction. Over the last few years, Industry 4.0 has emerged as a promising
technology framework used for integrating and extending manufacturing
processes at both intra- and inter-organizational levels. This emergence of
Industry 4.0 has been fuelled by the recent development in ICT. The
developments and the technological advances in Industry 4.0 provide a viable
array of solutions to the growing needs of digitization in manufacturing
industries [27].
On the other hand, the process of digitization and incorporation of new
technologies and intelligent systems in various sectors and domains, are the
core enablers for the changes which are about to come with the new way of work
[1]. Nowadays, the business processes of every corporation and organization
are supported by powerful IT systems which become more enhanced by the
introduction of sophisticated robotic and sensor technologies, Cyber-Physical
Systems, 3D printing technologies and intelligent software systems. As a
consequence of the process of rapid digitization and the technology
fluctuations, the requirements and demands for the working individuals in the
workplace are changing.
Therefore, the term Work 4.0 was introduced in November 2015 by the German
Federal Ministry of Labour and Social Affairs (BMAS) when it launched a report
entitled Re-Imagining Work: Green Paper Work 4.0 [26]. This initiative
envisions new ways of work where the focus will be on the human workers,
taking into account their individual abilities, characteristics, and
preferences while aiming at allowing greater flexibility and ensuring work-
life balance. Considering the current predictions, it becomes necessary to
focus on the human as an important part in the sector of industrial
production. Therefore, the need to develop digital assistance systems which
are able to adapt to the personal abilities, needs, and individual
characteristics of the working individuals is emerging.
### Section 2.2 Digital Assistance Systems
The term Digital Assistance System was introduced in [10] as the primary
interface to optimally integrate humans into a production environment during
task execution. Based on this definition, a DAS can be seen as a technical
system for dynamically delivering digitally prepared information.
Informational assistance systems record data via sensors and inputs, then
process this data to provide employees the right information (“what”) at the
right time (“when”) in the desired format (“how”) [22]. The main goals of a
DAS are to avoid uncertainty and mental stress for users, warn them of
dangers, and increase productivity, e.g., by reducing training time, search
times, or operating errors [10]. DAS can be divided into
stationary assistance systems, mobile assistance systems, handheld devices
(such as tablet PCs), and wearables [22]. While stationary assistance systems
are permanently installed at a work station (such as a mounted projection
device), mobile assistance systems, in contrast, are moved to the assembly
object via a mobile solution. Wearables can be classified by the body part on
which they are worn (such as “smart glasses,” “smart gloves,” “smart
watches”).
In the past, DAS have often been used to create standardized instruction
manuals to be used by all employees working on the assembly system (design for
all) – independent of their individual features. To tackle this limitation,
providing personalized and situation-specific assistance is a promising
alternative to empower the workers while supporting them in performing complex
physical and cognitive tasks. However, in order to provide such assistance,
the DAS needs to be enriched with capabilities concerning continuous
monitoring and self-adaptation which are known from the area of Autonomic
Computing. The term Autonomic Computing (also known as AC) refers to the self-
managing characteristics of distributed computing resources, adapting to
unpredictable changes while hiding intrinsic complexity to operators and users
[15]. The AC system concept is designed to monitor and adapt a Managed
Element, using high-level policies. It will constantly check and optimize its
status and automatically adapt itself to changing conditions by using the
Monitor-Analyze-Plan-Execute-Knowledge (MAPE-K) loop. Based on the ideas of
context-awareness and self-adaptation, we aim to bring classical DAS to a new
level of
Self-adaptive Digital Assistance Systems (SADAS).
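The MAPE-K control loop just described can be sketched in a few lines; the sensor, effector, and rule below are invented for illustration and do not correspond to any concrete DAS:

```python
class AutonomicManager:
    """Minimal MAPE-K loop: Monitor, Analyze, Plan, Execute over shared Knowledge."""

    def __init__(self, sensors, effectors, rules):
        self.sensors = sensors      # callables returning (name, value) readings
        self.effectors = effectors  # name -> callable applying an adaptation
        self.rules = rules          # list of (condition, effector_name, argument)
        self.knowledge = {}         # shared knowledge base

    def monitor(self):
        # Collect current sensor readings into the knowledge base.
        for sensor in self.sensors:
            name, value = sensor()
            self.knowledge[name] = value

    def analyze_and_plan(self):
        # Select all adaptations whose condition holds on current knowledge.
        return [(eff, arg) for cond, eff, arg in self.rules if cond(self.knowledge)]

    def execute(self, plan):
        for effector_name, argument in plan:
            self.effectors[effector_name](argument)

    def run_once(self):
        self.monitor()
        self.execute(self.analyze_and_plan())

# Usage: dim ambient light triggers a high-contrast UI (hypothetical managed element).
ui = {"theme": "normal"}
manager = AutonomicManager(
    sensors=[lambda: ("ambient_light", 0.1)],
    effectors={"set_theme": lambda theme: ui.update(theme=theme)},
    rules=[(lambda k: k["ambient_light"] < 0.3, "set_theme", "high_contrast")],
)
manager.run_once()
```

In a real SADAS the loop would run continuously against the Managed Element’s sensors and effectors rather than once.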
## Section 3 Challenges
In the course of two different research projects, we have analyzed the
challenges in the application and adoption of digital assistance systems in
the industry setting. The first project was related to the manual assembly of
an Electrical Cabinet (E-cabinet), while the second one was dealing with the
process of manual assembly of a concrete product in a Smart Factory [12]. For
identifying the requirements and needs of the human workers in the industrial
sector with regard to the usage of DAS, we have conducted semi-structured
interviews with experts from different research fields: Psychology, Sociology,
Didactic, Economics, Computer Science, Electrical and Mechanical Engineering.
Based on this investigation, we have identified the following main challenges:
Challenge 1: Information Presentation
For many years traditional graphical user interfaces (GUIs) have been
successfully adopted for mobile platforms, e.g., through the integration of
multi-touch interaction and responsive layout algorithms that adapt the visual
display to different device sizes. However, in applications that rely on
spatial information related to a real-world environment GUIs are not ideal,
because the information displayed in the interface is removed from its real-
world context and interaction is effected indirectly through the interface
[24]. Especially digital assistance systems in the context of manufacturing
and assembly using current sensor data and the user’s current location are
examples that rely heavily on such spatial information which can be reached
through interaction technologies like Augmented Reality or Virtual Reality.
Associated with the aspect of information presentation is the question of the
computing platform on which a digital assistance system runs and through
which it can be accessed by end-users. There are several target devices for
DAS on the market which
are developed by different companies and organizations. Target devices could
be smartphones, tablets, or HMDs for Augmented Reality (e.g. Microsoft
Hololens, RealWear Glasses, or Google Glass Enterprise Edition) or Virtual
Reality (e.g. HTC Vive, Oculus Quest, or Valve Index). The cost of a device,
its comfort of use, and its ability to help users accomplish their tasks are
some of the factors that influence which equipment different users and
organizations acquire. Each computing platform can have different properties
regarding hardware and sensors, operating system, used SDKs, etc. Given the
heterogeneous span of
various devices for DAS, it is essential to have multi-platform support so
that a digital assistance system can be deployed and used across varying
computing platforms.
Challenge 2: Monitoring
The acceptance and successful application of a digital assistance system
highly depends on the quality of the information that is shown to the end-
users in guiding them through their task. In this regard, it is important to
provide situation-aware information for the end-users so that they can
accomplish their tasks in an efficient, effective, and satisfying manner. For
this purpose, a digital assistance system (DAS) should enable context
monitoring features to the end-users to inform them about dynamically changing
characteristics of the working environment. In this regard, an important
challenge is to continuously observe the context-of-use of a DAS through
various sensors. The context-of-use can be described through different
characteristics regarding the user (physical, emotional, preferences, etc.),
platform (Hololens, Handheld, etc.), and environment (real vs. virtual
environmental information). Due to the rich context dimension which is
spanning over the real-world and virtual objects, it is a complex task to
track and relate the relevant context information to each other. The mixture
of real (position, posture, emotion, etc.) and virtual (coordinates, view
angle, walk-through, etc.) context information additionally increases the
aspect of context management compared to classical context-aware applications
like in the web or mobile context.
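The context-of-use dimensions described above (user, platform, and the mixed real/virtual environment) could be captured, for instance, by a simple data model; all field names here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    skill_level: str = "novice"       # e.g. novice / expert
    preferences: dict = field(default_factory=dict)

@dataclass
class PlatformContext:
    device: str = "hololens"          # hololens, handheld, vr_hmd, ...
    display_size_inch: float = 0.0

@dataclass
class EnvironmentContext:
    # Real-world readings (AR) and virtual-world state (VR) live side by side.
    ambient_light: float = 1.0        # real sensor
    noise_level_db: float = 0.0       # real sensor
    view_angle_deg: float = 0.0       # virtual camera state
    nearby_objects: list = field(default_factory=list)

@dataclass
class ContextOfUse:
    user: UserContext
    platform: PlatformContext
    environment: EnvironmentContext

# Example: an expert user on a handheld device in a dark environment.
ctx = ContextOfUse(UserContext("expert"),
                   PlatformContext("handheld", 10.1),
                   EnvironmentContext(ambient_light=0.2))
```

Keeping real and virtual readings in one environment record is what allows the monitoring component to relate them to each other.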
Challenge 3: Adaptation
Based on the collected context information, a decision-making process is
required to analyze and decide whether conditions and constraints are
fulfilled to trigger specific adaptation operations on the DAS. In general, an
important challenge is to cope with conflicting adaptation rules which aim at
different adaptation goals. This problem is even more emphasized in the case
of AR-based digital assistance systems as we need to ensure a consistent
display between the real-world entities and virtual overlay information. For
the decision-making step, it is also important to decide about a reasoning
technique like rule-based or learning-based to provide a performant and
scalable solution.
As AR/VR-based digital assistance systems consist of a complex structure and
composition, an extremely high number of various adaptations is possible. The
adaptations should cover text, symbols, 2D images, and videos, as well as 3D
models and animations. In this regard, many adaptation combinations and
modality changes increase the complexity of the adaptation process.
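One common way to cope with conflicting adaptation rules is to attach priorities and let, per UI property, only the highest-priority applicable rule win. This is just one possible rule-based strategy, and the rules and property names below are illustrative:

```python
def decide(context, rules):
    # For every UI property, keep only the highest-priority applicable rule.
    winners = {}  # ui_property -> (priority, value)
    for rule in rules:
        if rule["condition"](context):
            prop, prio = rule["property"], rule["priority"]
            if prop not in winners or prio > winners[prop][0]:
                winners[prop] = (prio, rule["value"])
    return {prop: value for prop, (_, value) in winners.items()}

rules = [
    # Both rules target 'modality' and can fire together -> a conflict.
    {"condition": lambda c: c["noise_db"] > 70,
     "property": "modality", "value": "visual", "priority": 2},
    {"condition": lambda c: c["hands_busy"],
     "property": "modality", "value": "audio", "priority": 1},
    {"condition": lambda c: c["ambient_light"] < 0.3,
     "property": "contrast", "value": "high", "priority": 1},
]

# Loud and dark environment with busy hands: the noise rule outranks the
# hands-busy rule, so the visual modality wins.
plan = decide({"noise_db": 80, "hands_busy": True, "ambient_light": 0.2}, rules)
```

A learning-based reasoner could replace `decide` without changing the surrounding loop, which is the trade-off mentioned above.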
## Section 4 Related Work
In previous work, different approaches were introduced to address the above-
mentioned challenges Information Presentation, Monitoring, and Adaptation
within the scope of digital assistance systems.
In [3], the authors present a framework for assistance systems to support work
processes in smart factories. They argue that, due to the large spectrum of
assistance systems, it is hard to acquire an overview and to select an
adequate digital assistance system based on meaningful criteria. Therefore,
they suggest a set of comparison criteria in order to ease the selection of an
adequate assistance system. Compared to our framework, this work is rather
supporting the process of selecting a suitable digital assistance system while
the above-mentioned challenges are not explicitly covered.
Similar work is presented in [5] where the authors present solution ideas for
the technological assistance of workers. Besides technological means for
supporting human-machine interaction in the Industry 4.0 era, the authors
describe how the novel role of human workers in the context of Industry 4.0
should be addressed. As concrete examples for digital assistance systems, they
focus on web-based and mobile apps incorporating AR features for Work 4.0
scenarios. They also focus on the aspects of context monitoring and adaptive
UIs for the AR-based assistance app. However, the main focus is on hand-held
AR devices, while the usage of head-mounted displays in AR or VR is not
covered.
A more formal approach in guiding different stakeholders to choose an adequate
digital assistance system for their organization, domain, or application field
is presented in [16]. This work proposes a process-based model to facilitate
the selection of suitable DAS for supporting maintenance operations in
manufacturing industries. Using this approach, a digital assistance system is
selected and linked to maintenance activities. Furthermore, they collect user
feedback by employing the selected DAS to improve the quality of
recommendations and to identify the strength and weaknesses of each DAS in
association with the maintenance tasks. While this approach supports the
selection of an adequate DAS in the context of Industry 4.0 it is not focusing
on the aspects of monitoring and adaptation.
Besides the above-described approaches, some other works in the field of
digital assistance systems apply the human-centered design process in order to
design and develop assistance systems that fulfill the needs of the user
requirements and the context-of-use. An example work in this direction is
presented in [21] where the authors develop a digital assistance system for
production planning and control. Similar to our work, this approach is
focusing on the aspects of context monitoring and UI adaptations within DAS,
however, they do not apply AR/VR interfaces to cover spatial information and
interaction with a DAS.
Another type of work related to digital assistance systems is presented in [4]
where the authors propose a lightweight canvas method to foster
interdisciplinary discussions on DAS. While this approach is primarily
focusing on interdisciplinary discussions in the early stages of requirements
understanding and design of DAS, it is not addressing a development process
for situation-aware digital assistance systems in AR or VR.
The most related approach to our monitoring and adaptation framework for SADAS
is presented in [12]. In this work, the authors introduce a digital-twin based
multi-modal UI adaptation framework for assistance systems in Industry 4.0.
This approach characterizes a predecessor solution of our presented work here.
While this work covers aspects of context-awareness and adaptation for DAS,
the scope of targeted applications and devices remains restricted so that AR
and VR technologies for example are not covered.
While the above-described approaches highly focus on the selection and
development of digital assistance systems, they are not fully covering the
novel aspects of context-awareness and adaptation especially in the
combination of AR and VR applied for DAS. Therefore, in the following,
according to the challenges introduced in Section 3, we analyze further
approaches which focus more on the topic of context monitoring and adaptation
within the scope of AR and VR applications.
Augmented Reality (AR) and Virtual Reality (VR) have been a topic of intense
research in the last decades. In the past few years, massive advances in
affordable consumer hardware and accessible software frameworks are now
bringing AR and VR to the masses. AR enables the augmentation of real-world
physical objects with virtual elements and has been already applied for
different aspects such as robot programming [34], product configuration (e.g.,
[6], [7]), prototyping [13], planning and measurements [40] or for realizing
smart interfaces (e.g., [17], [35]). In contrast to AR, VR interfaces support
the interaction in an immersive computer-generated 3D world and have been used
in different application domains such as training [36], prototyping [38],
robotics [37], education [41], healthcare [31], or even for collaborative
software modeling [30].
While context-awareness has been exploited in various types of applications
including web [28], mobile (e.g., [33] or [32]), and cross-channel
applications (e.g., [39] or [29]) to improve the usability of an interactive
system by adapting its user interface, only a few existing works are focusing
on the topic of context-awareness in AR and VR.
In [8], the concept of Pervasive Augmented Reality (PAR) is introduced. A
taxonomy for PAR and context-aware AR that classifies context sources and
targets is presented. The context sources are classified as human,
environmental, and system factors. As apparent in the title, Grubert’s work
treats Augmented Reality, with special regard to pervasive Augmented Reality.
Context-aware Mobile Augmented Reality (CAMAR) [23] is an approach on context-
awareness in mobile AR focusing on user context, which is measured using the
user’s mobile device. It enables the user to customize the presentation of
virtual content and to share this information with other users selectively,
depending on the context. Furthermore, a framework called UCAM (Unified
Context-aware Application Model) [11] can be used to create CAMAR-enabled
applications. UCAM is a framework which besides the acquisition, process, and
awareness of contextual information provides also a unified way of
representation with respect to user, content, and environment.
The framework presented in [20] focuses on the context-aware adaptation of
interfaces in mixed reality, with the main adaptation points being what
content is displayed, where it is shown, and how much of it is displayed. It
is designed to adjust the content display depending on the user’s tasks and
their cognitive load, and achieves this using a combination of rule-based
decisions and combinatorial optimization. The framework uses
parameters about the applications that are to be displayed as input
additionally to the context-specific parameters to achieve a fitting layout
optimization. The framework names mixed reality as its basis; however, since
it displays content in the real world and does not create an entirely new
virtual world, it can safely be said that AR is supported.
Apart from the above-mentioned approaches which address the development of
context-aware applications in general without directly focusing on digital
assistance systems in the context of Industry 4.0, there are also specific
approaches that use AR in the smart factory context. One example of such an
approach is presented in [24]. Here, AR is used for supporting workers in an
Industry 4.0 environment where they have to accomplish assembly tasks. The
work presents the initial experience with the AR-based assistance systems.
## Section 5 Monitoring and Adaptation Framework
In order to address the described challenges, we present a monitoring and
adaptation framework for supporting self-adaptive digital assistance systems
(SADAS). Our framework which is based on the MAPE-K architecture [15], is
depicted in Figure 1.
Figure 1: Architectural overview of the monitoring and adaptation framework
It is basically divided up into two main components, the Autonomic Manager and
the Managed Element. The Autonomic Manager is responsible for continuously
monitoring the Managed Element through Sensors and to automatically react to
changing conditions by adapting the Managed Element through Effectors. For
this purpose, the Autonomic Manager runs a control loop called MAPE-K, an
acronym formed from the initial letters of its main sub-components: Monitor,
Analyze, Plan, Execute, and Knowledge.
In our case, the Managed Element represents the Digital Assistance System
(DAS), which is deployed on an execution platform that can be accessed through
different devices such as VR HMDs, AR Smart Glasses, or Tablets. Furthermore,
the DAS is characterized by context information that can be observed through
Context Monitoring Features. This context information consists either of
Real-world Context information, which is gathered from existing sensors in the
real physical world (in the case of AR), or Virtual-world Context information,
when context such as gestures, pose, or virtual environment information is
continuously monitored in the VR world. Besides the context information that
can be observed through the sensors of the DAS, there are Adaptation Features
that characterize the adaptation operations executed by means of the Effectors
of the DAS. The Adaptation Features can contain various adaptation operations
to adjust the DAS interface through run-time adaptations, e.g., changing
modality or layout.
A refined architectural overview of our monitoring and adaptation framework
for virtual and augmented reality (MAVAR) based SADAS is depicted in Figure 2.
It consists of three main components: Context Monitoring, Decision Making, and
Adaptation.
Figure 2: Monitoring, Decision Making, and Adaptation in MAVAR
The Context Monitoring component is responsible for constantly collecting
information about different kinds of context so that the framework can react
to them appropriately. All context information is measured by sensors: partly
real sensors such as the camera or inertial sensors like gyroscopes, and
partly sensors in a more figurative sense, such as measurements of the
application itself (e.g., the positioning of virtual objects or usage
statistics). The sensor data is read out through Sensor APIs and used by
several sub-components, each responsible for checking one specific context
feature (e.g., DarkCondition and DistanceToUserBigCondition in Figure 2). The
Context Monitoring Features are realized by Condition sub-classes, where each
context feature has its own component responsible for monitoring it. The
context observed by the Condition sub-classes is categorized into Environment,
User, and Platform Context. The Environment Context includes everything that
impacts the system from the outside, such as noise or objects in the real
world (in AR) or virtual environmental information (in VR). User Context
denotes any information available about the user, such as age or experience,
but also the social context the user is currently in. Any information about
the platform on which the system is running, like the availability of sensors
or the compatibility with different software kits, is summarized in the
Platform Context.
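The condition mechanism above can be sketched as follows. The class names DarkCondition and DistanceToUserBigCondition appear in Figure 2; the thresholds, sensor probes, and the `fulfilled`/`check` interface are assumptions made for illustration (the 1.2 m distance threshold is borrowed from the maintenance case study).

```python
# Sketch of Condition sub-classes: each monitors one context feature,
# caches its last state in `fulfilled`, and reports via check() whether
# the state changed. Interface and thresholds are illustrative.

from abc import ABC, abstractmethod

class Condition(ABC):
    """Monitors one context feature and remembers its last state."""
    def __init__(self):
        self.fulfilled = False

    @abstractmethod
    def check(self) -> bool:
        """Re-evaluate the feature; return True if its state changed."""

class DarkCondition(Condition):
    """True when the camera receives very little light."""
    def __init__(self, read_brightness, threshold=0.1):
        super().__init__()
        self.read_brightness = read_brightness
        self.threshold = threshold

    def check(self):
        new_state = self.read_brightness() < self.threshold
        changed = new_state != self.fulfilled
        self.fulfilled = new_state
        return changed

class DistanceToUserBigCondition(Condition):
    """True when the user is farther away than a threshold (e.g. 1.2 m)."""
    def __init__(self, read_distance_m, threshold_m=1.2):
        super().__init__()
        self.read_distance_m = read_distance_m
        self.threshold_m = threshold_m

    def check(self):
        new_state = self.read_distance_m() > self.threshold_m
        changed = new_state != self.fulfilled
        self.fulfilled = new_state
        return changed
```

Returning a change flag (rather than the raw state) lets an observer react only to transitions, which matches the change-reporting behavior described later for the control component.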
The Decision Making is led by the Control component, which supervises the
active conditions and rules from the Context Monitoring and Adaptation
components. The connections between the Control, Rule, and Condition
components make it possible for them to work together closely and connect the
Context Monitoring and Adaptation components.
The Adaptation component is responsible for executing actions in response to a
captured context change. The Adaptation Features component consists of several
sub-components (such as the AudioOutRule or the FaceUserRule in Figure 2),
each of which executes a respective adaptation. To do so, the sub-components
make use of different parts of the System API to access the needed functions.
Examples of these APIs are the AR API, which is necessary to influence the AR
part of the application, and the Native API, which is used to influence native
system functions such as the language. The adaptation features, which are
executed by the Rule sub-classes, are divided into Style, Modality, Service,
Content Presentation, Real-World, and Virtual-World changes. The Style
adaptation operation changes the look or behavior of single elements, the
Modality operation adjusts the sensory input and output the user employs to
interact with the application, and Content Presentation concerns the way
content is presented to the user on the screen. Furthermore, the Service
change operation describes changes made on the device level regarding the type
of device or the features it offers, the Real-World changes concern actions
tied to objects in the real, physical surroundings of the device, and the
Virtual-World changes describe adjustments of virtual objects in either an AR
or a VR scene.
To achieve the required monitoring and adaptation functionality of a DAS, it
is necessary to connect adaptations to the context changes that should trigger
them. In MAVAR, this is done by adding one or more conditions to a rule, which
then reacts to changes in the context features monitored by its conditions
with the adaptation implemented for it. More specifically, a rule becomes
active and is executed as soon as all of its conditions are fulfilled at the
same time. For cleanup purposes, there is also an unexecute method, which is
called when one or more of a rule’s conditions are no longer fulfilled (making
the rule inactive) and which is supposed to be used to reverse any effects of
the rule’s execution that should only remain active as long as its conditions
hold.
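The execute/unexecute behavior just described can be sketched as a small state machine: a rule fires `execute` on the transition from "not all conditions fulfilled" to "all fulfilled", and `unexecute` on the reverse transition. The class below is an assumed implementation, not the MAVAR source; only the method names mirror the text.

```python
# Sketch of rule activation: execute when all conditions become
# fulfilled, unexecute when any of them stops holding. Illustrative.

class StubCondition:
    """Minimal condition stand-in exposing the `fulfilled` flag a rule reads."""
    def __init__(self, fulfilled=False):
        self.fulfilled = fulfilled

class Rule:
    def __init__(self, conditions, execute, unexecute):
        self.conditions = conditions
        self._execute = execute       # performs the adaptation
        self._unexecute = unexecute   # reverses the adaptation
        self.active = False

    def evaluate(self):
        """Return True if the rule switched between active and inactive."""
        all_true = all(c.fulfilled for c in self.conditions)
        if all_true and not self.active:
            self.active = True
            self._execute()
            return True
        if not all_true and self.active:
            self.active = False
            self._unexecute()
            return True
        return False
```

Keeping the `active` flag inside the rule is what prevents an adaptation from being re-executed on every evaluation while its conditions stay true.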
The constant monitoring of the context through the conditions and the prompt
execution of the adaptations through the rules are ensured by the control
component, which acts as an observer for all condition and rule components.
Conditions report a change to the control when they detect a change in the
context feature they monitor, i.e., they report when the feature newly becomes
true or newly becomes false. Rules report a change to the control when they
are either executed (thereby performing an adaptation) or unexecuted
(reversing an adaptation). As the rules depend on their registered conditions
for changing, they report to the control either when all their conditions are
true and were not all true before (the rule gets executed), or when all
conditions were true before and at least one has newly turned false (the rule
gets unexecuted).
To make sure context changes are detected, the control component regularly
updates itself. When the update method is called, each registered rule is
evaluated and in turn evaluates its respective conditions to check whether a
context change has happened since the last evaluation, returning the new state
of the context to the rule. If a change occurred, the condition also notifies
the observing control component, triggering a new update process. The rules
receive the results of their respective conditions and react accordingly (for
example by calling their execute method). If a rule is executed or unexecuted,
it notifies the observing control component, so that a new update is performed
in case any context features were affected by the rule’s actions.
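The update cycle above can be sketched as follows: the control re-checks all conditions, re-evaluates all rules, and repeats while any rule reported a state change (since an executed adaptation may itself affect context features). The classes are self-contained stand-ins under assumed names; a real implementation would use observer callbacks rather than polling rounds, and the round bound here is an assumption to avoid oscillating rules looping forever.

```python
# Sketch of the control component's update loop (illustrative names).

class SimpleCondition:
    """Stand-in condition: wraps a boolean-returning probe of the context."""
    def __init__(self, probe):
        self.probe = probe
        self.fulfilled = False

    def check(self):
        self.fulfilled = self.probe()

class SimpleRule:
    """Stand-in rule: fires its action when all conditions newly hold."""
    def __init__(self, conditions, action):
        self.conditions = conditions
        self.action = action
        self.active = False

    def evaluate(self):
        all_true = all(c.fulfilled for c in self.conditions)
        changed = all_true != self.active
        self.active = all_true
        if changed and all_true:
            self.action()
        return changed

class Control:
    """Observer stand-in: re-runs the check/evaluate cycle until quiet."""
    def __init__(self, conditions, rules):
        self.conditions = conditions
        self.rules = rules

    def update(self, max_rounds=10):
        for _ in range(max_rounds):
            for condition in self.conditions:
                condition.check()
            # Evaluate every rule (no short-circuiting), then loop again
            # only if at least one rule changed state this round.
            changes = [rule.evaluate() for rule in self.rules]
            if not any(changes):
                break
```

Note the list comprehension instead of a bare `any(...)` generator: every rule must be evaluated each round, not just up to the first one that changed.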
## Section 6 Case Studies
In this section, two case studies are presented which show the benefit of our
monitoring and adaptation framework for digital assistance systems. The first
case study deals with an AR-based SADAS for maintenance scenarios. The second
case study shows an example of a VR-based SADAS which supports warehouse
management training in a virtual environment.
### Section 6.1 Example 1: AR-based Context-aware Assistance for Maintenance
Tasks
As an example application of our monitoring and adaptation framework for DAS,
a multi-platform and context-aware AR app for printer maintenance was
developed. The app guides its user through the process of exchanging the ink
cartridges of a printer step by step, with each step being described in a text
window and illustrated by 3D arrows and other elements arranged on the
printer.
Some of the monitoring and adaptation features are illustrated with
screenshots in the following figures, with the left image illustrating the
state before the adaptation and the right image showing it after adapting.
Figure 3: Experience-level adaptation: Instructions on how to turn elements
are only shown to new users.
In Figure 3, the effect of an adaptation responding to the user’s experience
is shown. The app displays example control elements to illustrate how to
perform different transformations on 3D objects and thereby shows the user
how they can, for example, rotate objects. As the action responsible for
displaying these illustrations is connected to a condition monitoring the
number of app uses, it is only executed on the first five uses of the app, so
the user can use the app undisturbed once they have gotten used to the
controls.
Figure 4: View-angle adaptation: The window dynamically rotates to face the
user regardless of their position.
While working on the printer, the user has to access the printer from several
angles. This can cause them to look at the message with the instructions from
a very steep angle, which makes it hard or impossible to read. Of course, the
user could move away from the printer, read the message, and then go back to
the printer again, but that would be rather inconvenient and disruptive for
the workflow. For this reason, the application uses an action that rotates the
specified object, in this case, the message window, towards the user at all
times (see Figure 4) to make sure it is always readable.
Figure 5: Modality adaptation: The application switches to a conversational UI
if the AR camera is not available.
Figure 6: Distance-based adaptation: When the user moves away from the
printer, the level of detail is decreased while the text size is increased.
There is also a condition that turns true whenever the camera receives very
little light input, which is usually the case when the device has been laid
down, for example, because the user needs their hands free. As this prevents
the user from seeing any objects of the AR application, a voice interface is
activated (see Figure 5). It ensures that the description of the current task
is read out to the user and enables them to interact with the application
using voice commands. This allows the user to interact with the application
even when they are currently not able to hold the device and use it in AR
mode.
Figure 7: Context-aware AR Printer Maintenance App on HoloLens
Figure 6 shows a change in the instruction window’s level of detail. This is
achieved using the corresponding action in combination with a condition that
reacts to the user’s distance to the printer, so the level of detail is
lowered if the user is more than, for example, 1.2 meters away from the
printer. Adjusting the level of detail can help the user focus on the
currently important task. It can also be used to make the user come closer to
important objects or to create a simpler and tidier AR environment by removing
information that is currently unnecessary.
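The distance-based adaptation just described can be wired together as in the following sketch. The 1.2 m threshold and the behavior (less detail, larger text, as in the Figure 6 caption) come from the text; the `InstructionWindow` class, field names, and factory function are hypothetical.

```python
# Hypothetical wiring of the distance-based adaptation: when the user
# is farther than the threshold, reduce the window's level of detail
# and enlarge its text; restore both when the user comes back.

class InstructionWindow:
    def __init__(self):
        self.level_of_detail = "full"
        self.text_scale = 1.0

def make_distance_rule(window, read_distance_m, threshold_m=1.2):
    """Returns an evaluate() callable tying the condition to the adaptation."""
    def on_far():
        window.level_of_detail = "reduced"  # fewer details shown
        window.text_scale = 1.5             # larger, readable text

    def on_near():
        window.level_of_detail = "full"
        window.text_scale = 1.0

    def evaluate():
        if read_distance_m() > threshold_m:
            on_far()
        else:
            on_near()

    return evaluate
```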
These features, amongst others, aim to enhance the user’s experience using the
printer maintenance application. As they are all executed automatically based
on context information, the user does not have to do anything to get an
application that is at all times customized to the current situation.
Furthermore, in Figure 7, screenshots of the same digital assistance system
application are shown for a different target platform. Instead of an Android-
based AR app as shown before, the same app now runs on the HoloLens, with the
same context-awareness and UI adaptation features supported on the new target
platform.
In summary, the implementation of the above-described AR-based digital
assistance system shows how our MAVAR framework supports the monitoring and
adaptation process of a DAS on different target platforms.
### Section 6.2 Example 2: VR-based Context-aware Assistance for Warehouse
Management Training
As a further application scenario for our monitoring and adaptation framework
for DAS, we present an example from the logistics domain. A typical task in
this domain is warehouse management, where employees have to carry out picking
and ordering operations. As shown in Figure 8(a), a digital assistance system
is used to support the employees in their tasks. In most cases, the picking
process is paper-based in the classical sense, or sometimes there is a digital
assistance system in the form of an application running on a tablet. In both
cases, logistics executives still often detect stock discrepancies and
misplaced wares. This is due to the different levels of expertise of employees
with the warehouse management system and the order picking process.
Figure 8: Warehouse Management
In addition, the differences in performance between order pickers are great:
some pick quickly and precisely, while others make errors or even damage
wares. Furthermore, in a real physical setting, training of the employees is
expensive and time-consuming. It is often also the case that companies do not
have a spare warehouse where they can train new employees and relocate moved
wares to their original positions after the training. Furthermore, it is
desirable to offer repeatable and comparable training, especially for new or
short-term staff.
To overcome these issues, we have developed a VR-based assistance system to
support the training of the picking process. As shown in Figure 8(b), the same
warehouse management application running on the tablet is provided in the
virtual training environment. In addition, the core application logic of the
warehouse management app can be further augmented with virtual elements
displayed in the virtual environment. This is, for example, used to realize a
step-by-step guidance of the user of the VR training application. In Figure
9(a), one can see how additional textual descriptions help the user to
accomplish the task. Similar to the AR-based assistance system, we also have
monitoring and adaptation features here to guide the learning process in the
warehouse management training app in the most suitable way. In Figure 9(b),
for example, it is shown how the location of the wares which the user has to
pick is highlighted by a green box. Based on such situation-aware information,
the users can be guided through step-by-step context-aware information so that
the learning effect can increase.
Figure 9: VR Warehouse Management Training Application
Furthermore, our VR-based assistance system is designed in such a way that
different workflows in the area of warehouse management (Single-order and
Multi-order picking with different kinds of exceptions) can be supported. For
this purpose, the different training workflows are specified based on a
process model (in our case BPMN) which can be edited according to the needs of
the VR training application.
To sum up, the above-described VR-based digital assistance system for
warehouse management training illustrates that our monitoring and adaptation
framework is applicable to different kinds of digital assistance systems and
flexible enough to cover various workflows in this area.
### Section 6.3 Discussion
While the above-described case studies illustrate the benefit of our
monitoring and adaptation framework, there is still room for improvement of
such self-adaptive digital assistance systems before they can find their way
into industrial practice. In this regard, it has to be mentioned that the
implementation of our framework is currently in a prototypical state, where
several improvements regarding the visualization of the AR/VR interfaces can
be made to increase the usability and user experience (UX) for end-users. In
this context, a usability study should be conducted to assess the usability
and UX of the resulting DAS based on our framework. Besides that, the
efficiency and effectiveness of our monitoring and adaptation framework should
be analyzed to check its applicability and benefit in further domains beyond
maintenance and training.
## Section 7 Conclusion and Outlook
As a consequence of ongoing digital transformation, new technologies are
emerging which change the way we are working and communicating with humans and
machines. In this context, digital assistance systems play a crucial role as
they provide means for supporting human-to-human and human-to-machine
interactions. Furthermore, such digital assistance systems can be used to
provide instructions and technical support in the working process as well as
for training purposes.
In this book chapter, we argue that existing digital assistance systems are
mostly created following the “design for all” paradigm, neglecting the
situation-specific tasks, skills, preferences, or environments of an
individual human worker. To overcome this issue, we first discuss the main
challenges in developing self-adaptive AR/VR-based digital assistance systems.
After that, we present a monitoring and adaptation framework for supporting
self-adaptive AR/VR-based digital assistance systems for Work 4.0. Our
framework supports context monitoring as well as UI adaptation for AR/VR-based
digital assistance systems. The benefit of our framework is shown based on
exemplary case studies from different domains, e.g. context-aware maintenance
application in augmented reality or warehouse management training in virtual
reality.
Although the presented framework makes a further step towards supporting and
improving the working processes of humans in times of Industry 4.0, further
research on assistance systems has to be done to reach a better degree of
acceptance. To this end, further improvements in the areas of hardware and
display technology are required so that dynamic 3D objects and information can
be visualized on smart glasses or similar wearables. Beyond that, holographic
3D displays are emerging which could lead to future holographic working
environments. Furthermore, intelligent techniques are necessary to improve
object detection at run time, which is a key enabler for assisting humans in
working processes. In its current state, the implemented adaptation process in
our framework follows a rule-based approach. Further optimization of UI
adaptations can be achieved by extending the adaptation manager with machine
learning algorithms. This way, log data (context information, previous
adaptations, and user feedback) can be analyzed to learn the most suitable
adaptations for future context-of-use situations. In general, a broader
acceptance of AR/VR technologies needs to be reached so that these
technologies can be used to augment human abilities and thus improve their
cognitive and physical tasks.
## References
* [1] Bonekamp, L., Sure, M.: Consequences of industry 4.0 on human labour and work organisation. Journal of Business and Media Psychology 6(1), 33–40 (2015)
* [2] De Vos, M.: Work 4.0 and the future of labour law. Available at SSRN 3217834 (2018)
* [3] Fellmann, M., Robert, S., Büttner, S., Mucha, H., Röcker, C.: Towards a framework for assistance systems to support work processes in smart factories. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E.R. (eds.) Machine Learning and Knowledge Extraction - First IFIP TC 5, WG 8.4, 8.9, 12.9 International Cross-Domain Conference, CD-MAKE 2017, Reggio di Calabria, Italy, August 29 - September 1, 2017, Proceedings. Lecture Notes in Computer Science, vol. 10410, pp. 59–68. Springer (2017). https://doi.org/10.1007/978-3-319-66808-6_5, https://doi.org/10.1007/978-3-319-66808-6_5
* [4] Fischer, H., Senft, B., Rittmeier, F., Sauer, S.: A canvas method to foster interdisciplinary discussions on digital assistance systems. In: Marcus, A., Wang, W. (eds.) Design, User Experience, and Usability: Theory and Practice - 7th International Conference, DUXU 2018, Held as Part of HCI International 2018, Las Vegas, NV, USA, July 15-20, 2018, Proceedings, Part I. Lecture Notes in Computer Science, vol. 10918, pp. 711–724. Springer (2018). https://doi.org/10.1007/978-3-319-91797-9_49, https://doi.org/10.1007/978-3-319-91797-9_49
* [5] Gorecky, D., Schmitt, M., Loskyll, M., Zühlke, D.: Human-machine-interaction in the industry 4.0 era. In: 12th IEEE International Conference on Industrial Informatics, INDIN 2014, Porto Alegre, RS, Brazil, July 27-30, 2014. pp. 289–294. IEEE (2014). https://doi.org/10.1109/INDIN.2014.6945523, https://doi.org/10.1109/INDIN.2014.6945523
* [6] Gottschalk, S., Yigitbas, E., Schmidt, E., Engels, G.: Model-based product configuration in augmented reality applications. In: Bernhaupt, R., Ardito, C., Sauer, S. (eds.) Human-Centered Software Engineering - 8th IFIP WG 13.2 International Working Conference, HCSE 2020, Eindhoven, The Netherlands, November 30 - December 2, 2020, Proceedings. Lecture Notes in Computer Science, vol. 12481, pp. 84–104. Springer (2020). https://doi.org/10.1007/978-3-030-64266-2_5, https://doi.org/10.1007/978-3-030-64266-2_5
* [7] Gottschalk, S., Yigitbas, E., Schmidt, E., Engels, G.: Proconar: A tool support for model-based AR product configuration. In: Bernhaupt, R., Ardito, C., Sauer, S. (eds.) Human-Centered Software Engineering - 8th IFIP WG 13.2 International Working Conference, HCSE 2020, Eindhoven, The Netherlands, November 30 - December 2, 2020, Proceedings. Lecture Notes in Computer Science, vol. 12481, pp. 207–215. Springer (2020). https://doi.org/10.1007/978-3-030-64266-2_14, https://doi.org/10.1007/978-3-030-64266-2_14
* [8] Grubert, J., et al.: Towards pervasive augmented reality: Context-awareness in augmented reality. IEEE Trans. Vis. Comput. Graph. 23(6), 1706–1724 (2017)
* [9] Hinrichsen, S., Bendzioch, S.: How digital assistance systems improve work productivity in assembly. In: International Conference on Applied Human Factors and Ergonomics. pp. 332–342. Springer (2018)
* [10] Hold, P., Erol, S., Reisinger, G., Sihn, W.: Planning and evaluation of digital assistance systems. Procedia Manufacturing 9, 143–150 (2017)
* [11] Hong, D., Shin, C., Oh, S., Woo, W.: A new paradigm for user interaction in ubiquitous computing environment. ISUVR 2006 pp. 41–44 (2006)
* [12] Josifovska, K., Yigitbas, E., Engels, G.: A digital twin-based multi-modal UI adaptation framework for assistance systems in industry 4.0. In: Kurosu, M. (ed.) Human-Computer Interaction. Design Practice in Contemporary Societies - Thematic Area, HCI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part III. Lecture Notes in Computer Science, vol. 11568, pp. 398–409. Springer (2019). https://doi.org/10.1007/978-3-030-22636-7_30, https://doi.org/10.1007/978-3-030-22636-7_30
* [13] Jovanovikj, I., Yigitbas, E., Sauer, S., Engels, G.: Augmented and virtual reality object repository for rapid prototyping. In: Bernhaupt, R., Ardito, C., Sauer, S. (eds.) Human-Centered Software Engineering - 8th IFIP WG 13.2 International Working Conference, HCSE 2020, Eindhoven, The Netherlands, November 30 - December 2, 2020, Proceedings. Lecture Notes in Computer Science, vol. 12481, pp. 216–224. Springer (2020). https://doi.org/10.1007/978-3-030-64266-2_15, https://doi.org/10.1007/978-3-030-64266-2_15
* [14] Keller, T., Bayer, C., Bausch, P., Metternich, J.: Benefit evaluation of digital assistance systems for assembly workstations. Procedia CIRP 81, 441–446 (2019)
* [15] Kephart, J.O., Chess, D.M.: The vision of autonomic computing. Computer 36(1), 41–50 (2003). https://doi.org/10.1109/MC.2003.1160055, https://doi.org/10.1109/MC.2003.1160055
* [16] Kovacs, K., Ansari, F., Geisert, C., Uhlmann, E., Glawar, R., Sihn, W.: A process model for enhancing digital assistance in knowledge-based maintenance. In: Beyerer, J., Kühnert, C., Niggemann, O. (eds.) Machine Learning for Cyber Physical Systems, Selected papers from the International Conference ML4CPS 2018, Karlsruhe, Germany, October 23-24, 2018. pp. 87–96. Springer (2018). https://doi.org/10.1007/978-3-662-58485-9_10, https://doi.org/10.1007/978-3-662-58485-9_10
* [17] Krings, S., Yigitbas, E., Jovanovikj, I., Sauer, S., Engels, G.: Development framework for context-aware augmented reality applications. In: Bowen, J., Vanderdonckt, J., Winckler, M. (eds.) EICS ’20: ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Sophia Antipolis, France, June 23-26, 2020. pp. 9:1–9:6. ACM (2020). https://doi.org/10.1145/3393672.3398640, https://doi.org/10.1145/3393672.3398640
* [18] Laddaga, R., Robertson, P.: Self adaptive software: A position paper. In: SELF-STAR: International Workshop on Self-* Properties in Complex Information Systems. vol. 31, p. 19. Citeseer (2004)
* [19] Lasi, H., Fettke, P., Kemper, H.G., Feld, T., Hoffmann, M.: Industry 4.0. Business & information systems engineering 6(4), 239–242 (2014)
* [20] Lindlbauer, D., Feit, A.M., Hilliges, O.: Context-aware online adaptation of mixed reality interfaces. In: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST 2019, New Orleans, LA, USA, October 20-23, 2019. pp. 147–160 (2019)
* [21] Nelles, J., Kuz, S., Mertens, A., Schlick, C.M.: Human-centered design of assistance systems for production planning and control: The role of the human in industry 4.0. In: IEEE International Conference on Industrial Technology, ICIT 2016, Taipei, Taiwan, March 14-17, 2016. pp. 2099–2104. IEEE (2016). https://doi.org/10.1109/ICIT.2016.7475093, https://doi.org/10.1109/ICIT.2016.7475093
* [22] Nikolenko, A., Sehr, P., Hinrichsen, S., Bendzioch, S.: Digital assembly assistance systems–a case study. In: International Conference on Applied Human Factors and Ergonomics. pp. 24–33. Springer (2019)
* [23] Oh, S., Woo, W., et al.: Camar: Context-aware mobile augmented reality in smart space. Proc. of IWUVR 9, 48–51 (2009)
* [24] Paelke, V.: Augmented reality in the smart factory: Supporting workers in an industry 4.0. environment. In: Grau, A., Martínez, H. (eds.) Proceedings of the 2014 IEEE Emerging Technology and Factory Automation, ETFA 2014, Barcelona, Spain, September 16-19, 2014. pp. 1–4. IEEE (2014). https://doi.org/10.1109/ETFA.2014.7005252, https://doi.org/10.1109/ETFA.2014.7005252
* [25] Rüßmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., Harnisch, M.: Industry 4.0: The future of productivity and growth in manufacturing industries. Boston Consulting Group 9(1), 54–89 (2015)
* [26] Salimi, M.: Work 4.0: An enormous potential for economic growth in germany. ADAPT Bulletin 16 (2015)
* [27] Xu, L.D., Xu, E.L., Li, L.: Industry 4.0: state of the art and future trends. International Journal of Production Research 56(8), 2941–2962 (2018)
* [28] Yigitbas, E., et al.: Self-adaptive UIs: Integrated model-driven development of UIs and their adaptations. In: Proc. of the ECMFA 2017. pp. 126–141 (2017)
* [29] Yigitbas, E., Anjorin, A., Jovanovikj, I., Kern, T., Sauer, S., Engels, G.: Usability evaluation of model-driven cross-device web user interfaces. In: Bogdan, C., Kuusinen, K., Lárusdóttir, M.K., Palanque, P.A., Winckler, M. (eds.) Human-Centered Software Engineering - 7th IFIP WG 13.2 International Working Conference, HCSE 2018, Sophia Antipolis, France, September 3-5, 2018, Revised Selected Papers. Lecture Notes in Computer Science, vol. 11262, pp. 231–247. Springer (2018). https://doi.org/10.1007/978-3-030-05909-5_14, https://doi.org/10.1007/978-3-030-05909-5_14
* [30] Yigitbas, E., Gorissen, S., Weidmann, N., Engels, G.: Collaborative software modeling in virtual reality. CoRR abs/2107.12772 (2021), https://arxiv.org/abs/2107.12772
* [31] Yigitbas, E., Heindörfer, J., Engels, G.: A context-aware virtual reality first aid training application. In: Alt, F., Bulling, A., Döring, T. (eds.) Proc. of Mensch und Computer 2019. pp. 885–888. GI / ACM (2019)
* [32] Yigitbas, E., Hottung, A., Rojas, S.M., Anjorin, A., Sauer, S., Engels, G.: Context- and data-driven satisfaction analysis of user interface adaptations based on instant user feedback. Proc. ACM Hum. Comput. Interact. 3(EICS), 19:1–19:20 (2019). https://doi.org/10.1145/3331161, https://doi.org/10.1145/3331161
* [33] Yigitbas, E., Josifovska, K., Jovanovikj, I., Kalinci, F., Anjorin, A., Engels, G.: Component-based development of adaptive user interfaces. In: Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS 2019, Valencia, Spain, June 18-21, 2019. pp. 13:1–13:7 (2019)
* [34] Yigitbas, E., Jovanovikj, I., Engels, G.: Simplifying robot programming using augmented reality and end-user development. In: Ardito, C., Lanzilotti, R., Malizia, A., Petrie, H., Piccinno, A., Desolda, G., Inkpen, K. (eds.) Human-Computer Interaction - INTERACT 2021 - 18th IFIP TC 13 International Conference, Bari, Italy, August 30 - September 3, 2021, Proceedings, Part I. Lecture Notes in Computer Science, vol. 12932, pp. 631–651. Springer (2021). https://doi.org/10.1007/978-3-030-85623-6_36, https://doi.org/10.1007/978-3-030-85623-6_36
* [35] Yigitbas, E., Jovanovikj, I., Sauer, S., Engels, G.: On the development of context-aware augmented reality applications. In: Abdelnour-Nocera, J.L., Parmaxi, A., Winckler, M., Loizides, F., Ardito, C., Bhutkar, G., Dannenmann, P. (eds.) Beyond Interactions - INTERACT 2019 IFIP TC 13 Workshops, Paphos, Cyprus, September 2-6, 2019, Revised Selected Papers. Lecture Notes in Computer Science, vol. 11930, pp. 107–120. Springer (2019). https://doi.org/10.1007/978-3-030-46540-7_11, https://doi.org/10.1007/978-3-030-46540-7_11
* [36] Yigitbas, E., Jovanovikj, I., Scholand, J., Engels, G.: VR training for warehouse management. In: Teather, R.J., Joslin, C., Stuerzlinger, W., Figueroa, P., Hu, Y., Batmaz, A.U., Lee, W., Ortega, F.R. (eds.) VRST ’20: 26th ACM Symposium on Virtual Reality Software and Technology. pp. 78:1–78:3. ACM (2020)
* [37] Yigitbas, E., Karakaya, K., Jovanovikj, I., Engels, G.: Enhancing human-in-the-loop adaptive systems through digital twins and VR interfaces. In: 16th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, SEAMS@ICSE 2021, Madrid, Spain, May 18-24, 2021. pp. 30–40. IEEE (2021). https://doi.org/10.1109/SEAMS51251.2021.00015, https://doi.org/10.1109/SEAMS51251.2021.00015
* [38] Yigitbas, E., Klauke, J., Gottschalk, S., Engels, G.: VREUD - an end-user development tool to simplify the creation of interactive VR scenes. CoRR abs/2107.00377 (2021), https://arxiv.org/abs/2107.00377
* [39] Yigitbas, E., Sauer, S.: Engineering context-adaptive uis for task-continuous cross-channel applications. In: Human-Centered and Error-Resilient Systems Development - IFIP WG 13.2/13.5 Joint Working Conference. pp. 281–300 (2016)
* [40] Yigitbas, E., Sauer, S., Engels, G.: Using augmented reality for enhancing planning and measurements in the scaffolding business. In: EICS ’21: ACM SIGCHI Symposium on Engineering Interactive Computing Systems, virtual, June 8-11, 2021. ACM (2021), https://doi.org/10.1145/3459926.3464747
* [41] Yigitbas, E., Tejedor, C.B., Engels, G.: Experiencing and programming the ENIAC in VR. In: Alt, F., Schneegass, S., Hornecker, E. (eds.) Mensch und Computer 2020. pp. 505–506. ACM (2020)
# Groupoidal 2-quasi-categories and homotopy 2-types
Victor Brittes
(Date: 28 November 2022)
###### Abstract.
We define a notion of groupoidal 2-quasi-categories and show that they are the
fibrant objects of a model structure on the category of $\Theta_{2}$-sets. We
show that this model category is Quillen equivalent to the Kan-Quillen model
category of simplicial sets and that 2-truncated groupoidal 2-quasi-categories
are models for homotopy 2-types.
###### Key words and phrases:
2-quasi-categories, 2-quasi-groupoids, 2-groupoids, homotopy 2-types
###### 2020 Mathematics Subject Classification:
18N10, 18N40, 18N55, 18N65, 55P15
###### Contents
1. 1 Localisation of model category structures
2. 2 2-quasi-categories
3. 3 2-quasi-groupoids
4. 4 Campbell’s nerve for bicategories and 2-truncated 2-quasi-categories
5. 5 2-truncated 2-quasi-groupoids vs 2-groupoids
6. 6 Homotopy 2-types
## Introduction
One of the motivations for the development of higher category theory comes
from algebraic topology. To fully encode the homotopy data of a given space,
one is led to consider mathematical structures formed by objects, morphisms,
2-morphisms between morphisms, 3-morphisms between 2-morphisms, and so on, in which
notions such as associativity and invertibility of morphisms are only defined
up to invertible higher morphisms. In this philosophy, an
$(\infty,n)$-category would be a higher category where all morphisms of
dimension $>n$ are invertible. Many models have been proposed for
$(\infty,n)$-categories, and comparisons of such models can be found in [BR13]
and [BR20]. An object modeling $(\infty,n)$-categories is groupoidal if all
$k$-morphisms $(1\leq k\leq n)$ are also (weakly) invertible, in an
appropriate sense. By considering $n$-truncated versions of these groupoidal
objects – where, roughly speaking, the data of $k$-morphisms for $k>n$ is
trivial – we expect to obtain a model for homotopy $n$-types.
For example, in the case $n=1$, one of the several models for
$(\infty,1)$-categories are quasi-categories, which were introduced by
Boardman and Vogt in the 1970s and whose theory was further developed by
mathematicians such as Joyal and Lurie. Quasi-categories are simplicial sets
satisfying a certain lifting condition, and can also be described as fibrant-
cofibrant objects of a model structure on the category of simplicial sets. An
important result of Joyal is that quasi-categories where all morphisms are
invertible – which we may call groupoidal – are precisely Kan complexes. With
the notion of truncation for quasi-categories presented in [Joy08a, CL20],
1-truncated groupoidal quasi-categories are exactly homotopy 1-types.
The aim of this short paper is to study similar notions in the case of
2-quasi-categories. In [Ara14], Ara introduces $n$-quasi-categories as models
for $(\infty,n)$-categories. These are the fibrant-cofibrant objects of a
model category structure on the category of $n$-cellular sets (i.e., functors
$\Theta_{n}^{\operatorname{op}}\to\operatorname{\mathbf{Set}}$). The category
$\Theta_{n}$ plays the role of a generalisation of the simplex category
$\Delta$, making it possible to define notions of non-invertible higher morphisms.
Given a 2-quasi-category, one can consider its underlying quasi-category and
quasi-categories of morphisms between any two objects, as described in
[Cam20]. We propose a definition of groupoidal 2-quasi-categories as 2-quasi-
categories which are locally Kan complexes (or $\infty$-groupoids), and whose
underlying quasi-category is a Kan complex. We shall call these objects
2-quasi-groupoids, for short. By describing 2-quasi-groupoids as local objects
in Ara’s model structure for 2-quasi-categories, we obtain, starting from a
theorem of [Cam20], the following result (Theorem 3.14):
###### Theorem A.
There is a Quillen equivalence between the Kan-Quillen model structure on
simplicial sets and a model structure on the category of 2-cellular sets whose
fibrant-cofibrant objects are 2-quasi-groupoids.
Then, we consider 2-truncated 2-quasi-groupoids (i.e., 2-quasi-groupoids which
are 2-truncated 2-quasi-categories in the sense of [Cam20]). Using, once
again, the theory of localisation of model category structures, we show
(Theorem 5.7) that
###### Theorem B.
There is a Quillen equivalence between a model structure on the category of
2-cellular sets whose fibrant-cofibrant objects are 2-truncated 2-quasi-
groupoids and Moerdijk-Svensson’s model structure on the category of (strict)
2-groupoids, described in [MS93].
The link to homotopy 2-types is made using a result of [MS93] which provides a
Quillen equivalence between 2-groupoids and homotopy 2-types.
We may summarize the discussion above by placing each model in the framework
of (weak) $(r,n)$-categories: higher categories where all $k$-morphisms are
invertible for $k>n$ and trivial for $k>r$. Models for general (weak)
$(r,n)$-categories can be found in [Rez10].
General concept | Model
---|---
$(\infty,2)$-categories | 2-quasi-categories
$(\infty,0)$-categories = $\infty$-groupoids = spaces | 2-quasi-groupoids
$(2,2)$-categories = weak 2-categories | 2-truncated 2-quasi-categories
$(2,0)$-categories = weak 2-groupoids | 2-truncated 2-quasi-groupoids
It should be possible to generalize to $n\geq 3$ the comparison between
homotopy $n$-types and $n$-truncated groupoidal $n$-quasi-categories, for a
suitable notion of the latter. However, other methods are necessary, since it
is known that strict 3-groupoids do not model all homotopy 3-types (cf.
[Sim11, 2.7]). This generalization is the subject of future work by the
author.
Organisation of the paper. In the first preliminary section, we recall some
aspects of the theory of localisation of model structures, our main technical
tool. In section 2, we briefly recall the definition of the category
$\Theta_{2}$ and of 2-quasi-categories. The notion of 2-quasi-groupoid is
presented in the third section, where it is also proved that 2-quasi-groupoids
are models for spaces (Theorem A). Section 4 is a recollection of some results
of [Cam20] concerning the homotopy coherent nerve and 2-truncated 2-quasi-
categories, which will allow us to prove the equivalence between 2-truncated
2-quasi-groupoids and 2-groupoids (Theorem B) in section 5. The last section
presents Moerdijk-Svensson’s comparison between 2-groupoids and homotopy
2-types.
Acknowledgement. We would like to thank Muriel Livernet and Clemens Berger for
the helpful and insightful conversations. This work is part of my Master’s
thesis written under their supervision. We would also like to highlight the
importance of Alexander Campbell’s article [Cam20] for the ideas involved in
the proofs of the main theorems.
Notation. If $\mathcal{A}$ is a small category, we denote by
$\widehat{\mathcal{A}}$ the category of presheaves of sets on $\mathcal{A}$,
i.e., of functors
$\mathcal{A}^{\operatorname{op}}\to\operatorname{\mathbf{Set}}$. If $X$ is an
object of $\widehat{\mathcal{A}}$ and $a$ is an object of $\mathcal{A}$, we
write $X_{a}$ for the set $X(a)$. The Yoneda embedding
$a\mapsto\mathcal{A}[a]:=\operatorname{Hom}_{\mathcal{A}}(-,a)$ defines a
fully faithful functor $\mathcal{A}\to\widehat{\mathcal{A}}$. If $f:a\to b$ is
a morphism in $\mathcal{A}$, we still denote by
$f:\mathcal{A}[a]\to\mathcal{A}[b]$ its image under the Yoneda embedding.
When representing an adjunction by
$F:\mathcal{C}\rightleftarrows\mathcal{D}:G$, the functor $F$ is left adjoint
to the functor $G$.
## 1\. Localisation of model category structures
We recall some notions and results about the localisation of model categories,
mainly following the appendix A of [CL20]. A complete reference is [Hir03].
###### 1.1.
Let $(\mathcal{M},\mathsf{Cof},\mathsf{W},\mathsf{Fib})$ be a model category
structure on a category $\mathcal{M}$. A model category structure
$(\mathcal{M},\mathsf{Cof}_{\operatorname{loc}},\mathsf{W}_{\operatorname{loc}},\mathsf{Fib}_{\operatorname{loc}})$
on $\mathcal{M}$ is a (left) Bousfield localisation of
$(\mathcal{M},\mathsf{Cof},\mathsf{W},\mathsf{Fib})$ if
$\mathsf{Cof}_{\operatorname{loc}}=\mathsf{Cof}$ and
$\mathsf{W}\subset\mathsf{W}_{\operatorname{loc}}$. When studying a model
category structure and a given localisation, we shall write $\mathcal{M}$ and
$\mathcal{M}_{\operatorname{loc}}$ to refer to the original model structure
and to its localisation, respectively. We shall also call local fibration
(resp. local fibrant object, resp. local weak equivalence) a fibration (resp.
fibrant object, resp. weak equivalence) of $\mathcal{M}_{\operatorname{loc}}$.
Since the cofibrations are unchanged, a Bousfield localisation is completely
determined by its fibrant objects, i.e., the local fibrant objects. It is
useful to know that a morphism
between local fibrant objects is a weak equivalence (resp. fibration) in
$\mathcal{M}$ if and only if it is a local weak equivalence (resp. local
fibration).
###### 1.2.
A Quillen adjunction $F:\mathcal{M}\rightleftarrows\mathcal{N}:G$ is said to
be a homotopy reflection if the right-derived functor
$\mathbb{R}G:\operatorname{Ho}(\mathcal{N})\to\operatorname{Ho}(\mathcal{M})$
is fully faithful. For example, if $\mathcal{M}_{\operatorname{loc}}$ is a
Bousfield localisation of $\mathcal{M}$, then the adjunction
$\operatorname{id}_{\mathcal{M}}:\mathcal{M}\rightleftarrows\mathcal{M}_{\operatorname{loc}}:\operatorname{id}_{\mathcal{M}}$
is a homotopy reflection (see [JT07, Proposition 7.19]). Homotopy reflections
were originally introduced in [Dug01], where they were called homotopically
surjective maps of model categories.
There is a criterion to know whether a homotopy reflection
$F:\mathcal{M}\rightleftarrows\mathcal{N}:G$ yields a Quillen equivalence
after localisation of $\mathcal{M}$.
###### Theorem 1.3.
Let $F:\mathcal{M}\rightleftarrows\mathcal{N}:G$ be a Quillen adjunction
between model categories and $\mathcal{M}_{\operatorname{loc}}$ be a Bousfield
localisation of $\mathcal{M}$.
(1) The adjunction
$F:\mathcal{M}_{\operatorname{loc}}\rightleftarrows\mathcal{N}:G$ is a Quillen
adjunction if and only if $G$ sends every fibrant object of $\mathcal{N}$ to a
fibrant object of $\mathcal{M}_{\operatorname{loc}}$.
(2) Suppose that $F:\mathcal{M}\rightleftarrows\mathcal{N}:G$ is a homotopy
reflection. The adjunction
$F:\mathcal{M}_{\operatorname{loc}}\rightleftarrows\mathcal{N}:G$ is a Quillen
equivalence if and only if it is a Quillen adjunction and for every fibrant-
cofibrant object $X$ of $\mathcal{M}_{\operatorname{loc}}$, there is a fibrant
object $Y$ of $\mathcal{N}$ and a weak equivalence $X\to G(Y)$ in
$\mathcal{M}$.
###### Proof.
(1) The condition is necessary since right Quillen functors preserve fibrant
objects. To prove that it is sufficient, it is enough to show that if $G$
sends fibrant objects of $\mathcal{N}$ to fibrant objects of
$\mathcal{M}_{\operatorname{loc}}$, then
$F:\mathcal{M}_{\operatorname{loc}}\to\mathcal{N}$ preserves acyclic
cofibrations.
Let $i$ be an acyclic cofibration of $\mathcal{M}_{\operatorname{loc}}$. We
want to show that the cofibration $F(i)$ is acyclic. By [JT07, Lemma 7.14], it
is the case if and only if $F(i)$ has the left lifting property with respect
to all fibrations between fibrant objects. Let $p:X\to Y$ be such a fibration.
By adjunction, we know that $F(i)$ has the left lifting property with respect
to $p$ if and only if $i$ has the left lifting property with respect to
$G(p)$. But $G(p)$ is a fibration of $\mathcal{M}$ (since
$G:\mathcal{N}\to\mathcal{M}$ is right Quillen) between local fibrant objects
(by the hypothesis), thus a fibration of $\mathcal{M}_{\operatorname{loc}}$.
Therefore, the lift exists since $i$ is an acyclic cofibration of
$\mathcal{M}_{\operatorname{loc}}$.
(2) See [CL20, Theorem A.14]. ∎
###### 1.4.
Given a model category $\mathcal{M}$ and two objects $X,Y$ of $\mathcal{M}$,
we can consider the homotopy mapping space
$\underline{\operatorname{Ho}\mathcal{M}}(X,Y)$, which is the image of the
pair $(X,Y)$ by the functor
$\underline{\operatorname{Ho}\mathcal{M}}:\operatorname{Ho}(\mathcal{M})^{\operatorname{op}}\times\operatorname{Ho}(\mathcal{M})\to\operatorname{Ho}(\widehat{\Delta})$
induced by
$\operatorname{Hom}_{\mathcal{M}}:\mathcal{M}^{\operatorname{op}}\times\mathcal{M}\to\operatorname{\mathbf{Set}}$
(cf. [Ara14, A.2]). When writing $\operatorname{Ho}(\widehat{\Delta})$, we
always consider the Kan-Quillen model structure on the category of simplicial
sets.
Let $S$ be a class of morphisms of $\mathcal{M}$. An object $X$ of
$\mathcal{M}$ is $S$-local (or local with respect to $S$) if for every
morphism $f:A\to B$ of $S$, the induced map
$\underline{\operatorname{Ho}\mathcal{M}}(f,X):\underline{\operatorname{Ho}\mathcal{M}}(B,X)\to\underline{\operatorname{Ho}\mathcal{M}}(A,X)$
is an isomorphism (in the homotopy category
$\operatorname{Ho}(\widehat{\Delta})$).
A morphism $f:A\to B$ of $\mathcal{M}$ is an $S$-equivalence if for every
$S$-local object $X$, the induced map
$\underline{\operatorname{Ho}\mathcal{M}}(f,X):\underline{\operatorname{Ho}\mathcal{M}}(B,X)\to\underline{\operatorname{Ho}\mathcal{M}}(A,X)$
is an isomorphism (in the homotopy category
$\operatorname{Ho}(\widehat{\Delta})$).
If there is a Bousfield localisation $\mathcal{M}_{\operatorname{loc}}$ of
$\mathcal{M}$ whose local fibrant objects are the $S$-local objects and whose
weak equivalences are the $S$-equivalences, we say that
$\mathcal{M}_{\operatorname{loc}}$ is a (Bousfield) localisation of
$\mathcal{M}$ with respect to $S$, and we denote it by $L_{S}\mathcal{M}$.
###### Example 1.5.
The Kan-Quillen model category structure on simplicial sets is a localisation
of Joyal’s model structure with respect to the morphism
$\Delta[1]\to\Delta[0]$, cf. [CL20, Proposition 3.30].
We will state a result, due to Smith, about the existence of the localisation
of a model category with respect to a certain set of morphisms. Before, let us
recall some definitions. A model category is left proper if the pushout of
every weak equivalence along a cofibration is a weak equivalence. A model
category where all objects are cofibrant is left proper (see [Hir03, Corollary
13.1.3]). A model category is combinatorial if it is cofibrantly generated and
locally presentable.
###### Theorem 1.6.
Let $\mathcal{M}$ be a left proper and combinatorial model category. Let $S$
be a set of morphisms of $\mathcal{M}$. Then the localisation
$L_{S}\mathcal{M}$ of $\mathcal{M}$ with respect to $S$ exists and is left
proper and combinatorial.
###### Proof.
See [Bar10, Theorem 4.7]. ∎
###### Remark 1.7.
If $F:\mathcal{M}\rightleftarrows\mathcal{N}:G$ is a Quillen adjunction
between model categories, the induced adjunction between the homotopy
categories is usually denoted by
$\mathbb{L}F:\operatorname{Ho}(\mathcal{M})\rightleftarrows\operatorname{Ho}(\mathcal{N}):\mathbb{R}G$.
In what follows, we will abuse language and also denote by $\mathbb{L}F$ the
functor $\mathbb{L}F:=FQ:\mathcal{M}\to\mathcal{N}$, where $Q$ is a fixed
functorial cofibrant replacement in the model category $\mathcal{M}$. In all
the applications presented in this paper, all objects of $\mathcal{M}$ will be
cofibrant, and so we shall take $Q=\operatorname{id}_{\mathcal{M}}$.
We can transfer localisations of model structures along Quillen adjunctions.
###### Proposition 1.8.
Let $F:\mathcal{M}\rightleftarrows\mathcal{N}:G$ be a Quillen adjunction
between model categories $\mathcal{M}$ and $\mathcal{N}$. Let $S$ be a class
of morphisms of $\mathcal{M}$. A fibrant object $Y$ of $\mathcal{N}$ is
$\mathbb{L}F(S)$-local if and only if $G(Y)$ is $S$-local.
###### Proof.
See [Hir03, Proposition 3.1.12]. ∎
###### Theorem 1.9.
Let $F:\mathcal{M}\rightleftarrows\mathcal{N}:G$ be a Quillen adjunction
between model categories $\mathcal{M}$ and $\mathcal{N}$. Let $S$ be a class
of morphisms of $\mathcal{M}$. If the localisations $L_{S}\mathcal{M}$ and
$L_{\mathbb{L}F(S)}\mathcal{N}$ exist, then
$F:L_{S}\mathcal{M}\rightleftarrows L_{\mathbb{L}F(S)}\mathcal{N}:G$
is a Quillen adjunction between the localised model categories.
Moreover, if $F:\mathcal{M}\rightleftarrows\mathcal{N}:G$ is a Quillen
equivalence, then so is $F:L_{S}\mathcal{M}\rightleftarrows
L_{\mathbb{L}F(S)}\mathcal{N}:G$.
###### Proof.
See [Hir03, Proposition 3.3.20]. ∎
We end this section by stating a proposition that allows us to understand
successive localisations of a model category.
###### Lemma 1.10.
Let $\mathcal{M}$ be a model category and $\mathcal{M}_{\operatorname{loc}}$
be a Bousfield localisation of $\mathcal{M}$. For every object $X$ and every
local fibrant object $Y$ of $\mathcal{M}$, the homotopy mapping spaces
$\underline{\operatorname{Ho}\mathcal{M}}(X,Y)$ and
$\underline{\operatorname{Ho}\mathcal{M}_{\operatorname{loc}}}(X,Y)$ are
naturally isomorphic in $\operatorname{Ho}(\widehat{\Delta})$.
###### Proof.
See [Ara14, Lemma A.4]. ∎
###### Proposition 1.11.
Let $\mathcal{M}$ be a model category and $S,T$ be two classes of morphisms of
$\mathcal{M}$. Suppose that the localisations $L_{S}\mathcal{M}$,
$L_{T}\mathcal{M}$, $L_{T}L_{S}\mathcal{M}$, $L_{S}L_{T}\mathcal{M}$ and
$L_{S\cup T}\mathcal{M}$ exist. Then the model categories
$L_{T}L_{S}\mathcal{M}$, $L_{S}L_{T}\mathcal{M}$ and $L_{S\cup T}\mathcal{M}$
are the same.
###### Proof.
Since a model structure is completely determined by its cofibrations and
fibrant objects (cf. [Joy08b, Proposition E.1.10]) and the three model
structures under consideration have the same cofibrations, it is sufficient to
show that they have
the same fibrant objects. Let $X$ be an object of $\mathcal{M}$. We claim that
the following assertions are equivalent:
1. (1)
$X$ is a $T$-local object of $L_{S}\mathcal{M}$
2. (2)
$X$ is an $S$-local object of $L_{T}\mathcal{M}$
3. (3)
$X$ is an $(S\cup T)$-local object of $\mathcal{M}$
We will show that $(1)\Leftrightarrow(3)$. The equivalence
$(2)\Leftrightarrow(3)$ follows by exchanging the roles of $S$ and $T$.
$(1)\Rightarrow(3)$ Suppose that $X$ is a $T$-local object of
$L_{S}\mathcal{M}$. We have to show that, for every morphism $f:A\to B$ in
$S\cup T$, the map
$\underline{\operatorname{Ho}\mathcal{M}}(f,X):\underline{\operatorname{Ho}\mathcal{M}}(B,X)\to\underline{\operatorname{Ho}\mathcal{M}}(A,X)$
is an isomorphism. This is true if $f\in S$, since $X$ is fibrant in
$L_{T}L_{S}\mathcal{M}$, so it is in particular fibrant in $L_{S}\mathcal{M}$,
which means it is $S$-local in $\mathcal{M}$. If $f\in T$, we consider the
commutative square
$\begin{array}{ccc}\underline{\operatorname{Ho}\mathcal{M}}(B,X)&\xrightarrow{\ \underline{\operatorname{Ho}\mathcal{M}}(f,X)\ }&\underline{\operatorname{Ho}\mathcal{M}}(A,X)\\ \cong\big\downarrow&&\big\downarrow\cong\\ \underline{\operatorname{Ho}L_{S}\mathcal{M}}(B,X)&\xrightarrow{\ \underline{\operatorname{Ho}L_{S}\mathcal{M}}(f,X)\ }&\underline{\operatorname{Ho}L_{S}\mathcal{M}}(A,X)\end{array}$
where the isomorphisms are given by Lemma 1.10. The bottom arrow is an
isomorphism, since $X$ is $T$-local in $L_{S}\mathcal{M}$ by assumption, and
thus the top arrow is also an isomorphism, as desired.
$(3)\Rightarrow(1)$ Let $X$ be an $(S\cup T)$-local object of $\mathcal{M}$.
Let $f:A\to B$ be a morphism in the class $T$. We have to show that the map
$\underline{\operatorname{Ho}L_{S}\mathcal{M}}(f,X):\underline{\operatorname{Ho}L_{S}\mathcal{M}}(B,X)\to\underline{\operatorname{Ho}L_{S}\mathcal{M}}(A,X)$
is an isomorphism. Since $X$ is $(S\cup T)$-local in $\mathcal{M}$, it is in
particular $S$-local in $\mathcal{M}$, and so it is a fibrant object of
$L_{S}\mathcal{M}$. Therefore, we can use Lemma 1.10 to consider a commutative
square as above. The top arrow is an isomorphism since $X$ is $T$-local in
$\mathcal{M}$, which implies that the bottom arrow is also an isomorphism. ∎
## 2\. 2-quasi-categories
Ara introduces in [Ara14] $n$-quasi-categories as models for
$(\infty,n)$-categories. These are presheaves on the category $\Theta_{n}$
which are fibrant objects for a certain model category structure on
$\widehat{\Theta_{n}}$. Here, we consider the case $n=2$.
###### 2.1.
We denote by $\Delta$ the category of finite ordinals $[n]=\\{0<1<\ldots<n\\}$
and non-decreasing maps between them. There is a fully faithful inclusion
$\Delta\to\operatorname{\mathbf{Cat}}$, where we see each object of $\Delta$
as the category associated to the respective ordered set.
The morphisms of $\Delta$ are generated by the face and degeneracy maps. Face
maps are the maps $\delta^{i}:[n]\to[n+1]$ skipping $i$, for $0\leq i\leq
n+1$. Degeneracy maps are the maps $\sigma^{i}:[n+1]\to[n]$ sending both $i$
and $i+1$ to $i$, for $0\leq i\leq n$.
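As an illustrative sketch (ours, not part of the paper), the face and degeneracy maps can be encoded as functions on the underlying sets of the ordinals, and a simplicial identity such as $\sigma^{i}\delta^{i}=\operatorname{id}_{[n]}$ checked pointwise; the function names below are our own.

```python
# Illustrative sketch (ours): the face and degeneracy maps of Delta,
# acting on the underlying sets {0, 1, ..., n} of the ordinals [n].

def delta(i):
    """Face map delta^i: [n] -> [n+1], the monotone injection whose image omits i."""
    return lambda k: k if k < i else k + 1

def sigma(i):
    """Degeneracy map sigma^i: [n+1] -> [n], sending both i and i+1 to i."""
    return lambda k: k if k <= i else k - 1

# On [2], delta^1 hits 0, 2, 3 (it skips 1) ...
assert [delta(1)(k) for k in range(3)] == [0, 2, 3]
# ... and the simplicial identity sigma^1 . delta^1 = id_[2] holds pointwise.
assert all(sigma(1)(delta(1)(k)) == k for k in range(3))
```

Both maps are monotone by construction, matching the description of $\Delta$ as the category of finite ordinals and non-decreasing maps.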
###### 2.2.
We introduce the category $\Theta_{2}$ as a wreath product $\Delta\wr\Delta$,
as first presented by Berger in [Ber07]. The objects of $\Theta_{2}$ are lists
$[n;\mathbf{q}]=[n;q_{1},\ldots,q_{n}]$, where $n,q_{1},\ldots,q_{n}$ are non-
negative integers. A morphism
$[\alpha,\boldsymbol{\alpha}]:[m;\mathbf{p}]\to[n;\mathbf{q}]$ is the data of
a morphism $\alpha:[m]\to[n]$ in $\Delta$ and of a morphism
$\alpha_{j}:[p_{i}]\to[q_{j}]$ in $\Delta$ for every
$\alpha(i-1)<j\leq\alpha(i)$, $1\leq i\leq m$.
We shall think of $\Theta_{2}$ as a full subcategory of
$\operatorname{\mathbf{2-Cat}}$. An object $[n;\mathbf{q}]$ of $\Theta_{2}$ is
represented by the 2-category freely generated by the 2-graph with $n+1$
objects $0,1,\ldots,n$, an ordered set $\\{(j,0),\ldots,(j,q_{j})\\}$ of
1-arrows $j-1\to j$ for every $1\leq j\leq n$, and one 2-arrow between any two
consecutive 1-arrows between the same objects. For example, the 2-graph which
generates the object $[3;1,0,2]$ can be represented as below.
(The 2-graph has objects $0,1,2,3$, parallel 1-arrows $(1,0),(1,1):0\to 1$, a single 1-arrow $(2,0):1\to 2$, parallel 1-arrows $(3,0),(3,1),(3,2):2\to 3$, and a 2-arrow between each pair of consecutive parallel 1-arrows.)
A morphism $[\alpha,\boldsymbol{\alpha}]:[m;\mathbf{p}]\to[n;\mathbf{q}]$
corresponds to the 2-functor which sends each object $i$ of the 2-category
$[m;\mathbf{p}]$ to $\alpha(i)$, each 1-morphism $(i,k)$ to the composite
$(\alpha(i),\alpha_{\alpha(i)}(k))\circ\ldots\circ(\alpha(i-1)+1,\alpha_{\alpha(i-1)+1}(k))$
when $\alpha(i)>\alpha(i-1)$ and to $\operatorname{id}_{\alpha(i)}$ when
$\alpha(i)=\alpha(i-1)$, and each 2-morphism of $[m;\mathbf{p}]$ to the unique
possible 2-morphism of $[n;\mathbf{q}]$.
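As a quick combinatorial sketch (ours, not from the paper), the generating 2-graph of $[n;q_{1},\ldots,q_{n}]$ has $n+1$ objects, $\sum_{j}(q_{j}+1)$ generating 1-arrows and $\sum_{j}q_{j}$ generating 2-arrows; for the object $[3;1,0,2]$ discussed above this gives $4$, $6$ and $3$, respectively.

```python
# Sketch (ours): generator counts of the 2-graph presenting an object
# [n; q_1, ..., q_n] of Theta_2, following the description in the text.

def generators(q):
    """For [n; q] with q = (q_1, ..., q_n), return
    (#objects, #generating 1-arrows, #generating 2-arrows)."""
    n = len(q)
    objects = n + 1                       # the objects 0, 1, ..., n
    one_arrows = sum(qj + 1 for qj in q)  # (j,0), ..., (j,q_j) for each j
    two_arrows = sum(q)                   # one between consecutive parallel 1-arrows
    return objects, one_arrows, two_arrows

# The object [3; 1, 0, 2]: 4 objects, 6 generating 1-arrows
# ((1,0),(1,1),(2,0),(3,0),(3,1),(3,2)) and 1 + 0 + 2 = 3 generating 2-arrows.
assert generators((1, 0, 2)) == (4, 6, 3)
```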
###### 2.3.
We consider the category $\widehat{\Theta_{2}}$ of 2-cellular sets. The
representable 2-cellular sets are denoted by $\Theta_{2}[n;\mathbf{q}]$. The
boundary $\partial\Theta_{2}[n;\mathbf{q}]$ of a representable
$\Theta_{2}[n;\mathbf{q}]$ is the 2-cellular set generated by the
non-surjective morphisms $[m;\mathbf{p}]\to[n;\mathbf{q}]$ in $\Theta_{2}$. (A
morphism $[\alpha;\boldsymbol{\alpha}]:[m;\mathbf{p}]\to[n;\mathbf{q}]$ is
surjective if $\alpha:[m]\to[n]$ is surjective and all
$\alpha_{j}:[p_{i}]\to[q_{j}]$ are surjective.) We denote by
$\delta_{n;\mathbf{q}}$ the boundary inclusion
$\partial\Theta_{2}[n;\mathbf{q}]\to\Theta_{2}[n;\mathbf{q}]$.
###### 2.4.
There is a model category structure on $\widehat{\Theta_{2}}$, constructed in
[Ara14] by means of Cisinski’s theory of localizers, which we will call Ara’s
model structure for 2-quasi-categories. The cofibrations of this model
structure are the monomorphisms of 2-cellular sets (and so every object is
cofibrant). The fibrant objects are called 2-quasi-categories.
###### 2.5.
Let $F:\mathcal{A}\to\mathcal{B}$ be a functor from a small category
$\mathcal{A}$ to a category $\mathcal{B}$. The evaluation (or nerve) functor
associated to $F$ is the functor $F^{!}:\mathcal{B}\to\widehat{\mathcal{A}}$,
given by $F^{!}(b)_{a}=\operatorname{Hom}_{\mathcal{B}}(F(a),b)$ for every
$b\in\mathcal{B}$ and every $a\in\mathcal{A}$.
If $\mathcal{B}$ is cocomplete, the evaluation functor admits a left adjoint
$F_{!}:\widehat{\mathcal{A}}\to\mathcal{B}$, which is the left Kan extension
of $F$ along the Yoneda embedding $\mathcal{A}\to\widehat{\mathcal{A}}$.
###### 2.6.
The inclusion $\Theta_{2}\to\operatorname{\mathbf{2-Cat}}$ induces a nerve
functor $N_{2}:\operatorname{\mathbf{2-Cat}}\to\widehat{\Theta_{2}}$, which we
call the (strict) 2-nerve functor. Explicitly, for a 2-category $\mathcal{C}$
and $[n;\mathbf{q}]\in\Theta_{2}$, we have
$N_{2}(\mathcal{C})_{n;\mathbf{q}}=\operatorname{Hom}_{\operatorname{\mathbf{2-Cat}}}([n;\mathbf{q}],\mathcal{C})$
The strict 2-nerve functor is fully faithful, cf. [Ber02, Theorem 1.12].
The (strict 1-)nerve of a category is always a quasi-category. The analogous
statement is not true, in general, for 2-categories.
###### Proposition 2.7.
The strict 2-nerve of a 2-category $\mathcal{C}$ is a 2-quasi-category if and
only if the only invertible 2-morphisms of $\mathcal{C}$ are the identities.
###### Proof.
It follows from [Ara14, Proposition 7.10] in the case $n=2$. ∎
## 3\. 2-quasi-groupoids
In this section, we provide a definition of groupoidal 2-quasi-category, which
we call 2-quasi-groupoid. We show that there is a model category structure on
the category of 2-cellular sets such that the fibrant-cofibrant objects are
precisely the 2-quasi-groupoids. Moreover, we show that this model category
structure is Quillen equivalent to the Kan-Quillen model structure on
simplicial sets.
###### 3.1.
The inclusion $i:\operatorname{\mathbf{Cat}}\to\operatorname{\mathbf{2-Cat}}$
has a left adjoint
$t:\operatorname{\mathbf{2-Cat}}\to\operatorname{\mathbf{Cat}}$, called the
truncation functor, which sends a strict 2-category $\mathcal{C}$ to the
category $t(\mathcal{C})$ whose objects are the same as those of $\mathcal{C}$
and morphisms are equivalence classes of 1-morphisms of $\mathcal{C}$ with
respect to the equivalence relation freely generated by 2-morphisms. The
adjunction
$t:\operatorname{\mathbf{2-Cat}}\rightleftarrows\operatorname{\mathbf{Cat}}:i$
restricts to an adjunction
$t:\Theta_{2}\rightleftarrows\Delta:i$
Explicitly, we have $t([n;\mathbf{q}])=[n]$ and $i([n])=[n;0,\ldots,0]$.
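The object-level formulas can be made concrete with a small sketch (ours; we encode an object $[n;\mathbf{q}]$ as the tuple $\mathbf{q}$, so that $n$ is its length, and $[n]$ as the integer $n$):

```python
# Sketch (ours): the restricted adjunction t : Theta_2 <-> Delta : i on objects.
# An object [n; q_1, ..., q_n] is encoded as the tuple q; the ordinal [n] as n.

def t(q):
    """Truncation on objects: t([n; q_1, ..., q_n]) = [n]."""
    return len(q)

def i(n):
    """Inclusion on objects: i([n]) = [n; 0, ..., 0]."""
    return (0,) * n

# t . i is the identity on objects of Delta,
# as expected for the fully faithful inclusion i.
assert all(t(i(n)) == n for n in range(5))
# i([2]) = [2; 0, 0]: a 2-category with no non-identity 2-cells.
assert i(2) == (0, 0)
```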
The adjunction above yields an adjunction
$t^{*}:\widehat{\Delta}\rightleftarrows\widehat{\Theta_{2}}:i^{*}$
between the respective presheaf categories. Note that the functor $t^{*}$ is
isomorphic to the functor $i_{!}:\widehat{\Delta}\to\widehat{\Theta_{2}}$
which extends the functor
$\Delta\xrightarrow[]{i}\Theta_{2}\xrightarrow[]{}\widehat{\Theta_{2}}$ by
colimits.
###### Proposition 3.2.
The adjunction $(t^{*},i^{*})$ is a Quillen adjunction between the category of
simplicial sets with Joyal’s model structure and the category of 2-cellular
sets with Ara’s model structure for 2-quasi-categories.
###### Proof.
See [Cam20, Proposition 7.5]. ∎
###### 3.3.
Let $X$ be a 2-quasi-category. Proposition 3.2 implies that the simplicial set
$i^{*}(X)$ is a quasi-category. We call $i^{*}(X)$ the underlying quasi-
category of $X$.
###### 3.4.
The inclusion $i:\operatorname{\mathbf{Cat}}\to\operatorname{\mathbf{2-Cat}}$
also admits a right adjoint
$t_{r}:\operatorname{\mathbf{2-Cat}}\to\operatorname{\mathbf{Cat}}$, called
the right truncation. If $\mathcal{C}$ is a 2-category, then
$t_{r}(\mathcal{C})$ is the category whose objects and morphisms are the
objects and (1-)morphisms of $\mathcal{C}$. Using the adjunction $(i,t_{r})$
and the definition of the nerve functors, it is easy to see that the square
$\begin{array}{ccc}\operatorname{\mathbf{2-Cat}}&\xrightarrow{\ t_{r}\ }&\operatorname{\mathbf{Cat}}\\ N_{2}\big\downarrow&&\big\downarrow N\\ \widehat{\Theta_{2}}&\xrightarrow{\ i^{*}\ }&\widehat{\Delta}\end{array}$
is commutative (up to isomorphism).
###### 3.5.
Let $X$ be a 2-cellular set, and $x,y\in X_{0}$. Consider the simplicial set
$\operatorname{Hom}_{X}(x,y)$, whose set of $n$-simplices is given by the
pullback
$\begin{array}{ccc}\operatorname{Hom}_{X}(x,y)_{n}&\longrightarrow&X_{1;n}\\ \big\downarrow&&\big\downarrow\\ \\{*\\}&\xrightarrow{\ (x,y)\ }&X_{0}\times X_{0}\end{array}$
for $n\geq 0$, and face and degeneracy maps induced by
$X([\operatorname{id};\delta^{i}])$ and $X([\operatorname{id};\sigma^{i}])$,
respectively.
The association $(X,x,y)\mapsto\operatorname{Hom}_{X}(x,y)$ is actually part
of an adjunction between bipointed 2-cellular sets and simplicial sets:
$\Sigma:\widehat{\Delta}\rightleftarrows\partial\Theta_{2}[1;0]/\widehat{\Theta_{2}}:\operatorname{Hom}$
The left adjoint $\Sigma$ is obtained by left Kan extension along the Yoneda
embedding of the functor
$\Delta\to\partial\Theta_{2}[1;0]/\widehat{\Theta_{2}}$ which maps $[n]$ to
$(\Theta_{2}[1;n],0,1)$ for every $n\geq 0$.
###### Proposition 3.6.
The adjunction $(\Sigma,\operatorname{Hom})$ is a Quillen adjunction between
the category of simplicial sets with Joyal’s model structure and the category
of bipointed 2-cellular sets with the model structure induced by Ara’s model
structure for 2-quasi-categories.
###### Proof.
See [Cam20, Proposition 6.5]. ∎
###### 3.7.
Let $X$ be a 2-quasi-category. Proposition 3.6 implies that, for all $x,y\in
X_{0}$, the simplicial set $\operatorname{Hom}_{X}(x,y)$ is a quasi-category.
We call $\operatorname{Hom}_{X}(x,y)$ the hom-quasi-category between $x$ and
$y$. We say that $X$ is locally Kan if, for every $x,y\in X_{0}$, the hom-
quasi-category $\operatorname{Hom}_{X}(x,y)$ is a Kan complex.
###### Proposition 3.8.
A 2-quasi-category $X$ is locally Kan if and only if $X$ is local with respect
to $[\operatorname{id};\sigma^{0}]:\Theta_{2}[1;1]\to\Theta_{2}[1;0]$ in the
model structure for 2-quasi-categories.
###### Proof.
Let $\mathcal{M}$ be a model category and $C$ be a cofibrant object of
$\mathcal{M}$. Let $(A,a:C\to A)$ and $(B,b:C\to B)$ be two objects of
$C/\mathcal{M}$, and $f:A\to B$ be a morphism in $\mathcal{M}$ such that
$fa=b$ (that is, $f$ is a morphism in $C/\mathcal{M}$). One can show that a
fibrant object $X$ of $\mathcal{M}$ is local with respect to $f$ if and only
if, for every morphism $x:C\to X$, the object $(X,x)$ of $C/\mathcal{M}$ is
local with respect to $f$ in the induced model structure.
Taking $\mathcal{M}$ to be $\widehat{\Theta_{2}}$ with the model structure for
2-quasi-categories and $C$ to be $\partial\Theta_{2}[1;0]$, we get that a
2-quasi-category $X$ is local with respect to
$[\operatorname{id};\sigma^{0}]:\Theta_{2}[1;1]\to\Theta_{2}[1;0]$ if and only
if, for every $x,y\in X_{0}$, the object $(X,x,y)$ of
$\partial\Theta_{2}[1;0]/\widehat{\Theta_{2}}$ is local with respect to
$[\operatorname{id};\sigma^{0}]$. But
$[\operatorname{id};\sigma^{0}]=\Sigma(\sigma^{0}:\Delta[1]\to\Delta[0])$.
Thus, by applying Proposition 1.8 to the adjunction
$\Sigma:\widehat{\Delta}\rightleftarrows\partial\Theta_{2}[1;0]/\widehat{\Theta_{2}}:\operatorname{Hom}$,
we conclude that a 2-quasi-category $X$ is local with respect to
$[\operatorname{id};\sigma^{0}]$ if and only if for every $x,y\in X_{0}$, the
hom-quasi-category $\operatorname{Hom}_{X}(x,y)$ is local with respect to
$\Delta[1]\to\Delta[0]$, which is equivalent to saying that
$\operatorname{Hom}_{X}(x,y)$ is a Kan complex (cf. Example 1.5). ∎
Using the characterisation of locally Kan 2-quasi-categories as local objects
in Ara’s model structure, we can apply the existence theorem 1.6 to get a
model structure for locally Kan 2-quasi-categories.
###### Proposition 3.9.
There exists a model structure on the category of 2-cellular sets whose
cofibrations are monomorphisms, and whose fibrant objects are locally Kan
2-quasi-categories. This model structure is a Bousfield localisation of Ara’s
model structure with respect to
$[\operatorname{id};\sigma^{0}]:\Theta_{2}[1;1]\to\Theta_{2}[1;0]$.
###### Proof.
Ara’s model structure for 2-quasi-categories is left proper and combinatorial,
so we can apply Theorem 1.6 to
$S=\\{[\operatorname{id};\sigma^{0}]:\Theta_{2}[1;1]\to\Theta_{2}[1;0]\\}$. ∎
The following theorem of Campbell will be the starting point of the proof of
Theorem 3.14.
###### Theorem 3.10.
The adjunction
$t^{*}:\widehat{\Delta}\rightleftarrows\widehat{\Theta_{2}}:i^{*}$
is a Quillen equivalence between the category of simplicial sets with Joyal’s
model structure and the category of 2-cellular sets with the model structure
for locally Kan 2-quasi-categories.
###### Proof.
See [Cam20, Corollary 11.6]. ∎
We now have all the ingredients to define 2-quasi-groupoids. Recall that a
(strict) 2-groupoid is a 2-category where all 1-morphisms and 2-morphisms are
(strictly) invertible. Equivalently, a 2-category $\mathcal{C}$ is a
2-groupoid if and only if:
1. (1)
$\mathcal{C}$ is locally groupoidal, i.e., for every two objects $x,y$ of
$\mathcal{C}$, the hom-category $\mathcal{C}(x,y)$ is a groupoid.
Equivalently, for every two objects $x,y$ of $\mathcal{C}$, the quasi-category
$N(\mathcal{C}(x,y))\cong\operatorname{Hom}_{N_{2}(\mathcal{C})}(x,y)$ is a
Kan complex (or $\infty$-groupoid);
2. (2)
$t_{r}(\mathcal{C})$ is a groupoid, which amounts to saying that
$Nt_{r}(\mathcal{C})\cong i^{*}N_{2}(\mathcal{C})$ is a Kan complex.
Inspired by this characterisation, we give the following definition:
###### Definition 3.11.
A 2-quasi-category $X$ is a 2-quasi-groupoid if it is locally Kan and its
underlying quasi-category $i^{*}(X)$ is a Kan complex.
###### Proposition 3.12.
A 2-quasi-category $X$ is a 2-quasi-groupoid if and only if $X$ is local with
respect to $[\operatorname{id};\sigma^{0}]:\Theta_{2}[1;1]\to\Theta_{2}[1;0]$
and $[\sigma^{0}]:\Theta_{2}[1;0]\to\Theta_{2}[0]$ in the model structure for
2-quasi-categories.
###### Proof.
By Proposition 3.8, it suffices to show that $i^{*}(X)$ is a Kan complex if
and only if $X$ is local with respect to
$[\sigma^{0}]:\Theta_{2}[1;0]\to\Theta_{2}[0]$. This follows from Proposition
1.8 applied to the Quillen adjunction
$t^{*}:\widehat{\Delta}\rightleftarrows\widehat{\Theta_{2}}:i^{*}$ of
Proposition 3.2, by noticing that
$[\sigma^{0}]:\Theta_{2}[1;0]\to\Theta_{2}[0]=t^{*}(\Delta[1]\to\Delta[0])$
and that Kan complexes are precisely quasi-categories which are local with
respect to $\Delta[1]\to\Delta[0]$ in Joyal’s model structure. ∎
Exactly as in Proposition 3.9, we can get a model structure for 2-quasi-
groupoids.
###### Proposition 3.13.
There exists a model structure on the category of 2-cellular sets whose
cofibrations are monomorphisms, and whose fibrant objects are 2-quasi-
groupoids. This model structure is a Bousfield localisation of Ara’s model
structure with respect to
$[\operatorname{id};\sigma^{0}]:\Theta_{2}[1;1]\to\Theta_{2}[1;0]$ and
$[\sigma^{0}]:\Theta_{2}[1;0]\to\Theta_{2}[0]$. ∎
We finish this section by showing that this model structure is Quillen
equivalent to the Kan-Quillen model structure on simplicial sets.
###### Theorem 3.14.
The adjunction
$t^{*}:\widehat{\Delta}\rightleftarrows\widehat{\Theta_{2}}:i^{*}$
is a Quillen equivalence between the category of simplicial sets with the Kan-
Quillen model structure and the category of 2-cellular sets with the model
structure for 2-quasi-groupoids.
###### Proof.
By Theorem 3.10, the adjunction above is a Quillen equivalence between
the model structures for quasi-categories and for locally Kan 2-quasi-
categories. We know that the Kan-Quillen model structure is a localisation of
Joyal’s model structure with respect to $\Delta[1]\to\Delta[0]$. Applying
Theorem 1.9 to the adjunction in question, we obtain a Quillen equivalence
between the Kan-Quillen model structure and the localisation of the model
structure for locally Kan 2-quasi-categories with respect to
$\mathbb{L}t^{*}(\Delta[1]\to\Delta[0])=(\Theta_{2}[1;0]\to\Theta_{2}[0])$
(which exists by Theorem 1.6, since the model structure for locally Kan
2-quasi-categories is left proper and combinatorial, again by Theorem 1.6).
It remains to show that this localisation is precisely the model structure for
2-quasi-groupoids. This follows immediately by Propositions 1.11, 3.9 and
3.13. ∎
## 4\. Campbell’s nerve for bicategories and 2-truncated 2-quasi-categories
We have seen in Proposition 2.7 that, unlike the case of quasi-categories, the
strict 2-nerve of a 2-category is not always a 2-quasi-category. In [Cam20] is
built a homotopy coherent nerve
$N_{h}:\operatorname{\mathbf{Bicat}}\to\widehat{\Theta_{2}}$, with the
property that $N_{h}(\mathcal{C})$ is a 2-quasi-category for every bicategory
$\mathcal{C}$. In this section, we recall the definition of the nerve $N_{h}$
and how it induces a Quillen equivalence between Lack’s model structure for
bicategories (described in [Lac04]) and a model structure for 2-truncated
2-quasi-categories (see Definition 4.9).
All the results of this section are due to Campbell. We present some proofs in
order to show that the techniques used in section 5 are the same as those of
[Cam20].
###### 4.1.
We will write $\operatorname{\mathbf{Bicat}}$ for the category of bicategories
and normal pseudo-functors (i.e., pseudo-functors which preserve identities
strictly and preserve composition of 1-morphisms up to an invertible
2-morphism) and $\operatorname{\mathbf{Bicat}}_{s}$ for the category of
bicategories and strict functors. Recall that $\operatorname{\mathbf{2-Cat}}$
is the category of (strict) 2-categories and (strict) 2-functors. We have
inclusions:
$\operatorname{\mathbf{2-Cat}}\to\operatorname{\mathbf{Bicat}}_{s}\to\operatorname{\mathbf{Bicat}}$
where only the first one is fully faithful. Both inclusions have left adjoints,
which we denote by
$\lambda:\operatorname{\mathbf{Bicat}}_{s}\to\operatorname{\mathbf{2-Cat}}$
and $Q:\operatorname{\mathbf{Bicat}}\to\operatorname{\mathbf{Bicat}}_{s}$. We
write $\operatorname{st}$ for the composite functor
$\operatorname{st}:=\lambda
Q:\operatorname{\mathbf{Bicat}}\to\operatorname{\mathbf{2-Cat}}$.
An important feature of the categories $\operatorname{\mathbf{2-Cat}}$ and
$\operatorname{\mathbf{Bicat}}_{s}$ is that they are both complete and
cocomplete, which is not the case for $\operatorname{\mathbf{Bicat}}$ (it is
neither complete nor cocomplete).
###### Definition 4.2.
The homotopy coherent nerve
$N_{h}:\operatorname{\mathbf{Bicat}}\to\widehat{\Theta_{2}}$ is the nerve
functor (cf. §2.5) associated to the inclusion
$\Theta_{2}\to\operatorname{\mathbf{2-Cat}}\to\operatorname{\mathbf{Bicat}}$.
As for the strict 2-nerve, the homotopy coherent nerve
$N_{h}:\operatorname{\mathbf{Bicat}}\to\widehat{\Theta_{2}}$ is fully faithful
[Cam20, Theorem 3.18]. We also denote by $N_{h}$ the restriction of this
functor along the inclusions
$\operatorname{\mathbf{2-Cat}}\to\operatorname{\mathbf{Bicat}}_{s}\to\operatorname{\mathbf{Bicat}}$.
Note that $N_{h}:\operatorname{\mathbf{2-Cat}}\to\widehat{\Theta_{2}}$ is
still faithful, but is no longer full.
Explicitly, for a 2-category $\mathcal{C}$ and $[n;\mathbf{q}]\in\Theta_{2}$,
we have
$N_{h}(\mathcal{C})_{n;\mathbf{q}}=\operatorname{Hom}_{\operatorname{\mathbf{Bicat}}}([n;\mathbf{q}],\mathcal{C})\cong\operatorname{Hom}_{\operatorname{\mathbf{Bicat}}_{s}}(Q[n;\mathbf{q}],\mathcal{C})\cong\operatorname{Hom}_{\operatorname{\mathbf{2-Cat}}}(\operatorname{st}[n;\mathbf{q}],\mathcal{C})$
###### 4.3.
The restriction of the nerve
$N_{h}:\operatorname{\mathbf{Bicat}}\to\widehat{\Theta_{2}}$ to
$\operatorname{\mathbf{Bicat}}_{s}$ is isomorphic to the nerve functor
$\operatorname{\mathbf{Bicat}}_{s}\to\widehat{\Theta_{2}}$ induced by the
composition of the inclusion $\Theta_{2}\to\operatorname{\mathbf{Bicat}}$ with
the functor
$Q:\operatorname{\mathbf{Bicat}}\to\operatorname{\mathbf{Bicat}}_{s}$. Since
$\operatorname{\mathbf{Bicat}}_{s}$ is a cocomplete category, we have (see
§2.5) that $N_{h}:\operatorname{\mathbf{Bicat}}_{s}\to\widehat{\Theta_{2}}$ is
the right functor of an adjunction
$\tau_{b}:\widehat{\Theta_{2}}\rightleftarrows\operatorname{\mathbf{Bicat}}_{s}:N_{h}$
where $\tau_{b}$ is obtained by left Kan extension of
$Q:\Theta_{2}\to\operatorname{\mathbf{Bicat}}_{s}$ along the Yoneda embedding.
###### 4.4.
A pseudo-functor $F:\mathcal{C}\to\mathcal{D}$ between two bicategories is a
biequivalence if
1. (1)
$F$ is biessentially surjective on objects (i.e., if for every object $y$ of
$\mathcal{D}$, there is an object $x$ of $\mathcal{C}$ such that $F(x)$ is
equivalent to $y$; recall that a morphism $f:x\to y$ in a bicategory is an
equivalence if there exists a morphism $g:y\to x$ together with invertible
2-morphisms $\alpha:gf\Rightarrow\operatorname{id}_{x}$ and
$\beta:fg\Rightarrow\operatorname{id}_{y}$, and that two objects are
equivalent if there is an equivalence between them), and
2. (2)
$F$ is locally an equivalence of categories (i.e., if for all objects
$x,x^{\prime}\in\mathcal{C}$, the functor
$F_{x,x^{\prime}}:\operatorname{Hom}_{\mathcal{C}}(x,x^{\prime})\to\operatorname{Hom}_{\mathcal{D}}(Fx,Fx^{\prime})$
is an equivalence of categories).
The categories $\operatorname{\mathbf{Bicat}}_{s}$ and
$\operatorname{\mathbf{2-Cat}}$ are endowed with model category structures,
described in [Lac02] and [Lac04], whose weak equivalences are biequivalences
and for which all objects are fibrant. The model structure on
$\operatorname{\mathbf{2-Cat}}$ is right-transferred by the one on
$\operatorname{\mathbf{Bicat}}_{s}$ via the adjunction
$\lambda:\operatorname{\mathbf{Bicat}}_{s}\rightleftarrows\operatorname{\mathbf{2-Cat}}$
of §4.1. Moreover, this adjunction is a Quillen equivalence between both model
categories. It will be useful to know that the components at every object of
the unit and the counit of the adjunction
$\operatorname{st}:\operatorname{\mathbf{Bicat}}\rightleftarrows\operatorname{\mathbf{2-Cat}}$
are biequivalences (see [Cam20, §4.8]).
###### Theorem 4.5.
The adjunction
$\tau_{b}:\widehat{\Theta_{2}}\rightleftarrows\operatorname{\mathbf{Bicat}}_{s}:N_{h}$
is a Quillen adjunction between the category of 2-cellular sets with Ara’s
model structure for 2-quasi-categories and $\operatorname{\mathbf{Bicat}}_{s}$
with Lack’s model structure for bicategories. Moreover, it is a homotopy
reflection.
###### Proof.
See [Cam20, Theorem 5.10]. ∎
###### 4.6.
Theorem 4.5 implies that the homotopy coherent nerve $N_{h}(\mathcal{C})$ of
every bicategory $\mathcal{C}$ is a 2-quasi-category. Therefore, we have
indeed a solution for the "problem" of the strict nerve described in the
beginning of this section. Moreover, since every strict 2-functor is a normal
pseudo-functor, there is an inclusion $N_{2}(\mathcal{C})\to
N_{h}(\mathcal{C})$ for every 2-category $\mathcal{C}$. Campbell shows that
this inclusion is a weak equivalence of Ara [Cam20, Theorem 10.10], which
exhibits $N_{h}(\mathcal{C})$ as a fibrant replacement of
$N_{2}(\mathcal{C})$.
###### 4.7.
Since the homotopy coherent nerve of a bicategory is a 2-quasi-category, the
functor $N_{h}:\operatorname{\mathbf{Bicat}}\to\widehat{\Theta_{2}}$ factors
through the full subcategory $\operatorname{\mathbf{2-QCat}}$ of
$\widehat{\Theta_{2}}$ formed by 2-quasi-categories. The functor
$N_{h}:\operatorname{\mathbf{Bicat}}\to\operatorname{\mathbf{2-QCat}}$ has a
left adjoint, denoted by
$\operatorname{Ho}:\operatorname{\mathbf{2-QCat}}\to\operatorname{\mathbf{Bicat}}$.
For a 2-quasi-category $X$, we call $\operatorname{Ho}(X)$ the homotopy
bicategory associated to $X$. The objects of $\operatorname{Ho}(X)$ are the
same as those of $X$, and its hom-categories are given by
$\operatorname{Ho}(X)(x,y)=\operatorname{ho}(\operatorname{Hom}_{X}(x,y))$,
for every $x,y\in X_{0}$ (where $\operatorname{ho}(Y)$ denotes the homotopy
category of a quasi-category $Y$) [Cam20, §6.26]. A 1-morphism of
$\operatorname{Ho}(X)$ is an equivalence if and only if it is invertible in
the underlying quasi-category $i^{*}(X)$ [Cam20, Proposition 7.8].
By considering the adjunctions of §4.1 and §4.3, we have that for every
2-quasi-category $X$ and every bicategory $\mathcal{C}$,
$\operatorname{Hom}_{\operatorname{\mathbf{Bicat}}_{s}}(Q\operatorname{Ho}(X),\mathcal{C})\cong\operatorname{Hom}_{\operatorname{\mathbf{Bicat}}}(\operatorname{Ho}(X),\mathcal{C})\cong\operatorname{Hom}_{\widehat{\Theta_{2}}}(X,N_{h}(\mathcal{C}))\cong\operatorname{Hom}_{\operatorname{\mathbf{Bicat}}_{s}}(\tau_{b}X,\mathcal{C})$
By the Yoneda lemma, the bicategories $Q\operatorname{Ho}(X)$ and $\tau_{b}X$
are isomorphic. By applying the functor $\lambda$, we obtain an isomorphism
$\operatorname{st}\operatorname{Ho}(X)\cong\lambda\tau_{b}X$.
We can now give an alternative characterisation of 2-quasi-groupoids. Recall
that a bigroupoid is a bicategory where every 1-morphism is an equivalence and
every 2-morphism is (strictly) invertible.
###### Proposition 4.8.
A 2-quasi-category $X$ is a 2-quasi-groupoid if and only if the 2-category
$\lambda\tau_{b}(X)$ is a bigroupoid.
###### Proof.
Let $X$ be a 2-quasi-category. We show that:
1. (1)
$i^{*}(X)$ is a Kan complex if and only if every 1-morphism of
$\lambda\tau_{b}(X)$ is an equivalence, and
2. (2)
$X$ is locally Kan if and only if every 2-morphism of $\lambda\tau_{b}(X)$ is
invertible.
We have that $i^{*}(X)$ is a Kan complex if and only if every 1-morphism of
$\operatorname{Ho}(X)$ is an equivalence, cf. §4.7. We know that
$\lambda\tau_{b}(X)\cong\operatorname{st}(\operatorname{Ho}(X))$ and that the
unit $\operatorname{Ho}(X)\to\operatorname{st}(\operatorname{Ho}(X))$ is a
biequivalence by §4.4. Note that if $\mathcal{C}\to\mathcal{D}$ is a
biequivalence between bicategories, then every morphism of $\mathcal{C}$ is an
equivalence if and only if every morphism of $\mathcal{D}$ is an equivalence.
This shows (1).
For (2), recall that for every $x,y\in X_{0}$, we have
$\operatorname{ho}(\operatorname{Hom}_{X}(x,y))=\operatorname{Ho}(X)(x,y)$, so
$X$ is locally Kan if and only if the homotopy bicategory
$\operatorname{Ho}(X)$ is locally groupoidal. Since a bicategory biequivalent
to a locally groupoidal bicategory is also locally groupoidal, we have that
$\lambda\tau_{b}(X)\cong\operatorname{st}(\operatorname{Ho}(X))$ is locally
groupoidal. This is equivalent to saying that every 2-morphism of
$\lambda\tau_{b}(X)$ is invertible. ∎
###### 4.9.
Recall from [Joy08a, CL20] that a quasi-category $X$ is said to be 1-truncated
if the morphism $X\to N(\operatorname{ho}(X))$ is a weak categorical
equivalence. Following [Cam20], a 2-quasi-category $X$ is 2-truncated if, for
every $x,y\in X_{0}$, the hom-quasi-category $\operatorname{Hom}_{X}(x,y)$ is
1-truncated.
2-truncated 2-quasi-categories can be characterised as local objects.
###### Proposition 4.10.
A 2-quasi-category $X$ is 2-truncated if and only if $X$ is local with respect
to the boundary inclusion
$\delta_{1;3}:\partial\Theta_{2}[1;3]\to\Theta_{2}[1;3]$ in the model
structure for 2-quasi-categories.
###### Proof.
A quasi-category is 1-truncated if and only if it is local with respect to the
boundary inclusion $\delta_{3}:\partial\Delta[3]\to\Delta[3]$ in Joyal’s model
structure (see [CL20, Proposition 3.23]). The result follows from Proposition
1.8, since
$(\delta_{1;3}:\partial\Theta_{2}[1;3]\to\Theta_{2}[1;3])=\Sigma(\delta_{3}:\partial\Delta[3]\to\Delta[3])$.
∎
###### 4.11.
In view of the previous proposition, we can use Theorem 1.6 once again to
produce a Bousfield localisation of Ara’s model structure, whose fibrant
objects are the 2-truncated 2-quasi-categories. We call it the model structure
for 2-truncated 2-quasi-categories.
The following theorem allows us to show that the Quillen adjunction of Theorem
4.5 becomes a Quillen equivalence after localising Ara’s model structure to
obtain the model structure for 2-truncated 2-quasi-categories.
###### Theorem 4.12.
A 2-quasi-category $X$ is 2-truncated if and only if the unit $X\to
N_{h}(\operatorname{Ho}(X))$ of the adjunction
$\operatorname{Ho}:\operatorname{\mathbf{2-QCat}}\rightleftarrows\operatorname{\mathbf{Bicat}}:N_{h}$
is a weak equivalence of Ara.
###### Proof.
See [Cam20, Theorem 7.28]. ∎
###### Theorem 4.13.
The adjunction
$\tau_{b}:\widehat{\Theta_{2}}\rightleftarrows\operatorname{\mathbf{Bicat}}_{s}:N_{h}$
is a Quillen equivalence between the category of 2-cellular sets with the
model structure for 2-truncated 2-quasi-categories and
$\operatorname{\mathbf{Bicat}}_{s}$ with Lack’s model structure for
bicategories.
###### Proof.
Given Theorem 4.5, it is sufficient by Theorem 1.3 to show that every
2-truncated 2-quasi-category is weakly equivalent (in Ara’s model structure)
to the homotopy coherent nerve of a bicategory. This follows from Theorem
4.12. ∎
## 5\. 2-truncated 2-quasi-groupoids vs 2-groupoids
In this section, we construct a Quillen equivalence between a model structure
for 2-truncated 2-quasi-groupoids on $\widehat{\Theta_{2}}$ and the model
structure for 2-groupoids on the category $\operatorname{\mathbf{2-Gpd}}$
of 2-groupoids and (strict) 2-functors, described in [MS93].
###### 5.1.
The inclusion functor
$\operatorname{\mathbf{2-Gpd}}\to\operatorname{\mathbf{2-Cat}}$ has a left
adjoint $F:\operatorname{\mathbf{2-Cat}}\to\operatorname{\mathbf{2-Gpd}}$,
which freely adds inverses to every morphism and 2-morphism.
###### Proposition 5.2.
The adjunction
$F:\operatorname{\mathbf{2-Cat}}\rightleftarrows\operatorname{\mathbf{2-Gpd}}$
is a Quillen adjunction between Lack’s model structure for 2-categories and
Moerdijk-Svensson’s model structure for 2-groupoids. The model structure for
2-groupoids is the model structure right-transferred from the model structure
for 2-categories via this adjunction. Moreover, the adjunction is a homotopy
reflection.
###### Proof.
The fact that the model structure for 2-groupoids is the model structure
right-transferred from the model structure for 2-categories can be checked
directly from the definitions of weak equivalences and fibrations in both
model structures. This implies that the adjunction is Quillen, since the
inclusion $\operatorname{\mathbf{2-Gpd}}\to\operatorname{\mathbf{2-Cat}}$
obviously preserves fibrations and weak equivalences. The adjunction is a
homotopy reflection by a result of Lack [Lac02, Theorem 8.4], which states
that the counit of the derived adjunction is invertible. ∎
###### 5.3.
Let us recollect, for the convenience of the reader, the adjunctions
considered so far.
${{\mathbf{Bicat}}}$${{\mathbf{Bicat}_{s}}}$${{\mathbf{2\text{-}Cat}}}$${{\mathbf{2\text{-}Gpd}}}$${{\mathbf{2\text{-}QCat}}}$${{\widehat{\Theta_{2}}}}$$\scriptstyle{N_{h}}$$\scriptstyle{\tau_{b}}$$\scriptstyle{I}$$\scriptstyle{Q}$$\scriptstyle{\lambda}$$\scriptstyle{F}$$\scriptstyle{N_{h}}$Ho$\scriptstyle{\dashv}$$\scriptstyle{\dashv}$$\scriptstyle{\dashv}$$\scriptstyle{\dashv}$$\scriptstyle{\dashv}$
In the diagram above, the non-labelled arrows are inclusions, the first two
horizontal adjunctions are those of §4.1 and the third horizontal adjunction
is the one of Proposition 5.2. The left-side (resp. right-side) vertical
adjunction is presented in §4.7 (resp. §4.3).
###### 5.4.
We consider the Quillen adjunction
$\tau:\widehat{\Theta_{2}}\rightleftarrows\operatorname{\mathbf{2-Gpd}}:N_{h}$
obtained by composing the right-side vertical adjunction with the second and
the third horizontal adjunctions of the diagram above. The functor
$\tau:\widehat{\Theta_{2}}\to\operatorname{\mathbf{2-Gpd}}$ is explicitly
defined as the composite $\tau:=F\lambda\tau_{b}$.
Our goal is to show that this adjunction is a Quillen equivalence between a
certain model structure for 2-truncated 2-quasi-groupoids on
$\widehat{\Theta_{2}}$, described in §5.6, and Moerdijk-Svensson’s model
structure on $\operatorname{\mathbf{2-Gpd}}$.
###### Proposition 5.5.
A 2-quasi-category $X$ is a 2-truncated 2-quasi-groupoid if and only if it is
local with respect to the morphisms
$\delta_{1;3}:\partial\Theta_{2}[1;3]\to\Theta_{2}[1;3],[\operatorname{id};\sigma^{0}]:\Theta_{2}[1;1]\to\Theta_{2}[1;0],[\sigma^{0}]:\Theta_{2}[1;0]\to\Theta_{2}[0]$
in Ara’s model structure for 2-quasi-categories.
###### Proof.
It follows from Propositions 3.12 and 4.10. ∎
###### 5.6.
Theorem 1.6 gives a model structure on $\widehat{\Theta_{2}}$ whose
cofibrations are monomorphisms, and whose fibrant objects are 2-truncated
2-quasi-groupoids. This model structure for 2-truncated 2-quasi-groupoids is
the localisation of Ara’s model structure for 2-quasi-categories with respect
to the morphisms of Proposition 5.5.
###### Theorem 5.7.
The adjunction
$\tau:\widehat{\Theta_{2}}\rightleftarrows\operatorname{\mathbf{2-Gpd}}:N_{h}$
of §5.4 is a Quillen equivalence between the model structure for 2-truncated
2-quasi-groupoids and Moerdijk-Svensson’s model structure for 2-groupoids.
###### Proof.
We consider the model structure for 2-truncated 2-quasi-categories on
$\widehat{\Theta_{2}}$. With this model structure, the adjunction (1)
$\tau:\widehat{\Theta_{2}}\rightleftarrows\operatorname{\mathbf{2-Gpd}}:N_{h}$
is Quillen and a homotopy reflection, as a composition of the Quillen
equivalence (2)
$\tau_{b}:\widehat{\Theta_{2}}\rightleftarrows\operatorname{\mathbf{Bicat}}_{s}:N_{h}$
of Theorem 4.13, the Quillen equivalence (3)
$\lambda:\operatorname{\mathbf{Bicat}}_{s}\rightleftarrows\operatorname{\mathbf{2-Cat}}$
of §4.4 and the homotopy reflection (4)
$F:\operatorname{\mathbf{2-Cat}}\rightleftarrows\operatorname{\mathbf{2-Gpd}}$
of Proposition 5.2.
By Proposition 1.11, §4.11 and §5.6, the model structure for 2-truncated
2-quasi-groupoids is a localisation of the former one. We can then use Theorem
1.3.(2) to show that the adjunction (1) is a Quillen equivalence after
localisation. The conditions to be checked amount to the assertions (a) and
(b) below.
(a) The adjunction (1) is a Quillen adjunction, when considering the model
structure for 2-truncated 2-quasi-groupoids on $\widehat{\Theta_{2}}$.
By Theorem 1.3.(1), it is sufficient to show that if $\mathcal{G}$ is a
2-groupoid, then $N_{h}(\mathcal{G})$ is a 2-truncated 2-quasi-groupoid. For
all objects $x,y$ of $\mathcal{G}$, we have that
$\operatorname{Hom}_{N_{h}(\mathcal{G})}(x,y)\cong N(\mathcal{G}(x,y))$ (here,
the hom-simplicial set is the one defined in §3.7). Indeed, an $n$-simplex of
$\operatorname{Hom}_{N_{h}(\mathcal{G})}(x,y)$ is a normal pseudo-functor
$[1;n]\to\mathcal{G}$ sending the objects 0 and 1 of $[1;n]$ to $x$ and $y$
(such a pseudo-functor is necessarily a 2-functor, since $[1;n]$ has no
composable non-identity 1-morphisms) or, equivalently, the data of $n$
composable 2-morphisms with 2-source $x$ and 2-target $y$, which is an
$n$-simplex of $N(\mathcal{G}(x,y))$. But $\mathcal{G}(x,y)$ is a groupoid, so
$N(\mathcal{G}(x,y))$ is 1-truncated and a Kan complex, and hence
$N_{h}(\mathcal{G})$ is 2-truncated and locally Kan. Since every 1-morphism of
$\mathcal{G}$ is invertible, it follows from the definitions that every
morphism (1-simplex) of $i^{*}(N_{h}(\mathcal{G}))$ is invertible, and so
$i^{*}(N_{h}(\mathcal{G}))$ is a Kan complex.
(b) For every 2-truncated 2-quasi-groupoid $X$, there exists a 2-groupoid $Y$
and a weak equivalence $X\to N_{h}(Y)$ in the model structure for 2-truncated
2-quasi-categories.
Let $Y=\tau X$ and consider the unit morphism $X\to N_{h}(\tau X)$ of the
adjunction (1). This morphism can be written as a composite
$X\xrightarrow[]{\eta_{X}}N_{h}(\lambda\tau_{b}X)\xrightarrow[]{N_{h}(\eta^{\prime}_{\lambda\tau_{b}X})}N_{h}(\tau
X)$
where the first morphism is the component at $X$ of the unit of the Quillen
equivalence obtained by composing the Quillen equivalences (2) and (3), and
the second one is the image by $N_{h}$ of the component at $\lambda\tau_{b}X$
of the unit of the Quillen adjunction (4). The first morphism is a weak
equivalence, since it is the unit of a Quillen equivalence at a cofibrant
object $X$, and $\lambda\tau_{b}X$ is fibrant (as is every 2-category). By
Proposition 4.8, the 2-category $\lambda\tau_{b}X$ is a bigroupoid. In the
proof of [Lac02, Theorem 8.5], it is shown that if a 2-category $\mathcal{A}$
is a bigroupoid, the 2-functor $\mathcal{A}\to F\mathcal{A}$ given by the unit
of the adjunction (4) is a biequivalence. Thus, the second morphism is also a
weak equivalence, as the image by a right Quillen functor of a weak
equivalence between fibrant objects. We conclude that $X\to N_{h}(\tau X)$ is
a weak equivalence. ∎
## 6\. Homotopy 2-types
In this section, we recall a result of [MS93], which will allow the comparison
of 2-truncated 2-quasi-groupoids and homotopy 2-types, by means of a zigzag of
Quillen equivalences.
###### 6.1.
Let $n\geq 0$. A (homotopy) $n$-type is a Kan complex $X$ such that, for every
$x\in X_{0}$, the $m$-th homotopy group $\pi_{m}(X,x)$ is trivial for every
$m>n$. Homotopy $n$-types can be characterised as local objects in the Kan-
Quillen model structure on simplicial sets. Indeed, a Kan complex $X$ is an
$n$-type if and only if it is local with respect to the boundary inclusion
$\partial\Delta[n+2]\to\Delta[n+2]$ in this model structure (see for example
[CL20, Corollary 3.25]). Therefore, by Theorem 1.6, the Bousfield localisation
of the Kan-Quillen model structure with respect to
$\delta_{n+2}:\partial\Delta[n+2]\to\Delta[n+2]$ produces a model structure
whose fibrant objects are homotopy $n$-types. For $n=2$, we obtain a model
structure for homotopy 2-types.
###### Theorem 6.2 (Moerdijk-Svensson).
There is a Quillen equivalence
$W:\widehat{\Delta}\rightleftarrows\operatorname{\mathbf{2-Gpd}}:N_{S}$
between the model structure for homotopy 2-types and Moerdijk-Svensson’s model
structure for 2-groupoids.
###### Proof.
By [MS93, Proposition 2.1.(ii),(iv)], the adjunction of [MS93, Theorem 2.3] is
Quillen when considering the Kan-Quillen model structure on
$\widehat{\Delta}$. It follows from Theorem 1.3.(1) and [MS93, Proposition
2.1.(iii)] that it remains Quillen after localisation to the model structure
for 2-types. The induced adjunction between homotopy categories is an
equivalence by [MS93, Corollary 2.6], after noticing that the homotopy
category of 2-types is equivalent to the full subcategory of the homotopy
category of spaces formed by homotopy 2-types. ∎
###### Corollary 6.3.
The homotopy category of $\widehat{\Theta_{2}}$ with the model structure for
2-truncated 2-quasi-groupoids is equivalent to the homotopy category of
$\widehat{\Delta}$ with the model structure for homotopy 2-types.
###### Proof.
This follows from the Quillen equivalence of Theorem 5.7 and Moerdijk-
Svensson’s Quillen equivalence of Theorem 6.2. Indeed, we have a zigzag of
Quillen equivalences
${{\widehat{\Theta_{2}}}}$${{\mathbf{2\text{-}Gpd}}}$${{\widehat{\Delta}}}$$\scriptstyle{W}$$\scriptstyle{N_{S}}$$\scriptstyle{\tau}$$\scriptstyle{N_{h}}$$\scriptstyle{\dashv}$$\scriptstyle{\dashv}$
∎
## References
* [Ara14] Dimitri Ara. Higher quasi-categories vs higher Rezk spaces. Journal of K-theory, 14(3):701–749, 2014.
* [Bar10] Clark Barwick. On left and right model categories and left and right Bousfield localizations. Homology, Homotopy and Applications, 12(2):245–320, 2010.
* [Ber02] Clemens Berger. A cellular nerve for higher categories. Advances in Mathematics, 169(1):118–175, 2002.
* [Ber07] Clemens Berger. Iterated wreath product of the simplex category and iterated loop spaces. Advances in Mathematics, 213(1):230–270, 2007.
* [BR13] Julia E Bergner and Charles Rezk. Comparison of models for ($\infty$, n)–categories, I. Geometry & Topology, 17(4):2163–2202, 2013.
* [BR20] Julia E Bergner and Charles Rezk. Comparison of models for ($\infty$, n)–categories, II. Journal of Topology, 13(4):1554–1581, 2020.
* [Cam20] Alexander Campbell. A homotopy coherent cellular nerve for bicategories. Advances in Mathematics, 368:107138, 2020.
* [CL20] Alexander Campbell and Edoardo Lanari. On truncated quasi-categories. Cahiers de topologie et géométrie différentielle catégoriques, 61(2):154–207, 2020.
* [Dug01] Daniel Dugger. Combinatorial model categories have presentations. Advances in Mathematics, 164(1):177–201, 2001.
* [Hir03] Philip S Hirschhorn. Model categories and their localizations. Number 99. American Mathematical Soc., 2003.
* [Joy08a] André Joyal. Notes on quasi-categories. preprint, 2008.
* [Joy08b] André Joyal. The theory of quasi-categories and its applications. preprint, 2008.
* [JT07] André Joyal and Myles Tierney. Quasi-categories vs Segal spaces. Contemporary Mathematics, 431(277-326):10, 2007.
* [Lac02] Stephen Lack. A Quillen model structure for 2-categories. K-theory, 26(2):171–205, 2002.
* [Lac04] Stephen Lack. A Quillen model structure for bicategories. K-theory, 33(3):185–197, 2004.
* [MS93] Ieke Moerdijk and Jan-Alve Svensson. Algebraic classification of equivariant homotopy 2-types, I. Journal of Pure and Applied Algebra, 89(1-2):187–216, 1993.
* [Rez10] Charles Rezk. A cartesian presentation of weak n–categories. Geometry & Topology, 14(1):521–571, 2010.
* [Sim11] Carlos Simpson. Homotopy Theory of Higher Categories: From Segal Categories to n-Categories and Beyond, volume 19. Cambridge University Press, 2011.
Recursion Relation for Toeplitz Determinants and the Discrete Painlevé II
Hierarchy††This paper is a contribution to the Special Issue on Evolution
Equations, Exactly Solvable Models and Random Matrices in honor of Alexander
Its’ 70th birthday. The full collection is available at
https://www.emis.de/journals/SIGMA/Its.html
Thomas CHOUTEAU a and Sofia TARRICONE b
a) Université d’Angers, CNRS, LAREMA, SFR MATHSTIC, F-49000 Angers, France
<EMAIL_ADDRESS>
b) Institut de Physique Théorique, Université Paris-Saclay, CEA, CNRS,
F-91191 Gif-sur-Yvette, France <EMAIL_ADDRESS> https://starricone.netlify.app/
Received December 22, 2022, in final form May 16, 2023; Published online May
28, 2023
Solutions of the discrete Painlevé II hierarchy are shown to be in relation
with a family of Toeplitz determinants describing certain quantities in
multicritical random partitions models, for which the limiting behavior has
been recently considered in the literature. Our proof is based on the
Riemann–Hilbert approach for the orthogonal polynomials on the unit circle
related to the Toeplitz determinants of interest. This technique allows us to
construct a new Lax pair for the discrete Painlevé II hierarchy that is then
mapped to the one introduced by Cresswell and Joshi.
discrete Painlevé equations; orthogonal polynomials; Riemann–Hilbert problems;
Toeplitz determinants
33E17; 33C47; 35Q15
## 1 Introduction
Let us consider the symbol $\varphi(z)=\mathrm{e}^{w(z)}$ with
$w(z)\coloneqq v(z)+v\big{(}z^{-1}\big{)}\qquad\text{and}\qquad
v(z)\coloneqq\sum_{j=1}^{N}\frac{\theta_{j}}{j}z^{j},$ (1.1)
for $\theta_{j}$ being real constants and natural $N\geq 1$. The $n$-th
Toeplitz matrix associated to this symbol, denoted by $T_{n}(\varphi)$, is a
square $(n+1)$-dimensional matrix whose entries are given by
$T_{n}(\varphi)_{i,j}\coloneqq\varphi_{i-j},\qquad i,j=0,\dots,n.$ (1.2)
Here for every $k\in\mathbb{Z}$, $\varphi_{k}$ is the $k$-th Fourier
coefficient of $\varphi(z)$, namely
$\varphi_{k}=\int_{-\pi}^{\pi}\mathrm{e}^{-{\rm
i}k\beta}\varphi\big{(}\mathrm{e}^{{\rm i}\beta}\big{)}\frac{{\rm
d}\beta}{2\pi},$
so that $\sum_{k\in\mathbb{Z}}\varphi_{k}z^{k}=\varphi(z)$. Notice that, even
though it is not emphasized in our notation, the functions $\varphi_{k}$ and
thus the Toeplitz matrix $T_{n}(\varphi)$ explicitly depend on the natural
parameter $N$ which enters in the definition of $v(z)$ in equation (1.1).
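Since every quantity above is built from Fourier coefficients of an explicit symbol, it is straightforward to evaluate numerically. The following Python sketch (our illustration, not part of the paper; it assumes NumPy) approximates $\varphi_{k}$ by an $M$-point discrete Fourier transform of $\varphi(\mathrm{e}^{\mathrm{i}\beta})$ and assembles the Toeplitz matrix of (1.2):

```python
import numpy as np

def toeplitz_det(theta, n, M=512):
    """Approximate D_n = det T_n(phi) for phi = exp(v(z) + v(1/z)),
    with v(z) = sum_j theta_j z^j / j, via an M-point DFT for the phi_k."""
    beta = 2.0 * np.pi * np.arange(M) / M
    z = np.exp(1j * beta)
    v = sum(t / j * z ** j for j, t in enumerate(theta, start=1))
    phi = np.exp(v + np.conj(v))      # on |z| = 1, v(1/z) = conj(v(z)) for real theta_j
    c = np.fft.fft(phi).real / M      # c[k % M] approximates the Fourier coefficient phi_k
    T = np.array([[c[(i - j) % M] for j in range(n + 1)] for i in range(n + 1)])
    return np.linalg.det(T)
```

For $\theta_{1}=0$ the symbol is constant and $D_{n}=1$; for $N=1$, $\theta_{1}=0.5$, the values $D_{n}$ stabilise quickly at $\mathrm{e}^{1/4}$, consistent with $q_{n}\to 1$ in equation (1.6) below.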
In the present work, it is indeed the dependence on this parameter $N$ that we
want to study. In particular, we show that the Toeplitz determinants
associated to $T_{n}(\varphi)$, naturally defined as
$D_{n}^{N}\coloneqq D_{n}=\det(T_{n}(\varphi)),$ (1.3)
are related to some solutions of a discrete version of the Painlevé II
hierarchy, indexed over the parameter $N$ (the dependence on $N$ is dropped in
the rest of the paper). Our interest in these Toeplitz determinants comes from
their appearance in the recent paper [5]. The authors there consider some
probability measures on the set of integer partitions called multicritical
Schur measures, which are a particular case of Schur measures introduced by
Okounkov in [23]. They are generalizations of the classical Poissonized
Plancherel measure and they are defined as
$\mathbb{P}(\{\lambda\})=Z^{-1}s_{\lambda}[\theta_{1},\dots,\theta_{N}]^{2},\qquad\text{with}\quad
Z=\exp\Bigg{(}\sum_{i=1}^{N}\frac{\theta_{i}^{2}}{i}\Bigg{)}.$ (1.4)
Here $s_{\lambda}[\theta_{1},\dots,\theta_{N}]$ denotes a Schur symmetric
function indexed by a partition $\lambda$ that can be expressed as
$s_{\lambda}[\theta_{1},\dots,\theta_{N}]=\det_{i,j}h_{\lambda_{i}-i+j}[\theta_{1},\dots,\theta_{N}],$
where $\sum_{k\geq
0}h_{k}z^{k}=\exp\big{(}\sum_{i=1}^{N}\frac{\theta_{i}}{i}z^{i}\big{)}$. In
[5], the authors first used the term multicritical to underline that they
obtained a different limiting edge behavior for these Schur measures compared
to the classical case of the Poissonized Plancherel measure ($N=1$) which is
characterised by the Tracy–Widom GUE distribution. For more details, we refer
to their Theorem 1 or to our discussion in the paragraph “Continuous limit”
below; see, for instance, equation (1.23), where the higher order Tracy–Widom
distributions appear.
In this setting, denoting by $\lambda=(\lambda_{1}\geq\lambda_{2}\geq\dots\geq
0)$ a generic integer partition and by
$\lambda^{\prime}=(\lambda_{1}^{\prime}\geq\lambda_{2}^{\prime}\geq\dots\geq
0)$ its conjugate partition (namely such that
$\lambda_{j}^{\prime}=|\{i:\lambda_{i}\geq j\}|$), major quantities of interest of
the model are, for any given $n\in\mathbb{N}$,
$r_{n}\coloneqq\mathbb{P}(\lambda_{1}\leq n)\qquad\text{and}\qquad
q_{n}\coloneqq\mathbb{P}(\lambda_{1}^{\prime}\leq n),$ (1.5)
that are often called discrete gap probabilities as random partitions have a
natural interpretation in terms of random configurations of points on the set
of semi-integers. Indeed, associating the set
$\\{\lambda_{i}-i+1/2\\}\subset\mathbb{Z}+\frac{1}{2}$ to a partition
$\lambda$ (see [23]), $r_{n}$ and $q_{n}$ can be expressed in terms of a
Fredholm determinant of a discrete kernel which corresponds to the gap
probability in the determinantal point process defined through the same
kernel.
According to the Geronimo–Case/Borodin–Okounkov formula [7], this Fredholm
determinant is related to the Toeplitz determinant $D_{n}$, which implies that
$r_{n}$ and $q_{n}$ are, up to a constant factor, Toeplitz determinants. This
leads to (see, for instance, [5, Propositions 6 and 7]):
$\displaystyle q_{n}=\mathrm{e}^{-\sum_{j=1}^{N}\theta_{j}^{2}/j}D_{n-1}.$
(1.6)
For $r_{n}$ instead, one defines
$\widetilde{\theta}_{i}=(-1)^{i-1}\theta_{i}$ and takes
$\tilde{w}(z)=\tilde{v}(z)+\tilde{v}\big{(}z^{-1}\big{)}$, where
$\tilde{v}(z)$ is nothing but $v(z)$ with $\theta_{i}$ replaced by
$\tilde{\theta}_{i}$ as given above; the Toeplitz determinant
$\widetilde{D}_{n}$ associated to the symbol
$\widetilde{\varphi}(z)=\mathrm{e}^{\tilde{w}(z)}$ then gives the analogous
formula
$r_{n}=\mathrm{e}^{-\sum_{j=1}^{N}\widetilde{\theta}_{j}^{2}/j}\widetilde{D}_{n-1}.$
Notice that in the simplest case, when $N=1$, the quantities $r_{n}$ and
$q_{n}$ coincide. Moreover, thanks to Schensted’s theorem [27], they are also
equal to the discrete probability distribution function of the length of the
longest increasing subsequence of random permutations of size $m$, with $m$
distributed as a Poisson random variable.
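For $N=1$, the identification of $r_{n}$ with a discrete gap probability can be tested numerically. The sketch below is an illustration, not part of the argument; truncation sizes are chosen ad hoc, and it uses the series form of the discrete Bessel kernel, $K(x,y)=\sum_{s\in\mathbb{Z}_{+}^{\prime}}J_{x+s}(2\theta_{1})J_{y+s}(2\theta_{1})$ on $\mathbb{Z}+\frac{1}{2}$ (our reading of the conventions of [23]), restricted to $\\{n+1/2,n+3/2,\dots\\}$, comparing the resulting Fredholm determinant with $r_{n}=q_{n}=\mathrm{e}^{-\theta_{1}^{2}}D_{n-1}$.

```python
import numpy as np
from scipy.special import iv, jv

theta = 1.3  # theta_1, with N = 1

def D(n):
    """Toeplitz determinant D_n for the symbol e^{2 theta cos(beta)}; D_{-1} = 1."""
    if n < 0:
        return 1.0
    J = np.arange(n + 1)
    return np.linalg.det(iv(np.abs(np.subtract.outer(J, J)), 2 * theta))

def r_fredholm(n, size=40, tail=80):
    """Gap probability P(lambda_1 <= n) as a (truncated) Fredholm determinant of
    the discrete Bessel kernel K(x, y) = sum_{s in Z'_+} J_{x+s} J_{y+s} (2 theta),
    restricted to the half-integers {n + 1/2, n + 3/2, ...}."""
    orders = jv(np.arange(n + 1, n + size + tail + 1), 2 * theta)
    B = np.array([orders[a:a + tail] for a in range(size)])  # B[a, l] = J_{n+a+l+1}
    return np.linalg.det(np.eye(size) - B @ B.T)

for n in range(5):
    assert abs(r_fredholm(n) - np.exp(-theta ** 2) * D(n - 1)) < 1e-10
```

For $n=0$ this reduces to $r_{0}=\mathbb{P}(\lambda=\varnothing)=\mathrm{e}^{-\theta_{1}^{2}}$, which the truncated determinant reproduces.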
In the case $N=1$, the relation of these quantities with the theory of
discrete Painlevé equations was shown two decades ago independently and
through very different methods by Borodin [6], Baik [2], Adler and van
Moerbeke [1] and Forrester and Witte [16].111They obtained an analogue of
equation (1.7) for Toeplitz determinants associated to symbols which are not
necessarily positive or even real valued. In particular, they all proved that
for every $n\geq 1$, the following chain of equalities holds
$\frac{D_{n}D_{n-2}}{D_{n-1}^{2}}=\frac{q_{n+1}q_{n-1}}{q_{n}^{2}}=\frac{r_{n+1}r_{n-1}}{r_{n}^{2}}=1-x_{n}^{2},$
(1.7)
where $x_{n}$ solves the second order nonlinear difference equation
$\theta_{1}(x_{n+1}+x_{n-1})\big{(}1-x_{n}^{2}\big{)}+nx_{n}=0,$ (1.8)
with certain initial conditions. Equation (1.8) is a particular case of the so
called discrete Painlevé II equation [26], a discrete analogue of the
classical second order ODE known as the Painlevé II equation [24]. This means
that performing some continuous limit of equation (1.8) one gets back the
Painlevé II equation. The Painlevé II equations, discrete and continuous ones,
depend in general on an additional constant term $\alpha\in\mathbb{R}$. In the
present work, we consider the discrete Painlevé II equation and its hierarchy
in the homogeneous case where $\alpha=0$. Its continuous limit will correspond
as well to the case $\alpha=0$.
###### Remark 1.1.
The homogeneous Painlevé II equation admits a famous solution [17], called the
Hastings–McLeod solution, found by requiring a specific boundary condition at
$\infty$. In parallel, one might wonder about the large $n$ behavior of the
solution $x_{n}$ of the discrete Painlevé II equation (1.8). Its behavior is
expressed in terms of Bessel functions. This is first suggested by the
following heuristic argument. By the definition of $r_{n}$ (1.5), as
$n\to\infty$, $r_{n}$ tends to one and, according to equation (1.7),
$x_{n}$ tends to zero. Hence, for large $n$, the nonlinear term in equation
(1.8) is small compared to the linear ones and equation (1.8) reduces to
$\theta_{1}\left(x_{n+1}+x_{n-1}\right)+nx_{n}=0,$
which indeed admits $J_{-n}(2\theta_{1})$, the Bessel function of the first
kind of order $-n$, as a solution. The claim is confirmed by a result of the
recent work [9]. The authors there studied a finite temperature deformation
of the discrete Bessel point process. The Fredholm determinant of the finite
temperature discrete Bessel kernel they studied depends on a function
$\sigma$. In the case when $\sigma=1_{\mathbb{Z}_{+}^{\prime}}$ (the
characteristic function of the set of positive half-integers), the Fredholm
determinant is equal to $r_{n}$. Then from [9, equations (1.33) and
(1.36) of Theorem III] together with equation (1.7), one can deduce that for
$n$ large $x_{n}^{2}\sim J_{n}(2\theta_{1})^{2}$ and, because of the previous
discussion, one can conclude
$x_{n}\sim J_{-n}(2\theta_{1})=(-1)^{n}J_{n}(2\theta_{1}),$
see also Figure 1.
Figure 1: For $N=1$, the graphs of $x_{n}$ and $(-1)^{n}J_{n}(2\theta_{1})$ as
functions of $n$, for $\theta_{1}=3$.
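All the identities above can be checked numerically for $N=1$. The sketch below is an illustration, not taken from the paper, with ad hoc cutoffs: it computes $D_{n}$ from the Fourier coefficients $\varphi_{k}=I_{k}(2\theta_{1})$ of the symbol, obtains $x_{n}=\pi_{n}(0)$ by a cofactor expansion at $z=0$ of the determinant formula (2.3) for the orthogonal polynomials of Section 2, and then verifies equation (1.7), the discrete Painlevé II equation (1.8), and the Bessel asymptotics discussed in Remark 1.1.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.special import iv, jv  # modified / ordinary Bessel functions

theta = 3.0  # theta_1; for N = 1 the symbol is e^{2 theta cos(beta)}

def phi(k):
    # Fourier coefficients of e^{2 theta cos(beta)}: phi_k = I_k(2 theta)
    return iv(np.abs(k), 2 * theta)

def D(n):
    """Toeplitz determinant D_n = det(phi_{j-k})_{j,k=0..n}, with D_{-1} = 1."""
    if n < 0:
        return 1.0
    return np.linalg.det(toeplitz(phi(np.arange(n + 1))))

def x(n):
    """x_n = pi_n(0), by cofactor expansion of the determinant formula (2.3) at z = 0."""
    if n == 0:
        return 1.0
    idx = np.arange(n)
    H = toeplitz(phi(idx - 1), phi(idx + 1))  # H[i, j] = phi_{i - j - 1}
    return (-1) ** n * np.linalg.det(H) / D(n - 1)

for n in range(1, 10):
    # equation (1.7): D_n D_{n-2} / D_{n-1}^2 = 1 - x_n^2
    assert abs(D(n) * D(n - 2) / D(n - 1) ** 2 - (1 - x(n) ** 2)) < 1e-8
    # equation (1.8): the discrete Painleve II residual vanishes
    res = theta * (x(n + 1) + x(n - 1)) * (1 - x(n) ** 2) + n * x(n)
    assert abs(res) < 1e-6

# Remark 1.1: x_n ~ (-1)^n J_n(2 theta) for large n
print(x(12), jv(12, 2 * theta))
```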
For $N>1$, Adler and van Moerbeke presented in [1] a generalization of
equation (1.7), proving that $x_{n}$ satisfies a recurrence relation
written in terms of the Toeplitz lattice Lax matrices. The main result of our
work is a recurrence relation for $x_{n}$, defined via an $N$-times iterated
discrete operator, which establishes the link with the discrete Painlevé II
hierarchy [11]. The precise result is stated below.
###### Theorem 1.2.
For any fixed $N\geq 1$, for the Toeplitz determinants $D_{n}$ (1.3), $n\geq
1$ associated to the symbol $\varphi(z)$ (1.1), we have
$\frac{D_{n}D_{n-2}}{D_{n-1}^{2}}=1-x_{n}^{2},$ (1.9)
where $x_{n}$ solves the nonlinear difference equation of order $2N$
$nx_{n}+\bigl{(}-v_{n}-v_{n}{\rm
Perm}_{n}+2x_{n}\Delta^{-1}(x_{n}-(\Delta+I)x_{n}{\rm
Perm}_{n})\bigr{)}{L}^{N}(0)=0,$ (1.10)
where $L$ is a discrete recursion operator defined as
$L(u_{n})\coloneqq\big{(}x_{n+1}\big{(}2\Delta^{-1}+I\big{)}((\Delta+I)x_{n}{\rm
Perm}_{n}-x_{n})+v_{n+1}(\Delta+I)-x_{n}x_{n+1}\bigr{)}u_{n}.$ (1.11)
Here $v_{n}\coloneqq 1-x_{n}^{2}$, $\Delta$ denotes the difference operator
$\Delta\colon\ u_{n}\to u_{n+1}-u_{n}$
and ${\rm Perm}_{n}$ is the transformation of the space
$\mathbb{C}\big{[}(x_{j})_{j\in[[0,2n]]}\big{]}$ acting by permuting indices
in the following way:
$\displaystyle\begin{split}{\rm
Perm}_{n}\colon\quad\mathbb{C}\big{[}(x_{j})_{j\in[[0,2n]]}\big{]}&\longrightarrow\mathbb{C}\big{[}(x_{j})_{j\in[[0,2n]]}\big{]},\\\
P\big{(}(x_{n+j})_{-n\leqslant j\leqslant n}\big{)}&\longmapsto
P\big{(}(x_{n-j})_{-n\leqslant j\leqslant n}\big{)}.\end{split}$ (1.12)
###### Remark 1.3.
According to equation (1.10) and the definition of the operator $L$ (1.11), we
need to perform discrete integrations to compute the $N$-th equation of the
discrete Painlevé II hierarchy. This discrete integration can always be
carried out. The operator $\Delta^{-1}$, the inverse of the difference
operator $\Delta$, is applied to $(\Delta+I)x_{n}{\rm Perm}_{n}-x_{n}$, and
this argument can be written as a discrete derivative. Indeed,
$\left(\Delta+I\right)x_{n}{\rm Perm}_{n}-x_{n}=\Delta x_{n}{\rm
Perm}_{n}+({\rm Perm}_{n}-I)x_{n}.$
The first term on the right-hand side is a discrete derivative and, because of
the definition of ${\rm Perm}_{n}$, the second term can also be expressed as a
discrete derivative.
Equation (1.10), together with the definition of the recursion operator $L$ in
(1.11), of the quantity $v_{n}$ and of the transformation ${\rm Perm}_{n}$ in
(1.12) is indeed the $N$-th member of the discrete Painlevé II hierarchy. The
first equations of the hierarchy read as
$\displaystyle N=1\colon\ $ $\displaystyle
nx_{n}+\theta_{1}(x_{n+1}+x_{n-1})\big{(}1-x_{n}^{2}\big{)}=0,$ (1.13)
$\displaystyle N=2\colon\ $ $\displaystyle
nx_{n}+\theta_{1}\big{(}1-x_{n}^{2}\big{)}(x_{n+1}+x_{n-1})+\theta_{2}\big{(}1-x_{n}^{2}\big{)}$
$\displaystyle\qquad\times{}\bigl{(}x_{n+2}\big{(}1-x_{n+1}^{2}\big{)}+x_{n-2}\big{(}1-x_{n-1}^{2}\big{)}-x_{n}(x_{n+1}+x_{n-1})^{2}\bigr{)}=0,$
(1.14) $\displaystyle N=3\colon\ $ $\displaystyle
nx_{n}+\theta_{1}\big{(}1-x_{n}^{2}\big{)}(x_{n+1}+x_{n-1})+\theta_{2}\big{(}1-x_{n}^{2}\big{)}\bigl{(}x_{n+2}\big{(}1-x_{n+1}^{2}\big{)}$
$\displaystyle\qquad{}+x_{n-2}\big{(}1-x_{n-1}^{2}\big{)}-x_{n}(x_{n+1}+x_{n-1})^{2}\bigr{)}+\theta_{3}\big{(}1-x_{n}^{2}\big{)}\bigl{(}x_{n}^{2}(x_{n+1}+x_{n-1})^{3}$
$\displaystyle\qquad{}+x_{n+3}\big{(}1-x_{n+2}^{2}\big{)}\big{(}1-x_{n+1}^{2}\big{)}+x_{n-3}\big{(}1-x_{n-2}^{2}\big{)}\big{(}1-x_{n-1}^{2}\big{)}\bigr{)}$
$\displaystyle\qquad{}+\theta_{3}\big{(}1-x_{n}^{2}\big{)}\bigl{(}-2x_{n}(x_{n+1}+x_{n-1})\big{(}x_{n+2}\big{(}1-x_{n+1}^{2}\big{)}+x_{n-2}\big{(}1-x_{n-1}^{2}\big{)}\big{)}$
$\displaystyle\qquad{}-x_{n-1}x_{n-2}^{2}\big{(}1-x_{n-1}^{2}\big{)}\bigr{)}$
$\displaystyle\qquad{}+\theta_{3}\big{(}1-x_{n}^{2}\big{)}\bigl{(}-x_{n+1}x_{n+2}^{2}\big{(}1-x_{n+1}^{2}\big{)}-x_{n+1}x_{n-1}(x_{n+1}+x_{n-1})\bigr{)}=0$
(1.15)
with the first one coinciding with the discrete Painlevé II equation (1.8).
Computations with the operator (1.11) introduced in Theorem 1.2 for $N=1$ and
$2$ are done in Example 3.11.
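Equation (1.14) can be tested numerically through the OPUC construction of Section 2. In the sketch below (an illustration with ad hoc cutoffs; the parameter values are arbitrary), the Fourier coefficients of the $N=2$ symbol are obtained by convolving modified Bessel coefficient sequences, $x_{n}=\pi_{n}(0)$ is computed by a cofactor expansion at $z=0$ of the determinant formula (2.3), and both the determinant identity (1.9) and the residual of the $N=2$ member (1.14) are checked.

```python
import numpy as np
from scipy.special import iv

th1, th2 = 1.5, 0.7
K = 40  # frequency cutoff; the coefficients decay superexponentially

# Fourier coefficients of the N = 2 symbol e^{th1 (z + 1/z) + (th2/2)(z^2 + 1/z^2)}:
# convolve I_j(2 th1) at frequency j with I_m(th2) at frequency 2 m.
c1 = iv(np.abs(np.arange(-K, K + 1)), 2 * th1)
c2 = np.zeros(2 * K + 1)
for m in range(-(K // 2), K // 2 + 1):
    c2[K + 2 * m] = iv(abs(m), th2)
phi = np.convolve(c1, c2)[K:3 * K + 1]  # phi[K + k] = phi_k, k = -K..K

def D(n):
    """Toeplitz determinant D_n = det(phi_{j-k})_{j,k=0..n}, with D_{-1} = 1."""
    if n < 0:
        return 1.0
    J = np.arange(n + 1)
    return np.linalg.det(phi[K + np.subtract.outer(J, J)])

def x(n):
    """x_n = pi_n(0), by cofactor expansion of the determinant formula (2.3) at z = 0."""
    if n == 0:
        return 1.0
    J = np.arange(n)
    return (-1) ** n * np.linalg.det(phi[K + np.subtract.outer(J, J) - 1]) / D(n - 1)

for n in range(1, 8):
    # determinant identity (1.9) for N = 2
    assert abs(D(n) * D(n - 2) / D(n - 1) ** 2 - (1 - x(n) ** 2)) < 1e-10
for n in range(2, 7):
    # residual of the N = 2 member (1.14)
    res = (n * x(n) + th1 * (1 - x(n) ** 2) * (x(n + 1) + x(n - 1))
           + th2 * (1 - x(n) ** 2) * (x(n + 2) * (1 - x(n + 1) ** 2)
                                      + x(n - 2) * (1 - x(n - 1) ** 2)
                                      - x(n) * (x(n + 1) + x(n - 1)) ** 2))
    assert abs(res) < 1e-8
```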
###### Remark 1.4.
The same heuristic argument used in Remark 1.1 applies also when $N>1$ (since
$x_{n}$ still tends to zero as $n\to\infty$), thus suggesting that the $N$-th
equation of the discrete Painlevé II hierarchy reduces to a linear discrete
equation for large $n$. For $N=2$ and $3$, the reduced equations are
$\displaystyle N=2\colon\ $ $\displaystyle
nx_{n}+\theta_{1}(x_{n+1}+x_{n-1})+\theta_{2}(x_{n+2}+x_{n-2})=0,$
$\displaystyle N=3\colon\ $ $\displaystyle
nx_{n}+\theta_{1}(x_{n+1}+x_{n-1})+\theta_{2}(x_{n+2}+x_{n-2})+\theta_{3}(x_{n+3}+x_{n-3})=0.$
Similar recurrence relations appeared in [12] for the multivariable
generalized Bessel functions (GBFs). These generalized Bessel functions were
discussed in [21, 23] in the context of Schur measures for random partitions
and generalizations of the previous recurrence equations were introduced (in
particular, see [21, equation (3.2b)]). We denote by
$J_{n}^{(N)}(\xi_{1},\dots,\xi_{N})$ an $N$-variable GBF of order $n$. In
[12], $J_{n}^{(N)}(\xi_{1},\dots,\xi_{N})$ is defined as a discrete
convolution product of $N$ Bessel functions. In particular, if
$j_{n}^{(k)}(\xi)$ is the $n$-th Fourier coefficient of the function
$\beta\to\mathrm{e}^{2{\rm i}\xi\sin(k\beta)}$ then
$J_{n}^{(N)}(\xi_{1},\dots,\xi_{N})\coloneqq j_{n}^{(N)}(\xi_{N})\ast
j_{n}^{(N-1)}(\xi_{N-1})\ast\cdots\ast j_{n}^{(1)}(\xi_{1})(n),$
where $\ast$ denotes the discrete convolution.
In the case $N=1$, the symbol we considered was
$\varphi\big{(}\mathrm{e}^{{\rm
i}\beta}\big{)}=\mathrm{e}^{\theta_{1}(\mathrm{e}^{{\rm
i}\beta}+\mathrm{e}^{-{\rm i}\beta})}=\mathrm{e}^{2\theta_{1}\cos(\beta)}$ and
the large $n$ asymptotic behavior of $x_{n}$ was given by
$J_{-n}(2\theta_{1})$ which is the $n$-th Fourier coefficient of the function
$\beta\to\mathrm{e}^{\theta_{1}(\mathrm{e}^{{\rm i}\beta}-\mathrm{e}^{-{\rm
i}\beta})}$ up to a constant $(-1)^{n}$.
For $N>1$, the symbol is $\varphi_{N}(\mathrm{e}^{{\rm
i}\beta})=\prod_{k=1}^{N}\mathrm{e}^{\frac{\theta_{k}}{k}(\mathrm{e}^{{\rm
i}k\beta}+\mathrm{e}^{-{\rm
i}k\beta})}=\prod_{k=1}^{N}\mathrm{e}^{2\frac{\theta_{k}}{k}\cos(k\beta)}$.
Then, we conjecture that the large $n$ asymptotic behavior of $x_{n}^{(N)}$
would be given by the $n$-th Fourier coefficient of
$\beta\to\prod_{k=1}^{N}\mathrm{e}^{\frac{(-1)^{k+1}\theta_{k}}{k}(\mathrm{e}^{{\rm
i}k\beta}-\mathrm{e}^{-{\rm i}k\beta})}$ which is precisely
$J_{n}^{(N)}(\xi_{1},\dots,\xi_{N})$ up to some constant and proper rescaling:
$x_{n}^{(N)}\sim(-1)^{n}J_{n}^{(N)}\bigg{(}\bigg{(}(-1)^{i}\frac{2}{i}\theta_{i}\bigg{)}_{1\leqslant
i\leqslant N}\bigg{)},$
see also Figure 2.
Figure 2: The graphs of $x_{n}^{(N)}$ and
$(-1)^{n}J_{n}^{(N)}\big{(}(\theta_{i})_{1\leqslant i\leqslant N}\big{)}$ (for
$N=2$ on the left and $N=3$ on the right) as functions of $n$, for
$\theta_{1}=3$, $\theta_{2}=1.2$ and $\theta_{3}=2.6$.
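The convolution definition of the GBFs can be checked directly against the Fourier coefficients it is meant to reproduce. In the sketch below (an illustration; the frequency cutoff $K$ is chosen ad hoc), each factor $\mathrm{e}^{2{\rm i}\xi_{k}\sin(k\beta)}$ contributes, by the Jacobi–Anger expansion, the coefficient $J_{m}(2\xi_{k})$ at frequency $km$; their discrete convolution is compared with an FFT of the product function, for the parameter values of Figure 2.

```python
import numpy as np
from scipy.special import jv

def gbf(n, xis, K=60):
    """J_n^{(N)}(xi_1, ..., xi_N) as the discrete convolution of the Fourier
    coefficients of the factors e^{2 i xi_k sin(k beta)}, k = 1..N."""
    out = np.zeros(2 * K + 1)
    out[K] = 1.0  # delta at frequency 0: identity for convolution
    for k, xi in enumerate(xis, start=1):
        coeff = np.zeros(2 * K + 1)
        for m in range(-(K // k), K // k + 1):
            coeff[K + k * m] = jv(m, 2 * xi)  # Jacobi-Anger: J_m(2 xi) at frequency k m
        out = np.convolve(out, coeff)[K:3 * K + 1]  # keep frequencies -K..K
    return out[K + n]

# Cross-check against a direct Fourier coefficient of prod_k e^{2 i xi_k sin(k beta)}
xis = (3.0, 1.2, 2.6)  # the values used in Figure 2
M = 512
beta = 2 * np.pi * np.arange(M) / M
f = np.exp(2j * sum(xi * np.sin(k * beta) for k, xi in enumerate(xis, start=1)))
fourier = np.fft.fft(f) / M  # the n-th Fourier coefficient sits at index n (mod M)
for n in range(6):
    assert abs(gbf(n, xis) - fourier[n]) < 1e-10
```

For a single variable, `gbf(n, (xi,))` reduces to the ordinary Bessel coefficient $J_{n}(2\xi)$.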
###### Remark 1.5.
Notice that for $N=1,2$ the equations (1.13) and (1.14) coincide with the ones
found in [1]. Also notice that in the physics literature, Periwal and Shevitz
[25] found similar discrete equations for $N=1,2$ (with different coefficient
signs) in the context of unitary matrix models, and used their solutions to
evaluate the behavior of some typical integrals in the large-dimensional limit
by passing through the continuous limit of their discrete equations. For $N=1$,
the discrete Painlevé II equation was also found in [18] as a particular case
of the string equation for the full unitary matrix model, i.e., for
$w(z)=\theta_{1}z+\theta_{-1}z^{-1}$. The dependence on $\theta_{\pm 1}$ of
$x_{n}$ (and of some other quantities $x_{n}^{*}$) was also studied there, and
it produced evolution equations related, after a change of variables, to the
two-dimensional Toda equations. This suggests that in the general case
$N>1$, the dependence of $x_{n}$ on the times $\theta_{1},\dots,\theta_{N}$
would be related to the one-dimensional Toda hierarchy (see also [23]).
The first construction of a discrete Painlevé II hierarchy in [11] used the
integrability property of the continuous one, in the following sense. It is
well known that the classical Painlevé II equation admits an entire hierarchy
of higher order analogues. Indeed, this equation can be obtained as a
self-similarity reduction of the modified KdV equation. Thus, the higher order
members of the Painlevé II hierarchy are the analogous self-similarity
reductions of the corresponding higher order members of the modified KdV
hierarchy (see, e.g., [14]). In some way, this implies that the Lax
representation of the KdV hierarchy in terms of isospectral deformations
becomes for the Painlevé II hierarchy a Lax representation in terms of
isomonodromic deformations [10].
In [11] then, the discrete Painlevé II hierarchy is defined as the
compatibility condition of a sort of “discretization” of the Lax
representation of the Painlevé II hierarchy. In particular, they considered
the compatibility condition of a linear $2\times 2$ matrix-valued system of
the following type:
$\Phi_{n+1}(z)=L_{n}(z)\Phi_{n}(z),\qquad\frac{\partial}{\partial
z}\Phi_{n}(z)=M_{n}(z)\Phi_{n}(z),$ (1.16)
where the coefficients $L_{n}(z)$, $M_{n}(z)$ are explicit matrix-valued
rational functions in $z$, depending on $x_{\ell}$, $\ell=n+N,\dots,n-N$, in
some recursive (in $N$) way. This allows the authors to compactly write the
$N$-th discrete Painlevé II equation using some recursion operators. The
linear system that we obtain in Proposition 2.11 and that encodes our
hierarchy as written in (1.10) is mapped into the one of [11] through an
explicit transformation, as shown in Proposition 2.18, thus implying that
(1.10) is indeed the same discrete Painlevé II hierarchy.
Continuous limit. The aim of this paragraph is to explain heuristically the
reason why our result given in Theorem 1.2 can be considered as the discrete
analogue of the generalized Tracy–Widom formula for higher order Airy kernels
(namely, the result contained in [8, Theorem 1.1], case $\tau_{i}=0$).
For $N=1$, Borodin in [6] already pointed out that formula (1.7) with (1.8)
can be seen as a discrete analogue of the classical Tracy–Widom formula for
the GUE Tracy–Widom distribution [28, 29]. In other words, he described how to
pass from the left to the right in the picture below:
Discrete case:
$\dfrac{D_{n}D_{n-2}-D_{n-1}^{2}}{D_{n-1}^{2}}=-x_{n}^{2}\qquad\text{with}\quad
nx_{n}+\theta\big{(}1-x_{n}^{2}\big{)}(x_{n+1}+x_{n-1})=0.$
Continuous case:
$\dfrac{{\rm d}^{2}}{{\rm d}t^{2}}\log\det(1-\mathcal{K}_{\rm
Ai}|_{(t,+\infty)})=-u^{2}(t)\qquad\text{with}\quad
u^{\prime\prime}(t)=2u^{3}(t)+tu(t),\qquad u(t)\underset{t\rightarrow\infty}{\sim}{\rm Ai}(t).$
The two sides are connected by the Baik–Deift–Johansson scaling limit,
where ${\rm Ai}(t)$ denotes the classical Airy function and $\mathcal{K}_{\rm
Ai}$ denotes the integral operator acting on $L^{2}(\mathbb{R})$ through the
Airy kernel. This connection was achieved by using the scaling limit computed
by Baik, Deift and Johansson in [3] for the distribution of the first part of
partitions in the Poissonized Plancherel random partition model (which is
recovered in [5, Theorem 1] for $N=1$). As emphasized by Borodin,
their result not only ensures the existence of a limiting function for the
$D_{n}$, in this case $D(t)=\det(1-\mathcal{K}_{\rm Ai}|_{(t,+\infty)})$, for
a certain continuous variable $t$. It also already encodes how the discrete
function $x_{n}$ should be rescaled in terms of a differentiable function
$u(t)$ to recover, from the recursion relation for $D_{n}$, the Tracy–Widom
formula.
To generalize this result to the case $N>1$, we proceed by adapting the
method used by Borodin in [6] for $N=1$ to the higher order cases, using the
scaling proposed in [5]222Up to the correction of the typo $d\rightarrow
d^{-1}$ in their statement of Theorem 1. for the multicritical case (notice
that their $n$ corresponds to our $N$), instead of the Baik–Deift–Johansson
one, which only holds for $N=1$.
We recall that $D_{n}$ is the Toeplitz determinant associated to the symbol
$\varphi(z)$ (1.1) (which depends on $\theta_{i}$, $i=1,\dots,N$ and thus on
$N$). In the following discussion, we write explicitly the dependence on the
family of parameters $(\theta_{i})$, $i=1,\dots,N$ of
$D_{n}=D_{n}(\theta_{i})$, $x_{n}=x_{n}(\theta_{i})$,
${r_{n}=r_{n}(\theta_{i})}$ and $q_{n}=q_{n}(\theta_{i})$. Consider equation
(1.9) written in terms of the Toeplitz determinants $D_{n}(\theta_{i})$ in
this way
$\frac{D_{n-2}(\theta_{i})D_{n}(\theta_{i})-D_{n-1}^{2}(\theta_{i})}{D_{n-1}^{2}(\theta_{i})}=-x_{n}^{2}(\theta_{i}).$
(1.17)
From the equation (1.6), this previous equation can be expressed in terms of
$q_{n}(\theta_{i})$ defined as (1.5). It becomes
$\frac{q_{n-1}(\theta_{i})q_{n+1}(\theta_{i})-q_{n}^{2}(\theta_{i})}{q_{n}^{2}(\theta_{i})}=-x_{n}^{2}(\theta_{i}).$
(1.18)
According to [5, Lemma 8], with the change of parameters
$\tilde{\theta}_{i}=(-1)^{i-1}\theta_{i}$, we have
$q_{n}(\theta_{i})=r_{n}\big{(}\tilde{\theta}_{i}\big{)}$. Thus equation
(1.18) now reads as
$\frac{r_{n-1}\big{(}\tilde{\theta}_{i}\big{)}r_{n+1}\big{(}\tilde{\theta}_{i}\big{)}-r_{n}^{2}\big{(}\tilde{\theta}_{i}\big{)}}{r_{n}^{2}\big{(}\tilde{\theta}_{i}\big{)}}=-x_{n}^{2}(\theta_{i}).$
(1.19)
Following the scaling limit described in [5, Theorem 1], we define the
following scaling for the discrete variable $n$:
$n=b\theta+t\theta^{\frac{1}{2N+1}}d^{-\frac{1}{2N+1}}\quad\iff\quad
t=(n-b\theta)\theta^{-\frac{1}{2N+1}}d^{\frac{1}{2N+1}}$ (1.20)
with $b$, $d$ defined as
$b=\frac{N+1}{N},\qquad d=\begin{pmatrix}2N\\\ N-1\end{pmatrix}$
and choose $\tilde{\theta}_{i}$ (resp. $\theta_{i}$) all proportional to
$\theta=\tilde{\theta}_{1}=\theta_{1}$ in the following way:
$\tilde{\theta}_{i}=(-1)^{i-1}\frac{(N-1)!(N+1)!}{(N-i)!(N+i)!}\theta,\qquad
i=1,\dots,N,$
respectively,
$\theta_{i}=\frac{(N-1)!(N+1)!}{(N-i)!(N+i)!}\theta,\qquad i=1,\dots,N.$
(1.21)
Now recall the definition (1.5) of $r_{n}\big{(}\tilde{\theta}_{i}\big{)}$ in
terms of $\mathbb{P}=\mathbb{P}_{\tilde{\theta}_{i}}$ (see equation (1.4)
for the definition of $\mathbb{P}$ and its dependence on the family of
parameters $(\theta_{i})$). With the previous scaling, it is now possible to
express $r_{n}\big{(}\tilde{\theta}_{i}\big{)}$ as a function of $t$ and
$\theta$:
$r_{n}\big{(}\tilde{\theta}_{i}\big{)}=\mathbb{P}_{\tilde{\theta}_{i}}\bigg{(}\frac{\lambda_{1}-b\theta}{(\theta
d^{-1})^{\frac{1}{2N+1}}}\leqslant t\bigg{)}$ (1.22)
and according to [5, Theorem 1], the limiting behavior of the probability
distribution function of $\lambda_{1}$ in this setting is given by
$\displaystyle\lim_{\theta\rightarrow+\infty}r_{n}\big{(}\tilde{\theta}_{i}\big{)}=\lim_{\theta\rightarrow+\infty}\mathbb{P}_{\tilde{\theta}_{i}}\bigg{(}\frac{\lambda_{1}-b\theta}{\big{(}\theta
d^{-1}\big{)}^{\frac{1}{2N+1}}}\leqslant t\bigg{)}=F_{N}(t),$ $\displaystyle
F_{N}(t)=\det(1-\mathcal{K}_{{\rm Ai}_{2N+1}}|_{(t,\infty)}),$ (1.23)
where $\mathcal{K}_{{\rm Ai}_{2N+1}}$ is the integral operator acting with
higher order Airy kernel (see, for instance, in [5, equation (2.7)]).
As we did for $r_{n}\big{(}\tilde{\theta}_{i}\big{)}$ in equation (1.22), we
express $r_{n+1}\big{(}\tilde{\theta}_{i}\big{)}$ and
$r_{n-1}\big{(}\tilde{\theta}_{i}\big{)}$ as functions of $t$ and $\theta$:
$r_{n\pm
1}\big{(}\tilde{\theta}_{i}\big{)}=\mathbb{P}_{\tilde{\theta}_{i}}\bigg{(}\frac{\lambda_{1}-b\theta}{\big{(}\theta
d^{-1}\big{)}^{\frac{1}{2N+1}}}\leqslant t\pm\big{(}\theta
d^{-1}\big{)}^{-\frac{1}{2N+1}}\bigg{)}.$
With this discussion and this scaling for $n$, $(\theta_{i})$ and
$\big{(}\tilde{\theta}_{i}\big{)}$, we deduce that
$-\lim_{\theta\rightarrow+\infty}\dfrac{x_{n}^{2}(\theta_{i})}{\big{(}\theta
d^{-1}\big{)}^{-\frac{2}{2N+1}}}=\lim_{\theta\rightarrow+\infty}\frac{r_{n-1}\big{(}\tilde{\theta}_{i}\big{)}r_{n+1}\big{(}\tilde{\theta}_{i}\big{)}-r_{n}^{2}\big{(}\tilde{\theta}_{i}\big{)}}{\big{(}\theta
d^{-1}\big{)}^{-\frac{2}{2N+1}}r_{n}^{2}\big{(}\tilde{\theta}_{i}\big{)}}=\frac{{\rm
d}^{2}}{{\rm d}t^{2}}\log F_{N}(t),$
where the first equality comes from equation (1.19) and the second from
equation (1.23).
From now on, we drop the dependence on $\theta_{i}$, $i=1,\dots,N$, in the
notation. The previous equation suggests that, in order to be consistent with
[8, Theorem 1.1], in the limit under the scaling (1.20) for $n$ and (1.21) for
$(\theta_{i})$, the discrete function $x_{n}$ appearing in formula (1.17)
should behave as
$-x_{n}^{2}\sim-(\theta)^{-\frac{2}{2N+1}}d^{\frac{2}{2N+1}}u^{2}(t)$
with $u(t)$ solution of the $N$-th equation of the Painlevé II hierarchy. This
can be proved directly by computing the scaling limit of the equations of the
discrete Painlevé II hierarchy we found for $x_{n}$ in Theorem 1.2. Indeed,
for every fixed $N$, we write $x_{n}$ as
$x_{n}=(-1)^{n}\theta^{-\frac{1}{2N+1}}d^{\frac{1}{2N+1}}u(t)$ (1.24)
with $u(t)$ a smooth function of the variable $t$ defined as in equation
(1.20). Now recall that $x_{n}$ solves the discrete equation (1.10) of order
$2N$ for every $N\geq 1$. The continuous limit of the discrete equations of
the hierarchy (1.10), under the definition of $x_{n}$ (1.24) and the scaling
of the parameters $\theta_{i}$ as (1.21), gives the equations of the classical
Painlevé II hierarchy. For any fixed $N$, the computation proceeds in the
same way: consider the $N$-th discrete equation of the hierarchy (1.10) and
replace each $\theta_{i}$ with the value given in formula (1.21). Then
substitute $x_{n}$ with the definition in (1.24) and, for
$\theta\rightarrow+\infty$, compute the asymptotic expansion of every term
$x_{n+K}\propto u\big{(}t+K\theta^{-\frac{1}{2N+1}}d^{\frac{1}{2N+1}}\big{)}$,
$K=-N,\dots,N$, appearing in the discrete equation. The coefficient of
$\theta^{-1}$ resulting from this procedure coincides with the $N$-th
equation of the Painlevé II hierarchy. For $N=1,2,3$, the computations are
carried out explicitly in Appendix A.
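For $N=1$, the continuous limit just described can be carried out symbolically. The sketch below is an illustration: here $\varepsilon=\theta^{-1/3}$, the overall factor $(-1)^{n}$ is divided out, and in this normalization the Painlevé II equation appears as the $\varepsilon^{0}$ coefficient. It substitutes the ansatz (1.24) with the scaling (1.20) ($b=2$, $d=1$) into equation (1.13), Taylor-expands $x_{n\pm 1}$, and checks that the divergent order cancels while the finite order reproduces $u^{\prime\prime}=2u^{3}+tu$.

```python
import sympy as sp

t, e = sp.symbols('t epsilon', positive=True)  # e stands for theta^(-1/3)
u = sp.Function('u')
th = e ** (-3)
n = 2 * th + t / e  # scaling (1.20) with N = 1: b = 2, d = 1

def xs(k, order=4):
    """Ansatz (1.24) for x_{n+k}, with (-1)^n factored out and u(t + k e) Taylor-expanded."""
    taylor = sum(sp.diff(u(t), t, j) * (k * e) ** j / sp.factorial(j) for j in range(order))
    return sp.Integer(-1) ** k * e * taylor

# N = 1 member (1.13): n x_n + theta_1 (1 - x_n^2)(x_{n+1} + x_{n-1}) = 0
expr = sp.expand(n * xs(0) + th * (1 - xs(0) ** 2) * (xs(1) + xs(-1)))

assert expr.coeff(e, -2) == 0  # the divergent order cancels
painleve2 = t * u(t) + 2 * u(t) ** 3 - sp.diff(u(t), t, 2)
assert sp.simplify(expr.coeff(e, 0) - painleve2) == 0  # Painlevé II at order e^0
```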
###### Remark 1.6.
It is worth mentioning that in [8], the authors also consider a
generalization of the Fredholm determinant $F_{N}(t)$, recalled here in
(1.23), depending on additional parameters $\tau_{i}$. These are related to
solutions of the general Painlevé II hierarchy, which also depends on the
$\tau_{i}$. With the scaling of [5] for the $\theta_{i}$'s, the continuous
limit of our discrete equations leads to the Painlevé II hierarchy with
$\tau_{i}=0$ for all $i$. This is consistent with the fact that the limiting
behavior in [5], written here in equation (1.23), involves the Fredholm
determinant $F_{N}(t)$ corresponding to $\tau_{i}=0$ for all $i$ (the same
determinant already appeared in [22]).
Methodology and outline. The rest of the work is devoted to the proof of
Theorem 1.2. To this end, we introduce the classical Riemann–Hilbert
characterization [4] of the family of orthogonal polynomials on the unit
circle (OPUC for brevity) with respect to a measure defined by the symbol
$\varphi(z)$. Classical results from the theory of orthogonal polynomials
allow us to obtain formula (1.17) almost directly, where $x_{n}$ is defined as
the constant term of the $n$-th monic orthogonal polynomial of the family. The
Riemann–Hilbert problem for the OPUC is then used to deduce a linear system of
the same type as (1.16), which is shown to be related to the Lax pair
introduced by Cresswell and Joshi [11] for the discrete Painlevé II hierarchy.
This is done in Section 2. The explicit computation of the Lax pair, together
with the construction of the recursion operator and of the hierarchy for
$x_{n}$ as written in (1.10), is carried out in Section 3.
## 2 OPUC: the Riemann–Hilbert approach and a discrete
Painlevé II Lax pair
In this section, we introduce the relevant family of orthogonal polynomials on
the unit circle. We recall some of their properties and their Riemann–Hilbert
characterization. Afterwards, we derive a Lax pair associated to the
Riemann–Hilbert problem and establish the relation with the Lax pair (1.16)
for the discrete Painlevé II hierarchy introduced by Cresswell and Joshi [11].
The proofs of the results on orthogonal polynomials stated here can be
found in the classical reference [4].
We denote by $S^{1}$ the unit circle in $\mathbb{C}$ counterclockwise
oriented. We consider the following positive measure on $S^{1}$ (absolutely
continuous w.r.t. the Lebesgue measure there):
$\mathrm{d}\mu(\beta)=\frac{\mathrm{e}^{w(\mathrm{e}^{{\rm
i}\beta})}}{2\pi}\mathrm{d}\beta,$ (2.1)
where the function $w(z)$ for any $z\in\mathbb{C}$ is given as in equation
(1.1). The family of orthogonal polynomials on the unit circle (OPUC) w.r.t.
the measure (2.1) is defined as the collection of polynomials
$\\{p_{n}(z)\\}_{n\in\mathbb{N}}$ written as
$p_{n}(z)=\kappa_{n}z^{n}+\dots+\kappa_{0},\qquad\kappa_{n}>0$ (2.2)
and such that the following relation holds for any indices $k$, $h$:
$\int_{-\pi}^{\pi}\overline{p_{k}\big{(}\mathrm{e}^{\mathrm{i}\beta}\big{)}}p_{h}\big{(}\mathrm{e}^{\mathrm{i}\beta}\big{)}\,\mathrm{d}\mu(\beta)=\delta_{k,h}.$
The family of monic orthogonal polynomials $\\{\pi_{n}(z)\\}$ associated to
the previous ones is defined in an analogous way, so that
$p_{n}(z)=\kappa_{n}\pi_{n}(z)$.
### 2.1 Toeplitz determinants related to OPUC
We recall that $\varphi(z)=\mathrm{e}^{w(z)}$, $z\in S^{1}$, with $w(z)$
defined as in (1.1), and that we defined $D_{n}\coloneqq\det(T_{n}(\varphi))$
(by convention $D_{-1}=1$) to be the $n$-th Toeplitz determinant associated to
the symbol $\varphi$ (see equations (1.2) and (1.3)). Because $\varphi(z)$ is
a real nonnegative function, $D_{n}\in\mathbb{R}_{>0}$.
###### Proposition 2.1.
If $\varphi(z)$ is a real nonnegative function, we have that
$p_{\ell}(z)=\frac{1}{\sqrt{D_{\ell}D_{\ell-1}}}\det\begin{pmatrix}\varphi_{0}&\varphi_{-1}&\dots&\varphi_{-\ell+1}&\varphi_{-\ell}\\\
\varphi_{1}&\varphi_{0}&\dots&\varphi_{-\ell+2}&\varphi_{-\ell+1}\\\
\vdots&\vdots&\ddots&\vdots&\vdots\\\
\varphi_{\ell-1}&\varphi_{\ell-2}&\dots&\varphi_{0}&\varphi_{-1}\\\
1&z&\dots&z^{\ell-1}&z^{\ell}\end{pmatrix}\\!,\qquad\ell\geq 0.$ (2.3)
###### Proof.
The proof is similar to the one for orthogonal polynomials on the real
line, which can be found, e.g., in [13, equation (3.5)] and the following
discussion.
∎
###### Corollary 2.2.
The ratio of two consecutive Toeplitz determinants is expressed as
$\frac{D_{\ell-1}}{D_{\ell}}=\kappa_{\ell}^{2},\qquad\ell\geq 0.$ (2.4)
###### Proof.
Thanks to formula (2.3), we have that
$p_{\ell}(z)=\frac{1}{\sqrt{D_{\ell}D_{\ell-1}}}\det\begin{pmatrix}\varphi_{0}&\varphi_{-1}&\dots&\varphi_{-\ell+1}\\\
\varphi_{1}&\varphi_{0}&\dots&\varphi_{-\ell+2}\\\
\vdots&\vdots&\ddots&\vdots\\\
\varphi_{\ell-1}&\varphi_{\ell-2}&\dots&\varphi_{0}\end{pmatrix}z^{\ell}+\dots=\sqrt{\frac{D_{\ell-1}}{D_{\ell}}}z^{\ell}+\cdots,$
and by definition $p_{\ell}(z)=\kappa_{\ell}\pi_{\ell}(z)$ with the latter
being the $\ell$-th monic orthogonal polynomial on $S^{1}$. Thus formula (2.4)
follows. ∎
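Both statements can be tested numerically for the $N=1$ weight $\mathrm{e}^{2\theta\cos\beta}$. The sketch below (an illustration; truncation parameters are chosen ad hoc) builds the coefficients of $p_{\ell}(z)$ from the determinant formula (2.3) with $\varphi_{k}=I_{k}(2\theta)$, verifies orthonormality by quadrature on the unit circle, and checks the relation $\kappa_{\ell}^{2}=D_{\ell-1}/D_{\ell}$ of Corollary 2.2.

```python
import numpy as np
from scipy.special import iv

theta = 1.5

def phi(k):
    # Fourier coefficients of the weight e^{2 theta cos(beta)}: phi_k = I_k(2 theta)
    return iv(np.abs(k), 2 * theta)

def D(n):
    """Toeplitz determinant D_n = det(phi_{j-k})_{j,k=0..n}, with D_{-1} = 1."""
    if n < 0:
        return 1.0
    J = np.arange(n + 1)
    return np.linalg.det(phi(np.subtract.outer(J, J)))

def p_coeffs(l):
    """Coefficients (c_0, ..., c_l) of p_l(z), expanding (2.3) along its last row."""
    B = phi(np.subtract.outer(np.arange(l), np.arange(l + 1)))  # l x (l+1) block
    minors = [(-1) ** (l + j) * np.linalg.det(np.delete(B, j, axis=1))
              for j in range(l + 1)]
    return np.array(minors) / np.sqrt(D(l) * D(l - 1))

# Orthonormality, checked by quadrature on the unit circle
M = 512
beta = 2 * np.pi * np.arange(M) / M
z = np.exp(1j * beta)
weight = np.exp(2 * theta * np.cos(beta)) / M  # discretization of d(mu)(beta)
P = np.array([np.polyval(p_coeffs(l)[::-1], z) for l in range(5)])
gram = (np.conj(P) * weight) @ P.T
assert np.allclose(gram, np.eye(5), atol=1e-10)

# Corollary 2.2: the leading coefficient satisfies kappa_l^2 = D_{l-1}/D_l
for l in range(1, 5):
    assert abs(p_coeffs(l)[-1] ** 2 - D(l - 1) / D(l)) < 1e-10
```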
### 2.2 Riemann–Hilbert problem associated to OPUC
The family $\\{\pi_{n}\\}$ of orthogonal polynomials has a well-known
characterization in terms of a $2\times 2$ dimensional Riemann–Hilbert
problem, also depending on $n\geq 0$.
###### Riemann–Hilbert Problem 2.3.
The function $Y(z)\coloneqq
Y(n,\theta_{j};z)\colon\mathbb{C}\rightarrow\mathrm{GL}(2,\mathbb{C})$ has the
following properties:
1. (1)
$Y(z)$ is analytic for every $z\in\mathbb{C}\setminus S^{1}$;
2. (2)
$Y(z)$ has continuous boundary values $Y_{\pm}(z)$ as $z$ approaches $S^{1}$
non-tangentially from the left or from the right, and they are
related for all $z\in S^{1}$ through
$Y_{+}(z)=Y_{-}(z)J_{Y}(z),\qquad\text{with}\quad
J_{Y}(z)=\begin{pmatrix}1&z^{-n}\mathrm{e}^{w(z)}\\\ 0&1\end{pmatrix}\\!;$
3. (3)
$Y(z)$ is normalized at $\infty$ as
$Y(z)\sim\Bigg{(}I+\sum_{j=1}^{\infty}\frac{Y_{j}(n,\theta_{j})}{z^{j}}\Bigg{)}z^{n\sigma_{3}},\qquad
z\rightarrow\infty,$
where $\sigma_{3}$ denotes the Pauli matrix
$\sigma_{3}\coloneqq\left(\begin{smallmatrix}1&\hphantom{-}0\\\
0&-1\end{smallmatrix}\right)$.
It is known from [3] that the above Riemann–Hilbert problem, for each $n\geq
0$, admits a unique solution which is explicitly written in terms of the
family $\\{\pi_{n}(z)\\}$. Before stating the result, we introduce the
following notation. For every polynomial $q(z)$ of degree $n$, its reverse
polynomial $q^{*}(z)$ is defined as the polynomial of the same degree given
by
$q^{*}(z)\coloneqq z^{n}\overline{q\big{(}\bar{z}^{-1}\big{)}}.$
For every $L^{p}\big{(}S^{1}\big{)}$ function $f(y)$, its Cauchy
transform $\mathcal{C}f(z)$ is defined for any $z\notin S^{1}$ as
$\left(\mathcal{C}f(y)\right)(z)\coloneqq\frac{1}{2\pi\mathrm{i}}\int_{S^{1}}\frac{f(y)}{y-z}\,\mathrm{d}y.$
###### Remark 2.4.
Notice that the results in [3] on the Riemann–Hilbert characterization of a
family of orthogonal polynomials on the unit circle are a sort of extension of
the results known from [15, 20] for the case of orthogonal polynomials on the
real line.
###### Theorem 2.5.
For every $n\geq 0$, the Riemann–Hilbert Problem 2.3 admits a unique solution
$Y(z)$ that is written as
$Y(z)=\begin{pmatrix}\pi_{n}(z)&\mathcal{C}\big{(}y^{-n}\pi_{n}(y)\mathrm{e}^{w(y)}\big{)}(z)\\\\[2.84526pt]
-\kappa^{2}_{n-1}\pi_{n-1}^{*}(z)&-\kappa_{n-1}^{2}\mathcal{C}\big{(}y^{-n}\pi_{n-1}^{*}(y)\mathrm{e}^{w(y)}\big{)}(z)\end{pmatrix}\\!.$
(2.5)
Moreover, $\det(Y(z))\equiv 1$.
###### Proof.
See [3, Lemma 4.1]. ∎
The solution $Y(z)$ has a symmetry which will be very useful in the following
section.
###### Corollary 2.6.
The unique solution $Y(z)$ of the Riemann–Hilbert Problem 2.3 is such that
$\displaystyle
Y(z)=\sigma_{3}Y(0)^{-1}Y\big{(}z^{-1}\big{)}z^{n\sigma_{3}}\sigma_{3},$ (2.6)
$\displaystyle Y(z)=\overline{Y(\bar{z})}.$ (2.7)
###### Proof.
See [4, Proposition 5.12]. ∎
Notice that the factor $Y(0)=Y(n,\theta_{j};0)$ appearing in equation (2.6)
has a very explicit form by equation (2.5). This will be useful in the
following sections.
###### Lemma 2.7.
For every $n\geq 0$, we have
$Y(0)=Y(n,\theta_{j};0)=\begin{pmatrix}x_{n}&\kappa_{n}^{-2}\\\
-\kappa^{2}_{n-1}&x_{n}\end{pmatrix}\\!,$ (2.8)
where we set $x_{n}\coloneqq\pi_{n}(0)$, and $\kappa_{n}$ is defined
as in equation (2.2). Moreover, we have
$\frac{\kappa_{n-1}^{2}}{\kappa_{n}^{2}}=1-x_{n}^{2},$ (2.9)
and $x_{n}\in\mathbb{R}$.
###### Proof.
The first column of $Y(n;0)$ directly follows from the evaluation in $z=0$ of
$Y(n;z)$ as given in equation (2.5). Indeed, $Y^{11}(n;0)=\pi_{n}(0)$ and
$Y^{21}(n;0)=-\kappa_{n-1}^{2}\pi^{*}_{n-1}(0)$ but we observe that
$\pi^{*}_{n-1}(0)=z^{n-1}\overline{\pi_{n-1}\big{(}\bar{z}^{-1}\big{)}}\big{|}_{z=0}=z^{n-1}\big{(}z^{-(n-1)}+\dots+\overline{\pi_{n-1}(0)}\big{)}\big{|}_{z=0}=1.$
Thus we conclude that $Y^{21}(n;0)=-\kappa_{n-1}^{2}$. As for the second
column of $Y(n;0)$, we first find the $(2,2)$-entry. This is easily
deduced from the symmetry given in (2.6), which in the limit
$z\rightarrow\infty$ gives
$Y(n;0)=\sigma_{3}Y^{-1}(n;0)\sigma_{3},$
thus $Y^{22}(n;0)=Y^{11}(n;0)=\pi_{n}(0)$. Finally, for the entry $(1,2)$ of
$Y(n;0)$, we compute it explicitly using the orthonormality property of the
polynomials $p_{m}(z)$
$\displaystyle Y^{12}(n;0)$ $\displaystyle=\frac{1}{2\pi{\rm
i}}\int_{S^{1}}\frac{\pi_{n}(s)s^{-n}\mathrm{e}^{w(s)}}{s}\,{\rm
d}s=\int_{-\pi}^{\pi}\pi_{n}\big{(}\mathrm{e}^{{\rm
i}\theta}\big{)}\overline{\mathrm{e}^{{\rm i}n\theta}}\mathrm{e}^{w(\mathrm{e}^{{\rm
i}\theta})}\frac{{\rm d}\theta}{2\pi}$
$\displaystyle=\frac{1}{\kappa_{n}^{2}}\int_{-\pi}^{\pi}p_{n}\big{(}\mathrm{e}^{{\rm
i}\theta}\big{)}\overline{p_{n}\big{(}\mathrm{e}^{{\rm
i}\theta}\big{)}}\mathrm{e}^{w(\mathrm{e}^{{\rm i}\theta})}\frac{{\rm
d}\theta}{2\pi}=\frac{1}{\kappa_{n}^{2}}.$
Equation (2.9) follows from the fact that $\det(Y(n,\theta_{j};z))=1$
identically in $z$, and so in particular at $z=0$: writing
$Y(n,\theta_{j};0)$ as in equation (2.8) and taking the determinant yields relation (2.9).
Finally, the fact that $x_{n}$ is real follows from the entry $(1,1)$ of
equation (2.7) together with equation (2.5). ∎
At this point, we are already able to express the ratio of Toeplitz
determinants in terms of the constant term of the monic orthogonal
polynomials, as follows.
###### Corollary 2.8.
For every $n\geq 1$, the Toeplitz determinants $D_{n}$ satisfy the recursion
relation
$\frac{D_{n-2}D_{n}}{D_{n-1}^{2}}=1-x_{n}^{2}.$ (2.10)
###### Proof.
Putting together equation (2.9) with equation (2.4) (for two consecutive
integers) we obtain the recursion relation (2.10). ∎
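As a sanity check of this recursion (ours, not part of the paper's argument), it can be verified numerically for a concrete symbol, here $w\big{(}\mathrm{e}^{{\rm i}\theta}\big{)}=\mathrm{e}^{2t\cos\theta}$. Since the definition (1.3) of $D_{n}$ lies outside this excerpt, we assume the standard convention in which $D_{n}$ is the determinant of the $(n+1)\times(n+1)$ Toeplitz matrix of Fourier coefficients; in terms of the $m\times m$ determinants `D(m)` used below, (2.10) then reads $D(n-1)D(n+1)/D(n)^{2}=1-x_{n}^{2}$.

```python
import numpy as np

# Illustrative symbol on the unit circle (our choice, not the paper's
# fixed weight): w(e^{i theta}) = exp(2 t cos theta).
t = 0.7
M = 4096
theta = 2 * np.pi * np.arange(M) / M
w = np.exp(2 * t * np.cos(theta))

def c(m):
    # Fourier coefficient c_m = (1/2pi) int e^{-i m theta} w dtheta,
    # via the (spectrally accurate) trapezoid rule on the grid above
    return np.mean(np.exp(-1j * m * theta) * w).real

def D(m):
    # m x m Toeplitz determinant det(c_{j-k}); the empty determinant is 1
    if m == 0:
        return 1.0
    return np.linalg.det(np.array([[c(j - k) for k in range(m)]
                                   for j in range(m)]))

def x(n):
    # x_n = pi_n(0): solve the orthogonality conditions
    # <pi_n, z^k> = 0, k = 0, ..., n-1, for the monic OPUC coefficients
    A = np.array([[c(k - j) for j in range(n)] for k in range(n)])
    b = np.array([c(k - n) for k in range(n)])
    return np.linalg.solve(A, -b)[0]

for n in range(1, 6):
    assert abs(D(n - 1) * D(n + 1) / D(n) ** 2 - (1 - x(n) ** 2)) < 1e-10
```

For the real symmetric weight chosen here the $x_{n}$ come out real, consistent with Lemma 2.7.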
We emphasize again that the symbol $\varphi(z)$ actually depends on the
natural parameter $N$; hence the Toeplitz determinants $D_{n}$, $n\geq 1$, of (1.3)
depend on $N$, and so does $x_{n}=\pi_{n}(0)$, $n\geq 1$ (since it is the constant
coefficient of the $n$-th monic OPUC w.r.t. the $N$-dependent measure (2.1),
(1.1)). The $N$-dependence of the latter will be emphasized in the following
section, where $x_{n}$ is proved to be a solution of the $N$-th higher order
generalization of the discrete Painlevé II equation.
We consider now the following matrix-valued function
$\Psi(n,\theta_{j};z)\coloneqq\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y(n,\theta_{j};z)\begin{pmatrix}1&0\\\
0&z^{n}\end{pmatrix}\mathrm{e}^{w(z)\frac{\sigma_{3}}{2}}.$ (2.11)
Thanks to the properties of $Y(z;n,\theta_{j})$ from the Riemann–Hilbert
Problem 2.3 one can prove that $\Psi(n,\theta_{j};z)$ satisfies the following
Riemann–Hilbert problem.
###### Riemann–Hilbert Problem 2.9.
The function
$\Psi(z)\coloneqq\Psi(n,\theta_{j};z)\colon\mathbb{C}\rightarrow\mathrm{GL}(2,\mathbb{C})$
has the following properties:
1. (1)
$\Psi(z)$ is analytic for every
$z\in\mathbb{C}\setminus\big{(}S^{1}\cup\\{0\\}\big{)}$;
2. (2)
$\Psi(z)$ has continuous boundary values $\Psi_{\pm}(z)$ while approaching
non-tangentially $S^{1}$ either from the left or from the right, and they are
related for all $z\in S^{1}$ through
$\Psi_{+}(z)=\Psi_{-}(z)J_{0},\qquad J_{0}=\begin{pmatrix}1&1\\\
0&1\end{pmatrix}\\!;$ (2.12)
3. (3)
$\Psi(z)$ has asymptotic behavior near $0$ given by
$\Psi(z)\sim\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y(0)\Bigg{(}I+\sum_{j=1}^{\infty}z^{j}\widetilde{Y}_{j}(n)\Bigg{)}\begin{pmatrix}1&0\\\
0&z^{n}\end{pmatrix}\mathrm{e}^{w(z)\frac{\sigma_{3}}{2}},\qquad z\rightarrow
0;$ (2.13)
4. (4)
$\Psi(z)$ has asymptotic behavior near $\infty$ given by
$\Psi(z)\sim\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}\Bigg{(}I+\sum_{j=1}^{\infty}\frac{Y_{j}(n)}{z^{j}}\Bigg{)}\begin{pmatrix}z^{n}&0\\\
0&1\end{pmatrix}\mathrm{e}^{w(z)\frac{\sigma_{3}}{2}},\qquad|z|\rightarrow\infty.$
(2.14)
###### Proposition 2.10.
The function $\Psi(n,\theta_{j};z)$ defined in (2.11) solves the
Riemann–Hilbert Problem 2.9.
###### Proof.
The analyticity condition and the asymptotic expansions at $0$, $\infty$ given
in (2.13), (2.14) follow directly from the definition (2.11) and the fact
that $Y(z)$ solves the Riemann–Hilbert Problem 2.3. Condition (2.12) follows
from direct computation
$\displaystyle\Psi(z)_{+}$ $\displaystyle=\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y_{+}(z)\begin{pmatrix}1&0\\\
0&z^{n}\end{pmatrix}\mathrm{e}^{w(z)\frac{\sigma_{3}}{2}}=\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y_{-}(z)J_{Y}(z)\begin{pmatrix}1&0\\\
0&z^{n}\end{pmatrix}\mathrm{e}^{w(z)\frac{\sigma_{3}}{2}}$
$\displaystyle=\Psi_{-}(z)\begin{pmatrix}1&0\\\
0&z^{-n}\end{pmatrix}\mathrm{e}^{-w(z)\frac{\sigma_{3}}{2}}\begin{pmatrix}1&z^{-n}\mathrm{e}^{w(z)}\\\
0&1\end{pmatrix}\begin{pmatrix}1&0\\\
0&z^{n}\end{pmatrix}\mathrm{e}^{w(z)\frac{\sigma_{3}}{2}}=\Psi_{-}(z)\begin{pmatrix}1&1\\\
0&1\end{pmatrix}\\!.\\!\\!\\!\\!$ ∎
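The chain of conjugations above is short enough to verify symbolically. The sketch below (ours) checks that the jump read off from the display, $J_{Y}=\begin{pmatrix}1&z^{-n}\mathrm{e}^{w(z)}\\ 0&1\end{pmatrix}$, is indeed conjugated into $J_{0}$.

```python
import sympy as sp

z, n, w = sp.symbols('z n w', positive=True)

# jump of Y appearing in the computation above
J_Y = sp.Matrix([[1, z**(-n) * sp.exp(w)], [0, 1]])

# e^{+w sigma_3/2} and e^{-w sigma_3/2} as diagonal matrices
E_plus = sp.diag(sp.exp(w / 2), sp.exp(-w / 2))
E_minus = sp.diag(sp.exp(-w / 2), sp.exp(w / 2))

# Psi_+ = Psi_- diag(1, z^{-n}) e^{-w s3/2} J_Y diag(1, z^n) e^{w s3/2}
J_0 = sp.diag(1, z**(-n)) * E_minus * J_Y * sp.diag(1, z**n) * E_plus

# the result should be the constant jump [[1, 1], [0, 1]]
for got, want in zip(list(J_0), [1, 1, 0, 1]):
    assert sp.simplify(got - want) == 0
```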
### 2.3 A linear differential system for $\boldsymbol{\Psi(z)}$
From the solution of the Riemann–Hilbert Problem 2.9, we deduce the following
equations (from now on we omit the dependence of $\Psi$ on the $\theta_{j}$,
which should be regarded as parameters rather than actual variables like
$n$, $z$).
###### Proposition 2.11.
We have
$\Psi(n+1;z)=U(n;z)\Psi(n;z),\qquad\partial_{z}\Psi(n;z)=T(n;z)\Psi(n;z)$
(2.15)
with
$U(n;z):=\begin{pmatrix}z+x_{n}x_{n+1}&-x_{n+1}\\\\[2.84526pt]
-\big{(}1-x_{n+1}^{2}\big{)}x_{n}&1-x_{n+1}^{2}\end{pmatrix}=\sigma_{+}z+U_{0}(n),$
(2.16)
where $\sigma_{+}\coloneqq\begin{pmatrix}1&0\\\ 0&0\end{pmatrix}$ and
$T(n;z):=T_{1}(n)z^{N-1}+T_{2}(n)z^{N-2}+\dots+T_{2N+1}(n)z^{-N-1}=\sum_{k=1}^{2N+1}T_{k}z^{N-k},$
(2.17)
where
$T_{1}(n)=\frac{\theta_{N}}{2}\sigma_{3}.$ (2.18)
###### Remark 2.12.
The coefficients $(T_{i}(n))_{2\leqslant i\leqslant 2N+1}$ defined in equation
(2.17) will be computed in Section 3.
###### Proof.
We first prove the first equation. We start by defining the quantity
$U(n;z)\coloneqq\Psi(n+1;z)\Psi^{-1}(n;z).$
Since the jump condition for $\Psi(z)$ (2.12) is independent of $n$, $U(n;z)$
is analytic everywhere. Plugging in equation (2.14), we have the expansion at
$\infty$
$\displaystyle U(n;z)={}$ $\displaystyle\begin{pmatrix}1&0\\\
0&\kappa_{n+1}^{-2}\end{pmatrix}\bigg{(}I+\frac{Y_{1}(n+1)}{z}+\mathcal{O}\big{(}z^{-2}\big{)}\bigg{)}z^{(n+1)\sigma_{3}}\begin{pmatrix}1&0\\\
0&z\end{pmatrix}z^{-n\sigma_{3}}$
$\displaystyle\times\bigg{(}I-\frac{Y_{1}(n)}{z}+\mathcal{O}\big{(}z^{-2}\big{)}\bigg{)}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}\\!,$
from which we deduce, by Liouville's theorem, that $U(n;z)$ is a polynomial in $z$ of degree $1$.
Moreover, its matrix-valued coefficients are written as
$U(n;z)=z\begin{pmatrix}1&0\\\
0&0\end{pmatrix}+\underbracket{\begin{pmatrix}1&0\\\
0&\kappa_{n+1}^{-2}\end{pmatrix}Y(n+1;0)\begin{pmatrix}1&0\\\
0&0\end{pmatrix}Y^{-1}(n;0)\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}}_{=U_{0}(n)}.$
Doing the computation and using equation (2.8), we obtain
$\displaystyle U_{0}(n)$
$\displaystyle=\begin{pmatrix}Y^{11}(n+1;0)Y^{22}(n;0)&-\kappa_{n}^{2}Y^{11}(n+1;0)Y^{12}(n;0)\\\
\kappa_{n+1}^{-2}Y^{21}(n+1;0)Y^{22}(n,0)&-Y^{21}(n+1;0)Y^{12}(n;0)\end{pmatrix}$
$\displaystyle=\begin{pmatrix}x_{n+1}x_{n}&-x_{n+1}\\\
-\big{(}1-x_{n+1}^{2}\big{)}x_{n}&1-x_{n+1}^{2}\end{pmatrix}\\!.$
As for the second equation, we define
$T(n;z)\coloneqq\partial_{z}\Psi(n;z)\Psi^{-1}(n;z)$. From the asymptotic
behavior of $\Psi(n;z)$ at $0$ and $\infty$, we can deduce that $T(n;z)$ is a
meromorphic function in $z$ with behavior at $\infty$ described by
$T(n;z)\sim\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}\bigg{(}I+\frac{Y_{1}(n)}{z}+O\big{(}z^{-2}\big{)}\bigg{)}\frac{V^{\prime}(z)}{2}\sigma_{3}\bigg{(}I-\frac{Y_{1}(n)}{z}+O\big{(}z^{-2}\big{)}\bigg{)}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}$
(polynomial behavior of degree $N-1$) while at $0$ its behavior is described
by
$\displaystyle T(n;z)\sim\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y(n,0)\big{(}I+\tilde{Y}_{1}(n)z+O\big{(}z^{2}\big{)}\big{)}$
$\displaystyle\hphantom{T(n;z)\sim}{}\times\frac{-V^{\prime}(z^{-1})}{2z^{2}}\sigma_{3}\big{(}I-\tilde{Y}_{1}(n)z+O\big{(}z^{2}\big{)}\big{)}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}\\!,$
i.e., there is a pole of order $N+1$. In conclusion, we can write
$T(n;z)=\frac{\theta_{N}}{2}\sigma_{3}z^{N-1}+T_{2}(n)z^{N-2}+\dots+T_{2N+1}(n)z^{-N-1}.$
∎
Moreover, thanks to the symmetry for the solution of the Riemann–Hilbert
problem $Y(z)$ stated in (2.6), we have that the coefficient matrix $T(n;z)$
satisfies a symmetry property.
###### Proposition 2.13.
$T(n;z)$ has the following symmetry:
$T(n;z^{-1})=-z^{2}\big{(}K(n)T(n;z)K(n)^{-1}-nz^{-1}I_{2}\big{)}$ (2.19)
with $K(n)\coloneqq\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y(n;0)\sigma_{3}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}$.
###### Remark 2.14.
Notice that for all $n$, the matrix $K(n)$ is such that $K(n)^{-1}=K(n)$, since we
have the identity $x_{n}^{2}+\frac{\kappa_{n-1}^{2}}{\kappa_{n}^{2}}=1$.
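This involution is a two-line symbolic computation; the sketch below (ours) builds $K(n)$ from the explicit form (2.8) of $Y(n;0)$ and checks $K(n)^{2}=I$ under the identity (2.9).

```python
import sympy as sp

# km stands for kappa_{n-1}
x, kn, km = sp.symbols('x_n kappa_n kappa_nm1', positive=True)

Y0 = sp.Matrix([[x, kn**(-2)], [-km**2, x]])      # Y(n;0) as in (2.8)
s3 = sp.diag(1, -1)
K = sp.diag(1, kn**(-2)) * Y0 * s3 * sp.diag(1, kn**2)

K2 = (K * K).applyfunc(sp.expand)
assert K2[0, 1] == 0 and K2[1, 0] == 0            # off-diagonal vanishes
# the diagonal reduces to 1 using (2.9): x_n^2 + kappa_{n-1}^2/kappa_n^2 = 1
assert sp.simplify(K2[0, 0].subs(km**2, (1 - x**2) * kn**2)) == 1
assert sp.simplify(K2[1, 1].subs(km**2, (1 - x**2) * kn**2)) == 1
```

The same substitution shows $\det K(n)=-1$, so $K(n)$ is an involutive reflection.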
###### Proof.
On the one hand,
$\partial_{z}\big{(}\Psi\big{(}n;z^{-1}\big{)}\big{)}=-\dfrac{1}{z^{2}}T\big{(}n;z^{-1}\big{)}\Psi\big{(}n;z^{-1}\big{)}.$
On the other hand, using the symmetry (2.6) for $Y$ we deduce the following
symmetry for $\Psi$:
$\Psi\big{(}n;z^{-1}\big{)}=z^{-n}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y(0)\sigma_{3}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}\Psi(n;z)\sigma_{3}.$
This previous equation leads to
$\partial_{z}\big{(}\Psi\big{(}n;z^{-1}\big{)}\big{)}=z^{-n}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y(0)\sigma_{3}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}\partial_{z}\Psi(n;z)\sigma_{3}-nz^{-1}\Psi\big{(}n;z^{-1}\big{)}.$
Then
$\displaystyle T\big{(}n;z^{-1}\big{)}={}$
$\displaystyle-z^{2}\bigg{(}\\!\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}Y(0)\sigma_{3}\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}T(n;z)\begin{pmatrix}1&0\\\
0&\kappa_{n}^{-2}\end{pmatrix}\sigma_{3}Y(0)^{-1}$
$\displaystyle\times\begin{pmatrix}1&0\\\
0&\kappa_{n}^{2}\end{pmatrix}-nz^{-1}I_{2}\bigg{)}.$ ∎
The symmetry (2.19) is reflected in the coefficients $T_{k}(n)$, $k=1,\dots,2N+1$,
as written below.
###### Corollary 2.15.
The coefficients $T_{k}(n)$, $k=1,\dots,2N+1$ satisfy
$\displaystyle T_{j}(n)=-K(n)T_{2N+2-j}(n)K(n)^{-1},\qquad j=1,\dots,N,$
(2.20) $\displaystyle T_{N+1}(n)=-K(n)T_{N+1}(n)K(n)^{-1}+nI_{2}.$ (2.21)
###### Proof.
Indeed, by replacing the exact shape of $T(n;z)$ in equation (2.19), we have
$\displaystyle\begin{split}\sum_{k=1}^{2N+1}T_{k}(n)z^{-N+k}&=T\big{(}n;z^{-1}\big{)}=-z^{2}\Bigg{(}\sum_{k=1}^{2N+1}KT_{k}(n)K^{-1}z^{N-k}-nz^{-1}I_{2}\Bigg{)}\\\
&=-\sum_{k=1}^{2N+1}KT_{k}(n)K^{-1}z^{N+2-k}+nzI_{2}\\\
&=-\sum_{j=1}^{2N+1}KT_{2N+2-j}(n)K^{-1}z^{-N+j}+nzI_{2},\end{split}$
so looking at the powers $z^{-N+j}$ for $j=1,\dots,N$, we get equation (2.20)
and for $j=N+1$, we get equation (2.21). ∎
Notice first that, by equations (2.20), once the first $N+1$ coefficients of
$T(n;z)$ are known, the remaining ones follow. Second, notice that
the coefficient $T_{N+1}(n)$ plays a special role, since it satisfies the
additional equation (2.21).
### 2.4 Relation with the Cresswell–Joshi Lax pair
To conclude this section, we describe how the Lax pair (2.15) is related to
the one of the discrete Painlevé II hierarchy (1.16) originally introduced by
Cresswell and Joshi in [11].
###### Definition 2.16.
A Lax pair for the discrete Painlevé II hierarchy is given by a pair of
matrices $(L_{n}(z),M_{n}(z))$, defining the coefficients of a discrete-
differential system for a matrix-valued function $\Phi(n;z)$ of the form
$\displaystyle\Phi(n+1;z)=\begin{pmatrix}z&x_{n}\\\
x_{n}&1/z\end{pmatrix}\Phi(n;z)=L_{n}(z)\Phi(n;z),$ (2.22)
$\displaystyle\dfrac{\partial}{\partial z}\Phi(n;z)=M_{n}(z)\Phi(n;z),$ (2.23)
with the property that
$M_{n}(z)=\begin{pmatrix}A_{n}(z)&B_{n}(z)\\\ C_{n}(z)&-A_{n}(z)\end{pmatrix}$
where $A_{n}$, $B_{n}$ and $C_{n}$ are rational in $z$ (and depend also on
$N$).
###### Remark 2.17.
Specifically, in [11, Section 3.1], the authors proved that the compatibility
condition of the system of equations (2.22) and (2.23) determines the
coefficients of the matrix $M_{n}(z)$, leaving in turn only one discrete
equation of order $2N$ for $x_{n}$. This is defined as the $N$-th member of
the discrete Painlevé II hierarchy.
We establish now a link between this Lax Pair and the system (2.15) we
obtained starting from the OPUC. We define
$\Phi(n;z):=\sigma_{3}\begin{pmatrix}z^{-n+3/2}&0\\\
0&z^{-n+1/2}\end{pmatrix}\begin{pmatrix}1&0\\\
-x_{n-1}&1\end{pmatrix}\Psi\big{(}n-1;z^{2}\big{)}.$
###### Proposition 2.18.
$\Phi(n;z)$ defined as above satisfies the system of equations (2.22) and
(2.23).
###### Proof.
First we compute the discrete equation for $\Phi(n;z)$. From the definition,
we have
$\displaystyle\Phi(n+1;z)=\sigma_{3}\begin{pmatrix}z^{-n+1/2}&0\\\
0&z^{-n-1/2}\end{pmatrix}\begin{pmatrix}1&0\\\
-x_{n}&1\end{pmatrix}\Psi\big{(}n;z^{2}\big{)}.$
According to equation (2.15),
$\displaystyle\Phi(n+1;z)={}$
$\displaystyle\sigma_{3}\begin{pmatrix}z^{-n+1/2}&0\\\
0&z^{-n-1/2}\end{pmatrix}\begin{pmatrix}1&0\\\
-x_{n}&1\end{pmatrix}U\big{(}n-1;z^{2}\big{)}\Psi\big{(}n-1;z^{2}\big{)}$
$\displaystyle={}$ $\displaystyle\sigma_{3}\begin{pmatrix}z^{-n+1/2}&0\\\
0&z^{-n-1/2}\end{pmatrix}\begin{pmatrix}1&0\\\
-x_{n}&1\end{pmatrix}U\big{(}n-1;z^{2}\big{)}\begin{pmatrix}1&0\\\
x_{n-1}&1\end{pmatrix}$ $\displaystyle\times\begin{pmatrix}z^{n-3/2}&0\\\
0&z^{n-1/2}\end{pmatrix}\sigma_{3}\Phi(n;z)=\begin{pmatrix}z&x_{n}\\\
x_{n}&1/z\end{pmatrix}\Phi(n;z).$
Now we compute the derivative with respect to $z$.
Defining $M_{n}(z):=\big{(}\frac{\partial}{\partial
z}\Phi(n;z)\big{)}\Phi(n;z)^{-1}$, similar computations lead to
$\displaystyle M_{n}(z)={}$ $\displaystyle
z^{-1}\sigma_{3}\begin{pmatrix}-n+3/2&0\\\
0&-n+1/2\end{pmatrix}\sigma_{3}+2z\sigma_{3}\begin{pmatrix}z&0\\\
0&1\end{pmatrix}\begin{pmatrix}1&0\\\ -x_{n-1}&1\end{pmatrix}$
$\displaystyle\times T\big{(}n-1;z^{2}\big{)}\begin{pmatrix}1&0\\\
x_{n-1}&1\end{pmatrix}\begin{pmatrix}z^{-1}&0\\\ 0&1\end{pmatrix}\sigma_{3}.$
(2.24)
We need to prove two things: first, that the trace of $M_{n}(z)$ vanishes, and
second, that the entries of $M_{n}(z)$ are rational in $z$.
For the trace of $M_{n}(z)$ we use the fact that
$\operatorname{Tr}(T(n;z))=nz^{-1}$. Then
$\operatorname{Tr}(M_{n}(z))=(-2n+2)z^{-1}+2z\operatorname{Tr}\big{(}T\big{(}n-1;z^{2}\big{)}\big{)}=0.$
From the expression (2.17) of $T(n;z)$ and equation (2.24), we conclude that the
entries of $M_{n}(z)$ are rational in $z$. ∎
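The discrete part of this proof boils down to the $2\times 2$ identity $L_{n}(z)=\sigma_{3}D_{n+1}(z)A_{n}U\big{(}n-1;z^{2}\big{)}A_{n-1}^{-1}D_{n}(z)^{-1}\sigma_{3}$, with $D_{m}(z)=\operatorname{diag}\big{(}z^{-m+3/2},z^{-m+1/2}\big{)}$ and $A_{m}=\begin{pmatrix}1&0\\ -x_{m}&1\end{pmatrix}$, which holds for arbitrary values of the $x$'s and can be checked symbolically. The sketch below (helper names are ours) does so with sympy.

```python
import sympy as sp

z = sp.symbols('z', positive=True)
n = sp.symbols('n')
xm, x0 = sp.symbols('x_nm1 x_n')      # stand-ins for x_{n-1}, x_n

s3 = sp.diag(1, -1)
half = sp.Rational(1, 2)

def Dm(m):
    # diag(z^{-m+3/2}, z^{-m+1/2}) from the definition of Phi(m;z)
    return sp.diag(z**(-m + 3 * half), z**(-m + half))

def A(xx):
    return sp.Matrix([[1, 0], [-xx, 1]])

def U(xa, xb, w):
    # U(m;w) from (2.16), with x_m = xa, x_{m+1} = xb
    return sp.Matrix([[w + xa * xb, -xb],
                      [-(1 - xb**2) * xa, 1 - xb**2]])

# D_n(z)^{-1}, written out explicitly
Dn_inv = sp.diag(z**(n - 3 * half), z**(n - half))

L = s3 * Dm(n + 1) * A(x0) * U(xm, x0, z**2) * A(xm).inv() * Dn_inv * s3
L = L.applyfunc(lambda e: sp.powsimp(sp.expand(e)))

target = sp.Matrix([[z, x0], [x0, 1 / z]])    # L_n(z) from (2.22)
for e in list(L - target):
    assert sp.simplify(e) == 0
```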
## 3 From the Lax Pair to the discrete Painlevé II hierarchy
In this section, we study the compatibility condition associated to the linear
system (2.15). This first allows us to reconstruct completely the matrix
$T(n;z)$, and then to obtain an explicit discrete equation of order $2N$ for
$x_{n}$, which corresponds to equation (1.10).
### 3.1 The symmetry in the compatibility condition
We study the consequences of the symmetry (2.19) for the matrix $T(n;z)$ on
the compatibility condition for the Lax pair introduced in Proposition 2.11.
More precisely, we show that, thanks to the symmetry (2.19), the compatibility
condition contains an overdetermined system of equations.
We recall that the compatibility condition reads as
$\sigma_{+}-T(n+1;z)U(n;z)+U(n;z)T(n;z)=0,$ (3.1)
where we have to replace $U(n;z)$ as in (2.16) and $T(n;z)$ as
$T(n;z)=\sum_{k=1}^{N+1}T_{k}(n)z^{N-k}+\sum_{k=N+2}^{2N+1}-K(n)T_{2N+2-k}(n)K(n)^{-1}z^{N-k},$
(3.2)
and with the coefficient $T_{N+1}(n)$ satisfying equation (2.21).
###### Lemma 3.1.
The compatibility condition (3.1), for $U(n;z)$, $T(n;z)$ as described above,
corresponds to the following system
$\displaystyle T_{1}(n+1)\sigma_{+}-\sigma_{+}T_{1}(n)=0,$ $\displaystyle
T_{j+1}(n+1)\sigma_{+}-\sigma_{+}T_{j+1}(n)+T_{j}(n+1)U_{0}(n)-U_{0}(n)T_{j}(n)=\sigma_{+}\delta_{j,N},\qquad
j=1,\dots,N,$ $\displaystyle T_{N+1}(n)=-K(n)T_{N+1}(n)K(n)^{-1}+nI_{2}.$
###### Proof.
The compatibility condition (3.1), after replacing $U(n;z)$, $T(n;z)$ by the
prescribed forms, involves powers of $z$ from $N$ to $-N-1$. Imposing that the
coefficient of each of these powers of $z$ vanishes identically, we obtain the
following equations:
$\displaystyle z^{N}\colon$ $\displaystyle
T_{1}(n+1)\sigma_{+}-\sigma_{+}T_{1}(n)=0,$ (3.3) $\displaystyle z^{N-j},$
$\displaystyle\\!\\!\\!\\!\\!j=1,\dots,N\colon$ $\displaystyle
T_{j+1}(n+1)\sigma_{+}-\sigma_{+}T_{j+1}(n)+T_{j}(n+1)U_{0}(n)-U_{0}(n)T_{j}(n)=\sigma_{+}\delta_{j,N},
(3.4) $\displaystyle z^{-1}\colon$ $\displaystyle
T_{N+1}(n+1)U_{0}(n)-U_{0}(n)T_{N+1}(n)-K(n+1)T_{N}(n+1)K(n+1)^{-1}\sigma_{+}$
$\displaystyle\qquad{}+\sigma_{+}K(n)T_{N}(n)K(n)^{-1}=0,$ (3.5)
$\displaystyle z^{N-j},$ $\displaystyle\\!\\!\\!\\!\\!j=N+2,\dots,2N\colon\ $
$\displaystyle-K(n+1)T_{2N+1-j}(n+1)K(n+1)^{-1}\sigma_{+}+\sigma_{+}K(n)T_{2N+1-j}(n)K(n)^{-1}$
$\displaystyle\qquad{}+U_{0}(n)K(n)T_{2N+2-j}(n)K(n)^{-1}$
$\displaystyle\qquad{}-K(n+1)T_{2N+2-j}(n+1)K(n+1)^{-1}U_{0}(n)=0,$ (3.6)
$\displaystyle z^{-N-1}\colon\ $
$\displaystyle-K(n+1)T_{1}(n+1)K(n+1)^{-1}U_{0}(n)+U_{0}(n)K(n)T_{1}(n)K(n)^{-1}=0.$
(3.7)
With the change of indices $k=2N+1-j$, so that $k=N-1,\dots,1$ as
$j=N+2,\dots,2N$, equation (3.6) becomes:
$\displaystyle-K(n+1)T_{k}(n+1)K(n+1)^{-1}\sigma_{+}+\sigma_{+}K(n)T_{k}(n)K(n)^{-1}$
$\displaystyle\qquad{}-K(n+1)T_{k+1}(n+1)K(n+1)^{-1}U_{0}(n)+U_{0}(n)K(n)T_{k+1}(n)K(n)^{-1}=0.$
(3.8)
We now show that equations (3.5), (3.6), (3.7) are equivalent to the first
ones (3.3), (3.4) thanks to the symmetry of the coefficients $T_{k}(n)$ given
in (2.20) together with the equation for $T_{N+1}(n)$, already obtained in
(2.21).
To start with, we notice the following relations:
$\displaystyle\widetilde{U}_{0}(n)\coloneqq
K(n+1)^{-1}U_{0}(n)K(n)=\sigma_{+},$
$\displaystyle\widetilde{\sigma}(n)\coloneqq
K(n+1)^{-1}\sigma_{+}K(n)=U_{0}(n),$
deduced by using multiple times relation (2.9), namely
$x_{n}^{2}+\frac{\kappa_{n-1}^{2}}{\kappa_{n}^{2}}=1$.
1.
Let us consider first the equation (3.7) obtained from the coefficient of the
term $z^{-N-1}$. Multiplying by $K(n+1)^{-1}$ to the left and by $K(n)$ to the
right, we obtain
$-T_{1}(n+1)\widetilde{U}_{0}(n)+\widetilde{U}_{0}(n)T_{1}(n)=0,$
that is exactly (3.3).
2.
Let us consider now equations (3.8), obtained from the coefficients of the
term $z^{N-j}$, $j=N+2,\dots,2N$. By multiplying by $K(n+1)^{-1}$ to the left
and by $K(n)$ to the right as before, we obtain the equations for
$k=N-1,\dots,1$
$-T_{k}(n+1)\widetilde{\sigma}(n)+\widetilde{\sigma}(n)T_{k}(n)-T_{k+1}(n+1)\widetilde{U}_{0}(n)+\widetilde{U}_{0}(n)T_{k+1}(n)=0,$
which is exactly equation (3.4) for $j=1,\dots,N-1$.
3.
The last equation is (3.5) obtained from the coefficient of the term $z^{-1}$.
We multiply, again, by $K(n+1)^{-1}$ to the left and by $K(n)$ to the right,
and we get
$\displaystyle
K(n+1)^{-1}T_{N+1}(n+1)K(n+1)\widetilde{U}_{0}(n)-\widetilde{U}_{0}(n)K(n)^{-1}T_{N+1}(n)K(n)$
$\displaystyle\qquad{}-T_{N}(n+1)\widetilde{\sigma}(n)+\widetilde{\sigma}(n)T_{N}(n)=0,$
and then we use the symmetry for the term $T_{N+1}(n)$, namely equation
(2.21) (which indeed has not been used until now):
$-T_{N+1}(n+1)\widetilde{U}_{0}(n)+\widetilde{U}_{0}(n)T_{N+1}(n)+\widetilde{U}_{0}(n)-T_{N}(n+1)\widetilde{\sigma}(n)+\widetilde{\sigma}(n)T_{N}(n)=0.$
And this is again exactly equation (3.4), for $j=N$.
Thus the compatibility condition (3.1) is reduced to the equations in the
statement, namely equations (3.3), (3.4), (2.21). ∎
Now, we use equations (3.3), (3.4), together with the initial condition for
$T_{1}(n)$ given in (2.18), to recursively find the coefficients $T_{k}(n)$,
for $k=1,\dots,N+1$, in terms of the $x_{n\pm j}$, $j=1,\dots,N$. Once the
coefficients $T_{k}(n)$ are computed in this way, the symmetry for $T_{N+1}(n)$,
i.e., equation (2.21), provides an actual
discrete equation for $x_{n}$ of order $2N$; this is what we call the higher
order analogue of the discrete Painlevé II equation (it coincides, for $N=1,2$,
with the equations that already appeared in [1, 6, 11]).
### 3.2 The recursion
In this subsection, we explain how equations (3.3), (3.4) resulting from the
compatibility condition (3.1) can be used to find recursively (in $k$) all the
coefficients $T_{k}(n)$, $k=1,\dots,N+1$ of $T(n;z)$.
###### Lemma 3.2.
For every $i=1,\dots,N$, starting from the initial condition (2.18)
$T_{1}(n)=\frac{\theta_{N}}{2}\sigma_{3}$, we have
$\displaystyle
T_{i+1,12}(n)=x_{n+1}\big{(}2\Delta^{-1}+I\big{)}\bigg{(}\dfrac{x_{n+1}}{v_{n+1}}T_{i,21}(n+1)-x_{n}T_{i,12}(n)\bigg{)}+v_{n+1}T_{i,12}(n+1)$
$\displaystyle\hphantom{T_{i+1,12}(n)=}-x_{n}x_{n+1}T_{i,12}(n),$
$\displaystyle
T_{i+1,21}(n+1)=x_{n}v_{n+1}\big{(}2\Delta^{-1}+I\big{)}\bigg{(}\dfrac{x_{n+1}}{v_{n+1}}T_{i,21}(n+1)-x_{n}T_{i,12}(n)\bigg{)}+v_{n+1}T_{i,21}(n)$
$\displaystyle\hphantom{T_{i+1,21}(n+1)=}-x_{n}x_{n+1}T_{i,21}(n+1),$
$\displaystyle
T_{i+1,11}(n)=-T_{i+1,22}(n)+n\delta_{i,N}=\Delta^{-1}\bigg{(}\dfrac{-x_{n+1}}{v_{n+1}}T_{i+1,21}(n+1)+x_{n}T_{i+1,12}(n)\bigg{)}+n\delta_{i,N},$
where
$\displaystyle\Delta\colon\ T_{i}(n)\to T_{i}(n+1)-T_{i}(n),$ (3.9)
$\displaystyle v_{n}\coloneqq 1-x_{n}^{2}.$ (3.10)
###### Proof.
We rewrite equations (3.3), (3.4) for $i=1,\dots,N$, entry by entry. For the
first one, we have
$\begin{cases}T_{1,11}(n+1)-T_{1,11}(n)=0,\\\
T_{1,12}(n)=T_{1,21}(n+1)=0.\end{cases}$
This is satisfied by $T_{1}(n)$ given in (2.18). For the second one, for any
$1\leqslant i\leqslant N$ we have the four equations:
$\displaystyle
T_{i+1,11}(n+1)-T_{i+1,11}(n)=-T_{i,11}(n+1)x_{n}x_{n+1}+T_{i,12}(n+1)\big{(}1-x_{n+1}^{2}\big{)}x_{n}$
$\displaystyle\hphantom{T_{i+1,11}(n+1)-T_{i+1,11}(n)=}{}+x_{n}x_{n+1}T_{i,11}(n)-x_{n+1}T_{i,21}(n)+\delta_{i,N},$
$\displaystyle
T_{i+1,12}(n)=-x_{n+1}T_{i,11}(n+1)+T_{i,12}(n+1)\big{(}1-x_{n+1}^{2}\big{)}-x_{n}x_{n+1}T_{i,12}(n)+x_{n+1}T_{i,22}(n),$
$\displaystyle
T_{i+1,21}(n+1)=-T_{i,21}(n+1)x_{n}x_{n+1}+T_{i,22}(n+1)x_{n}\big{(}1-x_{n+1}^{2}\big{)}$
$\displaystyle\hphantom{T_{i+1,21}(n+1)=}{}-T_{i,11}(n)x_{n}\big{(}1-x_{n+1}^{2}\big{)}+\big{(}1-x_{n+1}^{2}\big{)}T_{i,21}(n),$
$\displaystyle
0=T_{i,21}(n+1)x_{n+1}\\!-T_{i,22}(n+1)\big{(}1\\!-x_{n+1}^{2}\big{)}\\!-x_{n}\big{(}1\\!-x_{n+1}^{2}\big{)}T_{i,12}(n)+T_{i,22}(n)\big{(}1\\!-x_{n+1}^{2}\big{)}.$
Using the notations introduced in (3.9), (3.10), the previous equations with
$1\leqslant i\leqslant N$ become
$\displaystyle\Delta T_{i+1,11}(n)=-x_{n}x_{n+1}\Delta
T_{i,11}(n)+x_{n}v_{n+1}T_{i,12}(n+1)-x_{n+1}T_{i,21}(n)+\delta_{i,N},$ (3.11)
$\displaystyle
T_{i+1,12}(n)=-x_{n+1}T_{i,11}(n+1)\\!+v_{n+1}T_{i,12}(n+1)\\!-x_{n}x_{n+1}T_{i,12}(n)\\!+x_{n+1}T_{i,22}(n),$
(3.12) $\displaystyle
T_{i+1,21}(n+1)=-x_{n}x_{n+1}T_{i,21}(n+1)+x_{n}v_{n+1}T_{i,22}(n+1)-x_{n}v_{n+1}T_{i,11}(n)$
$\displaystyle\hphantom{T_{i+1,21}(n+1)=}{}+v_{n+1}T_{i,21}(n),$ (3.13)
$\displaystyle v_{n+1}\Delta
T_{i,22}(n)=x_{n+1}T_{i,21}(n+1)-x_{n}v_{n+1}T_{i,12}(n).$ (3.14)
From these equations, we see that in order to obtain the diagonal terms, there
is a “discrete integration” to perform, while the off-diagonal terms are
directly determined from the previous ones. Moreover, we can rewrite the four
equation as only two equations involving only the off-diagonal terms. Indeed,
because of $\operatorname{Tr}(T(n;z))=nz^{-1}$, $T_{i,11}(n,z)=-T_{i,22}(n,z)$
for $1\leqslant i\leqslant N$. Thus (3.14) can be written as
$v_{n+1}\Delta T_{i,11}(n)=-x_{n+1}T_{i,21}(n+1)+x_{n}v_{n+1}T_{i,12}(n).$
Formally, $1\leqslant i\leqslant N$,
$T_{i,11}(n)=-T_{i,22}(n)=\Delta^{-1}\bigg{(}\dfrac{-x_{n+1}}{v_{n+1}}T_{i,21}(n+1)+x_{n}T_{i,12}(n)\bigg{)},$
(3.15)
which still holds for $i=N+1$ up to adding the “constant” $n$ on the right
hand side. Using this in (3.12) and (3.13), we obtain:
$\displaystyle
T_{i+1,12}(n)=x_{n+1}\big{(}2\Delta^{-1}+I\big{)}\bigg{(}\dfrac{x_{n+1}}{v_{n+1}}T_{i,21}(n+1)-x_{n}T_{i,12}(n)\bigg{)}+v_{n+1}T_{i,12}(n+1)$
$\displaystyle\hphantom{T_{i+1,12}(n)=}{}-x_{n}x_{n+1}T_{i,12}(n),$
$\displaystyle
T_{i+1,21}(n+1)=x_{n}v_{n+1}\big{(}2\Delta^{-1}+I\big{)}\bigg{(}\dfrac{x_{n+1}}{v_{n+1}}T_{i,21}(n+1)-x_{n}T_{i,12}(n)\bigg{)}+v_{n+1}T_{i,21}(n)$
$\displaystyle\hphantom{T_{i+1,21}(n+1)=}-x_{n}x_{n+1}T_{i,21}(n+1).$ ∎
We notice that, defining the discrete recursion operator
$\displaystyle\mathcal{L}\begin{pmatrix}u_{n}\\\
y_{n}\end{pmatrix}=\begin{pmatrix}x_{n+1}\big{(}2\Delta^{-1}+I\big{)}\bigg{(}\dfrac{x_{n+1}}{v_{n+1}}y_{n}-x_{n}u_{n}\bigg{)}+(v_{n+1}(\Delta+I)-x_{n}x_{n+1})u_{n}\\\\[11.38109pt]
x_{n}v_{n+1}\big{(}2\Delta^{-1}\\!+\\!I\big{)}\bigg{(}\dfrac{x_{n+1}}{v_{n+1}}y_{n}\\!-\\!x_{n}u_{n}\bigg{)}\\!+\\!\big{(}v_{n+1}(\Delta+I)^{-1}\\!-\\!x_{n}x_{n+1}\big{)}y_{n}\end{pmatrix}\\!,$
(3.16)
we can rewrite the two equations for the off-diagonal entries of $T_{i}(n)$
obtained above as
$\begin{pmatrix}T_{i+1,12}(n)\\\
T_{i+1,21}(n+1)\end{pmatrix}=\mathcal{L}\begin{pmatrix}T_{i,12}(n)\\\
T_{i,21}(n+1)\end{pmatrix}\\!,\qquad 1\leqslant i\leqslant N.$ (3.17)
Iterating, we obtain
$\begin{pmatrix}T_{N+1,12}(n)\\\
T_{N+1,21}(n+1)\end{pmatrix}={\mathcal{L}}^{N}\begin{pmatrix}0\\\
0\end{pmatrix}\\!.$ (3.18)
This procedure allows us to construct the whole matrix $T(n;z)$: starting from
the initial condition $T_{1}(n)=\frac{\theta_{N}}{2}\sigma_{3}$ and iterating
the operator $\mathcal{L}$, we obtain the off-diagonal entries of $T(n;z)$, and
we compute the diagonal ones with equation (3.15). Below we apply this method to
find the matrix $T(n;z)$ in the first few cases $N=1,2$.
###### Example 3.3.
In the case $N=1$, the matrix $T(n;z)=T_{1}(n)+T_{2}(n)z^{-1}+T_{3}(n)z^{-2}$.
Knowing $T_{1}(n)$, we only have to find $T_{2}(n)$, using the recurrence
relation given by the compatibility, i.e., equations (3.11), (3.12), (3.13)
for $i=1$. Since $T_{1,12}(n)=T_{1,21}(n)=0$ and
$T_{1,11}(n)=\theta_{1}/2=-T_{1,22}(n)$, we have
$\displaystyle T_{2,11}(n)=n,$ $\displaystyle
T_{2,12}(n)=-x_{n+1}(T_{1,11}(n+1)+T_{1,11}(n))=-\theta_{1}x_{n+1},$
$\displaystyle
T_{2,21}(n+1)=x_{n}v_{n+1}(T_{1,22}(n+1)+T_{1,22}(n))=-\theta_{1}x_{n}v_{n+1},$
and $T_{2,22}(n)=n-T_{2,11}(n)=0$. Moreover, the symmetry (2.20), which pairs the
coefficients of $T(n;z)$ two by two, gives $T_{3}(n)=-K(n)T_{1}(n)K(n)$. Thus the Lax matrix
for $N=1$ is
$T(n;z)=\dfrac{\theta_{1}}{2}\begin{pmatrix}1&\hphantom{-}0\\\
0&-1\end{pmatrix}+\frac{1}{z}\begin{pmatrix}n&-\theta_{1}x_{n+1}\\\
-\theta_{1}v_{n}x_{n-1}&0\end{pmatrix}+\frac{\theta_{1}}{z^{2}}\begin{pmatrix}\frac{1}{2}-x_{n}^{2}&x_{n}\\\
v_{n}x_{n}&x_{n}^{2}-\frac{1}{2}\end{pmatrix}\\!.$
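As a numerical cross-check (ours, not from the paper), the compatibility condition (3.1) with this $N=1$ Lax matrix is satisfied exactly along an orbit of the scalar equation one reads off from (3.19) with the entries above, $nx_{n}+\theta_{1}\big{(}1-x_{n}^{2}\big{)}(x_{n+1}+x_{n-1})=0$, i.e., the discrete Painlevé II equation.

```python
import numpy as np

th1, n = 1.5, 2                       # illustrative values
s3 = np.diag([1.0, -1.0])
sp_plus = np.diag([1.0, 0.0])         # sigma_+
v = lambda u: 1 - u**2

def T(m, xm1, x0, xp1, z):
    # T(m;z) for N = 1 as displayed above
    # (xm1, x0, xp1 stand for x_{m-1}, x_m, x_{m+1})
    return (th1 / 2 * s3
            + np.array([[m, -th1 * xp1], [-th1 * v(x0) * xm1, 0]]) / z
            + th1 / z**2 * np.array([[0.5 - x0**2, x0],
                                     [v(x0) * x0, x0**2 - 0.5]]))

def U(x0, xp1, z):
    # U(m;z) from (2.16)
    return np.array([[z + x0 * xp1, -xp1],
                     [-v(xp1) * x0, v(xp1)]])

# build a dPII orbit: m x_m + th1 (1 - x_m^2)(x_{m+1} + x_{m-1}) = 0
x = {n - 1: 0.3, n: 0.2}
x[n + 1] = -n * x[n] / (th1 * v(x[n])) - x[n - 1]
x[n + 2] = -(n + 1) * x[n + 1] / (th1 * v(x[n + 1])) - x[n]

z = 0.8 + 0.6j
R = (sp_plus
     - T(n + 1, x[n], x[n + 1], x[n + 2], z) @ U(x[n], x[n + 1], z)
     + U(x[n], x[n + 1], z) @ T(n, x[n - 1], x[n], x[n + 1], z))
assert np.abs(R).max() < 1e-12        # (3.1) holds on the orbit
```

Perturbing $x_{n+1}$ away from the dPII orbit leaves a nonzero residual, so the check is not vacuous.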
###### Example 3.4.
In the case $N=2$, the matrix
$T(n;z)=T_{1}(n)z+T_{2}(n)+T_{3}(n)z^{-1}+T_{4}(n)z^{-2}+T_{5}(n)z^{-3}$. This
time we have to find $T_{2}(n)$ (which is almost the same as before) and also
$T_{3}(n)$, using the recurrence relation given by the compatibility, i.e.,
equations (3.11), (3.12), (3.13) for $i=1$ and $2$. First we find
$T_{2}(n)$ ($i=1$ above): we have
$\displaystyle T_{2,11}(n)=\frac{\theta_{1}}{2},$ $\displaystyle
T_{2,12}(n)=-x_{n+1}(T_{1,11}(n+1)+T_{1,11}(n))=-\theta_{2}x_{n+1},$
$\displaystyle
T_{2,21}(n+1)=x_{n}v_{n+1}(T_{1,22}(n+1)+T_{1,22}(n))=-\theta_{2}x_{n}v_{n+1},$
and $T_{2,22}(n)=-T_{2,11}=-\frac{\theta_{1}}{2}$.
Then we consider the equation for $i=2$ and find $T_{3}(n)$. We have
$\displaystyle\Delta
T_{3,11}(n)=x_{n}v_{n+1}(-\theta_{2}x_{n+2})-x_{n+1}(-\theta_{2}x_{n-1}v_{n})+1\\!\implies\\!T_{3,11}(n)=n-\theta_{2}x_{n-1}x_{n+1}v_{n},$
$\displaystyle
T_{3,12}(n)=-\theta_{1}x_{n+1}-\theta_{2}\big{(}v_{n+1}x_{n+2}-x_{n}x_{n+1}^{2}\big{)},$
$\displaystyle
T_{3,21}(n+1)=\bigl{(}-\theta_{1}x_{n}-\theta_{2}\big{(}v_{n}x_{n-1}-x_{n}^{2}x_{n+1}\big{)}\bigr{)}v_{n+1},$
$\displaystyle T_{3,22}(n)=n-T_{3,11}(n)=\theta_{2}x_{n-1}x_{n+1}v_{n}.$
Finally, we take $T_{4}(n)=-K(n)T_{2}(n)K(n)$ and
$T_{5}(n)=-K(n)T_{1}(n)K(n)$. Thus the Lax matrix for $N=2$ is
$\displaystyle T(n;z)=z\frac{\theta_{2}}{2}\begin{pmatrix}1&0\\\
0&-1\end{pmatrix}+\begin{pmatrix}\frac{\theta_{1}}{2}&-\theta_{2}x_{n+1}\\\
-\theta_{2}x_{n-1}v_{n}&-\frac{\theta_{1}}{2}\end{pmatrix}$
$\displaystyle+\frac{1}{z}\begin{pmatrix}n-\theta_{2}x_{n-1}x_{n+1}v_{n}&-\theta_{1}x_{n+1}-\theta_{2}\big{(}v_{n+1}x_{n+2}-x_{n}x_{n+1}^{2}\big{)}\\\
\bigl{(}-\theta_{1}x_{n-1}-\theta_{2}\big{(}v_{n-1}x_{n-2}-x_{n}x_{n-1}^{2}\big{)}\bigr{)}v_{n}&\theta_{2}x_{n-1}x_{n+1}v_{n}\end{pmatrix}$
$\displaystyle+\frac{1}{z^{2}}\begin{pmatrix}-\theta_{2}v_{n}(x_{n}x_{n-1}+x_{n}x_{n+1})+\frac{\theta_{1}}{2}\big{(}v_{n}-x_{n}^{2}\big{)}&-\theta_{2}\big{(}v_{n}x_{n-1}+x_{n}^{2}x_{n+1}\big{)}\\\
-\theta_{2}\big{(}v_{n}x_{n+1}+x_{n}^{2}x_{n-1}\big{)}v_{n}&\theta_{2}v_{n}(x_{n}x_{n-1}+x_{n}x_{n+1})-\frac{\theta_{1}}{2}\big{(}v_{n}-x_{n}^{2}\big{)}\end{pmatrix}$
$\displaystyle+\frac{\theta_{2}}{z^{3}}\begin{pmatrix}\frac{1}{2}-x_{n}^{2}&x_{n}\\\
v_{n}x_{n}&x_{n}^{2}-\frac{1}{2}\end{pmatrix}\\!.$
Now that we have reconstructed the whole matrix $T(n;z)$ in terms of the
$x_{n+j}$, $j=-N,\dots,N$, we are left with the equation that $T_{N+1}(n)$ has to
satisfy, namely (2.21). We now show that this actually reduces to a single
scalar equation in $T_{N+1,12}$ and $T_{N+1,21}$. Indeed, entry by entry it
reads as the following system of four equations. From the off-diagonal entries
$\displaystyle\begin{split}&v_{n}T_{N+1,12}(n)=x_{n}(T_{N+1,11}(n)-T_{N+1,22}(n))-T_{N+1,21}(n),\\\
&v_{n}T_{N+1,21}(n)=x_{n}v_{n}(T_{N+1,11}(n)-T_{N+1,22}(n))-v_{n}^{2}T_{N+1,12}(n)\end{split}$
(3.19)
and from the diagonal entries
$\displaystyle
n-\big{(}1+x_{n}^{2}\big{)}T_{N+1,11}(n)-v_{n}T_{N+1,22}(n)+x_{n}T_{N+1,21}(n)+x_{n}v_{n}T_{N+1,12}(n)=0,$
$\displaystyle
n-\big{(}1+x_{n}^{2}\big{)}T_{N+1,22}(n)-v_{n}T_{N+1,11}(n)-x_{n}T_{N+1,21}(n)-x_{n}v_{n}T_{N+1,12}(n)=0.$
We notice first that the four equations above are all equivalent. The first and
the second equations are the same up to a multiplication by $v_{n}$. Using the
relation $T_{N+1,11}(n)+T_{N+1,22}(n)=n$, we can rewrite the third and the
fourth equations and obtain the same equation up to a sign. Finally,
multiplying the first equation by $x_{n}$ and using the relation
$T_{N+1,11}(n)+T_{N+1,22}(n)=n$, we obtain the third one. Thus from now on we
will refer only to (3.19) as the remaining equation.
Using equation (3.14) and $\operatorname{Tr}(T(n;z))=nz^{-1}$, we express
equation (3.19) in terms of $T_{N+1,12}(n)$ and $T_{N+1,21}(n)$. Consider
the first equation in (3.19): with the identity $\operatorname{Tr}(T_{N+1}(n))=n$, it is
rewritten as
$v_{n}T_{N+1,12}(n)=x_{n}(n-2T_{N+1,22}(n))-T_{N+1,21}(n).$
Equation (3.14) holds also for $i=N+1$. This means that we can replace
$T_{N+1,22}(n)$ in the previous equation and obtain
$\displaystyle nx_{n}-v_{n}T_{N+1,12}(n)-T_{N+1,21}(n)$
$\displaystyle\qquad{}-2x_{n}\Delta^{-1}\biggl{(}-x_{n}T_{N+1,12}(n)+\dfrac{x_{n+1}}{v_{n+1}}(\Delta+I)T_{N+1,21}(n)\biggr{)}=0.$
(3.20)
### 3.3 The relation between $\boldsymbol{T_{i,12}(n)}$ and
$\boldsymbol{T_{i,21}(n)}$
The previous equation (3.20) depends on $T_{N+1,12}(n)$ and $T_{N+1,21}(n)$.
The aim of this part is to establish a connection between $T_{i,12}(n)$ and
$T_{i,21}(n)$ in order to rewrite equation (3.20) in terms of $T_{N+1,12}(n)$ only.
To accomplish this, we study the compatibility condition of $C(n;z)\coloneqq
T(n;z)^{2}$ and $U(n;z)$. $C(n;z)$ is rational in $z$ with a pole of order
$2N+2$ at $0$. We write $C(n;z)$ as
$C(n;z)=\sum_{i=1}^{4N+1}C_{i}(n)z^{2N-1-i}$ (3.21)
with
$C_{i}(n)\coloneqq\sum_{j=1}^{i}T_{j}(n)T_{i+1-j}(n)$ (3.22)
where $C_{1}(n)=\frac{\theta_{N}^{2}}{4}I_{2}$.
In what follows we will need the following lemma:
###### Lemma 3.5.
Diagonal coefficients of $C_{i}(n)$ defined as in (3.22) satisfy the following
equation:
$\displaystyle\forall 1\leqslant i\leqslant N,\qquad C_{i,11}(n)=C_{i,22}(n),$
$\displaystyle C_{N+1,11}(n)=n\theta_{N}+C_{N+1,22}(n).$
###### Proof.
We express $C_{i,11}(n)$ in terms of the $T_{i,kj}(n)$. By equation
(3.22),
$C_{i,11}(n)=\sum_{j=1}^{i}T_{j,11}(n)T_{i+1-j,11}(n)+T_{j,12}(n)T_{i+1-j,21}(n).$
Then, the sum index change $j=i-k+1$ leads to
$C_{i,11}(n)=\sum_{k=1}^{i}T_{i-k+1,11}(n)T_{k,11}(n)+T_{i-k+1,12}(n)T_{k,21}(n).$
Finally, with the relation $\operatorname{Tr}(T(n;z))=nz^{-1}$,
•
if $1\leqslant i\leqslant N$,
$C_{i,11}(n)=\sum_{k=1}^{i}T_{i-k+1,22}(n)T_{k,22}(n)+T_{k,21}(n)T_{i-k+1,12}(n)=C_{i,22}(n).$
•
if $i=N+1$,
$\displaystyle C_{N+1,11}(n)$
$\displaystyle=-2nT_{1,22}(n)+\sum_{k=1}^{N+1}T_{N-k+2,22}(n)T_{k,22}(n)+T_{k,21}(n)T_{N-k+2,12}(n)$
$\displaystyle=n\theta_{N}+C_{N+1,22}(n).$ ∎
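For $N=1$ the second statement can be seen directly: with $T_{1}(n)=\frac{\theta_{1}}{2}\sigma_{3}$ and $T_{2}(n)$ as in Example 3.3 (the off-diagonal placeholders $a$, $b$ below are ours, only the diagonal entries $(n,0)$ matter), the diagonal gap of $C_{2}(n)=T_{1}T_{2}+T_{2}T_{1}$ is exactly $n\theta_{1}$.

```python
import sympy as sp

n, th, a, b = sp.symbols('n theta_1 a b')

T1 = sp.diag(th / 2, -th / 2)          # T_1(n) for N = 1
T2 = sp.Matrix([[n, a], [b, 0]])       # T_2(n): diagonal entries (n, 0)

C2 = T1 * T2 + T2 * T1                 # C_2(n), by (3.22)
assert C2[0, 1] == 0 and C2[1, 0] == 0
assert sp.expand(C2[0, 0] - C2[1, 1]) == n * th   # C_{2,11} = n theta_1 + C_{2,22}
```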
We deduce the compatibility condition for $C$ and $U$ from the one for $T$ and
$U$.
###### Lemma 3.6.
$C(n;z)$ (3.21) and $U(n;z)$ (2.16) satisfy the following compatibility
condition:
$C(n+1;z)U(n;z)-U(n;z)C(n;z)=T(n+1;z)\sigma_{+}+\sigma_{+}T(n;z).$ (3.23)
###### Proof.
Multiplying on the left (resp. on the right) equation (3.1) by $T(n+1;z)$
(resp. $T(n;z)$) and summing these two equations leads to the result. ∎
The left-hand (resp. right-hand) side of the equation in the previous lemma is an
expression in powers of $z$ ranging from $z^{2N-1}$ to $z^{-2N-2}$ (resp. from
$z^{N-1}$ to $z^{-N-1}$). This equation leads to recursive equations for the
$C_{i}(n)$. We consider only the expressions in powers of $z$ from $z^{2N-1}$ to
$z^{N-1}$.
According to (3.1) and (3.23), $\forall 1\leqslant i\leqslant N$, $C_{i}(n)$
and $T_{i}(n)$ satisfy the same recursive equation (see equations
(3.11)–(3.14)). For $i=N+1$, the equation is a bit different. The term with
$\delta_{i,N}$ is now multiplied by $\theta_{N}$.
From these equations we deduce the following result.
###### Proposition 3.7.
Let $C_{i}(n)$ be as in (3.22). Then $\forall 1\leqslant i\leqslant N$,
$C_{i}(n)=\alpha_{i}I_{2}\qquad\text{and}\qquad
C_{N+1}(n)=\theta_{N}n\sigma_{+}+\alpha_{N+1}I_{2}.$
###### Proof.
We prove Proposition 3.7 by induction. For $i=1$, we already know
$C_{1}(n)=\frac{\theta_{N}^{2}}{4}I_{2}$. Suppose $C_{i}(n)=\alpha_{i}I_{2}$ for some
$i\leqslant N$. Then $C_{i+1}(n)$ satisfies the following equations:
$\displaystyle\Delta C_{i+1,11}(n)=-x_{n}x_{n+1}\Delta
C_{i,11}(n)+x_{n}v_{n+1}C_{i,12}(n+1)-x_{n+1}C_{i,21}(n)+\theta_{N}\delta_{i,N},$
$\displaystyle
C_{i+1,12}(n)=-x_{n+1}C_{i,11}(n+1)+v_{n+1}C_{i,12}(n+1)-x_{n}x_{n+1}C_{i,12}(n)+x_{n+1}C_{i,22}(n),$
$\displaystyle
C_{i+1,21}(n+1)=-x_{n}x_{n+1}C_{i,21}(n+1)+x_{n}v_{n+1}C_{i,22}(n+1)-x_{n}v_{n+1}C_{i,11}(n)$
$\displaystyle\hphantom{C_{i+1,21}(n+1)=}{}+v_{n+1}C_{i,21}(n).$
Using the induction hypothesis,
$\displaystyle\Delta C_{i+1,11}(n)=-0\cdot x_{n}x_{n+1}+0\cdot
x_{n}v_{n+1}-0\cdot x_{n+1}+\theta_{N}\delta_{i,N}=\theta_{N}\delta_{i,N},$
$\displaystyle C_{i+1,12}(n)=-x_{n+1}\alpha_{i}+0\cdot v_{n+1}-0\cdot
x_{n}x_{n+1}+x_{n+1}\alpha_{i}=0,$ $\displaystyle C_{i+1,21}(n+1)=-0\cdot
x_{n}x_{n+1}+x_{n}v_{n+1}\alpha_{i}-x_{n}v_{n+1}\alpha_{i}+0\cdot v_{n+1}=0.$
From the first equation, we conclude $C_{i+1,11}(n)=\alpha_{i+1}$ if
$i\leqslant N-1$ (resp. $C_{N+1,11}(n)=\theta_{N}n+\alpha_{N+1}$ if $i=N$) and
according to Lemma 3.5 $C_{i+1,22}(n)=\alpha_{i+1}$ (resp.
$C_{N+1,22}(n)=\alpha_{N+1}$) which concludes the proof. ∎
From equation (3.22) and Proposition 3.7, we obtain
$\displaystyle\theta_{N}T_{i,11}(n)=\alpha_{i}-\sum_{j=2}^{i-1}T_{j,11}(n)T_{i-j+1,11}(n)+T_{j,12}(n)T_{i-j+1,21}(n),$
(3.24)
$\displaystyle\theta_{N}T_{N+1,11}(n)=n\theta_{N}+\alpha_{N+1}-\sum_{j=2}^{N}T_{j,11}(n)T_{N-j+2,11}(n)+T_{j,12}(n)T_{N-j+2,21}(n).$
(3.25)
With all this discussion on $C(n;z)$ at hand, it is now possible to prove the
following proposition.
###### Proposition 3.8.
The following holds: $\forall 1\leqslant i\leqslant N+1$, $T_{i,11}(n)$,
$T_{i,12}(n)$ and $T_{i,21}(n)$ are polynomials in $x_{n+j}$’s. Moreover, the
following symmetries hold:
there exist polynomials $Q_{i,n}((u_{n+j})_{1-i\leqslant j\leqslant i-1})$ and
$P_{i,n}((u_{n+j})_{1-i\leqslant j\leqslant i-1})$ in the $u_{n+j}$'s such that
$\displaystyle T_{i,11}(n)=Q_{i,n}((x_{n+j})_{1-i\leqslant j\leqslant
i-1})=Q_{i,n}((x_{n-j})_{1-i\leqslant j\leqslant i-1}),$ $\displaystyle
T_{i,12}(n)=P_{i,n}((x_{n+j})_{1-i\leqslant j\leqslant i-1}),$ $\displaystyle
T_{i,21}(n)=v_{n}P_{i,n}((x_{n-j})_{1-i\leqslant j\leqslant i-1}).$
###### Proof.
We prove this proposition by strong induction. For $i=1$,
$T_{1}(n)=\frac{\theta_{N}}{2}\sigma_{3}$, so defining
$Q_{1,n}(u_{n})\coloneqq\frac{\theta_{N}}{2}$ and $P_{1,n}(u_{n})\coloneqq 0$ gives
$T_{1,11}(n)=Q_{1,n}(x_{n})$, $T_{1,12}(n)=P_{1,n}(x_{n})$ and
$T_{1,21}(n)=v_{n}P_{1,n}(x_{n})$.
Now suppose the property holds for all $j\in[[1,i]]$ with $i\leqslant N$, and
let $(Q_{j,n},P_{j,n})_{j\leqslant i}$ be polynomials in the $x_{n+j}$'s
satisfying the property. According to (3.24) (and (3.25) for $i=N$) and the strong
induction hypothesis, $T_{i+1}(n)$ is a polynomial in the $x_{n+j}$'s and the
invariance under the exchange of $x_{n+j}$ with $x_{n-j}$ holds.
By equation (3.12) (resp. equation (3.13)) and the induction hypothesis, there
exists a polynomial $P_{i+1,n}((u_{n+j})_{-i\leqslant j\leqslant i})$
(resp. $\tilde{P}_{i+1,n}((u_{n+j})_{-i\leqslant j\leqslant i})$)
such that
$T_{i+1,12}(n)=P_{i+1,n}((x_{n+j})_{-i\leqslant j\leqslant i}),$
respectively,
$T_{i+1,21}(n)=\tilde{P}_{i+1,n}((x_{n+j})_{-i\leqslant j\leqslant i}).$
Now we establish the link between $P_{i+1,n}$ and $\tilde{P}_{i+1,n}$.
According to equation (3.12) and the relation
$\operatorname{Tr}(T(n;z))=nz^{-1}$,
$\displaystyle P_{i+1,n}\big{(}(x_{n+j})_{j=-i}^{i}\big{)}={}$ $\displaystyle-
x_{n+1}Q_{i,n+1}\big{(}(x_{n+j})_{j=-i}^{i-2}\big{)}+v_{n+1}P_{i,n+1}\big{(}(x_{n+j})_{j=-i}^{i-2}\big{)}$
$\displaystyle-
x_{n}x_{n+1}P_{i,n}\big{(}(x_{n+j})_{j=1-i}^{i-1}\big{)}-x_{n+1}Q_{i,n}\big{(}(x_{n+j})_{j=1-i}^{i-1}\big{)}.$
Then
$\displaystyle v_{n}P_{i+1,n}\big{(}(x_{n-j})_{j=-i}^{i}\big{)}={}$
$\displaystyle
v_{n}\big{(}-x_{n-1}Q_{i,n-1}\big{(}(x_{n-j})_{j=-i}^{i-2}\big{)}+v_{n-1}P_{i,n-1}\big{(}(x_{n-j})_{j=-i}^{i-2}\big{)}$
$\displaystyle-
x_{n}x_{n-1}P_{i,n}\big{(}(x_{n-j})_{j=1-i}^{i-1}\big{)}-x_{n-1}Q_{i,n}\big{(}(x_{n-j})_{j=1-i}^{i-1}\big{)}\big{)}.$
From the induction hypothesis and the relation $\operatorname{Tr}(T(n;z))=nz^{-1}$,
$\displaystyle v_{n}P_{i+1,n}\big{(}(x_{n-j})_{j=-i}^{i}\big{)}={}$
$\displaystyle-
x_{n-1}v_{n}T_{i,11}(n-1)+v_{n}T_{i,21}(n-1)+x_{n-1}x_{n}T_{i,21}(n)$
$\displaystyle+x_{n-1}v_{n}T_{i,22}(n).$
According to equation (3.13),
$v_{n}P_{i+1,n}\big{(}(x_{n-j})_{j=-i}^{i}\big{)}=T_{i+1,21}(n+1).$
Then
$v_{n}P_{i+1,n}\big{(}(x_{n-j})_{j=-i}^{i}\big{)}=\tilde{P}_{i+1,n}\big{(}(x_{n+j})_{-i\leqslant
j\leqslant i}\big{)}$
and this concludes the proof. ∎
Consider the polynomial ring $\mathbb{C}[(x_{j})_{j\in[[0,2n]]}]$ and the transformation
$\displaystyle{\rm Perm}_{n}\colon\quad\mathbb{C}[(x_{j})_{j\in[[0,2n]]}]$
$\displaystyle\longrightarrow\mathbb{C}[(x_{j})_{j\in[[0,2n]]}],$
$\displaystyle P((x_{n+j})_{-n\leqslant j\leqslant n})$
$\displaystyle\longmapsto P((x_{n-j})_{-n\leqslant j\leqslant n}).$
From the previous proposition,
$T_{i,21}(n)=v_{n}{\rm Perm}_{n}(T_{i,12}(n)).$ (3.26)
###### Remark 3.9.
As a consequence of Proposition 3.8, equation (3.19) is polynomial in the
$x_{n+j}$'s and is invariant under the application of ${\rm Perm}_{n}$, since
${\rm Perm}_{n}^{2}={\rm Id}$ and ${\rm Perm}_{n}v_{n}=v_{n}{\rm Perm}_{n}$.
We use the link we established in Proposition 3.8 between $T_{i,12}(n)$ and
$T_{i,21}(n)$ to rewrite the operator $\mathcal{L}$ (3.16) as a scalar
operator:
$L(u_{n})\coloneqq\big{(}x_{n+1}\big{(}2\Delta^{-1}+I\big{)}((\Delta+I)x_{n}{\rm
Perm}_{n}-x_{n})+v_{n+1}(\Delta+I)-x_{n}x_{n+1}\big{)}u_{n}.$ (3.27)
Finally, collecting all the results from the previous sections, we state and
prove the following theorem.
###### Theorem 3.10.
The system (2.15), with $T(n;z)$ of the form (3.2) and coefficient
$T_{N+1}(n)$ satisfying the symmetry condition (2.21), is a Lax pair for the
$N$-th higher order discrete Painlevé II equation and the equation is given by
the expression:
$nx_{n}+\big{(}2x_{n}\Delta^{-1}(x_{n}-(\Delta+I)x_{n}{\rm
Perm}_{n})-v_{n}-v_{n}{\rm Perm}_{n}\big{)}T_{N+1,12}(n)=0,$ (3.28)
where $T_{N+1,12}(n)={L}^{N}(0)$ with $L$ as in (3.27).
###### Proof.
Replacing $T_{N+1,21}(n)$ with the relation (3.26), equation (3.2) now reads
as
$nx_{n}+\big{(}2x_{n}\Delta^{-1}(x_{n}-(\Delta+I)x_{n}{\rm
Perm}_{n})-v_{n}-v_{n}{\rm Perm}_{n}\big{)}T_{N+1,12}(n)=0.$
Equations (3.17) and (3.18) with the relation (3.26) reduce to
$T_{i+1,12}(n)=L(T_{i,12}(n))\qquad\text{and}\qquad T_{N+1,12}(n)={L}^{N}(0),$
which concludes the proof. ∎
The next two examples explain, for $N=1,2$, how to compute equation (3.28)
explicitly.
###### Example 3.11.
Using the expression defined in Theorem 3.10, we compute the first equation
(1.13) and the second (1.14).
For $N=1$: First we compute $T_{2,12}(n)$ with the operator $L$ (3.27):
$T_{2,12}(n)=2x_{n+1}\Delta^{-1}(0)=-\theta_{1}x_{n+1},$
where $-\theta_{1}/2$ is the integration constant.
Replacing $T_{2,12}(n)$ in equation (3.28),
$\displaystyle
nx_{n}+v_{n}\theta_{1}(x_{n+1}+x_{n-1})+2x_{n}\Delta^{-1}(\theta_{1}x_{n}x_{n+1}-\theta_{1}x_{n}x_{n+1})=0.$
Then
$(n+\alpha)x_{n}+\theta_{1}v_{n}(x_{n+1}+x_{n-1})=0.$
This equation is the same as equation (1.13) if we choose the integration
constant $\alpha$ to be zero.
For $N=2$: We compute $T_{3,12}(n)$. The computation of $T_{2,12}(n)$ is the same
as before except for the integration constant:
$T_{2,12}(n)=-\theta_{2}x_{n+1}$.
$\displaystyle
T_{3,12}(n)=L(T_{2,12}(n))=\big{(}x_{n}x_{n+1}^{2}-v_{n+1}x_{n+2}\big{)}\theta_{2}$
$\displaystyle\hphantom{T_{3,12}(n)=}{}+x_{n+1}\big{(}2\Delta^{-1}+I\big{)}(-\theta_{2}x_{n}x_{n+1}+\theta_{2}x_{n}x_{n+1})$
Then
$T_{3,12}(n)=\theta_{2}\big{(}x_{n}x_{n+1}^{2}-v_{n+1}x_{n+2}\big{)}-\theta_{1}x_{n+1}$.
Replacing $T_{3,12}(n)$ in equation (3.28),
$(n+\alpha)x_{n}+\theta_{2}v_{n}\big{(}v_{n+1}x_{n+2}+v_{n-1}x_{n-2}-x_{n}(x_{n+1}+x_{n-1})^{2}\big{)}+\theta_{1}v_{n}(x_{n+1}+x_{n-1})=0$
which is the same equation as (1.14).
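The $N=1$ computation above can be mechanized. In the sketch below, `perm` and `shift` are minimal implementations of ${\rm Perm}_{n}$ and $(\Delta+I)$ acting on expressions in the $x_{n+j}$'s, and the constant $\alpha$ stands in for the integration constant of $\Delta^{-1}$ (up to its naming). We check that the argument of $\Delta^{-1}$ in (3.28) vanishes for $T_{2,12}(n)=-\theta_{1}x_{n+1}$, so that (3.28) collapses to the discrete Painlevé II equation:

```python
import sympy as sp

n = sp.Symbol('n')
j = sp.Wild('j')
th1, alpha = sp.symbols('theta_1 alpha')
x = sp.Function('x')

def perm(e):   # Perm_n : x_{n+j} -> x_{n-j}
    return e.replace(x(n + j), lambda j: x(n - j))

def shift(e):  # (Delta + I) : n -> n + 1
    return e.replace(x(n + j), lambda j: x(n + j + 1))

T = -th1*x(n + 1)                     # T_{2,12}(n) from Example 3.11
arg = sp.expand(x(n)*T - shift(x(n)*perm(T)))
assert arg == 0                       # so Delta^{-1}(arg) is just a constant alpha

vn = 1 - x(n)**2                      # v_n = 1 - x_n^2
eq = n*x(n) + 2*alpha*x(n) - vn*T - vn*perm(T)
print(sp.expand(eq - ((n + 2*alpha)*x(n) + th1*vn*(x(n + 1) + x(n - 1)))))  # 0
```

The vanishing difference confirms that, with the integration constant set to zero, the scalar operator form of (3.28) reproduces equation (1.13).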
We finally conclude the work by noticing that Theorem 3.10, together with
Corollary 2.8, gives the proof of Theorem 1.2.
###### Remark 3.12.
In our setting, the fixed $N\geq 1$ defines the order $(2N)$ of the discrete
equation solved by $x_{n}$, the quantity related to the Toeplitz determinants
$D_{n}$. An alternative approach could be to let $N$ vary and consider it
as a second discrete variable for $x_{n}$. In effect, this is done in [19],
where the authors consider orthogonal polynomials on the real line, w.r.t. a
weight $\rho(\lambda;N){\rm d}\lambda$ and where the dependence on an integer
parameter $N$ is such that $\rho(\lambda;N+1)=\lambda\rho(\lambda;N)$. In this
case the relevant quantities to consider (related to the Hankel determinants)
are the coefficients of the three terms recurrence relation satisfied by these
polynomials. The authors there proved that these quantities solve (up to some
change of variables) the discrete-time Toda molecule equation, a coupled
system of discrete equations in the two variables $n$, $N$. The result deeply
relies on the quasi-periodic condition satisfied by the weight $\rho$. Back to
our setting, the measure we have for our orthogonal polynomials on the unit
circle is such that
$\mathrm{d}\mu(\lambda;N+1)=\mathrm{e}^{\sum_{j=1}^{N+1}\frac{\theta_{j}}{j}(\mathrm{e}^{{\rm
i}\lambda j}+\mathrm{e}^{-{\rm i}\lambda
j})}\frac{\mathrm{d}\lambda}{2\pi}=\mathrm{e}^{\frac{\theta_{N+1}}{N+1}(\mathrm{e}^{{\rm
i}\lambda(N+1)}+\mathrm{e}^{-{\rm
i}\lambda(N+1)})}\,\mathrm{d}\mu(\lambda;N).$
This relation does not seem as promising as the one for $\rho$ for the study
of the $N$-dependence, but it is another point that we could further
investigate.
## Appendix A The continuous limit
This appendix contains further computations for the continuous limit of the
equations of the discrete Painlevé II hierarchy (1.10) in the first cases
$N=1,2,3$. To obtain it, we follow the scaling limit given in [5, Theorem 1]
as already recalled in the introduction.
The case $\boldsymbol{N=1}$. Notice that in this case we recover the same
computation done in [6, Chapter 9]. We consider equation (1.13) written as
$x_{n+1}+x_{n-1}+\frac{nx_{n}}{\theta_{1}\big{(}1-x_{n}^{2}\big{)}}=0$
in which the only parameter appearing is $\theta_{1}=\theta$. Following the
scaling limit of [5, Theorem 1], in the case $N=1$, we have
$b=2,\qquad d=1\qquad\text{and}\qquad
x_{n}=(-1)^{n}\theta^{-\frac{1}{3}}u(t)\quad\text{with}\quad
t=(n-2\theta)\theta^{-\frac{1}{3}}.$
Now, for $\theta\rightarrow+\infty$, we compute
$\displaystyle x_{n\pm 1}$
$\displaystyle\sim(-1)^{n+1}\theta^{-\frac{1}{3}}u\big{(}t\pm\theta^{-\frac{1}{3}}\big{)}$
$\displaystyle\sim(-1)^{n+1}\theta^{-\frac{1}{3}}\bigg{(}u(t)\pm\theta^{-\frac{1}{3}}u^{\prime}(t)+\frac{\theta^{-\frac{2}{3}}}{2}u^{\prime\prime}(t)+O\big{(}\theta^{-1}\big{)}\bigg{)},$
that gives
$x_{n+1}+x_{n-1}\sim(-1)^{n+1}2\theta^{-\frac{1}{3}}u(t)+(-1)^{n+1}\theta^{-1}u^{\prime\prime}(t)+O\big{(}\theta^{-\frac{4}{3}}\big{)}.$
The other term appearing in the discrete Painlevé II equation gives instead
$\displaystyle\frac{nx_{n}}{\theta_{1}\big{(}1-x_{n}^{2}\big{)}}\sim\big{(}2\theta+t\theta^{\frac{1}{3}}\big{)}(-1)^{n}\theta^{-\frac{1}{3}}u(t)\theta^{-1}\bigg{(}1+\theta^{-\frac{2}{3}}u^{2}(t)+O\big{(}\theta^{-\frac{4}{3}}\big{)}\bigg{)}$
$\displaystyle\sim(-1)^{n}2\theta^{-\frac{1}{3}}u(t)+(-1)^{n}\theta^{-1}\big{(}tu(t)+2u^{3}(t)\big{)}+O\big{(}\theta^{-\frac{5}{3}}\big{)}.$
Thus equation (1.8) in this scaling limit gives at the first order
(coefficient of $\theta^{-1}$) the second order differential equation
$u^{\prime\prime}(t)-tu(t)-2u^{3}(t)=0,$
which coincides indeed with the Painlevé II equation.
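This expansion can be verified symbolically. The sketch below is a check under the stated scaling, with $\varepsilon=\theta^{-1/3}$ as the small parameter and the overall factor $(-1)^{n+1}$ divided out; the coefficient of $\varepsilon^{3}$ reproduces $u^{\prime\prime}-tu-2u^{3}$:

```python
import sympy as sp

t, eps = sp.symbols('t epsilon', positive=True)
u = sp.Function('u')

# discrete PII with x_n = (-1)^n eps u(t), theta = eps^{-3}, n = 2 eps^{-3} + t eps^{-1},
# after dividing by (-1)^{n+1}
expr = eps*(u(t + eps) + u(t - eps)) - (2 + t*eps**2)*eps*u(t)/(1 - eps**2*u(t)**2)
ser = sp.series(expr, eps, 0, 5).removeO().doit().expand()

target = sp.diff(u(t), t, 2) - t*u(t) - 2*u(t)**3
print(sp.simplify(ser.coeff(eps, 1)))           # 0: leading orders cancel
print(sp.simplify(ser.coeff(eps, 3) - target))  # 0: Painleve II appears at order eps^3
```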
The case $\boldsymbol{N=2}$. We consider equation (1.14), with the parameters
$\theta_{1}$, $\theta_{2}$ rescaled as $\theta_{1}=\theta$,
$\theta_{2}=\frac{\theta}{4}$. It reads as
$\displaystyle\frac{nx_{n}}{\big{(}1-x_{n}^{2}\big{)}}+\theta(x_{n+1}+x_{n-1})$
$\displaystyle{}\qquad+\frac{\theta}{4}\big{(}x_{n+2}\big{(}1-x_{n+1}^{2}\big{)}+x_{n-2}\big{(}1-x_{n-1}^{2}\big{)}-x_{n}(x_{n+1}+x_{n-1})^{2}\big{)}=0$
(A.1)
and this time we consider the following scaling limit (case $N=2$ in [5,
Theorem 1])
$b=\frac{3}{2},\qquad d=4\qquad\text{and}\qquad
x_{n}=(-1)^{n}\theta^{-\frac{1}{5}}4^{\frac{1}{5}}u(t)\quad\text{with}\quad
t=\bigg{(}n-\frac{3}{2}\theta\bigg{)}\theta^{-\frac{1}{5}}4^{\frac{1}{5}}.$
For $\theta\rightarrow+\infty$, similar computations give the fourth order
differential equation
$tu(t)+6u(t)^{5}-10u(t)u^{\prime}(t)^{2}-10u(t)^{2}u^{\prime\prime}(t)+u^{\prime\prime\prime\prime}(t)=0$
which corresponds to the second equation of the Painlevé II hierarchy.
Detailed computations to obtain certain terms from the previous equation are
given below. We begin with the expansion of the first term in equation (A.1):
$\displaystyle\frac{nx_{n}}{\big{(}1-x_{n}^{2}\big{)}}$
$\displaystyle\sim\bigg{(}\frac{3}{2}\theta+4^{-\frac{1}{5}}\theta^{\frac{1}{5}}t\bigg{)}(-1)^{n}\theta^{-\frac{1}{5}}4^{\frac{1}{5}}u(t)\big{(}1+4^{\frac{2}{5}}\theta^{-\frac{2}{5}}u^{2}(t)+4^{\frac{4}{5}}\theta^{-\frac{4}{5}}u^{4}(t)+O\big{(}\theta^{-1}\big{)}\big{)}$
$\displaystyle\sim(-1)^{n}\bigg{(}\frac{3}{2}4^{\frac{1}{5}}\theta^{\frac{4}{5}}u(t)+\frac{3}{2}4^{\frac{3}{5}}\theta^{\frac{2}{5}}u(t)^{3}+tu(t)+6u(t)^{5}+O\big{(}\theta^{-\frac{1}{5}}\big{)}\bigg{)}.$
Computing expansions of $x_{n\pm 1}$, $x_{n\pm 2}$ as $\theta\to\infty$, we
obtain
$\displaystyle x_{n\pm
1}\sim(-1)^{n+1}4^{\frac{1}{5}}\theta^{-\frac{1}{5}}u\big{(}t\pm
4^{\frac{1}{5}}\theta^{-\frac{1}{5}}\big{)}\sim(-1)^{n+1}4^{\frac{1}{5}}\theta^{-\frac{1}{5}}$
$\displaystyle\hphantom{x_{n\pm 1}\sim}\times\bigg{(}u(t)\pm
4^{\frac{1}{5}}\theta^{-\frac{1}{5}}u^{\prime}(t)+\frac{4^{\frac{2}{5}}\theta^{-\frac{2}{5}}}{2}u^{\prime\prime}(t)\pm\frac{4^{\frac{3}{5}}\theta^{-\frac{3}{5}}}{6}u^{\prime\prime\prime}(t)+\frac{4^{\frac{4}{5}}\theta^{-\frac{4}{5}}}{24}u^{\prime\prime\prime\prime}(t)+O\big{(}\theta^{-1}\big{)}\bigg{)},$
$\displaystyle x_{n\pm
2}\sim(-1)^{n}4^{\frac{1}{5}}\theta^{-\frac{1}{5}}u\big{(}t\pm
2\theta^{-\frac{1}{5}}4^{\frac{1}{5}}\big{)}\sim(-1)^{n}4^{\frac{1}{5}}\theta^{-\frac{1}{5}}$
$\displaystyle\hphantom{x_{n\pm 2}\sim}\times\bigg{(}u(t)\pm
4^{\frac{1}{5}}2\theta^{-\frac{1}{5}}u^{\prime}(t)+4^{\frac{7}{5}}\theta^{-\frac{2}{5}}u^{\prime\prime}(t)\pm\frac{4^{\frac{8}{5}}2\theta^{-\frac{3}{5}}}{3}u^{\prime\prime\prime}(t)+\frac{4^{\frac{9}{5}}\theta^{-\frac{4}{5}}}{3}u^{\prime\prime\prime\prime}(t)+O\big{(}\theta^{-1}\big{)}\bigg{)}$
that gives for the second term of equation (A.1)
$\theta(x_{n+1}+x_{n-1})\sim(-1)^{n+1}\bigg{(}4^{\frac{1}{5}}2\theta^{\frac{4}{5}}u(t)+4^{\frac{3}{5}}\theta^{\frac{2}{5}}u^{\prime\prime}(t)+\frac{1}{3}u^{\prime\prime\prime\prime}(t)+O\big{(}\theta^{-\frac{1}{5}}\big{)}\bigg{)}.$
Some linear and nonlinear terms appear with the expansion of the third term of
equation (A.1). The linear one is
$\frac{\theta}{4}(x_{n+2}+x_{n-2})\sim(-1)^{n}\bigg{(}4^{\frac{1}{5}}\theta^{\frac{4}{5}}\frac{1}{2}u(t)+4^{\frac{3}{5}}\theta^{\frac{2}{5}}u^{\prime\prime}(t)+\frac{4}{3}u^{\prime\prime\prime\prime}(t)+O\big{(}\theta^{-\frac{1}{5}}\big{)}\bigg{)}.$
Nonlinear ones are
$\displaystyle\frac{\theta}{4}x_{n}(x_{n+1}+x_{n-1})^{2}\sim(-1)^{n}u(t)\big{(}4^{\frac{3}{5}}\theta^{\frac{2}{5}}u(t)^{2}+4u(t)u^{\prime\prime}(t)+O\big{(}\theta^{-\frac{1}{5}}\big{)}\big{)},$
$\displaystyle\frac{\theta}{4}x_{n\pm 2}x_{n\pm
1}^{2}\sim(-1)^{n}\big{(}4^{-\frac{2}{5}}\theta^{\frac{2}{5}}u(t)^{3}\pm
4^{\frac{4}{5}}\theta^{\frac{1}{5}}u(t)^{2}u^{\prime}(t)+3u(t)^{2}u^{\prime\prime}(t)+5u(t)u^{\prime}(t)^{2}\big{)}.$
From these computations, we see that we recover exactly
$tu(t)+6u(t)^{5}-10u(t)u^{\prime}(t)^{2}-10u(t)^{2}u^{\prime\prime}(t)+u^{\prime\prime\prime\prime}(t)=0.$
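As for the $N=1$ case, the whole cancellation pattern can be checked symbolically. The sketch below sets $\eta=4^{1/5}\theta^{-1/5}$, so that $\theta=4\eta^{-5}$ and $n=6\eta^{-5}+t\eta^{-1}$, and multiplies equation (A.1), divided by $(-1)^{n}$, by $\eta^{4}$ to make it analytic in $\eta$; the orders $\eta^{0},\dots,\eta^{3}$ cancel and the coefficient of $\eta^{4}$ is the fourth-order equation above:

```python
import sympy as sp

t, h = sp.symbols('t eta', positive=True)
u = sp.Function('u')

up, um = u(t + h), u(t - h)
u2p, u2m = u(t + 2*h), u(t - 2*h)
theta = 4/h**5                        # eta = 4^{1/5} theta^{-1/5}
n = 6/h**5 + t/h                      # n = (3/2) theta + t theta^{1/5} 4^{-1/5}

# equation (A.1), divided by (-1)^n and multiplied by h^4 (analytic in h)
E = sp.expand(h**4*(n*h*u(t)/(1 - h**2*u(t)**2)
                    - theta*h*(up + um)
                    + (theta/4)*h*(u2p*(1 - h**2*up**2) + u2m*(1 - h**2*um**2)
                                   - u(t)*h**2*(up + um)**2)))

ser = sp.series(E, h, 0, 5).removeO().doit().expand()
target = (t*u(t) + 6*u(t)**5 - 10*u(t)*sp.diff(u(t), t)**2
          - 10*u(t)**2*sp.diff(u(t), t, 2) + sp.diff(u(t), t, 4))
print([sp.simplify(ser.coeff(h, k)) for k in range(4)])  # [0, 0, 0, 0]
print(sp.simplify(ser.coeff(h, 4) - target))             # 0
```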
The case $\boldsymbol{N=3}$. We consider equation (1.15) with the parameters
$\theta_{1}$, $\theta_{2}$, $\theta_{3}$ rescaled as $\theta_{1}=\theta$,
$\theta_{2}=\frac{2\theta}{5}$, $\theta_{3}=\frac{\theta}{15}$ and rewritten
as
$\displaystyle\frac{nx_{n}}{\theta\big{(}1-x_{n}^{2}\big{)}}+(x_{n+1}+x_{n-1})+\frac{2}{5}\big{(}x_{n+2}\big{(}1-x_{n+1}^{2}\big{)}+x_{n-2}\big{(}1-x_{n-1}^{2}\big{)}-x_{n}(x_{n+1}+x_{n-1})^{2}\big{)}$
$\displaystyle\qquad+\frac{1}{15}\big{(}x_{n}^{2}(x_{n+1}+x_{n-1})^{3}+x_{n+3}\big{(}1-x_{n+2}^{2}\big{)}\big{(}1-x_{n+1}^{2}\big{)}+x_{n-3}\big{(}1-x_{n-2}^{2}\big{)}\big{(}1-x_{n-1}^{2}\big{)}\big{)}$
$\displaystyle\qquad+\frac{1}{15}\bigl{(}-2x_{n}(x_{n+1}+x_{n-1})\big{(}x_{n+2}\big{(}1-x_{n+1}^{2}\big{)}+x_{n-2}\big{(}1-x_{n-1}^{2}\big{)}\big{)}-x_{n-1}x_{n-2}^{2}\big{(}1-x_{n-1}^{2}\big{)}\bigr{)}$
$\displaystyle\qquad+\frac{1}{15}\bigl{(}-x_{n+1}x_{n+2}^{2}\big{(}1-x_{n+1}^{2}\big{)}-x_{n+1}x_{n-1}(x_{n+1}+x_{n-1})\bigr{)}=0.$
Finally, we consider the following scaling limit (case $N=3$ of [5, Theorem
1])
$b=\frac{4}{3},\qquad d=15\qquad\text{and}\qquad
x_{n}=(-1)^{n}\theta^{-\frac{1}{7}}15^{\frac{1}{7}}u(t)\quad\text{with}\quad
t=\bigg{(}n-\frac{4}{3}\theta\bigg{)}\theta^{-\frac{1}{7}}15^{\frac{1}{7}}.$
Again, for $\theta\rightarrow+\infty$ the asymptotic expansion of the equation
above results at the first order (coefficient of $\theta^{-1}$) into the sixth
order differential equation
$\displaystyle
tu(t)+20u(t)^{7}-140u(t)^{3}u^{\prime}(t)^{2}-70u(t)^{4}u^{\prime\prime}(t)+70u^{\prime}(t)^{2}u^{\prime\prime}(t)+42u(t)u^{\prime\prime}(t)^{2}$
$\displaystyle\qquad{}+56u(t)u^{\prime}(t)u^{\prime\prime\prime}(t)+14u(t)^{4}u^{\prime\prime}(t)-u^{\prime\prime\prime\prime\prime\prime}(t)=0,$
which corresponds to the third equation in the Painlevé II hierarchy.
###### Remark A.1.
Computations for $N=2$ and $N=3$ were performed with Maple/Mathematica. Files
are available upon request.
### Acknowledgments
We acknowledge the support of the H2020-MSCA-RISE-2017 PROJECT No. 778010
IPaDEGAN and the International Research Project PIICQ, funded by CNRS. During
the period from November 2021 to October 2022, S.T. was supported also by the
Fonds de la Recherche Scientifique-FNRS under EOS project O013018F and based
at the Institut de Recherche en Mathématique et Physique of UCLouvain. The
authors are grateful to Mattia Cafasso for the inspiration given to work on
this project and his guidance. The authors also want to thank the referees of
this paper for useful comments and suggestions. S.T. is also grateful to
Giulio Ruzza for meaningful conversations.
## References
* [1] Adler M., van Moerbeke P., Recursion relations for unitary integrals, combinatorics and the Toeplitz lattice, Comm. Math. Phys. 237 (2003), 397–440, arXiv:math-ph/0201063.
* [2] Baik J., Riemann–Hilbert problems for last passage percolation, in Recent Developments in Integrable Systems and Riemann–Hilbert Problems (Birmingham, AL, 2000), Contemp. Math., Vol. 326, Amer. Math. Soc., Providence, RI, 2003, 1–21, arXiv:math.PR/0107079.
* [3] Baik J., Deift P., Johansson K., On the distribution of the length of the longest increasing subsequence of random permutations, J. Amer. Math. Soc. 12 (1999), 1119–1178, arXiv:math.CO/9810105.
* [4] Baik J., Deift P., Suidan T., Combinatorics and random matrix theory, Grad. Stud. Math., Vol. 172, Amer. Math. Soc., Providence, RI, 2016.
* [5] Betea D., Bouttier J., Walsh H., Multicritical random partitions, Sém. Lothar. Combin. 85 B (2021), 33, 12 pages, arXiv:2012.01995.
* [6] Borodin A., Discrete gap probabilities and discrete Painlevé equations, Duke Math. J. 117 (2003), 489–542, arXiv:math-ph/0111008.
* [7] Borodin A., Okounkov A., A Fredholm determinant formula for Toeplitz determinants, Integral Equations Operator Theory 37 (2000), 386–396, arXiv:math.CA/9907165.
* [8] Cafasso M., Claeys T., Girotti M., Fredholm determinant solutions of the Painlevé II hierarchy and gap probabilities of determinantal point processes, Int. Math. Res. Not. 2021 (2021), 2437–2478, arXiv:1902.05595.
* [9] Cafasso M., Ruzza G., Integrable equations associated with the finite-temperature deformation of the discrete Bessel point process, J. Lond. Math. Soc., to appear, arXiv:2207.01421.
* [10] Clarkson P.A., Joshi N., Mazzocco M., The Lax pair for the mKdV hierarchy, in Théories Asymptotiques et Équations de Painlevé, Sémin. Congr., Vol. 14, Soc. Math. France, Paris, 2006, 53–64.
* [11] Cresswell C., Joshi N., The discrete first, second and thirty-fourth Painlevé hierarchies, J. Phys. A 32 (1999), 655–669.
* [12] Dattoli G., Chiccoli C., Lorenzutta S., Maino G., Richetta M., Torre A., Generating functions of multivariable generalized Bessel functions and Jacobi-elliptic functions, J. Math. Phys. 33 (1992), 25–36.
* [13] Deift P.A., Orthogonal polynomials and random matrices: a Riemann–Hilbert approach, Courant Lect. Notes Math., Vol. 3, Amer. Math. Soc., Providence, RI, 1999.
* [14] Flaschka H., Newell A.C., Monodromy- and spectrum-preserving deformations. I, Comm. Math. Phys. 76 (1980), 65–116.
* [15] Fokas A.S., Its A.R., Kitaev A.V., Discrete Painlevé equations and their appearance in quantum gravity, Comm. Math. Phys. 142 (1991), 313–344.
* [16] Forrester P.J., Witte N.S., Bi-orthogonal polynomials on the unit circle, regular semi-classical weights and integrable systems, Constr. Approx. 24 (2006), 201–237, arXiv:math.CA/0412394.
* [17] Hastings S.P., McLeod J.B., A boundary value problem associated with the second Painlevé transcendent and the Korteweg–de Vries equation, Arch. Rational Mech. Anal. 73 (1980), 31–51.
* [18] Hisakado M., Unitary matrix models and Painlevé III, Modern Phys. Lett. A 11 (1996), 3001–3010, arXiv:hep-th/9609214.
* [19] Hisakado M., Wadati M., Matrix models of two-dimensional gravity and discrete Toda theory, Modern Phys. Lett. A 11 (1996), 1797–1806, arXiv:hep-th/9605175.
* [20] Its A.R., Kitaev A.V., Fokas A.S., An isomonodromy approach to the theory of two-dimensional quantum gravity, Russian Math. Surveys 45 (1990), 155–157.
* [21] Kimura T., Zahabi A., Universal edge scaling in random partitions, Lett. Math. Phys. 111 (2021), 48, 16 pages, arXiv:2012.06424.
* [22] Le Doussal P., Majumdar S.N., Schehr G., Multicritical edge statistics for the momenta of fermions in nonharmonic traps, Phys. Rev. Lett. 121 (2018), 030603, 7 pages, arXiv:1802.06436.
* [23] Okounkov A., Infinite wedge and random partitions, Selecta Math. (N.S.) 7 (2001), 57–81, arXiv:math.RT/9907127.
* [24] Painlevé P., Mémoire sur les équations différentielles dont l’intégrale générale est uniforme, Bull. Soc. Math. France 28 (1900), 201–261.
* [25] Periwal V., Shevitz D., Exactly solvable unitary matrix models: multicritical potentials and correlations, Nuclear Phys. B 344 (1990), 731–746.
* [26] Ramani A., Grammaticos B., Hietarinta J., Discrete versions of the Painlevé equations, Phys. Rev. Lett. 67 (1991), 1829–1832.
* [27] Schensted C., Longest increasing and decreasing subsequences, Canadian J. Math. 13 (1961), 179–191.
* [28] Tracy C.A., Widom H., Fredholm determinants, differential equations and matrix models, Comm. Math. Phys. 163 (1994), 33–72, arXiv:hep-th/9306042.
* [29] Tracy C.A., Widom H., Level-spacing distributions and the Airy kernel, Comm. Math. Phys. 159 (1994), 151–174, arXiv:hep-th/9211141.
# Light rings in static and extremal black holes
Pedro Bargueño<EMAIL_ADDRESS>Departamento de Física Aplicada,
Universidad de Alicante, Campus de San Vicente del Raspeig, E-03690 Alicante,
Spain
###### Abstract
In this work we establish some results concerning the existence of external
light rings in extremal black hole spacetimes through the Newman-Penrose
formalism. Specifically, assuming conformal flatness, staticity and the null
energy condition, we show that a sufficient condition for the existence of
external light rings is $R<2K_{G}$, where $R$, the curvature scalar of the
spacetime and $K_{G}$, the Gaussian curvature of a spacelike two-surface, are
both evaluated at the outermost event horizon, which can be endowed with
spherical, hyperbolic or planar geometry. Our results are valid for any metric
gravity theory where photons follow null geodesics.
## I Introduction
The ringdown and shadow observables are of fundamental importance to provide
information on black hole geometry, which has been recently revealed by
gravitational wave observations Abbott1 ; Abbott2 and shadow images EHT1 ;
EHT2 ; EHT3 . They are both intimately connected to a special set of bound
null orbits for test particles Cardoso2016 ; Cunha2018 which, when planar,
are known as light rings, an extreme form of light deflection consisting of
closed paths.
The existence of null circular geodesics in generic (non-extremal)
asymptotically flat black holes was proved in Ref. Hod2013 for spherically
symmetric hairy configurations and for stationary axisymmetric black hole
spacetimes Cunha2020 . The findings presented in Ref. Cunha2020 were extended
to static and spherically symmetric black holes not only with asymptotically
flat behavior, but also with (A)dS asymptotics Wei2020 and later to a general
static warped product spacetime Koga2021 . A subsequent extension for light
rings in a stationary spacetime with an ergoregion was presented in Ghosh2021
. These recent works, mainly based on topological and/or effective potential
techniques, were recently followed by a more geometric approach for
spherically symmetric, static and non-extremal spacetimes, based on the Gauss
and geodesic curvatures of the optical metric Qiao2022a ; Qiao2022b ;
Cunha2022 . Interestingly, although some universal properties of light rings
for stationary and axisymmetric spacetimes, including the extension to the
extremal case, were recently reported Guo2021 , the static case remains
essentially open. (Note added: after the completion of this work, the existence of
external null circular geodesics in spherically symmetric black holes has been
reported by Y. Peng in arXiv:2211.14463; in addition, S. Hod has shown in
arXiv:2211.15983 that spherically symmetric extremal black holes possess at
least one external light ring when the dominant energy condition and Einstein’s
field equations are assumed.) In this sense, we would like to mention that the
problem of the existence of light rings in static, spherically symmetric and
asymptotically flat extremal black holes was considered in Ref. Hod2022
within the framework of General Relativity, obtaining that it crucially
depends on the sign of the tangential pressure of the matter sector.
In the present work we shall focus our attention on light rings for general
static and extremal black holes with different horizon topologies and
Minkowskian or (A)dS asymptotics, irrespective of the underlying gravitational
theory. Therefore, our results will be valid for theories beyond General
Relativity. The manuscript is organized as follows: section II is devoted to
prove our main results using the Newman-Penrose formalism, whose essentials
are introduced at this point. Discussions and final remarks are left to
section III.
## II Light rings through the Newman-Penrose calculus
Here we will follow Penrose and Rindler’s conventions Penrose1984 . Let us
start from a spherically symmetric and static geometry, written as
$ds^{2}=f(r)dt^{2}-g(r)dr^{2}-r^{2}d\Omega^{2}$, where
$d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}$. After choosing the
following null tetrad
$\displaystyle l^{\mu}$ $\displaystyle=$
$\displaystyle\left(\frac{1}{\sqrt{2f}},-\frac{1}{\sqrt{2g}},0,0\right),$
$\displaystyle n^{\mu}$ $\displaystyle=$
$\displaystyle\left(\frac{1}{\sqrt{2f}},\frac{1}{\sqrt{2g}},0,0\right),$
$\displaystyle m^{\mu}$ $\displaystyle=$
$\displaystyle\left(0,0,-\frac{1}{\sqrt{2}r},\frac{\textrm{i}\csc\theta}{\sqrt{2}r}\right),$
(1)
where $l_{\mu}n^{\mu}=1$ and $m_{\mu}\bar{m}^{\mu}=-1$, with the bar denoting
complex conjugation, the only non–vanishing Newman-Penrose scalars for the
considered spacetime (which will be either Petrov type D or O due to spherical
symmetry) are
$\displaystyle\Psi_{2}$ $\displaystyle=$ $\displaystyle
C_{pqrs}l^{p}m^{q}\bar{m}^{r}n^{s}$ $\displaystyle\Phi_{11}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}R_{ab}l^{a}n^{b}+3\Lambda$ $\displaystyle\Phi_{00}$
$\displaystyle=$ $\displaystyle-\frac{1}{2}R_{ab}l^{a}l^{b}$
$\displaystyle\Phi_{22}$ $\displaystyle=$
$\displaystyle-\frac{1}{2}R_{ab}n^{a}n^{b}$ $\displaystyle\Lambda$
$\displaystyle=$ $\displaystyle\frac{R}{24},$
where $C_{abcd}$ and $R_{ab}$ stand for the Weyl and Ricci curvatures,
respectively, and $R=g^{ab}R_{ab}$.
In particular, for the geometry under consideration, we obtain
$\displaystyle\Psi_{2}$ $\displaystyle=$
$\displaystyle\frac{f^{\prime\prime}}{12fg}-\frac{f^{\prime}g^{\prime}}{24fg^{2}}-\frac{f^{\prime
2}}{24f^{2}g}-\frac{f^{\prime}}{12rfg}+\frac{g^{\prime}}{12rg^{2}}+\frac{1}{6r^{2}g}-\frac{1}{6r^{2}}$
$\displaystyle\Phi_{11}$ $\displaystyle=$
$\displaystyle\frac{f^{\prime\prime}}{8fg}-\frac{f^{\prime}g^{\prime}}{16fg^{2}}-\frac{f^{\prime
2}}{16f^{2}g}-\frac{1}{4r^{2}g}+\frac{1}{4r^{2}}$ $\displaystyle R$
$\displaystyle=$
$\displaystyle-\frac{f^{\prime\prime}}{fg}+\frac{f^{\prime}g^{\prime}}{2fg^{2}}+\frac{f^{\prime
2}}{2f^{2}g}-\frac{2f^{\prime}}{rfg}+\frac{2g^{\prime}}{rg^{2}}-\frac{2}{r^{2}g}+\frac{2}{r^{2}}$
$\displaystyle\Phi_{00}$ $\displaystyle=$
$\displaystyle\frac{\Phi_{22}}{16}=\frac{4\left(gf^{\prime}+fg^{\prime}\right)}{rfg^{2}},$
(3)
where the prime denotes derivative w. r. t. the radial coordinate.
Interestingly, in the case $\Phi_{00}=\Phi_{22}=0$, which implies $f(r)g(r)=A$
(we can always take $A=1$ by a re-parametrization of the time coordinate), we
can solve for the metric potential and its derivatives in terms of the
Newman-Penrose scalars, obtaining
$\displaystyle f$ $\displaystyle=$ $\displaystyle
1-2r^{2}\left(\Lambda+\Phi_{11}-\Psi_{2}\right)$ $\displaystyle f^{\prime}$
$\displaystyle=$ $\displaystyle-2r\left(2\Lambda+\Psi_{2}\right)$
$\displaystyle f^{\prime\prime}$ $\displaystyle=$ $\displaystyle
4\left(-\Lambda+\Phi_{11}+\Psi_{2}\right).$ (4)
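The algebraic step from Eqs. (3) to Eqs. (4) can be verified symbolically. The sketch below takes a generic $f(r)$ with $g=1/f$, i.e., the $\Phi_{00}=0$ case after setting $A=1$, builds $\Psi_{2}$, $\Phi_{11}$ and $R$ from Eqs. (3), and checks that the three identities of Eq. (4) reduce to zero:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')(r)
g = 1/f                      # Phi_00 = 0 case with A = 1, so f g = 1
fp, gp = sp.diff(f, r), sp.diff(g, r)

Psi2 = (sp.diff(f, r, 2)/(12*f*g) - fp*gp/(24*f*g**2) - fp**2/(24*f**2*g)
        - fp/(12*r*f*g) + gp/(12*r*g**2) + 1/(6*r**2*g) - 1/(6*r**2))
Phi11 = (sp.diff(f, r, 2)/(8*f*g) - fp*gp/(16*f*g**2) - fp**2/(16*f**2*g)
         - 1/(4*r**2*g) + 1/(4*r**2))
R = (-sp.diff(f, r, 2)/(f*g) + fp*gp/(2*f*g**2) + fp**2/(2*f**2*g)
     - 2*fp/(r*f*g) + 2*gp/(r*g**2) - 2/(r**2*g) + 2/r**2)
Lam = R/24

# the three identities of Eqs. (4)
print(sp.simplify(1 - 2*r**2*(Lam + Phi11 - Psi2) - f))            # 0
print(sp.simplify(-2*r*(2*Lam + Psi2) - sp.diff(f, r)))            # 0
print(sp.simplify(4*(-Lam + Phi11 + Psi2) - sp.diff(f, r, 2)))     # 0
```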
At this point, let us introduce the light ring condition as follows: a black
hole has an external light ring located at $r_{\gamma}$, with
$r_{\gamma}>r_{h}$, where $r_{h}$ is the location of the outermost event
horizon, when
$D(r_{\gamma})=r_{\gamma}f^{\prime}(r_{\gamma})-2f(r_{\gamma})=0.$ (5)
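As a quick illustration of condition (5) (a sketch; the metric functions below are the standard Schwarzschild and Schwarzschild-(A)dS potentials, introduced here only as examples), solving $D(r)=0$ recovers the familiar photon sphere $r_{\gamma}=3M$, and the cosmological-constant term drops out of $D$ altogether:

```python
import sympy as sp

r, M, L = sp.symbols('r M Lambda', positive=True)

def light_rings(f):
    """Solve D(r) = r f'(r) - 2 f(r) = 0, Eq. (5)."""
    return sp.solve(r*sp.diff(f, r) - 2*f, r)

print(light_rings(1 - 2*M/r))               # Schwarzschild: [3*M]
print(light_rings(1 - 2*M/r - L*r**2/3))    # Schwarzschild-(A)dS: [3*M] as well
```

The Lambda-independence of the second result is consistent with the remark below that (A)dS asymptotics do not spoil the light-ring analysis.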
For our purposes, we rewrite the $D$-function appearing in Eq. (5) as
$D(r)=-2+2r^{2}(2\Phi_{11}-3\Psi_{2}),$ (6)
or
$\displaystyle\frac{D}{2r^{2}}$ $\displaystyle=$ $\displaystyle-
K_{G}+2\Phi_{11}-3\Psi_{2}$ (7) $\displaystyle=$ $\displaystyle-
K_{G}+2\left(\Phi_{11}+3\Lambda\right)-3\left(\Psi_{2}+2\Lambda\right),$
where $K_{G}=\frac{1}{r^{2}}$ stands for the Gaussian curvature of the angular
sector of the spacetime (which is a 2-sphere in the case here considered).
Let us remark that the function $D(r)$ goes to $-2$ not only for an
asymptotically flat spacetime, but also for (A)dS asymptotics (which are
conformally flat), where $\Psi_{2}=0$.
The first of Eqs. (4), which we refer to as the Penrose-Rindler equation, will
be useful throughout the manuscript. It can be expressed as
$\displaystyle\frac{f}{2r^{2}}$ $\displaystyle=$
$\displaystyle\frac{K_{G}}{2}-\Lambda-\Phi_{11}+\Psi_{2}$ (8) $\displaystyle=$
$\displaystyle\frac{K_{G}}{2}-\left(\Phi_{11}+3\Lambda\right)+\left(\Psi_{2}+2\Lambda\right).$
At this point, a couple of comments are in order:
* •
The dominant (null) energy condition, together with Einstein’s equations,
imply $\Phi_{11}+3\Lambda\geq 0$ ($\Phi_{00}\geq 0$) Witt1973 .
* •
$\frac{K_{G}}{2}-\left(\Phi_{11}+3\Lambda\right)+\left(\Psi_{2}+2\Lambda\right)=0$
for both extremal and non-extremal black hole event horizons.
* •
An extremal black hole is characterized by $f^{\prime}(r_{h})=0$ which, in the
$\Phi_{00}=0$ case, is equivalent to
$\left(\Psi_{2}+2\Lambda\right)|_{r_{h}}=0$.
* •
$\frac{K_{G}}{2}-\left(\Phi_{11}+3\Lambda\right)=0$ for an extremal black hole
horizon.
Therefore, we can conclude that
$D(r_{h})=-2r_{h}^{2}\left(\Psi_{2}+2\Lambda\right)|_{r_{h}}>0$ for a
non-extremal black hole. Then, taking into account the $D\rightarrow-2$
asymptotic limit, at least one external light ring is shown to exist. This
result generalizes previous findings Cunha2022 by including (A)dS
asymptotics.
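The existence argument can also be illustrated numerically. For a non-extremal Reissner-Nordström black hole (a concrete example chosen for illustration; the parameter values below are assumptions), $D$ is positive on the horizon and tends to $-2$ at infinity, so a sign change locates the light ring:

```python
import math

# Illustrative non-extremal Reissner-Nordstrom parameters (Q < M);
# this concrete example is an assumption, not taken from the text.
M, Q = 1.0, 0.8
f  = lambda r: 1 - 2*M/r + Q**2/r**2
fp = lambda r: 2*M/r**2 - 2*Q**2/r**3
D  = lambda r: r*fp(r) - 2*f(r)          # D-function of Eq. (5)

rh = M + math.sqrt(M**2 - Q**2)          # outermost event horizon
assert D(rh) > 0                         # D > 0 on a non-extremal horizon
assert D(1e6) < -1.9                     # D -> -2 far away

# a sign change guarantees at least one root: locate it by bisection
a, b = rh, 1e6
for _ in range(200):
    mid = 0.5*(a + b)
    a, b = (mid, b) if D(mid) > 0 else (a, mid)
r_gamma = 0.5*(a + b)

# agrees with the known RN photon sphere (3M + sqrt(9M^2 - 8Q^2))/2
assert abs(r_gamma - (3*M + math.sqrt(9*M**2 - 8*Q**2))/2) < 1e-6
```

The same root-bracketing logic works for any static potential $f(r)$ with $D(r_{h})>0$ and $D\rightarrow-2$.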
Note that the condition $D^{\prime}(r_{h})>0$ implies the existence of at
least one external light ring for extremal black holes, which satisfy
$D(r_{h})=0$. In fact, a straightforward calculation reveals that, for an
extremal black hole horizon,
$\displaystyle D^{\prime}(r_{h})$ $\displaystyle=$ $\displaystyle
2r_{h}\left(3\Psi_{2}+2\Phi_{11}\right)|_{r_{h}}$ (9) $\displaystyle=$
$\displaystyle 4r_{h}\left(\Phi_{11}-3\Lambda\right)|_{r_{h}}$
$\displaystyle=$ $\displaystyle r_{h}\left(2K_{G}-R\right)|_{r_{h}}.$
Therefore, we obtain the following
Theorem 1. Let us consider an asymptotically conformally flat, spherically
symmetric, static and extremal black hole spacetime with $\Phi_{00}=0$, a
condition compatible with the dominant energy condition that characterizes the
external matter fields in non-vacuum extremal black hole spacetimes. If
$R(r_{h})<2K_{G}(r_{h})$, then there is at least one external light ring.
In particular, the existence of external light rings for extremal black holes
with $R(r_{h})\leq 0$ is guaranteed.
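Theorem 1 can be checked on the extremal Reissner-Nordström solution (used here purely as an illustration: $f=(1-M/r)^{2}$, $r_{h}=M$, and the Ricci scalar vanishes, so $R=0<2K_{G}(r_{h})$). Indeed $D(r_{h})=0$, $D^{\prime}(r_{h})>0$, and an external light ring sits at $r=2M$:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

# Extremal Reissner-Nordstrom (Q = M), an illustrative test case:
# f = (1 - M/r)^2, outermost (degenerate) horizon at r_h = M.
f = (1 - M/r)**2
D = sp.factor(r*sp.diff(f, r) - 2*f)      # D-function of Eq. (5)

assert D.subs(r, M) == 0                                   # D(r_h) = 0
assert sp.simplify(sp.diff(D, r).subs(r, M) - 2/M) == 0    # D'(r_h) = 2/M > 0
assert 2*M in sp.solve(sp.Eq(D, 0), r)                     # light ring at 2M
```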
Our results can be extended to the $\Phi_{00}\neq 0$ case. Recall that
$f^{\prime}(r_{h})=0$ also for an extremal black hole. In this case, we get
from Eqs. (II) that, at the horizon of an extremal black hole:
$\displaystyle\left(\Phi_{11}+3\Lambda\right)$ $\displaystyle|_{r_{h}}=$
$\displaystyle\frac{1}{2r_{h}^{2}}+\frac{g^{\prime}(r_{h})}{4r_{h}g(r_{h})^{2}}$
$\displaystyle\Phi_{00}|_{r_{h}}$ $\displaystyle=$
$\displaystyle\frac{4g^{\prime}(r_{h})}{r_{h}g(r_{h})^{2}}$ $\displaystyle
R|_{r_{h}}$ $\displaystyle=$
$\displaystyle\frac{2}{r_{h}^{2}}+\frac{2g^{\prime}(r_{h})}{r_{h}g(r_{h})^{2}}-\frac{f^{\prime\prime}(r_{h})}{f(r_{h})g(r_{h})},$
(10)
and, therefore,
$D^{\prime}(r_{h})=r_{h}f^{\prime\prime}(r_{h})=r_{h}\left(2K_{G}+\frac{\Phi_{00}}{2}-R\right)\Big{|}_{r_{h}}.$
(11)
At this point, we take advantage of a result by Hayward concerning trapping
horizons Hayward1994a ; Hayward1994b which, in the spherically symmetric case,
can be re-stated as follows:
Signature law. If the null energy condition holds on a spherically symmetric
trapping horizon, $r_{t}$, the horizon is null if and only if
$\Phi_{00}(r_{t})=0$.
But, in the spherically symmetric and static case, both event and trapping
horizons coincide Nielsen2006 ; Nielsen2010 , $r_{t}=r_{h}$. Therefore, as any
event horizon is null, the signature law implies that, if the null energy
condition holds on $r_{h}$, then $\Phi_{00}(r_{h})=0$. Therefore, we can state
the following
Theorem 2. Let us consider an asymptotically conformally flat, spherically
symmetric, static and extremal black hole spacetime. If the null energy
condition holds on the outermost event horizon, $r_{h}$, and
$R(r_{h})<2K_{G}(r_{h})$, then there is at least one external light ring. In
particular, the existence of this external light ring is guaranteed if
$R(r_{h})\leq 0$.
Finally, we would like to point out that our results can be easily extended to
the case of hyperbolic and planar horizons. The line element is written as
$ds^{2}=f(r)dt^{2}-g(r)dr^{2}-r^{2}\left(d\theta^{2}+\gamma^{2}(\theta)d\phi^{2}\right)$,
where $\gamma(\theta)=\{1,\sinh\theta,\sin\theta\}$ for planar, hyperbolic
and spherical horizons, respectively. For these metrics, both $l^{\mu}$ and
$n^{\mu}$ are independent of the geometry of the angular sector, but
$m^{\mu}=\left(0,0,-\frac{1}{\sqrt{2}r},\frac{\textrm{i}}{\gamma(\theta)\sqrt{2}r}\right)$.
Within this general situation, a straightforward calculation reveals that Eqs.
(II) can be expressed as
$\displaystyle f$ $\displaystyle=$
$\displaystyle\alpha-2r^{2}\left(\Lambda+\Phi_{11}-\beta\Psi_{2}\right)$
$\displaystyle f^{\prime}$ $\displaystyle=$
$\displaystyle-2r\left(2\Lambda+\beta\Psi_{2}\right)$ $\displaystyle
f^{\prime\prime}$ $\displaystyle=$ $\displaystyle
4\left(-\Lambda+\Phi_{11}+\beta\Psi_{2}\right),$ (12)
where $(\alpha,\beta)$ is $(0,1)$, $(-1,1)$ and $(1,1)$ for planar, hyperbolic
and spherical horizons, respectively. Therefore, following the same arguments
we can state the following
Theorem 3. Let us consider an asymptotically conformally flat, static and
extremal black hole spacetime. If the null energy condition holds on the
outermost event horizon, $r_{h}$, and
$R(r_{h})<2K_{G}(r_{h})=\frac{2\alpha}{r_{h}^{2}}$, then there is at least one
external light ring.
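Eqs. (12) can be checked against the topological Schwarzschild-AdS potential $f=\alpha-2M/r-\lambda r^{2}/3$. The NP inputs used below ($\Psi_{2}=-M/r^{3}$, NP $\Lambda=\lambda/6$, $\Phi_{11}=0$, $\beta=1$) are standard values assumed for the illustration, not quoted from the text:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
lam, alpha = sp.symbols('lam alpha', real=True)

# Assumed NP inputs for topological Schwarzschild-AdS (standard
# values, used only for this illustration): Psi_2 = -M/r^3,
# NP Lambda = lam/6, Phi_11 = 0, beta = 1.
Psi2, Lam, Phi11, beta = -M/r**3, lam/6, 0, 1

f_np = alpha - 2*r**2*(Lam + Phi11 - beta*Psi2)   # first of Eqs. (12)
f = alpha - 2*M/r - lam*r**2/3                    # known solution

assert sp.simplify(f_np - f) == 0
# the remaining relations of Eqs. (12) are just r-derivatives of f
assert sp.simplify(-2*r*(2*Lam + beta*Psi2) - sp.diff(f, r)) == 0
assert sp.simplify(4*(-Lam + Phi11 + beta*Psi2) - sp.diff(f, r, 2)) == 0
```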
## III Discussion and final remarks
At this point, the results we have obtained are valid for any metric theory of
gravity where photons follow null geodesics. In the particular case of General
Relativity, including a non-vanishing cosmological constant, $\lambda$, the
aforementioned sufficient condition reads
$T<\frac{K_{G}}{4\pi}-\frac{\lambda}{2\pi},$ (13)
where $T$ stands for the trace of the energy-momentum tensor. Note that the
trace energy condition (the assertion that the trace of the stress-energy
tensor should, in mostly minus signature, be non-negative) has by now been
completely abandoned and is no longer cited in the literature Barcelo2002 . In
this sense, there are no restrictions of this kind with respect to Eq. (13).
Finally, we can easily recover a very recent result Hod2022 concerning light
rings in extremal, static and spherically symmetric black holes within the
framework of General Relativity under the dominant energy condition.
Specifically, if the dominant energy condition holds, then
$\Phi_{00}(r_{h})=0$. Then, if the Einstein equations are assumed,
$2K_{G}(r_{h})-R(r_{h})>0$ implies
$-8\pi\Big{(}\rho(r_{h})-p_{r}(r_{h})-2p_{t}(r_{h})\Big{)}<\frac{2}{r_{h}^{2}}$.
Note that $\Phi_{00}(r_{h})=0$ implies, by Einstein’s equations,
$\rho(r_{h})+p_{r}(r_{h})=0$ and, therefore, we get
$p_{r}(r_{h})+p_{t}(r_{h})<\frac{1}{8\pi r_{h}^{2}}$. But, as shown in Ref.
Mayo1996 , $1-8\pi r_{h}^{2}p_{r}(r_{h})=0$ for an extremal black hole. Then,
$R(r_{h})<2K_{G}(r_{h})$ implies $p_{t}(r_{h})<0$, which coincides with the
main result of Ref. Hod2022 when our mostly minus signature is switched to
the mostly plus choice of Hod2022 .
Summarizing, we have successfully employed the Newman-Penrose formalism to
tackle the problem of the existence of external light rings in static and
extremal black holes with planar, hyperbolic and spherical horizons, deriving
a sufficient condition expressed in terms of the scalar curvature of the whole
spacetime and the Gaussian curvature of the horizon. We have extended previous
results in spherical symmetry by including conformally flat asymptotics and,
although our results do not depend on the underlying metric gravitational
theory, we have recovered very recent results concerning extremal black holes
within General Relativity using the Newman-Penrose formalism.
## IV Acknowledgements
P. B. is funded by the Beatriz Galindo contract BEAGAL 18/00207, Spain. P. B.
acknowledges Anaís, Lucía, Inés and Ana for continuous support.
## References
* (1) B. P. Abbott et al. (Virgo and LIGO Scientific Collaborations), Phys. Rev. Lett. 116, 061102 (2016).
* (2) B. P. Abbott et al. (LIGO Scientific and Virgo Collaborations), Phys. Rev. X 9, 031040 (2019).
* (3) K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L1 (2019).
* (4) K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L5 (2019).
* (5) K. Akiyama et al. (Event Horizon Telescope Collaboration), Astrophys. J. 875, L6 (2019).
* (6) V. Cardoso, E. Franzin, and P. Pani, Phys. Rev. Lett. 116, 171101 (2016).
* (7) P. V. P. Cunha and C. A. R. Herdeiro, Gen. Relativ. Gravit. 50, 42 (2018).
* (8) S. Hod, Phys. Lett. B 727, 345 (2013).
* (9) P. V. P. Cunha and C. A. R. Herdeiro, Phys. Rev. Lett. 124, 181101 (2020).
* (10) S. -W. Wei, Phys. Rev. D, 102, 064039 (2020).
* (11) Y. Koga, T. Igata and K. Nakashi, Phys. Rev. D 103, 044003 (2021).
* (12) R. Ghosh and S. Sarkar, Phys. Rev. D 104, 044019 (2021).
* (13) C. -K. Qiao and M. Li, Phys. Rev. D 106, L021501 (2022).
* (14) C. -K. Qiao, Phys. Rev. D 106, 084060 (2022).
* (15) P. V. P. Cunha et al., Class. Quantum Grav. 39, 225007 (2022).
* (16) M. Guo and S. Gao, Phys. Rev. D 103, 104031 (2021).
* (17) S. Hod, Eur. Phys. J. C 82, 663 (2022).
* (18) R. Penrose and W. Rindler, Spinors and Space-Time vol 1. Two-Spinor Calculus and Relativistic Fields (Cambridge: Cambridge University Press, 1984).
* (19) S. W. Hawking, in Black Holes, Les Astres Occlus, edited by C. DeWitt and B. S. DeWitt (Gordon and Breach Science Publishers, 1973).
* (20) S. A. Hayward, Phys. Rev. D 49, 6467 (1994).
* (21) S. A. Hayward, Class. Quantum Grav. 11, 3025 (1994).
* (22) A. B. Nielsen and M. Visser, Class. Quantum Grav. 23, 4637 (2006).
* (23) A. B. Nielsen, Class. Quantum Grav. 27, 245016 (2010).
* (24) C. Barcelo and M. Visser, Int. J. Mod. Phys. D 11, 1553 (2002).
* (25) A. E. Mayo and J. D. Bekenstein, Phys. Rev. D 54, 5059 (1996).
Fisica (Physics), Scuola di dottorato Vito Volterra, XXXIV cycle, 2022.
Supervisor: Prof. Paolo Pani. Defense date: 22 February 2022.
Prof. Alfredo Urbano, Prof. Massimo Bianchi, Prof. Enrico Barausse
# Probing new physics on the horizon of black holes with gravitational waves
Elisa Maggio
###### Abstract
Black holes are the most compact objects in the Universe. According to general
relativity, black holes have a horizon that hides a singularity where
Einstein’s theory breaks down. Recently, gravitational waves opened the
possibility to probe the existence of horizons and investigate the nature of
compact objects. This is of particular interest given some quantum-gravity
models which predict the presence of horizonless and singularity-free compact
objects. Such exotic compact objects can emit a different gravitational-wave
signal relative to the black hole case. In this thesis, we analyze the
stability of horizonless compact objects, and derive a generic framework to
compute their characteristic oscillation frequencies. We provide an
analytical, physically-motivated template to search for the gravitational-wave
echoes emitted by these objects in the late-time postmerger signal. Finally,
we infer how extreme mass-ratio inspirals observable by future gravitational-
wave detectors will allow for model-independent tests of the black hole
paradigm.
###### Contents
1. 0 Preface
2. 1 Introduction
## Chapter 0 Preface
The work presented in this thesis has been carried out mainly at the Physics
Department of Sapienza University of Rome in the research group of gravity
theory and gravitational wave phenomenology. Part of this work was carried out
at the Consortium for Fundamental Physics, School of Mathematics and
Statistics, University of Sheffield, United Kingdom, and at the Centro de
Astrofísica e Gravitação (CENTRA), Instituto Superior Técnico, Universidade de
Lisboa, Portugal. I thank these institutions for their kind hospitality.
### List of publications
The work in this thesis was accomplished with different scientific
collaborations, whose members I kindly acknowledge.
* •
Chapter LABEL:chapter2 is the outcome of a collaboration with Paolo Pani and
Guilherme Raposo based on:
* E. Maggio, P. Pani, G. Raposo, “Testing the nature of dark compact objects with gravitational waves,” Invited chapter for C. Bambi, S. Katsanevas, K.D. Kokkotas (editors), Handbook of Gravitational Wave Astronomy, Springer, Singapore (2021), arXiv:2105.06410, https://doi.org/10.1007/978-981-15-4702-7_29-1.
* •
Chapter LABEL:chapter3 is the outcome of a collaboration with Luca
Buoninfante, Anupam Mazumdar and Paolo Pani based on:
* E. Maggio, L. Buoninfante, A. Mazumdar, P. Pani, “How does a dark compact object ringdown?,” Phys. Rev. D 102, 064053 (2020),
arXiv:2006.14628.
* •
Chapter LABEL:chapter4 is the outcome of a collaboration with Vitor Cardoso,
Sam Dolan, Valeria Ferrari and Paolo Pani based on:
* E. Maggio, P. Pani, and V. Ferrari, “Exotic compact objects and how to quench their ergoregion instability,” Phys. Rev. D 96, 104047 (2017), arXiv:1703.03696.
* E. Maggio, V. Cardoso, S. Dolan, and P. Pani, “Ergoregion instability of exotic compact objects: electromagnetic and gravitational perturbations and the role of absorption,” Phys. Rev. D 99, 064007 (2019), arXiv:1807.08840.
* •
Chapter LABEL:chapter5 is the outcome of a collaboration with Swetha Bhagwat,
Paolo Pani and Adriano Testa based on:
* E. Maggio, A. Testa, S. Bhagwat, and P. Pani, “Analytical model for gravitational-wave echoes from spinning remnants,” Phys. Rev. D 100, 064056 (2019), arXiv:1907.03091.
* •
Chapter LABEL:chapter6 is the outcome of a collaboration with Maarten van de
Meent and Paolo Pani based on:
* E. Maggio, M. van de Meent, P. Pani, “Extreme mass-ratio inspirals around a spinning horizonless compact object,” Phys. Rev. D in press (2021), arXiv:2106.07195.
As a part of the activities during my PhD, I served as a member of the LISA
Consortium, being involved in the writing of the LISA Fundamental Physics and
the LISA Waveform White Papers, the LISA Figure of Merit analysis, and the
LISA Early Career Scientists (LECS) group. Part of the outcome of these
activities is in preparation or has been submitted for publication and is not
included in this thesis.
### Conventions
In this work, geometrized units, $G=c=1$, are adopted where $G$ is the
gravitational constant, and $c$ is the speed of light.
The signature of the metric adopts the $(-,+,+,+)$ convention. The Greek
letters run over the four-dimensional spacetime indices. The comma stands for
an ordinary derivative, and the semi-colon stands for a covariant derivative.
$\mathbf{M}^{*}$ is the complex conjugate of a matrix, and $\mathbf{M}^{\top}$
is the transpose of a matrix. $\mathfrak{R}(n)$ and $\mathfrak{I}(n)$ are the
real and the imaginary part of a number, respectively.
### Abbreviations
BH | Black Hole
---|---
ECO | Exotic Compact Object
EMRI | Extreme Mass-Ratio Inspiral
ISCO | Innermost Stable Circular Orbit
LIGO | Laser Interferometer Gravitational-Wave Observatory
LISA | Laser Interferometer Space Antenna
GR | General Relativity
GW | Gravitational Wave
NS | Neutron Star
PN | Post-Newtonian
QNM | Quasi-Normal Mode
SNR | Signal-to-Noise Ratio
TH | Tidal Heating
ZAMO | Zero Angular Momentum Observer
## Chapter 1 Introduction
The landmark detection of gravitational waves (GWs) provides the unique
opportunity to test gravity in the strong-field regime and infer the nature of
astrophysical sources. So far, the ground-based detectors LIGO and Virgo have
detected ninety GW events from the coalescence of compact binaries
[LIGOScientific:2018mvr, LIGOScientific:2020ibl, LIGOScientific:2021usb,
LIGOScientific:2021djp]. These detections allowed us to observe for the first
time the coalescence of two black holes (BHs) and revealed that their masses
can be heavier than the ones observed in the electromagnetic spectrum
[LIGOScientific:2016lio, LIGOScientific:2020ibl]. Recent important discoveries
include the first multi-messenger observation of a binary neutron star (NS)
merger [LIGOScientific:2017vwq, LIGOScientific:2017ync] and the observation of
the formation of an intermediate-mass BH [LIGOScientific:2020iuh].
Furthermore, GWs provide a new channel for probing Einstein’s theory of
gravity in a regime inaccessible to traditional astronomical observations,
namely the strong-field and highly dynamical one. Several consistency tests of
the GW data with the predictions of general relativity (GR) have been
performed. No evidence for new physics has been reported within current
measurement accuracies [LIGOScientific:2016lio, LIGOScientific:2019fpa,
LIGOScientific:2020tif, LIGOScientific:2021sio].
The GW signal emitted by compact binary coalescences is characterized by three
main stages: the _inspiral_ phase, when the two bodies orbit around each other
and the emission of GWs makes the orbit shrink, the _merger_ phase, when the
two bodies coalesce, and the _ringdown_ when the final remnant relaxes to an
equilibrium solution. The study of the different stages of the GW signal
allows us to infer the properties of the compact objects and understand their
nature.
Several extensions of GR predict the existence of regular and horizonless
compact objects, also known as exotic compact objects (ECOs) [Giudice:2016zpa,
Cardoso:2019rvt]. Indeed, the presence of the horizon poses some theoretical
problems, the most notable ones being the existence of a singularity in the
black-hole interior and the Hawking information loss paradox [Mathur:2009hf].
ECOs can mimic the features of BHs through electromagnetic observations since
they can be as compact as BHs [Abramowicz:2002vt]. Indeed, the supermassive
object at the center of the M87 galaxy observed by the Event Horizon Telescope
only poorly constrains a few models of ECOs [EventHorizonTelescope:2019pgp].
Furthermore, current GW observations do not exclude ECOs, which could
potentially explain events falling in the mass gap between NSs and BHs and in
the mass gap due to pair-instability supernova processes
[LIGOScientific:2020iuh, Bustillo:2020syj, LIGOScientific:2020zkf].
One way to distinguish ECOs from BHs is by analyzing the ringdown stage of a
compact binary coalescence. The ringdown is dominated by the complex
characteristic frequencies – the so-called quasi-normal modes (QNMs) – of the
remnant, that differ dramatically if the latter is a BH or an ECO
[Cardoso:2016rao]. By inferring the QNMs of the remnant, we can test whether
they are compatible with the predicted spectrum for a BH.
Current observations of the fundamental QNM in the ringdown of binary
coalescences are compatible with remnant BHs as predicted by GR
[LIGOScientific:2019fpa, LIGOScientific:2020tif, LIGOScientific:2021sio];
however, the characterization of the remnant is still an open problem. The no-
hair theorems establish that BHs in GR are determined uniquely by two
parameters, i.e., their mass and angular momentum [Carter:1971zc,
Robinson:1975bv]. Therefore, the measurement of one complex QNM allows us only
to estimate the parameters of the BH. A test of the BH paradigm would require
the identification of at least two complex QNM frequencies. Louder GW events,
to be collected as detector sensitivity improves, and more sophisticated
parametrized waveforms will allow us to extract more information about the
remnant.
If the remnant of a merger is an ECO that is almost as compact as a BH, the
prompt ringdown signal would be nearly indistinguishable from that of a BH
[Cardoso:2016rao]. A characteristic fingerprint of ECOs would be the
appearance of a modulated train of GW echoes at late times due to the absence
of the horizon [Cardoso:2016rao, Maggio:2019zyv]. Tentative evidence for GW
echoes in LIGO/Virgo data has been reported in the last few years
[Abedi:2016hgu], but recent independent searches argued that the statistical
significance for GW echoes is consistent with noise [Westerweck:2017hus,
LIGOScientific:2020tif, LIGOScientific:2021sio].
Besides ECO fingerprints in the GW emission, in this thesis we analyze the
astrophysical viability of ECOs as BH alternatives. Indeed, spinning
horizonless compact objects are prone to the so-called ergoregion instability
when spinning sufficiently fast [Friedman:1978wla, 10.1093/mnras/282.2.580,
Kokkotas:2002sf]. The endpoint of the instability could be a slowly-spinning
ECO [Cardoso:2014sna, Brito:2015oca] or dissipation within the object could
lead to a stable remnant [Maggio:2017ivp, Maggio:2018ivz, Wang:2019rcf]. If
confirmed, the ergoregion instability could provide a strong theoretical
argument in favor of the BH paradigm for which rapidly spinning compact
objects must have a horizon.
The prospect for detectability of new physics will improve in the future with
the next-generation detectors like the ground-based observatories Einstein
Telescope [Punturo:2010zz] and Cosmic Explorer [Reitze:2019iox], and the
space-based Laser Interferometer Space Antenna (LISA) [LISA:2017pwj]. In
particular, LISA is an extremely promising observatory of fundamental physics.
Planned for launch in 2034, LISA will detect GWs in a lower frequency band
than ground-based detectors. LISA will observe a plethora of astrophysical
sources, particularly extreme mass-ratio inspirals (EMRIs) in which a stellar-
mass object orbits around the supermassive object at the center of a galaxy
[Gair:2017ynp].
EMRIs are unique probes of the nature of supermassive compact objects. Since
LISA will observe inspirals that can last for years, the phase shift of the
waveform will be tracked with high precision and deviations from GR will be
measured accurately. During the inspiral around a horizonless supermassive
object, extra resonances would be excited leaving a characteristic imprint in
the waveform [Cardoso:2019nis, Maggio:2021uge]. Any evidence of partial
reflectivity at the surface of the object would also indicate a departure from
the classical BH picture [Datta:2019epe, Maggio:2021uge].
Within this broad and flourishing context, this thesis aims to investigate the
nature of compact objects and test the existence of horizons with GWs. This
work is organized as follows.
Chapter LABEL:chapter1 is dedicated to the tests of the BH paradigm that have
been currently performed using GWs. A review of the recent GW observations is
presented, and the stages of the gravitational waveform are analyzed. In
particular, the consistency tests of GR and the parametrized tests of
deviations from GR are described. Finally, the prospects of detecting
deviations from GR with next-generation detectors are assessed.
Chapter LABEL:chapter2 illustrates the theoretical motivations for studying
horizonless compact objects. A parametrized classification of horizonless
compact objects is presented depending on their deviations from the standard
BH picture. Some remarkable models of ECOs are reviewed, and their
phenomenology is compared to the BH case.
Chapter LABEL:chapter3 derives the GW signatures of static horizonless compact
objects, particularly their QNM spectrum in the ringdown. The system is
described by perturbation theory. The imposition of the boundary conditions
that describe the response of the compact object to perturbations requires
careful analysis. A numerical procedure for the derivation of the QNMs is
illustrated. The QNMs of horizonless compact objects are derived both for
remnants almost as compact as BHs and with smaller compactness. In the former
case, the presence of characteristic low frequencies in the QNM spectrum is
highlighted. In the latter case, an extended version of the BH membrane
paradigm is applied to derive model-independent deviations from the BH QNM
spectrum. Finally, current constraints on horizonless compact objects and
prospects of detectability are assessed.
Chapter LABEL:chapter4 analyzes spinning horizonless compact objects that are
prone to the ergoregion instability above a critical threshold of the spin.
The QNMs of spinning horizonless compact objects are derived. From the
analysis of the imaginary part of the QNMs, the conditions for which
horizonless compact objects are unstable are identified. An analytical
description of the QNMs in terms of superradiance is detailed, and ways of
quenching the ergoregion instability are provided.
Chapter LABEL:chapter5 describes the GW echoes that would be emitted after the
prompt ringdown in the case of a horizonless merger remnant. An analytical,
physically-motivated template for GW echoes is provided that can be
implemented to perform a matched-filter-based search for echoes. Finally, the
properties of GW echoes are analyzed, and the prospects of detection with
current and future detectors are assessed.
Chapter LABEL:chapter6 is devoted to the analysis of EMRIs with a central
horizonless compact object. During the inspiral, extra resonances are excited
when the orbital frequency matches the characteristic frequencies of the
central ECO. The impact of the resonances in the GW dephasing with respect to
the BH case is assessed. This analysis shows that LISA will be able to probe
the reflectivity of compact objects with unprecedented accuracy.
# Topological black holes in Einstein-Maxwell and
4D conformal gravities revisited
Tao Wang, Zhiqiang Zhang, Xiangqing Kong and Liu Zhao
School of Physics, Nankai University, Tianjin 300071, China
###### Abstract
The thermodynamics of charged topological black holes (TBHs) with different
horizon geometries in $d$-dimensional Einstein-Maxwell and 4-dimensional
conformal gravities is revisited using the restricted phase space formalism.
The concept of subsystems for black holes is introduced, which enables a
precise description for the thermodynamic behaviors of (non-compact) black
holes with infinitely large horizon area. The concrete behaviors can be
different for TBHs in the same underlying theory but with different horizon
geometries, or for those with the same horizon geometry but from different
underlying theories. The high and low temperature limits of the TBHs are also
considered, and the high temperature limits behave universally as low
temperature phonon gases. Finally, using the concept of subsystems, some
conceptual issues in the description for thermal fluctuations in black hole
systems are clarified, and the relative thermal fluctuations for finite
subsystems are also analyzed in some detail.
###### Contents
1. 1 Introduction
2. 2 Formalisms and subsystems of black hole thermodynamics
3. 3 TBHs in Einstein-Maxwell theory
1. 3.1 The metric and the black hole observables
2. 3.2 Subsystems and local thermodynamic variables
3. 3.3 Bounds over independent parameters
4. 3.4 (Non)existence of critical points and HP-like transitions
5. 3.5 Clapeyron and Ehrenfest equations for compact TBHs
6. 3.6 Local thermodynamic behaviors: a comparison between horizon geometries
7. 3.7 High and low temperature limits
4. 4 TBHs in $4d$ conformal gravity
1. 4.1 Thermodynamic Structure
2. 4.2 Local thermodynamic relations and high temperature limit
3. 4.3 Description of thermodynamic processes
5. 5 Fluctuations in subsystems
1. 5.1 The approach with Smoluchowski formula
2. 5.2 The approach with thermodynamic geometry
6. 6 Concluding remarks
## 1 Introduction
Black holes (BHs) in anti-de Sitter (AdS) spacetimes can have event horizons
with different topologies, which are known as TBHs [1, 2, 3, 4, 11, 5, 6, 7,
8, 9, 10]. The existence of TBHs with planar, cylindrical and toroidal
horizons in pure Einstein and Einstein-Maxwell theories was discussed in [1,
2, 3], and the thermodynamics for the toroidal AdS black hole is discussed in
[11]. More generally, the event horizons of TBHs in AdS spacetime can be
either compact or non-compact, or even be high genus Riemann surfaces [4, 7,
8]. For the simplest genus zero cases, the local horizon geometries can be
either maximally symmetric or not. The TBHs which we deal with in the present
work are those which have maximally symmetric event horizons with normalized
scalar curvatures $\epsilon=1,0,-1$, respectively, which in turn correspond to
spherical, planar or hyperbolic cases. Such TBHs in 4-dimensional Einstein-
Maxwell theory were found in [5, 7], and their higher dimensional counterparts
were presented in [10]. It appears that the existence of TBHs in AdS
spacetimes is a generic phenomenon, irrespective of the underlying gravity
models and the types of source fields. For instance, TBHs exist in pure
Einstein gravity [1, 2, 4, 6, 9], Einstein-Maxwell gravity [3, 5, 7, 8, 10],
Einstein-Maxwell-dilaton theory [12, 13], Einstein-Maxwell-Gauss-Bonnet and
Lovelock theories [14, 10], conformal gravity with a Maxwell source [15], etc.
Though one can compactify the transverse spaces for the planar and hyperbolic
cases in order to keep a finite size [7, 8, 16], a non-compactified TBH has an
infinite horizon “area”, such that the symbol $\omega_{d-2,\epsilon}$
representing the horizon area seems to make sense only formally. In this work,
we will deal with the non-compactified cases when $\epsilon=0,-1$ and try to
make perfect sense of the thermodynamics of such non-compact TBHs. Of course,
the compact TBHs with $\epsilon=1$ will also be considered altogether in order
to unify the approach and make comparison between the cases with different
choices of $\epsilon$.
In practice, the compact TBHs have attracted much more interest than the non-
compact ones, partly because the non-compact nature makes it harder to
understand the physical properties of the corresponding TBHs. For instance,
the Bekenstein-Hawking entropy formula points out that the entropy of BHs in
Einstein gravity has an area law. For non-compact TBHs, this amounts to an
infinite entropy. A potential solution to avoid the infinities is to adopt
various densities which involve finite ratios like $M/\omega_{d-2,\epsilon}$
and/or $Q/\omega_{d-2,\epsilon}$ where $M$ and $Q$ represent respectively the
(infinite) mass and charge for the non-compact TBHs. However, there is a
prerequisite for the density variables to make sense in thermodynamics, i.e.
the Euler homogeneity must hold. Without Euler homogeneity, the intensive
variables would become scale dependent, rendering the density variables ill-
defined. On the other hand, the whole logic for understanding standard
thermodynamic systems is based on the existence of finite subsystems. In
particular, the definition of thermodynamic equilibrium is based on the
concept of subsystem. In the study of black hole thermodynamics, however, the
concept of subsystem has not been addressed and put in a proper position
before. The restricted phase space (RPS) formalism [17, 18, 19, 20, 21] places
Euler homogeneity in a position of utmost importance, which enables us to
study the local thermodynamics of the non-compact TBHs. This formalism also
allows for an easy introduction of finite subsystems for the non-compact TBHs.
A detailed discussion on the importance of Euler homogeneity and the concept
of subsystems will be provided in Section 2.
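The role of Euler homogeneity can be made concrete with a toy first-order homogeneous mass function (purely illustrative; this is not the actual RPS equation of state of any black hole). First-order homogeneity, the Euler relation, and the scale independence of the intensive variables are all equivalent statements:

```python
import sympy as sp

S, N, lam = sp.symbols('S N lam', positive=True)

# A toy degree-one homogeneous mass function M(S, N);
# purely illustrative, not an actual black hole equation of state.
M = S**sp.Rational(3, 2) / sp.sqrt(N)

# Euler homogeneity: M(lam*S, lam*N) = lam * M(S, N)
assert sp.simplify(M.subs({S: lam*S, N: lam*N}) - lam*M) == 0

# equivalently, the Euler relation M = T*S + mu*N holds, with
# T = dM/dS and mu = dM/dN the conjugate intensive variables
T, mu = sp.diff(M, S), sp.diff(M, N)
assert sp.simplify(T*S + mu*N - M) == 0

# intensive variables are degree zero, i.e. scale independent,
# which is what makes density variables like M/N well defined
assert sp.simplify(T.subs({S: lam*S, N: lam*N}) - T) == 0
```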
In this work, we will study the local RPS thermodynamics of the TBHs in
$d$-dimensional Einstein-Maxwell theory and in 4-dimensional conformal gravity
with a Maxwell source, with emphasis paid toward the comparison between
different horizon geometries. Our results indicate that the local
thermodynamic behaviors are quite different for TBHs in the same underlying
theory but with different horizon geometries. On the other
hand, in the high temperature limit, all TBHs in $d$-dimensional AdS spacetime
behave exactly like the quantum phonon gas in nonmetallic crystals residing in
$(d-2)$-dimensional flat space – a feature first discovered for spherical
Tangherlini-AdS black holes[22] and subsequently generalized to the case of
charged spherically symmetric AdS black hole in conformal gravity[23]. Now the
same feature is further validated for charged Tangherlini-AdS TBHs in
Einstein-Maxwell theory in any dimension $d\geq 4$ and for charged AdS TBHs
in conformal gravity in 4 dimensions, regardless of the horizon geometries. It
is natural to make the conjecture that the above AdS/phonon gas correspondence
may be a universal feature for BHs in AdS spacetime of any dimension $d\geq
4$, regardless of the horizon geometry and the underlying model of gravity.
More detailed behaviors of TBHs in the above two theories will be presented
in the main text.
Throughout this paper, we will be working in units $c=\hbar=k_{\rm B}=1$,
while leaving the Newton constant $G$ intact, because $G$ needs to be variable
in the RPS formalism. The static black hole solutions will be presented using
Schwarzschild-like coordinates.
This paper is organized as follows. Section 2 provides a discussion on the
concept of subsystems and compares the differences between existing black hole
thermodynamic formalisms. Section 3 is devoted to a thorough study of TBHs in
Einstein-Maxwell theory. The introduction of subsystems, the analogues of
Clapeyron and Ehrenfest equations and the high and low temperature limits of
the TBHs are among the major subjects of concern, besides the very detailed
study of thermodynamic processes with emphasis on the comparison between the
effects of different horizon geometries. Section 4 presents the parallel study
for TBHs in $4d$ conformal gravity with much greater brevity. In Section 5, we
discuss the local thermodynamic fluctuations in small closed subsystems of the
black holes and clarify some of the conceptual issues in existing treatments
of black hole fluctuations using either the Smoluchowski formula or
thermodynamic geometric approaches. Finally, in Section 6, we summarize the
results and present some further discussion.
## 2 Formalisms and subsystems of black hole thermodynamics
An outstanding feature of thermodynamic systems is the existence of subsystems
with essentially the same macroscopic behavior, albeit with different
microscopic degrees of freedom. The concept of thermodynamic equilibrium for
ordinary thermodynamic systems is defined by the system-wide homogeneity of
the intensive thermodynamic variables such as the temperature $T$, the
pressure $P$ and the chemical potential $\mu$, and the equilibrium conditions
are described as
$T_{A}=T_{B},\quad P_{A}=P_{B},\quad\mu_{A}=\mu_{B}$
for arbitrarily chosen subsystems $A$ and $B$. If we consider charged BHs as
thermodynamic systems, there should be an additional equilibrium condition
which describes the system-wide homogeneity of the electric potential,
$\hat{\Phi}_{A}=\hat{\Phi}_{B},$
and the condition over the pressure may or may not be present, depending on
the choice of formalisms for black hole thermodynamics. Clearly, we need to
identify what the subsystems are for BHs in order to make sense of the above
equilibrium conditions.
In thermodynamic descriptions of ordinary matter, defining a subsystem has
never been a problem. The whole logic of classical thermodynamics is
established on the recognition of the following fact: the thermodynamic
variables can be arranged in two groups, one consisting of variables
which are uniform and take the same values on different subsystems, and the
other consisting of variables which depend on the matter content
within each subsystem (i.e. are proportional to the number of particles in each
subsystem) and are additive when different subsystems are combined.
The Euler homogeneity is a prerequisite for classifying thermodynamic
variables into the uniform and additive groups.
One may wonder why an infinite area of the event horizon could lead to
problems in describing the thermodynamic properties. For ordinary macroscopic
systems in thermodynamic limit, the particle number and hence the internal
energy as well as the entropy also diverge. But this has never constituted a
difficulty in understanding thermodynamic properties. The key point lies in
that, one can always take a finite subsystem and study the thermodynamic
behaviors thereof. Thanks to the Euler homogeneity and the extensivity of
ordinary thermodynamics, any subsystem behaves the same [24]. In the case of
non-compact TBHs, the major obstacle comes from the lack of extensivity in the
usual thermodynamic description in either the traditional formalism[25, 26,
27, 28, 29, 30] or the extended phase space (EPS) formalism [31, 32, 33, 34,
35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
54] (including the holographic variant thereof [55, 56, 57]). Some authors
even think of the lack of extensivity as one of the distinguished features of
black hole thermodynamics [58, 59, 60]. The real problem is that, in the
absence of Euler homogeneity, the thermodynamic behavior becomes scale
dependent: putting two subsystems together yields a larger system with
different thermodynamic behavior. Therefore, in the absence of Euler
homogeneity, studying subsystems does not resolve the problem for infinitely
large systems.
However, the Euler homogeneity has rarely been discussed seriously in the
studies of black hole thermodynamics before the restricted phase space (RPS)
formalism was proposed recently in [17, 18, 19, 20, 21, 22]. Some exceptional
works, e.g. [61, 62] and [56], did discuss Euler homogeneity, but they rely
heavily on the holographic interpretation, which constrains their range of
applicability. Moreover, the parameter describing the
microscopic degrees of freedom for the BHs is absent in the works [61, 62],
and the Euler homogeneity in [56] is incomplete (there is a pair of
thermodynamic variables $(\mathcal{P,V})$ but these variables did not appear
in the so-called “Euler relation”). The RPS formalism is distinguished from
that of [56] in that the cosmological constant is not taken as a variable,
thus the variables $(\mathcal{P,V})$ are absent. Moreover, the new variables
$N,\mu$ are defined independently and are interpreted as the number of black
hole molecules and the conjugate chemical potential,
$\displaystyle
N=\dfrac{L^{d-2}\omega_{d-2,\epsilon}}{G},\quad\quad\mu=\dfrac{GTI_{E}}{L^{d-2}\omega_{d-2,\epsilon}},$
(1)
where $L$ is a finite length scale chosen to contain all microscopic degrees
of freedom of the BHs [18], whose value otherwise remains arbitrary, and
$I_{E}$ is the on-shell Euclidean action
[63, 64]. The factors $\omega_{d-2,\epsilon}$ in $N$ and $\mu$ were absorbed
in $L$ in our previous works on the RPS formalism for compact BHs. However,
for non-compact TBHs, these factors need to be made explicit in order to keep
$\mu$ finite.
For AdS BHs, $N$ is proportional to the central charge of the dual CFT in the
framework of AdS/CFT correspondence [56, 17, 18], but its best interpretation
may be due to ’t Hooft [65], who suggested that any piece of roughly the size
of the Planck area (which equals $1/G$) on the event horizon corresponds to a
microscopic degree of freedom of the black hole. Since we do not know what
exactly a black hole molecule is, there is a freedom in choosing the
coefficient of proportionality $L$ between $N$ and $1/G$. This situation is
quite similar to the study of thermodynamics of ordinary matter systems, in
which the precise nature of individual molecules does not matter, and the
total number of molecules can be taken arbitrarily as long as the whole system
remains macroscopic, which means that $L$ should be sufficiently large and
remain constant in our present case.
Traditional formalism: $\Lambda,G$ are both constant (Bekenstein, Bardeen, Carter, Hawking et al.)
• Range of applicability: any black hole spacetime;
• Role of mass: internal energy ($E=M$);
• The first law: ${\rm d}M=T{\rm d}S+\Phi{\rm d}Q$;
• Mass formula: the Smarr relation $M=\dfrac{d-2}{d-3}TS+\Phi Q+\dfrac{\Lambda\Phi S^{2}}{\pi^{2}(d-1)Q}$;
• Phase transitions: not touched upon.
To make comparison between different formalisms, we created several quick
sheets, listing the main features of some of the major formalisms of black
hole thermodynamics, including the range of applicability, the interpretation
of the mass, and the form of the first law and Smarr/Euler relations in each
formalism. While writing down the first laws and the mass formulae, we have
taken the $d$-dimensional spherically symmetric Tangherlini-RN-AdS black hole
as an example case. It can be seen that only the traditional and the RPS
formalisms are applicable to all black hole spacetimes irrespective of the
asymptotic behaviors. Moreover, at fixed Newton constant $G$, the RPS
formalism falls back to the traditional formalism, except that the Euler
relation still needs the contribution from the product of $N$ and $\mu$. This
situation is fully consistent with the ordinary thermodynamics for closed
systems.
EPS formalism: $\Lambda$ is variable but $G$ is constant (Kastor, Ray, Traschen et al.)
• Range of applicability: AdS black holes;
• Role of mass: enthalpy ($H=M$);
• The first law ($P=-\Lambda/8\pi$): ${\rm d}M=T{\rm d}S+\Phi{\rm d}Q+V{\rm d}P$;
• Mass formula: the generalized Smarr relation $M=\dfrac{d-2}{d-3}TS+\Phi Q-\dfrac{2}{d-3}PV$;
• Phase transitions: $P-v$ criticalities studied extensively, but not working for non-spherical BHs.
Holographic EPS formalism: $\Lambda,G$ are both variable (Visser et al.)
• Range of applicability: AdS black holes;
• Role of mass: internal energy ($E=M$);
• The first law: ${\rm d}M=T{\rm d}S+\hat{\Phi}{\rm d}\hat{Q}-\mathcal{P}{\rm d}\mathcal{V}+\mu{\rm d}C$;
• Mass formula: Smarr relation disguised as Euler relation $M=TS+\hat{\Phi}\hat{Q}+\mu C$;
• Extra variables: $C=\ell^{d-2}/G$ is the central charge of the dual CFT, and $\mu$ is defined using the mass formula, $\mu\equiv(M-TS-\hat{\Phi}\hat{Q})/C$;
• Phase transitions: $\mu-C$ criticalities.

RPS formalism: $\Lambda$ is constant but $G$ is variable (our group)
• Range of applicability: any black hole spacetime;
• Role of mass: internal energy ($E=M$);
• The first law: ${\rm d}M=T{\rm d}S+\hat{\Phi}{\rm d}\hat{Q}+\mu{\rm d}N$;
• Mass formula: the true Euler relation $M=TS+\hat{\Phi}\hat{Q}+\mu N$;
• Extra variables: defined independently of holographic duality and the mass formula, $N=\frac{L^{d-2}\omega_{d-2,\epsilon}}{G},\quad\mu=\frac{GTI_{E}}{L^{d-2}\omega_{d-2,\epsilon}}$;
• Phase transitions: $T-S$ transitions for AdS BHs as well as Hawking-Page like transitions.
It should be mentioned that the $\mu-C$ criticality described in e.g. [57]
under the holographic EPS formalism contains some conceptual issues. Fig.7 of
[57] depicts the central charge $C$ as a function of the chemical potential $\mu$,
and the authors explicitly mentioned that there is an equal area law on such
curves. This description is incorrect, because the areas in the equal area law
have physical interpretations. For $P-V$ criticalities in ordinary
thermodynamic systems, the areas correspond to works done on the system
following two different isothermal processes, and for $T-S$ criticalities
described in [17, 18, 19], the areas correspond to heats absorbed by the BHs
in two different isocharge processes. The areas depicted in Fig.7 of [57] are
related to $\displaystyle\int C{\rm d}\mu$, whose physical meaning is obscure. On the
the other hand, $\mu$ should be regarded as an intensive macroscopic state
function in $(T,\hat{\Phi})$, thanks to the Gibbs-Duhem relation. When $\mu$
is multiply-valued, only the lowest values correspond to states in a
thermodynamically stable phase.
Since the RPS formalism contains a variable Newton constant $G$, it is
necessary to clarify our take on this variable. Essentially, BHs as thermal
objects must be quantum. As such, it should not be surprising that the
gravitational coupling constant could run with the energy scale along
the renormalization group orbit. On the other hand, we have no objection to
the current observational picture of our universe with a fixed Newton
constant. This may be interpreted as an infrared fixed point for $G$, and, in a
universe with fixed $G$, the number of black hole molecules $N$ is necessarily
also fixed, hence any BH in such a universe must be considered as a closed
thermodynamic system. In any case, this will not affect our discussions on the
phase structures, because, when studying phase transitions, the number of
molecules in the thermodynamic system under investigation should always be
kept fixed.
Once the complete Euler homogeneity is established using the RPS formalism, we
are free to take some finite parts on the event horizon of the TBHs as
subsystems. The local thermodynamic description for the subsystems is free of
the ambiguities brought about by the infinite area. Following this route, we
will be able to compare the cases with different horizon geometries.
For convenience and consistency of later discussions, we provide some
notational conventions about subsystems. The subsystems which we are
interested in are finite subsystems containing a finite number $\mathcal{N}$
of black hole molecules and have finite entropy $\mathcal{S}$ and electric
charge $\mathcal{Q}$. According to eq.(1), a finite number of black hole
molecules cannot be achieved for non-compact TBHs by choosing an alternative
value for $L$ and/or $G$, provided they are still kept finite. The only way to
take a finite subsystem from the TBHs is to replace $\omega_{d-2,\epsilon}$
with the area $\mathcal{A}$ of a finite part of the $(d-2)$-dimensional
transverse submanifold $\Sigma_{d-2,\epsilon}$ (let us stress that, just like
$\omega_{d-2,\epsilon}$, $\mathcal{A}$ is also a dimensionless quantity). This
implies that the subsystem is actually taken to be a finite piece on the event
horizon of area $\mathcal{A}r_{0}^{d-2}$, where $r_{0}$ represents the radius
of the horizon. This is in agreement with ’t Hooft’s picture for black hole
micro states [65], according to which the micro states of BHs can be regarded
as located on the event horizons.
The number of black hole molecules contained in the above subsystem reads
$\mathcal{N}=L^{d-2}\mathcal{A}/G$, which needs to obey the condition
$1\ll\mathcal{N}\ll N$, and the entropy of the subsystem is
$\mathcal{S}=\mathcal{A}r_{0}^{d-2}/4G$. Likewise, the internal energy
$\mathcal{E}$ and the electric charge $\mathcal{Q}$ of the subsystem must also
be proportional to $\mathcal{A}$. It is extremely important to realize that,
due to Euler homogeneity, the behaviors of any meaningful subsystems are the
same for a system in thermodynamic equilibrium, provided that only quantities
which are zeroth order homogeneous functions in the additive variables are
considered.
## 3 TBHs in Einstein-Maxwell theory
### 3.1 The metric and the black hole observables
The action of Einstein-Maxwell theory in $d$-dimensions reads
$\displaystyle I=\dfrac{1}{16\pi G}\int{\rm
d}^{d}x\sqrt{-g}(R-2\Lambda)-\dfrac{1}{16\pi}\int{\rm
d}^{d}x\sqrt{-g}F_{\mu\nu}F^{\mu\nu},$ (2)
where the negative cosmological constant $\Lambda$ is related to the AdS
radius $\ell$ via $\Lambda=-(d-1)(d-2)/2\ell^{2}$. Unlike in the EPS
formalism, which takes the AdS radius $\ell$ as one of the thermodynamic
variables, the RPS formalism keeps $\ell$ as a fixed constant. This makes it
possible to establish complete Euler homogeneity which is missing in
alternative considerations.
The line element and the Maxwell field $A_{\mu}$ presented in [10] are given
as follows,
$\displaystyle{\rm d}s^{2}=-f(r){\rm d}t^{2}+\dfrac{{\rm
d}r^{2}}{f(r)}+r^{2}{\rm d}\Omega_{d-2,\epsilon}^{2},$ (3) $\displaystyle
A_{\mu}{\rm d}x^{\mu}=\Phi(r){\rm d}t,$ (4)
where $\epsilon=1,0,-1$, ${\rm
d}\Omega_{d-2,\epsilon}^{2}=\gamma^{(\epsilon)}_{ab}{\rm d}\theta^{a}{\rm
d}\theta^{b}$ represents the line element of the $(d-2)$-dimensional
submanifold $\Sigma_{d-2,\epsilon}$ with normalized scalar curvature
$\epsilon$ and “area” (or, more precisely, solid angle because of its
dimensionless nature)
$\omega_{d-2,\epsilon}=\int{\rm d}^{d-2}\theta\sqrt{{\rm
det}(\gamma^{(\epsilon)}_{ab})},$
and
$\displaystyle f(r)$
$\displaystyle=\epsilon-\left(\dfrac{r_{g}}{r}\right)^{d-3}+\left(\dfrac{r_{Q}}{r}\right)^{2(d-3)}+\dfrac{r^{2}}{\ell^{2}},$
(5) $\displaystyle\Phi(r)$
$\displaystyle=\dfrac{4\pi\,Q}{(d-3)\omega_{d-2,\epsilon}\,r^{d-3}},$ (6)
in which
$\displaystyle r_{g}=\left[\dfrac{16\pi
GM}{(d-2)\omega_{d-2,\epsilon}}\right]^{\frac{1}{d-3}},\quad\quad
r_{Q}=\left[\dfrac{32\pi^{2}GQ^{2}}{(d-2)(d-3)\left(\omega_{d-2,\epsilon}\right)^{2}}\right]^{\frac{1}{2(d-3)}}.$
(7)
The black holes encoded in the above line element are known as the Tangherlini-
RN-AdS TBHs. For $\epsilon=1$, $\omega_{d-2,\epsilon}$ corresponds to the area
of the unit $(d-2)$-sphere, which is finite and makes perfect sense, while for
$\epsilon=0,-1$, $\omega_{d-2,\epsilon}$ is in fact divergent. However, it is
assumed that the ratios $M/\omega_{d-2,\epsilon}$ and $Q/\omega_{d-2,\epsilon}$
are both finite, so that the metric and the electric potential of the TBHs
still make sense. Since there is a pole at $d=3$ in the expression for
$r_{Q}$, we only consider spacetimes with $d\geq 4$.
Let $r_{0}$ be a real positive root of $f(r)$ which corresponds to the
position of the event horizon. In the RPS formalism, the internal energy $E$
of the BHs is identified with the mass $M$, which can be expressed as an
explicit function in $r_{0},Q,G$ by solving the equation $f(r_{0})=0$,
$\displaystyle E$
$\displaystyle=M=\dfrac{(d-2)\omega_{d-2,\epsilon}\,r_{0}^{d-3}}{16\,\pi\,G}\left(\epsilon+\dfrac{r_{0}^{2}}{\ell^{2}}\right)+\dfrac{2\,\pi\,Q^{2}}{(d-3)\omega_{d-2,\epsilon}r_{0}^{d-3}}.$
(8)
The Bekenstein-Hawking entropy and the Hawking temperature corresponding to
the above solution read
$\displaystyle S$
$\displaystyle=\dfrac{\omega_{d-2,\epsilon}r_{0}^{d-2}}{4\,G},\qquad
T=\frac{1}{4\pi
r_{0}}\left[(d-3)\epsilon-\frac{32\pi^{2}GQ^{2}}{(d-2)\left(\omega_{d-2,\epsilon}\right)^{2}r_{0}^{2(d-3)}}+\frac{(d-1)r_{0}^{2}}{\ell^{2}}\right].$
(9)
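As a quick numerical sanity check (ours, not part of the paper), one can verify that the temperature in eq.(9) agrees with the standard surface-gravity formula $T=f'(r_{0})/4\pi$, with the mass fixed by $f(r_{0})=0$ as in eq.(8). The parameter values below are arbitrary test inputs:

```python
import math

# Check T of eq.(9) against f'(r_0)/(4*pi), using eqs.(5)-(8); d = 4, eps = 1.
d, eps = 4, 1
G, ell, Q, r0 = 1.0, 1.0, 0.1, 0.7      # arbitrary test values
omega = 4 * math.pi                      # omega_{2,1}: area of the unit 2-sphere

# Mass from the horizon condition f(r_0) = 0, eq.(8)
M = (d - 2) * omega * r0**(d - 3) / (16 * math.pi * G) * (eps + r0**2 / ell**2) \
    + 2 * math.pi * Q**2 / ((d - 3) * omega * r0**(d - 3))

rg_p = 16 * math.pi * G * M / ((d - 2) * omega)                       # r_g^{d-3}
rQ_p = 32 * math.pi**2 * G * Q**2 / ((d - 2) * (d - 3) * omega**2)    # r_Q^{2(d-3)}

def f(r):
    # Metric function of eq.(5)
    return eps - rg_p / r**(d - 3) + rQ_p / r**(2 * (d - 3)) + r**2 / ell**2

h = 1e-6
T_fprime = (f(r0 + h) - f(r0 - h)) / (2 * h) / (4 * math.pi)          # T = f'(r_0)/4pi
T_eq9 = ((d - 3) * eps
         - 32 * math.pi**2 * G * Q**2 / ((d - 2) * omega**2 * r0**(2 * (d - 3)))
         + (d - 1) * r0**2 / ell**2) / (4 * math.pi * r0)             # eq.(9)

assert abs(f(r0)) < 1e-12
assert math.isclose(T_fprime, T_eq9, rel_tol=1e-6)
```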
In the RPS formalism, we need to introduce a new pair of thermodynamic
quantities $(N,\mu)$ as given in eq.(1), in which the on-shell Euclidean
action $I_{E}$ is evaluated to be [63, 64]
$\displaystyle
I_{E}=-\frac{\omega_{d-2,\epsilon}r_{0}^{d}\left[(d-3)(d-2)\left(\omega_{d-2,\epsilon}\right)^{2}r_{0}^{2d}\left(1-\epsilon\frac{\ell^{2}}{r_{0}^{2}}\right)+32\pi^{2}G\ell^{2}Q^{2}r_{0}^{4}\right]}{4(d-3)G\left[(d-2)\left(\omega_{d-2,\epsilon}\right)^{2}r_{0}^{2d}\left((d-3)\epsilon\ell^{2}+(d-1)r_{0}^{2}\right)-32\pi^{2}G\ell^{2}Q^{2}r_{0}^{6}\right]}.$
(10)
Although the expression for $I_{E}$ looks quite complicated (and we do not use
the above complicated expression for $I_{E}$ in the rest of this work), the
corresponding expression for the chemical potential is much simpler,
$\displaystyle\mu$ $\displaystyle=\frac{r_{0}^{d-3}}{16\pi
L^{d-2}}\epsilon-\frac{2\pi
G}{(d-2)(d-3)\left(\omega_{d-2,\epsilon}\right)^{2}L^{d-2}r_{0}^{d-3}}Q^{2}-\frac{r_{0}^{d-1}}{16\pi
L^{d-2}\ell^{2}}.$ (11)
In the RPS formalism, the electric charge and potential need to be rescaled,
so that the Newton constant becomes an overall factor in front of the full
action. The reason for this rescaling has been explained in [17]. The rescaled
electric charge and potential are given as
$\displaystyle\hat{Q}=\dfrac{Q\,L^{(d-2)/2}}{\sqrt{G}},\qquad\hat{\Phi}=\dfrac{4\pi\,Q\,\sqrt{G}}{(d-3)\omega_{d-2,\epsilon}\,r^{d-3}\,L^{(d-2)/2}}.$
(12)
It should be kept in mind that, due to the presence of the formal parameter
$\omega_{d-2,\epsilon}$, the thermodynamic quantities introduced above make
perfect sense only for compact BHs. Nevertheless, if one keeps the formal
notation $\omega_{d-2,\epsilon}$ as if it is a finite constant, it can be
checked with ease that the following basic thermodynamic relations hold for
any TBH, be it compact or not,
$\displaystyle E=T\,S+\hat{\Phi}\,\hat{Q}+\mu\,N,$ (13) $\displaystyle{\rm
d}E=T\,{\rm d}S+\hat{\Phi}\,{\rm d}\hat{Q}+\mu\,{\rm d}N,$ (14) $\displaystyle
S{\rm d}T+\hat{Q}{\rm d}\hat{\Phi}+N{\rm d}\mu=0.$ (15)
The first one of these relations is exactly the Euler relation, the second one
is the first law of thermodynamics, while the third relation follows from the
first and the second, and is known as the Gibbs-Duhem relation, which is very
important in ordinary thermodynamics. These relations signify the first order
homogeneity of $E$ and zeroth order homogeneity of $T,\hat{\Phi},\mu$ in terms
of $(S,\hat{Q},N)$. Such homogeneity behaviors are absent in the traditional
and the EPS formalisms of black hole thermodynamics, which prevents the
analysis of thermodynamic properties of TBHs from the point of view of
subsystems. Now in the RPS formalism, we are able to do so.
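The relations (13)-(15) can be checked directly from eqs.(1), (8), (9), (11) and (12). A small numerical sketch (ours; all parameter values, including the finite stand-in for $\omega_{d-2,\epsilon}$, are arbitrary test inputs) verifying the Euler relation (13):

```python
import math

# Verify E = T*S + Phi_hat*Q_hat + mu*N, eq.(13), for d = 5, eps = -1.
d, eps = 5, -1
G, ell, L, Q, r0, omega = 2.0, 1.0, 3.0, 0.4, 1.3, 7.0  # arbitrary test values

E = (d - 2) * omega * r0**(d - 3) / (16 * math.pi * G) * (eps + r0**2 / ell**2) \
    + 2 * math.pi * Q**2 / ((d - 3) * omega * r0**(d - 3))            # eq.(8)
S = omega * r0**(d - 2) / (4 * G)                                     # eq.(9)
T = ((d - 3) * eps
     - 32 * math.pi**2 * G * Q**2 / ((d - 2) * omega**2 * r0**(2 * (d - 3)))
     + (d - 1) * r0**2 / ell**2) / (4 * math.pi * r0)                 # eq.(9)
N = L**(d - 2) * omega / G                                            # eq.(1)
mu = (r0**(d - 3) * eps / (16 * math.pi * L**(d - 2))
      - 2 * math.pi * G * Q**2 / ((d - 2) * (d - 3) * omega**2 * L**(d - 2) * r0**(d - 3))
      - r0**(d - 1) / (16 * math.pi * L**(d - 2) * ell**2))           # eq.(11)
Q_hat = Q * L**((d - 2) / 2) / math.sqrt(G)                           # eq.(12)
Phi_hat = 4 * math.pi * Q * math.sqrt(G) \
    / ((d - 3) * omega * r0**(d - 3) * L**((d - 2) / 2))              # eq.(12)

assert math.isclose(E, T * S + Phi_hat * Q_hat + mu * N, rel_tol=1e-9)
```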
### 3.2 Subsystems and local thermodynamic variables
The importance of the concept of subsystems in thermodynamics and its basic
definition have been explained in Section 2. To standardize the operations, we
now introduce the dimensionless mean values of the additive variables per
black hole molecule (with appropriate rescaling) for the subsystem as follows,
$\displaystyle e$ $\displaystyle\equiv 2\pi\ell
a^{d-2}\left(\frac{\mathcal{E}}{\mathcal{N}}\right)=2\pi\ell
a^{d-2}\left(\frac{E}{N}\right),$ (16) $\displaystyle s$ $\displaystyle\equiv
4a^{d-2}\left(\frac{\mathcal{S}}{\mathcal{N}}\right)=4a^{d-2}\left(\frac{S}{N}\right),$
(17) $\displaystyle q$ $\displaystyle\equiv
2\pi\ell\left(\frac{a}{\ell}\right)^{(d-2)/2}\left(\frac{\mathcal{Q}}{\mathcal{N}}\right)=2\pi\ell\left(\frac{a}{\ell}\right)^{(d-2)/2}\left(\frac{\hat{Q}}{N}\right),$
(18)
together with the dimensionless uniform variables which are also rescaled
accordingly,
$\displaystyle\bar{\mu}$ $\displaystyle\equiv 2\pi\ell a^{d-2}\,\mu,\quad
t\equiv\frac{\pi\ell\,T}{2},\quad\phi\equiv({\ell}{a})^{(d-2)/2}\hat{\Phi}.$
(19)
In these definitions, we have introduced $a\equiv L/\ell$. The numeric
coefficients such as $2\pi$, $\pi/2$, $4$ and powers of $a$ are not strictly
necessary, but are included as a convention, which greatly helps in
simplifying the thermodynamic functions and equations of states to be given
shortly. Because of the introduction of the factor $1/\mathcal{N}$ in the
definitions of $e,s,q$, none of the above variables is additive. By a slight
abuse of terminology, we still refer to $e,s,q$ respectively as (the local
densities of) the internal energy, entropy and electric charge, and to
$t,\phi,\bar{\mu}$ simply as the temperature, electric potential and chemical
potential, regardless of the dimensionless natures thereof.
Now, taking $s,q$ as fundamental independent variables, the other
dimensionless variables can be written as explicit functions in $(s,q)$,
$\displaystyle e$
$\displaystyle=\frac{d-2}{8}s^{\frac{d-3}{d-2}}\epsilon+\frac{1}{d-3}s^{-\frac{d-3}{d-2}}\,q^{2}+\frac{d-2}{8}\,s^{\frac{d-1}{d-2}},$
(20) $\displaystyle t$
$\displaystyle=\frac{d-3}{8}s^{-\frac{1}{d-2}}\epsilon-\frac{1}{d-2}s^{-\frac{2d-5}{d-2}}\,q^{2}+\frac{d-1}{8}\,s^{\frac{1}{d-2}},$
(21) $\displaystyle\phi$
$\displaystyle=\dfrac{2}{d-3}\,s^{-\frac{d-3}{d-2}}\,q,$ (22)
$\displaystyle\bar{\mu}$
$\displaystyle=\frac{1}{8}s^{\frac{d-3}{d-2}}\epsilon-\frac{1}{(d-2)(d-3)}s^{-\frac{d-3}{d-2}}\,q^{2}-\frac{1}{8}\,s^{\frac{d-1}{d-2}}.$
(23)
Eq.(20) presents the (density of the) thermodynamic potential $e$ with adapted
independent variables $(s,q)$, and eqs.(21)-(23) provide the corresponding
equations of states. Notice that all these equations are independent of the
parameter $a$, which reflects the fact that the choice of $L$ is arbitrary and
is irrelevant to the local thermodynamic behaviors. The above equations are
also independent of the number of black hole molecules $\mathcal{N}$ contained
in the subsystem. This is a characteristic feature of ordinary thermodynamic
systems known as the law of corresponding states; however, it is absent in
other formalisms for black hole thermodynamics.
Except for the $\phi-q$ relation (22), $q$ always appears in squared form in
the other thermodynamic functions. Therefore, it suffices to consider only the
cases with $q\geq 0$, which also ensures $\phi\geq 0$ because $s$ can never be
negative.
Besides the above independent variables and local thermodynamic functions, we
also need the dimensionless local density of the Helmholtz free energy
$f=e-ts$, specifically for describing possible $t-s$ criticalities. We have,
explicitly,
$\displaystyle f$
$\displaystyle=\frac{1}{8}s^{\frac{d-3}{d-2}}\epsilon+\frac{2d-5}{(d-2)(d-3)}s^{-\frac{d-3}{d-2}}\,q^{2}-\frac{1}{8}\,s^{\frac{d-1}{d-2}}.$
(24)
Note that $f$ should be regarded as an implicit function in
$(t,q)$, rather than a function in $(s,q)$. Note also the
explicit $\epsilon$-dependence of the expressions for $e,t,\bar{\mu}$ and
$f$, which indicates the influence of the horizon geometry on the thermodynamic
functions.
Using the above dimensionless local variables, the Euler relation and the
Gibbs-Duhem relation can be rewritten as
$\displaystyle e=ts+\phi q+\bar{\mu},$ (25) $\displaystyle s{\rm d}t+q{\rm
d}\phi+{\rm d}\bar{\mu}=0.$ (26)
Using these relations we also have
$\displaystyle f=\phi q+\bar{\mu},$ (27) $\displaystyle{\rm d}f=-s\,{\rm
d}t+\phi\,{\rm d}q.$ (28)
The last equation will be useful for analyzing the critical behaviors in the
isocharge $T-S$ processes in the case $\epsilon=1$.
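The dimensionless relations above can be verified directly from the explicit functions (20)-(24). A brief numerical sketch (ours; the values of $d$, $s$ and $q$ are arbitrary test inputs):

```python
# Check e = t*s + phi*q + mu_bar, eq.(25), and e - t*s against eq.(24),
# using the explicit functions (20)-(23); inputs below are arbitrary.
d, s, q = 6, 0.8, 0.3

def euler_residuals(eps):
    a = (d - 3) / (d - 2)                       # recurring exponent (d-3)/(d-2)
    e = (d - 2) / 8 * s**a * eps + q**2 * s**(-a) / (d - 3) \
        + (d - 2) / 8 * s**((d - 1) / (d - 2))                          # eq.(20)
    t = (d - 3) / 8 * s**(-1 / (d - 2)) * eps \
        - q**2 * s**(-(2 * d - 5) / (d - 2)) / (d - 2) \
        + (d - 1) / 8 * s**(1 / (d - 2))                                # eq.(21)
    phi = 2 * q * s**(-a) / (d - 3)                                     # eq.(22)
    mu_bar = s**a * eps / 8 - q**2 * s**(-a) / ((d - 2) * (d - 3)) \
        - s**((d - 1) / (d - 2)) / 8                                    # eq.(23)
    f = s**a * eps / 8 + (2 * d - 5) * q**2 * s**(-a) / ((d - 2) * (d - 3)) \
        - s**((d - 1) / (d - 2)) / 8                                    # eq.(24)
    return e - (t * s + phi * q + mu_bar), (e - t * s) - f

for eps in (1, 0, -1):
    r1, r2 = euler_residuals(eps)
    assert abs(r1) < 1e-12 and abs(r2) < 1e-12
```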
### 3.3 Bounds over independent parameters
Charged BHs are subject to a bound – known as the Bogomol’nyi bound – over the
mass and charge, which ensures the existence of event horizons. In the
context of thermodynamics, the Bogomol’nyi bound arises from the requirement
of non-negativity of the temperature. It can be seen from eq.(21) that, for
$d\geq 4$, the terms involving $\epsilon$ and $q^{2}$ can both be negative,
while the last term in (21) is strictly non-negative. Thus the requirement of
non-negativity of $t$ imposes a real bound over the independent variables $s$
and $q$, i.e.
$\displaystyle
q^{2}\leq\frac{d-2}{8}\left[(d-3)s^{\frac{2(d-3)}{d-2}}\epsilon+(d-1)s^{2}\right].$
(29)
When the above bound saturates, the temperature vanishes, which corresponds to
either a completely evaporated black hole with no leftover ($s=0,q=0$), or to
an extremal black hole remnant with nonvanishing $s$ and $q$. This implies
that, for $\epsilon=-1$, the state with $s=0$ is inaccessible even
asymptotically, while for $\epsilon=0,1$, the state with $s=0$ is accessible
at least asymptotically.
We can make a change of variable $q\to\phi$ in (21) by use of eq.(22). Then
the Bogomol’nyi bound can also be written as
$\displaystyle\phi^{2}\leq\frac{d-2}{2(d-3)}\left[\epsilon+\frac{d-1}{d-3}s^{\frac{2}{d-2}}\right].$
(30)
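That the temperature (21) vanishes exactly when the bound (29) saturates can be confirmed numerically. The following sketch (ours) checks this for all three horizon geometries at an arbitrarily chosen $s$:

```python
# At saturation of the bound (29), the temperature (21) vanishes, i.e. the
# black hole becomes extremal; s below is an arbitrary test input.
s = 0.9

def t_at_saturation(d, eps):
    # q^2 at saturation of eq.(29)
    q2 = (d - 2) / 8 * ((d - 3) * s**(2 * (d - 3) / (d - 2)) * eps + (d - 1) * s**2)
    # temperature eq.(21) evaluated at that q^2
    return (d - 3) / 8 * s**(-1 / (d - 2)) * eps \
        - q2 * s**(-(2 * d - 5) / (d - 2)) / (d - 2) \
        + (d - 1) / 8 * s**(1 / (d - 2))

for d in (4, 5, 6):
    for eps in (1, 0, -1):
        assert abs(t_at_saturation(d, eps)) < 1e-12
```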
One might also consider the non-negativity of the mass or internal energy as
the source of another bound, but this is not the case in AdS spacetimes,
because even the vacuum has a negative energy density. Therefore, the
Bogomol’nyi bound remains the only physical bound on the range of permitted
independent variables.
### 3.4 (Non)existence of critical points and HP-like transitions
A remarkable feature of the RPS description for AdS BHs with compact event
horizons is the existence of isocharge (or iso-angular-momenta) $T-S$ phase
transitions at supercritical temperatures [17, 19, 21]. It is natural to ask
whether the same feature persists in the cases with non-compact horizons.
The analytical expressions for the local thermodynamic functions presented in
Section 3.2 allow us to study the existence of critical points in the
isocharge $t-s$ processes for each choice of spacetime dimension and horizon
geometry. The critical point $(s_{c},q_{c})$, if it exists, must obey the
following equations,
$\displaystyle\left(\frac{\partial t}{\partial
s}\right)_{q}=0,\quad\left(\frac{\partial^{2}t}{\partial s^{2}}\right)_{q}=0.$
(31)
Using eq.(21), it can be seen that the above system of equations does not have
any nonzero solution for $\epsilon=0,-1$, thus excluding the existence of
$t-s$ criticalities for planar and hyperbolic black holes in any dimensions.
For spherical black holes, there always exists a critical point in any
dimension $d\geq 4$. The position of the critical point for $4\leq d\leq 10$
together with the corresponding critical temperatures are listed in Table 1.
The existence/nonexistence of critical points makes a sharp difference between
BHs with compact and non-compact event horizons. The difference begins to
manifest in the planar case ($\epsilon=0$), for which the critical point
equations (31) have a unique solution $(s_{c},q_{c})=(0,0)$, which should not
be regarded as a critical point. For the hyperbolic case ($\epsilon=-1$), the
critical point equations (31) do not admit any real non-negative solution.
Table 1: Critical parameters for $\epsilon=1$, $4\leq d\leq 10$

$d$ | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---
$s_{c}$ | $\frac{1}{6}$ | $\frac{\sqrt{3}}{9}$ | $\frac{81}{400}$ | $\frac{128}{255}\sqrt{\frac{2}{15}}$ | $\frac{15625}{74088}$ | $\frac{2187\sqrt{14}}{38416}$ | $\frac{5764801}{26873856}$
$q_{c}$ | $\frac{1}{12}$ | $\frac{\sqrt{5}}{30}$ | $\frac{27}{80}\sqrt{\frac{3}{70}}$ | $\frac{32\sqrt{2}}{675}$ | $\frac{3125}{7056}\sqrt{\frac{5}{231}}$ | $\frac{729}{5488}\sqrt{\frac{3}{13}}$ | $\frac{823543}{8757952}\sqrt{\frac{7}{15}}$
$t_{c}$ | $\frac{\sqrt{6}}{6}$ | $\frac{2\sqrt{3}}{5}$ | $\frac{3\sqrt{5}}{7}$ | $\frac{2}{3}\sqrt{\frac{10}{3}}$ | $\frac{5\sqrt{42}}{22}$ | $\frac{6\sqrt{14}}{13}$ | $\frac{7\sqrt{2}}{5}$
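The $d=4$ entries of Table 1 can be cross-checked numerically against the critical point conditions (31); the sketch below (ours) applies finite differences to eq.(21):

```python
import math

# Check that (s_c, q_c, t_c) = (1/6, 1/12, sqrt(6)/6) from Table 1 satisfies
# (dt/ds)_q = 0 and (d^2 t/ds^2)_q = 0 for d = 4, eps = 1.
d = 4
s_c, q_c, t_c = 1 / 6, 1 / 12, math.sqrt(6) / 6

def t(s, q):
    # Temperature eq.(21) with eps = 1
    return (d - 3) / 8 * s**(-1 / (d - 2)) \
        - q**2 * s**(-(2 * d - 5) / (d - 2)) / (d - 2) \
        + (d - 1) / 8 * s**(1 / (d - 2))

h = 1e-5
dt_ds = (t(s_c + h, q_c) - t(s_c - h, q_c)) / (2 * h)                 # central difference
d2t_ds2 = (t(s_c + h, q_c) - 2 * t(s_c, q_c) + t(s_c - h, q_c)) / h**2

assert math.isclose(t(s_c, q_c), t_c, rel_tol=1e-12)
assert abs(dt_ds) < 1e-6
assert abs(d2t_ds2) < 1e-4
```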
Another remarkable feature of the RPS description for charged AdS BHs with
compact event horizons is the existence of HP-like transitions signified by
the zero of the chemical potential [17, 19, 21]. We call such transitions HP-
like, because the HP transition is only defined in the neutral case and is
interpreted as the transition between the neutral AdS black hole and a pure
thermal gas[66]. In the presence of electric charge, the zero for the chemical
potential still exists, but we simply do not understand what a charged thermal
gas is.
Now, according to eq.(23), the HP-like transition could arise only for TBHs
with compact horizons, because only when $\epsilon=1$, $\bar{\mu}$ could have
zero(s) at real positive $(s,q)$. Thanks to eq.(11), the zero occurs at
$r_{0}=\ell$ if $q=0$, and the corresponding temperature reads $T_{\rm
HP}=\frac{d-2}{2\pi\ell}$, or $t_{\rm HP}=\frac{d-2}{4}$. Therefore, for
compact neutral AdS BHs, the radius of the event horizon could not be
identical to the AdS radius, otherwise the BHs would become pure thermal gases
due to the HP transition. For $\epsilon=0$, $\bar{\mu}$ has only a single zero
at $(s,q)=(0,0)$, which corresponds neither to a reasonable black hole state
nor to that of a thermal gas. For $\epsilon=-1$, $\bar{\mu}$ becomes strictly
negative; therefore, no HP-like transitions could occur.
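The value $t_{\rm HP}=\frac{d-2}{4}$ follows directly from eqs.(21) and (23) at $q=0$ and $s=1$ (note that the definitions of Section 3.2 give $s=(r_{0}/\ell)^{d-2}$, so $s=1$ corresponds to $r_{0}=\ell$); a quick check (ours):

```python
# For eps = 1 and q = 0, mu_bar of eq.(23) vanishes at s = 1, and eq.(21)
# then gives t_HP = (d-2)/4, for every dimension d >= 4.
def mu_bar_neutral(d, s):
    # eq.(23) with eps = 1, q = 0
    return s**((d - 3) / (d - 2)) / 8 - s**((d - 1) / (d - 2)) / 8

def t_neutral(d, s):
    # eq.(21) with eps = 1, q = 0
    return (d - 3) / 8 * s**(-1 / (d - 2)) + (d - 1) / 8 * s**(1 / (d - 2))

for d in range(4, 11):
    assert mu_bar_neutral(d, 1.0) == 0.0
    assert t_neutral(d, 1.0) == (d - 2) / 4
```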
### 3.5 Clapeyron and Ehrenfest equations for compact TBHs
When $\epsilon=1$ and $t>t_{c}$, there is a coexistence phase between the
stable small and large black holes. The isocharge processes in the coexistence
phase are automatically isothermal, therefore, according to eq.(28), the
Helmholtz free energy is kept fixed in such processes.
For simplicity, let us refer to the stable large and stable small black hole
phases as phase $A$ and phase $B$, respectively. Then, on the coexistence
isocharge isothermal curves, we have
$\displaystyle-s_{A}\,{\rm d}t+\phi_{A}\,{\rm d}q=-s_{B}\,{\rm
d}t+\phi_{B}\,{\rm d}q,$ (32)
so the phase curves in the parameter space $(q,t)$ must obey
$\displaystyle\dfrac{{\rm d}q}{{\rm
d}t}=\dfrac{s_{A}-s_{B}}{\phi_{A}-\phi_{B}}.$ (33)
Using the relation $e=f+ts$ and the constancy of $f$ along the phase curve, we
can rewrite the above equation as
$\displaystyle\dfrac{{\rm d}q}{{\rm
d}t}=\dfrac{e_{A}-e_{B}}{t(\phi_{A}-\phi_{B})}.$ (34)
This is the analogue of the Clapeyron equation for compact BHs. In order for the
above equation to be well-defined, the electric potential needs to be
discontinuous between the two phases, a feature required of a first
order phase transition. On such occasions, there is also a finite jump in $e$,
the mean internal energy per black hole molecule. If we were looking at the
$t-s$ phase transitions from the point of view of the whole black hole as a
single system, then the jump $e_{A}-e_{B}$ would become a jump in the black
hole mass $E_{A}-E_{B}=M_{A}-M_{B}$, which renders the first order phase
transitions difficult to understand, because the jump in the mass would break
the conservation of energy. However, since we are considering finite
subsystems, the jump $e_{A}-e_{B}$ can be attributed to strong local
fluctuations, making it physically sound. More on this point will follow
in Section 4.
When $t$ approaches $t_{c}$, the finite jump for the electric potential
becomes smaller and smaller, until it vanishes at $t=t_{c}$. In this case, the
right hand side (RHS) of eq.(34) becomes ill-defined, and L'Hôpital's
rule needs to be employed in order to get a finite value. On this occasion, we
have two possible equations,
$\displaystyle\dfrac{{\rm d}q}{{\rm d}t}$
$\displaystyle=\dfrac{\left(\frac{\partial e_{A}}{\partial
t}\right)_{q}-\left(\frac{\partial e_{B}}{\partial
t}\right)_{q}}{t_{c}\left[\left(\frac{\partial\phi_{A}}{\partial
t}\right)_{q}-\left(\frac{\partial\phi_{B}}{\partial t}\right)_{q}\right]},$
(35) $\displaystyle\dfrac{{\rm d}q}{{\rm d}t}$
$\displaystyle=\dfrac{\left(\frac{\partial s_{A}}{\partial
q}\right)_{t}-\left(\frac{\partial s_{B}}{\partial
q}\right)_{t}}{\left(\frac{\partial\phi_{A}}{\partial
q}\right)_{t}-\left(\frac{\partial\phi_{B}}{\partial q}\right)_{t}}.$ (36)
These are the analogues of the Ehrenfest equations.
The elegant thermodynamic structure of the RPS formalism allows us to inherit
the analogous discussions from ordinary thermodynamics. Using the Maxwell
relation
$\displaystyle\left(\frac{\partial s}{\partial
q}\right)_{t}=-\left(\frac{\partial\phi}{\partial t}\right)_{q},$ (37)
the denominator on the RHS of eq.(35) and the numerator on the RHS of eq.(36) can
be related to each other. Moreover, introducing the following process
parameters
$\displaystyle c_{q}=t\left(\frac{\partial s}{\partial
t}\right)_{q},\quad\beta_{q}=\dfrac{1}{\phi}\left(\frac{\partial\phi}{\partial
t}\right)_{q},\quad\kappa_{t}=-\dfrac{1}{q}\left(\frac{\partial
q}{\partial\phi}\right)_{t},$ (38)
we can express the consistency condition between eqs.(35) and (36) in terms of
the following relation among the finite jumps of the above process parameters
at the critical point,
$\displaystyle\Delta
c_{q}|_{t=t_{c},q=q_{c}}=\Bigg{(}t\,q\,\phi^{2}\dfrac{\left(\Delta\beta_{q}\right)^{2}}{\Delta\left(\kappa_{t}^{-1}\right)}\Bigg{)}\Bigg{|}_{t=t_{c},q=q_{c},\phi=\phi_{c}}.$
(39)
It is evident that $c_{q}$ is the specific isocharge heat capacity, and
$\beta_{q}$ and $\kappa_{t}$ may be referred to as the isocharge voltage
parameter and the isothermal discharging parameter, respectively[21]. For
later use, we present the analytical expression for the specific isocharge
heat capacity below,
$\displaystyle c_{q}$ $\displaystyle=\left(\frac{\partial e}{\partial
t}\right)_{q}=\frac{\left(\frac{\partial e}{\partial
s}\right)_{q}}{\left(\frac{\partial t}{\partial
s}\right)_{q}}=\frac{(d-3)(d-2)s^{\frac{2(d-3)}{d-2}}\epsilon-8q^{2}+(d-2)(d-1)s^{2}}{-(d-3)s^{\frac{d-4}{d-2}}\epsilon+\frac{8(2d-5)}{(d-2)s}q^{2}+(d-1)s}.$
(40)
It can be seen immediately that the case $\epsilon=1$ is distinguished from
the cases $\epsilon=0,-1$ in that $c_{q}$ can be divergent in the former case,
but not in the latter cases provided $d\geq 4$.
### 3.6 Local thermodynamic behaviors: a comparison between horizon
geometries
Up till now, our analysis has been carried out in generic spacetime dimensions
$d\geq 4$ with the exception of the critical point parameters which are
calculated in concrete dimensions $4\leq d\leq 10$. In this subsection, we
wish to make a comparison between the detailed thermodynamic behaviors of TBHs
with different choices of $\epsilon$. For this purpose, we need to create
plots of thermodynamic functions in various thermodynamic processes. This is a
task which is impossible to fulfill without specifying a concrete spacetime
dimension. Therefore, we will be working with $d=4$ throughout this
subsection. Different choices of $d$ could lead to some quantitative but not
qualitative differences.
The plots for the case $\epsilon=1$ were already presented in [17]. However,
in order to facilitate comparison with the cases $\epsilon=0,-1$, we recreate
the plots for the case $\epsilon=1$ and place the curves in the same figures
as those for $\epsilon=0,-1$. We believe this treatment best illustrates
the differences in the behaviors of TBHs with different $\epsilon$ values.
In $d=4$, eqs.(20)-(24) and (40) are simplified drastically, yielding
$\displaystyle e(s,q)=\frac{\epsilon\,s+4\,{q}^{2}+s^{2}}{4\,\sqrt{s}},$ (41)
$\displaystyle t(s,q)=\dfrac{\epsilon\,s-4\,q^{2}+3\,s^{2}}{8\,s^{3/2}},$ (42)
$\displaystyle\phi(s,q)=\frac{2q}{\sqrt{s}},$ (43)
$\displaystyle\bar{\mu}(s,q)=\frac{\epsilon\,s-4\,{q}^{2}-s^{2}}{8\sqrt{s}},$
(44) $\displaystyle f(s,q)={\frac{\epsilon\,s+12\,{q}^{2}-s^{2}}{8\sqrt{s}}},$
(45) $\displaystyle c_{q}(s,q)=\frac{2s\left(\epsilon
s-4q^{2}+3s^{2}\right)}{-\epsilon s+12q^{2}+3s^{2}}.$ (46)
Notice that the arguments in each of the above equalities are introduced
simply to indicate the explicit dependence on the independent variables
$(s,q)$. These variables are not necessarily the natural (or adapted)
variables for the relevant thermodynamic functions. For instance, the adapted
independent variables for the free energy $f$ should be $(t,q)$, but the
thermodynamic function of states $f(t,q)$ is not given explicitly; rather, it
is provided implicitly by eqs.(42) and (45) jointly. Of course,
one is free to make a change of variable $q\to\phi$ in some of the equations
listed above by use of eq.(43). For instance, we can write
$\displaystyle t(s,\phi)={\frac{\epsilon-{\phi}^{2}+3\,s}{8\sqrt{s}}},$ (47)
$\displaystyle\bar{\mu}(s,\phi)={\frac{\sqrt{s}\left(\epsilon-{\phi}^{2}-s\right)}{8}}.$
(48)
Once again, the adapted independent variables for $\bar{\mu}$ regarded as a
thermodynamic function should be $(t,\phi)$, but the actual relation
$\bar{\mu}(t,\phi)$ is not presented explicitly; rather, it is provided
implicitly by eqs.(47) and (48) jointly. Moreover, the Bogomol'nyi bounds
(29) and (30) can be solved for $s$, giving a lower bound for each $q$
and $\phi$,
$\displaystyle
s\geq\frac{1}{6}\left(-\epsilon+\sqrt{\epsilon^{2}+48q^{2}}\right),$ (49)
$\displaystyle s\geq-\frac{\epsilon}{3}+\frac{\phi^{2}}{3}.$ (50)
These (in)equalities provide the necessary input for creating the plots given
in Figs. 1-5.
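As a quick sanity check (our own sketch), the bounds (49) and (50) are exactly the extremality loci $t=0$ of eqs.(42) and (47), which can be confirmed with sympy:

```python
# Sanity check (ours): the lower bounds (49)-(50) on s are the positive
# roots of the extremality conditions t = 0 of eqs.(42) and (47), i.e.
#   eps*s - 4 q^2 + 3 s^2 = 0   and   eps - phi^2 + 3 s = 0.
import sympy as sp

s, q, phi = sp.symbols('s q phi', positive=True)
for eps in (1, 0, -1):
    roots = sp.solve(sp.Eq(eps*s - 4*q**2 + 3*s**2, 0), s)
    s_min = (-eps + sp.sqrt(eps**2 + 48*q**2)) / 6              # RHS of eq.(49)
    assert any(sp.simplify(r - s_min) == 0 for r in roots)

    root2 = sp.solve(sp.Eq(eps - phi**2 + 3*s, 0), s)[0]
    assert sp.simplify(root2 - (-sp.Integer(eps)/3 + phi**2/3)) == 0  # eq.(50)
print("bounds (49)-(50) = extremal (t = 0) entropies")
```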
Fig. 1 presents the isocharge $t-s$ and $f-t$ curves of $4d$ TBHs in different
isocharge processes. The first thing to be noticed in these curves is the
existence of $t-s$ phase transitions at $0\leq q\leq q_{c}$ and $t\geq t_{c}$
for the case $\epsilon=1$ (of which the case $q=0$ is an exception in the
sense that there is a phase transition but no equilibrium temperature). The
phase transitions at $t>t_{c}$ are referred to as supercritical. Besides the
phase transitions which occur only for the compact cases, the most remarkable
feature to be stressed is the existence of states with $t=0,s>0$ in the
presence of nonvanishing $q$ and all choices of $\epsilon$. Such states are
extremal black hole remnants, which indicate that charged TBHs cannot
evaporate completely. For $q=0$, the state with $t=0$ is unreachable for
$\epsilon=1$, but is otherwise reachable for $\epsilon=0,-1$. For
$\epsilon=0$, the state with $q=0,t=0$ corresponds to a completely evaporated
planar black hole ($s=0$), while for $\epsilon=-1$, the state with $q=0,t=0$
has a nonvanishing zero point entropy $s$. Another point which needs to be
stressed is that, for $\epsilon=0,-1$, $t$ always increases monotonically with
$s$ in any isocharge process, signifying that there is only a single stable
black hole state at any positive temperature in these cases.
Figure 1: The isocharge $t-s$ and $f-t$ curves at different choices of $q$
The isocharge specific heat capacity $c_{q}$ corresponding to the above
isocharge processes are presented in Fig. 2. It can be seen that, for
$q=0,\,q<q_{c},\,q=q_{c}$ and $q>q_{c}$, the $c_{q}$ versus $t$ plots have
different numbers of branches for $\epsilon=1$. In particular, at $q=0$,
$c_{q}$ has two branches, one positive and one negative, each corresponding to a
stable/unstable compact BH. For $0<q<q_{c}$, there are three branches in the
$c_{q}-t$ plots, two of which are positive and one is negative. The two
positive branches correspond respectively to the stable small and stable large
black hole phases, while the negative branch corresponds to the unstable
medium black hole states. When $q=q_{c}$, the unstable negative branch
disappears, and for $q>q_{c}$, there is only a single positive branch, though
the $c_{q}$ versus $t$ behavior gradually changes from non-monotonic to
monotonic as $q$ increases further. In comparison, the $c_{q}-t$ behaviors for
the cases $\epsilon=0,-1$ are much simpler: there is always a single,
positive-valued branch.
Figure 2: Isocharge specific heat capacity versus temperature
Besides the different $c_{q}-t$ behaviors for different choices of $\epsilon$
and different values of charge density for the case $\epsilon=1$, there are
some other remarkable features in the $c_{q}-t$ plots. On the one hand, at
high temperatures, the $c_{q}-t$ curves for different choices of $\epsilon$
become closer and closer to each other, which may signify some universal
high temperature behavior. On the other hand, as $t\to 0$, $c_{q}$ always
tends to zero for $\epsilon=0,-1$ and for $\epsilon=1$ with $q=0$, but at
different rates. This indicates that the low temperature behaviors are
different for different choices of $\epsilon$, and thus are not universal. We
leave the detailed study on the high and low temperature limits to the next
subsection.
Figure 3: The isovoltage $t-s$ and $\bar{\mu}-t$ curves
Fig. 3 presents the isovoltage $t-s$ and $\bar{\mu}-t$ curves at zero/nonzero
electric potentials. The $t-s$ and $\bar{\mu}-t$ curves for the compact BHs at
$\phi<1$ are distinguished from all other cases, including those for compact
BHs at $\phi\geq 1$ and for non-compact TBHs at any choice of $\phi$. These
distinguished curves are either non-monotonic with a minimum (the $t-s$
curves) or branched with a single zero (the $\bar{\mu}-t$ curves), signifying
the presence of HP-like transitions. The curves for the remaining cases are
all single-valued and monotonic, implying the absence of any kind of
transition.
Figure 4: The adiabatic $\phi-q$ curves
Figs.4 and 5 present the adiabatic and isothermal $\phi-q$ curves. The
adiabatic curves are very easy to understand, because, according to eq.(43),
the electric potential is proportional to the charge in the adiabatic
processes. There is only one point to be noticed, i.e. every adiabatic
$\phi-q$ curve has an end point at finite $q$, which is due to the Bogomol’nyi
bound. One can see that the upper bound for $q$ depends on both $s$ and
$\epsilon$.
Figure 5: The isothermal $\phi-q$ and $\bar{\mu}-\phi$ curves
The isothermal $\phi-q$ curves are much more subtle, and, in order to gain a
better understanding of such processes, we present them together with
the isothermal $\bar{\mu}-\phi$ plots. It should be mentioned that the
isothermal $\phi-q$ relation cannot be given explicitly in general;
however, it can be presented in implicit form by considering
eqs.(42) and (47) jointly, which yields
$\displaystyle q(s,t)$
$\displaystyle=\frac{1}{2}\,\sqrt{s}\sqrt{\epsilon+3s-8\,t\sqrt{s}},$ (51)
$\displaystyle\phi(s,t)$
$\displaystyle=\sqrt{\epsilon+3s-8\,t\sqrt{s}}.$ (52)
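A quick consistency check (our own sympy sketch): substituting $q(s,t)$ of eq.(51) back into eq.(42) must return $t$, and eq.(43) then fixes the form of $\phi(s,t)$:

```python
# Consistency check (ours): q(s,t) of eq.(51) inverts the temperature
# relation eq.(42), and phi = 2 q / sqrt(s) (eq.(43)) then gives
# phi(s,t) = sqrt(eps + 3 s - 8 t sqrt(s)).
import sympy as sp

s, t = sp.symbols('s t', positive=True)
eps = sp.symbols('epsilon', real=True)

q = sp.sqrt(s)*sp.sqrt(eps + 3*s - 8*t*sp.sqrt(s)) / 2           # eq.(51)
t_back = (eps*s - 4*q**2 + 3*s**2) / (8*s**sp.Rational(3, 2))    # eq.(42)
assert sp.simplify(t_back - t) == 0

phi = 2*q / sp.sqrt(s)                                           # eq.(43)
assert sp.simplify(phi - sp.sqrt(eps + 3*s - 8*t*sp.sqrt(s))) == 0
print("eqs.(51)-(52) consistently invert eq.(42)")
```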
For compact BHs, the isothermal $\phi-q$ relation changes from a single-valued
monotonically increasing curve to a single-valued non-monotonic curve, and
then to multivalued branched curves, as $t$ increases from $0$ to some
supercritical value $t>t_{c}$. The branching occurs only when $t>t_{c}$. This
is actually a manifestation of the supercritical $t-s$ phase transitions on
isothermal $\phi-q$ curves. Accordingly, the isothermal $\bar{\mu}-\phi$ curve
changes from single branched ($t=0$) to double branched ($t>0$), with the
lower branch crossing the line $\bar{\mu}=0$, signifying an HP-like transition.
Each branch in the $\bar{\mu}-\phi$ curve corresponds to one of the monotonic
segments (or branches) of the corresponding $\phi-q$ curve, and only the
segment/branch corresponding to the lower branch of the chemical potential is
thermodynamically stable. For non-compact TBHs, the isothermal $\phi-q$ and
$\bar{\mu}-\phi$ curves are much simpler and are always single-valued and
monotonic.
Another noticeable feature of the isothermal $\phi-q$ curve for compact BHs is
the existence of a nonzero threshold value for $\phi$ at $t=0,q=0$. The
analytical value for this threshold can be inferred from eqs.(52) and (51),
which reads
$\displaystyle\phi_{\rm thr}^{2}=\epsilon.$ (53)
Thus the nonzero threshold for $\phi$ could arise only for $\epsilon=1$. In
order to understand the origin of the above threshold, it is necessary to
invert the defining relation for $\phi$ given in eq.(18) at $d=4$, which gives
$\hat{\Phi}_{\rm thr}^{2}=\left(\frac{1}{\ell a}\right)^{2}\phi_{\rm
thr}^{2}=\left(\frac{1}{\ell a}\right)^{2}\epsilon.$
It is now evident that, for constant $\ell$ and $a\to\infty$,
$\hat{\Phi}_{\rm thr}^{2}$ vanishes. We see that the presence of the threshold
is purely a finite size effect, reminiscent of the
Casimir effect, but now affecting the electric potential rather than the
energy.
### 3.7 High and low temperature limits
1\. High temperature limit
Recently, a correspondence between $d$-dimensional spherically symmetric
Tangherlini-AdS BHs at high temperatures and quantum phonon gases in
$(d-2)$-dimensional nonmetallic crystals at low temperatures was reported in
[22]. The isocharge plots for the $c_{q}-t$ curves presented in Fig. 2 seem to
indicate that, at high temperatures, the $c_{q}-t$ behavior does not
discriminate horizon geometries. Since the plots were created specifically for
$d=4$, it is necessary to pay a little more efforts to investigate the high
temperature behaviors analytically in generic spacetime dimensions $d\geq 4$
and for generic choices of the parameter $a$.
At first sight, it may look formidably difficult to take the high temperature
limit of the thermodynamic functions such as $e,f,s,c_{q}$, etc., given the
complicated form of eqs.(20), (21), (24) and (40). However, the actual process
of taking the high temperature limit is quite simple. First, by looking at
eq.(21), one can see that there are two possible high temperature limits for
$\epsilon=1$, which correspond to $s\to 0$ and $s\to\infty$ respectively. On
the other hand, for $\epsilon=0,-1$, there remains only one high temperature
limit corresponding to $s\to\infty$. Therefore, it is clear that the unified
high temperature limit for all choices of $\epsilon$ takes place at
$s\to\infty$, i.e. the high entropy limit. Then, looking back again at
eqs.(20), (21), (24) and (40), one recognizes that the thermodynamic functions
$e,f$ as well as $c_{q}$ will all be dominated by the terms which are
independent of $\epsilon$ and $q$. This makes it crystal clear that the high
temperature limit should not discriminate different values of $q$ and
$\epsilon$. With a few lines of calculation by hand, the high temperature
limit behaviors of $e,f,s,c_{q}$ are found to be
$\displaystyle e$ $\displaystyle\approx\frac{d-2}{8}\left(\frac{t}{t_{\rm
BH}}\right)^{d-1},$ (54) $\displaystyle f$
$\displaystyle\approx-\frac{1}{8}\left(\frac{t}{t_{\rm BH}}\right)^{d-1},$
(55) $\displaystyle s$ $\displaystyle\approx\left(\frac{t}{t_{\rm
BH}}\right)^{d-2},$ (56) $\displaystyle c_{q}$
$\displaystyle\approx(d-2)\left(\frac{t}{t_{\rm BH}}\right)^{d-2},$ (57)
where
$t_{\rm BH}=\frac{d-1}{8}$
is understood as a characteristic temperature, and the high temperature
limit means $t\gg t_{\rm BH}$. The symbol $\approx$ should be
read as “approaches as $t\gg t_{\rm BH}$”.
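These asymptotics can be checked numerically from the exact $d=4$ relations; the sketch below is our own, with $\epsilon$ and $q$ chosen arbitrarily:

```python
# Numerical sketch (ours): at d = 4 the exact relations (41)-(42) approach
# the high temperature forms (54)-(57) once t >> t_BH = (d-1)/8,
# independently of eps and q.
import math

d, eps, q = 4, 1, 0.3
t_BH = (d - 1) / 8

for s in (1e2, 1e5, 1e8):                # large entropy = high temperature
    t = (eps*s - 4*q**2 + 3*s**2) / (8*s**1.5)        # eq.(42)
    e = (eps*s + 4*q**2 + s**2) / (4*math.sqrt(s))    # eq.(41)
    ratio_e = e / ((d - 2)/8 * (t/t_BH)**(d - 1))     # vs. eq.(54)
    ratio_s = s / ((t/t_BH)**(d - 2))                 # vs. eq.(56)
    print(f"s = {s:.0e}:  e-ratio = {ratio_e:.8f},  s-ratio = {ratio_s:.8f}")
# both ratios tend to 1 as s (and hence t) grows
```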
The above power law dependence of thermodynamic quantities reproduces exactly
the results presented in [22], and thus coincides with the low temperature
quantum phonon gases in $(d-2)$-dimensional nonmetallic crystals. The present
work generalizes this AdS/phonon gas correspondence to cases with
different charges and horizon geometries. Incidentally, we can
take the liberty of choosing a specific value for the parameter $a$ to make
the correspondence with phonon gases precise up to numerical
coefficients; in this way, the arbitrariness in choosing the length scale $L$
can be removed.
2\. Low temperature limit
Unlike the high temperature limit which is automatically the high entropy
limit, the temperature for charged TBHs approaches zero at some finite $s$,
whose value is dependent on $q$. Therefore, the low temperature behaviors for
charged TBHs can be very complicated in generic dimensions. On the other hand,
the neutral compact BHs cannot reach the $t=0$ state even asymptotically.
Therefore, for definiteness, we will consider only the low temperature limits
of neutral non-compact TBHs in analytic form, and it turns out that the planar
and hyperbolic TBHs behave differently.
For neutral planar TBHs ($\epsilon=0,q=0$), $t$ becomes a simple power in $s$
according to eq.(21). Inverting this $t-s$ relation and substituting in
eqs.(20), (24), we get
$\displaystyle s=\left(\frac{8t}{d-1}\right)^{d-2},$ (58) $\displaystyle
e=\frac{d-2}{8}\left(\frac{8t}{d-1}\right)^{d-1},$ (59) $\displaystyle
f=-\frac{1}{8}\left(\frac{8t}{d-1}\right)^{d-1}.$ (60)
Further, by taking the derivative of $e$ with respect to $t$, we have
$\displaystyle c_{q}=\left(\frac{\partial e}{\partial
t}\right)_{q}=(d-2)\left(\frac{8t}{d-1}\right)^{d-2}.$ (61)
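As a small check (our own sketch), eq.(61) is indeed the $t$-derivative of eq.(59) in every dimension considered:

```python
# Small check (ours): c_q of eq.(61) is the t-derivative of e(t) in eq.(59),
# for every dimension 4 <= d <= 10.
import sympy as sp

t = sp.symbols('t', positive=True)
for d in range(4, 11):
    e = sp.Rational(d - 2, 8) * (8*t/(d - 1))**(d - 1)           # eq.(59)
    cq = sp.diff(e, t)
    assert sp.simplify(cq - (d - 2)*(8*t/(d - 1))**(d - 2)) == 0  # eq.(61)
print("c_q = (de/dt)_q holds for 4 <= d <= 10")
```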
These relations are exact at all temperatures, and the expressions coincide
precisely with those obtained in the high temperature limit, as shown in
eqs.(54)-(57). In the low temperature limit ($t\to 0$), the above equations
indicate that $s$ and $c_{q}$ tend to zero as $t^{d-2}$, while $e$ and $f$
tend to zero as $t^{d-1}$.
For neutral hyperbolic TBHs ($\epsilon=-1,q=0$), $s$ can be solved explicitly
from eq.(21), yielding
$\displaystyle s=\left(\frac{4t+\sqrt{16t^{2}+(d-1)(d-3)}}{d-1}\right)^{d-2}.$
(62)
Therefore, as $t\to 0$, we have
$\displaystyle s_{0}\equiv\lim_{t\to
0}s=\left(\frac{d-3}{d-1}\right)^{(d-2)/2}.$ (63)
Substituting eq.(62) into eq.(20) and taking the first derivative of the
result with respect to $t$, we get, in the limit $t\to 0$, the following
asymptotic behaviors,
$\displaystyle e$ $\displaystyle\approx|e_{0}|\left(-1+8\,t^{2}\right),$ (64)
$\displaystyle c_{q}$ $\displaystyle\approx 16\,|e_{0}|\,t,$ (65)
where
$\displaystyle e_{0}\equiv\lim_{t\rightarrow
0}e=-\dfrac{d-2}{4(d-1)}\left(\frac{d-3}{d-1}\right)^{(d-3)/2}.$ (66)
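The expansion coefficients in eqs.(64)-(66) can be double-checked at $d=4$ with a short series computation (our own sketch):

```python
# Series check at d = 4 (ours): build e(t) for eps = -1, q = 0 from
# eqs.(62) and (41) and expand around t = 0; the result should reproduce
# e ~ |e0| (-1 + 8 t^2) with e0 = -1/(6 sqrt(3)), i.e. eqs.(64) and (66).
import sympy as sp

t = sp.symbols('t', positive=True)
sqrt_s = (4*t + sp.sqrt(16*t**2 + 3)) / 3        # eq.(62) at d = 4
s = sqrt_s**2
e = (-s + s**2) / (4*sqrt_s)                     # eq.(41) with eps = -1, q = 0

e0 = -1 / (6*sp.sqrt(3))                         # eq.(66) at d = 4
series = sp.series(e, t, 0, 3).removeO()
assert sp.simplify(series - (-e0)*(-1 + 8*t**2)) == 0   # |e0| = -e0 here
print("e(t) ~ |e0| (-1 + 8 t^2) confirmed at d = 4")
```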
The Helmholtz free energy has precisely the same zero point value as the
internal energy,
$f_{0}\equiv\lim_{t\rightarrow 0}f=e_{0},$
however, above absolute zero, $f$ develops a lowest order correction term
which is linearly dependent on $t$.
For charged TBHs with generic horizon geometries, the low temperature limit
cannot be analyzed analytically, so we resorted to numerical
methods. To our surprise, for $4\leq d\leq 10$ and generic
$q>0$, all charged TBHs with generic horizon geometries have similar low
temperature behaviors, e.g. the presence of non-vanishing zero point values
$e_{0}=f_{0}\neq 0$ which depend on $q$ and the linear behavior $c_{q}\propto
t$ for the specific heat capacity.
The above results indicate that, at low temperatures, all charged TBHs behave
like strongly degenerate electron gases. The neutral planar case is an
exception to this generic behavior, probably because the neutrality condition
happens to eliminate the coefficients of all terms in the
series expansions for $e$ and $f$ other than the $t^{d-1}$ terms.
It is remarkable that, besides the presence of a nonvanishing zero point
entropy $s_{0}$ and the negative zero point energy $e_{0}$, the low
temperature asymptotic behaviors of the charged TBHs and the neutral
hyperbolic TBHs are very similar to those of the strongly degenerate electron
gas, especially regarding the $e\sim e_{0}+\alpha_{1}t^{2}$ behavior for the
internal energy and the linear behavior $c_{q}\sim\alpha_{2}t$ for the
specific heat capacity. As $t$ increases from $0$ to $t\gg t_{\rm BH}$, the
behaviors of the charged TBHs and the neutral hyperbolic TBHs smoothly
interpolate between those of a strongly degenerate electron gas and a quantum
phonon gas. This result is another surprising discovery of this work, in
addition to the proof of universal high temperature behaviors for TBHs with
different horizon geometries.
## 4 TBHs in $4d$ conformal gravity
### 4.1 Thermodynamic Structure
The second model we consider in this work is 4-dimensional conformal
gravity. The thermodynamics of its charged spherically symmetric AdS BHs has
been discussed in [21] under the RPS formalism. The reason we revisit this
model is that the thermodynamic behavior of its non-compact TBHs looks quite
different, or even physically ill-posed. This opens a new possibility for
testing the physical viability of various gravity models from the
thermodynamic perspective.
Before delving into the details of the thermodynamic behavior, we need to
recall some basic data for this specific model of gravity. The action
including the electromagnetic field is given as
$\displaystyle S=\alpha\int{\rm
d}^{4}x\sqrt{-g}\left(\frac{1}{2}C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}+\frac{1}{3}F^{\mu\nu}F_{\mu\nu}\right),$
(67)
where we assume $\alpha=\frac{L^{2}}{16\pi G}$ in order to keep $\alpha$
dimensionless as it should [21]. The static charged AdS black hole solution
for this model can be found in [15], with the metric
$\displaystyle\mathrm{d}s^{2}=-f(r)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{f(r)}+r^{2}\mathrm{d}\Omega_{2,\epsilon}^{2},$
(68) $\displaystyle f(r)=-\frac{1}{3}\Lambda r^{2}+c_{1}r+c_{0}+\frac{d}{r},$
(69)
and the Maxwell field
$\displaystyle A=-\frac{Q}{r}\mathrm{d}t.$ (70)
The parameter $\epsilon=1,0,-1$ is the same as in the previous case, and the
other parameters $Q,c_{0},c_{1},d,\Lambda$ are all integration constants which
need to obey an additional constraint
$\displaystyle 3c_{1}d+\epsilon^{2}+Q^{2}=c_{0}^{2},$ (71)
and, for the purpose to have a reasonable vacuum configuration ($c_{1}=d=Q=0$)
and also for the consistency of the RPS description for this model, we need to
take $c_{0}=\epsilon$.
The horizon of the charged AdS TBH is located at one of the real positive
roots $r_{0}$ of the equation
$\displaystyle f(r)=0.$ (72)
The basic thermodynamic quantities for these TBHs can be found in [15],
$\displaystyle
E=\dfrac{\omega_{2,\epsilon}\alpha\left(c_{0}-\epsilon\right)\left(\Lambda
r_{0}^{2}-3c_{0}\right)}{72\pi r_{0}}+\dfrac{\omega_{2,\epsilon}\alpha
d\left(2\Lambda r_{0}^{2}-c_{0}+\epsilon\right)}{24\pi r_{0}^{2}},$ (73)
$\displaystyle S=\dfrac{\omega_{2,\epsilon}\alpha\left(\epsilon
r_{0}-c_{0}r_{0}-3d\right)}{6r_{0}},\quad\quad T=-\dfrac{\Lambda
r_{0}^{3}+3c_{0}r_{0}+6d}{12\pi r_{0}^{2}},$ (74)
$\displaystyle{\hat{Q}}=\dfrac{\omega_{2,\epsilon}\alpha
Q}{12\pi},\quad\quad\quad\quad\hat{\Phi}=-\dfrac{Q}{r_{0}},$ (75)
where $\omega_{2,\epsilon}$ represents the area of the 2-dimensional
submanifold designated by the line element ${\rm d}\Omega_{2,\epsilon}$. The
opposite sign between ${\hat{Q}}$ and $\hat{\Phi}$ is specific to this model,
and may originate from the unusual sign convention for the Maxwell field in
the action.
In the RPS formalism, we introduce
$\displaystyle
N=\dfrac{L^{2}\omega_{2,\epsilon}}{G}=16\pi\alpha\omega_{2,\epsilon},\quad\quad\mu=\dfrac{GTI_{E}}{L^{2}\omega_{2,\epsilon}}=\dfrac{2\left[d\left(3\epsilon-r^{2}\Lambda\right)+2\epsilon
r\left(c_{0}-\epsilon\right)\right]}{3\,r^{2}}.$ (76)
These quantities guarantee the correctness of the Euler relation, the first
law and the Gibbs-Duhem relation as given in eqs.(13)-(15). Please notice
that, for non-compact TBHs, $\omega_{2,\epsilon}$ diverges. Therefore, to
actually make sense of the thermodynamics for the non-compact cases, the
introduction of finite subsystems is unavoidable.
### 4.2 Local thermodynamic relations and high temperature limit
We proceed in analogy to the case of Einstein-Maxwell theory. The necessary
local thermodynamic variables are defined as
$\displaystyle e$ $\displaystyle=\dfrac{2\pi\ell E}{N},\qquad
s\equiv\frac{S}{N},\qquad q\equiv\frac{{\hat{Q}}}{N},$ (77)
$\displaystyle\bar{\mu}\equiv 2\pi\ell\mu,\quad t\equiv 2\pi\ell
T,\qquad\phi\equiv 2\pi\ell\hat{\Phi},$ (78)
and also
$\displaystyle f=e-ts.$ (79)
Taking $(s,q)$ as independent variables, the other local thermodynamic
functions are found to be
$\displaystyle e=\sqrt{s\left(-s\epsilon-384\pi^{3}q^{2}+32\pi s^{2}\right)},$
(80) $\displaystyle
t=\dfrac{1}{\sqrt{s}}\frac{-s\epsilon-192\pi^{3}q^{2}+48\pi
s^{2}}{\sqrt{-s\epsilon-384\pi^{3}q^{2}+32\pi s^{2}}},$ (81)
$\displaystyle\phi=-\frac{384\pi^{3}q\,\sqrt{s}}{\sqrt{-\epsilon
s-384\pi^{3}q^{2}+32\pi s^{2}}},$ (82)
$\displaystyle\bar{\mu}=-\frac{16\pi\,\sqrt{s}\left(s^{2}-12\pi^{2}q^{2}\right)}{\sqrt{-s\epsilon-384\pi^{3}q^{2}+32\pi
s^{2}}}.$ (83)
Using the first two of these formulae, the specific isocharge heat capacity is
calculated to be
$\displaystyle c_{q}=-\frac{s\left(192\pi^{3}q^{2}-48\pi
s^{2}+s\epsilon\right)\left(384\pi^{3}q^{2}-32\pi
s^{2}+s\epsilon\right)}{32\pi\left(1152\pi^{5}q^{4}+576\pi^{3}q^{2}s^{2}-24\pi
s^{4}+s^{3}\epsilon\right)}.$ (84)
Just like in the case of Einstein-Maxwell theory, it suffices to consider only
the cases with $q\geq 0$ (and thus $\phi\leq 0$).
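As in the Einstein-Maxwell case, the relations (81)-(84) can be cross-checked against eq.(80) with sympy (our own sketch; we use $t=(\partial e/\partial s)_q$, $\phi=(\partial e/\partial q)_s$ and $c_q=t/(\partial t/\partial s)_q$):

```python
# Cross-check (ours): eqs.(81)-(84) all follow from e(s,q) of eq.(80) by
# differentiation. For s > 0 we use sqrt(s*A) = sqrt(s)*sqrt(A) to match
# the printed forms of eqs.(81)-(82).
import sympy as sp

s, q = sp.symbols('s q', positive=True)
eps = sp.symbols('epsilon', real=True)
pi = sp.pi
A = -s*eps - 384*pi**3*q**2 + 32*pi*s**2
B = -s*eps - 192*pi**3*q**2 + 48*pi*s**2
e = sp.sqrt(s*A)                                  # eq.(80)

t = sp.diff(e, s)
phi = sp.diff(e, q)
assert sp.simplify(t - B/sp.sqrt(s*A)) == 0                    # eq.(81)
assert sp.simplify(phi + 384*pi**3*q*s/sp.sqrt(s*A)) == 0      # eq.(82)

cq = sp.simplify(t / sp.diff(t, s))
target = -(s*(192*pi**3*q**2 - 48*pi*s**2 + s*eps)
           * (384*pi**3*q**2 - 32*pi*s**2 + s*eps)) / \
         (32*pi*(1152*pi**5*q**4 + 576*pi**3*q**2*s**2
                 - 24*pi*s**4 + s**3*eps))                     # eq.(84)
assert sp.simplify(cq - target) == 0
print("conformal gravity relations (81)-(84) consistent with eq.(80)")
```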
In order for the above functions to be real-valued, the expression under
the square root needs to be non-negative. This gives the bound
$\displaystyle
s\geq\dfrac{\epsilon+\sqrt{12\,(64\,\pi^{2}\,q)^{2}+\epsilon^{2}}}{64\pi}$
(85)
for each $q$. It happens that the above bound automatically ensures the non-
negativity of $t$, thus there is no need to consider the Bogomol’nyi bound in
this case.
The above explicit form for the thermodynamic functions allows us to consider
the common high temperature limit $t\to\infty$ for all horizon geometries,
which is also the large entropy limit. The results read
$\displaystyle\lim_{t\rightarrow+\infty}e$
$\displaystyle=\dfrac{t^{3}}{108\pi},\quad\quad\lim_{t\rightarrow+\infty}f=-\dfrac{t^{3}}{216\pi},$
(86) $\displaystyle\lim_{t\rightarrow+\infty}s$
$\displaystyle=\dfrac{t^{2}}{72\pi},\quad\quad\,\,\,\lim_{t\rightarrow+\infty}c_{q}=\dfrac{t^{2}}{36\pi}.$
(87)
It is worth noticing that these results are independent of $q$ and
$\epsilon$, and that the same power law dependence on $t$ as in eqs.(54)-(57)
appears once again, further supporting our conjecture about the universality
of the AdS/phonon gas correspondence.
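A quick numerical check of the limits (86)-(87) from the exact relations (80)-(81) (our own sketch, with an arbitrary value of $q$):

```python
# Numerical sketch (ours): at large s the exact conformal gravity relations
# (80)-(81) approach the high temperature limits (86)-(87) for every eps,
# independently of q.
import math

q, s = 0.2, 1e8                        # large entropy = high temperature
ratios = []
for eps in (1, 0, -1):
    A = -s*eps - 384*math.pi**3*q**2 + 32*math.pi*s**2
    B = -s*eps - 192*math.pi**3*q**2 + 48*math.pi*s**2
    e = math.sqrt(s*A)                 # eq.(80)
    t = B / math.sqrt(s*A)             # eq.(81)
    ratios.append((e / (t**3/(108*math.pi)),    # vs. eq.(86)
                   s / (t**2/(72*math.pi))))    # vs. eq.(87)
print(ratios)                          # all ratios close to 1
```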
Please be reminded that, for neutral planar TBHs in conformal gravity, the
above power law dependence on $t$ of the thermodynamic quantities is exact,
not just a high temperature limit. Thus neutral planar TBHs
behave like $2d$ phonon gases even at low temperatures. Apart from the
neutral planar case, the TBHs in conformal gravity do not admit a low
temperature limit, because the bound (85) is actually
tighter than the Bogomol'nyi bound, making states with $t=0$ but $s\neq
0,q\neq 0$ inaccessible. This leaves no room for studying the low
temperature limit.
### 4.3 Description of thermodynamic processes
A significant difference of conformal gravity from Einstein-Maxwell theory is
the absence of isocharge $t-s$ criticality. This makes the isocharge processes
for TBHs in conformal gravity much simpler.
Figure 6: The isocharge $t-s$ and $f-t$ curves for TBHs in conformal gravity
Figure 7: The isocharge specific heat capacity versus temperature for
conformal gravity Figure 8: The isovoltage $t-s$ and $\bar{\mu}-t$ curves for
TBHs in conformal gravity
The isocharge $t-s$ and $f-t$ plots for neutral and charged TBHs in $4d$
conformal gravity are presented in Fig. 6. The solid curves correspond to
compact spherically symmetric BHs, which were first obtained in [21] and are
reproduced here in order to make comparison to the non-compact TBHs. Besides
the branched/un-branched behaviors for the Helmholtz free energy, there is a
stunning difference between the neutral hyperbolic BH and all the other cases.
The point is that the lowest entropy state for the neutral hyperbolic BH
is located at $(s,t)=(0,1)$. This is quite unusual, because it is commonly
believed that a temperature exists only for thermal systems with nonzero
entropy. A state with zero entropy but nonzero temperature is beyond the usual
thermodynamic understanding, and this may be a signature of the
ill-posedness of conformal gravity, or of its hyperbolic black hole solution,
although it has been argued that this model is on-shell equivalent to Einstein
gravity with a negative cosmological constant [67]. In this regard, please be
reminded
that, in Einstein gravity with a negative cosmological constant, the
hyperbolic black holes do not have similar problems. This may be the first
example case for testing the viability of gravity models or some of their
solutions from thermodynamic perspective.
Now let us set aside the above problem for the neutral hyperbolic BH and
proceed with the analysis by presenting other plots. Fig. 7 presents the
plots of isocharge
specific heat capacity versus temperature. These plots look similar to those
for the neutral TBHs in Einstein-Maxwell gravity as presented in the left-most
figure in Fig. 2. However, there is a noticeable difference, i.e. the curve
corresponding to the neutral hyperbolic BH does not extend to the origin on
the $t-c_{q}$ plane.
The isovoltage $t-s$ and $\bar{\mu}-t$ curves are presented in Fig. 8, which
look similar to the isocharge plots given in Fig. 6. As already pointed out in
[21], for spherical BHs in conformal gravity, there are no HP-like transitions
because the chemical potential is strictly negative. One surprising fact which
comes with Fig. 8 is that, for charged hyperbolic BHs in conformal gravity,
the HP-like transition could occur. This behavior is in sharp contrast to the
cases in Einstein-Maxwell theory. As can be inferred from Fig. 3, the HP-like
transition could occur only for compact BHs in Einstein-Maxwell theory. Now in
conformal gravity, the situation seems to be inverted. Another important point
to observe in Fig. 8 is the existence of states $(s=0,t>0)$. Such states
already show up in Fig. 6 for neutral hyperbolic BHs, but now reappear for
both planar and hyperbolic TBHs. The appearance of such unphysical states on
the isocharge and isovoltage $t-s$ curves leads us to raise concerns about
the consistency of conformal gravity as a candidate for a physically viable
alternative theory of gravity, or, at least, to question the validity of
non-compact TBHs as physically viable solutions. It may be reasonable to
impose constraints on the integration constants from thermodynamic
considerations, e.g. to set $\epsilon=1$.
Figure 9: The adiabatic $\phi-q$ curves for TBHs in conformal gravity Figure
10: The isothermal $\phi-q$ curves for TBHs in conformal gravity Figure 11:
The isothermal $\bar{\mu}-\phi$ curves for TBHs in conformal gravity
For completeness, we also present the $\phi-q$ curves in adiabatic and
isothermal processes in Fig. 9 and Fig. 10. There are no significant
differences between the adiabatic $\phi-q$ curves for TBHs with different
horizon geometries; however, the isothermal $\phi-q$ curves for planar TBHs
are distinguished from the spherical and hyperbolic cases.
Finally, we present the isothermal $\bar{\mu}-\phi$ curves for TBHs in
conformal gravity in Fig. 11, which may be used in conjunction with the
isovoltage $\bar{\mu}-t$ curves as presented in Fig. 8 for illustrating the
presence/absence of HP-like transitions in conformal gravity.
## 5 Fluctuations in subsystems
The thermodynamic fluctuations for black holes have been considered by
different authors in the past, either by use of the Smoluchowski formula or in
terms of thermodynamic geometry [69, 70, 71, 52, 72, 73, 74, 75, 76, 68]. All
previous works on this subject have treated the black hole as a whole system,
without subdividing it into subsystems. Recall that the Smoluchowski formula
is derived for a finite thermodynamic system by immersing it in a large heat
reservoir, which is in turn considered to be an isolated system. For a black
hole treated as a whole system, the role of the reservoir remains obscure. On
the other hand, in approaches based on thermodynamic geometry, such as the
Ruppeiner geometry [77, 78], one looks for the divergent points of the Ricci
scalar associated with the Ruppeiner metric (defined by the matrix of second
derivatives of the entropy as a function of states) as indicators of potential
phase transitions. Why Riemannian geometry should play any role in the space
of macro states of thermodynamic systems has never been explained sufficiently
clearly. In particular, the meaning of general coordinate transformations on
the space of macro states remains poorly understood, especially when Euler
homogeneity has to be taken into account.
In this section, we shall discuss the thermodynamic fluctuations for TBHs
using some of the ideas developed in Section 2. Our viewpoint is still based
on subsystems, and the fluctuations which we discuss will be referred to as
local fluctuations. Some of the conceptual issues in previous treatments of
black hole thermodynamic fluctuations will be clarified along the way.
### 5.1 The approach with Smoluchowski formula
The concept of subsystems introduced in Section 2 allows us to consider the
local thermal fluctuations in the subsystems using standard thermodynamic
methods.
The major conceptual issue in treating black hole fluctuations using the
Smoluchowski formula is the following. On the one hand, the derivation of the
Smoluchowski formula requires a heat reservoir containing a large number of
particles (i.e. $N\to\infty$), so that the relative fluctuations become
negligible. On the other hand, for a black hole regarded as a whole system, it
is hard to find an appropriate reservoir with which to carry out the
derivation of the Smoluchowski formula. One might imagine immersing the black
hole in an external thermal environment whose scale is much larger than the
black hole itself, and taking the environment as the reservoir. However, this
picture is clearly incorrect, because the black hole cannot stay in
equilibrium with the environment. Rather, it may absorb matter and energy from
the environment and evolve into a larger black hole at a different
thermodynamic state.
The correct way to deal with the problem is to take the black hole itself as
the reservoir, and consider the local fluctuations for its finite subsystems.
For non-compact TBHs, the horizon area is already infinitely large, so are the
additive quantities such as the total internal energy, entropy, electric
charge and the number of black hole molecules contained therein. Therefore
there is no problem in considering such black holes as reservoirs. For compact
BHs, however, the horizon area is finite, and so are the additive quantities.
This may raise concerns about the validity of taking such black holes as heat
reservoirs. The way out is to consider the large $L$ or small $G$ limit, in
which case the number $N$ of black hole molecules blows up, and it becomes
more reasonable to take such black holes as reservoirs.
According to the above reasoning, we can now always take the whole black hole
as a heat reservoir. All we need is to keep in mind that the number $N$ of
black hole molecules must be considered to be very large. Taking a subsystem
which corresponds to a fixed small area on the event horizon (small in the
sense $\mathcal{S}\ll S$, but still large enough to make $\mathcal{N}\gg 1$),
the corresponding additive quantities of the subsystem must obey
$\mathcal{E}\ll E,\quad 1\ll\mathcal{N}\ll N,\quad\mathcal{S}\ll
S,\quad\mathcal{Q}\ll Q.$
The corresponding thermodynamic identity for the subsystem can be written as
$\displaystyle{\rm d}\mathcal{E}=T{\rm d}\mathcal{S}+\hat{\Phi}{\rm
d}\mathcal{Q}+\mu{\rm d}\mathcal{N}.$
In accordance with the dimensionless notations introduced in Sections 2 and 3,
we also introduce the dimensionless additive quantities for the subsystem,
$\displaystyle\mathfrak{e}=\mathcal{N}e,\quad\mathfrak{s}=\mathcal{N}s,\quad\mathfrak{q}=\mathcal{N}q.$
(88)
Then, the thermodynamic identity for the subsystem can be rewritten as
$\displaystyle{\rm d}\mathfrak{e}=t{\rm d}\mathfrak{s}+\phi{\rm
d}\mathfrak{q}+\bar{\mu}{\rm d}\mathcal{N}.$
When ${\rm d}\mathcal{N}=0$, the subsystem is closed.
Following the standard procedure, we can get the Smoluchowski formula for the
probability density corresponding to the fluctuation configuration $(\delta
t,\delta\mathfrak{s},\delta\phi,\delta\mathfrak{q})$ in a small closed
subsystem,
$\displaystyle\mathcal{P}=A\exp\left(-\frac{\delta
t\delta\mathfrak{s}+\delta\phi\delta\mathfrak{q}}{2t}\right),$ (89)
where $A$ is an appropriate normalization constant.
In order to make use of the above distribution in the evaluation for the
relative fluctuations, one needs to be aware of the fact that, among the four
fluctuation quantities, only two are independent. As an example case, we can
take $(\delta\mathfrak{s},\delta\mathfrak{q})$ as independent fluctuations, so
that the other two fluctuations can be expressed as
$\displaystyle\delta t$ $\displaystyle=\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}\delta\mathfrak{s}+\left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\delta\mathfrak{q},\quad\delta\phi=\left(\frac{\partial\phi}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}\delta\mathfrak{s}+\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\delta\mathfrak{q}.$
With the aid of the Maxwell relation
$\displaystyle\left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}$
$\displaystyle=\left(\frac{\partial\phi}{\partial\mathfrak{s}}\right)_{\mathfrak{q}},$
the probability density (89) becomes
$\displaystyle\mathcal{P}=A\exp\left\\{-\dfrac{1}{2\,t}\left[\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}\left(\delta\mathfrak{s}\right)^{2}+\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\left(\delta\mathfrak{q}\right)^{2}+2\left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\delta\mathfrak{q}\delta\mathfrak{s}\right]\right\\}.$
(90)
Using this probability density, it is straightforward to calculate the
following mean-square (relative) fluctuations,
$\displaystyle\left\langle(\delta\mathfrak{s})^{2}\right\rangle=t\left(\frac{\partial\mathfrak{s}}{\partial
t}\right)_{\mathfrak{q}}=\mathcal{N}c_{q},\quad\left\langle\left(\dfrac{\delta\mathfrak{s}}{\mathfrak{s}}\right)^{2}\right\rangle=\frac{\left\langle(\delta\mathfrak{s})^{2}\right\rangle}{\mathfrak{s}^{2}}=\frac{c_{q}}{\mathcal{N}s^{2}}\propto\dfrac{1}{\mathcal{N}}.$
(91)
Likewise, we have
$\displaystyle\left\langle\left(\delta\mathfrak{q}\right)^{2}\right\rangle=\dfrac{t\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}}{\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}-\left[\left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\right]^{2}},\quad\left\langle\left(\dfrac{\delta\mathfrak{q}}{\mathfrak{q}}\right)^{2}\right\rangle=\dfrac{\left\langle\left(\delta\mathfrak{q}\right)^{2}\right\rangle}{\mathfrak{q}^{2}}\propto\dfrac{1}{\mathcal{N}},$
(92)
and there is a nontrivial correlation between the entropy and charge
fluctuations,
$\displaystyle\left\langle\delta\mathfrak{s}\delta\mathfrak{q}\right\rangle=-\dfrac{t\left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}}{\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}-\left[\left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\right]^{2}}.$ (93)
Alternatively, taking $(\delta\mathfrak{s},\delta\phi)$ as independent
fluctuations, the probability density can be rewritten as
$\displaystyle\mathcal{P}=A\exp\left\\{-\dfrac{1}{2t}\left[\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\phi}\left(\delta\mathfrak{s}\right)^{2}+\left(\frac{\partial\mathfrak{q}}{\partial\phi}\right)_{\mathfrak{s}}\left(\delta\phi\right)^{2}\right]\right\\}.$
(94)
Using this form of the probability density, we can get
$\displaystyle\left\langle\left(\delta\phi\right)^{2}\right\rangle=\dfrac{t}{\left(\frac{\partial\mathfrak{q}}{\partial\phi}\right)_{\mathfrak{s}}},\quad\left\langle\left(\dfrac{\delta\phi}{\phi}\right)^{2}\right\rangle=\dfrac{\left\langle\left(\delta\phi\right)^{2}\right\rangle}{\phi^{2}}\propto\dfrac{1}{\mathcal{N}}.$
The mean-square entropy fluctuation remains the same as above, and there is no
correlation between the entropy and potential fluctuations, due to the lack of
a mixed term involving the product of $\delta\mathfrak{s}$ and $\delta\phi$ in
eq.(94).
Let us note that the fluctuation of the local internal energy can be obtained
by use of the first order relation
$\delta\mathfrak{e}=t\delta\mathfrak{s}+\phi\delta\mathfrak{q}.$
This can be achieved by squaring both sides of the above equation and then
averaging under the probability distribution $\mathcal{P}$,
$\displaystyle\left\langle(\delta\mathfrak{e})^{2}\right\rangle=t^{2}\left\langle(\delta\mathfrak{s})^{2}\right\rangle+2t\phi\left\langle\delta\mathfrak{s}\delta\mathfrak{q}\right\rangle+\phi^{2}\left\langle(\delta\mathfrak{q})^{2}\right\rangle.$
(95)
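The Gaussian structure underlying eqs.(92), (93) and (95) can be checked numerically: for a density $\mathcal{P}\propto\exp\left(-\frac{1}{2t}XKX^{T}\right)$ the covariance matrix of $X=(\delta\mathfrak{s},\delta\mathfrak{q})$ is $t\,K^{-1}$. The sketch below uses arbitrary, purely illustrative values of $t$, $\phi$ and the entries of $K$ (they are not derived from any equation of state):

```python
import numpy as np

# Illustrative numbers only: t, phi and K are not taken from any black hole
# equation of state; K must merely be symmetric and positive definite.
t, phi = 0.8, 0.3
K = np.array([[2.0, 0.5],     # (dt/ds)_q    (dt/dq)_s
              [0.5, 1.5]])    # (dt/dq)_s    (dphi/dq)_s  -- cf. eq.(96)

# For P ~ exp(-X K X^T / (2t)), the covariance of X = (delta_s, delta_q)
# is t * K^{-1}.
cov = t * np.linalg.inv(K)
var_s, var_q, cross = cov[0, 0], cov[1, 1], cov[0, 1]

# The (1,1) and (0,1) entries reproduce the closed forms of eqs.(92), (93).
detK = np.linalg.det(K)
assert np.isclose(var_q, t * K[0, 0] / detK)    # eq.(92)
assert np.isclose(cross, -t * K[0, 1] / detK)   # eq.(93)

# Energy fluctuation from delta_e = t*delta_s + phi*delta_q, cf. eq.(95).
v = np.array([t, phi])
var_e = v @ cov @ v
assert np.isclose(var_e, t**2 * var_s + 2 * t * phi * cross + phi**2 * var_q)
print(var_s, var_q, cross, var_e)
```

The relative fluctuations scale as $1/\mathcal{N}$ once the extensivity $\mathfrak{s}=\mathcal{N}s$, $\mathfrak{q}=\mathcal{N}q$ of eq.(88) is inserted, exactly as stated in eqs.(91) and (92).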
All these procedures are standard, as found in any textbook on thermodynamics.
However, there is still a caveat. When evaluating the mean-square relative
fluctuations, we assumed that the probability density is of Gaussian type,
which means that the exponent can be arranged in a quadratic form
$-\frac{1}{2t}\sum_{i,j}X_{i}K_{ij}X_{j}$, where $X_{i}$ represent the
independent fluctuations and the matrix $K$ must be positive definite. In the
first case (90), we have $X=(\delta\mathfrak{s},\delta\mathfrak{q})$ and hence
$\displaystyle K=\begin{pmatrix}\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}&\left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\\\ \left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}&\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\end{pmatrix}.$
(96)
In the second case, we have $X=(\delta\mathfrak{s},\delta\phi)$, hence
$\displaystyle K=\begin{pmatrix}\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\phi}&0\\\
0&\left(\frac{\partial\mathfrak{q}}{\partial\phi}\right)_{\mathfrak{s}}\end{pmatrix}.$
(97)
The point is that the positive-definiteness of $K$, or simply ${\rm
det}\,K>0$, is not guaranteed in either case.
Let us look at the first choice (96) and take the compact TBHs as our example
system. In this case, we have
$\displaystyle{\rm det}\,K=\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}-\left[\left(\frac{\partial
t}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}\right]^{2}.$ (98)
Notice that ${\rm det}\,K$ as given above coincides precisely with what
appears in the denominators of eqs.(92) and (93). The second term on the RHS
of eq.(98) is always non-positive; therefore, the positive-definiteness of $K$
is broken whenever the first term is negative or vanishes. According to
eq.(88), we have
$\displaystyle\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}=\frac{1}{\mathcal{N}}\left(\frac{\partial
t}{\partial
s}\right)_{q},\quad\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}=\frac{1}{\mathcal{N}}\left(\frac{\partial\phi}{\partial
q}\right)_{s}.$
Then it can be inferred from Fig. 4 that $\left(\frac{\partial\phi}{\partial
q}\right)_{s}$ is always positive, while according to the first two columns in
Fig. 1, $\left(\frac{\partial t}{\partial s}\right)_{q}$ can be negative or
zero for spherical BHs with $q<q_{c}$. When this happens, the positive-
definiteness of $K$ is broken, and the procedure for calculating the relative
mean-square fluctuations would lead to divergent results. Such strong relative
fluctuations in the chosen subsystem will destroy the equilibrium state of the
whole black hole system until a new equilibrium is reached. This process is
nothing but the phase transition process: the states at which
$\left(\frac{\partial t}{\partial s}\right)_{q}$ takes negative or vanishing
values are located exactly in the phase transition zone. In fact, breaking the
positive-definiteness of $K$ does not require strict non-positivity of
$\left(\frac{\partial t}{\partial s}\right)_{q}$: for positive but
sufficiently small values of $\left(\frac{\partial t}{\partial s}\right)_{q}$,
${\rm det}\,K$ can still be non-positive. That is why meta-stable states need
to be replaced by truly stable states in the coexistence zone of the stable
small and large black hole phases.
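The positive-definiteness test can be made concrete with a minimal numerical sketch. The energy function $e(s,q)=s^{4}/4-s^{2}/2+q^{2}/2$ below is purely illustrative (it is not a black hole equation of state); it is chosen only because $t=\left(\frac{\partial e}{\partial s}\right)_{q}=s^{3}-s$ has a negative-slope branch, so the Hessian, which plays the role of $K$ in eq.(96), loses positive-definiteness there:

```python
import numpy as np

# Toy energy surface e(s, q) = s**4/4 - s**2/2 + q**2/2 (illustrative only).
# Its Hessian with respect to (s, q) plays the role of the matrix K in eq.(96);
# the symmetry of the Hessian automatically encodes the Maxwell relation.
def K_matrix(s, q):
    return np.array([[3.0 * s**2 - 1.0, 0.0],
                     [0.0,              1.0]])

def is_stable(s, q):
    # Gaussian fluctuation theory is consistent iff K is positive definite,
    # i.e. iff all eigenvalues of K are strictly positive.
    return bool(np.all(np.linalg.eigvalsh(K_matrix(s, q)) > 0.0))

print(is_stable(1.0, 0.2))   # True: outside the unstable window
print(is_stable(0.2, 0.2))   # False: |s| < 1/sqrt(3), so det K < 0
```

For this toy model the instability window is $|s|<1/\sqrt{3}$, exactly where $\left(\frac{\partial t}{\partial s}\right)_{q}<0$; in the black hole context the same eigenvalue test applied to the true equation of state locates the phase transition zone.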
For the second choice (97), we have
$\displaystyle{\rm det}\,K=\left(\frac{\partial
t}{\partial\mathfrak{s}}\right)_{\phi}\left(\frac{\partial\mathfrak{q}}{\partial\phi}\right)_{\mathfrak{s}}=\left(\frac{\partial
t}{\partial s}\right)_{\phi}\left(\frac{\partial q}{\partial\phi}\right)_{s}.$
(99)
Once again, $\left(\frac{\partial q}{\partial\phi}\right)_{s}$ is always
positive, and thus whether $K$ is positive definite depends solely on the sign
of $\left(\frac{\partial t}{\partial s}\right)_{\phi}$. It can be seen in Fig.
3 that, for spherical AdS black holes in Einstein-Maxwell theory,
$\left(\frac{\partial t}{\partial s}\right)_{\phi}$ can be negative or zero
provided $\phi<1/a$ and $0<t\leq t_{\rm min}$, where $t_{\rm min}$ is the
lowest temperature of such black holes. In such cases, the corresponding
relative mean-squared fluctuations also diverge, and such divergences signify
the non-equilibrium phase transition from an unstable small black hole phase
to a stable large black hole phase. The celebrated HP transition is a special
case, corresponding to a transition not between two black hole branches but
rather between an AdS black hole phase and a thermal gas with zero chemical
potential.
It should be mentioned that, when the (relative) fluctuations for the local
entropy and charge become large, so is
$\left\langle(\delta\mathfrak{e})^{2}\right\rangle$, as is implied by eq.(95).
This explains why there can be a finite jump in $e$ when a first order phase
transition happens.
As an addendum to the above discussions, let us mention that, if the subsystem
which we choose were open rather than closed, then the Smoluchowski formula
(89) would become
$\mathcal{P}=A\exp\left(-\frac{\delta
t\delta\mathfrak{s}+\delta\phi\delta\mathfrak{q}+\delta\bar{\mu}\delta\mathcal{N}}{2t}\right).$
However, this form of the Smoluchowski formula is barely of any use, because
taking any three of the six fluctuation variables and re-expressing the
remaining ones in terms of them would lead to a distribution of the form
$\displaystyle\mathcal{P}=A\exp\left(-\frac{1}{2t}XKX^{T}\right)$ (100)
with a degenerate matrix $K$. The underlying reason is the Gibbs-Duhem
relation, which implies that there are at most two independent fluctuation
variables.
Summarizing the above discussions, let us mention that the local fluctuations
in small closed subsystems of a black hole can be strong enough to destroy the
equilibrium state of the entire system, leading to thermodynamic instabilities
and phase transitions. Looking at the places where the coefficient matrix $K$
in the Smoluchowski distribution becomes non-positive-definite can be a quick
method for finding black hole phase transitions. These ideas work only in the
presence of Euler homogeneity, which allows for the investigation of subsystem
behaviors.
### 5.2 The approach with thermodynamic geometry
There exist different descriptions of thermodynamic geometry in the
literature, e.g. Weinhold geometry [79, 80] and Ruppeiner geometry [77, 78].
The metrics of the Weinhold and Ruppeiner geometries differ only by a finite
Weyl factor; therefore, regarding divergence behaviors, the two descriptions
are basically identical.
Let us take Ruppeiner geometry as an example. The ordinary treatment of black
hole fluctuations using Ruppeiner geometry starts by defining the metric on
the space of macro states as
$\displaystyle g_{ab}=-\left(\frac{\partial^{2}S}{\partial X^{a}\partial
X^{b}}\right),$ (101)
where $X^{a},X^{b}$ represent the other additive quantities on which the
entropy $S$ depends. The phase transition points would then be identified as
the divergent points of the Ricci scalar $R(g_{ab})$ associated with the
Ruppeiner metric (101). For BHs with finite horizon areas whose thermodynamic
properties are described using the traditional or extended phase space
formalisms, this approach has been practiced for years and has proven
effective [69, 70, 71, 52, 72, 73, 74, 75, 76]. However, for BHs whose
thermodynamic behaviors are described using the RPS formalism, or for BHs with
infinite horizon areas (such as the non-compact TBHs studied in this work),
there would be some technical problems with the Ruppeiner geometric approach.
The first problem is relatively minor: it is related to the divergence of the
additive quantities for the non-compact TBHs, and can be easily resolved by
considering only finite subsystems. The second problem arises as a consequence
of the complete Euler homogeneity, or more precisely, the Gibbs-Duhem
relation. Taking the compact BHs in Einstein-Maxwell theory as an example
(which avoids the first problem), we can solve the Euler relation to get the
entropy as a function of $(E,\hat{Q},N)$, with the total differential
$\displaystyle{\rm d}S=\frac{1}{T}({\rm d}E-\hat{\Phi}{\rm d}\hat{Q}-\mu{\rm
d}N).$ (102)
Clearly, we have
$\left(\frac{\partial S}{\partial
E}\right)_{\hat{Q},N}=\frac{1}{T},\quad\left(\frac{\partial
S}{\partial\hat{Q}}\right)_{E,N}=-\frac{\hat{\Phi}}{T},\quad\left(\frac{\partial
S}{\partial N}\right)_{E,\hat{Q}}=-\frac{\mu}{T}.$
Due to the Gibbs-Duhem relation, only two of the three quantities
$(T,\hat{\Phi},\mu)$ are independent. Therefore, taking $X^{a}=(E,\hat{Q},N)$
would give rise to a degenerate matrix
$g_{ab}=-\left(\frac{\partial^{2}S}{\partial X^{a}\partial X^{b}}\right)$ with
${\rm det}(g_{ab})=0$, which cannot be taken as a Riemannian metric. In
essence, this problem is of the same nature as the degeneracy of the matrix
$K$ in eq.(100).
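The degeneracy forced by Euler homogeneity is easy to verify symbolically. The sketch below takes a first-degree homogeneous entropy function (the concrete form $S=\sqrt{EN}-Q^{2}/N$ is purely illustrative, not a black hole entropy) and checks that the vector $(E,\hat{Q},N)$ is a null direction of the Hessian, so that ${\rm det}(g_{ab})=0$ identically:

```python
import sympy as sp

E, Q, N = sp.symbols('E Q N', positive=True)

# Any first-degree Euler-homogeneous function has the form
# S(E, Q, N) = N * s(E/N, Q/N); the concrete choice is purely illustrative.
S = sp.sqrt(E * N) - Q**2 / N

X = (E, Q, N)
H = sp.Matrix(3, 3, lambda i, j: sp.diff(S, X[i], X[j]))  # Hessian of S

# Degree-1 homogeneity of S makes its gradient degree-0 homogeneous, hence
# H acting on (E, Q, N)^T gives zero: the Hessian is degenerate.
print(sp.simplify(H * sp.Matrix([E, Q, N])))  # zero column vector
print(sp.simplify(H.det()))                   # 0
```

The same null direction is what degenerates the matrix $K$ in eq.(100); only after fixing $\mathcal{N}$ (a closed subsystem) does a nondegenerate two-dimensional metric survive.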
A single step resolves both problems at once, namely considering only closed
subsystems with fixed $\mathcal{N}$. If we persist in using the dimensionless
variables, eq.(102) is replaced by
$\displaystyle{\rm d}\mathfrak{s}=\frac{1}{t}({\rm d}\mathfrak{e}-\phi{\rm
d}\mathfrak{q}).$ (103)
If we take $x^{a}\equiv(t,\mathfrak{q})$ as independent variables and consider
$\mathfrak{e}$ and $\phi$ as functions of $(t,\mathfrak{q})$, the local
Ruppeiner-like metric
$\mathfrak{g}_{ab}\equiv-\left(\frac{\partial^{2}\mathfrak{s}}{\partial
x^{a}\partial x^{b}}\right)$
can be evaluated to be
$\displaystyle\mathfrak{g}_{ab}=\frac{1}{t}\begin{pmatrix}\frac{\mathfrak{c}_{\mathfrak{q}}}{t}&0\cr
0&\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{t}\end{pmatrix},$
(104)
where $\mathfrak{c}_{\mathfrak{q}}=\left(\frac{\partial\mathfrak{e}}{\partial
t}\right)_{\mathfrak{q}}=\mathcal{N}c_{q}$ is the dimensionless heat capacity
of the subsystem. A global version of a similar metric was given in [73],
which is inapplicable to non-compact TBHs. The local version used here applies
to all TBHs with any horizon geometry.
The point is that the line element
$\displaystyle\delta^{2}\mathfrak{s}=-\mathfrak{g}_{ab}\delta x^{a}\delta
x^{b}$ (105)
represents the second order fluctuation for the dimensionless entropy
$\mathfrak{s}$ of the finite subsystem (the first order fluctuation vanishes
provided the subsystem is in equilibrium with the reservoir). In the stable
states of the subsystem, $\delta^{2}\mathfrak{s}$ needs to be negative, which
corresponds to positive values of $\mathfrak{c}_{\mathfrak{q}}$ and
$\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{t}$. However, in the
unstable states, $\delta^{2}\mathfrak{s}$ can be non-negative, indicating that
the state of the subsystem would never fall back to the initial state
spontaneously after the fluctuation. Moreover, at places where
$\mathfrak{c}_{\mathfrak{q}}$ and
$\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{t}$ diverge, the
local fluctuation becomes infinitely strong, consequently the equilibrium of
the whole black hole system can be destroyed, leading to a system-wide phase
transition. Such situations indeed happen, as can be inferred from Figs. 2 and
5.
Now let us make our critical points by asking two questions:
(1) Why does Ruppeiner geometry work, i.e. why can phase transition points be
identified with the divergent points of the Ricci scalar associated with the
Ruppeiner metric?
(2) Do phase transitions have anything to do with the Riemannian structure on
the space of macro states?
The answer to the first question can be given by evaluating the Ricci scalar
$R(\mathfrak{g}_{ab})$ associated with $\mathfrak{g}_{ab}$ explicitly,
$\displaystyle R(\mathfrak{g}_{ab})$
$\displaystyle=\frac{1}{2(\mathfrak{c}_{\mathfrak{q}})^{2}({\partial_{\mathfrak{q}}\phi})^{2}}\Big{\\{}t\partial_{\mathfrak{q}}\phi\big{[}\partial_{t}\mathfrak{c}_{\mathfrak{q}}\big{(}t\partial_{t,\mathfrak{q}}\phi-\partial_{\mathfrak{q}}\phi\big{)}+(\partial_{\mathfrak{q}}\mathfrak{c}_{\mathfrak{q}})^{2}\big{]}$
$\displaystyle\qquad+\mathfrak{c}_{\mathfrak{q}}\big{[}t\big{(}\partial_{\mathfrak{q}}\mathfrak{c}_{\mathfrak{q}}\partial_{\mathfrak{q},\mathfrak{q}}\phi+t(\partial_{t,\mathfrak{q}}\phi)^{2}\big{)}-2t\partial_{\mathfrak{q}}\phi\big{(}\partial_{\mathfrak{q},\mathfrak{q}}\mathfrak{c}_{\mathfrak{q}}+t\partial_{t,t,\mathfrak{q}}\phi\big{)}-(\partial_{\mathfrak{q}}\phi)^{2}\big{]}\Big{\\}}.$
(106)
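Expression (106) is nothing but the Ricci scalar of the two-dimensional diagonal metric (104). Such computations can be checked with a generic symbolic routine for 2d diagonal metrics; the sketch below (our own computational aid, not part of the paper's derivation) uses the closed form $R=2K$ with the Gaussian curvature of $ds^{2}=A\,dx^{2}+B\,dy^{2}$, verified on two metrics of known constant curvature:

```python
import sympy as sp

def ricci_scalar_2d_diagonal(A, B, x, y):
    # Ricci scalar of ds^2 = A(x,y) dx^2 + B(x,y) dy^2, via the standard
    # closed form R = 2K with Gaussian curvature
    #   K = -( d_x(B_x / w) + d_y(A_y / w) ) / (2 w),   w = sqrt(A B).
    w = sp.sqrt(A * B)
    K = -(sp.diff(sp.diff(B, x) / w, x)
          + sp.diff(sp.diff(A, y) / w, y)) / (2 * w)
    return sp.simplify(2 * K)

x, y = sp.symbols('x y', real=True)

# Sanity checks: the flat plane has R = 0; the hyperbolic plane in
# horocyclic coordinates (A = 1, B = exp(2x)) has R = -2.
print(ricci_scalar_2d_diagonal(sp.Integer(1), sp.Integer(1), x, y))   # 0
print(ricci_scalar_2d_diagonal(sp.Integer(1), sp.exp(2 * x), x, y))   # -2
```

Feeding in $A=\mathfrak{c}_{\mathfrak{q}}/t^{2}$ and $B=\frac{1}{t}\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{t}$ as symbolic functions of $x=t$, $y=\mathfrak{q}$ should reproduce the structure of eq.(106).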
$R(\mathfrak{g}_{ab})$ could be divergent either at the zeros of the
denominator or at the divergent points of the numerator. For $t>0$, the only
zero comes from the zero of
$\partial_{\mathfrak{q}}\phi\equiv\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{t}$,
which corresponds to the existence of a horizontal tangent to the isothermal
$\phi-q$ curves, as can be seen in the first row of Fig. 5. Whether
$R(\mathfrak{g}_{ab})$ is indeed divergent at the zero of
$\partial_{\mathfrak{q}}\phi$ cannot be determined by looking at eq.(106)
alone without calculating the various derivatives appearing in the numerator.
However, at $t=0$, an extra zero arises in the denominator for some of the
TBHs with reasonable low temperature limits, because
$\mathfrak{c}_{\mathfrak{q}}$ goes to zero either as $\mathcal{O}(t^{1})$ or
as $\mathcal{O}(t^{d-2})$. In this case, $R(\mathfrak{g}_{ab})$ is definitely
divergent, because the last term on the second line of eq.(106) gives a
contribution $-\frac{1}{2\mathfrak{c}_{\mathfrak{q}}}$, which diverges at
$t=0$. This divergence cannot be identified as a phase transition point,
because the low temperature limit of the TBHs, when it exists, looks
completely normal. This may be a signature of the failure of the Ruppeiner geometric
approach, as is noticed in [69, 70] in the cases of four dimensional
asymptotically flat Kerr black hole and higher dimensional multiply rotating
black holes without cosmological constant, for which no phase transitions of
any type exist.
The divergent points of the numerator occur at places where
$\mathfrak{c}_{\mathfrak{q}}$ diverges, because, at the divergent points of
$\mathfrak{c}_{\mathfrak{q}}$,
$\partial_{\mathfrak{q},\mathfrak{q}}\mathfrak{c}_{\mathfrak{q}}$ diverges at
a much faster rate than $\mathfrak{c}_{\mathfrak{q}}$ itself, which leads to a
divergence in $R(\mathfrak{g}_{ab})$. Using the identity
$\displaystyle\partial_{\mathfrak{q}}\phi$
$\displaystyle=\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{t}=\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{\mathfrak{s}}+\left(\frac{\partial\phi}{\partial\mathfrak{s}}\right)_{\mathfrak{q}}\left(\frac{\partial\mathfrak{s}}{\partial\mathfrak{q}}\right)_{t}=\frac{1}{\mathcal{N}}\left[\left(\frac{\partial\phi}{\partial
q}\right)_{s}+\left(\frac{\partial\phi}{\partial
s}\right)_{q}\left(\frac{\partial s}{\partial q}\right)_{t}\right]$
$\displaystyle=\frac{1}{\mathcal{N}}\left[\left(\frac{\partial\phi}{\partial
q}\right)_{s}-\left(\frac{\partial\phi}{\partial
s}\right)_{q}\dfrac{\left(\frac{\partial t}{\partial
q}\right)_{s}}{\left(\frac{\partial t}{\partial
s}\right)_{q}}\right]=\frac{1}{\mathcal{N}}\left[\left(\frac{\partial\phi}{\partial
q}\right)_{s}-\frac{c_{q}}{t}\left(\frac{\partial\phi}{\partial
s}\right)_{q}\left(\frac{\partial t}{\partial q}\right)_{s}\right],$
one can see that $\partial_{\mathfrak{q}}\phi$ diverges precisely at the same
states where $\mathfrak{c}_{\mathfrak{q}}$ diverges. Therefore, there is no
need to consider the divergence of $R(\mathfrak{g}_{ab})$ caused by the
divergence of $\partial_{\mathfrak{q}}\phi$ in the numerator separately.
In all cases, the divergences of $R(\mathfrak{g}_{ab})$ are actually inherited
from the divergent or degenerate points of $\mathfrak{g}_{ab}$. However, since
the Ruppeiner metric is already singular or degenerate at the divergent points
of $R(\mathfrak{g}_{ab})$, the evaluation of $R(\mathfrak{g}_{ab})$ itself at
such points is at most formal and unnecessary: it suffices to find the
divergent/degenerate points of the Ruppeiner metric, which is technically much
easier and physically more sound. Moreover, by looking for
the places where $\mathfrak{c}_{\mathfrak{q}}$ and/or
$\left(\frac{\partial\phi}{\partial\mathfrak{q}}\right)_{t}$ become negative,
one can judge the thermodynamic instability of the subsystem. This information
can be clearly inferred from the Ruppeiner metric but not from the
corresponding curvature scalar.
Our answer to the second question listed above tends to be “No”, because the
role of general coordinate transformations has never been made sufficiently
clear in such settings, in spite of the numerous arguments made in [78]. The
point is that each variable in the space of macro states has a clear physical
meaning, and the singular/degenerate points of the Ruppeiner metric also have
very clear physical correspondences. Therefore, it is not clear what a general
coordinate transformation on the space of macro states means. In particular,
when the coordinate transformation is nonlinear, the Euler homogeneity, which
is a characteristic property of thermodynamics, would be broken. The argument of [78]
that the energy and the homogeneity are irrelevant to the analysis of
fluctuations is incorrect, especially in the present local thermodynamic
framework, because, in the absence of Euler homogeneity, there would be no
point in taking a subsystem as a representative for investigating the mean
behaviors of the complete system. Moreover, the singularities in $\mathfrak{g}_{ab}$ or
its inverse are different from the coordinate singularities which could appear
in spacetime metrics in relativistic theories of gravity or in generic
Riemannian spaces. In the latter cases, the Ricci scalar need not inherit the
singularities of the metric, because some of them may be spurious and
removable by coordinate transformations. The use of
Riemannian geometric constructions on the space of macro states can be
confusing and cumbersome, while the use of conventional techniques for finding
phase transition points is logically clearer and technically simpler.
## 6 Concluding remarks
The RPS formalism proves to be a powerful tool for analyzing thermodynamic
behaviors for TBHs with different horizon geometries. The complete Euler
homogeneity of this formalism is especially useful for understanding the
properties of non-compact TBHs, which are hard to understand properly using
other formalisms of black hole thermodynamics. The concept of subsystems for
black holes introduced in this work allows all conventional approaches of
ordinary thermodynamics to be carried over to black hole systems, which in
turn helps in analyzing the local thermodynamic behaviors of finite parts of
the black hole event horizons as subsystems.
Our analysis of the concrete example cases of TBHs in Einstein-Maxwell theory
and in $4d$ conformal gravity indicates that the local thermodynamic behaviors
of TBHs in the same underlying theory can be quite different for different
horizon geometries. On the other hand, TBHs with the same horizon geometry but
from different underlying gravity models can also behave very differently, and
there exist special cases in which certain behaviors of some black hole
solutions look ill-posed from a thermodynamic perspective. This opens a novel
possibility for testing the physical viability of gravity models or their
solutions using thermodynamic considerations.
Besides the detailed comparison of the concrete thermodynamic behaviors of
TBHs with different horizon geometries in the above two models of gravity, the
most striking new result of this work lies in the high temperature and low
temperature limits. On the high temperature end, our analysis indicates that
all TBHs behave like quantum phonon gases in nonmetallic crystals residing in
a flat space with two fewer dimensions than the black hole spacetime. This
behavior is irrespective of the underlying gravity model, the spacetime
dimension and the concrete horizon geometry. This fact further supports our
conjecture about the universality of the AdS/phonon gas correspondence. On the
low temperature end, it is shown that most of the TBHs in $4d$ conformal
gravity do not admit a low temperature limit, except for the neutral planar
case, which still behaves like a $2d$ phonon gas as in the high temperature
limit. In Einstein-Maxwell theory, all charged TBHs as well as the neutral
hyperbolic TBHs behave like strongly degenerate electron gases, and the
neutral planar TBHs retain the phonon gas behavior; the neutral compact TBHs
do not admit a low temperature limit.
The introduction of finite closed subsystems allows for an analysis of the
local thermodynamic fluctuations, taking the whole black hole, with its
(nearly) infinitely many microscopic degrees of freedom, as an isolated heat
reservoir.
This viewpoint is very important because, in the absence of the concept of
subsystems, the only way to consider thermal fluctuations of a black hole
might be to immerse it in an external heat reservoir. However, such
configurations can hardly be trusted: the large external heat reservoir would
most probably destroy the black hole configuration and change it into
something else (possibly a larger black hole in a different thermodynamic
state), rather than stay in equilibrium with the black hole. The concrete
analysis of the strong local mean fluctuations in the presence of instability
allows us to understand the origin of the finite jump in the mean internal
energy per black hole molecule $e$ during the process of first order phase
transition, which is otherwise difficult to understand in the global picture
(in which the jump becomes that of the mass of the black hole, which breaks
energy conservation). Certain conceptual and technical issues in the local
version of the Ruppeiner geometric approach to black hole fluctuations are
also discussed in some detail.
Last but not least, let us comment briefly on the potential observational
consequences of the first order black hole phase transitions. Note that
first order $t-s$ phase transitions could appear only for BHs with a
compact event horizon in AdS backgrounds. Since our universe is not
asymptotically AdS, it appears that there can hardly be any $t-s$ phase
transitions for black holes in the observable universe. However, as pointed
out in Refs. [81, 82, 83], BHs confined in a box can mimic the behaviors of
AdS BHs. In reality, a confining potential provided by another strong
gravitational source can play the role of a confining box, so there is a
possibility to observe black hole phase transitions in the real universe.
Since, as made clear in the main text of this paper, the $t-s$ phase
transitions are accompanied by a jump in the black hole mass, a black hole in
a confining potential will most probably undergo a transition from a state
with larger mass to a state with smaller mass during the first order phase
transition in the absence of external mass source. During this process, some
extra mass could be ejected out of the black hole. Such mass ejection could in
principle be observed and may provide a mechanism for explaining the black
hole “burp” phenomenon reported recently in [84].
## Acknowledgement
This work is supported by the National Natural Science Foundation of China
under the grant No. 12275138.
## References
* [1] Lemos, J. P. S. “Three dimensional black holes and cylindrical general relativity,” Phys. Lett. B, 353(1):46–51, 1995.[arXiv:gr-qc/9404041].
* [2] Lemos, J. P. S. “Two-dimensional black holes and planar general relativity,” Class.Quant.Grav., 12(4):1081, 1995.[arXiv:gr-qc/9407024].
* [3] Lemos, J. P. S.; Zanchin, V. T. “Rotating charged black strings and three-dimensional black holes,” Phys. Rev. D, 54(6):3840, 1996.[arXiv:hep-th/9511188].
* [4] Åminneborg, S.; Bengtsson, I.; Holst, S.; Peldan, P. “Making anti-de sitter black holes,” Class.Quant.Grav., 13(10):2707, 1996.[arXiv:gr-qc/9604005].
* [5] Mann, R. B. “Pair production of topological anti-de sitter black holes,” Class.Quant.Grav., 14(5):L109, 1997.[arXiv:gr-qc/9607071].
* [6] Smith, W. L.; Mann, R. B. “Formation of topological black holes from gravitational collapse,” Phys. Rev. D, 56, 4942, 1997.[arXiv:gr-qc/9703007]
* [7] Brill, D. R.; Louko, J.; Peldan, P. “Thermodynamics of (3+1)-dimensional black holes with toroidal or higher genus horizons,” Phys. Rev. D, 56(6):3600, 1997.[arXiv:gr-qc/9705012].
* [8] Mann, R. B. “Topological black holes-outside looking in,” Annals Israel Phys.Soc., 13:311, 1997.[arXiv:gr-qc/9709039].
* [9] Birmingham, D. “Topological black holes in anti-de sitter space,” Class. Quant. Grav., 16(4):1197, 1999. [arXiv:hep-th/9808032].
* [10] Cai, R. G.; Soh, K.-S. “Topological black holes in the dimensionally continued gravity,” Phys. Rev. D 59, 044013, 1999. [arXiv:gr-qc/9808067].
* [11] Peça, C. S.; Lemos, J. P. S. “Thermodynamics of toroidal black holes,” J. Math. Phys., 41(7):4783–4789, 2000.[arXiv:gr-qc/9809029].
* [12] Mahapatra,S.; Priyadarshinee, S.; Reddy, G. M.; Shukla, B. “Exact topological charged hairy black holes in AdS Space in D-dimensions,” Phys. Rev. D 102, 024042, 2020. [arXiv:2004.00921]
* [13] Priyadarshinee, S.; Mahapatra, S.; Banerjee, I. “Analytic topological hairy dyonic black holes and thermodynamics,” Phys. Rev. D 104, 084023, 2021. [arXiv:2108.02514].
* [14] Cai, R. G. “Gauss-Bonnet black holes in AdS spaces,” Phys. Rev. D 65, 084014, 2002. [arXiv:hep-th/0109133].
* [15] Li, J.; Liu, H. S.; Lü, H. and Wang, Z. L. “Fermi surfaces and analytic green’s functions from conformal gravity,” J. High Energy Phys., 2013(2):1–39, 2013. [arXiv:1210.5000].
* [16] Nag, S. “The complex analytic theory of Teichmüller spaces,” Wiley, New York, 1988. ISBN: 0471627739
* [17] Gao, Z. and Zhao, L. “Restricted phase space thermodynamics for AdS black holes via holography,” Class. Quant. Grav., (2021). [arXiv:2112.02386].
* [18] Wang, T. and Zhao, L. “Black hole thermodynamics is extensive with variable newton constant,” Phys. Lett. B, 827:136935, (2022). [arXiv:2112.11236].
* [19] Gao, Z.; Kong, X. and Zhao, L. “Thermodynamics of Kerr-AdS black holes in the restricted phase space,” Euro. Phys. J. C, 82(2):1–10, (2022). [arXiv:2112.08672].
* [20] Zhao, L. “Thermodynamics for higher dimensional rotating black holes with variable Newton constant,” Chin. Phys. C 46 (2022) 055105. [arXiv:2201.00521].
* [21] Kong, X.; Wang, T.; Gao, Z. and Zhao, L. “Restricted phased space thermodynamics for black holes in higher dimensions and higher curvature gravities,” Entropy, 24(8):1131, 2022. [arXiv:2208.07748].
* [22] Kong, X.; Wang, T. and Zhao, L. “High temperature AdS black holes are low temperature quantum phonon gases,” Phys. Lett. B, 836:137623, (2023). [arXiv:2209.12230].
* [23] Zhao, L.; Zhang, Z.; Kong, X. “Restricted phase space thermodynamics of charged AdS black holes in conformal gravity,” Chin. Phys. C 47 (2023) 095105. [arXiv:2211.00963]
* [24] Callen, H. B. “Thermodynamics and an Introduction to Thermostatistics,” 2nd Eds., John Willy & Sons 1985, ISBN: 9780471862567.
* [25] Bekenstein, J. D. “Black holes and the second law,” Lett. Nuovo Cimento, 4(15):737–740, (1972).
* [26] Bekenstein, J. D. “Black holes and entropy,” Phys. Rev. D, 7(8):2333–2346, (1973).
* [27] Bardeen, J. M.; Carter, B. and Hawking S. W. “The four laws of black hole mechanics,” Comm. Math. Phys., 31(2):161–170, (1973).
* [28] Smarr, L. “Mass formula for Kerr black holes,” Phys. Rev. Lett., 30(2):71, (1973).
* [29] Hawking, S. W. “Particle creation by black holes,” Euclidean quantum gravity, pages 167–188. World Scientific,(1975).
* [30] Bekenstein, J. D. “Statistical black-hole thermodynamics,” Phys. Rev. D, 12(10):3077, 1975.
* [31] Kastor, D.; Ray, S. and Traschen, J. “Enthalpy and the mechanics of AdS black holes,” Class. Quant. Grav., 26(19):195011, (2009). [arXiv:0904.2765].
* [32] Dolan, B. P. “The cosmological constant and the black hole equation of state,” Class. Quant. Grav., 28(12):125020, (2010). [arXiv:1008.5023].
* [33] Dolan, B. P. “Pressure and volume in the first law of black hole thermodynamics,” Class. Quant. Grav., 28(23):235017, (2011). [arXiv:1106.6260].
* [34] Dolan, B. P. “Compressibility of rotating black holes,” Phys. Rev. D, 84(12):127503, (2011). [arXiv:1109.0198].
* [35] Kubizňák, D. and Mann, R. B. “P-V criticality of charged AdS black holes,” J. High Energy Phys., 2012(7):1–25, (2012). [arXiv:1205.0559].
* [36] Gunasekaran, S.; Kubizňák, D. and Mann, R. B. “Extended phase space thermodynamics for charged and rotating black holes and born-infeld vacuum polarization,” J. High Energy Phys., 2012(11):1–43, 2012. [arXiv:1208.6251].
* [37] Belhaj, A.; Chabab, M.; Moumni, H. El. and Sedra, M. B. “On thermodynamics of AdS black holes in arbitrary dimensions,” Chin. Phys. Lett., 29(10):100401, 2012. [arXiv:1210.4617].
* [38] Hendi, S. H. and Vahidinia, M. H. “Extended phase space thermodynamics and p- v criticality of black holes with a nonlinear source,” Phys. Rev. D, 88(8):084045, 2013. [arXiv:1212.6128].
* [39] Chen, S. B.; Liu, X. F. and Liu, C. Q. “P—V criticality of an AdS black hole in f (r) gravity,” Chin. Phys. Lett., 30(6):060401, 2013. [arXiv:1301.3234].
* [40] Zhao, R.; Zhao, H. H.; Ma, M. S. and Zhang, L. C. “On the critical phenomena and thermodynamics of charged topological dilaton AdS black holes,” Euro. Phys. J. C, 73(12):1–10, 2013. [arXiv:1305.3725].
* [41] Poshteh, M. B. J.; Mirza, B. and Sherkatghanad, Z. “Phase transition, critical behavior, and critical exponents of Myers-Perry black holes,” Phys. Rev. D, 88(2):024005, 2013. [arXiv:1306.4516].
* [42] Altamirano, N.; Kubizňák, D. and Mann, R. B. “Reentrant phase transitions in rotating anti–de sitter black holes,” Phys. Rev. D, 88(10):101502, 2013. [arXiv:1306.5756].
* [43] Cai, R. G.; Cao, L. M.; Li, L. and Yang, R. Q. “PV criticality in the extended phase space of Gauss-Bonnet black holes in AdS space,” J. High Energy Phys., 2013(9):1–22, (2013). [arXiv:1306.6233].
* [44] Belhaj, A.; Chabab, M.; Moumni, H. El.; Medari, L. and Sedra, M. B. “The thermodynamical behaviors of Kerr—Newman AdS black holes,” Chin. Phys. Lett., 30(9):090402, 2013. [arXiv:1307.7421].
* [45] Xu, W.; Xu, H. and Zhao, L. “Gauss–Bonnet coupling constant as a free thermodynamical variable and the associated criticality,” Euro. Phys. J. C, 74(7):1–13, (2014). [arXiv:1311.3053].
* [46] Zou, D. C.; Zhang, S. J. and Wang, B. “Critical behavior of Born-Infeld AdS black holes in the extended phase space thermodynamics,” Phys. Rev. D, 89(4):044002, 2014. [arXiv:1311.7299].
* [47] Altamirano, N.; Kubizňák, D.; Mann, R. B. and Sherkatghanad, Z. “Thermodynamics of rotating black holes and black rings: phase transitions and thermodynamic volume,” Galaxies, 2(1):89–159, 2014. [arXiv:1401.2586].
* [48] Wei, S. W. and Liu, Y. X. “Triple points and phase diagrams in the extended phase space of charged Gauss-Bonnet black holes in AdS space,” Phys. Rev. D, 90(4):044057, 2014. [arXiv:1402.2837].
* [49] Kubizňák, D. and Mann, R. B. “Black hole chemistry,” Can. J. Phys., 93(9):999–1002, 2015. [arXiv:1404.2126].
* [50] Zou, D. C.; Liu, Y. and Wang, B. “Critical behavior of charged Gauss-Bonnet-AdS black holes in the grand canonical ensemble,” Phys. Rev. D, 90(4):044063, 2014. [arXiv:1404.5194]
* [51] Xu, H.; Xu, W. and Zhao, L. “Extended phase space thermodynamics for third-order lovelock black holes in diverse dimensions,” Euro. Phys. J. C, 74(9):1–15, 2014. [arXiv:1405.4143].
* [52] Zhang, J. L.; Cai, R. G. and Yu, H. “Phase transition and thermodynamical geometry of Reissner-Nordström-AdS black holes in extended phase space,” Phys. Rev. D, 91(4):044028, (2015). [arXiv:1502.01428].
* [53] Kubizňák, D.; Mann, R. B. and Teo, M. “Black hole chemistry: thermodynamics with Lambda,” Class. Quant. Grav., 34(6):063001, (2017). [arXiv:1608.06147].
* [54] Lemos, J. P. S. and Zaslavskii, O. B. “Black hole thermodynamics with the cosmological constant as independent variable: Bridge between the enthalpy and the Euclidean path integral approaches,” Phys. Lett. B, 786:296–299, (2018). [arXiv:1806.07910].
* [55] Cong, W.; Kubizňák, D. and Mann, R. B. “Thermodynamics of AdS black holes: critical behavior of the central charge,” Phys. Rev. Lett., 127(9):091301, (2021). [arXiv:2105.02223].
* [56] Visser, M. R. “Holographic thermodynamics requires a chemical potential for color,” Phys. Rev. D, 105(10):106014, 2022. [arXiv:2101.04145]
* [57] Cong, W.; Kubizňák, D.; Mann, R. B.; Visser, M. R. “Holographic CFT phase transitions and criticality for charged AdS black holes,” J. High Energy Phys., 2022(8): 1–37, 2022.[arXiv:2112.14848].
* [58] Abreu, E. M. C.; Neto, J. A.; Barboza Jr. E. M.; Soares, B. B. “Black holes quasinormal modes, loop quantum gravity Immirzi parameter and nonextensive statistics,” Phys. Lett. B 798, 10, 135011, 2019. [arXiv:1910.03123].
* [59] Çimdiker, I.; Dabrowski, M. P.; Gohar, H. “Equilibrium temperature for black holes with nonextensive entropy,” arXiv preprint, 2022. [arXiv:2208.04473].
* [60] Nakarachinda, R.; Promsiri, C.; Tannukij, L.; Wongjun, P. “Thermodynamics of Black Holes with Rényi Entropy from Classical Gravity,” [arXiv:2211.05989].
* [61] Y. Tian, X.-N. Wu, H. Zhang, “Holographic Entropy Production,”, _JHEP_ 1410:170 (2014), [arXiv:1407.8273].
* [62] Y. Tian, “A topological charge of black holes,” _Class. Quantum Grav._ 36 (2019) 245001, [arXiv:1804.00249].
* [63] Gibbons, G. W. and Hawking, S. W. “Action integrals and partition functions in quantum gravity,” Phys. Rev. D, 15(10):2752–2756, (1977).
* [64] Chamblin, A.; Emparan, R.; Johnson, C. V. and Myers, R. C. “Charged AdS black holes and catastrophic holography,” Phys. Rev. D, 60(6):064018, (1999). [arXiv:hep-th/9902170].
* [65] G. ’t Hooft, “Dimensional reduction in quantum gravity,” [arXiv:gr-qc/9310026].
* [66] Hawking, S. W. and Page, D. N. “Thermodynamics of black holes in anti-de Sitter space,” Commun. Math. Phys., 87(4):577–588, (1983).
* [67] Maldacena, J. “Einstein Gravity from Conformal Gravity,” [arXiv:1105.5632].
* [68] Peça, C. S.; Lemos, J. P. S. “Thermodynamics of Reissner–Nordström–anti-de Sitter black holes in the grand canonical ensemble,” Phys. Rev. D, 59(12):124007, 1999. [arXiv:gr-qc/9805004].
* [69] Åman, J. E.; Bengtsson, I. and Pidokrajt, N. “Geometry of black hole thermodynamics,” Gen. Rel. Grav., 35(10):1733–1743, 2003. [arXiv:gr-qc/0304015].
* [70] Åman, J. E. and Pidokrajt, N. “Geometry of higher-dimensional black hole thermodynamics,” Phys. Rev. D, 73(2):024017, 2006. [arXiv:hep-th/0510139].
* [71] Sahay, A.; Sarkar, T. and Sengupta, G. “Thermodynamic geometry and phase transitions in Kerr-Newman-AdS black holes,” J. High Energy Phys., 2010(4):1–41, 2010. [arXiv:1002.2538].
* [72] Wei, S. W.; Liang, B. and Liu, Y. X. “Critical phenomena and chemical potential of a charged AdS black hole,” Phys. Rev. D, 96(12):124018, (2017). [arXiv:1705.08596].
* [73] Wei, S. W.; Liu, Y. X. and Mann, R. B. “Ruppeiner geometry, phase transitions, and the microstructure of charged AdS black holes,” Phys. Rev. D, 100(12):124033, 2019. [arXiv:1909.03887].
* [74] Xu, Z. M.; Wu, B. and Yang, W. L. “Diagnosis inspired by the thermodynamic geometry for different thermodynamic schemes of the charged BTZ black hole,” Euro. Phys. J. C, 80(10):1–10, 2020. [arXiv:2002.00117].
* [75] Xu, Z. M.; Wu, B. and Yang, W. L. “Ruppeiner thermodynamic geometry for the Schwarzschild-AdS black hole,” Phys. Rev. D, 101(2):024018, 2020. [arXiv:1910.12182].
* [76] Wang, C.; Yin, S. P.; Xu, Z. M.; Wu, B.; Yang, W. L. “Ruppeiner geometry and the fluctuation of the RN-AdS black hole in framework of the extensive thermodynamics,” [arXiv:2210.08822].
* [77] Ruppeiner, G. “Application of Riemannian geometry to the thermodynamics of a simple fluctuating magnetic system,” Phys. Rev. A, 24(1):488, 1981.
* [78] Ruppeiner, G. “Riemannian geometry in thermodynamic fluctuation theory,” Rev. of Mod. Phys., 67(3):605, 1995.
* [79] Weinhold, F. “Metric geometry of equilibrium thermodynamics,” J. Chem. Phys., 63(6):2479–2483, 1975.
* [80] Weinhold, F. “Classical and geometrical theory of chemical and phase thermodynamics,” John Wiley & Sons, 2009. ISBN: 978-0-470-40236-8
* [81] Custódio, P. S.; Horvath, J. E. “Thermodynamics of black holes in a finite box,” American J. Phys. 71, 1237, 2003. [arXiv:gr-qc/0302079].
* [82] Witek, H.; Cardoso, V.; Gualtieri, L.; Herdeiro, C.; Nerozzi, A.; Sperhake, U.; Zilhão, M. “Black holes in a box,” J. Phys.: Conf. Ser. 229, 012072, 2009.
* [83] Witek, H.; Cardoso, V.; Gualtieri, L.; Herdeiro, C.; Nerozzi, A.; Sperhake, U.; Zilhão, M. “Black holes in a box: towards the numerical evolution of black holes in AdS,” Phys. Rev. D. 82, 104037, 2010. [arXiv:1004.4633].
* [84] Cendes, Y. et al. “A mildly relativistic outflow launched two years after disruption in tidal disruption event AT2018hyz,” APJ 938, 28, 2022. [arXiv:2206.14297].
# Rethinking Disparity: A Depth Range Free Multi-View Stereo Based on
Disparity
Qingsong Yan,1,2 Qiang Wang,3 Kaiyong Zhao,4 Bo Li,2 Xiaowen Chu,2,5 Fei Deng†,1
†Corresponding author
###### Abstract
Existing learning-based multi-view stereo (MVS) methods rely on the depth
range to build the 3D cost volume and may fail when the range is too large or
unreliable. To address this problem, we propose a disparity-based MVS method
based on the epipolar disparity flow (E-flow), called DispMVS, which infers
the depth information from the pixel movement between two views. The core of
DispMVS is to construct a 2D cost volume on the image plane along the epipolar
line between each pair (the reference image and one of several source images)
for pixel matching, and fuses the depths triangulated from each pair by
multi-view geometry to ensure multi-view consistency. To be robust, DispMVS
starts from a randomly initialized depth map and iteratively refines the depth
map with the help of the coarse-to-fine strategy. Experiments on DTUMVS and
Tanks&Temples datasets show that DispMVS is not sensitive to the depth range
and achieves state-of-the-art results with lower GPU memory.
## 1 Introduction
Multi-view stereo matching (MVS) is a core technique in 3D reconstruction that
has been extensively studied (Goesele et al. 2007; Furukawa and Ponce 2009;
Galliani et al. 2015; Schönberger et al. 2016). Although traditional methods
try to introduce additional constraints (Xu and Tao 2019; Romanoni and
Matteucci 2019; Xu et al. 2020; Wang et al. 2020; Xu and Tao 2020b) to deal
with textureless regions or repeated textures, they still have difficulty in
guaranteeing the generation of high-quality point clouds in many cases.
Figure 1: The influence of the depth range. This figure compares DispMVS with
two state-of-the-art methods, GBiNet (Mi, Di, and Xu 2022) and IterMVS (Wang
et al. 2022), and shows that DispMVS can still generate high-quality point
clouds when changing the depth range by Eq. 23. Unlike other methods that use
the depth range to build a 3D cost volume and estimate the depth map, DispMVS
only builds a 2D cost volume along the epipolar line and uses the multi-view
geometry to estimate the depth map.
Recently, learning-based methods have brought a new light to MVS. MVSNet (Yao
et al. 2018) presents a fully differentiable pipeline, which first uses a
convolutional neural network (CNN) to extract features from input images,
then splits the 3D space into several bins covering a certain depth range to
build a 3D cost volume by differentiable homography warping, and finally
relies on a 3D CNN to regress the depth map. Although MVSNet achieves impressive results on
several public benchmarks (Aanæs et al. 2016; Knapitsch et al. 2017), it is
not efficient and requires a lot of GPU memory to infer. Therefore, the
coarse-to-fine strategy (Gu et al. 2020; Yang et al. 2020; Cheng et al. 2020;
Mi, Di, and Xu 2022) and the recurrent network (Yao et al. 2019; Wei et al.
2021; Wang et al. 2021, 2022) are used to upgrade the MVSNet. The coarse-to-
fine strategy uses the coarse stage to recover a coarse depth map and reduces
the number of bins needed at the fine stage to reduce the GPU memory.
Furthermore, the recurrent network, such as LSTM (Shi et al. 2015) and GRU
(Cho et al. 2014), can directly replace the 3D CNN to process the 3D cost
volume with a lower GPU memory footprint during the inference. Since these
strategies may sacrifice accuracy due to the reduced search range and the lack
of global context, many studies have improved the design of loss functions and
feature processing modules. In terms of loss functions, there is not only the
regression-based method to guide continuous depth (Yao et al. 2018; Gu et al.
2020) but also the classification-based method to guide discrete depth through
3D cost volume (Yao et al. 2019; Mi, Di, and Xu 2022), or a mixture of them to
achieve better depth prediction quality (Peng et al. 2022). As for the feature
processing module, the deformation convolution (Wei et al. 2021), the
attention mechanism (Luo et al. 2020), and the Transformer (Zhu et al. 2021)
are used to improve the quality of image features. In addition, the visibility
information (Xu and Tao 2020c; Zhang et al. 2020) and the epipolar (Ma et al.
2021; Yang et al. 2022) are also used to boost the performance.
A critical issue of these existing methods is the fixed depth range used in
building the cost volume (Cheng et al. 2020; Mi, Di, and Xu 2022). The depth
range determines the 3D distribution of the cost volume that the network
attempts to fit, and the size of the cost volume is limited by computational
and memory capabilities, so these methods easily over-fit the configured
depth range. Fig. 1 shows that the quality of point clouds reconstructed by
two state-of-the-art methods, GBiNet (Mi, Di, and Xu 2022) and IterMVS (Wang
et al. 2022), degrades dramatically when the depth range is enlarged. The
reason is that these methods cannot capture enough matching information with
a fixed number of depth bins.
In this paper, we propose a new MVS pipeline, which allows the CNN to focus
only on the matching problem between two different views and relies on the
multi-view geometry to recover the depth by matching results. The
contributions of this paper are as follows.
* •
Instead of constructing the 3D cost volume, this paper only constructs the 2D
cost volume to match pixels between each pair and generates the depth map by
triangulation. In other words, DispMVS exploits the multi-view geometry to
reduce the burden of networks, which does not rely on the computationally
expensive 3D cost volume to estimate the depth.
* •
We redesign the flow to handle multi-view stereo matching without applying
stereo rectification to each pair. First, we propose the epipolar
disparity flow (E-flow) and reveal the relationship between the E-flow and the
depth. Then we extend E-flow from two-view to multi-view and iteratively
update the E-flow by a fused depth to maintain multi-view consistency.
* •
DispMVS achieves state-of-the-art results on DTUMVS and Tanks&Temples
without the 3D cost volume, demonstrating the effectiveness of combining
multi-view geometry with a learning-based approach.
## 2 Related Work
With decades of development of MVS, many traditional and learning-based
methods have been proposed. Traditional MVS cannot surpass the limitations of
artificially designed matching pipelines and fails to reconstruct
non-Lambertian regions. On the contrary, learning-based methods can
automatically find the most helpful information in a data-driven manner, and
the benchmarking results of (Aanæs et al. 2016; Knapitsch et al. 2017) show
that learning-based methods can easily outperform traditional ones.
Generally, the learning-based methods build a 3D cost volume, and we can
categorize them into 3D convolution-based and RNN-based methods according to
how they handle the 3D cost volume.
#### Traditional MVS
Traditional MVS has three major types: volumetric method, point cloud method,
and depth map method. The volumetric method (Seitz and Dyer 1999; Kutulakos
and Seitz 2000; Kostrikov, Horbert, and Leibe 2014) splits the space into
inside and outside and cuts out the surface. The point cloud method (Lhuillier
and Quan 2005; Goesele et al. 2007; Furukawa and Ponce 2009) reconstructs
dense point clouds from sparse point clouds. Although these methods can
reconstruct high-quality results, the volumetric and point cloud methods
require lots of GPU memory and are hard to parallelize. The depth map method
is the most popular, which separately reconstructs the depth map of each view
and fuses them to generate the point cloud (Merrell et al. 2007; Galliani et
al. 2015). Shen (Shen 2013) and Colmap (Zheng et al. 2014; Schönberger et al.
2016) extends the PatchMatch (Bleyer, Rhemann, and Rother 2011) to multi-view
and simultaneously estimates the normal and the depth. Meanwhile, Gipuma
(Galliani et al. 2015) and ACMM (Xu and Tao 2019) use a GPU to improve the
computational efficiency. The superpixel (Romanoni and Matteucci 2019), the
plane (Xu and Tao 2020b), and the mesh (Wang et al. 2020) are used to reduce
mismatching in non-Lambertian regions. Although these methods can achieve
stable results, they cannot surpass the limitation of hand-craft methods in a
challenging environment.
#### 3D Convolution Method
Various approaches have been proposed to address the shortcomings of MVSNet
(Yao et al. 2018).
A straightforward solution to reduce GPU memory is to build smaller 3D cost
volumes. Fast-MVSNet (Yu and Gao 2020) only calculates the 3D cost volume
on a sparse depth map and propagates the sparse depth map into a dense one.
CasMVSNet (Gu et al. 2020) and CVP-MVSNet (Yang et al. 2020) use a
coarse-to-fine strategy to deal with the 3D cost volume and reduce the
computation cost at high resolutions. GBiNet (Mi, Di, and Xu 2022) treats MVS as a
binary search problem and only builds the 3D cost volumes on the side with a
high probability of containing the depth.
Various methods have been proposed to improve the 3D cost volume quality. P-MVSNet
(Luo et al. 2019) uses the isotropic and anisotropic 3D convolution to
estimate the depth map. AttMVS (Luo et al. 2020) applies an attention mechanism
to the 3D cost volume to improve robustness. UCS-MVSNet (Cheng et al.
2020) uses the uncertainty as a guide to adjust the 3D cost volume. EPP-MVSNet
(Ma et al. 2021) proposes an epipolar-assembling module to enhance the 3D cost
volume. MVSTR (Zhu et al. 2021) relies on the 3D-geometry transformer
(Dosovitskiy et al. 2021) to obtain global context and 3D consistency. Also,
considering the occlusion, Vis-MVSNet (Zhang et al. 2020) and PVSNet (Xu and
Tao 2020c) introduce the visibility to filter out unreliable regions. Besides,
MVSCRF (Xue et al. 2019) uses a conditional random field to ensure the
smoothness of the depth map, and Uni-MVSNet (Peng et al. 2022) combines the
regression and classification by the unified focal loss.
Figure 2: The pipeline of DispMVS. After extracting features from input
images, DispMVS first uses a random depth map to initialize the E-flow between
each pair and then triangulates a new depth map by the E-flow that is updated
through a GRU module. Finally, with several iterations, DispMVS can
reconstruct a high-quality depth map.
In addition, unsupervised MVS has also achieved impressive results. Un-MVSNet
(Khot et al. 2019) uses photometric consistency as a guide to learn the depth map.
MVSNet2 (Dai et al. 2019) uses multi-view depth maps to filter out occlusion.
M3VSNet (Huang et al. 2021) uses the normal to ensure smoothness and improve
the depth map. JADAS (Xu et al. 2021) combines depth and semantics to improve
the depth map. RC-MVSNet (Chang et al. 2022) trains a NeRF (Mildenhall et al.
2020) to maintain rendering consistency to solve ambiguity correspondences
among views.
#### RNN Method
Instead of using the 3D CNN to process the 3D cost volume, the RNN-based
method uses the more efficient LSTM (Shi et al. 2015) or GRU (Cho et al.
2014). R-MVSNet (Yao et al. 2019) uses the GRU to regularize the cost volume
sequentially. RED-Net (Liu and Ji 2020) utilizes a 2D recurrent encoder-
decoder structure to process the cost volume based on the GRU module. D2HC-
RMVSNet (Yan et al. 2020) and AA-RMVSNet (Wei et al. 2021) propose hybrid
architectures that combine the LSTM and the UNet (Ronneberger, Fischer, and
Brox 2015). PatchMatchNet (Wang et al. 2021) introduces an end-to-end
PatchMatch (Bleyer, Rhemann, and Rother 2011) with adaptive propagation during
each iteration and achieves competitive performance with lower GPU memory.
Recently, IterMVS (Wang et al. 2022) uses RAFT (Teed and Deng 2020; Lipson,
Teed, and Deng 2021) as its backbone and iteratively updates the depth map
based on the GRU module.
## 3 Method
In this section, we introduce the details of the proposed method. The pipeline
of DispMVS is demonstrated in Fig. 2. Unlike other methods depending on the
depth range and differentiable homography warping to build the 3D cost volume,
we use a network to match pixels along the epipolar line and triangulate the
depth. Therefore, we first discuss the relationship between the flow, the
depth, and the E-flow. Then, we extend the E-flow to multi-view and explain
the details of DispMVS and the loss function.
### 3.1 Flow and Depth
Given a reference view $r$ and a source view $s$ with their intrinsic matrices
$K_{r},K_{s}$ and the relative extrinsic matrix $[R_{s},T_{s}]$, we define
$d_{r},d_{s}$ as the depth, and $\vec{f}_{r\to s},\vec{f}_{s\to r}$ as the
flow of each view. Assuming that the scene is static, we can convert depth to
flow and vice versa according to the multi-view geometry (Hartley and
Zisserman 2003).
Depth The depth describes the 3D shape of an image and can re-project a pixel
on the image plane to 3D space. Eq. 1 re-projects a pixel $p_{r}$ in $r$ to
$P_{p_{r}}$ in 3D by its depth $d_{r}(p_{r})$, in which $\tilde{p}_{r}$ is the
homogeneous representation of $p_{r}$. $P_{p_{r}}$
can also be projected to $s$ by Eq. 2.
$\displaystyle P_{p_{r}}$ $\displaystyle=d_{r}(p_{r})K_{r}^{-1}\tilde{p}_{r}$
(1) $\displaystyle p_{s}$ $\displaystyle\simeq K_{s}(R_{s}P_{p_{r}}+T_{s})$
(2)
Flow The flow describes the movement of pixels on the image plane between two
images. For a matched pixel pair $p_{r}$ in $r$ and $p_{s}$ in $s$, we
calculate the flow $\vec{f}_{r\to s}(p_{r})$ by Eq. 3. Generally, the flow does
not need to follow geometry constraints and has two degrees of freedom.
$\displaystyle\vec{f}_{r\to s}(p_{r})=p_{s}-p_{r}$ (3)
Depth to Flow Eq. 4 shows how to convert $d_{r}(p_{r})$ to
$\vec{f}_{r\to s}(p_{r})$, where $\Rightarrow$ denotes the conversion. We first re-
project $p_{r}$ to $P_{p_{r}}$ by $d_{r}(p_{r})$ as Eq. 1 shows and then
project $P_{p_{r}}$ to $s$ by Eq. 2 to get the matched pixel $p_{s}$. Finally,
we can calculate $\vec{f}_{r\to s}(p_{r})$ by Eq. 3.
$\displaystyle d_{r}(p_{r})\Rightarrow P_{p_{r}}\Rightarrow
p_{s}\Rightarrow\vec{f}_{r\to s}(p_{r})$ (4)
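As a concrete illustration, the depth-to-flow chain of Eq. 4 can be sketched for a single pixel with NumPy (a minimal sketch; the function name and plain-array interface are ours, not the authors' code):

```python
import numpy as np

def depth_to_flow(p_r, d_r, K_r, K_s, R_s, T_s):
    """Convert the depth of reference pixel p_r into its flow toward the
    source view, following the chain of Eq. 4 (hypothetical helper)."""
    p_tilde = np.array([p_r[0], p_r[1], 1.0])     # homogeneous pixel
    P = d_r * (np.linalg.inv(K_r) @ p_tilde)      # Eq. 1: back-project to 3D
    q = K_s @ (R_s @ P + T_s)                     # Eq. 2: project into source view
    p_s = q[:2] / q[2]                            # dehomogenize
    return p_s - np.asarray(p_r, dtype=float)     # Eq. 3: flow vector
```

With identical cameras and an in-plane translation, the flow magnitude shrinks as the depth grows, which is the usual parallax behavior.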
Flow to Depth Although triangulation is a straightforward method to convert
$\vec{f}_{r\to s}$ to $d_{r}$ (Hartley and Zisserman 2003), it has to solve a
non-differentiable homogeneous linear system. Considering this, we use a
differentiable closed-form solution to calculate the depth, even though it is
not optimal. Given $p_{r}$ and $\vec{f}_{r\to s}(p_{r})$, we can determine $p_{s}$
by Eq. 3. Based on multi-view geometric consistency, and consistently with the
projection of Eq. 2, we have the constraint in Eq. 5:
$\displaystyle
d_{s}(p_{s})K_{s}^{-1}\tilde{p}_{s}=R_{s}d_{r}(p_{r})K_{r}^{-1}\tilde{p}_{r}+T_{s}$
(5)
Let $T_{s}=(t_{sx},t_{sy},t_{sz})^{T}$,
$R_{s}K_{r}^{-1}\tilde{p}_{r}=(p_{rx},p_{ry},p_{rz})^{T}$ and
$K_{s}^{-1}\tilde{p}_{s}=(p_{sx},p_{sy},p_{sz})^{T}$. Eliminating
$d_{s}(p_{s})$ from the component equations, we can calculate $d_{r}(p_{r})$ by Eq. 8:
$\displaystyle\left\\{\begin{array}[]{@{}l@{\quad}l}d_{xr}(p_{r})&=(t_{sz}p_{sx}-t_{sx}p_{sz})/(p_{rx}p_{sz}-p_{rz}p_{sx})\\\\[3.0pt]
d_{yr}(p_{r})&=(t_{sz}p_{sy}-t_{sy}p_{sz})/(p_{ry}p_{sz}-p_{rz}p_{sy})\end{array}\right.$
(8)
Eq. 8 shows that there are two ways to compute the depth, namely
$d_{xr}(p_{r})$ and $d_{yr}(p_{r})$, since $\vec{f}_{r\to s}(p_{r})$ is a 2D
vector that provides flow in $x$ dimension $\vec{f}_{xr\to xs}(p_{r})$ and $y$
dimension $\vec{f}_{yr\to ys}(p_{r})$. Theoretically, $d_{xr}(p_{r})$ equals
$d_{yr}(p_{r})$. However, a smaller flow component is numerically less stable
and will bring noise into the triangulation. Therefore, we select the depth
triangulated from the larger flow component by Eq. 11:
$\displaystyle
d_{r}(p_{r})=\left\\{\begin{array}[]{@{}l@{\quad}l}d_{xr}(p_{r})&\mbox{if
$|\vec{f}_{xr\to xs}(p_{r})|\geq|\vec{f}_{yr\to ys}(p_{r})|$}\\\\[3.0pt]
d_{yr}(p_{r})&\mbox{if $|\vec{f}_{xr\to xs}(p_{r})|<|\vec{f}_{yr\to
ys}(p_{r})|$}\end{array}\right.$ (11)
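The flow-to-depth direction can be sketched in the same style: eliminate $d_{s}$ from the per-component multi-view constraint and keep the estimate from the larger flow component, as in Eq. 11. Names and the plain-array interface are ours, not the released implementation:

```python
import numpy as np

def flow_to_depth(p_r, flow, K_r, K_s, R_s, T_s):
    """Differentiable closed-form depth from a 2D flow vector
    (hypothetical helper, static-scene assumption)."""
    p_s = np.asarray(p_r, dtype=float) + np.asarray(flow, dtype=float)  # Eq. 3
    # Ray directions entering the multi-view consistency constraint:
    c = R_s @ np.linalg.inv(K_r) @ np.array([p_r[0], p_r[1], 1.0])
    b = np.linalg.inv(K_s) @ np.array([p_s[0], p_s[1], 1.0])
    t = np.asarray(T_s, dtype=float)
    # Eliminate d_s using the x/z and y/z component pairs:
    d_x = (t[2] * b[0] - t[0] * b[2]) / (c[0] * b[2] - c[2] * b[0])
    d_y = (t[2] * b[1] - t[1] * b[2]) / (c[1] * b[2] - c[2] * b[1])
    # Eq. 11: trust the depth triangulated from the larger flow component.
    return d_x if abs(flow[0]) >= abs(flow[1]) else d_y
```

Round-tripping a depth through the depth-to-flow conversion and back recovers the original value, up to numerical noise.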
Figure 3: Depth ($d_{p_{r}}$), flow ($\vec{f}_{r\to s}(p_{r})$) and E-flow
($e_{r\to s}(p_{r})$). We draw $p_{s}$ in $r$ to visualize the flow and
E-flow. $d_{p_{r}}$ is the depth of $p_{r}$. $\vec{f}_{r\to s}(p_{r})$ and
$e_{r\to s}(p_{r})$ describe the way $p_{r}$ moves. In static scenes,
$d_{p_{r}}$, $\vec{f}_{r\to s}(p_{r})$ and $e_{r\to s}(p_{r})$ are all
correlated and can be converted to each other based on multi-view geometry.
### 3.2 E-flow: the epipolar disparity flow
The flow describes the movement of a pixel on the image plane but does not
obey the epipolar geometry, which introduces ambiguity when triangulating the
depth. Therefore, we use the epipolar geometry to constrain the flow and define
normalized direction vector of the epipolar line, and $\cdot$ is the dot
product of vectors.
$\displaystyle e_{r\to s}(p_{r})=\vec{e}_{dir}(p_{r})\cdot(p_{s}-p_{r})$ (12)
E-flow and Flow Compared with the flow, the E-flow is a scalar that restricts
the pixel movement to the epipolar line. In a static scene, the flow and the E-flow are two
different ways to describe pixel movement, and their relationship is shown in
Eq. 13. Fig. 3 visualizes the flow $\vec{f}_{r\to s}(p_{r})$ and the E-flow
$e_{r\to s}(p_{r})$ of pixel $p_{r}$.
$\displaystyle\vec{f}_{r\to s}(p_{r})=\vec{e}_{dir}(p_{r})e_{r\to s}(p_{r})$ (13)
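Eqs. 12 and 13 amount to projecting the flow onto, and reconstructing it from, the epipolar direction. A minimal sketch (the function names are ours, and `e_dir` is assumed to be unit-length):

```python
import numpy as np

def flow_to_eflow(e_dir, p_r, p_s):
    """Eq. 12: project the 2D flow onto the (unit) epipolar direction."""
    return float(np.dot(e_dir, np.asarray(p_s) - np.asarray(p_r)))

def eflow_to_flow(e_dir, e):
    """Eq. 13: recover the 2D flow from the scalar E-flow."""
    return e * np.asarray(e_dir)
```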
E-flow and Depth Considering the relationship between the E-flow and the flow,
and the relationship between the flow and the depth, we can convert E-flow to
depth, and vice versa by Eq. 14, in which $\Leftrightarrow$ denotes the
interconversion.
$\displaystyle e_{r\to s}\Leftrightarrow\vec{f}_{r\to s}\Leftrightarrow d_{r}$
(14)
Figure 4: Different Sampling Spaces. (a) shows the sampling space of existing
methods that evenly samples in the 3D space to build a 3D cost volume. (b)
shows the sampling space of DispMVS that evenly samples on each source image
plane and estimates the depth by triangulation, whose distribution is decided
by the relative poses of the source images.
### 3.3 DispMVS
Given a reference image $r$ and $N$ source images $s_{i}$ $(1\leq i\leq N)$, DispMVS
firstly extracts features from all input images and then iteratively updates
the depth from random initialization. DispMVS separately estimates the E-flow
of each pair by a 2D cost volume and converts the E-flow to the depth for
later multi-view depth fusion by a weighted sum. Fig. 2 shows the pipeline of
DispMVS with the first iteration at the coarse stage.
Feature Extraction Following RAFT (Teed and Deng 2020), we use two identical
encoders to extract features from the input images. One encoder extracts
matching features from all input images to calculate similarities, while the
other extracts context features from the reference image to iteratively update
the E-flow. As we apply a coarse-to-fine strategy to improve efficiency and
accuracy, these encoders use a UNet structure (Ronneberger, Fischer, and Brox
2015) to extract coarse feature maps $c_{r},c_{s_{i}}$ at $1/16$ resolution
for the coarse stage and fine feature maps $f_{r},f_{s_{i}}$ at $1/4$
resolution for the fine stage. We also use deformable convolutions (Dai et
al. 2017) in the decoder to capture more informative features.
Initialization of E-flow DispMVS relies on the E-flow to estimate the depth,
so it needs an initial depth as the starting point. We adopt the
initialization strategy from PatchMatch (Bleyer, Rhemann, and Rother 2011) and
initialize $d_{r,0}$ by Eq. 15, where $rand$ comes from a normal distribution,
and $(d_{min},d_{max})$ are the lower and upper bounds of the depth range. With
$d_{r,0}$, DispMVS can initialize the E-flow for each pair via Eq. 14; it is
zero for the first iteration. Fig. 2 shows how DispMVS reconstructs a coarse
depth from a random depth.
$\displaystyle\frac{1}{d_{r,0}}={rand\times(\frac{1}{d_{min}}-\frac{1}{d_{max}})+\frac{1}{d_{max}}}$
(15)
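The initialization of Eq. 15 can be sketched as below. Note one deliberate deviation: we sample `rand` uniformly in $[0,1)$ so the initial depth is guaranteed to lie inside $(d_{min},d_{max}]$, whereas the text states a normal distribution:

```python
import numpy as np

def init_inverse_depth(shape, d_min, d_max, rng=None):
    """Random initial depth map in inverse-depth space (Eq. 15).

    Assumption: `rand` is sampled uniformly in [0, 1) here, which keeps the
    initial depth inside (d_min, d_max]; the paper writes a normal
    distribution for `rand`.
    """
    rng = np.random.default_rng() if rng is None else rng
    rand = rng.random(shape)
    inv_d = rand * (1.0 / d_min - 1.0 / d_max) + 1.0 / d_max
    return 1.0 / inv_d
```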
E-flow Estimation After the initialization, DispMVS estimates $e_{s_{i}}$ of
each pair $(r,s_{i})$ by the GRU module. For $p_{r}$ and coarse feature
$c_{r}(p_{r})$, DispMVS uses $e_{r\to s_{i}}(p_{r})$ to find matched point
$p_{s_{i}}$ on $s_{i}$ and samples a feature from $c_{s_{i}}(p_{s_{i}})$. To
increase the receptive field, DispMVS applies average pooling $m_{s}$ times to
$c_{s_{i}}$ to generate features at different scales and evenly samples
$m_{p}$ points around $p_{s_{i}}$ along the epipolar line at each scale with a
spacing of one pixel. Then, DispMVS calculates the similarity
between $c_{r}(p_{r})$ and $m_{p}\times m_{s}$ features from $c_{s_{i}}$ by
Eq. 16 to build a 2D cost volume. In the end, DispMVS feeds the cost volume
and $e_{r\to s_{i}}(p_{r})$ to the GRU module to estimate a new $e_{r\to
s_{i}}(p_{r})$ and the weight $w_{s_{i}}(p_{r})$. In DispMVS, we set
$m_{s}=4,m_{p}=9$ at the coarse stage and $m_{s}=2,m_{p}=5$ at the fine stage.
Fig. 4 compares two different sampling spaces.
$\displaystyle
simi(c_{r}(p_{r}),c_{s_{i}}(p_{s_{i}}))=\sum_{0\leq j<D}c_{r}(p_{r})[j]\,c_{s_{i}}(p_{s_{i}})[j]$
(16)
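The similarity of Eq. 16 is a plain dot product over the $D$ feature channels; vectorized over all sampled candidates, it yields one slice of the 2D cost volume. A sketch with illustrative names:

```python
import numpy as np

def similarity(c_r, c_s_samples):
    """Eq. 16: dot-product similarity between the reference feature c_r
    (shape [D]) and the m_p * m_s candidate features sampled along the
    epipolar line (shape [m_p * m_s, D]). The result is the 2D cost-volume
    slice for one reference pixel."""
    return c_s_samples @ c_r
```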
Multi-View E-flow Estimation Since the E-flow only applies to two views,
DispMVS utilizes a weighted sum to extend the E-flow to the multi-view
situation. DispMVS converts $e_{r\to s_{i}}$ to $d_{r\to s_{i}}$ by Eq. 14 and
then fuses $d_{r}$ by Eq. 19, where $w_{s_{i}}$ is normalized by the softmax.
Note that DispMVS iteratively reconstructs the depth, so there are several
conversions between the depth and the E-flow, which further improves
multi-view consistency.
$\displaystyle\left\\{\begin{array}[]{@{}l@{\quad}l}w_{s_{i},t+1}&=softmax(w_{s_{i},t+1})\\\\[3.0pt]
d_{r,t+1}&=\sum_{1\leq i\leq N}{d_{r\to s_{i},t+1}\times
w_{s_{i},t+1}}\end{array}\right.$ (19)
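The fusion of Eq. 19 can be sketched as a softmax over the per-pair weights followed by a weighted sum; the array shapes and the function name are our assumptions:

```python
import numpy as np

def fuse_depths(depths, weights):
    """Weighted multi-view depth fusion (Eq. 19).

    depths, weights: arrays of shape [N, H, W], one slice per source pair.
    The weights are softmax-normalized over the N pairs, then the per-pair
    triangulated depths are summed."""
    w = np.exp(weights - weights.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)        # softmax over the N pairs
    return (depths * w).sum(axis=0)
```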
Coarse-to-Fine Strategy Following CasMVSNet (Gu et al. 2020), DispMVS uses a
coarse-to-fine strategy: $c_{r},c_{s_{i}}$ with $t_{c}$ iterations at the
coarse stage and $f_{r},f_{s_{i}}$ with $t_{f}$ iterations at the fine stage.
DispMVS starts from a random depth at the coarse stage and upsamples (Teed and
Deng 2020) the coarse depth to the fine stage for later refinement. Generally,
DispMVS uses more coarse iterations $t_{c}$ and fewer fine iterations $t_{f}$
to balance efficiency and accuracy.
Loss Function As DispMVS outputs a depth in each iteration, we calculate the
L1-loss for all depth maps to speed up convergence. To improve the stability
of the training procedure, we use the inverse depth range to normalize the
ground truth (gt) and the depth $d_{r,i}$. Eq. 20 shows our loss function, in
which $\gamma=0.9$:
$\displaystyle
loss=\sum_{j\in\{t_{c},t_{f}\}}{\sum_{0\leq i<j}{\gamma^{i}|norm(gt)-norm(d_{r,i})|}}$
(20)
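A sketch of the loss of Eq. 20 in normalized inverse-depth space, with `preds` standing in for the per-iteration depth maps of one stage; we follow the $\gamma^{i}$ weighting exactly as written:

```python
import numpy as np

def iteration_loss(gt, preds, d_min, d_max, gamma=0.9):
    """Exponentially weighted L1 loss over per-iteration depth maps (Eq. 20),
    computed after inverse-depth normalization of both gt and predictions."""
    norm = lambda d: (1.0 / d - 1.0 / d_max) / (1.0 / d_min - 1.0 / d_max)
    return sum(gamma ** i * np.abs(norm(gt) - norm(d)).mean()
               for i, d in enumerate(preds))
```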
## 4 Experiments
In this section, we benchmark our DispMVS on two public datasets and compare
it with a set of existing methods. We also conduct ablation experiments to
explore the effects of different settings of DispMVS.
Table 1: The evaluation results on DTUMVS (Aanæs et al. 2016). Lower is better
for Accuracy (Acc), Completeness (Comp), Overall, and Mem (GPU memory). We
split methods into three categories and highlight the best in bold for each.
Type | Method | Acc$\downarrow$ | Comp$\downarrow$ | Overall$\downarrow$ | Mem$\downarrow$
---|---|---|---|---|---
Trad | Comp (Campbell et al. 2008) | 0.835 | 0.554 | 0.695 | —
Furu (Furukawa and Ponce 2009) | 0.613 | 0.941 | 0.777 | —
Tola (Tola, Strecha, and Fua 2012) | 0.342 | 1.190 | 0.766 | —
Gipuma (Galliani et al. 2015) | 0.283 | 0.873 | 0.578 | —
Conv | MVSNet (Yao et al. 2018) | 0.396 | 0.527 | 0.462 | 9384M
Point-MVSNet (Chen et al. 2019) | 0.342 | 0.411 | 0.376 | —
P-MVSNet (Luo et al. 2019) | 0.406 | 0.434 | 0.420 | —
Fast-MVSNet (Yu and Gao 2020) | 0.336 | 0.403 | 0.370 | —
CVP-MVSNet (Yang et al. 2020) | 0.296 | 0.406 | 0.351 | —
Vis-MVSNet (Zhang et al. 2020) | 0.369 | 0.361 | 0.365 | 4775M
CIDER (Xu and Tao 2020a) | 0.417 | 0.437 | 0.427 | —
CasMVSNet (Gu et al. 2020) | 0.325 | 0.385 | 0.355 | 4591M
UCS-Net (Cheng et al. 2020) | 0.338 | 0.349 | 0.344 | 4057M
EPP-MVSNet (Ma et al. 2021) | 0.414 | 0.297 | 0.355 | —
UniMVSNet (Peng et al. 2022) | 0.352 | 0.278 | 0.315 | 3216M
GBiNet (Mi, Di, and Xu 2022) | 0.315 | 0.262 | 0.289 | 2018M
RNN | R-MVSNet (Yao et al. 2019) | 0.383 | 0.452 | 0.417 | —
D2HC-RMVSNet (Yan et al. 2020) | 0.395 | 0.378 | 0.386 | —
AA-RMVSNet (Wei et al. 2021) | 0.376 | 0.339 | 0.357 | 11973M
PatchMatchNet (Wang et al. 2021) | 0.427 | 0.277 | 0.352 | 1629M
IterMVS (Wang et al. 2022) | 0.373 | 0.354 | 0.363 | 842M
DispMVS (ours) | 0.354 | 0.324 | 0.339 | 1368M
Figure 5: Point clouds. We visualize some point clouds generated by DispMVS on
DTUMVS (Aanæs et al. 2016) and Tanks & Temple (Knapitsch et al. 2017).

Table 2: The evaluation results on Tanks & Temple (Knapitsch et al. 2017).
Higher scores indicate higher quality of the point cloud. We split methods
into two categories and highlight the best in bold for each.
Advanced | Intermediate
---|---
Method | Mean | Aud. | Bal. | Cou. | Mus. | Pal. | Tem. | Mean | Fam. | Fra. | Hor. | Lig. | M60 | Pan. | Pla. | Tra.
MVSNet (Yao et al. 2018) | - | - | - | - | - | - | - | 43.48 | 55.99 | 28.55 | 25.07 | 50.79 | 53.96 | 50.86 | 47.90 | 34.69
Point-MVSNet (Chen et al. 2019) | - | - | - | - | - | - | - | 48.27 | 61.79 | 41.15 | 34.20 | 50.79 | 51.97 | 50.85 | 52.38 | 43.06
UCS-Net (Cheng et al. 2020) | - | - | - | - | - | - | - | 54.83 | 76.09 | 53.16 | 43.03 | 54.00 | 55.60 | 51.49 | 57.38 | 47.89
Vis-MVSNet (Zhang et al. 2020) | 33.78 | 20.79 | 38.77 | 32.45 | 44.20 | 28.73 | 37.70 | 60.03 | 77.40 | 60.23 | 47.07 | 63.44 | 62.21 | 57.28 | 60.54 | 52.07
CasMVSNet (Gu et al. 2020) | 31.12 | 19.81 | 38.46 | 29.10 | 43.87 | 27.36 | 28.11 | 56.42 | 76.36 | 58.45 | 46.20 | 55.53 | 56.11 | 54.02 | 58.17 | 46.56
EPP-MVSNet (Ma et al. 2021) | 35.72 | 21.28 | 39.74 | 35.34 | 49.21 | 30.00 | 38.75 | 61.68 | 77.86 | 60.54 | 52.96 | 62.33 | 61.69 | 60.34 | 62.44 | 55.30
UniMVSNet (Peng et al. 2022) | 38.96 | 28.33 | 44.36 | 39.74 | 52.89 | 33.80 | 34.63 | 64.36 | 81.20 | 66.43 | 53.11 | 63.46 | 66.09 | 64.84 | 62.23 | 57.53
GBiNet (Mi, Di, and Xu 2022) | 37.32 | 29.77 | 42.12 | 36.30 | 47.69 | 31.11 | 36.93 | 61.42 | 79.77 | 67.69 | 51.81 | 61.25 | 60.37 | 55.87 | 60.67 | 53.89
R-MVSNet (Yao et al. 2019) | 24.91 | 12.55 | 29.09 | 25.06 | 38.68 | 19.14 | 24.96 | 48.40 | 69.96 | 46.65 | 32.59 | 42.95 | 51.88 | 48.80 | 52.00 | 42.38
D2HC-RMVSNet (Yan et al. 2020) | - | - | - | - | - | - | - | 59.20 | 74.69 | 56.04 | 49.42 | 60.08 | 59.81 | 59.61 | 60.04 | 53.92
AA-RMVSNet (Wei et al. 2021) | 33.53 | 20.96 | 40.15 | 32.05 | 46.01 | 29.28 | 32.71 | 61.51 | 77.77 | 59.53 | 51.53 | 64.02 | 64.05 | 59.47 | 60.85 | 54.90
PatchmatchNet (Wang et al. 2021) | 32.31 | 23.69 | 37.73 | 30.04 | 41.80 | 28.31 | 32.29 | 53.15 | 66.99 | 52.64 | 43.24 | 54.87 | 52.87 | 49.54 | 54.21 | 50.81
IterMVS (Wang et al. 2022) | 33.24 | 22.95 | 38.74 | 30.64 | 43.44 | 28.39 | 35.27 | 56.94 | 76.12 | 55.80 | 50.53 | 56.05 | 57.68 | 52.62 | 55.70 | 50.99
DispMVS (ours) | 34.90 | 26.09 | 38.01 | 33.19 | 44.90 | 28.49 | 38.75 | 59.07 | 74.73 | 60.67 | 54.13 | 59.58 | 58.02 | 53.39 | 58.63 | 53.42
#### Datasets
We use three different datasets throughout our experiments. The DTUMVS (Aanæs
et al. 2016) is an indoor dataset in a controlled environment containing 79
scenes for training, 22 for testing, and 18 for validation. The BlendedMVS
(Yao et al. 2020) is a large dataset captured from various outdoor scenes,
with 106 scenes for training and the remaining 7 scenes for testing. The
Tanks&Temple (Knapitsch et al. 2017) is an outdoor multi-view stereo benchmark
that contains 14 real-world scenes under complex conditions.
#### Implementation
We implement DispMVS by PyTorch (Paszke et al. 2019) and train two models on
the DTUMVS and the BlendedMVS separately. On the DTUMVS, we set the image
resolution to $640\times 512$ and $N=5$. On the BlendedMVS, we set the image
resolution to $768\times 576$ and $N=5$. For all models, we apply the training
strategy in PatchmatchNet (Wang et al. 2021) for better learning of the weight
and use the Adam (Kingma and Ba 2015)( $\beta_{1}=0.9,\beta_{2}=0.999$ )
optimizer with an initial learning rate of 0.0002 that halves every four
epochs for 16 epochs. The training procedure is finished on two V100 GPUs with
$t_{c}=8,t_{f}=2$ considering the GPU memory limitation. During the
evaluation, we filter and fuse all depth maps into point clouds to compare
with the ground truth.
### 4.1 Evaluation on DTUMVS
We evaluate the DTUMVS on the test part and resize all images to $1600\times
1152$ with $N=5$. Table 1 compares DispMVS with other state-of-the-art
methods. We split existing methods into traditional methods, 3D
convolution-based methods, and RNN-based methods. DispMVS has the best overall
score among RNN-based methods, 0.024 lower than IterMVS (Wang et al. 2022) and
0.018 lower than AA-RMVSNet (Wei et al. 2021). DispMVS ranks third among all
methods. GBiNet (Mi, Di, and Xu 2022) and UniMVSNet (Peng et al. 2022) are the
top two methods, but they incur much higher GPU memory. We visualize some
point clouds of DTUMVS generated by DispMVS in the first row of Fig. 5. These
qualitative and quantitative experimental results demonstrate the
effectiveness of DispMVS in obtaining depth by triangulation, even though
DispMVS does not construct any 3D cost volume.
### 4.2 Evaluation on Tanks&Temple
As Tanks&Temple does not provide training samples, we apply the model
pretrained on the BlendedMVS to it. We resize all images of Tanks&Temple to
$1600\times 1152$ and set $N=7$. Table 2 shows the results of learning-based
methods, split into 3D convolution-based and RNN-based. Tanks&Temple contains
two subsets, Intermedia and Advanced. DispMVS achieves the best mean score on
the Advanced subset among the RNN-based method. Although AA-RMVSNet (Wei et
al. 2021) outperforms DispMVS on the Intermedia subset, AA-RMVSNet uses nearly
ten times more GPU memory than DispMVS. Overall, the results on Tanks&Temple
demonstrate that DispMVS has robust generalization with a low amount of GPU
memory. The second row of Fig. 5 shows point clouds generated on the
Tanks&Temple by DispMVS.
### 4.3 Ablation Studies
In this subsection, we discuss the core parts of our method. Considering that
the RAFT structure has been thoroughly studied in (Teed and Deng 2020; Lipson,
Teed, and Deng 2021; Wang et al. 2022), we conduct ablation experiments on the
coarse-to-fine strategy, the random initialization, and the changes in depth
range. Throughout all ablation experiments, we use the DTUMVS as a baseline
dataset.
Coarse-to-Fine Strategy DispMVS-M first estimates the depth with features from
the $1/16$ resolution and refines the depth with features from the $1/4$
resolution. To make the comparison fairer, DispMVS-S only extracts features at
$1/8$ resolution to recover the depth map. Table 3 shows that the
coarse-to-fine strategy dramatically improves the overall score from 0.455 for
DispMVS-S to 0.339 for DispMVS-M. Therefore, we choose DispMVS-M as the method
used in this paper.
Random Initialization Unlike existing methods that build the 3D cost volume
from a known depth range, DispMVS starts from a random initial depth, which
means that the input of DispMVS can differ between runs. To measure the effect
of the random initialization, we run the initialization three times without
fixing the random seed and evaluate the resulting point clouds. Table 4 shows
that the variance of the metrics between runs is smaller than 1e-6, which
shows that DispMVS is robust to the random initial depth and consistently
generates a high-quality depth map.
Table 3: Comparison between single stage and multi stage. “-S” indicates
reconstruction with only one resolution at $1/8$, while “-M” indicates
reconstruction with multiple resolutions (the method in this paper).

Method | Acc$\downarrow$ | Comp$\downarrow$ | Overall$\downarrow$
---|---|---|---
DispMVS-S | 0.500 | 0.410 | 0.455
DispMVS-M | 0.354 | 0.324 | 0.339
Depth Range The existing MVS methods split a given depth range into several
bins and build a 3D cost volume by differentiable homography. However, DispMVS
is insensitive to the depth range as it only constructs a 2D cost volume on
the image plane along the epipolar line. We select two state-of-the-art
methods (GBiNet (Mi, Di, and Xu 2022) and IterMVS (Wang et al. 2022)) and
manually change the depth range by Eq. 23. All methods are trained on the same
dataset with the same depth range. Table 5 shows that the performance of
GBiNet and IterMVS decreases dramatically, while DispMVS remains robust to
these changes. Fig. 1 visualizes point clouds generated by different methods
with
different depth ranges, where GBiNet and IterMVS cannot converge when the
depth range is too large.
$\displaystyle
range_{x}=\left\\{\begin{array}[]{@{}l@{\quad}l}d_{min}&=d_{min}/x\\\\[3.0pt]
d_{max}&=d_{max}\times x\end{array}\right.$ (23)
### 4.4 Limitations
As DispMVS needs to keep building the 2D cost volume during the iterations,
its computational efficiency is relatively low. In our experiments, DispMVS
needs around 0.7 seconds to process a view on the DTUMVS. Compared with
IterMVS (Wang et al. 2022), which only needs around 0.3 seconds per view,
DispMVS would benefit from a more efficient epipolar matching module. In
addition, DispMVS needs around 48 GB of GPU memory during training, because it
runs several iterations of the GRU module to update the depth, which requires
saving all gradients and intermediate results.
Table 4: Evaluation of the random initialization. A lower variance means the
difference between the runs is smaller.

Method | Acc$\downarrow$ | Comp$\downarrow$ | Overall$\downarrow$
---|---|---|---
rand-1 | 0.353829 | 0.324110 | 0.338970
rand-2 | 0.354811 | 0.324946 | 0.339878
rand-3 | 0.354272 | 0.324324 | 0.339298
variance | 2.418e-7 | 1.890e-7 | 2.110e-7
Table 5: Influences of changing the depth range. The lower, the better for
all metrics under different depth ranges.
Range | Method | Acc$\downarrow$ | Comp$\downarrow$ | Overall$\downarrow$
---|---|---|---|---
$range_{1}$ | GBiNet (Mi, Di, and Xu 2022) | 0.315 | 0.262 | 0.289
IterMVS (Wang et al. 2022) | 0.373 | 0.354 | 0.363
DispMVS (ours) | 0.354 | 0.324 | 0.339
$range_{2}$ | GBiNet (Mi, Di, and Xu 2022) | 0.480 | 0.556 | 0.518
IterMVS (Wang et al. 2022) | 0.532 | 1.471 | 1.002
DispMVS (ours) | 0.348 | 0.404 | 0.376
$range_{3}$ | GBiNet (Mi, Di, and Xu 2022) | 0.618 | 1.303 | 0.960
IterMVS (Wang et al. 2022) | 0.935 | 6.985 | 3.960
DispMVS (ours) | 0.314 | 0.671 | 0.493
## 5 Conclusion
This paper introduces a new MVS pipeline, called DispMVS, which does not need
to build any 3D cost volume but instead triangulates the depth map by
multi-view geometry. DispMVS is depth-range invariant and generalizes to
datasets whose depth ranges differ from the training set, which suggests that
the 3D cost volume is not essential for MVS. Compared with existing
learning-based methods, DispMVS lets the network focus on pixel matching,
which CNNs are good at, and uses multi-view geometry to handle the geometric
information. Experiments show that DispMVS achieves results comparable to 3D
convolution-based methods and outperforms RNN-based methods with a lower GPU
memory requirement.
#### Acknowledgments
This work was supported in part by grant RMGS2021-8-10 from the Hong Kong
Research Matching Grant Scheme and the NVIDIA Academic Hardware Grant.
Supplementary material
## 1 Depth Normalization
To make our method more numerically stable, we apply a depth normalization by
Eq. 24, where we use the inverses of the minimum depth $d_{min}$ and the
maximum depth $d_{max}$ to normalize the input depth. The depth normalization
reduces the effect of outliers, especially in the multi-view depth fusion and
the loss function. In the multi-view depth fusion, we convert the epipolar
disparity flow (E-flow) to depth by triangulation, which may introduce
outliers when the E-flow is too small. As for the loss function, the
normalization makes it easier for the network to converge when the depth range
differs among datasets.
$\displaystyle
depth\\_norm=\frac{1/depth-1/depth_{max}}{1/depth_{min}-1/depth_{max}}$ (24)
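Eq. 24 in code form; the mapping sends $d_{min}\mapsto 1$ and $d_{max}\mapsto 0$:

```python
def depth_norm(depth, d_min, d_max):
    """Inverse-depth normalization of Eq. 24: depth d_min maps to 1,
    depth d_max maps to 0, and intermediate depths fall in between."""
    return (1.0 / depth - 1.0 / d_max) / (1.0 / d_min - 1.0 / d_max)
```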
## 2 GRU Update
To update the E-flow, we utilize two GRU modules at the coarse and fine stages
of DispMVS, with the same network structure as RAFT (Teed and Deng 2020). The
input and output of this module are shown in Eq. 29, where $f_{t},f_{r},f_{h}$
are 2D convolution networks and $W_{t},W_{r},W_{h}$ are the corresponding
parameters. After updating the E-flow, we triangulate the depth and fuse the
estimates in each iteration. Fig. 6 shows the depth maps generated by DispMVS
on DTU and Tanks&Temple, in which we can see that the depth map is recovered
from coarse to fine.
$\displaystyle\left\\{\begin{array}[]{@{}l@{\quad}l}z_{t}&=\sigma(f_{t}([h_{t-1},x_{t}],W_{t}))\\\\[3.0pt]
r_{t}&=\sigma(f_{r}([h_{t-1},x_{t}],W_{r}))\\\\[3.0pt]
\tilde{h}_{t}&=\tanh(f_{h}([r_{t}\odot h_{t-1},x_{t}],W_{h}))\\\\[3.0pt]
h_{t}&=(1-z_{t})\odot h_{t-1}+z_{t}\odot\tilde{h}_{t}\end{array}\right.$
(29)
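The gating of Eq. 29 can be sketched as follows; for brevity we replace the 2D convolutions $f_{t},f_{r},f_{h}$ with plain matrix multiplies, so this is the vanilla GRU cell rather than the ConvGRU actually used:

```python
import numpy as np

def gru_update(h_prev, x, W_t, W_r, W_h):
    """One GRU step following Eq. 29, with matrix multiplies standing in
    for the 2D convolutions f_t, f_r, f_h of the actual network."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    hx = np.concatenate([h_prev, x])
    z = sigmoid(W_t @ hx)                                 # update gate z_t
    r = sigmoid(W_r @ hx)                                 # reset gate r_t
    h_tilde = np.tanh(W_h @ np.concatenate([r * h_prev, x]))
    return (1.0 - z) * h_prev + z * h_tilde               # new hidden state
```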
## 3 Depth Upsampling
Following RAFT (Teed and Deng 2020), we upsample a coarse depth map to a fine
depth map through a convex combination of each pixel’s nine neighbors instead
of bilinear upsampling. Since this upsampling module learns the fusion
weights, it retains more detail and produces sharper edges at boundaries.
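A minimal NumPy sketch of this convex upsampling, assuming the learned weights have already been softmax-normalized over the nine neighbors; the tensor layout is our own choice, not RAFT's:

```python
import numpy as np

def convex_upsample(depth, weights, factor=4):
    """RAFT-style convex upsampling: each fine pixel is a convex combination
    of the 3x3 coarse neighborhood, with per-pixel learned weights.

    depth:   [H, W] coarse depth map
    weights: [H, W, factor, factor, 9], assumed normalized over the last axis
    returns: [H*factor, W*factor] fine depth map
    """
    H, W = depth.shape
    padded = np.pad(depth, 1, mode="edge")
    # gather the 3x3 neighborhood of every coarse pixel -> [H, W, 9]
    neigh = np.stack([padded[i:i + H, j:j + W]
                      for i in range(3) for j in range(3)], axis=-1)
    # convex combination per sub-pixel position -> [H, W, factor, factor]
    fine = np.einsum("hwk,hwabk->hwab", neigh, weights)
    # interleave the sub-pixels -> [H*factor, W*factor]
    return fine.transpose(0, 2, 1, 3).reshape(H * factor, W * factor)
```

With uniform weights this degenerates to local averaging; the learned weights are what sharpen the edges.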
## 4 Depth Fusion
DispMVS only predicts depth maps and needs to fuse them into point clouds for
further evaluation. The fusion filters the depth maps to remove inconsistent
regions and reduce the noise in the point cloud. Following MVSNet (Yao et al.
2018), we use geometric consistency to filter out inconsistent regions.
Geometric consistency contains two steps: the first step projects the depth
maps of the source images to the reference image to mask out regions with
large depth differences, and the second step reprojects the depth map of the
reference image to the source images to mask out regions with large pixel
displacements. Besides, DispMVS also uses the depth range to filter out points
that are out of bounds. After filtering out these inconsistent and invalid
regions, DispMVS reprojects pixels to 3D space to generate point clouds.
Fig. 7 and Fig. 8 show point clouds generated by DispMVS.
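The two-step consistency check can be sketched per pixel as follows; the threshold values and the nearest-neighbor depth lookup are illustrative simplifications, and $R_{s},T_{s}$ follow the source-to-reference convention of Eq. 5:

```python
import numpy as np

def consistent(p_r, d_r, depth_s, K_r, K_s, R_s, T_s,
               max_disp=1.0, max_rel_depth=0.01):
    """Geometric consistency check for one reference pixel: project it into
    the source view, look up the source depth, project back, and test both
    the pixel displacement and the relative depth error."""
    # reference pixel -> 3D point (reference frame) -> source frame
    X = d_r * (np.linalg.inv(K_r) @ np.array([p_r[0], p_r[1], 1.0]))
    Xs = R_s.T @ (X - T_s)
    if Xs[2] <= 0:
        return False                      # behind the source camera
    ps = K_s @ (Xs / Xs[2])
    u, v = int(np.round(ps[0])), int(np.round(ps[1]))
    if not (0 <= v < depth_s.shape[0] and 0 <= u < depth_s.shape[1]):
        return False                      # projects outside the source image
    # source pixel with its own depth -> 3D -> back into the reference view
    Xs2 = depth_s[v, u] * (np.linalg.inv(K_s) @ np.array([u, v, 1.0]))
    X2 = R_s @ Xs2 + T_s
    pr2 = K_r @ (X2 / X2[2])
    disp = np.hypot(pr2[0] - p_r[0], pr2[1] - p_r[1])
    return disp <= max_disp and abs(X2[2] - d_r) / d_r <= max_rel_depth
```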
Figure 6: Depth maps in each iteration on DTUMVS and Tanks&Temple. DispMVS
iteratively recovers the depth map. An interesting phenomenon is that DispMVS
cannot reconstruct the sky region, which is far from the camera and offers no
reliable matches, whereas it can reconstruct thin structures nearby.
Figure 7: Results on DTUMVS. We visualize the 22 test scenes.
Figure 8: Results on Tanks&Temple. We visualize all scenes in Intermediate and
Advanced.
## References
* Aanæs et al. (2016) Aanæs, H.; Jensen, R. R.; Vogiatzis, G.; Tola, E.; and Dahl, A. B. 2016. Large-scale data for multiple-view stereopsis. _International Journal of Computer Vision_ , 120(2): 153–168.
* Bleyer, Rhemann, and Rother (2011) Bleyer, M.; Rhemann, C.; and Rother, C. 2011. Patchmatch stereo-stereo matching with slanted support windows. In _Bmvc_ , volume 11, 1–11.
* Campbell et al. (2008) Campbell, N. D.; Vogiatzis, G.; Hernández, C.; and Cipolla, R. 2008. Using multiple hypotheses to improve depth-maps for multi-view stereo. In _European Conference on Computer Vision_ , 766–779. Springer.
* Chang et al. (2022) Chang, D.; Božič, A.; Zhang, T.; Yan, Q.; Chen, Y.; Süsstrunk, S.; and Nießner, M. 2022. RC-MVSNet: Unsupervised Multi-View Stereo with Neural Rendering. In _European conference on computer vision_.
* Chen et al. (2019) Chen, R.; Han, S.; Xu, J.; and Su, H. 2019. Point-based multi-view stereo network. In _Proceedings of the IEEE/CVF international conference on computer vision_ , 1538–1547.
* Cheng et al. (2020) Cheng, S.; Xu, Z.; Zhu, S.; Li, Z.; Li, L. E.; Ramamoorthi, R.; and Su, H. 2020\. Deep stereo using adaptive thin volume representation with uncertainty awareness. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2524–2534.
* Cho et al. (2014) Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; and Bengio, Y. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing_ , 1724–1734.
* Dai et al. (2017) Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; and Wei, Y. 2017. Deformable convolutional networks. In _Proceedings of the IEEE international conference on computer vision_ , 764–773.
* Dai et al. (2019) Dai, Y.; Zhu, Z.; Rao, Z.; and Li, B. 2019. Mvs2: Deep unsupervised multi-view stereo with multi-view symmetry. In _2019 International Conference on 3D Vision (3DV)_ , 1–8. Ieee.
* Dosovitskiy et al. (2021) Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2021\. An image is worth 16x16 words: Transformers for image recognition at scale. In _International Conference on Learning Representations_.
* Furukawa and Ponce (2009) Furukawa, Y.; and Ponce, J. 2009. Accurate, dense, and robust multiview stereopsis. _IEEE transactions on pattern analysis and machine intelligence_ , 32(8): 1362–1376.
* Galliani et al. (2015) Galliani, S.; et al. 2015. Massively parallel multiview stereopsis by surface normal diffusion. In _Proceedings of the IEEE International Conference on Computer Vision_ , 873–881.
* Goesele et al. (2007) Goesele, M.; Snavely, N.; Curless, B.; Hoppe, H.; and Seitz, S. M. 2007. Multi-view stereo for community photo collections. In _2007 IEEE 11th International Conference on Computer Vision_ , 1–8. IEEE.
* Gu et al. (2020) Gu, X.; Fan, Z.; Zhu, S.; Dai, Z.; Tan, F.; and Tan, P. 2020. Cascade cost volume for high-resolution multi-view stereo and stereo matching. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2495–2504.
* Hartley and Zisserman (2003) Hartley, R.; and Zisserman, A. 2003. _Multiple view geometry in computer vision_. Cambridge university press.
* Huang et al. (2021) Huang, B.; Yi, H.; Huang, C.; He, Y.; Liu, J.; and Liu, X. 2021. M3VSNet: Unsupervised multi-metric multi-view stereo network. In _2021 IEEE International Conference on Image Processing (ICIP)_ , 3163–3167. IEEE.
* Khot et al. (2019) Khot, T.; Agrawal, S.; Tulsiani, S.; Mertz, C.; Lucey, S.; and Hebert, M. 2019. Learning unsupervised multi-view stereopsis via robust photometric consistency. _arXiv preprint arXiv:1905.02706_.
* Kingma and Ba (2015) Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In _International Conference on Learning Representations_.
* Knapitsch et al. (2017) Knapitsch, A.; Park, J.; Zhou, Q.-Y.; and Koltun, V. 2017. Tanks and temples: Benchmarking large-scale scene reconstruction. _ACM Transactions on Graphics (ToG)_ , 36(4): 1–13.
* Kostrikov, Horbert, and Leibe (2014) Kostrikov, I.; Horbert, E.; and Leibe, B. 2014. Probabilistic labeling cost for high-accuracy multi-view reconstruction. In _Proceedings of the ieee conference on computer vision and pattern recognition_ , 1534–1541.
* Kutulakos and Seitz (2000) Kutulakos, K. N.; and Seitz, S. M. 2000. A theory of shape by space carving. _International journal of computer vision_ , 38: 199–218.
* Lhuillier and Quan (2005) Lhuillier, M.; and Quan, L. 2005. A quasi-dense approach to surface reconstruction from uncalibrated images. _IEEE transactions on pattern analysis and machine intelligence_ , 27: 418–433.
* Lipson, Teed, and Deng (2021) Lipson, L.; Teed, Z.; and Deng, J. 2021. Raft-stereo: Multilevel recurrent field transforms for stereo matching. In _2021 International Conference on 3D Vision (3DV)_ , 218–227. IEEE.
* Liu and Ji (2020) Liu, J.; and Ji, S. 2020. A novel recurrent encoder-decoder structure for large-scale multi-view stereo reconstruction from an open aerial dataset. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 6050–6059.
* Luo et al. (2019) Luo, K.; Guan, T.; Ju, L.; Huang, H.; and Luo, Y. 2019. P-mvsnet: Learning patch-wise matching confidence aggregation for multi-view stereo. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 10452–10461.
* Luo et al. (2020) Luo, K.; Guan, T.; Ju, L.; Wang, Y.; Chen, Z.; and Luo, Y. 2020. Attention-aware multi-view stereo. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 1590–1599.
* Ma et al. (2021) Ma, X.; Gong, Y.; Wang, Q.; Huang, J.; Chen, L.; and Yu, F. 2021. EPP-MVSNet: Epipolar-assembling based Depth Prediction for Multi-view Stereo. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 5732–5740.
* Merrell et al. (2007) Merrell, P.; Akbarzadeh, A.; Wang, L.; Mordohai, P.; Frahm, J.-M.; Yang, R.; Nistér, D.; and Pollefeys, M. 2007. Real-time visibility-based fusion of depth maps. In _2007 IEEE 11th International Conference on Computer Vision_ , 1–8. Ieee.
* Mi, Di, and Xu (2022) Mi, Z.; Di, C.; and Xu, D. 2022. Generalized Binary Search Network for Highly-Efficient Multi-View Stereo. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 12991–13000.
* Mildenhall et al. (2020) Mildenhall, B.; Srinivasan, P. P.; Tancik, M.; Barron, J. T.; Ramamoorthi, R.; and Ng, R. 2020. Nerf: Representing scenes as neural radiance fields for view synthesis. In _European conference on computer vision_ , 405–421. Springer.
* Paszke et al. (2019) Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. 2019. Pytorch: An imperative style, high-performance deep learning library. _Advances in neural information processing systems_ , 32.
* Peng et al. (2022) Peng, R.; Wang, R.; Wang, Z.; Lai, Y.; and Wang, R. 2022. Rethinking Depth Estimation for Multi-View Stereo: A Unified Representation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 8645–8654.
* Romanoni and Matteucci (2019) Romanoni, A.; and Matteucci, M. 2019. Tapa-mvs: Textureless-aware patchmatch multi-view stereo. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 10413–10422.
* Ronneberger, Fischer, and Brox (2015) Ronneberger, O.; Fischer, P.; and Brox, T. 2015. U-net: Convolutional networks for biomedical image segmentation. In _International Conference on Medical image computing and computer-assisted intervention_ , 234–241. Springer.
# Improved Smoothed Analysis of 2-Opt for the Euclidean TSP
Bodo Manthey <EMAIL_ADDRESS> Department of Applied Mathematics, University of Twente, Enschede, The Netherlands. Jesse van Rhijn <EMAIL_ADDRESS> (corresponding author; supported by NWO grant OCENW.KLEIN.176), Department of Applied Mathematics, University of Twente, Enschede, The Netherlands.
###### Abstract
The 2-opt heuristic is a simple local search heuristic for the Travelling
Salesperson Problem (TSP). Although it usually performs well in practice, its
worst-case running time is exponential in the number of cities. Attempts to
reconcile this difference between practice and theory have used smoothed
analysis, in which adversarial instances are perturbed probabilistically. We
are interested in the classical model of smoothed analysis for the Euclidean
TSP, in which the perturbations are Gaussian. This model was previously used
by Manthey & Veenstra, who obtained smoothed complexity bounds polynomial in
$n$, the dimension $d$, and the perturbation strength $\sigma^{-1}$. However,
their analysis only works for $d\geq 4$. The only previous analysis for $d\leq
3$ was performed by Englert, Röglin & Vöcking, who used a different
perturbation model which can be translated to Gaussian perturbations. Their
model yields bounds polynomial in $n$ and $\sigma^{-d}$, and super-exponential
in $d$. As the fact that no direct analysis exists for Gaussian perturbations
that yields polynomial bounds for all $d$ is somewhat unsatisfactory, we
perform this missing analysis. Along the way, we improve all existing smoothed
complexity bounds for Euclidean 2-opt with Gaussian perturbations.
Keywords: Travelling Salesperson Problem, local search, smoothed analysis
## 1 Introduction
The Travelling Salesperson problem is a standard combinatorial optimization
problem, which has attracted considerable interest from academic, educational
and industrial directions. It can be stated rather compactly: given a
Hamiltonian graph $G=(V,E)$ and edge weights $w:E\to\mathbb{R}$, find a
minimum weight Hamiltonian cycle (tour) on $G$.
Despite this apparent simplicity, the TSP is NP-hard [10]. A particularly
interesting variant of the TSP is the Euclidean TSP, in which the $n$ vertices
of the graph are identified with a point cloud in $\mathbb{R}^{d}$, and the
edge weights are the Euclidean distances between these points. Even this
restricted variant is NP-hard [14].
As a consequence of this hardness, practitioners often turn to heuristics. One
commonly used heuristic is 2-opt [1]. This heuristic takes as its input a tour
$T$, and finds two sets of two edges each, $\\{e_{1},e_{2}\\}\subseteq T$ and
$\\{f_{1},f_{2}\\}\nsubseteq T$, such that exchanging $\\{e_{1},e_{2}\\}$ for
$\\{f_{1},f_{2}\\}$ yields again a tour $T^{\prime}$, and the total weight of
$T^{\prime}$ is strictly less than the total weight of $T$. This procedure is
repeated with the new tour, and stops once no such edges exist. The resulting
tour is said to be locally optimal.
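For concreteness, this local search can be sketched in a few lines of Python (a brute-force illustration for small instances, not an implementation tuned for practice; all names are ours):

```python
import itertools
import math
import random

def two_opt(points, tour, eps=1e-12):
    """Run 2-opt until the tour is locally optimal; `tour` lists point indices.

    Exchanging the edges {t[i], t[i+1]} and {t[j], t[j+1]} for
    {t[i], t[j]} and {t[i+1], t[j+1]} amounts to reversing t[i+1..j].
    """
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(n), 2):
            if (i + 1) % n == j or (j + 1) % n == i:
                continue  # the two edges share a vertex; not a valid 2-change
            a, b = points[tour[i]], points[tour[(i + 1) % n]]
            c, d = points[tour[j]], points[tour[(j + 1) % n]]
            delta = (math.dist(a, b) + math.dist(c, d)
                     - math.dist(a, c) - math.dist(b, d))
            if delta > eps:  # strictly improving 2-change found
                tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                improved = True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(12)]
locally_optimal = two_opt(pts, list(range(12)))
```

On random instances this loop terminates quickly; the worst-case constructions discussed next are what make its running time exponential.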
Englert, Röglin and Vöcking constructed Euclidean TSP instances on which 2-opt
can take exponentially many steps to find a locally optimal tour [8]. Despite
this pessimistic result, 2-opt performs remarkably well in practice, usually
requiring time sub-quadratic in $n$ and obtaining tours which are only a few
percent worse than the optimum [1, chapter 8].
To explain this discrepancy, the tools of probabilistic analysis have proved
useful [13, 5, 7, 6, 8]. In particular, smoothed analysis, a hybrid framework
between worst-case and average-case analysis, has been successfully used in
the analysis of 2-opt [7, 8, 13]. In the original version of this framework,
the instances one considers are initially adversarial, and then perturbed by
Gaussians. The resulting smoothed time complexity is then generally a function
of the instance size $n$ and the standard deviation of the Gaussian
perturbations, $\sigma$.
Englert et al. obtained smoothed time complexity bounds for 2-opt on Euclidean
instances by considering a more general model, in which the points are chosen
in the unit hypercube according to arbitrary probability densities. The only
restrictions to these densities are that (i) they are independent, and (ii)
they are all bounded from above by $\phi$. Their results can be transferred to
Gaussian perturbations roughly by setting $\phi=\sigma^{-d}$, which yields a
smoothed complexity that is $O(\operatorname{poly}(n,\sigma^{-d}))$, ignoring
factors depending only on $d$.
As the exponential dependence on $d$ is somewhat unsatisfactory, Manthey &
Veenstra [13] performed a simpler smoothed analysis yielding bounds polynomial
in $n$, $1/\sigma$, and $d$. However, their analysis is limited to $d\geq 4$.
While polynomial bounds for all $d$ can be obtained by simply taking the
result of Englert et al. for $d\in\\{2,3\\}$, no smoothed analysis that
directly uses Gaussian perturbations exists for these cases. We set out to
perform this missing analysis, improving the smoothed complexity bounds for
all $d\geq 2$ along the way.
Our analysis combines ideas from both Englert et al. and Manthey & Veenstra.
From the former, we borrow the idea of conditioning on the outcomes of some of
the distances between points in an arbitrary 2-change. We can then analyze the
2-change by examining the angles between certain edges in the 2-change, which
are themselves random variables. From the latter, we borrow the Gaussian
perturbation model (originally introduced by Spielman & Teng for the Simplex
Method [15]).
We also note that in addition to improving the results of Manthey & Veenstra,
our approach is significantly simpler than the analysis of Englert et al. The
crux of the simplification is a carefully constructed random experiment to
model a single 2-change, which allows us to bypass the need for the involved
convolution integrals used by Englert et al.
We will begin by introducing some definitions and earlier results, before
providing basic probability-theoretic results (Section 2) that we will make
heavy use of throughout the paper. We then proceed by analyzing a single
2-change in a similar manner as Englert et al., simplifying some of their
analysis in the process (Section 3). Next, we prove a first smoothed
complexity bound by examining so-called linked pairs of 2-changes (Section 4),
an idea used by both Englert et al. and Manthey & Veenstra. Finally, we
improve on this bound for $d\geq 3$ (Section 5), yielding the best known
bounds for all dimensions.
## 2 Preliminaries
### 2.1 Travelling Salesperson Problem
Let $\mathcal{Y}\subseteq[-1,1]^{d}$ be a point set of size $n$. The Euclidean
Travelling Salesperson Problem (TSP) asks for a tour that visits each point
$y\in\mathcal{Y}$ exactly once, such that the total length of the tour is
minimized. The length of a tour in this variant of the TSP is the sum of the
Euclidean distances between consecutive points in the tour. Formally, if the
points in $\mathcal{Y}$ are visited in the order $T=(y_{\pi(i)})_{i=0}^{n-1}$
defined by a permutation $\pi$ of $[n]$, then the length of the tour $T$ is
$L(T)=\sum_{i=0}^{n-1}\|y_{\pi(i)}-y_{\pi(i+1)}\|,$
where the indices are taken modulo $n$, and $\|\cdot\|$ denotes the standard
Euclidean norm in $\mathbb{R}^{d}$. Since the Euclidean TSP is undirected, the
tour $T^{\prime}$ in which the vertices are visited in the reverse order has
the same length as $T$. We consider these tours to be identical.
### 2.2 Smoothed Analysis
Smoothed analysis is a framework for the analysis of algorithms, which was
introduced in 2004 by Spielman & Teng [15]. The method is particularly
suitable to algorithms with a fragile worst-case input [11]. Since its
introduction, the method has been applied to a wide variety of algorithms [12,
16].
Heuristically, one imagines that an adversary chooses an input to the
algorithm. The input is then perturbed in a probabilistic fashion. The hope is
that any particularly pathological instances that the adversary might choose
are destroyed by the random perturbation. One then computes a bound on the
expected number of steps that the algorithm performs, where the expectation is
taken with respect to the perturbation.
For our model of a smoothed TSP instance, we allow the adversary to choose a
point set $\mathcal{Y}\subseteq[-1,1]^{d}$ of size $n$. We then perturb each
point $y_{i}\in\mathcal{Y}$ with an independent $d$-dimensional Gaussian
random variable $g_{i}$, $i\in[n]$, with mean 0 and standard deviation
$\sigma$. This yields a new point set, $\mathcal{X}=\\{y_{i}+g_{i}\mid
y_{i}\in\mathcal{Y}\\}$. We will bound the expected number of steps taken by
the 2-opt heuristic on the TSP instance defined by $\mathcal{X}$, with the
expectation taken over this Gaussian perturbation. We will refer to this
quantity as the smoothed complexity of 2-opt.
For the purposes of our analysis, we always assume that $\sigma\leq 1$. This
is a mild restriction, as the bound for $\sigma=1$ also applies to all larger
values of $\sigma$, and small perturbations are particularly interesting in
smoothed analysis.
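The perturbation model just described can be sketched as follows (an illustrative helper; parameter names are ours):

```python
import random

def smoothed_instance(adversarial_points, sigma, seed=None):
    """Perturb each adversarial point y_i in [-1,1]^d by an independent
    d-dimensional Gaussian with mean 0 and standard deviation sigma,
    yielding the smoothed point set X = {y_i + g_i}."""
    rng = random.Random(seed)
    return [tuple(c + rng.gauss(0.0, sigma) for c in y)
            for y in adversarial_points]

adversary = [(-1.0, 1.0), (0.0, 0.0), (1.0, -1.0)]
smoothed = smoothed_instance(adversary, sigma=0.1, seed=42)
```

Setting `sigma=0` recovers the adversarial (worst-case) instance, while large `sigma` approaches a purely random instance.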
For a general outline of the strategy, consider a 2-change where the edges
$\\{a,z_{1}\\}$ and $\\{b,z_{2}\\}$ are replaced by $\\{a,z_{2}\\}$ and
$\\{b,z_{1}\\}$. The change in tour length of this 2-change is
$\Delta=\|a-z_{1}\|+\|b-z_{2}\|-\|a-z_{2}\|-\|b-z_{1}\|.$
Since the locations of the points $\\{a,b,z_{1},z_{2}\\}$ are random
variables, so is $\Delta$. We seek to bound the probability that there exists
a 2-change whose improvement is exceedingly small, enabling us to use a
potential argument.
Let $\Delta_{\mathrm{min}}$ denote the improvement of the least-improving
2-change in the instance. If $\mathbb{P}(\Delta_{\mathrm{min}}\leq\epsilon)$
is suitably small for small $\epsilon$, then each iteration is likely to
decrease the tour length by a large amount. As long as the initial tour has
bounded length, this then provides a limit to the number of iterations that
the heuristic can perform, since the tour length is bounded from below by 0.
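The quantity $\Delta$ and the least improvement $\Delta_{\mathrm{min}}$ can be computed by brute force on small instances (an illustrative sketch; the crossing-square example is ours):

```python
import itertools
import math

def min_improvement(points, tour):
    """Delta_min: the smallest strictly positive improvement over all
    2-changes of `tour`, where a 2-change replaces {a,z1}, {b,z2} by
    {a,z2}, {b,z1}.  Returns math.inf for a locally optimal tour."""
    n = len(tour)
    best = math.inf
    for i, j in itertools.combinations(range(n), 2):
        a, z1 = points[tour[i]], points[tour[(i + 1) % n]]
        z2, b = points[tour[j]], points[tour[(j + 1) % n]]
        delta = (math.dist(a, z1) + math.dist(b, z2)
                 - math.dist(a, z2) - math.dist(b, z1))
        if 0.0 < delta < best:  # degenerate pairs sharing a vertex give 0
            best = delta
    return best

# Four corners of the unit square, visited in a crossing order: the only
# improving 2-change uncrosses the two diagonals, gaining 2*sqrt(2) - 2.
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
delta_min = min_improvement(square, [0, 1, 2, 3])
```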
### 2.3 Basic Results
We state some general results that we will need at points throughout the
paper.
The next lemma provides a simple framework that we can use to prove smoothed
complexity bounds for 2-opt.
Let $\Delta_{\mathrm{min}}$ denote the smallest improvement of any 2-change,
and let $\Delta^{\mathrm{link}}_{\mathrm{min}}$ denote the smallest
improvement of any pair of linked 2-changes (see Section 4 for a definition of
linked pairs).
###### Lemma 1 ([13, Lemma 2.2]).
Suppose that the longest tour has a length of at most $L$ with probability at
least $1-1/n!$. Let $\alpha>1$ be a constant. If for all $\epsilon>0$ it holds
that
$\mathbb{P}(\Delta_{\mathrm{min}}\in(0,\epsilon])=O\mathopen{}\mathclose{{}\left(P\epsilon^{\alpha}}\right)$,
then the smoothed complexity of 2-opt is bounded from above by
$O(P^{1/\alpha}L)$. The same holds if we replace $\Delta_{\mathrm{min}}$ by
$\Delta_{\mathrm{min}}^{\mathrm{link}}$, provided that
$P^{1/\alpha}L=\Omega(n^{2})$.
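The mechanism behind Lemma 1 can be sketched as follows (a back-of-the-envelope potential argument that suppresses the conditioning on the tour-length event; it is not the proof from [13]):

```latex
% Each iteration shortens the tour by at least \Delta_{\mathrm{min}}, so the
% number of iterations T satisfies T \le L / \Delta_{\mathrm{min}}.  Hence
\mathbb{P}(T \geq t)
  \;\leq\; \mathbb{P}\!\left(\Delta_{\mathrm{min}} \leq \frac{L}{t}\right)
  \;=\; O\!\left(P \, \frac{L^{\alpha}}{t^{\alpha}}\right).
% Summing the tail and balancing the two regimes at t = \Theta(P^{1/\alpha} L):
\mathbb{E}(T) \;=\; \sum_{t \geq 1} \mathbb{P}(T \geq t)
  \;\leq\; \sum_{t \geq 1} \min\!\left\{1,\; O\!\left(P \, \frac{L^{\alpha}}{t^{\alpha}}\right)\right\}
  \;=\; O\!\left(P^{1/\alpha} L\right),
% where the tail sum converges because \alpha > 1.
```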
#### 2.3.1 Probability Theory
We provide some basic probability-theoretic results. Throughout the paper,
given a random variable $X$, we denote its probability density by $f_{X}$ and
its cumulative distribution function by $F_{X}$. If we furthermore condition
on some event $Y$, we write $f_{X|Y}$ for the conditional density of $X$ given
$Y$.
#### Chi Distributions
Suppose we are given two points $y_{1},y_{2}\in\mathcal{Y}$ and perturb both
points with independent Gaussian random variables $g_{1}$ and $g_{2}$,
resulting in $x_{i}=y_{i}+g_{i}$, $i\in[2]$. Then the distance
$\|x_{1}-x_{2}\|$ between the two perturbed points is distributed according to
a noncentral $d$-dimensional chi distribution with noncentrality parameter
$s=\|y_{1}-y_{2}\|$, which we denote $\chi_{d}^{s}$. We call $\chi_{d}^{0}$ a
central $d$-dimensional $\chi$ distribution. We have two useful expressions
for the chi distribution [9]:
$\displaystyle\chi_{d}^{s}(r)=\frac{e^{-\frac{r^{2}+s^{2}}{2\sigma^{2}}}\cdot\frac{r^{d-1}}{\sigma^{d}}}{(rs/\sigma^{2})^{d/2-1}}I_{d/2-1}\mathopen{}\mathclose{{}\left(\frac{rs}{\sigma^{2}}}\right)=e^{-\frac{s^{2}}{2\sigma^{2}}}\sum_{i=0}^{\infty}\frac{1}{i!}\mathopen{}\mathclose{{}\left(\frac{s^{2}}{2\sigma^{2}}}\right)^{i}\chi_{d+2i}(r),$
(1)
where $\chi_{d}(r)=\chi_{d}^{0}(r)$, the central chi distribution. Here,
$I_{\nu}(x)$ denotes the modified Bessel function of the first kind, of order
$\nu>-1/2$, defined as [2]
$\displaystyle
I_{\nu}(x)=\sum_{k=0}^{\infty}\frac{1}{k!\Gamma(k+\nu+1)}\mathopen{}\mathclose{{}\left(\frac{x}{2}}\right)^{2k+\nu}.$
(2)
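The series in Equation 2 converges quickly, which makes $I_{\nu}$ easy to evaluate numerically; the sketch below checks a truncated series against the standard closed form $I_{1/2}(x)=\sqrt{2/(\pi x)}\sinh x$, which is used here only for validation:

```python
import math

def bessel_i(nu, x, terms=60):
    """I_nu(x) via the series of Equation 2:
    sum_k (x/2)^(2k+nu) / (k! * Gamma(k+nu+1)); fast for moderate x."""
    return sum((x / 2.0) ** (2 * k + nu)
               / (math.factorial(k) * math.gamma(k + nu + 1))
               for k in range(terms))

# Closed form for half-integer order, used as an independent check.
closed_half = math.sqrt(2.0 / (math.pi * 2.0)) * math.sinh(2.0)
```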
#### General Results
In the following, we use the notion of stochastic dominance. Let $X$ and $Y$
be two real-valued random variables. We say that $X$ stochastically dominates
$Y$ if for all $x$, it holds that $\mathbb{P}(X\geq x)\geq\mathbb{P}(Y\geq
x)$, and this inequality is strict for some $x$. We may equivalently say that
the density of $X$ stochastically dominates the density of $Y$.
To use Lemma 1, we need to limit the probability that any TSP tour in our
smoothed instance is too long. This was previously done by Manthey & Veenstra;
we state their result in Lemma 2.
###### Lemma 2 ([13, Lemma 2.3]).
Let $c\geq 2$ be a sufficiently large constant, and let
$D=c\cdot(1+\sigma\sqrt{n\log n})$. Then
$\mathbb{P}(\mathcal{X}\nsubseteq[-D,D]^{d})\leq 1/n!$.
The next lemma is a reformulation of another result by Manthey & Veenstra
[13]. The lemma is very useful in conjunction with Lemma 4, as we will have
cause to condition on the outcome of drawing noncentral $d$-dimensional chi
random variables.
###### Lemma 3 ([13, Lemma 2.8]).
The noncentral $d$-dimensional chi distribution with parameter $\mu>0$ and
standard deviation $\sigma$ stochastically dominates the central
$d$-dimensional chi distribution with the same standard deviation.
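Lemma 3 can be illustrated (though of course not proved) by a small Monte Carlo experiment: sample quantiles of $\|\mu+g\|$ with $\|\mu\|=s>0$ should lie above those of $\|g\|$. The dimension, sample size, and seed below are arbitrary choices of ours.

```python
import math
import random

rng = random.Random(7)
d, sigma, s, n = 3, 1.0, 1.0, 20000

def central_sample():
    # ||g|| with g ~ N(0, sigma^2 I_d): a central d-dimensional chi draw
    return math.hypot(*(rng.gauss(0.0, sigma) for _ in range(d)))

def noncentral_sample():
    # ||mu + g|| with ||mu|| = s: a noncentral chi draw with parameter s
    g = [rng.gauss(0.0, sigma) for _ in range(d)]
    g[0] += s
    return math.hypot(*g)

central = sorted(central_sample() for _ in range(n))
noncentral = sorted(noncentral_sample() for _ in range(n))
```

Stochastic dominance is equivalent to the quantile function of the dominating variable lying above that of the dominated one, which is what the deciles of these samples exhibit.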
The following lemma from Manthey & Veenstra is slightly generalized compared
to its original statement. We do not provide a proof, since the original proof
remains valid when simply replacing the original assumption with ours.
###### Lemma 4 ([13, Lemma 2.7]).
Assume $c\in\mathbb{R}_{\geq 0}$ is a fixed constant and $d\in\mathbb{N}$ is
fixed and arbitrary with $d>c$. Let $\chi_{d}$ denote the $d$-dimensional chi
distribution with variance $\sigma^{2}$. Then
$\int_{0}^{\infty}\chi_{d}(x)x^{-c}\mathop{}\\!\mathrm{d}x=\Theta\mathopen{}\mathclose{{}\left(\frac{1}{d^{c/2}\sigma^{c}}}\right).$
### 2.4 Limiting the Adversary
In our analysis we will closely study the angles between edges in the smoothed
TSP instance. These angles can be initially specified to our detriment by the
adversary. However, the power of the adversary is limited by the strength of
the Gaussian perturbations. We quantify the power of the adversary in Theorem
5. See Figure 1 for a sketch accompanying the theorem.
Figure 1: The setting of Theorem 5. As mentioned in the proof of Theorem 5, we may assume without loss of generality that $\mu$ lies on $L$.
###### Theorem 5.
Let $L$ be some line in $\mathbb{R}^{d}$, and let $x\in L$. Let $y$ be a point
drawn from a $d$-dimensional Gaussian distribution with mean
$\mu\in\mathbb{R}^{d}$ and variance $\sigma^{2}$. Let $\phi$ denote the angle
between $L$ and $x-y$, and let $R=\|x-y\|$ and $s=\|x-\mu\|$. Let
$f_{\phi|R=r}$ denote the density of $\phi$, conditioned on a specific outcome
$r>0$ for $R$. Then for all $d\geq 2$,
$\sup_{\phi\in[0,\pi]}f_{\phi|R=r}(\phi)=O\mathopen{}\mathclose{{}\left(\sqrt{d}+\frac{\sqrt{rs}}{\sigma}}\right).$
Moreover, for $d\geq 3$,
$\sup_{\phi\in(0,\pi)}\frac{f_{\phi|R=r}(\phi)}{\sin\phi}=O\mathopen{}\mathclose{{}\left(\sqrt{d}+\frac{rs}{\sigma^{2}\sqrt{d}}}\right).$
Theorem 5 yields the following corollary, which provides information on the
angle between two Gaussian random points in $\mathbb{R}^{d}$ with respect to
some third point. This corollary is especially useful when analyzing 2-changes
in smoothed TSP instances.
###### Corollary 6.
Let $x\in\mathbb{R}^{d}$. Let $y$ and $z$ be drawn from $d$-dimensional
Gaussian distributions with arbitrary means and the same variance
$\sigma^{2}$. Let $\phi$ denote the angle between $y-x$ and $z-x$, and let
$R=\|x-y\|$ and $S=\|x-z\|$. Let $f_{\phi|R=r,S=s}$ denote the density of
$\phi$ conditioned on some outcome $r>0$ for $R$ and $s>0$ for $S$. Then for
all $d\geq 2$,
$\sup_{\phi\in[0,\pi]}f_{\phi|R=r,S=s}(\phi)=O\mathopen{}\mathclose{{}\left(\sqrt{d}+\frac{\sqrt{\min\\{r\bar{r},s\bar{s}\\}}}{\sigma}}\right),$
where $\bar{r}=\|x-\mathbb{E}(y)\|$ and $\bar{s}=\|x-\mathbb{E}(z)\|$.
Moreover, for $d\geq 3$,
$\sup_{\phi\in(0,\pi)}\frac{f_{\phi|R=r,S=s}(\phi)}{\sin\phi}=O\mathopen{}\mathclose{{}\left(\sqrt{d}+\frac{\min\\{r\bar{r},s\bar{s}\\}}{\sigma^{2}\sqrt{d}}}\right).$
###### Proof (assuming Theorem 5).
We denote the density of $\phi$ conditioned on $R=r$ and $S=s$ by
$f_{\phi|R=r,S=s}$. We perform a random experiment as follows.
If $r\leq s$, then we let an adversary determine the position of $z$, subject
to $S=s$. Subsequently, we draw the line $L$ through $x$ and $z$. Theorem 5
then yields a bound for $f_{\phi|R=r,S=s}$ of
$O(\sqrt{d}+\sqrt{r\bar{r}}/\sigma)$. The same process yields the bound for
$f_{\phi|R=r,S=s}(\phi)/\sin\phi$ when $d\geq 3$.
If $s\leq r$, then we use a similar argument, just swapping the roles of $y$
and $z$. This yields $O(\sqrt{d}+\sqrt{s\bar{s}}/\sigma)$.
Combining these two bounds yields the corollary. ∎
The remainder of this section is devoted to proving Theorem 5. Recall the
formulas for $\chi_{d}^{s}$, cf. Equation 1. During the proof of Theorem 5, we
will need to bound $\chi_{d}^{s}$ from below, for which we require some lower
bounds on $I_{\nu}$. We thus spend some time in this section proving such
bounds.
The following bound on $I_{\nu}$ holds for all $x\geq 0$ and $\nu>-1/2$; it
results from keeping only the $k=0$ term in Equation 2.
###### Lemma 7.
For all $x\geq 0$ and $\nu>-1/2$,
$I_{\nu}(x)\geq\frac{(x/2)^{\nu}}{\Gamma\mathopen{}\mathclose{{}\left(\nu+1}\right)}.$
As will become apparent during the proof of Theorem 5, the bound in Lemma 7 is
too weak for large values of $x$. We thus need a stronger bound for this
regime.
###### Lemma 8.
Given $x>1$ and $\nu\geq 0$, it holds that
$I_{\nu}(x)\geq c_{\nu}\cdot\frac{e^{x}}{\sqrt{x}},$
for some $c_{\nu}>0$ that depends only on $\nu$.
###### Proof.
First, suppose $\nu\geq 1/2$. Our starting point is the following integral
representation of $I_{\nu}$, which holds for $\nu>-1/2$ [2]:
$\displaystyle
I_{\nu}(x)=\frac{(x/2)^{\nu}}{\pi^{1/2}\Gamma(\nu+1/2)}\int_{-1}^{1}e^{xt}(1-t^{2})^{\nu-\frac{1}{2}}\mathop{}\\!\mathrm{d}t.$
(3)
Observe first that the factor in front of the integral is non-negative, as is
the integrand. We first restrict the domain of integration to $(1-1/x,1)$,
which is permissible as $x>1$. Next, we use the identity
$(1-t^{2})=(1-t)(1+t)$ to replace $(1-t^{2})^{\nu-1/2}$ in the integrand by
$(1-t)^{\nu-1/2}$. This yields a lower bound, since $t$ only takes positive
values over the restricted domain of integration, and $\nu\geq 1/2$.
Next, we substitute $u=1-t$, which yields
$\int_{0}^{1/x}e^{x(1-u)}u^{\nu-\frac{1}{2}}\mathop{}\\!\mathrm{d}u=e^{x}\int_{0}^{1/x}e^{-xu}u^{\nu-\frac{1}{2}}\mathop{}\\!\mathrm{d}u\geq
e^{x}\int_{0}^{1/x}(1-xu)u^{\nu-\frac{1}{2}}\mathop{}\\!\mathrm{d}u,$
making use of the standard inequality $e^{x}\geq 1+x$. Note that the integrand
remains non-negative for all values of $u$ over which we integrate. The
remaining integral evaluates to
$\displaystyle\int_{0}^{1/x}(1-xu)u^{\nu-\frac{1}{2}}\mathop{}\\!\mathrm{d}u$
$\displaystyle=\frac{1}{\nu+1/2}\frac{1}{x^{\nu+1/2}}-\frac{x}{\nu+3/2}\frac{1}{x^{\nu+3/2}}$
$\displaystyle=\mathopen{}\mathclose{{}\left(\frac{1}{\nu+1/2}-\frac{1}{\nu+3/2}}\right)x^{-\nu-1/2}.$
Thus, we are left with
$I_{\nu}(x)\geq\mathopen{}\mathclose{{}\left(\frac{1}{\nu+1/2}-\frac{1}{\nu+3/2}}\right)\frac{1}{2^{\nu}\sqrt{\pi}\Gamma(\nu+1/2)}\frac{e^{x}}{\sqrt{x}}.$
Letting $c_{\nu}$ be the entire prefactor of $e^{x}/\sqrt{x}$, we are done for
$\nu\geq 1/2$.
The case $\nu<1/2$ can be carried out analogously; however, rather than using
$1-t^{2}=(1+t)(1-t)\geq 1-t$, we instead use $1-t^{2}=(1+t)(1-t)\leq 2(1-t)$,
since $1-t^{2}$ now appears in the denominator of the integrand in Equation 3.
∎
While Lemma 8 is useful for large values of $x$ and constant $\nu$, it is too
weak for large values of $\nu$ due to the constant $c_{\nu}$. We can however
use it to obtain another bound, which we will use at a key step in the proof
of Theorem 5. First, we need the following lemma, which can be found as an
equation in a paper by Amos.
###### Lemma 9 ([3]).
For all $x>0$ and $\nu\geq 1$,
$\frac{I_{\nu}(x)}{I_{\nu-1}(x)}\geq\frac{\sqrt{x^{2}+\nu^{2}}-\nu}{x}.$
We can use this lemma recursively to bound $I_{\nu}$ from below for all
$\nu\geq 0$, with the base case given by Lemma 8.
###### Lemma 10.
There exists a constant $c>0$ such that, for all $x>1$ and $\nu\geq 0$,
$I_{\nu}(x)\geq
c\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\nu^{2}}-\nu}{x}}\right)^{\nu+\frac{1}{2}}\frac{e^{\sqrt{x^{2}+\nu^{2}}}}{\sqrt{x}}.$
###### Proof.
First, we assume $\nu\in\mathbb{N}$ for the sake of clarity; fractional $\nu$
and $\nu<1$ will be addressed at the end of the proof. We start by using Lemma
9. Applied iteratively, it yields
$I_{\nu}(x)\geq
I_{0}(x)\prod_{k=1}^{\nu}\frac{\sqrt{x^{2}+k^{2}}-k}{x}=x^{\nu}I_{0}(x)\prod_{k=1}^{\nu}\frac{1}{\sqrt{x^{2}+k^{2}}+k}.$
Equivalently,
$\frac{I_{0}(x)\cdot
x^{\nu}}{I_{\nu}(x)}\leq\prod_{k=1}^{\nu}(\sqrt{x^{2}+k^{2}}+k).$
To bound this product, we first take its logarithm to convert it to a sum:
$\ln\prod_{k=1}^{\nu}(\sqrt{x^{2}+k^{2}}+k)=\sum_{k=1}^{\nu}\ln(\sqrt{x^{2}+k^{2}}+k).$
It is tempting to now bound this sum by integrating the summand over
$[1,\nu+1]$, as the summand is monotone increasing in $k$. However, the
resulting bound turns out to be slightly too weak for our purposes. Instead,
we refine this by using the Euler-Maclaurin formula [4]. The formula states
that, for a function $f$ that is $p$-times continuously differentiable on
$[m,n]$,
$\sum_{i=m}^{n}f(i)=\int_{m}^{n}f(k)\mathop{}\\!\mathrm{d}k+\frac{f(n)+f(m)}{2}+\sum_{k=1}^{\lfloor
p/2\rfloor}\frac{B_{2k}}{(2k)!}(f^{(2k-1)}(n)-f^{(2k-1)}(m))+R_{p},$
where $B_{k}$ denotes the $k$th Bernoulli number with $B_{1}=\frac{1}{2}$, and
$R_{p}$ is a remainder term. The remainder can be bounded from above as [2]
$|R_{p}|\leq\frac{2\zeta(p)}{(2\pi)^{p}}\int_{m}^{n}|f^{(p)}(x)|\mathop{}\\!\mathrm{d}x,$
with $\zeta$ the Riemann zeta function. We apply this formula to
$f(k)=\ln(\sqrt{x^{2}+k^{2}}+k)$. It suffices to take $p=2$, so that we retain
only the first term of the sum. We have
$f^{\prime}(k)=\frac{1}{\sqrt{x^{2}+k^{2}}}.$
Observe that $f^{\prime\prime}(k)\leq 0$ for all $x,k\in\mathbb{R}$, so we
have $|f^{\prime\prime}(k)|=-f^{\prime\prime}(k)$. This enables us to write
the estimate for the remainder term as
$|R_{2}|\leq-\frac{2\zeta(2)}{4\pi^{2}}\int_{1}^{\nu}f^{\prime\prime}(k)\mathop{}\\!\mathrm{d}k=-\frac{1}{12}\mathopen{}\mathclose{{}\left(f^{\prime}(\nu)-f^{\prime}(1)}\right).$
Since $B_{2}=\frac{1}{6}$ [2], we obtain
$\displaystyle\sum_{k=1}^{\nu}f(k)$
$\displaystyle=\int_{1}^{\nu}f(k)\mathop{}\\!\mathrm{d}k+\frac{f(1)+f(\nu)}{2}+\frac{1}{12}(f^{\prime}(\nu)-f^{\prime}(1))+R_{p}$
$\displaystyle\leq\int_{1}^{\nu}f(k)\mathop{}\\!\mathrm{d}k+\frac{f(1)+f(\nu)}{2}+\frac{1}{6}\mathopen{}\mathclose{{}\left|f^{\prime}(\nu)-f^{\prime}(1)}\right|.$
The integral evaluates to
$\sqrt{1+x^{2}}-\sqrt{x^{2}+\nu^{2}}+\ln\mathopen{}\mathclose{{}\left(\frac{1}{1+\sqrt{1+x^{2}}}}\right)+\nu\ln\mathopen{}\mathclose{{}\left(\sqrt{x^{2}+\nu^{2}}+\nu}\right).$
Meanwhile, we have
$\frac{f(1)+f(\nu)}{2}=\ln\sqrt{1+\sqrt{1+x^{2}}}+\frac{1}{2}\ln(\sqrt{x^{2}+\nu^{2}}+\nu),$
and
$\mathopen{}\mathclose{{}\left|f^{\prime}(1)-f^{\prime}(\nu)}\right|=\frac{1}{\sqrt{x^{2}+1}}-\frac{1}{\sqrt{x^{2}+\nu^{2}}}\leq
1.$
Putting this all together,
$\sum_{k=1}^{\nu}\ln(\sqrt{x^{2}+k^{2}}+k)\leq\sqrt{1+x^{2}}-\sqrt{x^{2}+\nu^{2}}+\ln\mathopen{}\mathclose{{}\left(\frac{(\sqrt{x^{2}+\nu^{2}}+\nu)^{\nu+\frac{1}{2}}}{\sqrt{1+\sqrt{1+x^{2}}}}}\right)+1.$
Exponentiating, we find
$\displaystyle\frac{I_{0}(x)x^{\nu}}{I_{\nu}(x)}\leq
e\cdot\frac{e^{\sqrt{1+x^{2}}-\sqrt{x^{2}+\nu^{2}}}}{\sqrt{1+\sqrt{1+x^{2}}}}\mathopen{}\mathclose{{}\left(\sqrt{x^{2}+\nu^{2}}+\nu}\right)^{\nu+\frac{1}{2}}.$
Using that $1+\sqrt{1+x^{2}}\geq x$,
$\displaystyle I_{\nu}(x)$
$\displaystyle\geq\frac{1}{e}\cdot\mathopen{}\mathclose{{}\left(\frac{x}{\sqrt{x^{2}+\nu^{2}}+\nu}}\right)^{\nu+\frac{1}{2}}e^{\sqrt{x^{2}+\nu^{2}}-\sqrt{1+x^{2}}}I_{0}(x)$
$\displaystyle=\frac{1}{e}\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\nu^{2}}-\nu}{x}}\right)^{\nu+\frac{1}{2}}e^{\sqrt{x^{2}+\nu^{2}}-\sqrt{1+x^{2}}}I_{0}(x).$
To conclude the proof for integral $\nu$, we apply Lemma 8 for $\nu=0$ to
obtain $I_{0}(x)\geq c_{0}\cdot e^{x}/\sqrt{x}$, and observe that
$|\sqrt{1+x^{2}}-x|\leq 1$ for all $x\geq 0$.
For fractional $\nu$, one can follow the same proof, simply replacing $I_{0}$
by $I_{\nu^{\prime}}$ for some $\nu^{\prime}\in(0,1)$ throughout. Meanwhile,
for $\nu<1$, one can choose a suitable constant to match the bound from the
lemma statement to the bound from Lemma 8. ∎
The final piece of preparation for Theorem 5 is now the following inequality.
###### Lemma 11.
Let $x\geq 0$ and $y\geq 1$. Then
$\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+y^{2}}+y}{\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(y-\frac{1}{2}}\right)^{2}}+\mathopen{}\mathclose{{}\left(y-\frac{1}{2}}\right)}}\right)^{y}\leq
e.$
###### Proof.
Let $f(x,y)$ denote the function in brackets. We first show that $f$ is
nonincreasing in $x$. Observe that $f(x,y)$ is nonincreasing if and only if
$\ln f(x,y)$ is nonincreasing. We have
$\frac{\partial}{\partial
x}\ln\mathopen{}\mathclose{{}\left(\sqrt{x^{2}+y^{2}}+y}\right)=\frac{1}{\sqrt{x^{2}+y^{2}}+y}\cdot\frac{x}{\sqrt{x^{2}+y^{2}}}.$
Thus,
$\frac{\partial}{\partial x}\ln f(x,y)=y\cdot x\times\mathopen{}\mathclose{{}\left(\frac{1}{\sqrt{x^{2}+y^{2}}\mathopen{}\mathclose{{}\left(\sqrt{x^{2}+y^{2}}+y}\right)}-\frac{1}{\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(y-\frac{1}{2}}\right)^{2}}\mathopen{}\mathclose{{}\left(\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(y-\frac{1}{2}}\right)^{2}}+\mathopen{}\mathclose{{}\left(y-\frac{1}{2}}\right)}\right)}}\right).$
As the factor inside the parentheses is nonpositive and we assume $x\geq 0$
and $y\geq 1$, we see that $\ln f(x,y)$, and hence $f(x,y)$, is nonincreasing
in $x$.
We desire an upper bound for $f(x,y)^{y}$, so we set $x=0$:
$f(x,y)^{y}\leq
f(0,y)^{y}=\mathopen{}\mathclose{{}\left(\frac{y}{y-\frac{1}{2}}}\right)^{y}=\mathopen{}\mathclose{{}\left(\frac{1}{1-\frac{1}{2y}}}\right)^{y}\leq\mathopen{}\mathclose{{}\left(1+\frac{1}{y}}\right)^{y}\leq
e,$
where the penultimate inequality holds for $y\geq 1$. ∎
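The inequality of Lemma 11 is also easy to probe numerically on a grid over $x\geq 0$, $y\geq 1$ (a sanity check, not a substitute for the proof; grid points are ours):

```python
import math

def f_pow(x, y):
    """The bracketed ratio of Lemma 11, raised to the power y."""
    num = math.sqrt(x * x + y * y) + y
    den = math.sqrt(x * x + (y - 0.5) ** 2) + (y - 0.5)
    return (num / den) ** y

# The supremum over the region is attained at x = 0; e.g. f_pow(0, 1) == 2.
grid = [(x, y) for x in (0.0, 0.1, 1.0, 10.0, 100.0)
                for y in (1.0, 1.5, 2.0, 10.0, 1000.0)]
```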
We can now prove Theorem 5.
###### Proof of Theorem 5.
Observe that the upper bound on the density of $\phi$ is independent of the
orientation of the line $L$. Hence, we rotate $L$ about $x$ such that $L$
passes through $\mu$. We begin by proving the first part of the theorem.
Let $f_{Y}$ denote the density of $y$,
$f_{Y}(y)=\frac{1}{(2\pi)^{d/2}\sigma^{d}}e^{-\frac{\|y-\mu\|^{2}}{2\sigma^{2}}}.$
We center our coordinate system on $x$, and orient the $y_{1}$-axis along
$\mu-x$, so that $\mu=(s,0,\ldots,0)$. We then switch to spherical coordinates
$(r,\phi,\theta_{1},\ldots,\theta_{d-2})$, where
$\displaystyle y_{1}$ $\displaystyle=r\cos\phi,$ $\displaystyle y_{2}$
$\displaystyle=r\sin\phi\cos\theta_{1},$ $\displaystyle y_{3}$
$\displaystyle=r\sin\phi\sin\theta_{1}\cos\theta_{2},$ $\displaystyle\vdots$
$\displaystyle y_{d}$
$\displaystyle=r\sin\phi\sin\theta_{1}\ldots\sin\theta_{d-3}\sin\theta_{d-2}.$
Here, $r$ ranges from $0$ to $\infty$, $\theta_{d-2}$ ranges from $0$ to
$2\pi$, while all other angles range from $0$ to $\pi$. Due to the orientation
of our coordinate system, the coordinate angle $\phi$ corresponds to the
random variable $\phi$ from the theorem statement.
To compute the density of $\phi$ conditioned on $R=r$, we write
$f_{\phi|R=r}(\phi)=\frac{f_{\phi,R}(\phi,r)}{f_{R}(r)},$
where $f_{\phi,R}$ denotes the joint density of $\phi$ and $R$. We obtain this
density by integrating the density of $f_{Y}$ transformed to spherical
coordinates over $\theta_{1}$ through $\theta_{d-2}$. Meanwhile, $f_{R}$
denotes the density of $R$, which is a noncentral $d$-dimensional chi
distributed random variable with parameter $s$.
The joint density $f_{\phi,R}$ is
$\displaystyle
f_{\phi,R}(\phi,r)=\frac{1}{(2\pi)^{d/2}}\frac{r^{d-1}}{\sigma^{d}}e^{-\frac{r^{2}+s^{2}}{2\sigma^{2}}}e^{\frac{rs\cos\phi}{\sigma^{2}}}\sin^{d-2}\phi\int_{0}^{2\pi}\mathop{}\\!\mathrm{d}\theta\prod_{k=1}^{d-3}\int_{0}^{\pi}\sin^{k}\theta\mathop{}\\!\mathrm{d}\theta.$
It holds that, for $k\in\mathbb{N}$,
$\int_{0}^{\pi}\sin^{k}\theta\mathop{}\\!\mathrm{d}\theta=\frac{\sqrt{\pi}\Gamma\mathopen{}\mathclose{{}\left(\frac{k+1}{2}}\right)}{\Gamma\mathopen{}\mathclose{{}\left(\frac{k+2}{2}}\right)}.$
By telescoping, it follows that
$\prod_{k=1}^{d-3}\int_{0}^{\pi}\sin^{k}\theta\mathop{}\\!\mathrm{d}\theta=\pi^{\frac{d-3}{2}}\cdot\frac{\Gamma(1)}{\Gamma\mathopen{}\mathclose{{}\left(\frac{d-1}{2}}\right)}=\frac{\pi^{\frac{d-3}{2}}}{\Gamma\mathopen{}\mathclose{{}\left(\frac{d-1}{2}}\right)}.$
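Both the Wallis-type closed form and the telescoping product can be confirmed numerically (this is only a sanity check, not part of the argument; the function names are ours):

```python
import math

def wallis(k: int) -> float:
    # Closed form: integral of sin^k over [0, pi] equals
    # sqrt(pi) * Gamma((k+1)/2) / Gamma((k+2)/2)
    return math.sqrt(math.pi) * math.gamma((k + 1) / 2) / math.gamma((k + 2) / 2)

def wallis_numeric(k: int, n: int = 100000) -> float:
    # Midpoint rule for the same integral
    h = math.pi / n
    return sum(math.sin((j + 0.5) * h) ** k for j in range(n)) * h

for k in range(1, 8):
    assert abs(wallis(k) - wallis_numeric(k)) < 1e-6

# Telescoping product for d = 6: the product over k = 1, ..., d-3
# collapses to pi^((d-3)/2) / Gamma((d-1)/2)
d = 6
prod = 1.0
for k in range(1, d - 2):
    prod *= wallis(k)
assert abs(prod - math.pi ** ((d - 3) / 2) / math.gamma((d - 1) / 2)) < 1e-12
```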
Inserting this into our expression for $f_{\phi,R}$, we obtain
$f_{\phi,R}(\phi,r)\leq\frac{2^{1-\frac{d}{2}}}{\sqrt{\pi}}\frac{r^{d-1}}{\sigma^{d}}\frac{\sin^{d-2}\phi}{\Gamma\mathopen{}\mathclose{{}\left(\frac{d-1}{2}}\right)}e^{-\frac{r^{2}+s^{2}}{2\sigma^{2}}}e^{\frac{rs\cos\phi}{\sigma^{2}}}.$
Next, we use the expression for $f_{R}$ given in Equation 1. Combining this
with the above bound for $f_{\phi,R}$, we have
$f_{\phi|R=r}(\phi)\leq\frac{2^{1-\frac{d}{2}}}{\sqrt{\pi}}\frac{\sin^{d-2}\phi}{\Gamma\mathopen{}\mathclose{{}\left(\frac{d-1}{2}}\right)}\mathopen{}\mathclose{{}\left(\frac{rs}{\sigma^{2}}}\right)^{\frac{d}{2}-1}\frac{e^{\frac{rs\cos\phi}{\sigma^{2}}}}{I_{d/2-1}(rs/\sigma^{2})}.$
For brevity, let $x:=rs/\sigma^{2}$, and let $\nu:=d/2-1$. Then, up to a
constant, $f_{\phi|R=r}$ is bounded from above by
$\displaystyle\frac{x^{\nu}\sin^{2\nu}(\phi)e^{x\cos\phi}}{2^{\nu}\Gamma\mathopen{}\mathclose{{}\left(\nu+\frac{1}{2}}\right)I_{\nu}(x)}.$
(4)
For any fixed $x$ and $\nu$, Equation 4 is maximized when $\phi=\phi^{*}$,
where $\phi^{*}$ satisfies
$\displaystyle\sin^{2}\phi^{*}=\frac{2\nu}{x}\cos\phi^{*}.$ (5)
Obtaining this is a matter of ordinary calculus. This equation has a unique
solution in $[0,\pi]$, namely
$\displaystyle\phi^{*}=2\arctan\mathopen{}\mathclose{{}\left(\sqrt{\frac{\sqrt{\nu^{2}+x^{2}}-x}{\nu}}}\right)=2\arctan\mathopen{}\mathclose{{}\left(\sqrt{\sqrt{x^{2}/\nu^{2}+1}-x/\nu}}\right).$
(6)
It can also be verified that
$\displaystyle\cos\phi^{*}=\sqrt{1+\frac{\nu^{2}}{x^{2}}}-\frac{\nu}{x}=\frac{\sqrt{x^{2}+\nu^{2}}-\nu}{x}.$
(7)
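The closed form for $\phi^{*}$ and the cosine identity can be checked numerically against Equation 5 (a sanity check only; `phi_star` is our name for the right-hand side of Equation 6):

```python
import math

def phi_star(x: float, nu: float) -> float:
    # Equation 6: the unique maximiser of Equation 4 in [0, pi]
    return 2 * math.atan(math.sqrt((math.sqrt(nu**2 + x**2) - x) / nu))

for x in [0.5, 1.0, 3.0, 10.0]:
    for nu in [0.5, 1.0, 2.5, 7.0]:
        p = phi_star(x, nu)
        # Equation 5: sin^2(phi*) = (2 nu / x) cos(phi*)
        assert abs(math.sin(p) ** 2 - (2 * nu / x) * math.cos(p)) < 1e-9
        # Equation 7: cos(phi*) = (sqrt(x^2 + nu^2) - nu) / x
        assert abs(math.cos(p) - (math.sqrt(x**2 + nu**2) - nu) / x) < 1e-9
```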
Using this identity together with Equation 5 in Equation 4, we find
$\displaystyle f_{\phi|R=r}(\phi)$
$\displaystyle\leq\Theta(1)\cdot\frac{\nu^{\nu}}{\Gamma\mathopen{}\mathclose{{}\left(\nu+\frac{1}{2}}\right)}\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\nu^{2}}-\nu}{x}}\right)^{\nu}\cdot\frac{e^{x\mathopen{}\mathclose{{}\left(\sqrt{1+\frac{\nu^{2}}{x^{2}}}-\frac{\nu}{x}}\right)}}{I_{\nu}(x)}$
$\displaystyle=\Theta(1)\cdot\frac{(\nu/e)^{\nu}}{\Gamma\mathopen{}\mathclose{{}\left(\nu+\frac{1}{2}}\right)}\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\nu^{2}}-\nu}{x}}\right)^{\nu}\cdot\frac{e^{x\mathopen{}\mathclose{{}\left(\sqrt{1+\frac{\nu^{2}}{x^{2}}}}\right)}}{I_{\nu}(x)}$
$\displaystyle=\Theta(1)\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\nu^{2}}-\nu}{x}}\right)^{\nu}\cdot\frac{e^{\sqrt{x^{2}+\nu^{2}}}}{I_{\nu}(x)},$
since Stirling’s Formula yields $(\nu/e)^{\nu}/\Gamma(\nu+1/2)=\Theta(1)$.
We consider two cases, $x\leq 1$ and $x>1$.
##### Case 1: $x\leq 1$.
We apply Lemma 7 to Equation 4, and find an upper bound of
$O\mathopen{}\mathclose{{}\left(\frac{\Gamma(\nu+1)}{\Gamma(\nu+1/2)}}\right)=O(\sqrt{\nu}).$
##### Case 2: $x>1$.
We use Lemma 10, which yields
$\displaystyle f_{\phi|R=r}(\phi)$
$\displaystyle\leq\Theta(1)\cdot\sqrt{\frac{x}{\sqrt{x^{2}+\nu^{2}}-\nu}}\cdot\sqrt{x}=\Theta(1)\cdot\sqrt{\frac{\sqrt{x^{2}+\nu^{2}}+\nu}{x}}\cdot\sqrt{x}$
$\displaystyle=O\mathopen{}\mathclose{{}\left(\sqrt{x}+\sqrt{\nu}}\right).$
Inserting the definitions of $x$ and $\nu$ concludes the proof of the first
part.
Next, let $d\geq 3$, or equivalently, $\nu\geq\frac{1}{2}$. We assume $x>1$ in
the following; the case $x\leq 1$ simply follows from using Lemma 7 in
Equation 4 and dividing by $\sin\phi$.
To bound $f_{\phi|R=r}(\phi)/\sin\phi$, we follow mostly the same process. We
return once more to Equation 4, and divide by $\sin\phi$. For any fixed $x$
and $\nu$, the resulting equation is then maximized when $\phi=\phi^{*}$,
where $\phi^{*}$ satisfies
$\sin^{2}\phi^{*}=\frac{2\nu-1}{x}\cos\phi^{*}.$
The angle $\phi^{*}$ satisfies Equations 6 and 7, with $\nu$ replaced by
$\nu-\frac{1}{2}$. Inserting this in Equation 4 and working through the
algebra, we eventually obtain
$\frac{f_{\phi|R=r}(\phi)}{\sin\phi}\leq\Theta(1)\cdot\frac{\mathopen{}\mathclose{{}\left(\frac{\nu-\frac{1}{2}}{e}}\right)^{\nu-\frac{1}{2}}}{\Gamma\mathopen{}\mathclose{{}\left(\nu+\frac{1}{2}}\right)}\cdot\sqrt{x}\cdot\sqrt{\frac{\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)^{2}}+\nu-\frac{1}{2}}{x}}\\\
\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)^{2}}-\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)}{x}}\right)^{\nu}\cdot\frac{\exp\mathopen{}\mathclose{{}\left(\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)^{2}}}\right)}{I_{\nu}(x)}.$
Observe that for $\nu\geq\frac{1}{2}$, we have
$\frac{\mathopen{}\mathclose{{}\left(\frac{\nu-\frac{1}{2}}{e}}\right)^{\nu-\frac{1}{2}}}{\Gamma\mathopen{}\mathclose{{}\left(\nu+\frac{1}{2}}\right)}\leq\frac{(\nu/e)^{\nu-\frac{1}{2}}}{\Gamma\mathopen{}\mathclose{{}\left(\nu+\frac{1}{2}}\right)}=\frac{(\nu/e)^{\nu}}{\Gamma\mathopen{}\mathclose{{}\left(\nu+\frac{1}{2}}\right)}\cdot\sqrt{\frac{e}{\nu}}\in
O\mathopen{}\mathclose{{}\left(\frac{1}{\sqrt{\nu}}}\right).$
Since we assume $x>1$, we may apply Lemma 10 to find
$\frac{f_{\phi|R=r}(\phi)}{\sin\phi}\leq\Theta(1)\cdot\sqrt{\frac{x}{\nu}}\cdot\sqrt{\frac{\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)^{2}}+\nu-\frac{1}{2}}{x}}\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)^{2}}-\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)}{x}}\right)^{\nu}\\\
\cdot\sqrt{x}\cdot\mathopen{}\mathclose{{}\left(\frac{x}{\sqrt{x^{2}+\nu^{2}}-\nu}}\right)^{\nu+\frac{1}{2}}.$
Through some more elementary algebra, we can bound this (up to a constant) by
$\frac{\sqrt{x^{2}+\nu^{2}}+\nu}{\sqrt{\nu}}\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\nu^{2}}+\nu}{\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)^{2}}+\nu-\frac{1}{2}}}\right)^{\nu}.$
The first factor in this expression evaluates to $O(\sqrt{\nu}+x/\sqrt{\nu})$.
To conclude, we must show that
$\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\nu^{2}}+\nu}{\sqrt{x^{2}+\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)^{2}}+\mathopen{}\mathclose{{}\left(\nu-\frac{1}{2}}\right)}}\right)^{\nu}\in
O(1)$
for $\nu\in\\{1/2,1,3/2,\ldots\\}$. For $\nu=\frac{1}{2}$, we have
$\mathopen{}\mathclose{{}\left(\frac{\sqrt{x^{2}+\frac{1}{4}}+\frac{1}{2}}{x}}\right)^{\frac{1}{2}}\leq\sqrt{1+\frac{1}{x}}<\sqrt{2},$
where the latter inequality holds for $x>1$. For $\nu\geq 1$, we use Lemma 11
to bound the given quantity by $e$. This then proves the second part of the
theorem. ∎
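The $\nu=\frac{1}{2}$ estimate at the end of the proof is easy to confirm numerically (a sanity check only; `nu_half_ratio` is our name for the displayed quantity):

```python
import math

def nu_half_ratio(x: float) -> float:
    # ((sqrt(x^2 + 1/4) + 1/2) / x)^(1/2), the nu = 1/2 case above
    return math.sqrt((math.sqrt(x**2 + 0.25) + 0.5) / x)

# For x > 1 the ratio is bounded by sqrt(1 + 1/x), which is below sqrt(2).
for x in [1.0 + 1e-6, 1.5, 2.0, 10.0, 1e3]:
    assert nu_half_ratio(x) <= math.sqrt(1.0 + 1.0 / x) < math.sqrt(2.0)
```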
## 3 Analysis of Single 2-Changes
To improve upon the previous analyses, it pays to examine where the analysis
of Euclidean 2-opt with Gaussian perturbations [13] fails for $d\in\\{2,3\\}$.
The problem is that in the course of the proof, Manthey & Veenstra compute
$\int_{0}^{\infty}\frac{1}{x^{2}}\chi_{d-1}(x)\mathop{}\\!\mathrm{d}{x},$
where $\chi_{d}$ denotes the $d$-dimensional chi distribution. This integral
is finite only when $d\geq 4$.
This problem does not appear in the results obtained by Englert et al. [8].
They consider a more general model of smoothed analysis wherein the adversary
specifies a probability density for each point in the TSP instance
independently. Since the only information available on the probability
densities is their upper bound, they consider a simplified model of a 2-change
to keep the analysis tractable. The analysis is then translated to their
generic model, which incurs a factor that is super-exponential in $d$.
Even when one considers $d$ to be a constant as Englert et al. do, the
genericity of their model still comes at a cost when translated to a smoothed
analysis with Gaussian perturbations, eventually yielding a bound which is
polynomial in $\sigma^{-d}$.
Specifying the perturbations as Gaussian enables us to analyze the true random
experiment modeling a 2-change more closely, as we know the distributions of
the distances between points in the smoothed instance. Combined with Theorem
5, which provides information on the angles between edges in the instance, we
can carry out an analysis that improves on both Englert et al.’s as well as
Manthey & Veenstra’s result when we consider Gaussian perturbations.
Figure 2: Labels of points and angles involved in a single 2-change.
We first set up our model of a 2-change perturbed by Gaussian random
variables. To obtain a bound for this case, we first formulate a different
analysis of single 2-changes. Consider a 2-change involving the points
$\\{a,b,z_{1},z_{2}\\}\subseteq[-D,D]^{d}$, where the edges $\\{a,z_{1}\\}$
and $\\{b,z_{2}\\}$ are replaced by $\\{b,z_{1}\\}$ and $\\{a,z_{2}\\}$. The
improvement to the tour length due to this 2-change is
$\Delta=\|a-z_{1}\|-\|b-z_{1}\|+\|b-z_{2}\|-\|a-z_{2}\|.$
To analyze $\Delta$, we first define $A_{1}:=\|a-z_{1}\|$,
$A_{2}:=\|b-z_{2}\|$, and $R:=\|a-b\|$. Moreover, we identify the angle
$\phi_{1}$ as the angle between $a-z_{1}$ and $a-b$, and restrict it to
$[0,\pi]$. The corresponding angle $\phi_{2}$ is defined similarly. The
restriction of these angles to $[0,\pi]$ is without loss of generality; one
may readily observe from Figure 2 that flipping the sign of either $\phi_{1}$
or $\phi_{2}$ does not change the value of $\Delta$.
While Figure 2 may give the impression that we are restricting the analysis to
the $d=2$ case, the analysis is valid for any $d\geq 2$. The two triangles
$\triangle az_{1}b$ and $\triangle az_{2}b$ will lie in two separate planes in
general. The distances involved must thus be understood as $d$-dimensional
Euclidean distances.
With these definitions, we have $\Delta=\eta_{1}+\eta_{2}$, where for
$i\in[2]$
$\eta_{i}=A_{i}-\sqrt{A_{i}^{2}+R^{2}-2A_{i}R\cos\phi_{i}},$
which follows from the Law of Cosines.
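The decomposition $\Delta=\eta_{1}+\eta_{2}$ can be checked by Monte Carlo in any dimension; the snippet below (a sanity check, with our own helper names) compares the direct distance computation against the reconstruction from $(A_{1},A_{2},R,\phi_{1},\phi_{2})$ alone:

```python
import math
import random

def delta_direct(a, b, z1, z2):
    # Delta = ||a - z1|| - ||b - z1|| + ||b - z2|| - ||a - z2||
    return (math.dist(a, z1) - math.dist(b, z1)
            + math.dist(b, z2) - math.dist(a, z2))

def angle_at(p, q1, q2):
    # Angle at vertex p between the directions p -> q1 and p -> q2
    v1 = [x - y for x, y in zip(q1, p)]
    v2 = [x - y for x, y in zip(q2, p)]
    c = sum(u * v for u, v in zip(v1, v2)) / (math.hypot(*v1) * math.hypot(*v2))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against float noise

def delta_from_law_of_cosines(a, b, z1, z2):
    # Reconstruct Delta = eta_1 + eta_2 from (A_1, A_2, R, phi_1, phi_2) alone
    A1, A2, R = math.dist(a, z1), math.dist(b, z2), math.dist(a, b)
    phi1, phi2 = angle_at(a, z1, b), angle_at(b, z2, a)
    eta = lambda A, phi: A - math.sqrt(A**2 + R**2 - 2 * A * R * math.cos(phi))
    return eta(A1, phi1) + eta(A2, phi2)

random.seed(0)
d = 3
for _ in range(200):
    a, b, z1, z2 = ([random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(4))
    assert abs(delta_direct(a, b, z1, z2)
               - delta_from_law_of_cosines(a, b, z1, z2)) < 1e-9
```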
Suppose we condition on the events $A_{1}=a_{1}$, $A_{2}=a_{2}$, and $R=r$,
for some $a_{1},a_{2},r>0$. Under these events, $\eta_{1}$ and $\eta_{2}$ are
independent random variables. Moreover, $\Delta$ is completely fixed by
revealing the angles $\phi_{1}$ and $\phi_{2}$. Since we condition on
$A_{i}=a_{i}$ and $R=r$, we can then bound the density of $\phi_{i}$ using
Corollary 6.
We can use this independence to obtain bounds for
$\mathbb{P}(\Delta\in(0,\epsilon])$ for some small $\epsilon>0$ under these
events, for various orderings of $a_{1}$, $a_{2}$ and $r$. These bounds are
given in Lemma 15.
We begin by obtaining a bound to the density of $\eta_{i}$, $i\in[2]$, using
the fact that all randomness in $\eta_{i}$ is contained in the angle
$\phi_{i}$ under the conditioning that $A_{i}=a_{i}$ and $R=r$. We denote by
$f_{\phi_{i}|R=r,A_{i}=a_{i}}$ the density of the angle $\phi_{i}$,
conditioned on $R=r$ and $A_{i}=a_{i}$.
###### Lemma 12.
Let $i\in[2]$. The density of $\eta_{i}=\|a-z_{i}\|-\|b-z_{i}\|$, conditioned
on $A_{i}=a_{i}$ and $R=r$, is bounded from above by
$\frac{a_{i}+r}{a_{i}r}\cdot\frac{f_{\phi_{i}|R=r,A_{i}=a_{i}}(\phi_{i}(\eta))}{|\sin\phi_{i}(\eta)|},$
where
$\phi_{i}(\eta)=\arccos\mathopen{}\mathclose{{}\left(\frac{a_{i}^{2}+r^{2}-(a_{i}-\eta)^{2}}{2a_{i}r}}\right).$
###### Proof.
Let the conditional density of $\eta_{i}$ be $f_{\eta_{i}|R=r,A_{i}=a_{i}}$.
Since $\phi_{i}$ is restricted to $[0,\pi]$ by assumption, there exists a
bijection between $\eta_{i}$ and $\phi_{i}$. To be precise, we have
$\phi_{i}(\eta_{i})=\arccos\mathopen{}\mathclose{{}\left(\frac{a_{i}^{2}+r^{2}-(a_{i}-\eta_{i})^{2}}{2a_{i}r}}\right).$
By standard transformation rules of probability densities, it holds that
$f_{\eta_{i}|R=r,A=a_{i}}(\eta)=\mathopen{}\mathclose{{}\left|\frac{\mathop{}\\!\mathrm{d}\phi_{i}(\eta)}{\mathop{}\\!\mathrm{d}\eta}}\right|f_{\phi_{i}|R=r,A_{i}=a_{i}}(\phi_{i}(\eta)).$
The derivative is easily evaluated:
$\frac{\mathop{}\\!\mathrm{d}\phi_{i}(\eta)}{\mathop{}\\!\mathrm{d}\eta}=\frac{-1}{\sqrt{1-\mathopen{}\mathclose{{}\left(\frac{a_{i}^{2}+r^{2}-(a_{i}-\eta)^{2}}{2a_{i}r}}\right)^{2}}}\cdot\frac{a_{i}-\eta}{a_{i}r}=\frac{-1}{\sin\phi_{i}(\eta)}\cdot\frac{a_{i}-\eta}{a_{i}r}.$
Finally, we have $a_{i}-\eta\leq a_{i}+r$, which follows from the triangle
inequality. This concludes the proof. ∎
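The bijection between $\eta_{i}$ and $\phi_{i}$ used above can be verified numerically (a sanity check with our own function names):

```python
import math

def eta_from_phi(a_i: float, r: float, phi: float) -> float:
    # eta = A_i - ||b - z_i||, with ||b - z_i|| from the law of cosines
    return a_i - math.sqrt(a_i**2 + r**2 - 2 * a_i * r * math.cos(phi))

def phi_from_eta(a_i: float, r: float, eta: float) -> float:
    # The inverse map phi_i(eta) used in Lemma 12
    return math.acos((a_i**2 + r**2 - (a_i - eta)**2) / (2 * a_i * r))

# Round-tripping phi -> eta -> phi recovers the original angle on (0, pi).
a_i, r = 2.0, 0.7
for k in range(1, 20):
    phi = k * math.pi / 20
    assert abs(phi_from_eta(a_i, r, eta_from_phi(a_i, r, phi)) - phi) < 1e-9
```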
With Corollary 6, we have an upper bound for $f_{\phi_{i}|R=r,A_{i}=a_{i}}$.
Unfortunately, simply inserting this upper bound is not enough for us to bound
$f_{\eta_{i}|A_{i}=a_{i},R=r}$, since the density as obtained from Lemma 12
diverges for $\phi=0$ and $\phi=\pi$. There is, however, a way to cure this
divergence.
We now consider a full 2-change (cf. Figure 2). To analyze the improvement
$\Delta$ caused by this 2-change, we construct a random experiment,
conditioned on the outcomes $A_{1}=a_{1}$, $A_{2}=a_{2}$, and $R=r$. We write
this random experiment in Algorithm 1, since we will need to execute different
experiments depending on the ordering of the values of $a_{1}$, $a_{2}$ and
$r$. The parameters $b_{1}$ and $b_{2}$ of this algorithm will take values in
$\\{a_{1},a_{2},r\\}$, depending on this ordering.
Algorithm 1 The algorithm we use to model a random 2-change with fixed
$A_{1}=a_{1}$, $A_{2}=a_{2}$, and $R=r$.
function RandomExpt($b_{1}$, $b_{2}$)
    Draw $\phi_{1}\sim f_{\phi|R=r,A_{1}=a_{1}}$
    Draw $\phi_{2}\sim f_{\phi|R=r,A_{2}=a_{2}}$
    if $\sqrt{b_{1}}\sin\phi_{1}>\sqrt{b_{2}}\sin\phi_{2}$ then
        return $(1,\phi_{1})$
    else
        return $(2,\phi_{2})$
    end if
end function
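A minimal executable sketch of Algorithm 1 follows. The angle samplers are hypothetical stand-ins: the true conditional densities $f_{\phi|R=r,A_{i}=a_{i}}$ depend on $r$, $a_{i}$, $\sigma$ and $d$, and here we only exercise the branching logic with uniform angles:

```python
import math
import random

def random_expt(b1, b2, draw_phi1, draw_phi2):
    # Sketch of Algorithm 1. draw_phi1 and draw_phi2 stand in for samplers of
    # the conditional angle densities (hypothetical placeholders).
    phi1, phi2 = draw_phi1(), draw_phi2()
    if math.sqrt(b1) * math.sin(phi1) > math.sqrt(b2) * math.sin(phi2):
        return (1, phi1)
    return (2, phi2)

random.seed(1)
uniform_angle = lambda: random.uniform(0.0, math.pi)  # stand-in sampler
results = [random_expt(1.0, 4.0, uniform_angle, uniform_angle)
           for _ in range(1000)]
# With b2 > b1, the event E_2 should occur more often than E_1.
n1 = sum(1 for i, _ in results if i == 1)
assert 0 < n1 < 500
```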
The function RandomExpt outlined in Algorithm 1 branches on the outcome of the
variable $X_{i}=\sqrt{b_{i}}\sin\phi_{i}$, $i\in[2]$, where $b_{i}$ is some
distance; we will choose $b_{i}$ among $\\{r,a_{i}\\}$ in subsequent lemmas.
Note that `RandomExpt` returns a tuple $(i,\phi)$, where $i\in[2]$. We call
the angle returned by RandomExpt the _good angle_. Moreover, we label the
event $i=1$ as $E_{1}$, and the event $i=2$ as $E_{2}$. The crux of the analysis is now
to analyze $\eta_{1}$ if $E_{1}$ occurs, and $\eta_{2}$ if $E_{2}$ occurs, as
under $E_{i}$ the density of $\eta_{i}$ is bounded from above.
###### Lemma 13.
Let $(i,\phi)=\texttt{RandomExpt}(b_{1},b_{2})$ for some $b_{1},b_{2}>0$. Let
$j=3-i$. The density of $\phi$, conditioned on $R=r$, $A_{1}=a_{1}$,
$A_{2}=a_{2}$, is then bounded from above by
$\frac{2M_{\phi_{1}}M_{\phi_{2}}}{\mathbb{P}(E_{i})}\cdot\arcsin\mathopen{}\mathclose{{}\left(\min\mathopen{}\mathclose{{}\left\\{1,\sqrt{\frac{b_{i}}{b_{j}}}\sin\phi}\right\\}}\right),$
where
$M_{\phi_{i}}=\max_{0\leq\phi\leq\pi}f_{\phi_{i}|R=r,A_{i}=a_{i}}(\phi)$.
###### Proof.
We omit the conditioning on $A_{1}=a_{1}$, $A_{2}=a_{2}$ and $R=r$ in the
following, for the sake of clarity. We prove only the case $i=1$, thus
conditioning on $E_{1}$, as the proof for $i=2$ proceeds essentially
identically.
Let $X_{i}=\sqrt{b_{i}}\sin\phi_{i}$, $i\in[2]$. The event $E_{1}$ is then
equivalent to $X_{1}>X_{2}$. Let $Z$ in turn denote the random variable given
by $X_{1}$ conditioned on $E_{1}$. The cumulative distribution function of $Z$
is equal to
$F_{Z}(x)=\mathbb{P}(X_{1}\leq
x\>\lvert\>X_{1}>X_{2})=\frac{\mathbb{P}(X_{1}\leq x\wedge
X_{1}>X_{2})}{\mathbb{P}(E_{1})}.$
By the independence of $X_{1}$ and $X_{2}$, this is equal to
$F_{Z}(x)=\frac{1}{\mathbb{P}(E_{1})}\cdot\int_{0}^{x}f_{X_{1}}(y)\int_{0}^{y}f_{X_{2}}(z)\mathop{}\\!\mathrm{d}z\mathop{}\\!\mathrm{d}y.$
Computing the density of $Z$ is then simply a matter of differentiation. Since
$\mathbb{P}(E_{1})$ does not depend on $x$, we obtain
$f_{Z}(x)=\frac{1}{\mathbb{P}(E_{1})}\cdot
f_{X_{1}}(x)\int_{0}^{x}f_{X_{2}}(z)\mathop{}\\!\mathrm{d}z.$
We next require the density of $X_{i}=\sqrt{b_{i}}\sin\phi_{i}$. Observe that
$\displaystyle\mathbb{P}(X_{i}\leq
x)=\mathbb{P}\mathopen{}\mathclose{{}\left(\phi_{i}\leq\arcsin(x/\sqrt{b_{i}})}\right)+\mathbb{P}\mathopen{}\mathclose{{}\left(\phi_{i}\geq\pi-\arcsin(x/\sqrt{b_{i}})}\right).$
(8)
Differentiating this expression with respect to $x$, we find for $x<\sqrt{b_{i}}$
$\displaystyle f_{X_{i}}(x)$
$\displaystyle=\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}x}\mathopen{}\mathclose{{}\left(\mathbb{P}\mathopen{}\mathclose{{}\left(\phi_{i}\leq\arcsin(x/\sqrt{b_{i}})}\right)+1-\mathbb{P}\mathopen{}\mathclose{{}\left(\phi_{i}\leq\pi-\arcsin(x/\sqrt{b_{i}})}\right)}\right)$
$\displaystyle=\frac{\mathop{}\\!\mathrm{d}}{\mathop{}\\!\mathrm{d}x}\mathopen{}\mathclose{{}\left(\arcsin\mathopen{}\mathclose{{}\left(\frac{x}{\sqrt{b_{i}}}}\right)}\right)\cdot\mathopen{}\mathclose{{}\left[f_{\phi_{i}}\mathopen{}\mathclose{{}\left(\arcsin\mathopen{}\mathclose{{}\left(\frac{x}{\sqrt{b_{i}}}}\right)}\right)+f_{\phi_{i}}\mathopen{}\mathclose{{}\left(\pi-\arcsin\mathopen{}\mathclose{{}\left(\frac{x}{\sqrt{b_{i}}}}\right)}\right)}\right]$
$\displaystyle=\frac{1}{\sqrt{b_{i}-x^{2}}}\cdot\mathopen{}\mathclose{{}\left[f_{\phi_{i}}\mathopen{}\mathclose{{}\left(\arcsin\mathopen{}\mathclose{{}\left(\frac{x}{\sqrt{b_{i}}}}\right)}\right)+f_{\phi_{i}}\mathopen{}\mathclose{{}\left(\pi-\arcsin\mathopen{}\mathclose{{}\left(\frac{x}{\sqrt{b_{i}}}}\right)}\right)}\right],$
and $0$ for $x\geq\sqrt{b_{i}}$. Letting
$M_{\phi_{i}}=\max_{0\leq\phi\leq\pi}f_{\phi_{i}|R=r,A_{i}=a_{i}}(\phi)$,
which exists by Corollary 6, we obtain
$f_{X_{i}}(x)\leq
2M_{\phi_{i}}\cdot\begin{cases}\frac{1}{\sqrt{b_{i}-x^{2}}},&\text{if
}x<\sqrt{b_{i}},\\\ 0,&\text{otherwise.}\end{cases}$
Using this density, together with the identity
$\int_{0}^{x}(b-y^{2})^{-1/2}\mathop{}\\!\mathrm{d}y=\arcsin(x/\sqrt{b})$
for $x<\sqrt{b}$, we obtain
$f_{Z}(x)\leq\frac{2M_{\phi_{1}}M_{\phi_{2}}}{\mathbb{P}(E_{1})}\cdot\frac{\arcsin\mathopen{}\mathclose{{}\left(\min\mathopen{}\mathclose{{}\left\\{1,\frac{x}{\sqrt{b_{2}}}}\right\\}}\right)}{\sqrt{b_{1}-x^{2}}}$
if $x<\sqrt{b_{1}}$, and $f_{Z}(x)=0$ otherwise. It remains to convert $Z$
back to $\phi$, where $\phi$ is the good angle. Since we have conditioned on
$E_{1}$, we know that $Z=\sqrt{b_{1}}\sin\phi$. Using similar considerations
as used in Equation 8, we have
$\displaystyle
f_{Z}(x)=\frac{1}{\sqrt{b_{1}-x^{2}}}f_{\phi}(\arcsin(x/\sqrt{b_{1}}))+\frac{1}{\sqrt{b_{1}-x^{2}}}f_{\phi}(\pi-\arcsin(x/\sqrt{b_{1}})).$
Since this expression holds for all $x\in(0,\sqrt{b_{1}})$, and since
probability densities are non-negative, it follows that
$f_{\phi}(\phi)\leq\frac{2M_{\phi_{1}}M_{\phi_{2}}}{\mathbb{P}(E_{1})}\cdot\arcsin\mathopen{}\mathclose{{}\left(\min\mathopen{}\mathclose{{}\left\\{1,\sqrt{\frac{b_{1}}{b_{2}}}\sin\phi}\right\\}}\right),$
for all $\phi\in(0,\pi)$. ∎
For the next part, we apply Lemma 13 to Lemma 12 to bound the density of
$\eta_{i}$, given that $E_{i}$ occurs.
###### Lemma 14.
Let $i\in[2]$ and $j=3-i$. Let $f_{\eta_{i}|E_{i}}$ denote the density of
$\eta_{i}$, conditioned on $E_{i}$ as well as the outcomes $R=r$,
$A_{1}=a_{1}$, and $A_{2}=a_{2}$. Then
$\displaystyle
f_{\eta_{i}|E_{i}}(\eta)\leq\frac{1}{\mathbb{P}(E_{i})}\cdot\frac{2\pi
M_{\phi_{1}}M_{\phi_{2}}}{\sqrt{\min\\{a_{1},r\\}\min\\{a_{2},r\\}}},$
where
$M_{\phi_{i}}=\max_{0\leq\phi\leq\pi}f_{\phi_{i}|R=r,A_{i}=a_{i}}(\phi)$.
###### Proof.
We prove only the case $i=1$. From Lemma 12, we know that
$f_{\eta_{i}|E_{i}}(\eta)\leq\frac{a_{i}+r}{a_{i}r}\cdot\frac{f_{\phi_{i}|E_{i},A_{1}=a_{1},A_{2}=a_{2},R=r}(\phi)}{\sin\phi}.$
Let $(i,\phi)=\verb|RandomExpt|(b_{1},b_{2})$, for some $b_{1},b_{2}>0$. We
will choose values for $b_{1}$ and $b_{2}$ depending on the ordering of
$a_{1},a_{2}$ and $r$. Note that we may do this, since we know the choices of
$a_{1}$, $a_{2}$ and $r$ before executing `RandomExpt`.
Since we condition on $E_{1}$, we know that $i=1$, and hence that $\phi_{1}$
is the good angle. By Lemma 13, we can obtain a bound for
$f_{\phi|E_{i},A_{1}=a_{1},A_{2}=a_{2},R=r}$. We thus find
$f_{\eta_{1}|E_{1}}(\eta)\leq\frac{2M_{\phi_{1}}M_{\phi_{2}}}{\mathbb{P}(E_{1})}\cdot\frac{a_{1}+r}{a_{1}r}\cdot\frac{\arcsin\mathopen{}\mathclose{{}\left(\min\mathopen{}\mathclose{{}\left\\{1,\sqrt{\frac{b_{1}}{b_{2}}}\sin\phi}\right\\}}\right)}{\sin\phi}.$
First, suppose $\sin\phi\geq\sqrt{b_{2}/b_{1}}$. Then the arcsine evaluates to
$\pi/2$, and so the above is bounded from above by
$\frac{\pi}{2}\sqrt{\frac{b_{1}}{b_{2}}}.$
Second, suppose $\sin\phi<\sqrt{b_{2}/b_{1}}$. Since $\arcsin(x)\leq\pi x/2$
for $x\in(0,1)$, this case yields the same bound, and we obtain
$f_{\eta_{1}|E_{1}}(\eta)\leq\frac{\pi
M_{\phi_{1}}M_{\phi_{2}}}{\mathbb{P}(E_{1})}\cdot\frac{a_{1}+r}{a_{1}r}\cdot\sqrt{\frac{b_{1}}{b_{2}}}.$
We now examine the four relevant orderings of $a_{1}$, $a_{2}$ and $r$.
##### Case 1: $a_{1},a_{2}\leq r$.
We let $b_{1}=a_{1}$ and $b_{2}=a_{2}$. Then we have
$\frac{a_{1}+r}{a_{1}r}\cdot\sqrt{\frac{a_{1}}{a_{2}}}=\frac{a_{1}+r}{r\sqrt{a_{1}a_{2}}}\leq\frac{2r}{r\sqrt{a_{1}a_{2}}}=\frac{2}{\sqrt{a_{1}a_{2}}}.$
##### Case 2: $a_{1},a_{2}\geq r$.
We let $b_{1}=b_{2}=r$, and obtain
$\frac{a_{1}+r}{a_{1}r}\leq\frac{2a_{1}}{a_{1}r}=\frac{2}{r}.$
##### Case 3: $a_{1}\geq r\geq a_{2}$.
We let $b_{1}=r$ and $b_{2}=a_{2}$, which yields
$\frac{a_{1}+r}{a_{1}r}\cdot\sqrt{\frac{r}{a_{2}}}=\frac{a_{1}+r}{\sqrt{a_{2}r}a_{1}}\leq\frac{2}{\sqrt{a_{2}r}}.$
##### Case 4: $a_{2}\geq r\geq a_{1}$.
We let $b_{1}=a_{1}$ and $b_{2}=r$, to find
$\frac{a_{1}+r}{a_{1}r}\sqrt{\frac{a_{1}}{r}}\leq\frac{2r\sqrt{a_{1}}}{a_{1}r\sqrt{r}}=\frac{2}{\sqrt{a_{1}r}}.$
This final case concludes the proof. ∎
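The four case distinctions collapse to the single choice $b_{i}=\min\\{a_{i},r\\}$, and the resulting bound can be checked numerically (a sanity check only; `case_bound` is our name for the factor $\frac{a_{1}+r}{a_{1}r}\sqrt{b_{1}/b_{2}}$):

```python
import math
import random

def case_bound(a1, a2, r):
    # The proof's choice of (b1, b2): in all four orderings, b_i = min(a_i, r)
    b1, b2 = min(a1, r), min(a2, r)
    return (a1 + r) / (a1 * r) * math.sqrt(b1 / b2)

# Each case's bound is at most 2 / sqrt(min{a1, r} * min{a2, r}).
random.seed(2)
for _ in range(1000):
    a1, a2, r = (random.uniform(0.1, 10.0) for _ in range(3))
    assert case_bound(a1, a2, r) <= 2.0 / math.sqrt(min(a1, r) * min(a2, r)) + 1e-12
```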
The bound on the density of $\eta_{i}$ from Lemma 14 puts us in the position
to prove a bound on the probability that $\Delta\in(0,\epsilon]$.
###### Lemma 15.
Let $\Delta$ denote the improvement of a 2-change. Then
$\displaystyle\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{1}=a_{1},A_{2}=a_{2},R=r)\leq\frac{\pi
M_{\phi_{1}}M_{\phi_{2}}\epsilon}{\sqrt{\min\\{a_{1},r\\}\min\\{a_{2},r\\}}},$
where
$M_{\phi_{i}}=\max_{0\leq\phi\leq\pi}f_{\phi_{i}|R=r,A_{i}=a_{i}}(\phi)$.
###### Proof.
We condition first on $E_{1}$, and then let an adversary choose an outcome for
$\eta_{2}$, say, $\eta_{2}=t$. Then we have $\Delta\in(0,\epsilon]$ iff
$\eta_{1}\in(-t,-t+\epsilon]$, which is an interval of size $\epsilon$.
Since the probability that $\eta_{1}$ falls into an interval of size
$\epsilon$ is at most $\epsilon\cdot\max_{\eta}f_{\eta_{1}|E_{1}}(\eta)$, all
we need to conclude the proof for $E_{1}$ is a bound on
$f_{\eta_{1}|E_{1}}(\eta)$. This is provided by Lemma 14.
We then repeat the same argument for $E_{2}$. The result is obtained by
applying the Law of Total Probability. ∎
With Lemma 15, we could prove a bound on the smoothed complexity of 2-opt
already. However, the resulting bound would be weaker than existing results.
Instead of analyzing single 2-changes, we thus use the framework of linked
pairs of 2-changes in Section 4.
For the analysis in Section 4, it is convenient to have some lemmas similar to
Lemma 15, with one or more of the distances $A_{1}$, $A_{2}$ and $R$
integrated out. These are given in Lemmas 16, 17 and 18. The proofs are
straightforward computations.
###### Lemma 16.
For $i\in[2]$,
$\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{i}=a_{i},R=r)=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}D}{\sigma^{2}}+\frac{d}{\sqrt{a_{i}r}}+\frac{d}{r}+\frac{d^{3/4}\sqrt{D}}{\sigma}\mathopen{}\mathclose{{}\left(\frac{1}{\sqrt{a_{i}}}+\frac{1}{\sqrt{r}}}\right)}\right)\cdot\epsilon}\right).$
###### Proof.
We assume $i=1$, since by symmetry the result for $i=2$ follows essentially
identically.
Consider the cases $a_{1}\leq r$ and $a_{1}\geq r$ separately.
##### Case 1: $a_{1}\leq r$.
For this case, we have by Lemma 15 for some constants
$c,c^{\prime},c^{\prime\prime}>0$,
$\displaystyle\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{1}=a_{1},A_{2}=a_{2},R=r)$
$\displaystyle\leq
c\cdot\frac{M_{\phi_{1}}M_{\phi_{2}}\epsilon}{\sqrt{a_{1}}}\cdot\begin{cases}\frac{1}{\sqrt{r}},&\text{if
}a_{2}\geq r\\\ \frac{1}{\sqrt{a_{2}}},&\text{if }a_{2}\leq r\end{cases}$
$\displaystyle\leq
c^{\prime}\cdot\frac{M_{\phi_{1}}\epsilon}{\sqrt{a_{1}}}\cdot\begin{cases}\sqrt{\frac{d}{r}}+\frac{d^{1/4}\sqrt{D}}{\sigma},&\text{if
}a_{2}\geq r\\\
\sqrt{\frac{d}{a_{2}}}+\frac{d^{1/4}\sqrt{D}}{\sigma},&\text{if }a_{2}\leq
r\end{cases}$ $\displaystyle\leq
c^{\prime\prime}\cdot\frac{M_{\phi_{1}}\epsilon}{\sqrt{a_{1}}}\mathopen{}\mathclose{{}\left(\sqrt{\frac{d}{r}}+\sqrt{\frac{d}{a_{2}}}+\frac{d^{1/4}\sqrt{D}}{\sigma}}\right),$
where we use Corollary 6 to bound $M_{\phi_{2}}$.
We can now use Lemmas 4 and 3 to integrate out $a_{2}$, leaving us with
$O\mathopen{}\mathclose{{}\left(\frac{M_{\phi_{1}}\epsilon}{\sqrt{a_{1}}}\mathopen{}\mathclose{{}\left(\sqrt{\frac{d}{r}}+\frac{d^{1/4}}{\sqrt{\sigma}}+\frac{d^{1/4}\sqrt{D}}{\sigma}}\right)}\right).$
Using that $D\geq 1$ and $\sigma\leq 1$, we see that the third term in the
inner brackets is at least as large as the second term, and so we obtain
$\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{1}=a_{1},R=r)=O\mathopen{}\mathclose{{}\left(\frac{M_{\phi_{1}}}{\sqrt{a_{1}}}\mathopen{}\mathclose{{}\left(\sqrt{\frac{d}{r}}+\frac{d^{1/4}\sqrt{D}}{\sigma}}\right)\cdot\epsilon}\right).$
Now we use Corollary 6 to conclude
$M_{\phi_{1}}=O\mathopen{}\mathclose{{}\left(\sqrt{d}+d^{1/4}\sqrt{Da_{1}}/\sigma}\right)$,
yielding
$O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\sqrt{\frac{d}{a_{1}}}+\frac{d^{1/4}\sqrt{D}}{\sigma}}\right)\cdot\mathopen{}\mathclose{{}\left(\sqrt{\frac{d}{r}}+\frac{d^{1/4}\sqrt{D}}{\sigma}}\right)\cdot\epsilon}\right)\\\
=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{d}{\sqrt{a_{1}r}}+\frac{d^{3/4}\sqrt{D}}{\sigma}\mathopen{}\mathclose{{}\left(\frac{1}{\sqrt{a_{1}}}+\frac{1}{\sqrt{r}}}\right)+\frac{\sqrt{d}D}{\sigma^{2}}}\right)\cdot\epsilon}\right).$
##### Case 2: $a_{1}\geq r$.
Here, Lemma 15 tells us
$\displaystyle\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{1}=a_{1},A_{2}=a_{2},R=r)$
$\displaystyle\leq c\cdot
M_{\phi_{1}}M_{\phi_{2}}\epsilon\cdot\begin{cases}\frac{1}{r},&\text{if
}a_{2}\geq r\\\ \frac{1}{\sqrt{ra_{2}}},&\text{if }a_{2}\leq r\end{cases}$
$\displaystyle\leq c^{\prime}\cdot
M_{\phi_{1}}\epsilon\cdot\begin{cases}\frac{\sqrt{d}}{r}+\frac{d^{1/4}\sqrt{D}}{\sigma\sqrt{r}},&\text{if
}a_{2}\geq r,\\\
\sqrt{\frac{d}{ra_{2}}}+\frac{d^{1/4}\sqrt{D}}{\sigma\sqrt{r}},&\text{if
}a_{2}\leq r\end{cases}$ $\displaystyle\leq c^{\prime\prime}\cdot
M_{\phi_{1}}\epsilon\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}}{r}+\sqrt{\frac{d}{ra_{2}}}+\frac{d^{1/4}\sqrt{D}}{\sigma\sqrt{r}}}\right),$
again for some $c,c^{\prime},c^{\prime\prime}>0$ and using Corollary 6 to
bound $M_{\phi_{2}}$.
Integrating out $a_{2}$ using Lemmas 4 and 3, we have
$O\mathopen{}\mathclose{{}\left(M_{\phi_{1}}\epsilon\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}}{r}+\frac{d^{1/4}}{\sqrt{\sigma
r}}+\frac{d^{1/4}\sqrt{D}}{\sigma\sqrt{r}}}\right)}\right)\subseteq
O\mathopen{}\mathclose{{}\left(M_{\phi_{1}}\epsilon\cdot\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}}{r}+\frac{d^{1/4}\sqrt{D}}{\sigma\sqrt{r}}}\right)}\right).$
Using Corollary 6 to insert
$M_{\phi_{1}}=O\mathopen{}\mathclose{{}\left(\sqrt{d}+d^{1/4}\sqrt{Dr}/\sigma}\right)$,
we find
$O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}}{r}+\frac{d^{1/4}\sqrt{D}}{\sigma\sqrt{r}}}\right)\cdot\mathopen{}\mathclose{{}\left(\sqrt{d}+\frac{d^{1/4}\sqrt{Dr}}{\sigma}}\right)\cdot\epsilon}\right)=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{d}{r}+\frac{d^{3/4}\sqrt{D}}{\sigma\sqrt{r}}+\frac{\sqrt{d}D}{\sigma^{2}}}\right)\cdot\epsilon}\right).$
The result follows from these two cases. ∎
###### Lemma 17.
For $i\in[2]$,
$\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{i}=a_{i})=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}D}{\sigma^{2}}+\frac{d^{3/4}\sqrt{D}}{\sigma\sqrt{a_{i}}}}\right)\cdot\epsilon}\right).$
###### Proof.
From Lemma 16, we have
$\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{i}=a_{i},R=r)=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}D}{\sigma^{2}}+\frac{d}{\sqrt{a_{i}r}}+\frac{d}{r}+\frac{d^{3/4}\sqrt{D}}{\sigma}\mathopen{}\mathclose{{}\left(\frac{1}{\sqrt{a_{i}}}+\frac{1}{\sqrt{r}}}\right)}\right)\cdot\epsilon}\right).$
We can then apply Lemmas 4 and 3 to integrate out $r$. This leaves
$O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}D}{\sigma^{2}}+\frac{d^{1/4}}{\sqrt{a_{i}\sigma}}+\frac{\sqrt{d}}{\sigma}+\frac{\sqrt{dD}}{\sigma^{3/2}}+\frac{d^{3/4}\sqrt{D}}{\sigma\sqrt{a_{i}}}}\right)\cdot\epsilon}\right)\subseteq
O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}D}{\sigma^{2}}+\frac{d^{3/4}\sqrt{D}}{\sigma\sqrt{a_{i}}}}\right)\cdot\epsilon}\right),$
as claimed. ∎
###### Lemma 18.
$\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>R=r)=O\mathopen{}\mathclose{{}\left(\mathopen{}\mathclose{{}\left(\frac{\sqrt{d}D}{\sigma^{2}}+\frac{d}{r}+\frac{d^{3/4}\sqrt{D}}{\sigma\sqrt{r}}}\right)\cdot\epsilon}\right).$
###### Proof.
The result follows from taking Lemma 17 and integrating out $a_{i}$ using
Lemmas 4 and 3. ∎
## 4 Linked Pairs of 2-Changes
To obtain bounds on the smoothed complexity of 2-opt, we consider so-called
linked pairs of 2-changes, introduced previously by Englert et al. [8]. A pair
of 2-changes is said to be linked if some edge removed from the tour by one
2-change is added to the tour by the other 2-change.
Such linked pairs have been considered in several previous works [8, 13]. In
each case, the distinction has been made between several types of linked
pairs. In our analysis, only two of these types are relevant, and so we will
describe only these types for the sake of brevity.
We consider 2-changes which share exactly one edge, and subdivide them into
pairs of type 0 and of type 1. A generic 2-change removes the edges
$\\{z_{1},z_{2}\\}$ and $\\{z_{3},z_{6}\\}$ while adding $\\{z_{1},z_{6}\\}$
and $\\{z_{2},z_{3}\\}$. The other 2-change removes $\\{z_{3},z_{4}\\}$ and
$\\{z_{5},z_{6}\\}$ while adding $\\{z_{3},z_{6}\\}$ and $\\{z_{4},z_{5}\\}$.
Note that $\\{z_{3},z_{6}\\}$ occurs in both 2-changes.
* If $|\\{z_{1},\ldots,z_{6}\\}|=6$, then we say the linked pair is of type 0.
* If $|\\{z_{1},\ldots,z_{6}\\}|=5$, then we say the linked pair is of type 1.
Type 1 can itself be subdivided into two types, 1a and 1b. We will detail this
distinction in Section 4.2.
Before moving on to analyzing linked pairs, we state a useful lemma that
justifies limiting the discussion to just linked pairs of types 0 and 1.
###### Lemma 19 ([8, Lemma 9]).
In every sequence of $t$ consecutive 2-changes the number of disjoint pairs of
2-changes of type 0 or type 1 is at least $\Omega(t)-O(n^{2})$.
### 4.1 Type 0
We begin with type 0, as this is by far the simplest linked pair. For clarity,
see Figure 3 (left) for an illustration of a type 0 linked pair. It should be
noted that, while Figure 3 shows a specific configuration of vertices in two
dimensions, the results of this section hold generally; the analysis does not
depend on any point having a particular orientation with respect to its
neighbors. The same holds for the results in Section 4.2.
The improvement of a type 0 linked pair is completely specified by a small
number of random variables. We require five distances between vertices,
$R_{1}=\|z_{1}-z_{3}\|$, $A_{1}=\|z_{3}-z_{6}\|$, $A_{2}=\|z_{1}-z_{2}\|$,
$R_{2}=\|z_{4}-z_{6}\|$, and $A_{3}=\|z_{4}-z_{5}\|$. Additionally, we need the
following angles:
1. $\phi_{1}$ between $z_{2}-z_{1}$ and $z_{3}-z_{1}$,
2. $\phi_{2}$ between $z_{1}-z_{3}$ and $z_{6}-z_{3}$,
3. $\phi_{1}^{\prime}$ between $z_{3}-z_{6}$ and $z_{4}-z_{6}$,
4. $\phi_{3}$ between $z_{6}-z_{4}$ and $z_{5}-z_{4}$.
Note that, if we condition on $A_{1}=a_{1}$, the events
$\Delta_{1}\in(0,\epsilon]$ and $\Delta_{2}\in(0,\epsilon]$ are independent.
We can then apply Lemma 15, together with several applications of Lemma 4.
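To make the quantities concrete, the improvements $\Delta_{1}$ and $\Delta_{2}$ of the two 2-changes can be computed directly from point coordinates. The following Python sketch is our own illustration (the helper `improvement` is not from the paper); note that the length of the shared edge $\\{z_{3},z_{6}\\}$ cancels in the sum $\Delta_{1}+\Delta_{2}$, since that edge is removed by one 2-change and added by the other.

```python
import math

def improvement(removed, added, points):
    """Improvement of a 2-change: total length of removed edges minus total
    length of added edges. `removed`/`added` are lists of index pairs and
    `points` maps an index to planar coordinates."""
    dist = lambda i, j: math.dist(points[i], points[j])
    return (sum(dist(i, j) for i, j in removed)
            - sum(dist(i, j) for i, j in added))

# Delta_1: remove {z1,z2}, {z3,z6}; add {z1,z6}, {z2,z3}:
#   improvement([(1, 2), (3, 6)], [(1, 6), (2, 3)], points)
# Delta_2: remove {z3,z4}, {z5,z6}; add {z3,z6}, {z4,z5}:
#   improvement([(3, 4), (5, 6)], [(3, 6), (4, 5)], points)
```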
Figure 3: Labels of points involved in the three types of pairs of linked
2-changes. Left: type 0. Center: type 1a. Right: type 1b.
###### Lemma 20.
Let $\Delta^{\mathrm{link}}_{\mathrm{min}}$ denote the minimum improvement of
any type 0 pair of linked 2-changes, and assume that
$\mathcal{X}\subseteq[-D,D]^{d}$. Then
$\mathbb{P}(\Delta^{\mathrm{link}}_{\mathrm{min}}\in(0,\epsilon])=O\left(\frac{dD^{2}n^{6}\epsilon^{2}}{\sigma^{4}}\right).$
###### Proof.
The result follows from the independence of $\Delta_{1}$ and $\Delta_{2}$ when
conditioning on $A_{1}=a_{1}$. Observe that
$\mathbb{P}(\Delta^{\mathrm{link}}\in(0,\epsilon])\leq\mathbb{P}(\Delta_{1}\in(0,\epsilon]\wedge\Delta_{2}\in(0,\epsilon])$.
Thus, using Lemma 17,
$\displaystyle\mathbb{P}(\Delta^{\mathrm{link}}\in(0,\epsilon]\>\lvert\>A_{1}=a_{1})=O\left(\left(\frac{\sqrt{d}D}{\sigma^{2}}+\frac{d^{3/4}\sqrt{D}}{\sigma\sqrt{a_{1}}}\right)^{2}\epsilon^{2}\right).$
Straightforward algebra yields
$\left(\frac{\sqrt{d}D}{\sigma^{2}}+\frac{d^{3/4}\sqrt{D}}{\sigma\sqrt{a_{1}}}\right)^{2}=O\left(\frac{dD^{2}}{\sigma^{4}}+\frac{d^{3/2}D}{\sigma^{2}a_{1}}+\frac{d^{5/4}D^{3/2}}{\sigma^{3}\sqrt{a_{1}}}\right).$
Using Lemmas 3 and 4 to integrate out $a_{1}$, we obtain
$\frac{dD^{2}}{\sigma^{4}}+\frac{dD}{\sigma^{3}}+\frac{dD^{3/2}}{\sigma^{7/2}}=O\left(\frac{dD^{2}}{\sigma^{4}}\right).$
Taking a union bound over the $O(n^{6})$ different type 0 pairs completes the
proof. ∎
### 4.2 Type 1
As mentioned previously, type 1 linked pairs can be subdivided into two
distinct subtypes. Subtype 1a shares exactly one edge between the two
2-changes, while subtype 1b shares two edges.
#### 4.2.1 Type 1a
We first consider type 1a. See Figure 3 (center) for a graphical
representation of the type, as well as the labels of the points and edges
involved.
Let the 2-change replacing $\\{z_{1},z_{2}\\}$ and $\\{z_{3},z_{4}\\}$ by
$\\{z_{2},z_{3}\\}$ and $\\{z_{1},z_{4}\\}$ be called $S_{1}$, and the
2-change replacing $\\{z_{1},z_{4}\\}$ and $\\{z_{3},z_{5}\\}$ by
$\\{z_{1},z_{3}\\}$ and $\\{z_{4},z_{5}\\}$ be called $S_{2}$.
We proceed by conditioning on $A_{2}=\|z_{3}-z_{4}\|=a_{2}$ and
$A_{3}=\|z_{4}-z_{5}\|=a_{3}$. Using Lemma 15, we can then compute the
probability that $\Delta_{1}\in(0,\epsilon]$. Moreover, the location of
$z_{5}$ is then still random. Hence, the random variable
$\eta=\|z_{3}-z_{5}\|-\|z_{4}-z_{5}\|$ can be analyzed independently from
$\Delta_{1}$.
For the density of $\eta$, we have the following lemma from Englert et al. [8].
###### Lemma 21 ([8, Lemma 15, modified]).
Let $i\in[2]$, and assume that $\mathcal{X}\subseteq[-D,D]^{d}$. For
$a_{2},a_{3}\in(0,2\sqrt{d}D]$ and
$\eta\in(-a_{2},\min\\{a_{2},2a_{3}-a_{2}\\})$,
$\displaystyle f_{\eta|A_{2}=a_{2},A_{3}=a_{3}}(\eta)\leq
M_{\phi}\cdot\begin{cases}\sqrt{\frac{2}{a_{2}^{2}-\eta^{2}}},&\text{if
}a_{3}\geq a_{2},\\\ \sqrt{\frac{2}{(a_{2}+\eta)(2a_{3}-a_{2}-\eta)}},&\text{if
}a_{3}<a_{2},\end{cases}$
where
$M_{\phi}=\max_{0\leq\phi\leq\pi}f_{\phi|A_{2}=a_{2},A_{3}=a_{3}}(\phi)$. For
$\eta\notin(-a_{2},\min\\{a_{2},2a_{3}-a_{2}\\})$, the density vanishes.
Note that the factor $M_{\phi}$ was not present in the original statement of
Lemma 21. This is because the original statement concerned a simplified random
experiment, wherein the points $z_{5}$ and $z_{3}$ are chosen uniformly from a
hyperball centered on $z_{4}$. As such, $\phi$ is assumed to be distributed
uniformly. (This assumption is only valid for $d=2$. To see this, observe
that by conditioning on $A_{i}=a_{i}$, the point $z_{i}$ is distributed
uniformly on the $(d-1)$-sphere with radius $a_{i}$. For $d>2$, the density of
$\phi$ is thus concentrated near $\phi=\pi/2$. An upper bound for this density
can be obtained by setting $s=0$ in Theorem 5, yielding $O(\sqrt{d})$. As
Englert et al. assume $d$ to be constant, this has no effect on their eventual
result.) Since we do not analyze a simplified random experiment, we cannot
make this assumption. However, examining the original proof of Lemma 21, this
can be resolved by simply inserting the upper bound of the density of $\phi$,
conditioned on $A_{2}=a_{2}$ and $A_{3}=a_{3}$. This bound is provided to us
by Corollary 6.
###### Lemma 22.
Let $\Delta_{2}$ be the improvement yielded by $S_{2}$, and assume that
$\mathcal{X}\subseteq[-D,D]^{d}$. Then
$\mathbb{P}(\Delta_{2}\in(0,\epsilon]\>\lvert\>A_{2}=a_{2})=O\left(\left(\frac{d^{1/4}\sqrt{D}}{\sigma}+\sqrt{\frac{d}{a_{2}}}\right)\cdot\sqrt{\epsilon}\right).$
###### Proof.
We obtain the density of $\eta$ from Lemma 21. As before, we need to subdivide
into the cases $a_{2}\leq a_{3}$ and $a_{2}\geq a_{3}$.
##### Case 1: $a_{3}\leq a_{2}$.
For this case, the conditional density of $\eta$ reads
$f_{\eta|A_{2}=a_{2},A_{3}=a_{3}}(\eta)\leq
M_{\phi}\cdot\begin{cases}\sqrt{\frac{2}{a_{3}(a_{2}+\eta)}},&\eta\leq
a_{3}-a_{2},\\\ \sqrt{\frac{2}{a_{3}(2a_{3}-a_{2}-\eta)}},&\eta\geq
a_{3}-a_{2}.\end{cases}$
We assume that the random variable $\|z_{1}-z_{4}\|-\|z_{1}-z_{3}\|$ has been
fixed by the adversary. This fixes an interval of size $\epsilon$ for $\eta$
to fall within, should $\Delta_{2}\in(0,\epsilon]$ occur. Observe that
$f_{\eta|A_{2}=a_{2},A_{3}=a_{3}}$ integrated over any interval of size
$\epsilon$ yields at most $O(M_{\phi}\sqrt{\epsilon/a_{3}})$. Since $a_{3}\leq
a_{2}$, we have $M_{\phi}=O(\sqrt{d}+d^{1/4}\sqrt{Da_{3}}/\sigma)$. Thus, for
any interval $I$ of size $\epsilon$,
$\mathbb{P}(\eta\in I\>\lvert\>A_{2}=a_{2},A_{3}=a_{3})=O\left(\left(\sqrt{\frac{d}{a_{3}}}+\frac{d^{1/4}\sqrt{D}}{\sigma}\right)\cdot\sqrt{\epsilon}\right).$
##### Case 2: $a_{3}\geq a_{2}$.
For this case, we have
$f_{\eta|A_{2}=a_{2},A_{3}=a_{3}}(\eta)\leq M_{\phi}\sqrt{\frac{2}{a_{2}}}\cdot\sqrt{\frac{1}{a_{2}-|\eta|}}.$
Similarly as in Case 1, this function integrates to at most
$O(M_{\phi}\sqrt{\epsilon/a_{2}})$. Here, we have
$M_{\phi}=O(\sqrt{d}+d^{1/4}\sqrt{Da_{2}}/\sigma)$, so we obtain
$\mathbb{P}(\eta\in I\>\lvert\>A_{2}=a_{2},A_{3}=a_{3})=O\left(\left(\sqrt{\frac{d}{a_{2}}}+\frac{d^{1/4}\sqrt{D}}{\sigma}\right)\cdot\sqrt{\epsilon}\right).$
Combining the two cases above, we see that
$\mathbb{P}(\Delta_{2}\in(0,\epsilon]\>\lvert\>A_{2}=a_{2},A_{3}=a_{3})=O\left(\left(\frac{d^{1/4}\sqrt{D}}{\sigma}+\sqrt{\frac{d}{a_{2}}}+\sqrt{\frac{d}{a_{3}}}\right)\cdot\sqrt{\epsilon}\right).$
We can now integrate out $a_{3}$ using Lemmas 4 and 3. Then, using $D\geq 1$,
$d\geq 2$ and $\sigma\leq 1$, we eventually arrive at the stated result. ∎
Using Lemmas 4 and 22, we can easily prove the following statement about type
1a pairs of 2-changes.
###### Lemma 23.
Let $\Delta^{\mathrm{link}}_{\mathrm{min}}$ denote the minimum improvement of
any type 1a pair of 2-changes, and assume that
$\mathcal{X}\subseteq[-D,D]^{d}$. Then
$\mathbb{P}(\Delta^{\mathrm{link}}_{\mathrm{min}}\in(0,\epsilon])=O\left(\frac{n^{5}d^{3/4}D^{3/2}}{\sigma^{3}}\epsilon^{3/2}\right).$
###### Proof.
As in the proof of Lemma 20, we can simply use Lemmas 17 and 22 to compute the
probability that both $\Delta_{1}\in(0,\epsilon]$ and
$\Delta_{2}\in(0,\epsilon]$, which bounds the probability that
$\Delta_{1}+\Delta_{2}\in(0,\epsilon]$:
$\displaystyle\mathbb{P}(\Delta_{1},\Delta_{2}\in(0,\epsilon]\>\lvert\>A_{2}=a_{2})$
$\displaystyle=O\left(\left(\frac{d^{3/4}D^{3/2}}{\sigma^{3}}+\frac{dD}{\sigma^{2}\sqrt{a_{2}}}+\frac{d^{5/4}\sqrt{D}}{\sigma a_{2}}\right)\cdot\epsilon^{3/2}\right).$
Using Lemmas 4 and 3, with $d\geq 2$, $D\geq 1$ and $\sigma\leq 1$ in
conjunction with a union bound over the $O(n^{5})$ pairs of type 1a yields the
result. ∎
#### 4.2.2 Type 1b
The final type of linked pair we consider is type 1b. See Figure 3 (right) for
a graphical representation.
Let $S_{1}$ denote the 2-change replacing $\\{z_{1},z_{3}\\}$ and
$\\{z_{2},z_{4}\\}$ with $\\{z_{2},z_{3}\\}$ and $\\{z_{1},z_{4}\\}$, and let
$S_{2}$ denote the 2-change replacing $\\{z_{2},z_{5}\\}$ and
$\\{z_{1},z_{4}\\}$ with $\\{z_{1},z_{5}\\}$ and $\\{z_{2},z_{4}\\}$. From
Figure 3, it is evident that we can treat $\Delta_{1}$ and
$\eta=\|z_{2}-z_{5}\|-\|z_{1}-z_{5}\|$ as independent variables, as long as we
condition on $R=r$.
###### Lemma 24.
Let $\Delta^{\mathrm{link}}_{\mathrm{min}}$ denote the minimum improvement of
any type 1b pair of 2-changes, and assume that
$\mathcal{X}\subseteq[-D,D]^{d}$. Then
$\mathbb{P}(\Delta^{\mathrm{link}}_{\mathrm{min}}\in(0,\epsilon])=O\left(\frac{n^{5}d^{3/4}D^{3/2}}{\sigma^{3}}\epsilon^{3/2}\right).$
###### Proof.
The proof follows along the exact same lines as that of Lemma 23, with only
small modifications.
∎
Lemmas 20, 23 and 24 enable us to prove an upper bound to the smoothed
complexity of 2-opt in the present probabilistic model.
###### Theorem 25.
The expected number of iterations performed by 2-opt for smoothed Euclidean
instances of TSP in $d\geq 2$ dimensions is bounded from above by
$O\left(dD^{2}n^{4+\frac{1}{3}}/\sigma^{2}\right)$.
###### Proof.
We assume for this proof that the entire instance is contained within
$[-D,D]^{d}$, with $D=\Theta(1+\sigma\sqrt{n\log n})$. This occurs with
probability at least $1-1/n!$. Thus, with probability at least $1-1/n!$, the
longest tour in the instance has length at most $2\sqrt{d}Dn$. The assumption
that the entire instance lies within this hypercube enables us to use Lemmas
20, 23 and 24, which were proved under this assumption.
Let $E$ denote the event that, among all type 0 and type 1 linked pairs of
2-changes, the pair with the smallest improvement is of type 0, and let
$E^{c}$ denote the event that this pair is of type 1a or type 1b. Let the
random variable $T$ denote the number of iterations taken by 2-opt to reach a
local optimum.
We first compute $\mathbb{E}(T\>\lvert\>E)$. We apply Lemma 1 with $\alpha=2$,
which is feasible due to Lemma 20. We then obtain immediately that
$\mathbb{E}(T\>\lvert\>E)=O(dD^{2}n^{4}/\sigma^{2})$.
Next, we compute $\mathbb{E}(T\>\lvert\>E^{c})$. In this case, we apply Lemma
1 with $\alpha=3/2$ (cf. Lemmas 23 and 24). This yields
$\mathbb{E}(T\>\lvert\>E^{c})=O(dD^{2}n^{4+\frac{1}{3}}/\sigma^{2})$.
Combining the bounds for $E$ and $E^{c}$ yields the result. ∎
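For intuition about the probabilistic model analyzed above, one can simulate it: perturb an adversarial point set by independent Gaussian noise of standard deviation $\sigma$, then run 2-opt until no improving 2-change remains, counting the improving steps. The sketch below is a plain Python illustration of the model, not the analysis itself; all names are our own.

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    """Run 2-opt to a local optimum; return (tour, number of improving steps)."""
    n, steps = len(tour), 0
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip the pair (0, n-1): those two tour edges share tour[0].
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d])
                         - math.dist(pts[a], pts[c]) - math.dist(pts[b], pts[d]))
                if delta > 1e-12:  # improving 2-change found: apply it
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    steps += 1
                    improved = True
    return tour, steps

def smoothed_instance(adversarial, sigma, rng):
    """Perturb each adversarial point by independent Gaussian noise."""
    return [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
            for x, y in adversarial]
```

Smoothed analysis bounds the expectation, over the Gaussian perturbation, of the number of steps returned by `two_opt`, in the worst case over the adversarial placement.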
## 5 Improving the Analysis for $d\geq 3$
The bottleneck in Theorem 25 stems from Lemmas 23 and 24, which bound the
probability that any linked pair of type 1a or type 1b improves the tour by at
most $\epsilon$. The probability given by these lemmas is proportional to
$\epsilon^{3/2}$, which yields an extra factor of $n^{1/3}$ compared to type 0
linked pairs.
For $d\geq 3$, we can improve this to $\epsilon^{2}$, yielding improved
smoothed complexity bounds. The key to this improvement is to use the second
part of Corollary 6 to bound the density of $\eta_{i}$ as in Lemma 12. This
immediately yields the following result on $\eta_{i}=\|a-z_{i}\|-\|b-z_{i}\|$.
###### Lemma 26.
Let $i\in[2]$, and assume that $\mathcal{X}\subseteq[-D,D]^{d}$. The density
of $\eta_{i}$ in $d\geq 3$ dimensions, conditioned on $A_{i}=a_{i}$ and $R=r$,
is bounded from above by
$O\left(\frac{a_{i}+r}{a_{i}r}\cdot\left(\sqrt{d}+\frac{D\min\\{r,a_{i}\\}}{\sigma^{2}}\right)\right).$
###### Proof.
We call the desired density $f_{\eta_{i}|A_{i}=a_{i},R=r}$. From Lemma 12, we know
that
$f_{\eta_{i}|A_{i}=a_{i},R=r}(\eta)\leq\frac{a_{i}+r}{a_{i}r}\cdot\frac{f_{\phi_{i}|A_{i}=a_{i},R=r}(\phi_{i}(\eta))}{|\sin\phi_{i}(\eta)|}.$
Since $d\geq 3$, we can use the second part of Corollary 6 to obtain the
desired bound, making use of the assumption that all points fall within
$[-D,D]^{d}$. ∎
Lemma 26 enables us to find an improved version of Lemma 15.
###### Lemma 27.
Let $\Delta$ denote the improvement of a 2-change in $d\geq 3$ dimensions. Let
$i\in[2]$, and assume that $\mathcal{X}\subseteq[-D,D]^{d}$. Then
$\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{i}=a_{i},R=r)=O\left(\left(\frac{\sqrt{d}}{\min\\{a_{i},r\\}}+\frac{D}{\sigma^{2}}\right)\cdot\epsilon\right).$
###### Proof.
Let $j=3-i$. We assume that $\eta_{j}=t$ is fixed by the adversary. Then
$\Delta\in(0,\epsilon]$ iff $\eta_{i}\in(-t,-t+\epsilon]=:I$, an interval of
size $\epsilon$. By Lemma 26, we have a bound for the density of $\eta_{i}$.
Thus, we find
$\mathbb{P}(\Delta\in(0,\epsilon]\>\lvert\>A_{i}=a_{i},R=r)=O\left(\frac{a_{i}+r}{a_{i}r}\cdot\left(\sqrt{d}+D\min\\{r,a_{i}\\}/\sigma^{2}\right)\cdot\epsilon\right).$
Considering the cases $a_{i}\leq r$ and $a_{i}>r$ separately and using the
assumptions that all points lie within $[-D,D]^{d}$ and that $D\geq 1$ and
$\sigma\leq 1$ yields the stated result. ∎
The following lemma now yields the probability that any linked pair of
2-changes improves the tour by at most $\epsilon$. We omit the proof, since it
follows easily from Lemma 27 along the same lines as the lemmas in Section 4.
###### Lemma 28.
Let $\Delta^{\mathrm{link}}_{\mathrm{min}}$ denote the minimum improvement of
any linked pair of 2-changes of type 0 or type 1 for $d\geq 3$, and assume
that $\mathcal{X}\subseteq[-D,D]^{d}$. Then
$\mathbb{P}(\Delta^{\mathrm{link}}_{\mathrm{min}}\in(0,\epsilon])=O\left(\frac{D^{2}n^{6}\epsilon^{2}}{\sigma^{4}}\right).$
We then obtain our result for $d\geq 3$.
###### Theorem 29.
The expected number of iterations performed by 2-opt for smoothed Euclidean
instances of TSP in $d\geq 3$ dimensions is bounded from above by
$O\left(\sqrt{d}D^{2}n^{4}/\sigma^{2}\right)$.
###### Proof.
The theorem follows immediately from applying Lemmas 1 and 28, since by Lemma
2 any tour in our smoothed instance has length at most $2\sqrt{d}Dn$ with
probability at least $1-1/n!$. ∎
## 6 Discussion
| Englert, Röglin & Vöcking [8] | Manthey & Veenstra [13] | This paper
---|---|---|---
$d=2$ | $O\left(n^{4+\frac{1}{3}}/\sigma^{5+\frac{1}{3}}\cdot\log\frac{n}{\sigma}\right)$ | - | $O\left(n^{4+\frac{1}{3}}/\sigma^{2}\right)$
$d=3$ | $O\left(n^{4+\frac{1}{3}}/\sigma^{8}\cdot\log\frac{n}{\sigma}\right)$ | - | $O\left(n^{4}/\sigma^{2}\right)$
$d\geq 4$ | $O\left(c_{d}\cdot n^{4+\frac{1}{3}}/\sigma^{8d/3}\right)$ | $O\left(\sqrt{d}n^{4}/\sigma^{4}\right)$ | $O\left(\sqrt{d}n^{4}/\sigma^{2}\right)$
Table 1: Previous and current smoothed complexity bounds for Gaussian noise, for $\sigma=O(1/\sqrt{n\log n})$. Note that for $d\geq 4$, the bounds of Englert et al. include a factor $c_{d}$ which is super-exponential in $d$.

| Englert, Röglin & Vöcking [8] | Manthey & Veenstra [13] | This paper
---|---|---|---
$d=2$ | $O\left(n^{7}\log^{3+\frac{2}{3}}n\right)$ | - | $O\left(n^{5+\frac{1}{3}}\log n\right)$
$d=3$ | $O\left(n^{8+\frac{1}{3}}\log^{5}n\right)$ | - | $O\left(n^{5}\log n\right)$
$d\geq 4$ | $O\left(c_{d}\cdot n^{4+\frac{1+4d}{3}}\log^{1+\frac{4d}{3}}n\right)$ | $O\left(\sqrt{d}n^{6}\log^{2}n\right)$ | $O\left(\sqrt{d}n^{5}\log n\right)$
Table 2: Previous and current smoothed complexity bounds for Gaussian noise, for $\sigma=\Omega(1/\sqrt{n\log n})$. Note that for $d\geq 4$, the bounds of Englert et al. include a factor $c_{d}$ which is super-exponential in $d$.
For convenience, we provide comparisons of the previous smoothed complexity
bounds with our bounds from Theorems 25 and 29 in Tables 1 and 2. These bounds
are provided both for small values of $\sigma$, meaning
$\sigma=O(1/\sqrt{n\log n})$, and for large values, meaning
$\sigma=\Omega(1/\sqrt{n\log n})$.
Observe from Tables 2 and 1 that the bound for $d=2$ has a worse dependence on
$n$ compared to the bound for $d\geq 3$. The technical reasons for this
difference can be understood from Section 5. A more intuitive explanation for
the difference is that our analysis benefits from large angles between edges
in the smoothed TSP instance. In $d=2$, the density of these angles is maximal
when they are small, while for $d\geq 3$ it is maximal when the angles are
large. In effect, this means that the adversary has less power to specify
these angles to our detriment when $d\geq 3$.
From these tables, the greatest improvement is made for $d=3$, where we
improve by $n^{3+\frac{1}{3}}\log^{4}n$ in the large $\sigma$ case, and by
$\sqrt[3]{n}\log(n/\sigma)/\sigma^{6}$ for small $\sigma$. For $d=2$, the
improvement is more modest at $n^{1+\frac{2}{3}}\log^{2+\frac{2}{3}}n$ for
large $\sigma$ and $\log(n/\sigma)/\sigma^{3+\frac{1}{3}}$ for small $\sigma$.
For $d\geq 4$, we improve by $n\log n$ for large $\sigma$, and by
$\sigma^{-2}$ for small $\sigma$.
Note that we improve upon previous bounds mainly in the dependence on the
perturbation strength. In an intuitive sense, this is most substantial for
instances that are weakly perturbed from the adversarial instance, or in other
words, that are close to worst case. In addition, the small-$\sigma$ case is
considered more interesting for a smoothed analysis, since a small $\sigma$
matches the intuition of smoothed analysis as a small perturbation, while a
large $\sigma$ essentially reduces the analysis to an average-case analysis.
In order to improve the explicit dependence on $n$, which is the same as for
Manthey & Veenstra [13], we believe new techniques are necessary.
As a final comment, we note that the techniques we employed in Sections 3 and
5 can also be used to improve and significantly simplify the analysis of the
one-step model used by Englert et al. [8]. For $d\geq 3$, the improvement
amounts to a factor of $n^{1/3}\phi^{1/6}\log(n\phi)$, while for $d=2$, the
improvement is just $\log(n\phi)$, where $\phi$ denotes the upper bound of the
density functions used in the one-step model.
## References
* [1] “Local Search in Combinatorial Optimization” Princeton University Press, 2003 DOI: 10.2307/j.ctv346t9c
* [2] Milton Abramowitz and Irene A. Stegun “Handbook of Mathematical Functions, With Formulas, Graphs, and Mathematical Tables” USA: Dover Publications, Inc., 1974
* [3] D. E. Amos “Computation of Modified Bessel Functions and Their Ratios” In _Mathematics of Computation_ 28.125, 1974, pp. 239–251 DOI: 10.1090/S0025-5718-1974-0333287-7
* [4] Tom M. Apostol “An Elementary View of Euler’s Summation Formula” In _The American Mathematical Monthly_ 106.5 Mathematical Association of America, 1999, pp. 409–418 DOI: 10.2307/2589145
* [5] Barun Chandra, Howard Karloff and Craig Tovey “New Results on the Old K-Opt Algorithm for the Traveling Salesman Problem” In _SIAM Journal on Computing_ 28.6 Society for Industrial and Applied Mathematics, 1999, pp. 1998–2029 DOI: 10.1137/S0097539793251244
* [6] Christian Engels and Bodo Manthey “Average-Case Approximation Ratio of the 2-Opt Algorithm for the TSP” In _Operations Research Letters_ 37.2, 2009, pp. 83–84 DOI: 10.1016/j.orl.2008.12.002
* [7] Matthias Englert, Heiko Röglin and Berthold Vöcking “Smoothed Analysis of the 2-Opt Algorithm for the General TSP” In _ACM Transactions on Algorithms_ 13.1, 2016, pp. 10:1–10:15 DOI: 10.1145/2972953
* [8] Matthias Englert, Heiko Röglin and Berthold Vöcking “Worst Case and Probabilistic Analysis of the 2-Opt Algorithm for the TSP” Corrected version: https://arxiv.org/abs/2302.06889 In _Algorithmica_ 68.1, 2014, pp. 190–264 DOI: 10.1007/s00453-013-9801-4
* [9] Norman L. Johnson, Samuel Kotz and Narayanaswamy Balakrishnan “Continuous Univariate Distributions, Volume 2” John Wiley & Sons, 1995
* [10] Bernhard Korte and Jens Vygen “Combinatorial Optimization: Theory and Algorithms”, Algorithms and Combinatorics Berlin Heidelberg: Springer-Verlag, 2000 DOI: 10.1007/978-3-662-21708-5
* [11] Bodo Manthey “Smoothed Analysis of Local Search” In _Beyond the Worst-Case Analysis of Algorithms_ Cambridge: Cambridge University Press, 2021, pp. 285–308 DOI: 10.1017/9781108637435.018
* [12] Bodo Manthey and Heiko Röglin “Smoothed Analysis: Analysis of Algorithms Beyond Worst Case” In _it - Information Technology_ 53.6 De Gruyter Oldenbourg, 2011, pp. 280–286 DOI: 10.1524/itit.2011.0654
* [13] Bodo Manthey and Rianne Veenstra “Smoothed Analysis of the 2-Opt Heuristic for the TSP: Polynomial Bounds for Gaussian Noise” Full, improved version: https://arxiv.org/abs/2308.00306 In _Algorithms and Computation_ , Lecture Notes in Computer Science Berlin, Heidelberg: Springer, 2013, pp. 579–589 DOI: 10.1007/978-3-642-45030-3_54
* [14] Christos H. Papadimitriou “The Euclidean Travelling Salesman Problem Is NP-complete” In _Theoretical Computer Science_ 4.3, 1977, pp. 237–244 DOI: 10.1016/0304-3975(77)90012-3
* [15] Daniel A. Spielman and Shang-Hua Teng “Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time” In _Journal of the ACM_ 51.3, 2004, pp. 385–463 DOI: 10.1145/990308.990310
* [16] Daniel A. Spielman and Shang-Hua Teng “Smoothed Analysis: An Attempt to Explain the Behavior of Algorithms in Practice” In _Communications of the ACM_ 52.10, 2009, pp. 76–84 DOI: 10.1145/1562764.1562785
# Quantitative Besicovitch projection theorem for irregular sets of directions
Damian Dąbrowski Department of Mathematics and Statistics
University of Jyväskylä, P.O. Box 35 (MaD)
FI-40014 University of Jyväskylä
Finland
###### Abstract.
The classical Besicovitch projection theorem states that if a planar set $E$
with finite length is purely unrectifiable, then almost all orthogonal
projections of $E$ have zero length. We prove a quantitative version of this
result: if $E\subset\mathbb{R}^{2}$ is AD-regular and there exists a set of
directions $G\subset\mathbb{S}^{1}$ with $\mathcal{H}^{1}(G)\gtrsim 1$ such
that for every $\theta\in G$ we have
$\|\pi_{\theta}\mathcal{H}^{1}|_{E}\|_{L^{\infty}}\lesssim 1$, then a big
piece of $E$ can be covered by a Lipschitz graph $\Gamma$ with
$\operatorname{Lip}(\Gamma)\lesssim 1$. The main novelty of our result is that
the set of good directions $G$ is assumed to be merely measurable and large in
measure, while previous results of this kind required $G$ to be an arc.
As a corollary, we obtain a result on AD-regular sets which avoid a large set
of directions, in the sense that the set of directions they span has a large
complement. It generalizes the following easy observation: a set $E$ is
contained in some Lipschitz graph if and only if the complement of the set of
directions spanned by $E$ contains an arc.
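This observation can be checked numerically on finite samples: for points on the graph of an $L$-Lipschitz function, every spanned direction makes an angle of at most $\arctan L$ with the horizontal, so the spanned directions avoid an arc around the vertical direction. The small Python sketch below is our own illustration, not from the paper.

```python
import math

def spanned_directions(points):
    """Angles in [0, pi) of the directions spanned by pairs of distinct
    points, measured from the positive x-axis."""
    angles = []
    for i, p in enumerate(points):
        for q in points[i + 1:]:
            dx, dy = q[0] - p[0], q[1] - p[1]
            if dx or dy:
                angles.append(math.atan2(dy, dx) % math.pi)
    return angles

# Sample points on the graph of f(x) = sin(x)/2, a Lipschitz graph with
# constant 1/2: by the mean value theorem every chord has slope at most 1/2
# in absolute value, so all spanned angles lie within atan(1/2) of the
# horizontal, leaving an arc around the vertical direction uncovered.
sample = [(x / 10, 0.5 * math.sin(x / 10)) for x in range(30)]
```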
###### Key words and phrases:
Favard length, Besicovitch projection theorem, quantitative rectifiability,
Lipschitz graph
###### 2010 Mathematics Subject Classification:
28A75 (primary) 28A78 (secondary)
###### Contents
1. Introduction
2. Sketch of the proof
3. Preliminaries
4. Main proposition and proof of Theorem 1.7
5. Rectangles and generalized cubes
6. Conical energies
7. Estimating interior energy and obtaining good cones
8. Estimating exterior energy
9. Proof of the key geometric lemma
A. Proof of Corollary 3.2
## 1. Introduction
### 1.1. Besicovitch projection theorem
A Borel set $E\subset\mathbb{R}^{2}$ is said to be _purely unrectifiable_ if
for any (1-dimensional) Lipschitz graph $\Gamma\subset\mathbb{R}^{2}$ we have
$\mathcal{H}^{1}(E\cap\Gamma)=0.$
One of the fundamental results of geometric measure theory is the Besicovitch
projection theorem, which states that if $E\subset\mathbb{R}^{2}$ is purely
unrectifiable and $\mathcal{H}^{1}(E)<\infty$, then almost all orthogonal
projections of $E$ have zero length. We reformulate this result below in a way
that is more suitable for the purpose of this article.
Let $\mathbb{T}\coloneqq\mathbb{R}/\mathbb{Z}$, and for $\theta\in\mathbb{T}$
we set $e_{\theta}\coloneqq(\cos(2\pi\theta),\,\sin(2\pi\theta)),$ and
$\pi_{\theta}(x)\coloneqq e_{\theta}\cdot x$, so that
$\pi_{\theta}\mathrel{\mathop{\ordinarycolon}}\mathbb{R}^{2}\to\mathbb{R}$ is
the orthogonal projection map to the line
$\ell_{\theta}\coloneqq\operatorname{span}(e_{\theta})$.
###### Definition 1.1.
Given a Borel set $E\subset\mathbb{R}^{2}$, we define its _Favard length_
(also known as its _Buffon’s needle probability_) as
$\operatorname{Fav}(E)=\int_{0}^{1}\mathcal{H}^{1}(\pi_{\theta}(E))\,d\theta.$
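Definition 1.1 can be sanity-checked numerically for simple sets. For the unit segment $E$ from $(0,0)$ to $(1,0)$ we have $\mathcal{H}^{1}(\pi_{\theta}(E))=|\cos(2\pi\theta)|$, so $\operatorname{Fav}(E)=\int_{0}^{1}|\cos(2\pi\theta)|\,d\theta=2/\pi$. The Python sketch below is our own illustration; the function names are not from the paper.

```python
import math

def projection_length_of_segment(p, q, theta):
    """H^1 of the orthogonal projection of the segment [p, q] onto
    span(e_theta), where e_theta = (cos(2*pi*theta), sin(2*pi*theta))."""
    e = (math.cos(2 * math.pi * theta), math.sin(2 * math.pi * theta))
    return abs((q[0] - p[0]) * e[0] + (q[1] - p[1]) * e[1])

def favard_length_segment(p, q, samples=10_000):
    """Midpoint-rule approximation of Fav([p, q]) over theta in [0, 1]."""
    return sum(projection_length_of_segment(p, q, (k + 0.5) / samples)
               for k in range(samples)) / samples
```

For general fractal sets such as the four-corner Cantor set, projections must be computed from a fine discretization, and estimating the decay of the Favard length is substantially harder.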
###### Theorem A ([Bes39]).
Let $E\subset\mathbb{R}^{2}$ be an $\mathcal{H}^{1}$-measurable set with
$0<\mathcal{H}^{1}(E)<\infty$. Suppose that $\operatorname{Fav}(E)>0$. Then,
there exists a Lipschitz graph $\Gamma$ such that
$\mathcal{H}^{1}(\Gamma\cap E)>0.$
The planar result stated above is due to Besicovitch [Bes39], see [Mat95,
Theorem 18.1] for a modern reference. A higher dimensional counterpart of
Theorem A, dealing with $n$-dimensional subsets of $\mathbb{R}^{d}$, was shown
by Federer [Fed47], see also an alternative proof due to White [Whi98]. In
this paper we will only be concerned with 1-dimensional subsets of
$\mathbb{R}^{2}$.
Note that Theorem A is a purely qualitative result: it gives no estimate on
the size of $\mathcal{H}^{1}(\Gamma\cap E)$, nor on the Lipschitz constant of
$\Gamma$. In the last thirty years many classical definitions and results of
geometric measure theory have been quantified (see e.g. [Jon90, DS91, DS93a,
AT15, TT15, Tol17]), finding applications in PDEs and harmonic analysis (see
e.g. [Dav98, Tol03, Tol05, NTV14, AHM+16, AHM+20]). However, obtaining a
quantitative counterpart to Theorem A proved to be a notoriously difficult
problem. Beyond its intrinsic appeal, this question is closely related to
_Vitushkin’s conjecture_ , which we briefly discuss in Subsection 1.5.
The problem of quantifying Theorem A has seen a number of breakthroughs in the
last few years [MO18, CT20, Orp21], which we will discuss shortly. In this
article we make further progress on this question.
### 1.2. Quantifying Besicovitch projection theorem
In order to state our result, we need to quantify the finite length assumption
of Theorem A.
###### Definition 1.2.
We say that a set $E\subset\mathbb{R}^{2}$ is Ahlfors-David-regular, or AD-
regular, if $E$ is closed and there exists a constant $C\geq 1$ such that for
all $x\in E$ and $0<r<\operatorname{diam}(E)$
$C^{-1}r\leq\mathcal{H}^{1}(E\cap B(x,r))\leq Cr.$
We will say that $E$ is AD-regular with constant $C_{0}$ if the inequality
above holds with $C=C_{0}$.
The following conjecture, if true, would be a very satisfactory quantitative
version of the Besicovitch projection theorem.
###### Conjecture 1.3.
Let $s\in(0,1),\ C_{0}\in(1,\infty)$, and let $E\subset\mathbb{R}^{2}$ be a
bounded AD-regular set with constant $C_{0}$. Suppose that
$\operatorname{Fav}(E)\geq s\operatorname{diam}(E).$ (1.1)
Then, there exists a Lipschitz graph $\Gamma\subset\mathbb{R}^{2}$ with
$\operatorname{Lip}(\Gamma)\lesssim_{s,C_{0}}1$ and
$\mathcal{H}^{1}(\Gamma\cap E)\gtrsim_{s,C_{0}}\mathcal{H}^{1}(E).$
###### Remark 1.4.
A weaker version of Conjecture 1.3 was stated by David and Semmes in 1993
[DS93b], and very recently proved by Orponen [Orp21]. This is Theorem C
discussed below.
###### Remark 1.5.
The AD-regularity assumption in Conjecture 1.3 cannot be dropped nor replaced
by the weaker assumption $\mathcal{H}^{1}(E)\sim\operatorname{diam}(E)$, see
[CDOV22, Proposition 6.1].
###### Remark 1.6.
Observe that the assumption (1.1) implies that there exists an
$\mathcal{H}^{1}$-measurable set $G\subset\mathbb{T}$ with
$\mathcal{H}^{1}(G)\gtrsim s$ such that
$\mathcal{H}^{1}(\pi_{\theta}(E))\gtrsim s\operatorname{diam}(E)\quad\text{for
all $\theta\in G$.}$ (1.2)
That is, $\operatorname{Fav}(E)\geq s\operatorname{diam}(E)$ implies that
there exists a big set $G$ of “good directions” where $E$ has big projections.
On the other hand, the existence of a set $G$ as above implies that
$\operatorname{Fav}(E)\gtrsim s^{2}\operatorname{diam}(E)$. Hence, the two
conditions are equivalent, up to a constant. We stress that, a priori, the set
of good directions $G$ arising from (1.1) is only measurable and large in
measure, possibly very scattered and irregular.
Significant progress towards proving Conjecture 1.3 has been recently achieved
by Martikainen and Orponen [MO18] and in the aforementioned work of Orponen
[Orp21]. We make further progress by proving the following result.
###### Theorem 1.7.
Let $s\in(0,1),C_{0},M\in(1,\infty)$, and let $E\subset\mathbb{R}^{2}$ be a
bounded AD-regular set with constant $C_{0}$. Set $\mu=\mathcal{H}^{1}|_{E}$.
Assume that there exists an $\mathcal{H}^{1}$-measurable set
$G\subset\mathbb{T}$ with $\mathcal{H}^{1}(G)\geq s$ and such that
$\|\pi_{\theta}\mu\|_{L^{\infty}(\mathbb{R})}\leq M\quad\text{for all
$\theta\in G$,}$ (1.3)
where $\pi_{\theta}\mu$ is the push-forward of $\mu$ by $\pi_{\theta}$.
Then, there exists a Lipschitz graph $\Gamma\subset\mathbb{R}^{2}$ with
$\operatorname{Lip}(\Gamma)\lesssim_{C_{0},M}1$ and
$\mathcal{H}^{1}(\Gamma\cap E)\gtrsim_{s,C_{0},M}\mathcal{H}^{1}(E).$
Note that the $L^{\infty}$-condition (1.3) implies the big projections
condition (1.2):
$\mathcal{H}^{1}(\pi_{\theta}(E))\geq M^{-1}\mu(E)\gtrsim
M^{-1}C_{0}^{-1}\operatorname{diam}(E),$
but in general (1.3) is much stronger than (1.2).
###### Remark 1.8.
The main novelty of Theorem 1.7 is that it allows us to work with a set of
directions $G\subset\mathbb{T}$ which is merely $\mathcal{H}^{1}$-measurable
and large in measure, just like the set of good directions arising from
Conjecture 1.3 (see Remark 1.6). Previous results of this type, which we
discuss below, needed to assume something about projections in a large
_interval_ of directions. Just how big of a difference this makes is discussed
further in Remark 1.13.
### 1.3. Comparison with results of Martikainen and Orponen
Let us compare Theorem 1.7 with the results from [MO18] and [Orp21]. We only
state their planar versions for simplicity, but both have higher-dimensional
counterparts.
###### Theorem B ([MO18]).
Let $s\in(0,1),C_{0},M\in(1,\infty)$, and let $E\subset\mathbb{R}^{2}$ be an
AD-regular set with constant $C_{0}$. Let $E_{1}\subset E\cap B(0,1)$ be an
$\mathcal{H}^{1}$-measurable subset with $\mathcal{H}^{1}(E_{1})\geq s$. Set
$\mu=\mathcal{H}^{1}|_{E_{1}}$.
Assume there exists $\theta_{0}\in\mathbb{T}$ such that for
$G=(\theta_{0},\,\theta_{0}+s)$ we have
$\int_{G}\|\pi_{\theta}\mu\|_{L^{2}(\mathbb{R})}^{2}\,d\theta\leq M.$ (1.4)
Then, there exists a Lipschitz graph $\Gamma\subset\mathbb{R}^{2}$ with
$\operatorname{Lip}(\Gamma)\lesssim_{s,C_{0},M}1$ and
$\mathcal{H}^{1}(\Gamma\cap E_{1})\gtrsim_{s,C_{0},M}\mathcal{H}^{1}(E_{1}).$
The result below was conjectured in [DS93b], and it was proved very recently
by Orponen.
###### Theorem C ([Orp21]).
Let $s\in(0,1),C_{0}\in(1,\infty)$, and let $E\subset\mathbb{R}^{2}$ be an AD-
regular set with constant $C_{0}$. Suppose that for every $x\in E$ and
$0<r<\operatorname{diam}(E)$ there exists $\theta_{x,r}\in\mathbb{T}$ such
that for all $\theta\in G_{x,r}=(\theta_{x,r},\,\theta_{x,r}+s)$ we have
$\mathcal{H}^{1}(\pi_{\theta}(E\cap B(x,r)))\geq sr.$ (1.5)
Then, for every $x\in E$ and $0<r<\operatorname{diam}(E)$ there exists a
Lipschitz graph $\Gamma_{x,r}\subset\mathbb{R}^{2}$ with
$\operatorname{Lip}(\Gamma_{x,r})\lesssim_{s,C_{0}}1$ and
$\mathcal{H}^{1}(\Gamma_{x,r}\cap E\cap
B(x,r))\gtrsim_{s,C_{0}}\mathcal{H}^{1}(E\cap B(x,r)).$
Observe that none of the three results above (Theorem 1.7, Theorem B, Theorem
C) implies any other, at least not in an obvious way. We summarize the main
differences between them below.
Firstly, as already mentioned in Remark 1.8, in all three results we assume
that $\mathcal{H}^{1}(G)\geq s$, but in Theorem 1.7 we only assume that $G$ is
$\mathcal{H}^{1}$-measurable, whereas in the other two results we assume that
$G$ is an interval. We achieved this improvement at the cost of assuming
better regularity of $\pi_{\theta}\mu$ for each $\theta\in G$ than in either
Theorem B or Theorem C, compare (1.3) with (1.4) and (1.5).
Secondly, observe that Theorem 1.7 and Theorem B are “single-scale results”,
whereas Theorem C is a “multi-scale result”, in the sense that in Theorem C
one needs to assume that $E$ has big projections at _all scales and locations_
in order to get Lipschitz graphs covering $E$. Obtaining a single-scale
version of Theorem C is an open problem stated in [Orp21, Question 1].
Finally, Theorem B holds for large subsets of AD-regular sets, whereas Theorem
1.7 and Theorem C have only been proven for AD-regular sets.
### 1.4. Related results
In [DS93b] David and Semmes proved that if $E\subset\mathbb{R}^{2}$ is AD-
regular, it satisfies _the weak geometric lemma_ (a multi-scale flatness
property), and $\mathcal{H}^{1}(\pi_{\theta}(E))\gtrsim 1$ for some
$\theta\in\mathbb{T}$ (a single direction is enough!), then $E$ contains a big
piece of a Lipschitz graph.
In [JKV97] the authors proved a quantitative Besicovitch projection theorem
for sets $E$ which are boundaries of open sets. The structure of sets with
nearly maximal Favard length was studied in [CDOV22]. A version of Besicovitch
projection theorem for Radon measures was recently shown in [Tas22]. A version
of the Besicovitch projection theorem for metric spaces was proved in [Bat20].
See [CT20, Dąb22] for the study of _conical energies_ , which we also use in
the proof of Theorem 1.7. Closely related concepts of _conical defect_ and
_measures carried by Lipschitz graphs_ were studied in [BN21].
An alternative approach to quantifying the Besicovitch projection theorem is to
estimate the rate of decay of Favard length of $\delta$-neighbourhoods of
certain purely unrectifiable sets. See [Mat90, PS02, Tao09, ŁZ10, BV10a,
BV10b, NPV11, BŁV14, Łab14, Wil17, Bon19, ŁM22].
The Besicovitch projection theorem, and some of the results mentioned above,
have also been proven for _generalized projections_ in place of orthogonal
projections. See [HJJL12, BV11, CDT20, BT21, DT22].
### 1.5. Vitushkin’s conjecture
One of the main motivations for the study of Conjecture 1.3 is to complete the
solution to Vitushkin’s conjecture, which asks for the relation between Favard
length and analytic capacity. Different parts of the conjecture have been
verified or disproved in [Cal77, Dav98, Mat86, JM88], but one question
remains: given a $1$-dimensional compact set $E\subset\mathbb{R}^{2}$ with
non-$\sigma$-finite length and $\operatorname{Fav}(E)>0$, is the analytic
capacity of $E$ positive? It is beyond the scope of this introduction to
discuss this in detail, but let us mention that recent progress on this
problem made in [CT20] and [DV22] used the ideas and results obtained in
[MO18] and [Orp21], respectively. Solving Conjecture 1.3 (or even its weaker,
multi-scale version) would immediately mark substantial progress on this
question, see [DV22, Remark 1.9]. We refer the interested reader to [DV22] for
details.
### 1.6. Directions spanned by sets
We give an application of Theorem 1.7 to directions spanned by sets.
###### Definition 1.9.
Given a Borel set $E\subset\mathbb{R}^{2}$ we define _the set of directions
spanned by $E$_ as
$D(E)\coloneqq\bigg\\{\frac{x-y}{|x-y|}\ \mathrel{\mathop{:}}\
x,y\in E,\ x\neq y\bigg\\}\subset\mathbb{S}^{1},$
or, using our preferred parametrization of the circle,
$D_{\mathbb{T}}(E)\coloneqq\frac{1}{2\pi}\arg(D(E))\subset\mathbb{T}.$
We will denote the complement of $D_{\mathbb{T}}(E)$ by $G_{\mathbb{T}}(E)$,
and we will say that the directions in $G_{\mathbb{T}}(E)$ are _avoided_ by
$E$.
Sets of directions spanned by subsets of $\mathbb{R}^{d}$ have been studied in
[OS11, IMS12]. They are closely related to _radial projections_ due to the
fact that
$D(E)=\bigcup_{x\in E}\pi_{x}(E\setminus\\{x\\}),$
where $\pi_{x}(y)=\frac{x-y}{|x-y|}$ is the radial projection map from $x$.
The behaviour of purely unrectifiable sets under radial projections was
studied in [Mar54, SS06, BŁZ16]. See also [Mat81, Csö00, Csö01, VV22, DG22,
OSW22].
###### Remark 1.10.
Given $G\subset\mathbb{T}$ and $x\in\mathbb{R}^{2}$, consider the cone
$X(x,G)\coloneqq\bigcup_{\theta\in G}\ell_{x,\theta},$ where
$\ell_{x,\theta}=x+\operatorname{span}(e_{\theta})$. Note that if
$E\subset\mathbb{R}^{2}$ satisfies $G_{\mathbb{T}}(E)\neq\varnothing$, then
$E\cap X(x,G_{\mathbb{T}}(E))=\\{x\\}\quad\text{for all $x\in E$},$
and $G_{\mathbb{T}}(E)$ is the largest subset of $\mathbb{T}$ with this
property.
The following is an easy observation used in many geometric measure theory
proofs (for example, in the proof of Theorem A).
###### Observation 1.11.
A set $E\subset\mathbb{R}^{2}$ is contained in some Lipschitz graph
$\Gamma\subset\mathbb{R}^{2}$ if and only if there exists a (non-degenerate)
interval $I\subset\mathbb{T}$ such that
$I\subset G_{\mathbb{T}}(E).$
Furthermore, we have
$\operatorname{Lip}(\Gamma)\lesssim\mathcal{H}^{1}(I)^{-1}$. Usually this
result is stated in terms of the “empty cone condition”
$E\cap X(x,I)=\\{x\\}\quad\text{for all $x\in E$},$
but this is equivalent by Remark 1.10. See [Mat95, Lemma 15.13] or [MO18,
Remark 1.11] for an easy proof.
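For orientation, here is an outline of the standard computation behind Observation 1.11 and the bound $\operatorname{Lip}(\Gamma)\lesssim\mathcal{H}^{1}(I)^{-1}$, using the algebraic characterization of cones recalled in (3.1) below; for concreteness assume $I=[\theta_{0}-a,\theta_{0}+a]$ with $a\leq 1/4$.

```latex
% Since I \subset G_{\mathbb{T}}(E), we have E \cap X(x, I) = \{x\} for
% every x \in E.  For distinct x, y \in E we then have y \notin X(x, I),
% which by (3.1) means
|\pi_{\theta_0}^{\perp}(y) - \pi_{\theta_0}^{\perp}(x)| > \sin(2\pi a)\,|x - y|
  \geq \sin(2\pi a)\,|\pi_{\theta_0}(y) - \pi_{\theta_0}(x)|.
% In particular \pi_{\theta_0}^{\perp} is injective on E, so E is the graph
% of a function g over \ell_{\theta_0}^{\perp}, and the estimate above says
% precisely that g is Lipschitz with
\operatorname{Lip}(g) \leq \sin(2\pi a)^{-1}
  \lesssim a^{-1} \sim \mathcal{H}^{1}(I)^{-1}.
```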
It is natural to ask if the following generalization of the observation above
is true:
###### Question 1.12.
Let $s\in(0,1),\ C_{0}\geq 1$. Suppose that $E\subset\mathbb{R}^{2}$ is a
bounded AD-regular set with constant $C_{0}$, and that
$\mathcal{H}^{1}(G_{\mathbb{T}}(E))\geq s.$
Is it possible to find a Lipschitz graph $\Gamma\subset\mathbb{R}^{2}$ with
$\operatorname{Lip}(\Gamma)\lesssim_{s,C_{0}}1$ and
$\mathcal{H}^{1}(\Gamma\cap E)\gtrsim_{s,C_{0}}\mathcal{H}^{1}(E)?$
###### Remark 1.13.
Note that, compared to Observation 1.11, Question 1.12 adds many assumptions
and weakens the conclusion; the only respect in which Question 1.12 is weaker
is that it imposes no additional structure on $G_{\mathbb{T}}(E)$
beyond large $\mathcal{H}^{1}$-measure. This makes all the difference: the
case of a big interval, as in Observation 1.11, is very easy, whereas Question
1.12 appears to be non-trivial. Similarly, the fact that Theorem 1.7 does not
assume much regularity about the set of good directions $G$ leads to genuinely
new difficulties compared to Theorem B and Theorem C, and it is not merely a
cosmetic difference.
Using Theorem 1.7 we are able to answer affirmatively the following special
case of Question 1.12.
###### Corollary 1.14.
Let $s\in(0,1),\ C_{0}\geq 1$. Suppose that $E\subset\mathbb{R}^{2}$ is a
bounded AD-regular set with constant $C_{0}$, and that
$\mathcal{H}^{1}(G_{\mathbb{T}}(E))\geq s.$
Suppose further that $E$ is a union of parallel line segments. Then, there
exists a Lipschitz graph $\Gamma\subset\mathbb{R}^{2}$ with
$\operatorname{Lip}(\Gamma)\lesssim_{s,C_{0}}1$ and
$\mathcal{H}^{1}(\Gamma\cap E)\gtrsim_{s,C_{0}}\mathcal{H}^{1}(E).$
###### Proof.
Let $\theta_{0}\in\mathbb{T}$ be such that the line segments comprising $E$
are parallel to $\ell_{\theta_{0}}$. Set
$G\coloneqq G_{\mathbb{T}}(E)\setminus(\theta_{0}-0.1s,\theta_{0}+0.1s).$
Let $\theta\in G$ and $y\in\mathbb{R}$. Since $E$ avoids the direction
$\theta$, we get that $E$ is a graph over $\ell_{\theta}^{\perp}$, and it
consists of segments forming angle
$\measuredangle(\ell_{\theta_{0}},\ell_{\theta})\sim|\theta-\theta_{0}|$ with
$\ell_{\theta}=(\ell_{\theta}^{\perp})^{\perp}$. It follows that
$\pi_{\theta}^{\perp}\mathcal{H}^{1}|_{E}(y)=\lim_{h\to
0}\frac{\mathcal{H}^{1}\big(E\cap(\pi_{\theta}^{\perp})^{-1}((y-h,y+h))\big)}{h}\lesssim\lim_{h\to
0}\frac{|\theta-\theta_{0}|^{-1}h}{h}\lesssim s^{-1}.$
Hence, $\|\pi_{\theta}^{\perp}\mathcal{H}^{1}|_{E}\|_{\infty}\lesssim s^{-1}$.
Since
$\mathcal{H}^{1}(G)\geq\mathcal{H}^{1}(G_{\mathbb{T}}(E))-0.2s\geq\frac{s}{2},$
we may apply Theorem 1.7 (with $G^{\perp}$ instead of $G$) to find the desired
Lipschitz graph $\Gamma$ with $\operatorname{Lip}(\Gamma)\lesssim_{s,C_{0}}1$
and $\mathcal{H}^{1}(\Gamma\cap E)\gtrsim_{s,C_{0}}\mathcal{H}^{1}(E)$. ∎
We mention another interesting question in the same vein, which is essentially
a qualitative version of Question 1.12.
It follows from the definition of purely unrectifiable sets and Observation
1.11 that if $E$ is purely unrectifiable and $\mathcal{H}^{1}(E)>0$, then
$D_{\mathbb{T}}(E)$ is dense in $\mathbb{T}$. What can be said about
$\mathcal{H}^{1}(D_{\mathbb{T}}(E))$?
###### Question 1.15.
Suppose that $E\subset\mathbb{R}^{2}$ is purely unrectifiable, and
$0<\mathcal{H}^{1}(E)<\infty$. Do we have
$\mathcal{H}^{1}(D_{\mathbb{T}}(E))=\mathcal{H}^{1}(\mathbb{T})?$
The answer is yes for _homogeneous sets_ (examples of which include self-
similar sets satisfying the strong separation condition for which the linear
parts of the similarities contain no rotations) by [RS19, Proposition 3.1]; in
fact, for such sets Rossi and Shmerkin proved that
$D_{\mathbb{T}}(E)=\mathbb{T}$. To the best of our knowledge, the question is
open for general purely unrectifiable sets. Up until recently it wasn’t even
clear if $\dim_{H}(D_{\mathbb{T}}(E))=1$, but this follows from a recent paper
of Orponen, Shmerkin, and Wang [OSW22].
### 1.7. Plan of the article
In Section 2 we sketch the proof of Theorem 1.7. In Section 3 we introduce
some notation, list all the parameters appearing in the proof, and recall some
useful results from [CT20] and [Dąb22]. In Section 4 we state our main
proposition, Proposition 4.1, and we show how it can be used to prove Theorem
1.7. We prove the main proposition in Sections 5–9. In Section 5 we introduce
a “dyadic grid of rectangles” adapted to Proposition 4.1, and we prove some
basic measure estimates on these rectangles. Section 6 contains a stopping
time argument and a corona decomposition involving conical energies. In
Sections 7–9 we estimate these energies. Finally, in Appendix A we prove one
of the results from Section 3.
### Acknowledgments
I am grateful to Alan Chang, Tuomas Orponen, Xavier Tolsa, and Michele Villa
for inspiring discussions.
I was supported by the Academy of Finland via the projects _Incidences on
Fractals_ , grant No. 321896, and _Quantitative rectifiability and harmonic
measure beyond the Ahlfors-David-regular setting_ , grant No. 347123.
## 2\. Sketch of the proof
Suppose that $E\subset\mathbb{R}^{2}$ is bounded and AD-regular,
$\mu=\mathcal{H}^{1}|_{E}$, $G\subset\mathbb{T}$ satisfies
$\mathcal{H}^{1}(G)\gtrsim 1$, and for all $\theta\in G$ we have
$\|\pi_{\theta}\mu\|_{\infty}\lesssim 1$. Using Proposition 3.1, which is a
result from [CT20], it is easy to show that this implies
$\int_{\mathbb{R}^{2}}\int_{0}^{\operatorname{diam}(E)}\frac{\mu(X(x,G^{\perp},r))}{r}\,\frac{dr}{r}d\mu(x)\lesssim\mu(E),$
(2.1)
where $X(x,G^{\perp},r)=X(x,G^{\perp})\cap B(x,r)$, and $X(x,G^{\perp})$ is
the union of lines passing through $x$ with directions perpendicular to those
from $G$. See §3.1 for the precise definition.
Estimate (2.1) is reminiscent of Proposition 3.3, which was observed in
[Dąb22] but is essentially due to [MO18]. This result says that if the
estimate (2.1) holds with $G$ which is a large interval, then one can find a
big piece of a Lipschitz graph inside $E$. The problem is, the set $G$ given
by Theorem 1.7 may be a very complicated set, possibly consisting of many tiny
intervals, or not containing any intervals at all.
This issue is addressed by our main proposition, Proposition 4.1. Roughly
speaking, it says that if we start with a set of “good directions” $G_{J}$
which almost fills an interval $J$, then the goodness of $G_{J}$ propagates to
all of $J$, and even to the enlarged interval $3J$. More precisely, given an
interval $J\subset\mathbb{T}$, possibly very short, and a set $G_{J}\subset J$
with $\mathcal{H}^{1}(J\setminus G_{J})\leq\varepsilon\mathcal{H}^{1}(J)$,
where $\varepsilon>0$ is very small, and under some additional technical
assumptions involving $\|\pi_{\theta}\mu\|_{\infty}$, one has
$\int_{E}\int_{0}^{\operatorname{diam}(E)}\frac{\mu(X(x,3J,r))}{r}\,\frac{dr}{r}d\mu(x)\\\
\leq
C_{\mathsf{Prop}}\bigg{(}\int_{E}\int_{0}^{\operatorname{diam}(E)}\frac{\mu(X(x,G_{J},r))}{r}\,\frac{dr}{r}d\mu(x)+\mathcal{H}^{1}(J)\mu(E)\bigg{)}.$
(2.2)
Crucially, the constants $\varepsilon$ and $C_{\mathsf{Prop}}$ do not depend
on $\mathcal{H}^{1}(J)$.
Using the idea of the good set $G$ propagating and becoming larger, we are
able to apply Proposition 4.1 iteratively, so that after a bounded number of
iterations we end up with an estimate (2.1) with the set $G$ replaced by some
interval $J_{0}$ with $\mathcal{H}^{1}(J_{0})\sim 1$. This allows us to use
Proposition 3.3 to obtain a big piece of Lipschitz graph inside $E$. All of
this is done in Section 4, assuming that Proposition 4.1 is true. The
remainder of the paper is dedicated to the proof of Proposition 4.1.
In Section 5 we consider a “dyadic lattice of rectangles”
$\mathcal{D}=\bigcup_{k}\mathcal{D}_{k}$, where each $\mathcal{D}_{k}$ is a
partition of $E$. The rectangles we work with have a very large, but fixed,
aspect ratio equal to $\mathcal{H}^{1}(J)^{-1}$, and they all point in the
same direction, corresponding to the mid-point of $J$. A priori, the fact that
$\mu$ is AD-regular only tells us that a rectangle $Q\in\mathcal{D}$ satisfies
$\ell(Q)\lesssim\mu(Q)\lesssim\mathcal{H}^{1}(J)^{-1}\ell(Q),$
where $\ell(Q)$ denotes the length of the shorter side of $Q$. This is no
good: it is crucial that our estimates do not explode as
$\mathcal{H}^{1}(J)\to 0$. Luckily, due to one of the assumptions on
$\|\pi_{\theta}\mu\|_{\infty}$, we show in Lemma 5.1 that $\mu(Q)\sim\ell(Q)$.
So in a sense, we need the $L^{\infty}$-norm in (1.3), and not just the
$L^{2}$-norm as in Theorem B, to ensure that our rectangles are “AD-regular”.
In Section 6 we introduce conical energies $\mathcal{E}_{G}(Q)$ and
$\mathcal{E}_{J}(Q)$, associated to $G_{J}$ and $3J$, respectively. They are
essentially local versions of the double integrals from (2.2), so that
$\int_{\mathbb{R}^{2}}\int_{0}^{\operatorname{diam}(E)}\frac{\mu(X(x,G_{J},r))}{r}\,\frac{dr}{r}d\mu(x)\sim\sum_{Q\in\mathcal{D}}\mathcal{E}_{G}(Q)\mu(Q),$
and an analogous estimate holds for $3J$ and $\mathcal{E}_{J}(Q)$. Inspired by
[CT20], we conduct a stopping time argument and a corona decomposition of
$\mathcal{D}$ into a family of trees $\mathsf{Tree}(R),\ R\in\mathsf{Top}$.
What we gain is that for any $R\in\mathsf{Top}$ and most $x\in R$ the cone
$X(x,G_{J})$ does not intersect $E$ at the scales associated to
$\mathsf{Tree}(R)$.
In Sections 7 and 8 we prove that for any $R\in\mathsf{Top}$
$\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{J}(Q)\mu(Q)\lesssim\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{G}(Q)\mu(Q)+\mathcal{H}^{1}(J)\mu(R),$
which is enough to obtain (2.2). To prove the estimate above, we divide
$\mathcal{E}_{J}(Q)$ into an “interior” conical energy
$\mathcal{E}^{int}_{J}(Q)$ associated to $0.5J$, and an “exterior” conical
energy $\mathcal{E}^{ext}_{J}(Q)$ associated to $3J\setminus 0.5J$. In Section
7 we deal with the interior part. This is another important point where we use
the technical assumptions related to $\|\pi_{\theta}\mu\|_{\infty}$: together
with AD-regularity of $E$ they allow us to get a strong, pointwise estimate
$\mathcal{E}^{int}_{J}(Q)\lesssim\mathcal{E}_{G}(Q)$. As a corollary, we get
that for $R\in\mathsf{Top}$ and all $x\in R$ the cone $X(x,0.5J)$ does not
intersect $E$ at the scales associated to $\mathsf{Tree}(R)$.
Finally, in Section 8 we estimate the exterior energy
$\mathcal{E}^{ext}_{J}(Q)$. The argument uses the key geometric lemma of this
article, Lemma 8.4, which we prove in Section 9. The proof is purely
geometric, and we believe it is the true heart of this article.
A simplified version of Lemma 8.4 says the following:
###### Key Geometric Lemma (simplified).
Let $A\subset B(0,1)\subset\mathbb{R}^{2}$ be an AD-regular set consisting of
horizontal segments. Let $J\subset\mathbb{T}$ be an interval such that
$\mathcal{H}^{1}(J)\leq c$ for a small absolute constant $c>0$, and such that
$X(0,J)$ contains the vertical axis. Assume that
$A\cap X(x,J)=\\{x\\}\quad\text{for every $x\in A$.}$
Suppose that there is a point $y\in A$ and a scale $r\in(0,1)$ such that
$A\cap X(y,3J,2r)\setminus B(y,r)\neq\varnothing.$
Then, there exists an interval $K\subset\mathbb{R}$, which is a connected
component of $\mathbb{R}\setminus\pi_{0}(A)$ (where $\pi_{0}$ is the
projection to the horizontal axis), such that
$\mathcal{H}^{1}(K)\sim\mathcal{H}^{1}(J)r$ and $\pi_{0}(y)\in CK$ for some
absolute $C\geq 1$.
It is not too difficult to show using this lemma that a set $A$ as above
satisfies
$\int_{A}\int_{0}^{\operatorname{diam}(A)}\frac{\mathcal{H}^{1}(A\cap
X(x,3J,r))}{r}\,\frac{dr}{r}d\mathcal{H}^{1}(x)\lesssim\mathcal{H}^{1}(J)\mathcal{H}^{1}(A).$
This is essentially where the last term in (2.2) comes from.
## 3\. Preliminaries
### 3.1. Notation
Given $x\in\mathbb{R}^{2}$ and $\theta\in\mathbb{T}$ we set
$\displaystyle e_{\theta}$
$\displaystyle\coloneqq(\cos(2\pi\theta),\sin(2\pi\theta))\in\mathbb{S}^{1},$
$\displaystyle\pi_{\theta}(x)$ $\displaystyle\coloneqq e_{\theta}\cdot x,$
$\displaystyle\ell_{x,\theta}$ $\displaystyle\coloneqq
x+\operatorname{span}(e_{\theta}),$ $\displaystyle\ell_{\theta}$
$\displaystyle\coloneqq\ell_{0,\theta}.$
For $x\in\mathbb{R}^{2}$ and a measurable set $I\subset\mathbb{T}$ we define
the cone centered at $x$ with directions in $I$ as
$X(x,I)=\bigcup_{\theta\in I}\ell_{x,\theta}.$
Note that we do not require $I$ to be an interval. We also set
$I^{\perp}=I+1/4$.
For $0<r<R$ we define truncated cones as
$\displaystyle X(x,I,r)=X(x,I)\cap B(x,r),$ $\displaystyle
X(x,I,r,R)=X(x,I,R)\setminus B(x,r).$
In case $I=[\theta-a,\theta+a]$, we have an algebraic characterization of
$X(x,I)$: $y\in X(x,I)$ if and only if
$|\pi^{\perp}_{\theta}(y)-\pi^{\perp}_{\theta}(x)|\leq\sin(2\pi a)|x-y|.$
(3.1)
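For completeness, the characterization (3.1) follows from a short computation, with the normalization $e_{\theta}=(\cos(2\pi\theta),\sin(2\pi\theta))$ fixed above and assuming $a\leq 1/4$:

```latex
% Write y - x = |y - x|\, e_\phi for some \phi \in \mathbb{T} (take y \neq x;
% the case y = x is trivial). Since e_{\theta+1/4} \cdot e_\phi
% = \cos(2\pi(\phi - \theta) - \pi/2) = \sin(2\pi(\phi - \theta)),
\pi_\theta^{\perp}(y) - \pi_\theta^{\perp}(x)
  = e_{\theta + 1/4} \cdot (y - x)
  = |y - x|\, \sin\big(2\pi(\phi - \theta)\big).
% Now y \in X(x, I) iff \phi \equiv \theta + t \pmod{1/2} for some
% |t| \leq a, and for a \leq 1/4 this holds iff
% |\sin(2\pi(\phi - \theta))| \leq \sin(2\pi a), which is exactly (3.1).
```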
We will denote by $\Delta$ the usual family of half-open dyadic intervals on
$[0,1)\simeq\mathbb{T}$. If $J\in\Delta$, then $\Delta(J)$ denotes the
collection of dyadic intervals contained in $J$. For
$I\in\Delta\setminus\\{[0,1)\\}$, the notation $I^{1}$ will be used for the
dyadic parent of $I$.
Given an interval $I\subset\mathbb{T}$ and $C>0$, we will write $CI$ to denote
the interval with the same midpoint as $I$ and length $C\mathcal{H}^{1}(I)$.
The closure of a set $A$ will be denoted by $\overline{A}$, and its interior
by $\mathrm{int}(A)$.
### 3.2. Constants and parameters
Whenever we write $f\lesssim g$, this should be understood as “there exists an
absolute constant $C>0$ such that $f\leq Cg$.” We will write $f\lesssim_{A}g$
if we allow the constant $C$ to depend on some parameter $A$. We also write
$f\sim g$ to denote $g\lesssim f\lesssim g$, and similarly $f\sim_{A}g$ stands
for $g\lesssim_{A}f\lesssim_{A}g$.
Throughout the proof we use many constants and parameters. We list the most
important ones here for the reader's convenience. The notation
$C_{1}=C_{1}(C_{2})$ means “$C_{1}$ is a parameter whose value depends on the
value of parameter $C_{2}$”.
* •
$C_{0}\geq 1$ is the AD-regularity constant of the set $E$.
* •
$M\geq 1$ is the constant bounding the $L^{\infty}$-norm of projections in the
assumptions of Theorem 1.7 and Proposition 4.1.
* •
$s\in(0,1)$ is the constant from the assumption $\mathcal{H}^{1}(G)\geq s$ in
Theorem 1.7.
* •
$\varepsilon=\varepsilon(C_{0},M)\in(0,1)$ is a constant appearing in
Proposition 4.1, see (4.1). It is chosen in Lemma 7.2. One could take
$\varepsilon=cC_{0}^{-1}M^{-1}$ for some small absolute $c\in(0,1)$.
* •
$C_{\mathsf{Prop}}=C_{\mathsf{Prop}}(C_{0},M)>1$ is a big constant appearing
in the conclusion of Proposition 4.1.
* •
$c_{1}\in(0,1)$ is a small absolute constant appearing in the assumption
$\mathcal{H}^{1}(J)\leq c_{1}C_{0}^{-1}M^{-1}$ of Proposition 4.1. It is fixed
above (9.4).
* •
$\rho=1/1000$ is the constant from Theorem 5.3, so that for
$Q\in\mathcal{D}_{k}$ we have $\ell(Q)=4\rho^{k}$.
* •
$A=A(C_{0},M)\geq 1000$ is a large constant appearing in the definition of
$\mathcal{E}_{G}(Q)$ (6.1). It is fixed in Lemma 9.8, one could take
$A=CC_{0}M$ for some absolute $C\geq 1000$.
* •
$\delta=\delta(A,M,C_{0})\in(0,1)$ is the $\mathsf{BCE}$-parameter, appearing
in (6.3). It is fixed in Lemma 7.3.
* •
$N\sim C_{0}M$ is a parameter appearing in the definition of rectangles
$\mathcal{G}_{i}$, below (9.2). Its exact value is chosen in Lemma 9.5.
### 3.3. Useful results on cones and projections
We recall some results that will be useful in our proof. The proposition below
is a simplified version of Corollary 3.3 from [CT20].
###### Proposition 3.1.
Let $\mu$ be a finite, compactly supported Borel measure on $\mathbb{R}^{2}$,
and $I\subset\mathbb{T}$ an open set. Then,
$\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu(X(x,I,r))}{r}\,\frac{dr}{r}d\mu(x)\lesssim\int_{I}\|\pi_{\theta}^{\perp}\mu\|_{2}^{2}\,d\theta.$
We get the following corollary.
###### Corollary 3.2.
Let $E\subset\mathbb{R}^{2}$ and $G\subset\mathbb{T}$ be as in Theorem 1.7,
and let $\mu=\mathcal{H}^{1}|_{E}$. Then,
$\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu(X(x,G^{\perp},r))}{r}\,\frac{dr}{r}d\mu(x)\lesssim
M\mathcal{H}^{1}(G)\mu(E),$
where $G^{\perp}=G+1/4$.
If $G$ is open, then this follows almost immediately from Proposition 3.1. The
case of a general measurable set $G$ is a long and uninspiring exercise in
measure theory, so we postpone it to the appendix.
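For the open case, the chain of estimates is short enough to record here; this is only a sketch, using that for $\theta\in G$ the projection $\pi_{\theta+1/4}^{\perp}=\pi_{\theta+1/2}=-\pi_{\theta}$ is a reflection of $\pi_{\theta}$, which preserves all $L^{p}$ norms of the push-forward.

```latex
% Proposition 3.1 applied with the open set I = G^\perp gives
\int_{\mathbb{R}^2}\int_0^{\infty}\frac{\mu(X(x,G^{\perp},r))}{r}\,\frac{dr}{r}\,d\mu(x)
  \lesssim \int_{G^{\perp}}\|\pi_\theta^{\perp}\mu\|_2^2\,d\theta
  = \int_{G}\|\pi_\theta\mu\|_2^2\,d\theta.
% By the assumption \|\pi_\theta\mu\|_\infty \leq M for \theta \in G,
% together with \|\pi_\theta\mu\|_1 = \mu(\mathbb{R}^2) = \mu(E),
\int_{G}\|\pi_\theta\mu\|_2^2\,d\theta
  \leq \int_{G}\|\pi_\theta\mu\|_\infty\,\|\pi_\theta\mu\|_1\,d\theta
  \leq M\,\mu(E)\,\mathcal{H}^{1}(G).
```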
The following result is a simplified version of Proposition 10.1 from [Dąb22],
which in turn is a consequence of Proposition 1.12 from [MO18].
###### Proposition 3.3.
Let $E\subset\mathbb{R}^{2}$ be a bounded AD-regular set with constant
$C_{0}$. Let $F\subset E$ be such that
$\mathcal{H}^{1}(F)\geq\kappa\mathcal{H}^{1}(E)$. Assume there exists an
interval $J\subset\mathbb{T}$ with $\mathcal{H}^{1}(J)=s$ such that for
$\mathcal{H}^{1}$-a.e. $x\in F$
$\int_{0}^{1}\frac{\mathcal{H}^{1}(X(x,J,r)\cap F)}{r}\,\frac{dr}{r}\leq M.$
Then, there exists a Lipschitz graph $\Gamma\subset\mathbb{R}^{2}$ with
$\operatorname{Lip}(\Gamma)\lesssim_{s}1$ and
$\mathcal{H}^{1}(F\cap\Gamma)\gtrsim_{C_{0},s,M,\kappa}\mathcal{H}^{1}(F).$
## 4\. Main proposition and proof of Theorem 1.7
The following is our main proposition.
###### Proposition 4.1.
Let $1\leq C_{0},M<\infty$. There exist constants
$0<\varepsilon<1<C_{\mathsf{Prop}}<\infty$, which depend on $M,C_{0}$, such
that the following holds. Assume that:
1. (a)
$E\subset\mathbb{R}^{2}$ is a bounded AD-regular set with constant $C_{0}$,
and set $\mu=\mathcal{H}^{1}|_{E}$,
2. (b)
$J\subset\mathbb{T}$ is an interval with $\mathcal{H}^{1}(J)\leq
c_{1}C_{0}^{-1}M^{-1}$, where $c_{1}>0$ is a small absolute constant,
3. (c)
there exists $\theta_{0}\in 3J$ such that
$\|\pi_{\theta_{0}}^{\perp}\mu\|_{\infty}\leq M$,
4. (d)
$G\subset J$ is a closed set which satisfies
$\mathcal{H}^{1}(G)\geq(1-\varepsilon)\mathcal{H}^{1}(J),$ (4.1)
5. (e)
for every interval $I$ comprising $J\setminus G$ there exists $\theta_{I}\in
3I$ such that $\|\pi_{\theta_{I}}^{\perp}\mu\|_{\infty}\leq M$,
Then,
$\int_{E}\int_{0}^{\operatorname{diam}(E)}\frac{\mu(X(x,3J,r))}{r}\,\frac{dr}{r}d\mu(x)\\\
\leq
C_{\mathsf{Prop}}\bigg{(}\int_{E}\int_{0}^{\operatorname{diam}(E)}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)+\mathcal{H}^{1}(J)\mu(E)\bigg{)}.$
###### Remark 4.2.
In the proposition above, the interval $J$ may be open, closed, or half-open,
it doesn’t make a difference. In the conclusion we may take $3J$ to be a
closed interval (in fact, the same proof gives the conclusion also with $CJ$
replacing $3J$, if we let $C_{\mathsf{Prop}}$ depend on $C$ as well, and as
long as $\mathcal{H}^{1}(CJ)\leq c_{1}C_{0}^{-1}M^{-1}$).
We prove Proposition 4.1 in Sections 5–9. Now let us show how it can be used
to prove Theorem 1.7. We begin by proving a corollary of Proposition 4.1,
which looks quite similar to Proposition 4.1 itself; the crucial difference is
that it deals with sets $G\subset J$ with
$\mathcal{H}^{1}(G)<(1-\varepsilon)\mathcal{H}^{1}(J)$. Recall that for a
dyadic interval $I\in\Delta$ we denote by $I^{1}$ the dyadic parent of $I$.
###### Corollary 4.3.
Let $1\leq C_{0},M<\infty$. Let $\varepsilon=\varepsilon(M,C_{0})$,
$C_{\mathsf{Prop}}=C_{\mathsf{Prop}}(M,C_{0})$ be as in Proposition 4.1.
Assume that:
1. (a)
$E\subset\mathbb{R}^{2}$ is a bounded AD-regular set with constant $C_{0}$,
and $\mu=\mathcal{H}^{1}|_{E}$,
2. (b)
$J\subset\mathbb{T}$ is a dyadic interval with $\mathcal{H}^{1}(J)\leq
c_{1}C_{0}^{-1}M^{-1}$, where $c_{1}>0$ is as in Proposition 4.1,
3. (c)
$G\subset\overline{J}$ is a finite union of closed dyadic intervals, which
satisfies
$0<\mathcal{H}^{1}(G)<(1-\varepsilon)\mathcal{H}^{1}(J),$ (4.2)
4. (d)
denoting the collection of maximal dyadic intervals contained in $J\setminus
G$ by $\mathcal{B}_{\Delta}$, for every $I\in\mathcal{B}_{\Delta}$ there
exists $\theta_{I}\in I^{1}$ such that
$\|\pi_{\theta_{I}}^{\perp}\mu\|_{\infty}\leq M$.
Then, there exists a closed set $G_{*}$ with
$G\subset G_{*}\subset\overline{J},$ (4.3)
which is a finite union of closed dyadic intervals, such that
$\mathcal{H}^{1}(G_{*})\geq(1+{\varepsilon})\mathcal{H}^{1}(G),$ (4.4)
and
$\int_{E}\int_{0}^{\operatorname{diam}(E)}\frac{\mu(X(x,G_{*},r))}{r}\,\frac{dr}{r}d\mu(x)\\\
\leq
C_{\mathsf{Prop}}\bigg{(}\int_{E}\int_{0}^{\operatorname{diam}(E)}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)+\mathcal{H}^{1}(J)\mu(E)\bigg{)}.$
(4.5)
Moreover, denoting by $\mathcal{B}_{\Delta,*}$ the collection of maximal
dyadic intervals contained in $J\setminus G_{*}$, we have
$\mathcal{B}_{\Delta,*}\subset\mathcal{B}_{\Delta}.$ (4.6)
The statement above is quite involved, but it is very well-suited for
iterative application later on: note that the resulting set $G_{*}$ satisfies
all the same assumptions as the set $G$ we started with, except perhaps for
the measure assumption (4.2).
We divide the proof of Corollary 4.3 into several steps.
#### Definition of $G_{*}$
Let $\mathcal{I}\subset\Delta(J)$ be the family of maximal dyadic intervals
such that for every $I\in\mathcal{I}$
$\mathcal{H}^{1}(I\cap G)\geq(1-\varepsilon)\mathcal{H}^{1}(I).$ (4.7)
Since $G$ is a finite union of closed dyadic intervals, we get immediately
that
$G\subset\bigcup_{I\in\mathcal{I}}\overline{I},$
and that $\mathcal{I}$ is a finite family. Observe that the intervals in
$\mathcal{I}$ are pairwise disjoint by maximality. Moreover, we have
$J\notin\mathcal{I}$ due to (4.2), so that all $I\in\mathcal{I}$ are strictly
contained in ${J}$.
Consider the family
$\mathcal{I}^{1}=\\{I^{1}\\}_{I\in\mathcal{I}}\subset\Delta(J)$, where $I^{1}$
denotes the dyadic parent of $I$, and let $\mathcal{I}_{*}$ be the family of
maximal dyadic intervals from $\mathcal{I}^{1}$. The intervals in
$\mathcal{I}_{*}$ are pairwise disjoint by maximality, and the family
$\mathcal{I}_{*}$ is finite because $\mathcal{I}$ is finite. We set
$G_{*}\coloneqq\bigcup_{I\in\mathcal{I}_{*}}\overline{I}.$
It remains to show that $G_{*}$ satisfies (4.3), (4.4), (4.5), and (4.6).
###### Proof of (4.3).
Note that
$G\subset\bigcup_{I\in\mathcal{I}}\overline{I}\subset\bigcup_{I\in\mathcal{I}}\overline{I^{1}}=\bigcup_{I\in\mathcal{I}_{*}}\overline{I}=G_{*}.$
Since $\mathcal{I}_{*}\subset\Delta(J)$, we also have
$G_{*}\subset\overline{J}$. ∎
###### Proof of (4.4).
Recall that $\mathcal{I}$ was defined as the collection of maximal dyadic
intervals where (4.7) holds. Let $I\in\mathcal{I}_{*}$. We know that $I$ is a
parent of some $I^{\prime}\in\mathcal{I}$, and $I^{\prime}$ is a maximal
interval where (4.7) holds. It follows that $I$ does not satisfy (4.7), which
means that
$\mathcal{H}^{1}(I\cap G)<(1-\varepsilon)\mathcal{H}^{1}(I),$
or equivalently,
$\mathcal{H}^{1}(I\setminus G)\geq\varepsilon\mathcal{H}^{1}(I).$
Using this estimate we compute
$\mathcal{H}^{1}(G_{*})=\sum_{I\in\mathcal{I}_{*}}\mathcal{H}^{1}(I)=\sum_{I\in\mathcal{I}_{*}}\mathcal{H}^{1}(I\cap
G)+\sum_{I\in\mathcal{I}_{*}}\mathcal{H}^{1}(I\setminus G)\\\
=\mathcal{H}^{1}(G)+\sum_{I\in\mathcal{I}_{*}}\mathcal{H}^{1}(I\setminus
G)\geq\mathcal{H}^{1}(G)+\varepsilon\sum_{I\in\mathcal{I}_{*}}\mathcal{H}^{1}(I)\\\
=\mathcal{H}^{1}(G)+\varepsilon\mathcal{H}^{1}(G_{*})\geq(1+\varepsilon)\mathcal{H}^{1}(G).$
This shows (4.4). ∎
###### Proof of (4.5).
Without loss of generality, we may assume that $\operatorname{diam}(E)=1$. Fix
$I\in\mathcal{I_{*}}$, and let $J_{I}\in\mathcal{I}$ be an interval such that
$(J_{I})^{1}=I$. We claim that we may apply Proposition 4.1 with $J=J_{I}$ and
$G=G\cap J_{I}$. Indeed, assumption (a) is the same as in Corollary 4.3, and:
* •
assumption (b) holds since $\mathcal{H}^{1}(J_{I})\leq\mathcal{H}^{1}(J)\leq
c_{1}C_{0}^{-1}M^{-1}$.
* •
assumption (c) holds because $(J_{I})^{1}=I$ has non-empty intersection with
both $G$ and $J\setminus G$, so in particular $I$ strictly contains some
$K\in\mathcal{B}_{\Delta}$. We assumed that there exists $\theta_{K}\in
K^{1}\subset I$ such that $\|\pi_{\theta_{K}}^{\perp}\mu\|_{\infty}\leq M$.
Since $I\subset 3J_{I}$, we may take $\theta_{0}=\theta_{K}$.
* •
assumption (d) follows from the definition of $\mathcal{I}$ (4.7).
* •
assumption (e) holds because any interval $K$ comprising $J_{I}\setminus G$
contains some dyadic interval $K^{\prime}\in\mathcal{B}_{\Delta}$, and since
$(K^{\prime})^{1}\subset 3K$, we may take
$\theta_{K}\coloneqq\theta_{K^{\prime}}$.
We checked all the assumptions of Proposition 4.1, and so we may conclude that
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J_{I},r))}{r}\,\frac{dr}{r}d\mu(x)\\\ \leq
C_{\mathsf{Prop}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G\cap
J_{I},r))}{r}\,\frac{dr}{r}d\mu(x)+C_{\mathsf{Prop}}\mathcal{H}^{1}(J_{I})\mu(E).$
Summing over $I\in\mathcal{I}_{*}$ yields
$\displaystyle\int_{E}\int_{0}^{1}$
$\displaystyle\frac{\mu(X(x,G_{*},r))}{r}\,\frac{dr}{r}d\mathcal{H}^{1}(x)$
$\displaystyle=\sum_{I\in\mathcal{I_{*}}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,I,r))}{r}\,\frac{dr}{r}d\mu(x)$
$\displaystyle\leq\sum_{I\in\mathcal{I_{*}}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J_{I},r))}{r}\,\frac{dr}{r}d\mu(x)$
$\displaystyle\leq\sum_{I\in\mathcal{I_{*}}}C_{\mathsf{Prop}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G\cap
J_{I},r))}{r}\,\frac{dr}{r}d\mu(x)+\sum_{I\in\mathcal{I_{*}}}C_{\mathsf{Prop}}\mathcal{H}^{1}(J_{I})\mu(E)$
$\displaystyle\leq
C_{\mathsf{Prop}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)+C_{\mathsf{Prop}}\mathcal{H}^{1}(J)\mu(E).$
This shows (4.5). ∎
###### Proof of (4.6).
Let $I\in\mathcal{B}_{\Delta,*}$, so that
$I\cap G_{*}=\varnothing\quad\text{and}\quad I^{1}\cap G_{*}\neq\varnothing.$
(4.8)
We want to prove that $I\in\mathcal{B}_{\Delta}$. Since $G\subset G_{*}$, it
is clear that $I\cap G=\varnothing,$ so we only need to show that
$I^{1}\cap G\neq\varnothing.$ (4.9)
Let $I^{\prime}$ be the dyadic sibling of $I$, that is, the unique interval
$I^{\prime}\in\Delta(J)$ such that $I\cup I^{\prime}=I^{1}$. It follows from
(4.8) that $I^{\prime}\cap G_{*}\neq\varnothing$. By the definition of
$G_{*}$, there exists $P\in\mathcal{I}_{*}$ such that $P\cap
I^{\prime}\neq\varnothing$. Hence, we have either $P\subset I^{\prime}$ or
$I^{\prime}\subsetneq P$. The latter would imply $I^{1}\subset P$, which is
not possible because $I\cap P\subset I\cap G_{*}=\varnothing$. Thus, we have
$P\subset I^{\prime}$.
Let $J_{P}\in\mathcal{I}$ be such that $P=(J_{P})^{1}$. By the definition of
$\mathcal{I}$ (4.7) we have
$\mathcal{H}^{1}(J_{P}\cap G)\geq(1-\varepsilon)\mathcal{H}^{1}(J_{P}).$
Since $J_{P}\subset P\subset I^{\prime}$ and $\mathcal{H}^{1}(J_{P}\cap
G)>0$ (as $\varepsilon<1$), it follows that $I^{\prime}\cap
G\neq\varnothing$. In particular, the parent $(I^{\prime})^{1}=I^{1}$ satisfies
$I^{1}\cap G\neq\varnothing$. This gives (4.9), and concludes the proof of
(4.6). ∎
This finishes the proof of Corollary 4.3.
### 4.1. Proof of Theorem 1.7
#### Preliminaries
Recall that $G^{\perp}=G+1/4$. Let $J_{0}\subset\mathbb{T}$ be a dyadic
interval with
$2^{-1}c_{1}C_{0}^{-1}M^{-1}\leq\mathcal{H}^{1}(J_{0})\leq
c_{1}C_{0}^{-1}M^{-1}$
and such that
$\mathcal{H}^{1}(J_{0}\cap G^{\perp})\geq s\mathcal{H}^{1}(J_{0}).$
It is clear that such an interval exists, since
$\mathcal{H}^{1}(G^{\perp})=\mathcal{H}^{1}(G)\geq s$. Using inner regularity
of Lebesgue measure, we may find a closed subset $G^{\prime}\subset
G^{\perp}\cap J_{0}$ such that
$\mathcal{H}^{1}(G^{\prime})\geq\frac{1}{2}\mathcal{H}^{1}(G^{\perp}\cap
J_{0})\geq\frac{s}{2}\mathcal{H}^{1}(J_{0}).$
Let $\varepsilon=\varepsilon(C_{0},M)$ be as in Proposition 4.1. We define
$\mathcal{G}\subset\Delta(J_{0})$ as the family of maximal dyadic intervals
such that for every $I\in\mathcal{G}$
$\mathcal{H}^{1}(I\cap G^{\prime})\geq(1-\varepsilon)\mathcal{H}^{1}(I).$
It follows from the Lebesgue differentiation theorem that
$\mathcal{H}^{1}\bigg{(}G^{\prime}\setminus\bigcup_{I\in\mathcal{G}}I\bigg{)}=0.$
In particular,
$\mathcal{H}^{1}\bigg{(}\bigcup_{I\in\mathcal{G}}I\bigg{)}\geq\mathcal{H}^{1}(G^{\prime})\geq\frac{s}{2}\mathcal{H}^{1}(J_{0}).$
Let $\mathcal{G}_{0}\subset\mathcal{G}$ be a finite sub-collection such that
$\mathcal{H}^{1}\bigg{(}\bigcup_{I\in\mathcal{G}_{0}}I\bigg{)}\geq\frac{1}{2}\mathcal{H}^{1}\bigg{(}\bigcup_{I\in\mathcal{G}}I\bigg{)}\geq\frac{s}{4}\mathcal{H}^{1}(J_{0}).$
(4.10)
Set
$G_{0}=\bigcup_{I\in\mathcal{G}_{0}}\overline{I},$
so that $G_{0}$ is a finite union of closed dyadic intervals.
Without loss of generality, we may assume that $\operatorname{diam}(E)=1$. For
each $I\in\mathcal{G}_{0}$ we apply Proposition 4.1 (with $J=I$ and
$G=G^{\prime}\cap I$; it is straightforward to see that all the assumptions
are satisfied) to conclude that
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,\overline{I},r))}{r}\,\frac{dr}{r}d\mu(x)\\\
\leq C_{\mathsf{Prop}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G^{\prime}\cap
I,r))}{r}\,\frac{dr}{r}d\mu(x)+C_{\mathsf{Prop}}\mathcal{H}^{1}(I)\mu(E).$
(4.11)
Summing (4.11) over $I\in\mathcal{G}_{0}$ we get
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,G_{0},r))}{r}\,\frac{dr}{r}d\mu(x)\\\ \leq
C_{\mathsf{Prop}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G^{\prime},r))}{r}\,\frac{dr}{r}d\mu(x)+C_{\mathsf{Prop}}\mathcal{H}^{1}(G_{0})\mu(E).$
(4.12)
Notice also that if $\mathcal{B}_{\Delta,0}$ denotes the family of maximal
dyadic intervals contained in $J_{0}\setminus G_{0}$, and
$I\in\mathcal{B}_{\Delta,0}$, then
$I^{1}$ contains some interval from $\mathcal{G}_{0}$, and in particular
$I^{1}\cap G^{\prime}\neq\varnothing$. Since $G^{\prime}\subset G^{\perp}$, we
get from (1.3) that there exists $\theta_{I}\in I^{1}$ such that
$\|\pi_{\theta_{I}}\mu\|_{\infty}\leq M$. Hence, $G_{0}$ satisfies all the assumptions
of Corollary 4.3, except perhaps for the measure assumption (4.4).
#### Iteration
We are now in a position to start the iteration. Assume for a moment that
$\mathcal{H}^{1}(G_{0})<(1-\varepsilon)\mathcal{H}^{1}(J_{0})$ so that $G_{0}$
satisfies all the assumptions of Corollary 4.3. We apply Corollary 4.3, and we
define $G_{1}\coloneqq(G_{0})_{*}$, so that
$\mathcal{H}^{1}(G_{1})\geq(1+\varepsilon)\mathcal{H}^{1}(G_{0})\geq\frac{s(1+\varepsilon)}{4}\mathcal{H}^{1}(J_{0}),$
and all the other conclusions of Corollary 4.3 hold for $G_{1}$. If
$\mathcal{H}^{1}(G_{1})<(1-\varepsilon)\mathcal{H}^{1}(J_{0})$, then we may
apply Corollary 4.3 yet again to get a set $G_{2}\coloneqq(G_{1})_{*}$.
In general, if after $k$ applications of Corollary 4.3 we get a set
$G_{k}\coloneqq(G_{k-1})_{*}$ satisfying
$\mathcal{H}^{1}(G_{k})<(1-\varepsilon)\mathcal{H}^{1}(J_{0})$, then we may
continue applying Corollary 4.3. If for some $k=k_{0}$ we get
$\mathcal{H}^{1}(G_{k_{0}})\geq(1-\varepsilon)\mathcal{H}^{1}(J_{0})$, then we
may apply Proposition 4.1 instead (with $G=G_{k_{0}},J=J_{0}$), so that
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J_{0},r))}{r}\,\frac{dr}{r}d\mu(x)\\\ \leq
C_{\mathsf{Prop}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G_{k_{0}},r))}{r}\,\frac{dr}{r}d\mu(x)+C_{\mathsf{Prop}}\mathcal{H}^{1}(J_{0})\mu(E).$
Recall that for each $k$ we had $G_{k+1}=(G_{k})_{*}$, so that by (4.5)
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,G_{k+1},r))}{r}\,\frac{dr}{r}d\mu(x)\\\
\leq
C_{\mathsf{Prop}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G_{k},r))}{r}\,\frac{dr}{r}d\mu(x)+C_{\mathsf{Prop}}\mathcal{H}^{1}(J_{0})\mu(E).$
Putting the two estimates above together (the second one used $k_{0}$ times),
and also recalling (4.12), we get
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J_{0},r))}{r}\,\frac{dr}{r}d\mu(x)\\\ \leq
C_{\mathsf{Prop}}^{k_{0}+1}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G_{0},r))}{r}\,\frac{dr}{r}d\mu(x)+(k_{0}+1)C_{\mathsf{Prop}}^{k_{0}+1}\mathcal{H}^{1}(J_{0})\mu(E)\\\
\leq
C_{\mathsf{Prop}}^{k_{0}+2}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G^{\prime},r))}{r}\,\frac{dr}{r}d\mu(x)+(k_{0}+2)C_{\mathsf{Prop}}^{k_{0}+2}\mathcal{H}^{1}(J_{0})\mu(E).$
(4.13)
#### Bounding the number of iterations
We claim that the iteration ends (i.e. we obtain a set $G_{k_{0}}$ with
$\mathcal{H}^{1}(G_{k_{0}})\geq(1-\varepsilon)\mathcal{H}^{1}(J_{0})$) after
at most
$k_{0}\lesssim_{s,\varepsilon}1$ (4.14)
steps. Indeed, we had
$\mathcal{H}^{1}(G_{0})=\mathcal{H}^{1}\bigg{(}\bigcup_{I\in\mathcal{G}_{0}}I\bigg{)}\overset{\eqref{eq:meas}}{\geq}\frac{s}{4}\mathcal{H}^{1}(J_{0}),$
and so by (4.4) for each $G_{k}$ we have a lower bound
$\mathcal{H}^{1}(G_{k})\geq(1+\varepsilon)\mathcal{H}^{1}(G_{k-1})\geq(1+\varepsilon)^{k}\mathcal{H}^{1}(G_{0})\geq\frac{s(1+\varepsilon)^{k}}{4}\mathcal{H}^{1}(J_{0}).$
Taking $k_{0}=k_{0}(s,\varepsilon)$ so large that
$s(1+\varepsilon)^{k_{0}}/4\geq(1-\varepsilon)$, we see that the iterative
procedure described above ends after at most $k_{0}$ applications of Corollary
4.3.
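For completeness, the dependence $k_{0}=k_{0}(s,\varepsilon)$ can be made explicit; the following elementary computation is only a sketch of the arithmetic behind (4.14), not part of the original argument:

```latex
% Requiring s(1+\varepsilon)^{k}/4 \geq 1-\varepsilon, and using 1-\varepsilon\leq 1,
% it suffices that (1+\varepsilon)^{k}\geq 4/s, i.e. one may take
k_{0}=\Big\lceil\frac{\log(4/s)}{\log(1+\varepsilon)}\Big\rceil
\lesssim_{s,\varepsilon}1 .
```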
#### End of the proof
Taking into account estimates (4.13) and (4.14), the fact that
$\varepsilon=\varepsilon(M,C_{0}),$
$C_{\mathsf{Prop}}=C_{\mathsf{Prop}}(M,C_{0}),$ $\mathcal{H}^{1}(J_{0})\leq 1$,
and that $G^{\prime}\subset G^{\perp}$, we get
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J_{0},r))}{r}\,\frac{dr}{r}d\mu(x)\\\ \leq
C(M,C_{0},s)\int_{E}\int_{0}^{1}\frac{\mu(X(x,G^{\perp},r))}{r}\,\frac{dr}{r}d\mu(x)+C(M,C_{0},s)\mu(E).$
Hence, by Corollary 3.2
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J_{0},r))}{r}\,\frac{dr}{r}d\mu(x)\lesssim_{M,C_{0},s}\mu(E).$
Let $M_{0}=M_{0}(M,C_{0},s)$ be a big constant. We define
$E_{*}\coloneqq\big\{x\in E\ :\
\int_{0}^{1}\frac{\mu(X(x,3J_{0},r))}{r}\,\frac{dr}{r}\leq M_{0}\big\}.$
By Chebyshev’s inequality, if $M_{0}$ is chosen big enough, we have
$\mu(E_{*})\geq\frac{\mu(E)}{2}.$
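Spelled out, the Chebyshev step reads as follows, where $C=C(M,C_{0},s)$ denotes the implicit constant from the preceding display:

```latex
\mu(E\setminus E_{*})
\leq\frac{1}{M_{0}}\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J_{0},r))}{r}\,\frac{dr}{r}\,d\mu(x)
\leq\frac{C}{M_{0}}\,\mu(E)
\leq\frac{\mu(E)}{2}
\quad\text{for }M_{0}\geq 2C.
```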
Applying Proposition 3.3 to $E_{*}$ and $3J_{0}$, and recalling that
$\mathcal{H}^{1}(J_{0})\sim C_{0}^{-1}M^{-1}$, we obtain a Lipschitz graph
$\Gamma$ with $\operatorname{Lip}(\Gamma)\lesssim_{M,C_{0}}1$ and
$\mathcal{H}^{1}(\Gamma\cap E_{*})\gtrsim_{C_{0},M,M_{0}}\mu(E).$
This finishes the proof of Theorem 1.7.
$\square$
The remainder of the paper is dedicated to the proof of Proposition 4.1.
## 5\. Rectangles and generalized cubes
Suppose that $E\subset\mathbb{R}^{2}$ is a bounded AD-regular set with
constant $C_{0}$, and set $\mu=\mathcal{H}^{1}|_{E}.$ Since Proposition 4.1 is
scale-invariant, we may assume without loss of generality that
$\operatorname{diam}(E)=1$.
Let $J,G\subset\mathbb{T}$ be as in Proposition 4.1. By rotating $E$, we may
assume that $J$ is centered at $1/4$, so that the cone $X(0,J)$ is centered on
the vertical axis. Note that $\pi_{0}=\pi_{1/4}^{\perp}$ is the
projection to the horizontal axis, i.e., $\pi_{0}(x,y)=x$. Recall that there
exists $\theta_{0}\in 3J$ such that
$\|\pi_{\theta_{0}}^{\perp}\mu\|_{\infty}\leq M.$ (5.1)
### 5.1. Rectangles
Throughout the article we will be working with many rectangles, typically with
one side much longer than the other. Let us fix some notation.
Given a rectangle $\mathcal{R}\subset\mathbb{R}^{2}$, we will denote the
length of its shorter side by $\ell(\mathcal{R})$, and the length of its
longer side by $\mathscr{L}(\mathcal{R})$. We will also write
$\theta(\mathcal{R})\in[0,1/2)\subset\mathbb{T}$ to denote the “direction” of
$\mathcal{R}$, so that $\ell_{\theta(\mathcal{R})}$ is parallel to the longer
sides of $\mathcal{R}$ (for squares, it doesn’t matter which of the two
directions we choose).
Given a constant $C>0$ and a rectangle $\mathcal{R}$, we will sometimes write
$C\mathcal{R}$ to denote the (unique) rectangle with the same center as
$\mathcal{R}$,
$\ell(C\mathcal{R})=C\ell(\mathcal{R}),\,\mathscr{L}(C\mathcal{R})=C\mathscr{L}(\mathcal{R})$,
and such that their longer sides are parallel to each other.
Most of the rectangles $\mathcal{R}$ we will be working with will have a fixed
direction $\theta(\mathcal{R})=1/4$, and a fixed aspect ratio
$\mathscr{L}(\mathcal{R})/\ell(\mathcal{R})=\mathcal{H}^{1}(J)^{-1}$. In other
words, they will be very tall, vertically aligned rectangles. We fix notation
specific to these rectangles.
Given $x\in\mathbb{R}^{2}$ and $r>0$ we set
$\mathcal{R}(x,r)=x+\bigg{[}-\frac{r}{2},\,\frac{r}{2}\bigg{]}\times\bigg{[}-\frac{r}{2\mathcal{H}^{1}(J)},\,\frac{r}{2\mathcal{H}^{1}(J)}\bigg{]},$
so that $\ell(\mathcal{R}(x,r))=r$ and
$\mathscr{L}(\mathcal{R}(x,r))=\mathcal{H}^{1}(J)^{-1}r$. Note that
$\pi_{0}(\mathcal{R}(x,r))=\pi_{0}(x)+[-r/2,r/2]$.
###### Lemma 5.1.
Let $\mathcal{R}$ be a rectangle, and suppose that for some
$\theta\in\mathbb{T}$ with
$|\theta-\theta(\mathcal{R})|\lesssim\frac{\ell(\mathcal{R})}{\mathscr{L}(\mathcal{R})}$
(5.2)
we have $\|\pi_{\theta}^{\perp}\mu\|_{L^{\infty}}\leq M$. Then,
$\mu(\mathcal{R})\lesssim M\ell(\mathcal{R}).$ (5.3)
###### Proof.
Let $\mathcal{R}$ and $\theta$ be as above, and set
$\alpha=|\theta-\theta(\mathcal{R})|\cdot 2\pi$. It follows from elementary
trigonometry that
$\mathcal{H}^{1}(\pi_{\theta}^{\perp}(\mathcal{R}))=\ell(\mathcal{R})\bigg{(}\cos(\alpha)+\frac{\mathscr{L}(\mathcal{R})}{\ell(\mathcal{R})}\sin(\alpha)\bigg{)}.$
From (5.2) we have
$\alpha\lesssim\frac{\ell(\mathcal{R})}{\mathscr{L}(\mathcal{R})}$, and so
$\mathcal{H}^{1}(\pi_{\theta}^{\perp}(\mathcal{R}))\lesssim\ell(\mathcal{R}).$
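In more detail, the last estimate only uses $\cos(\alpha)\leq 1$ and $\sin(\alpha)\leq\alpha$ together with (5.2):

```latex
\mathcal{H}^{1}(\pi_{\theta}^{\perp}(\mathcal{R}))
\leq\ell(\mathcal{R})+\mathscr{L}(\mathcal{R})\,\alpha
\lesssim\ell(\mathcal{R})+\mathscr{L}(\mathcal{R})\cdot\frac{\ell(\mathcal{R})}{\mathscr{L}(\mathcal{R})}
\sim\ell(\mathcal{R}).
```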
Since $\|\pi_{\theta}^{\perp}\mu\|_{L^{\infty}}\leq M$, we get
$\mu(\mathcal{R})\leq\mu((\pi_{\theta}^{\perp})^{-1}(\pi_{\theta}^{\perp}(\mathcal{R})))\leq
M\mathcal{H}^{1}(\pi_{\theta}^{\perp}(\mathcal{R}))\lesssim
M\ell(\mathcal{R}).$
∎
###### Corollary 5.2.
For any $x\in\mathbb{R}^{2}$ and $r>0$ we have
$\mu(\mathcal{R}(x,r))\lesssim Mr.$ (5.4)
###### Proof.
Observe that for $\mathcal{R}=\mathcal{R}(x,r)$ we have
$\theta(\mathcal{R})=1/4\in J$. Recall that there exists $\theta_{0}\in 3J$
such that $\|\pi_{\theta_{0}}^{\perp}\mu\|_{\infty}\leq M$. Since
$|\theta_{0}-\theta(\mathcal{R})|\leq
2\mathcal{H}^{1}(J)=2\ell(\mathcal{R})/\mathscr{L}(\mathcal{R})$, we get from
(5.3)
$\mu(\mathcal{R}(x,r))\lesssim M\ell(\mathcal{R}(x,r))=Mr.$
∎
### 5.2. Generalized dyadic cubes
We say that a metric space $(X,d)$ has the finite doubling property if any ball
$B_{X}(x,2r)\subset X$ can be covered by finitely many balls of the form
$B_{X}(x_{i},r)$. The following is a special case of Theorem 2.1 from [KRS12].
###### Theorem 5.3 ([KRS12]).
Let $\rho=1/1000$. Suppose that $(X,d)$ is a metric space with the finite
doubling property. Then, for every $k\in\mathbb{Z}$ there exists a collection
$\mathcal{D}_{k}$ of generalized cubes on $X$ such that the following hold:
1. (1)
For each $k\in\mathbb{Z}$, $X=\bigcup_{Q\in\mathcal{D}_{k}}Q$, and the union
is disjoint.
2. (2)
If $Q_{1},Q_{2}\in\bigcup_{k}\mathcal{D}_{k}$ satisfy $Q_{1}\cap
Q_{2}\neq\varnothing$, then either $Q_{1}\subset Q_{2}$ or $Q_{2}\subset
Q_{1}$.
3. (3)
For every $Q\in\mathcal{D}_{k}$ there exists $x_{Q}\in Q$ such that
$B_{X}(x_{Q},0.4\rho^{k})\subset Q\subset B_{X}(x_{Q},2\rho^{k}).$
Consider $X=E$ endowed with the metric
$d((x_{1},y_{1}),(x_{2},y_{2}))=\max\big{(}|x_{1}-x_{2}|,\,\mathcal{H}^{1}(J)\,{|y_{1}-y_{2}|}\big{)}.$
(5.5)
Note that for $x\in E$ and $r>0$, the ball with respect to $d$ is of the form
$B_{X}(x,r)=\mathcal{R}(x,2r)\cap E$.
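Indeed, writing $x=(x_{1},x_{2})$ and $p=(p_{1},p_{2})$, the definition of $d$ unpacks to the following (up to the open/closed boundary convention, which is immaterial here):

```latex
B_{X}(x,r)
=\big\{p\in E:\ |p_{1}-x_{1}|\leq r,\ |p_{2}-x_{2}|\leq\mathcal{H}^{1}(J)^{-1}r\big\}
=\Big(x+[-r,r]\times\big[-\tfrac{r}{\mathcal{H}^{1}(J)},\tfrac{r}{\mathcal{H}^{1}(J)}\big]\Big)\cap E
=\mathcal{R}(x,2r)\cap E .
```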
It is clear that $(E,d)$ has the finite doubling property, and so we may use
Theorem 5.3 to obtain a lattice of generalized cubes
$\mathcal{D}=\bigcup_{k\in\mathbb{Z}}\mathcal{D}_{k}$ associated to $(E,d)$.
Given $Q\in\mathcal{D}_{k}$, we will write
$\displaystyle\ell(Q)$ $\displaystyle\coloneqq 4\rho^{k},$
$\displaystyle\mathsf{Ch}(Q)$ $\displaystyle\coloneqq\{P\in\mathcal{D}_{k+1}\ :\ P\subset Q\},$
$\displaystyle\mathcal{D}(Q)$ $\displaystyle\coloneqq\{P\in\mathcal{D}\ :\ P\subset Q,\ \ell(P)\leq\ell(Q)\}.$
Observe that $Q\subset\mathcal{R}(x_{Q},\ell(Q))\cap E$. We set
$\displaystyle\mathcal{R}_{Q}$
$\displaystyle\coloneqq\mathcal{R}(x_{Q},\ell(Q)),$ (5.6)
$\displaystyle\mathscr{L}(Q)$
$\displaystyle\coloneqq\mathcal{H}^{1}(J)^{-1}\ell(Q),$
so that $\ell(\mathcal{R}_{Q})=\ell(Q)$ and
$\mathscr{L}(\mathcal{R}_{Q})=\mathscr{L}(Q)$.
Note that if $P,Q\in\mathcal{D}$ satisfy $P\cap Q=\varnothing$ and
$\ell(P)\geq\ell(Q)$, then by (3) in Theorem 5.3 we have $d(x_{P},x_{Q})\geq
0.1\ell(P)\geq 0.05\ell(P)+0.05\ell(Q)$, so in particular
$0.1\mathcal{R}_{P}\cap 0.1\mathcal{R}_{Q}=\varnothing.$ We set
$\mathcal{R}(Q)\coloneqq 0.1\mathcal{R}_{Q}.$
We record for future reference that
$\displaystyle\mathcal{R}(Q)\cap E$ $\displaystyle\subset Q\subset\mathcal{R}_{Q}\cap E,$
$\displaystyle 2\mathcal{R}_{Q}$ $\displaystyle\subset 2\mathcal{R}_{P}\quad\quad\text{if $Q\subsetneq P$,}$
$\displaystyle\mathcal{R}(Q)\cap\mathcal{R}(P)$ $\displaystyle=\varnothing\quad\quad\text{if $P\cap Q=\varnothing$}.$
Observe also that for any $C>0$ such that
$C\ell(Q)\lesssim\operatorname{diam}(E)=1$ we have
$CC_{0}^{-1}\,\ell(Q)\lesssim\mu(C\mathcal{R}_{Q})\overset{\eqref{eq:ADRrectangles}}{\lesssim}CM\ell(Q).$
(5.7)
In particular,
$C_{0}^{-1}\,\ell(Q)\lesssim\mu(Q)\lesssim M\ell(Q).$
## 6\. Conical energies
Let $A=A(C_{0},M)\geq 1000$ be a large constant which we will fix later on.
Inspired by [CT20] and [Dąb22], we introduce the following conical energy
associated to the set of directions $G\subset J$. For any $Q\in\mathcal{D}$ we
set
$\mathcal{E}_{G}(Q)\coloneqq\frac{1}{\mu(Q)}\int_{2A\mathcal{R}_{Q}}\int_{A^{-1}\mathscr{L}(Q)}^{A^{3}\mathscr{L}(Q)}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x).$
(6.1)
We have the following easy upper bound for $\mathcal{E}_{G}(Q)$.
###### Lemma 6.1.
For any $Q\in\mathcal{D}$ we have
$\mathcal{E}_{G}(Q)\lesssim_{A,M,C_{0}}\mathcal{H}^{1}(J).$ (6.2)
###### Proof.
Observe that for any $x\in 2A\mathcal{R}_{Q}$ and
$r\in(A^{-1}\mathscr{L}(Q),A^{3}\mathscr{L}(Q))$ we have
$X(x,G,r)\subset
X(x,J,A^{3}\mathscr{L}(Q))\subset\mathcal{R}(x,A^{4}\ell(Q)),$
so that
$\mu(X(x,G,r))\leq\mu(\mathcal{R}(x,A^{4}\ell(Q)))\overset{\eqref{eq:ADRrectangles}}{\lesssim}A^{4}M\,\ell(Q).$
Hence,
$\mathcal{E}_{G}(Q)=\frac{1}{\mu(Q)}\int_{2A\mathcal{R}_{Q}}\int_{A^{-1}\mathscr{L}(Q)}^{A^{3}\mathscr{L}(Q)}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)\\\
\lesssim_{A,M}\frac{1}{\mu(Q)}\int_{2A\mathcal{R}_{Q}}\int_{A^{-1}\mathscr{L}(Q)}^{A^{3}\mathscr{L}(Q)}\frac{\ell(Q)}{\mathscr{L}(Q)}\,\frac{dr}{r}d\mu(x)\\\
\sim_{A}\mathcal{H}^{1}(J)\frac{\mu(2A\mathcal{R}_{Q})}{\mu(Q)}\overset{\eqref{eq:ADR
cubes}}{\lesssim}_{A,M,C_{0}}\mathcal{H}^{1}(J).$
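The last two comparisons are, explicitly: the integral over scales contributes a constant depending only on $A$, the fixed aspect ratio gives the factor $\mathcal{H}^{1}(J)$, and (5.7) controls $\mu(2A\mathcal{R}_{Q})$ by $\mu(Q)$:

```latex
\int_{A^{-1}\mathscr{L}(Q)}^{A^{3}\mathscr{L}(Q)}\frac{dr}{r}=4\log A\sim_{A}1,
\qquad
\frac{\ell(Q)}{\mathscr{L}(Q)}=\mathcal{H}^{1}(J),
\qquad
\frac{\mu(2A\mathcal{R}_{Q})}{\mu(Q)}\lesssim_{A,M,C_{0}}1 .
```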
∎
### 6.1. Stopping time argument
Given a small constant $\delta=\delta(A,M,C_{0})>0$, we consider the following
stopping time condition. For $R\in\mathcal{D}$, we define the family
$\mathsf{BCE}(R)$ as the family of maximal cubes $Q\in\mathcal{D}(R)$ such
that
$\sum_{S\in\mathcal{D}\,:\,Q\subset S\subset
R}\mathcal{E}_{G}(S)\geq\delta\mathcal{H}^{1}(J).$ (6.3)
We define also $\mathsf{Tree}(R)$ as the subfamily of $\mathcal{D}(R)$
consisting of cubes that are not strictly contained in any cube from
$\mathsf{BCE}(R)$. Note that it may happen that $R\in\mathsf{BCE}(R)$, in
which case $\mathsf{Tree}(R)=\\{R\\}$.
###### Lemma 6.2.
For any $R\in\mathcal{D}$ we have
$\sum_{Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)}\mathcal{E}_{G}(Q)\mu(Q)\leq\delta\mathcal{H}^{1}(J)\mu(R),$
(6.4)
and
$\delta\mathcal{H}^{1}(J)\sum_{P\in\mathsf{BCE}(R)}\mu(P)\leq\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{G}(Q)\mu(Q)\lesssim_{A,M,C_{0}}\mathcal{H}^{1}(J)\mu(R).$
(6.5)
###### Proof.
We start by proving (6.4). Observe that
$\sum_{Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)}\mathcal{E}_{G}(Q)\mu(Q)=\sum_{Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)}\int\mathcal{E}_{G}(Q)\mathds{1}_{Q}(x)\,d\mu(x)\\\
=\int\sum_{Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)}\mathcal{E}_{G}(Q)\mathds{1}_{Q}(x)\,d\mu(x).$
Let $x\in\bigcup_{Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)}Q$, and let
$P\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)$ be a cube with $x\in P$.
Recalling that $P\notin\mathsf{BCE}(R)$ and the definition of
$\mathsf{BCE}(R)$ (6.3), we get
$\sum_{P\subset Q\subset R}\mathcal{E}_{G}(Q)<\delta\mathcal{H}^{1}(J).$
Since $P$ was an arbitrary cube with
$P\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)$ and $x\in P$, this gives
$\sum_{Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)}\mathcal{E}_{G}(Q)\mathds{1}_{Q}(x)\leq\delta\mathcal{H}^{1}(J).$
Integrating over
$x\in\bigcup_{Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)}Q\subset R$ yields
$\sum_{Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)}\mathcal{E}_{G}(Q)\mu(Q)\leq\delta\mathcal{H}^{1}(J)\mu(R).$
This proves (6.4).
The upper bound in (6.5) follows from (6.4) and the trivial bound (6.2)
applied to $Q\in\mathsf{BCE}(R)$:
$\sum_{Q\in\mathsf{BCE}(R)}\mathcal{E}_{G}(Q)\mu(Q)\lesssim_{A,M,C_{0}}\mathcal{H}^{1}(J)\sum_{Q\in\mathsf{BCE}(R)}\mu(Q)\leq\mathcal{H}^{1}(J)\mu(R).$
Now we prove the lower bound in (6.5). We have
$\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{G}(Q)\mu(Q)=\int\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{G}(Q)\mathds{1}_{Q}(x)\,d\mu(x)\\\
\geq\int\sum_{P\in\mathsf{BCE}(R)}\sum_{Q\in\mathsf{Tree}(R),\,P\subset
Q}\mathcal{E}_{G}(Q)\mathds{1}_{P}(x)\,d\mu(x).$ (6.6)
By (6.3) we have for every $P\in\mathsf{BCE}(R)$
$\sum_{Q\in\mathsf{Tree}(R),\,P\subset
Q}\mathcal{E}_{G}(Q)\geq\delta\mathcal{H}^{1}(J).$
Hence,
$\int\sum_{P\in\mathsf{BCE}(R)}\sum_{Q\in\mathsf{Tree}(R),\,P\subset
Q}\mathcal{E}_{G}(Q)\mathds{1}_{P}(x)\,d\mu(x)\geq\delta\mathcal{H}^{1}(J)\sum_{P\in\mathsf{BCE}(R)}\int\mathds{1}_{P}(x)\,d\mu(x)\\\
=\delta\mathcal{H}^{1}(J)\sum_{P\in\mathsf{BCE}(R)}\mu(P).$
Together with (6.6), this gives the desired estimate. ∎
### 6.2. Corona decomposition
We are ready to perform the corona decomposition. Let $k(J)\in\mathbb{Z}$ be
the largest integer such that for $Q\in\mathcal{D}_{k(J)}$ we have
$\mathscr{L}(Q)=4\mathcal{H}^{1}(J)^{-1}\rho^{k(J)}\geq 1.$
Set $\mathcal{D}_{*}=\bigcup_{k\geq k(J)}\mathcal{D}_{k}$, and
$\mathsf{Top}_{0}=\mathcal{D}_{k(J)}.$
If $\mathsf{Top}_{k}$ has already been defined, we set
$\mathsf{Top}_{k+1}=\bigcup_{R\in\mathsf{Top}_{k}}\bigcup_{Q\in\mathsf{BCE}(R)}\mathsf{Ch}(Q).$
Finally,
$\mathsf{Top}=\bigcup_{k\geq 0}\mathsf{Top}_{k}.$
Observe that
$\bigcup_{R\in\mathsf{Top}}\mathsf{Tree}(R)=\mathcal{D}_{*}.$
The following is a fairly standard computation.
###### Lemma 6.3.
We have
$\mathcal{H}^{1}(J)\mu(E)+\int_{E}\int_{0}^{1}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)\\\
\sim_{A,M}\mathcal{H}^{1}(J)\mu(E)+\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{G}(Q)\mu(Q).$
(6.7)
###### Proof.
Fix $k\geq k(J)$. Using the fact that for $Q\in\mathcal{D}_{k}$ the rectangles
$2A\mathcal{R}_{Q}$ have only bounded overlaps (with bound depending on $A$),
we have
$\sum_{Q\in\mathcal{D}_{k}}\mathcal{E}_{G}(Q)\mu(Q)\sim_{A}\int_{E}\int_{4A^{-1}\mathcal{H}^{1}(J)^{-1}\rho^{k}}^{4A^{3}\mathcal{H}^{1}(J)^{-1}\rho^{k}}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x).$
Summing over $k\geq k(J)$ we get
$\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{G}(Q)\mu(Q)\sim_{A}\int_{E}\int_{0}^{4A^{3}\mathcal{H}^{1}(J)^{-1}\rho^{k(J)}}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x).$
Recalling that $1\leq 4\mathcal{H}^{1}(J)^{-1}\rho^{k(J)}\lesssim 1$, we get
that
$\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{G}(Q)\mu(Q)\sim_{A}\int_{E}\int_{0}^{CA^{3}}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)$
for some constant $1\leq C\lesssim 1$. This is obviously no smaller than the
integral on the left-hand side of (6.7).
To see the converse estimate, note that for $r>1$ we have $X(x,G,r)\cap
E\subset\mathcal{R}(x,2\mathcal{H}^{1}(J))$, so that
$\int_{E}\int_{1}^{CA^{3}}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)\lesssim\int_{E}\int_{1}^{CA^{3}}\frac{\mu(\mathcal{R}(x,2\mathcal{H}^{1}(J)))}{r}\,\frac{dr}{r}d\mu(x)\\\
\overset{\eqref{eq:ADRrectangles}}{\lesssim}M\mathcal{H}^{1}(J)\int_{E}\int_{1}^{CA^{3}}\frac{1}{r^{2}}\,drd\mu(x)\lesssim
M\mathcal{H}^{1}(J)\mu(E).$
∎
The family $\mathsf{Top}$ satisfies the following packing condition.
###### Lemma 6.4.
We have
$\sum_{R\in\mathsf{Top}}\mu(R)\lesssim_{\delta,A}(\mathcal{H}^{1}(J))^{-1}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)+\mu(E).$
(6.8)
###### Proof.
First, we use the fact that the cubes $R\in\mathsf{Top}_{0}$ are pairwise
disjoint to estimate
$\sum_{R\in\mathsf{Top}_{0}}\mu(R)\leq\mu(E).$
This gives the second term on the right hand side of (6.8).
Moving on to $\mathsf{Top}\setminus\mathsf{Top}_{0}$, we compute
$\sum_{R\in\mathsf{Top}\setminus\mathsf{Top}_{0}}\mu(R)=\sum_{k\geq
0}\sum_{R\in\mathsf{Top}_{k+1}}\mu(R)=\sum_{k\geq
0}\sum_{R\in\mathsf{Top}_{k}}\sum_{Q\in\mathsf{BCE}(R)}\sum_{P\in\mathsf{Ch}(Q)}\mu(P)\\\
=\sum_{k\geq
0}\sum_{R\in\mathsf{Top}_{k}}\sum_{Q\in\mathsf{BCE}(R)}\mu(Q)\overset{\eqref{eq:energy
lower bound}}{\leq}(\delta\mathcal{H}^{1}(J))^{-1}\sum_{k\geq
0}\sum_{R\in\mathsf{Top}_{k}}\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{G}(Q)\mu(Q)\\\
=(\delta\mathcal{H}^{1}(J))^{-1}\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{G}(Q)\mu(Q)\\\
\lesssim_{A,M}(\delta\mathcal{H}^{1}(J))^{-1}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)+\delta^{-1}\mu(E).$
∎
Consider the following conical energy associated to $3J$:
$\mathcal{E}_{J}(Q)\coloneqq\frac{1}{\mu(Q)}\int_{Q}\int_{\rho\mathscr{L}(Q)}^{\mathscr{L}(Q)}\frac{\mu(X(x,3J,r))}{r}\,\frac{dr}{r}d\mu(x).$
Arguing as in (6.7), it is easy to show that
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J,r))}{r}\,\frac{dr}{r}d\mu(x)\lesssim\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{J}(Q)\mu(Q).$
(6.9)
We divide the conical energy $\mathcal{E}_{J}(Q)$ into an “interior” and
“exterior” part, which will be dealt with separately:
$\displaystyle\mathcal{E}^{{int}}_{J}(Q)$
$\displaystyle\coloneqq\frac{1}{\mu(Q)}\int_{Q}\int_{\rho\mathscr{L}(Q)}^{\mathscr{L}(Q)}\frac{\mu(X(x,0.5J,r))}{r}\,\frac{dr}{r}d\mu(x),$
$\displaystyle\mathcal{E}^{{ext}}_{J}(Q)$
$\displaystyle\coloneqq\frac{1}{\mu(Q)}\int_{Q}\int_{\rho\mathscr{L}(Q)}^{\mathscr{L}(Q)}\frac{\mu(X(x,3J\setminus
0.5J,r))}{r}\,\frac{dr}{r}d\mu(x).$
We also define the following modification of $\mathcal{E}_{J}^{ext}(Q)$:
$\widetilde{\mathcal{E}}_{J}^{ext}(Q)\coloneqq\frac{1}{\mu(Q)}\int_{Q}\frac{\mu(X(x,3J\setminus
0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q)))}{\mathscr{L}(Q)}\,d\mu(x).$
###### Lemma 6.5.
We have
$\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{J}^{ext}(Q)\mu(Q)\lesssim\sum_{Q\in\mathcal{D}_{*}}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q).$
(6.10)
###### Proof.
Given $x\in Q$, we set
$X(x,Q)=X(x,3J\setminus 0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q)).$
If $Q=Q_{0}(x)\supset Q_{1}(x)\supset Q_{2}(x)\supset\dots$ is a sequence of
cubes such that for all $i\in\mathbb{N}$ we have
$Q_{i+1}(x)\in\mathsf{Ch}(Q_{i}(x))$ and $x\in Q_{i}(x)$, then
$\mu(X(x,3J\setminus
0.5J,\mathscr{L}(Q)))=\sum_{i\in\mathbb{N}}\mu(X(x,Q_{i}(x))).$
Thus, for $x\in Q$ and $\rho\mathscr{L}(Q)<r<\mathscr{L}(Q)$
$\frac{\mu(X(x,3J\setminus
0.5J,r))}{r}\lesssim\sum_{i\in\mathbb{N}}\frac{\mu(X(x,Q_{i}(x)))}{\mathscr{L}(Q)}=\sum_{i\in\mathbb{N}}\frac{\mu(X(x,Q_{i}(x)))}{\mathscr{L}(Q_{i}(x))}\cdot\frac{\ell(Q_{i}(x))}{\ell(Q)}.$
Integrating over $x\in Q$ and $\rho\mathscr{L}(Q)<r<\mathscr{L}(Q)$ yields
$\mathcal{E}_{J}^{ext}(Q)\mu(Q)\lesssim\sum_{P\in\mathcal{D}(Q)}\widetilde{\mathcal{E}}_{J}^{ext}(P)\mu(P)\frac{\ell(P)}{\ell(Q)}.$
We sum over $Q\in\mathcal{D}_{*}$ and conclude that
$\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{J}^{ext}(Q)\mu(Q)\lesssim\sum_{Q\in\mathcal{D}_{*}}\sum_{P\in\mathcal{D}(Q)}\widetilde{\mathcal{E}}_{J}^{ext}(P)\mu(P)\frac{\ell(P)}{\ell(Q)}\\\
=\sum_{P\in\mathcal{D}_{*}}\widetilde{\mathcal{E}}_{J}^{ext}(P)\mu(P)\sum_{Q\in\mathcal{D}_{*},\,Q\supset
P}\frac{\ell(P)}{\ell(Q)}\lesssim\sum_{P\in\mathcal{D}_{*}}\widetilde{\mathcal{E}}_{J}^{ext}(P)\mu(P),$
where in the last inequality we used the fact that the inner sum is a
geometric series. ∎
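For completeness, the geometric series invoked at the end of the proof above: the cubes $Q\in\mathcal{D}_{*}$ containing a fixed $P$ form a chain along which $\ell(Q)$ increases by factors of $\rho^{-1}$, so that

```latex
\sum_{Q\in\mathcal{D}_{*},\,Q\supset P}\frac{\ell(P)}{\ell(Q)}
\leq\sum_{j\geq 0}\rho^{j}
=\frac{1}{1-\rho}\lesssim 1,
\qquad\rho=\tfrac{1}{1000}.
```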
We will prove the following estimates for the interior and exterior energies.
###### Lemma 6.6.
If $\varepsilon=\varepsilon(M,C_{0})$ is chosen small enough, then for any
$R\in\mathsf{Top}$ we have
$\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{J}^{int}(Q)\mu(Q)\lesssim_{C_{0}}\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{G}(Q)\mu(Q).$
(6.11)
Furthermore, if $A=A(C_{0},M)$ is chosen big enough, and
$\delta=\delta(A,M,C_{0})$ is chosen small enough, then
$\sum_{Q\in\mathsf{Tree}(R)}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)\lesssim_{C_{0},M}\mathcal{H}^{1}(J)\mu(R).$
(6.12)
We prove (6.11) in Section 7, and (6.12) in Section 8. Now we show how
Proposition 4.1 follows from the estimates above.
###### Proof of Proposition 4.1.
Recall that our goal is to prove
$\int_{E}\int_{0}^{1}\frac{\mu(X(x,3J,r))}{r}\,\frac{dr}{r}d\mu(x)\\\
\lesssim_{C_{0},M}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)+\mathcal{H}^{1}(J)\mu(E).$
(6.13)
By (6.9), the left hand side is bounded by
$\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{J}(Q)\mu(Q)=\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{J}^{int}(Q)\mu(Q)+\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{J}^{ext}(Q)\mu(Q)\\\
\overset{\eqref{eq:est4}}{\lesssim}\sum_{Q\in\mathcal{D}_{*}}\mathcal{E}_{J}^{int}(Q)\mu(Q)+\sum_{Q\in\mathcal{D}_{*}}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)\\\
=\sum_{R\in\mathsf{Top}}\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{J}^{int}(Q)\mu(Q)+\sum_{R\in\mathsf{Top}}\sum_{Q\in\mathsf{Tree}(R)}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)\eqqcolon
S_{1}+S_{2}.$
To estimate $S_{1}$, we apply (6.11) and (6.7) to conclude
$S_{1}\lesssim\sum_{R\in\mathsf{Top}}\sum_{Q\in\mathsf{Tree}(R)}\mathcal{E}_{G}(Q)\mu(Q)\lesssim_{A,M}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)+\mathcal{H}^{1}(J)\mu(E).$
Regarding $S_{2}$, using (6.12) and (6.8) yields
$S_{2}\lesssim_{M}\sum_{R\in\mathsf{Top}}\mathcal{H}^{1}(J)\mu(R)\lesssim_{A,\delta}\int_{E}\int_{0}^{1}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)+\mathcal{H}^{1}(J)\mu(E).$
Recalling that $\delta=\delta(A,M,C_{0})$ and $A=A(C_{0},M)$, this gives
(6.13). ∎
## 7\. Estimating interior energy and obtaining good cones
### 7.1. Interior energy estimates
Recall that in Proposition 4.1, assumption (e), we assumed that $G$ is closed,
and that for every interval $I$ comprising $J\setminus G$ there exists
$\theta_{I}\in 3I$ such that $\|\pi_{\theta_{I}}^{\perp}\mu\|_{\infty}\leq M$.
We use this property in the following lemma, which is the first step in
estimating $\mathcal{E}_{J}^{int}(Q)$.
###### Lemma 7.1.
For any $x\in\mathbb{R}^{2}$ and $0<r<\infty$ we have
$\mu(X(x,J\setminus G,r))\lesssim M\mathcal{H}^{1}(J\setminus G)\,r.$
In particular, since $\mathcal{H}^{1}(J\setminus
G)\leq\varepsilon\mathcal{H}^{1}(J)$, we have
$\mu(X(x,J\setminus G,r))\lesssim M\varepsilon\mathcal{H}^{1}(J)\,r.$ (7.1)
###### Proof.
Let $\mathcal{B}$ denote the intervals comprising $J\setminus G$, so that for
every $I\in\mathcal{B}$ there exists $\theta_{I}\in 3I$ such that
$\|\pi_{\theta_{I}}^{\perp}\mu\|_{\infty}\leq M$. Clearly,
$X(x,J\setminus G,r)=\bigcup_{I\in\mathcal{B}}X(x,I,r).$
Observe that each truncated cone $X(x,I,r)$ is contained in some rectangle
$\mathcal{R}_{I}$ which is centered at $x$, its direction
$\theta(\mathcal{R}_{I})\in\mathbb{T}$ coincides with the midpoint of $I$, and
it satisfies
$\ell(\mathcal{R}_{I})\sim\mathcal{H}^{1}(I)\,r,\,\mathscr{L}(\mathcal{R}_{I})\sim
r$. Since
$|\theta(\mathcal{R}_{I})-\theta_{I}|\leq
2\mathcal{H}^{1}(I)\sim\frac{\ell(\mathcal{R}_{I})}{\mathscr{L}(\mathcal{R}_{I})},$
we may use Lemma 5.1 (recall that
$\|\pi_{\theta_{I}}^{\perp}\mu\|_{\infty}\leq M$) to conclude that
$\mu(\mathcal{R}_{I})\lesssim M\ell(\mathcal{R}_{I})\sim
M\mathcal{H}^{1}(I)\,r.$
It follows that
$\mu(X(x,J\setminus G,r))\leq\sum_{I\in\mathcal{B}}\mu(X(x,I,r))\\\
\leq\sum_{I\in\mathcal{B}}\mu(\mathcal{R}_{I})\lesssim
Mr\sum_{I\in\mathcal{B}}\mathcal{H}^{1}(I)=M\mathcal{H}^{1}(J\setminus G)\,r.$
∎
###### Lemma 7.2.
If $\varepsilon=\varepsilon(M,C_{0})$ is small enough, then for any $x\in E$
and $0<r<\infty$ we have
$\mu(X(x,0.9J,r))\lesssim_{C_{0}}\mu(X(x,G,2r)).$ (7.2)
In particular, $\mathcal{E}_{J}^{int}(Q)\lesssim_{C_{0}}\mathcal{E}_{G}(Q)$,
and so (6.11) holds.
###### Proof.
If $X(x,0.9J,r)\cap E=\\{x\\}$, then there is nothing to prove, so suppose
that $X(x,0.9J,r)\cap E\neq\\{x\\}$.
Let $y\in X(x,0.9J,r)\cap E\setminus\\{x\\}$, and let $0<r_{0}\leq r/2$ be
such that $y\in E\cap X(x,0.9J,r_{0},2r_{0})$. Set
$r_{y}=c\mathcal{H}^{1}(J)\,r_{0}$ for some small absolute constant $c>0$, and
observe that if $c$ is chosen small enough, then $B(y,r_{y})\subset
X(x,J,r_{0}/2,4r_{0})$.
We use Lemma 7.1 to estimate
$\mu(B(y,r_{y})\cap X(x,J\setminus G,r_{0}/2,4r_{0}))\leq\mu(X(x,J\setminus
G,r_{0}/2,4r_{0}))\\\
\overset{\eqref{eq:littlemeas}}{\lesssim}M\varepsilon\mathcal{H}^{1}(J)r_{0}\sim
M\varepsilon r_{y}.$
On the other hand, since $y\in E,r_{y}<r_{0}<\operatorname{diam}(E)=1$, and
$B(y,r_{y})\subset X(x,J,r_{0}/2,4r_{0})$, we get from AD-regularity of $E$
that
$\mu(B(y,r_{y})\cap X(x,J,r_{0}/2,4r_{0}))=\mu(B(y,r_{y}))\gtrsim
C_{0}^{-1}r_{y}.$
The two estimates together give
$C_{0}^{-1}r_{y}\lesssim\mu(B(y,r_{y})\cap X(x,J,r_{0}/2,4r_{0}))\\\
=\mu(B(y,r_{y})\cap X(x,G,r_{0}/2,4r_{0}))+\mu(B(y,r_{y})\cap X(x,J\setminus
G,r_{0}/2,4r_{0}))\\\ \leq\mu(B(y,r_{y})\cap
X(x,G,r_{0}/2,4r_{0}))+CM\varepsilon r_{y}.$
Hence, assuming $\varepsilon=\varepsilon(M,C_{0})$ small enough, we may absorb
the second term on the right hand side to the left hand side, which gives
$\mu(B(y,r_{y})\cap X(x,G,2r))\geq\mu(B(y,r_{y})\cap X(x,G,r_{0}/2,4r_{0}))\\\
\gtrsim C_{0}^{-1}r_{y}\sim_{C_{0}}\mu(B(y,r_{y})).$ (7.3)
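To be precise about the absorption: writing $c_{1}$ for the implicit constant in the lower bound and $C$ for the one from the previous display (both names hypothetical, used only in this sketch), a sufficient choice is $\varepsilon\leq c_{1}/(2CMC_{0})$:

```latex
c_{1}C_{0}^{-1}r_{y}\leq\mu\big(B(y,r_{y})\cap X(x,G,r_{0}/2,4r_{0})\big)+CM\varepsilon r_{y}
\ \Longrightarrow\
\mu\big(B(y,r_{y})\cap X(x,G,r_{0}/2,4r_{0})\big)\geq\frac{c_{1}}{2}\,C_{0}^{-1}r_{y},
% since CM\varepsilon\leq c_{1}/(2C_{0}) for this choice of \varepsilon.
```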
Now consider the family of balls
$\mathcal{B}=\{B(y,r_{y})\ :\ y\in
X(x,0.9J,r)\cap E\setminus\{x\}\}.$
By the $5r$-covering lemma, we may find a countable sub-collection
$\mathcal{B}^{\prime}=\\{B(y_{i},r_{y_{i}})\\}_{i\in\mathcal{I}}$ of pairwise
disjoint balls such that $\\{B(y_{i},5r_{y_{i}})\\}_{i\in\mathcal{I}}$ covers
all of $X(x,0.9J,r)\cap E\setminus\\{x\\}$. Then,
$\mu(X(x,0.9J,r)\cap
E)\leq\mu\bigg{(}\bigcup_{i\in\mathcal{I}}B(y_{i},5r_{y_{i}})\bigg{)}\leq\sum_{i\in\mathcal{I}}\mu(B(y_{i},5r_{y_{i}}))\\\
\sim_{C_{0}}\sum_{i\in\mathcal{I}}\mu(B(y_{i},r_{y_{i}}))\overset{\eqref{eq:est2}}{\lesssim_{C_{0}}}\sum_{i\in\mathcal{I}}\mu(B(y_{i},r_{y_{i}})\cap
X(x,G,2r))\\\ =\mu\bigg{(}\bigcup_{i\in\mathcal{I}}B(y_{i},r_{y_{i}})\cap
X(x,G,2r)\bigg{)}\leq\mu(X(x,G,2r)).$
∎
### 7.2. Obtaining good cones
We will say that a (possibly truncated) cone $X$ is _good_ if it satisfies
$X\cap E=\varnothing.$
Similarly, we will say that a rectangle $\mathcal{R}$ is good if
$\mathcal{R}\cap E=\varnothing$.
Having plenty of good cones and rectangles will be crucial for estimating the
exterior energy $\widetilde{\mathcal{E}}_{J}^{ext}(Q)$ in Section 8. In the
lemma below we use Lemma 7.2 and the $\mathsf{BCE}$-stopping condition to find
many good cones.
###### Lemma 7.3.
If the $\mathsf{BCE}$-parameter $\delta=\delta(A,M,C_{0})\in(0,1)$ is chosen
small enough, then for all
$R\in\mathsf{Top},\,Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R),$ and $x\in
A\mathcal{R}_{Q}\cap E$ we have
$X(x,0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R))\cap E=\varnothing.$
###### Proof.
Assume the contrary: let $Q\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R),\ x\in
A\mathcal{R}_{Q}\cap E$, and $y\in
X(x,0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R))\cap E$.
Let $P\in\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)$ be such that $Q\subset P$
and $y\in X(x,0.5J,A^{-1}\mathscr{L}(P),A^{2}\mathscr{L}(P))$, so that in
particular
$A^{-1}\mathscr{L}(P)\leq|x-y|\leq A^{2}\mathscr{L}(P).$
Set
$r_{0}\coloneqq A^{-2}\ell(P)=A^{-2}\mathcal{H}^{1}(J)\mathscr{L}(P)\leq
A^{-1}\mathcal{H}^{1}(J)|x-y|.$ (7.4)
We claim that if $A$ is chosen big enough, then for all $x^{\prime}\in
B(x,r_{0})$ we have
$B(y,r_{0})\subset X(x^{\prime},0.9J,2A^{2}\mathscr{L}(P)).$ (7.5)
This is a simple geometric observation; see Figure 7.1. The rigorous
computation goes as follows: first, observe that if $x^{\prime}\in
B(x,r_{0}),\ y^{\prime}\in B(y,r_{0})$, then
$|x^{\prime}-y^{\prime}|\geq|x-y|-2r_{0}\overset{\eqref{eq:r0}}{\geq}A\mathcal{H}^{1}(J)^{-1}r_{0}-2r_{0}\geq\frac{A}{2\mathcal{H}^{1}(J)}r_{0}.$
Thus, using the fact that $y\in X(x,0.5J)$,
$|\pi_{0}(x^{\prime})-\pi_{0}(y^{\prime})|\leq|\pi_{0}(x)-\pi_{0}(y)|+2r_{0}\overset{\eqref{eq:cone
algebraic}}{\leq}\sin\bigg{(}\frac{\mathcal{H}^{1}(J)}{2}\pi\bigg{)}|x-y|+2r_{0}\\\
\leq\sin\bigg{(}\frac{\mathcal{H}^{1}(J)}{2}\pi\bigg{)}|x^{\prime}-y^{\prime}|+4r_{0}\leq\bigg{(}\sin\bigg{(}\frac{\mathcal{H}^{1}(J)}{2}\pi\bigg{)}+\frac{8\mathcal{H}^{1}(J)}{A}\bigg{)}|x^{\prime}-y^{\prime}|\\\
\leq\sin\big{(}0.9\mathcal{H}^{1}(J)\pi\big{)}|x^{\prime}-y^{\prime}|,$
assuming $A$ large enough. This shows $y^{\prime}\in X(x^{\prime},0.9J)$. We
also have $y^{\prime}\in B(x^{\prime},2A^{2}\mathscr{L}(P))$ because
$|x^{\prime}-y^{\prime}|\leq|x-y|+2r_{0}\leq
A^{2}\mathscr{L}(P)+2A^{-2}\ell(P)\leq 2A^{2}\mathscr{L}(P).$
This gives the claim (7.5).
Figure 7.1. We have $B(y,r_{0})\subset X(x^{\prime},0.9J,2A^{2}\mathscr{L}(P))$.
Since $x\in A\mathcal{R}_{P}$ and $B(x,r_{0})\subset 2A\mathcal{R}_{P}$, we
get from Lemma 7.2 that
$\displaystyle\mathcal{E}_{G}(P)\mu(P)$
$\displaystyle=\int_{2A\mathcal{R}_{P}}\int_{A^{-1}\mathscr{L}(P)}^{A^{3}\mathscr{L}(P)}\frac{\mu(X(x^{\prime},G,r))}{r}\,\frac{dr}{r}d\mu(x^{\prime})$
$\displaystyle\overset{\eqref{eq:filling
gaps}}{\gtrsim}_{C_{0}}\int_{2A\mathcal{R}_{P}}\int_{2A^{2}\mathscr{L}(P)}^{4A^{2}\mathscr{L}(P)}\frac{\mu(X(x^{\prime},0.9J,r))}{r}\,\frac{dr}{r}d\mu(x^{\prime})$
$\displaystyle\geq\int_{B(x,r_{0})}\int_{2A^{2}\mathscr{L}(P)}^{4A^{2}\mathscr{L}(P)}\frac{\mu(X(x^{\prime},0.9J,r))}{r}\,\frac{dr}{r}d\mu(x^{\prime})$
$\displaystyle\gtrsim_{A}\int_{B(x,r_{0})}\int_{2A^{2}\mathscr{L}(P)}^{4A^{2}\mathscr{L}(P)}\frac{\mu(B(y,r_{0}))}{\mathscr{L}(P)}\,\frac{dr}{r}d\mu(x^{\prime})$
$\displaystyle\geq\frac{\mu(B(x,r_{0}))\mu(B(y,r_{0}))}{\mathscr{L}(P)}\gtrsim\frac{C_{0}^{-2}r_{0}^{2}}{\mathscr{L}(P)}\sim_{C_{0},A}\frac{\ell(P)^{2}}{\mathscr{L}(P)}=\mathcal{H}^{1}(J)\ell(P).$
Hence,
$\mathcal{E}_{G}(P)\gtrsim_{C_{0},A}\mathcal{H}^{1}(J)\frac{\ell(P)}{\mu(P)}\gtrsim_{M}\mathcal{H}^{1}(J).$
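The last inequality hides the upper regularity bound $\mu(P)\lesssim M\ell(P)$; a sketch, assuming, as elsewhere in the text (cf. the bound $\mu(\mathcal{R}(x,3\ell(Q)))\lesssim M\ell(Q)$ used in Section 8), that $P$ is contained in a rectangle $\mathcal{R}(x_{P},C\ell(P))$ around some point $x_{P}\in P$ ($x_{P}$ is our label):

```latex
% Upper regularity of \mu on rectangles of width ~ \ell(P):
\mu(P)\leq\mu(\mathcal{R}(x_{P},C\ell(P)))\lesssim M\ell(P)
\quad\Longrightarrow\quad
\frac{\ell(P)}{\mu(P)}\gtrsim\frac{1}{M},
% which turns \mathcal{H}^{1}(J)\,\ell(P)/\mu(P) into \gtrsim_{M}\mathcal{H}^{1}(J).
```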
Recall that $\mathcal{E}_{G}(P)\leq\delta\mathcal{H}^{1}(J)$ because
$P\notin\mathsf{BCE}(R)$ (see the $\mathsf{BCE}$ stopping condition (6.3)).
Assuming $\delta=\delta(A,M,C_{0})$ small enough, we arrive at a
contradiction. ∎
For brevity of notation, for $R\in\mathsf{Top}$ we define
$\mathcal{T}(R)=\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)$ and
$\mathcal{T}_{k}(R)=\mathcal{T}(R)\cap\mathcal{D}_{k}.$
In the next two lemmas we show that for any integer $k\in\mathbb{Z}$, the
family of intervals
$\\{\pi_{0}(\mathcal{R}_{P})\ \mathrel{\mathop{\ordinarycolon}}\
P\in\mathcal{T}_{k}(R)\\}$
has bounded overlaps. In other words, if we fix generation $\mathcal{D}_{k}$,
then the rectangles associated to cubes in $\mathcal{T}_{k}(R)$ resemble a
graph over the horizontal line $\ell_{0}$. This will be useful in Section 8.
###### Lemma 7.4.
There exists an absolute constant $C>1$ such that the following holds. Suppose
that $R\in\mathcal{D}_{*}$ and $Q\neq P\in\mathcal{D}(R)$ are such that
$\ell(Q)=\ell(P),$ and
$X(z,0.5J,\rho\mathscr{L}(Q),\mathscr{L}(R))\cap E=\varnothing\quad\text{for
all $z\in E\cap 2\mathcal{R}_{Q}$.}$ (7.6)
If $\pi_{0}(\mathcal{R}_{Q})\cap\pi_{0}(\mathcal{R}_{P})\neq\varnothing$, then
$\mathcal{R}_{P}\subset C\mathcal{R}_{Q}$.
Note that the assumptions above are in particular satisfied for any
$Q,P\in\mathcal{D}_{k}\cap\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)$, by Lemma
7.3.
###### Proof.
Let $y_{Q}\in Q,y_{P}\in P,$ and suppose there exists
$z_{Q}\in\mathcal{R}_{Q}$ and $z_{P}\in\mathcal{R}_{P}$ such that
$\pi_{0}(z_{Q})=\pi_{0}(z_{P})$. Then, we have
$|\pi_{0}(y_{Q})-\pi_{0}(y_{P})|=|\pi_{0}(y_{Q}-z_{Q})-\pi_{0}(y_{P}-z_{P})-\pi_{0}(z_{P}-z_{Q})|\\\
\leq|\pi_{0}(y_{Q}-z_{Q})|+|\pi_{0}(y_{P}-z_{P})|+|\pi_{0}(z_{P}-z_{Q})|\leq\ell(Q)+\ell(P)+0=2\ell(Q).$
We claim that $|\pi_{0}^{\perp}(y_{Q})-\pi_{0}^{\perp}(y_{P})|\leq
C^{\prime}\mathscr{L}(Q)$ for some big absolute $C^{\prime}>1$. Indeed, if
that was not the case, then the previous computation gives
$|\pi_{0}(y_{Q})-\pi_{0}(y_{P})|\leq
2\ell(Q)=2\mathcal{H}^{1}(J)\mathscr{L}(Q)\leq\frac{2\mathcal{H}^{1}(J)}{C^{\prime}}|y_{Q}-y_{P}|.$
Taking $C^{\prime}>1$ large enough, we arrive at
$y_{P}\in X(y_{Q},0.5J,\rho\mathscr{L}(Q),\mathscr{L}(R)),$
which is a contradiction with (7.6). Hence,
$|\pi_{0}^{\perp}(y_{Q})-\pi_{0}^{\perp}(y_{P})|\leq
C^{\prime}\mathscr{L}(Q)$.
Recall that $x_{Q}$ is the center of $\mathcal{R}_{Q}$. It follows easily from
the estimates above that for any $x\in\mathcal{R}_{P}$
$|\pi_{0}(x)-\pi_{0}(x_{Q})|\leq|\pi_{0}(x)-\pi_{0}(y_{P})|+|\pi_{0}(y_{P})-\pi_{0}(y_{Q})|+|\pi_{0}(y_{Q})-\pi_{0}(x_{Q})|\\\
\leq\ell(P)+2\ell(Q)+\ell(Q)=4\ell(Q),$
and
$|\pi_{0}^{\perp}(x)-\pi_{0}^{\perp}(x_{Q})|\leq|\pi_{0}^{\perp}(x)-\pi_{0}^{\perp}(y_{P})|+|\pi_{0}^{\perp}(y_{P})-\pi_{0}^{\perp}(y_{Q})|+|\pi_{0}^{\perp}(y_{Q})-\pi_{0}^{\perp}(x_{Q})|\\\
\leq\mathscr{L}(P)+C^{\prime}\mathscr{L}(Q)+\mathscr{L}(Q)\lesssim\mathscr{L}(Q).$
Thus, $\mathcal{R}_{P}\subset C\mathcal{R}_{Q}$ for some absolute $C>1$. ∎
Recall that for $Q\in\mathcal{D}_{k}$ we have $\ell(Q)=4\rho^{k}$.
###### Lemma 7.5.
Let $R\in\mathsf{Top}$ and $k\geq 0$. Then, the family of intervals
$\\{\pi_{0}(\mathcal{R}_{P})\\}_{P\in\mathcal{T}_{k}(R)}$ has bounded
overlaps, i.e.
$\sum_{P\in\mathcal{T}_{k}(R)}\mathds{1}_{\pi_{0}(\mathcal{R}_{P})}(x)\lesssim
1\quad\text{for all $x\in\mathbb{R}$}.$ (7.7)
In particular, for any interval $K\subset\mathbb{R}$ we have
$\\#\big{\\{}P\in\mathcal{T}_{k}(R)\ \mathrel{\mathop{\ordinarycolon}}\
\pi_{0}(\mathcal{R}_{P})\subset
K\big{\\}}\lesssim\frac{\mathcal{H}^{1}(K)}{\rho^{k}}.$ (7.8)
###### Proof.
Fix $Q\in\mathcal{T}_{k}(R)$. Suppose that $P\in\mathcal{T}_{k}(R)$ satisfies
$\pi_{0}(\mathcal{R}_{Q})\cap\pi_{0}(\mathcal{R}_{P})\neq\varnothing$. We know
from Lemma 7.3 that $Q$ and $P$ satisfy (7.6), and so it follows from Lemma 7.4
that $\mathcal{R}_{P}\subset C\mathcal{R}_{Q}$. It remains to observe that
$\\#\\{P\in\mathcal{T}(R)\cap\mathcal{D}_{k}\
\mathrel{\mathop{\ordinarycolon}}\ \mathcal{R}_{P}\subset
C\mathcal{R}_{Q}\\}\lesssim_{C}1.$
This gives (7.7).
To see (7.8), we compute
$\\#\big{\\{}P\in\mathcal{T}_{k}(R)\ \mathrel{\mathop{\ordinarycolon}}\
\pi_{0}(\mathcal{R}_{P})\subset
K\big{\\}}\leq\sum_{P\in\mathcal{T}_{k}(R)}\frac{1}{\rho^{k}}\int_{K}\mathds{1}_{\pi_{0}(\mathcal{R}_{P})}(x)\,dx\\\
=\frac{1}{\rho^{k}}\int_{K}\sum_{P\in\mathcal{T}_{k}(R)}\mathds{1}_{\pi_{0}(\mathcal{R}_{P})}(x)\,dx\overset{\eqref{eq:bounded-
intersection}}{\lesssim}\frac{\mathcal{H}^{1}(K)}{\rho^{k}}.$
∎
## 8\. Estimating exterior energy
Recall that
$\widetilde{\mathcal{E}}_{J}^{ext}(Q)=\frac{1}{\mu(Q)}\int_{Q}\frac{\mu(X(x,3J\setminus
0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q)))}{\mathscr{L}(Q)}\,d\mu(x).$
Our goal is to prove the following.
###### Lemma 8.1.
If $A=A(C_{0},M)$ is chosen large enough, then for any $R\in\mathsf{Top}$ we
have
$\sum_{Q\in\mathsf{Tree}(R)}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)\lesssim_{C_{0},M}\mathcal{H}^{1}(J)\mu(R).$
This estimate will follow from the key geometric lemma below. In order to
state it, we introduce some notation.
###### Definition 8.2.
For $R\in\mathcal{D}_{*}$ we define $U(R)\subset\mathbb{R}$ as
$\displaystyle U(R)$
$\displaystyle\coloneqq\pi_{0}(A\mathcal{R}_{R})\setminus\pi_{0}(A\mathcal{R}_{R}\cap
E)$ $\displaystyle=\big{[}\pi_{0}(x_{R})-A\ell(R)/2,\
\pi_{0}(x_{R})+A\ell(R)/2\big{]}\setminus\pi_{0}(A\mathcal{R}_{R}\cap E).$
Denote by $\mathsf{Gap}(R)$ the family of connected components of $U(R)$.
Since $E$ is closed, the elements of $\mathsf{Gap}(R)$ are intervals. We will
call them _gaps in $\pi_{0}(A\mathcal{R}_{R}\cap E)$_.
Since the gaps are disjoint, and they have positive length, we get that
$\mathsf{Gap}(R)$ is at most countable, and also
$\sum_{K\in\mathsf{Gap}(R)}\mathcal{H}^{1}(K)\leq\mathcal{H}^{1}(U(R))\leq\mathcal{H}^{1}(\pi_{0}(A\mathcal{R}_{R}))=A\ell(R).$
(8.1)
Given $0<r<\ell(R)$ we define the collection of gaps with length comparable to
$r$ as
$\mathsf{Gap}(R,r)=\\{K\in\mathsf{Gap}(R)\ \mathrel{\mathop{\ordinarycolon}}\
A^{-1}r\leq\mathcal{H}^{1}(K)\leq Ar\\}.$
###### Definition 8.3.
For $R\in\mathcal{D}_{*}$, we define the family
$\mathsf{Bad}(R)\subset\mathcal{D}(R)$ as the family of cubes
$Q\in\mathcal{D}(R)$ for which there exists $x\in Q$ such that
$X(x,3J\setminus 0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q))\cap
E\neq\varnothing.$
Observe that if $Q\notin\mathsf{Bad}(R)$, then
$\widetilde{\mathcal{E}}_{J}^{ext}(Q)=\frac{1}{\mu(Q)}\int_{Q}\frac{\mu(X(x,3J\setminus
0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q)))}{\mathscr{L}(Q)}\,d\mu(x)=0.$
The following is the key geometric lemma of this article.
###### Lemma 8.4.
If $A=A(C_{0},M)$ is chosen large enough, then the following holds. Suppose
that $R\in\mathcal{D}_{*}$ and $Q\in\mathcal{D}(R)$ are such that
$X(z,0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R))\cap
E=\varnothing\quad\text{for all $z\in A\mathcal{R}_{Q}\cap E$.}$ (8.2)
If $Q\in\mathsf{Bad}(R)$, then there is a gap $K\in\mathsf{Gap}(R,\ell(Q))$
such that
$\pi_{0}(\mathcal{R}_{Q})\subset A^{3}K.$
We defer the proof to the next section. Let us show how Lemma 8.1 follows from
Lemma 8.4.
###### Proof of Lemma 8.1.
Let $R\in\mathsf{Top}$. Our goal is to prove
$\sum_{Q\in\mathsf{Tree}(R)}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)\lesssim_{C_{0},M}\mathcal{H}^{1}(J)\mu(R).$
Recall that $\mathcal{T}(R)=\mathsf{Tree}(R)\setminus\mathsf{BCE}(R),\
\mathcal{T}_{k}(R)=\mathcal{T}(R)\cap\mathcal{D}_{k}$. If
$Q\notin\mathsf{Bad}(R)$, then $\widetilde{\mathcal{E}}_{J}^{ext}(Q)=0$
trivially, and so it suffices to show
$\sum_{Q\in\mathcal{T}(R)\cap\mathsf{Bad}(R)}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)+\sum_{Q\in\mathsf{BCE}(R)}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)\lesssim_{C_{0},M}\mathcal{H}^{1}(J)\mu(R).$
(8.3)
Observe that for any $x\in E$ we have
$\mu(X(x,3J\setminus
0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q)))\leq\mu(\mathcal{R}(x,3\ell(Q)))\overset{\eqref{eq:ADRrectangles}}{\lesssim}M\ell(Q),$
and so for any $Q\in\mathcal{D}_{*}$
$\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)=\int_{Q}\frac{\mu(X(x,3J\setminus
0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q)))}{\mathscr{L}(Q)}\,d\mu(x)\\\
\lesssim\frac{M\ell(Q)}{\mathscr{L}(Q)}\,\mu(Q)=M\mathcal{H}^{1}(J)\mu(Q).$
It follows that
$\sum_{Q\in\mathcal{T}(R)\cap\mathsf{Bad}(R)}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)+\sum_{Q\in\mathsf{BCE}(R)}\widetilde{\mathcal{E}}_{J}^{ext}(Q)\mu(Q)\\\
\lesssim
M\mathcal{H}^{1}(J)\bigg{(}\sum_{Q\in\mathcal{T}(R)\cap\mathsf{Bad}(R)}\mu(Q)+\sum_{Q\in\mathsf{BCE}(R)}\mu(Q)\bigg{)}.$
Thus, to reach (8.3), it suffices to show that the two sums on the right hand
side above are bounded by $C(C_{0},M)\mu(R)$. This is immediate for the second
sum:
$\sum_{Q\in\mathsf{BCE}(R)}\mu(Q)\leq\mu(R).$
What remains to show is that
$\sum_{Q\in\mathcal{T}(R)\cap\mathsf{Bad}(R)}\mu(Q)\lesssim_{C_{0},M}\mu(R).$
(8.4)
Let
$Q\in\mathcal{T}(R)\cap\mathsf{Bad}(R)\subset\mathsf{Tree}(R)\setminus\mathsf{BCE}(R)$.
By Lemma 7.3, $R$ and $Q$ satisfy the empty cone assumption (8.2), and so we
may use Lemma 8.4 to conclude that there is a gap
$K\in\mathsf{Gap}(R,\ell(Q))$ such that $\pi_{0}(\mathcal{R}_{Q})\subset
A^{3}K.$ Hence,
$\displaystyle\sum_{Q\in\mathcal{T}(R)\cap\mathsf{Bad}(R)}\mu(Q)$
$\displaystyle=\sum_{k\geq
0}\sum_{Q\in\mathcal{T}_{k}(R)\cap\mathsf{Bad}(R)}\mu(Q)$
$\displaystyle\leq\sum_{k\geq
0}\sum_{K\in\mathsf{Gap}(R,4\rho^{k})}\sum_{\begin{subarray}{c}Q\in\mathcal{T}_{k}(R),\\\
\pi_{0}(\mathcal{R}_{Q})\subset A^{3}K\end{subarray}}\mu(Q)$
$\displaystyle\lesssim\sum_{k\geq
0}\sum_{K\in\mathsf{Gap}(R,4\rho^{k})}\sum_{\begin{subarray}{c}Q\in\mathcal{T}_{k}(R),\\\
\pi_{0}(\mathcal{R}_{Q})\subset A^{3}K\end{subarray}}M\ell(Q)$
$\displaystyle\overset{\eqref{eq:bounded number of
intervals}}{\lesssim}\sum_{k\geq
0}\sum_{K\in\mathsf{Gap}(R,4\rho^{k})}M\rho^{k}\frac{\mathcal{H}^{1}(A^{3}K)}{\rho^{k}}$
$\displaystyle\sim_{A,M}\sum_{k\geq
0}\sum_{K\in\mathsf{Gap}(R,4\rho^{k})}\mathcal{H}^{1}(K)\sim_{A}\sum_{K\in\mathsf{Gap}(R)}\mathcal{H}^{1}(K)\overset{\eqref{eq:gap
lengths}}{\lesssim}_{A}\ell(R).$
Since $A=A(C_{0},M)$ and $\mu(R)\gtrsim C_{0}^{-1}\ell(R)$, this gives the
desired estimate (8.4). ∎
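For completeness, the comparability $\sum_{k\geq 0}\sum_{K\in\mathsf{Gap}(R,4\rho^{k})}\mathcal{H}^{1}(K)\sim_{A}\sum_{K\in\mathsf{Gap}(R)}\mathcal{H}^{1}(K)$ used in the chain above follows from the fact that a fixed gap belongs to $\mathsf{Gap}(R,4\rho^{k})$ for boundedly many $k$; a sketch:

```latex
% K \in Gap(R, 4\rho^{k}) means 4A^{-1}\rho^{k}\leq\mathcal{H}^{1}(K)\leq 4A\rho^{k}, i.e.
\frac{\mathcal{H}^{1}(K)}{4A}\leq\rho^{k}\leq\frac{A\,\mathcal{H}^{1}(K)}{4},
% an interval of multiplicative length A^{2}, so that
\#\{k\geq 0\ :\ K\in\mathsf{Gap}(R,4\rho^{k})\}\leq\frac{2\log A}{\log(1/\rho)}+1\lesssim_{A}1.
```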
## 9\. Proof of the key geometric lemma
In this section we prove Lemma 8.4.
### 9.1. Preliminaries
Suppose that $R\in\mathcal{D}_{*}$ and $Q\in\mathcal{D}(R)$ are as in the
assumptions of Lemma 8.4, so that they satisfy
$X(z,0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R))\cap
E=\varnothing\quad\text{for all $z\in A\mathcal{R}_{Q}\cap E$,}$ (9.1)
and assume that $Q\in\mathsf{Bad}(R)$, which means that there exists $x\in Q$
such that
$X(x,3J\setminus 0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q))\cap
E\neq\varnothing.$
Let $y\in X(x,3J\setminus 0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q))\cap E$.
See Figure 9.1 for an overview of our setup. The plan is as follows. We want
to find a gap $K\in\mathsf{Gap}(R,\ell(Q))$ such that
$\pi_{0}(\mathcal{R}_{Q})\subset A^{3}K.$
To achieve this, we will find a rectangle $\mathcal{Y}$ satisfying
$\mathcal{Y}\cap E=\varnothing$ (in our terminology: “$\mathcal{Y}$ is a good
rectangle”) of size roughly $\ell(Q)\times\mathscr{L}(R)$, such that
$\pi_{0}^{\perp}(\mathcal{Y})\supset\pi_{0}^{\perp}(A\mathcal{R}_{R})$, and
such that $\mathcal{Y}$ lies between $x$ and $y$, in the sense that
$\pi_{0}(x)$ and $\pi_{0}(y)$ lie on different sides of the interval
$\pi_{0}(\mathcal{Y})$. See the yellow rectangle in Figure 9.1. The properties
above tell us that
$\pi_{0}(\mathcal{Y})\cap\pi_{0}(A\mathcal{R}_{R}\cap E)=\varnothing,$
so that $\pi_{0}(\mathcal{Y})$ is contained in some interval
$K\in\mathsf{Gap}(R)$. One can also see that $K$ necessarily satisfies
$\mathcal{H}^{1}(K)\sim_{A}\ell(Q)$, so that $K\in\mathsf{Gap}(R,\ell(Q))$.
This will be our desired gap.
Figure 9.1. The big white rectangle is $A\mathcal{R}_{R}$, the small white rectangle is $\mathcal{R}_{Q}$, the orange double-truncated cone is $X(x,3J\setminus 0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q))$, the yellow rectangle is the desired good rectangle $\mathcal{Y}$.
The double-truncated cone $X(x,3J\setminus
0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q))$ has four connected components (see the
orange cone in Figure 9.1 or Figure 9.2). Without loss of generality, we may
assume that $y$ lies in the lower right connected component, so that
$\pi_{0}(x)<\pi_{0}(y)$ and $\pi_{0}^{\perp}(x)>\pi_{0}^{\perp}(y)$ (the proof
for other cases is completely analogous). Note that, since $y\in
X(x,3J\setminus 0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q))$, we have
$\pi_{0}(y)-\pi_{0}(x)\sim\ell(Q),$
and
$\pi_{0}^{\perp}(x)-\pi_{0}^{\perp}(y)\sim\mathscr{L}(Q).$
### 9.2. Finding a leftist rectangle
Recall that our desired good rectangle $\mathcal{Y}$ will be of size roughly
$\ell(Q)\times\mathscr{L}(R)$ and will satisfy
$\pi_{0}^{\perp}(\mathcal{Y})\supset\pi_{0}^{\perp}(A\mathcal{R}_{R})$. Note
that any good cone arising from (9.1) already _almost_ contains a rectangle
with these properties, except for a missing $\ell(Q)\times\mathscr{L}(Q)$
rectangle close to the center of the cone (see the red cone in Figure 9.6).
Our goal is to find an auxiliary good rectangle $\mathcal{B}$ of size roughly
$\ell(Q)\times\mathscr{L}(Q)$, which will fill the missing piece of the good
cone. See the blue rectangle in Figure 9.6.
The good rectangle $\mathcal{B}$ will be contained in something we called “a
leftist rectangle”. In order to define it, we first consider the rectangle
$\mathcal{G}\coloneqq\bigg{\\{}z\in\mathbb{R}^{2}\,\mathrel{\mathop{\ordinarycolon}}\,\pi_{0}(x)\leq\pi_{0}(z)\leq\pi_{0}(y),\
|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(y)|\leq\frac{|\pi_{0}^{\perp}(x)-\pi_{0}^{\perp}(y)|}{2}\bigg{\\}},$
see the gray rectangle in Figure 9.2. Note that
$\ell(\mathcal{G})=|\pi_{0}(x)-\pi_{0}(y)|\sim\ell(Q),\,\mathscr{L}(\mathcal{G})=|\pi_{0}^{\perp}(x)-\pi_{0}^{\perp}(y)|\sim\mathscr{L}(Q)$,
and the mid-point of its right edge is $y$.
Figure 9.2. The white rectangle is $\mathcal{R}_{Q}$, the gray rectangle is $\mathcal{G}$, and the orange double-truncated cone is $X(x,3J\setminus 0.5J,\rho\mathscr{L}(Q),\mathscr{L}(Q))$.
Let $N>1$ be a large integer satisfying
$N\sim MC_{0},$ (9.2)
whose precise value will be fixed later on.
We divide $\mathcal{G}$ into $2N+1$ sub-rectangles
$\mathcal{G}_{-N},\dots,\mathcal{G}_{0},\dots,\mathcal{G}_{N}$ such that
$\ell(\mathcal{G}_{i})=\ell(\mathcal{G})=|\pi_{0}(x)-\pi_{0}(y)|$ and
$\mathscr{L}(\mathcal{G}_{i})=\mathscr{L}(\mathcal{G})/(2N+1)=|\pi_{0}^{\perp}(x)-\pi_{0}^{\perp}(y)|/(2N+1)$.
We enumerate them in such a way that each $\mathcal{G}_{i}$ is on top of
$\mathcal{G}_{i-1}$, and $\mathcal{G}_{0}$ is the rectangle containing $y$.
See the left hand side of Figure 9.3. In formulas,
$\mathcal{G}_{i}\coloneqq\bigg{\\{}z\in\mathbb{R}^{2}\,\mathrel{\mathop{\ordinarycolon}}\,\pi_{0}(x)\leq\pi_{0}(z)\leq\pi_{0}(y),\\\
\frac{(2i-1)\mathscr{L}(\mathcal{G})}{2(2N+1)}\leq\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(y)\leq\frac{(2i+1)\mathscr{L}(\mathcal{G})}{2(2N+1)}\bigg{\\}}.$
It is not immediately clear that $\ell(\mathcal{G}_{i})$ and
$\mathscr{L}(\mathcal{G}_{i})$ as we defined them satisfy
$\ell(\mathcal{G}_{i})\leq\mathscr{L}(\mathcal{G}_{i})$, and that
$\mathcal{G}_{i}$’s look as portrayed in Figure 9.3, as opposed to being very
flat. We check this in the lemma below.
###### Lemma 9.1.
We have $\ell(\mathcal{G}_{i})\leq\mathscr{L}(\mathcal{G}_{i})$.
###### Proof.
Recall that $\ell(\mathcal{G}_{i})=\ell(\mathcal{G})\sim\ell(Q)$, and
$\mathscr{L}(\mathcal{G}_{i})=\frac{\mathscr{L}(\mathcal{G})}{2N+1}\sim\frac{\mathscr{L}(Q)}{N}=\frac{\mathcal{H}^{1}(J)^{-1}\ell(Q)}{N}=\frac{\ell(\mathcal{G}_{i})}{\mathcal{H}^{1}(J)N}\overset{\eqref{eq:Ndef}}{\sim}\frac{\ell(\mathcal{G}_{i})}{\mathcal{H}^{1}(J)MC_{0}}.$
(9.3)
Assumption (b) of Proposition 4.1 stated that $\mathcal{H}^{1}(J)\leq
c_{1}C_{0}^{-1}M^{-1}$, where $c_{1}>0$ is a small absolute constant. Assuming
$c_{1}$ to be small enough, the above estimates give
$\mathscr{L}(\mathcal{G}_{i})\geq\ell(\mathcal{G}_{i}).$ (9.4)
∎
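Spelling out the last implication of the proof above: writing $c>0$ for the implicit constant in (9.3) (our label, not notation from the text), assumption (b) gives $\mathcal{H}^{1}(J)MC_{0}\leq c_{1}$, and hence

```latex
% H^1(J) M C_0 <= c_1 by assumption (b), so:
\mathscr{L}(\mathcal{G}_{i})
\geq\frac{c\,\ell(\mathcal{G}_{i})}{\mathcal{H}^{1}(J)MC_{0}}
\geq\frac{c\,\ell(\mathcal{G}_{i})}{c_{1}}
\geq\ell(\mathcal{G}_{i}),
% the last step holding as soon as c_1 <= c.
```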
Figure 9.3. On the left, the rectangle $\mathcal{G}$ subdivided into subrectangles $\mathcal{G}_{i}$ for $N=3$. On the right, 3 subrectangles $\mathcal{G}_{i-1},\,\mathcal{G}_{i},\,\mathcal{G}_{i+1}$. The black curves represent the set $E$. Since $\mathcal{G}_{i}\cap E\neq\varnothing$ and $\mathcal{G}_{i+1}\cap E\neq\varnothing$, the corresponding leftmost points $z_{i}$ and $z_{i+1}$ are well-defined. Note that $\mathcal{G}_{i}$ is a leftist rectangle: $\mathcal{G}_{i}\prec\mathcal{G}_{i-1}$ because $\mathcal{G}_{i-1}\cap E=\varnothing$, and $\mathcal{G}_{i}\prec\mathcal{G}_{i+1}$ because $\pi_{0}(z_{i})\leq\pi_{0}(z_{i+1})$.
The following three definitions are easier to digest together with the right
hand side of Figure 9.3.
###### Definition 9.2.
For each $\mathcal{G}_{i}$ with $\mathcal{G}_{i}\cap E\neq\varnothing$, let
$z_{i}\in\mathcal{G}_{i}\cap E$ be a point such that
$\pi_{0}(z_{i})=\inf_{z\in\mathcal{G}_{i}\cap E}\pi_{0}(z).$
We will call $z_{i}$ the _leftmost point of $\mathcal{G}_{i}\cap E$_. Note that the leftmost point is well-defined because $\mathcal{G}_{i}$ and $E$ are closed, so that $\mathcal{G}_{i}\cap E$ is compact. It might be non-unique, but we do not care.
###### Definition 9.3.
If $-N\leq i,j\leq N$ and $\mathcal{G}_{i}\cap E\neq\varnothing$, then we will
write $\mathcal{G}_{i}\prec\mathcal{G}_{j}$ if either $\mathcal{G}_{j}\cap
E=\varnothing$ or $\pi_{0}(z_{i})\leq\pi_{0}(z_{j})$. In other words,
$\mathcal{G}_{i}\prec\mathcal{G}_{j}$ means that there is no point of
$\mathcal{G}_{j}\cap E$ to the left of $z_{i}$.
###### Definition 9.4.
For $-N+1\leq i\leq N-1$, we will say that $\mathcal{G}_{i}$ is a _leftist
rectangle_ if $\mathcal{G}_{i}\cap E\neq\varnothing$ and we have
$\mathcal{G}_{i}\prec\mathcal{G}_{i-1}$ and
$\mathcal{G}_{i}\prec\mathcal{G}_{i+1}$. That is, the point $z_{i}$ is the
leftmost point of
$(\mathcal{G}_{i-1}\cup\mathcal{G}_{i}\cup\mathcal{G}_{i+1})\cap E$.
###### Lemma 9.5.
There exists $-N+1\leq i\leq N-1$ such that $\mathcal{G}_{i}$ is a leftist
rectangle.
###### Proof.
Suppose the opposite, so that none of the rectangles is leftist. In
particular, $\mathcal{G}_{0}$ is not leftist. This means that either
$\mathcal{G}_{0}\cap E=\varnothing$, or for some $i\in\\{-1,1\\}$ we have
$\mathcal{G}_{i}\prec\mathcal{G}_{0}$. Since $y\in\mathcal{G}_{0}\cap E$, the
second alternative holds. Without loss of generality assume that
$\mathcal{G}_{1}\prec\mathcal{G}_{0}$.
Since $\mathcal{G}_{1}$ is not leftist, but
$\mathcal{G}_{1}\prec\mathcal{G}_{0}$, we get that
$\mathcal{G}_{2}\prec\mathcal{G}_{1}$. In particular, $\mathcal{G}_{2}\cap
E\neq\varnothing$. Continuing in this way, we get for $1\leq j\leq N-1$ that
$\mathcal{G}_{j+1}\prec\mathcal{G}_{j}\prec\mathcal{G}_{j-1}$. In particular,
for all $1\leq j\leq N$ we have $z_{j}\in\mathcal{G}_{j}\cap
E\neq\varnothing$.
Let $1\leq j\leq N$. By (9.4), we have $B(z_{j},\ell(\mathcal{G}_{j}))\subset
3\mathcal{G}_{j}$, and so
$\mu(3\mathcal{G}_{j})\geq\mu(B(z_{j},\ell(\mathcal{G}_{j})))\geq
C_{0}^{-1}\ell(\mathcal{G}_{j}).$
Since the rectangles $\\{3\mathcal{G}_{j}\\}_{j=1}^{N}$ have bounded overlap,
and they are all contained in $3\mathcal{G}$, we get that
$\mu(3\mathcal{G})\gtrsim\sum_{j=1}^{N}\mu(3\mathcal{G}_{j})\geq\sum_{j=1}^{N}C_{0}^{-1}\ell(\mathcal{G}_{j})=NC_{0}^{-1}\ell(\mathcal{G}).$
(9.5)
Recall that $\ell(\mathcal{G})=|x-y|$ and
$\mathscr{L}(\mathcal{G})=|\pi_{0}(x)-\pi_{0}(y)|\sim\mathcal{H}^{1}(J)^{-1}\ell(\mathcal{G})$,
so that $3\mathcal{G}\subset\mathcal{R}(y,C\ell(\mathcal{G}))$ for some
absolute constant $C>1$. Hence, we get from (5.4) that
$\mu(3\mathcal{G})\leq\mu(\mathcal{R}(y,C\ell(\mathcal{G})))\lesssim
M\ell(\mathcal{G}).$ (9.6)
In the definition of $N$ (9.2), we assumed $N\sim MC_{0}$. Let $N=\lceil
C^{\prime}MC_{0}\rceil$, where $C^{\prime}>1$ is a big absolute constant.
Pitting (9.5) against (9.6) and choosing $C^{\prime}>1$ large enough, we reach
a contradiction. ∎
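The contradiction at the end can be made explicit. Writing $c_{2},c_{3}>0$ for the implicit absolute constants in (9.5) and (9.6) respectively (our labels, not notation from the text), we have:

```latex
c_{2}\,NC_{0}^{-1}\ell(\mathcal{G})
\overset{\text{(9.5)}}{\leq}\mu(3\mathcal{G})
\overset{\text{(9.6)}}{\leq}c_{3}\,M\ell(\mathcal{G})
\quad\Longrightarrow\quad
N\leq\frac{c_{3}}{c_{2}}\,MC_{0},
```

which is incompatible with $N=\lceil C^{\prime}MC_{0}\rceil\geq C^{\prime}MC_{0}$ once $C^{\prime}>c_{3}/c_{2}$.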
The combination of Lemma 9.5 and the following lemma will complete the proof
of the key geometric lemma.
###### Lemma 9.6.
If $\mathcal{G}_{i}$ is a leftist rectangle, then $\pi_{0}(z_{i})$ is the
right endpoint of some gap $K\in\mathsf{Gap}(R,\ell(Q))$ with
$\pi_{0}(\mathcal{R}_{Q})\subset A^{3}K$.
We divide the proof of Lemma 9.6 into several steps.
### 9.3. Small good rectangle $\mathcal{B}$
Assume that $\mathcal{G}_{i}$ is a leftist rectangle. We define
$\displaystyle\mathcal{B}$
$\displaystyle\coloneqq\\{z\in\mathcal{G}_{i-1}\cup\mathcal{G}_{i}\cup\mathcal{G}_{i+1}\,\mathrel{\mathop{\ordinarycolon}}\,\pi_{0}(z)\leq\pi_{0}(z_{i})\\},$
(9.7)
$\displaystyle=\\{z\in\mathcal{G}_{i-1}\cup\mathcal{G}_{i}\cup\mathcal{G}_{i+1}\,\mathrel{\mathop{\ordinarycolon}}\,\pi_{0}(x)\leq\pi_{0}(z)\leq\pi_{0}(z_{i})\\},$
(9.8)
see the blue rectangle in Figure 9.4. A priori it might happen that
$\pi_{0}(z_{i})=\pi_{0}(x)$, in which case $\mathcal{B}$ would be a degenerate
rectangle (a segment). We show in Lemma 9.7 below that this is not the case.
Note that
$\mathscr{L}(\mathcal{B})=\mathscr{L}(\mathcal{G}_{i-1})+\mathscr{L}(\mathcal{G}_{i})+\mathscr{L}(\mathcal{G}_{i+1})=\frac{3\mathscr{L}(\mathcal{G})}{2N+1}\sim\frac{\mathscr{L}(Q)}{N},$
and also $\ell(\mathcal{B})=|\pi_{0}(z_{i})-\pi_{0}(x)|$.
Since $\mathcal{G}_{i}$ is a leftist rectangle, it follows immediately from
the definitions of leftist rectangles and leftmost points that
$\text{int}(\mathcal{B})\cap E=\varnothing,$ (9.9)
so that $\text{int}(\mathcal{B})$ is a good (open) rectangle.
Figure 9.4. The blue rectangle is $\mathcal{B}$. In Lemma 9.7 we show that $\ell(\mathcal{B})\sim\ell(\mathcal{G})\sim\ell(Q)$.
###### Lemma 9.7.
We have $|\pi_{0}(z_{i})-\pi_{0}(x)|=\ell(\mathcal{B})\sim\ell(Q)$.
###### Proof.
Since $\mathcal{B}\subset\mathcal{G}$, it is clear that
$\ell(\mathcal{B})\leq\ell(\mathcal{G})\sim\ell(Q),$
so we only need to prove
$\ell(\mathcal{B})\gtrsim\ell(\mathcal{G})\sim\ell(Q)$. See Figure 9.5 to get
some intuition on why this is true. We give a formal argument below.
Assume the contrary, so that $\ell(\mathcal{B})\leq c\,\ell(\mathcal{G})$ for
some small absolute constant $0<c<1$. We claim that if $0<c<1$ is chosen small
enough, then
$\mathcal{B}\subset X(x,0.5J,A^{-1}\mathscr{L}(Q),A\mathscr{L}(Q)).$ (9.10)
To see that, observe that if $z\in\mathcal{B}$, then
$|\pi_{0}(z)-\pi_{0}(x)|\leq\ell(\mathcal{B})\leq c\ell(\mathcal{G})\sim
c\ell(Q),$
and also, since $\mathcal{B}\subset\mathcal{G}$,
$\frac{\mathscr{L}(\mathcal{G})}{2}\leq|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(x)|\leq\frac{3\mathscr{L}(\mathcal{G})}{2}.$
In particular,
$|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(x)|\sim\mathscr{L}(\mathcal{G})\sim\mathscr{L}(Q)=\mathcal{H}^{1}(J)^{-1}\ell(Q)$.
It follows that
$|\pi_{0}(z)-\pi_{0}(x)|\lesssim
c\mathcal{H}^{1}(J)|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(x)|.$
If $0<c<1$ is chosen small enough, we get that $z\in X(x,0.5J)$.
Since
$|x-z|\sim|\pi_{0}(z)-\pi_{0}(x)|+|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(x)|\sim\mathscr{L}(Q),$
we also have $z\in X(x,0.5J,A^{-1}\mathscr{L}(Q),A\mathscr{L}(Q))$ if $A$ is
chosen large enough. This shows (9.10).
Figure 9.5. On the left we see the full picture, on the right we zoom in on the dashed-border rectangle. The white rectangle is $\mathcal{R}_{Q}$, the gray rectangle is $\mathcal{G}$, the blue rectangle is $\mathcal{B}$, the red double-truncated cone is $X(x,0.5J,A^{-1}\mathscr{L}(Q),A\mathscr{L}(Q))$. The red cone has an empty intersection with $E$ by (9.1), whereas $\mathcal{B}$ contains the point $z_{i}\in E$. Thus, $\mathcal{B}$ cannot be fully contained in the red cone, which gives $\ell(\mathcal{B})\gtrsim\ell(Q)$.
Recall that $X(x,0.5J,A^{-1}\mathscr{L}(Q),A\mathscr{L}(Q))\cap E=\varnothing$
by the assumption (9.1). At the same time, $\mathcal{B}$ contains $z_{i}\in
E$. This contradicts (9.10). Hence,
$\ell(\mathcal{B})\geq c\ell(\mathcal{G})\sim\ell(Q).$
∎
### 9.4. Big good rectangle $\mathcal{Y}$
Figure 9.6. On the left we see the full picture, on the right we zoom in on the dashed-border rectangle. The small white rectangle is $\mathcal{R}_{Q}$, the large white rectangle is $A\mathcal{R}_{R}$, the blue rectangle is $\mathcal{B}$, the narrow yellow rectangle is $\mathcal{Y}$, the red double-truncated cone is $X(z_{i},0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R))$.
Consider the rectangle $\mathcal{Y}$ defined as
$\mathcal{Y}\coloneqq\\{z\in\mathbb{R}^{2}\,\mathrel{\mathop{\ordinarycolon}}\,\pi_{0}(z_{i})-A^{-1}\ell(Q)\leq\pi_{0}(z)\leq\pi_{0}(z_{i}),\
|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(z_{i})|\leq 2A\mathscr{L}(R)\\},$
see the yellow rectangle in Figure 9.6. Note that
$\ell(\mathcal{Y})=A^{-1}\,\ell(Q)$,
$\mathscr{L}(\mathcal{Y})=4A\mathscr{L}(R)$, and the mid-point of its right
edge is $z_{i}$.
Our plan is the following. First, we will show that $\mathcal{Y}$ is contained
in the union of the good cone
$X(z_{i},0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R))$ (the red cone in the
figure) and the good rectangle $\mathcal{B}$ (the blue rectangle in the
figure). Since the interiors of these two have empty intersections with $E$,
we will conclude that $\text{int}(\mathcal{Y})\cap E=\varnothing$. This will
give us $K\in\mathsf{Gap}(R,\ell(Q))$ with $\pi_{0}(\mathcal{R}_{Q})\subset
A^{3}K$, the desired gap in $\pi_{0}(A\mathcal{R}_{R}\cap E)$.
###### Lemma 9.8.
If $A=A(C_{0},M)$ is chosen big enough, then
$\mathrm{int}{(\mathcal{Y})}\subset\mathrm{int}(\mathcal{B})\cup
X(z_{i},0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R)).$ (9.11)
###### Proof.
This is easy to believe after looking at Figure 9.6 for a minute or two,
but for the sake of completeness, we provide the computations below. They are
easier to follow keeping Figure 9.6 in mind.
Let
$\displaystyle\mathcal{Y}_{1}$
$\displaystyle\coloneqq\\{z\in\mathbb{R}^{2}\,\mathrel{\mathop{\ordinarycolon}}\,\pi_{0}(z_{i})-A^{-1}\ell(Q)<\pi_{0}(z)<\pi_{0}(z_{i}),\
|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(z_{i})|<\mathscr{L}(\mathcal{G}_{i})\\},$
$\displaystyle\mathcal{Y}_{2}$
$\displaystyle\coloneqq\mathrm{int}(\mathcal{Y})\setminus\mathcal{Y}_{1},$
so that $\mathrm{int}(\mathcal{Y})=\mathcal{Y}_{1}\cup\mathcal{Y}_{2}$. We
claim that
$\mathcal{Y}_{1}\subset\mathrm{int}(\mathcal{B}),$ (9.12)
and
$\mathcal{Y}_{2}\subset
X(z_{i},0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R)).$ (9.13)
First we prove (9.12). By Lemma 9.7, we have
$\ell(\mathcal{Y}_{1})=A^{-1}\ell(Q)\leq\ell(\mathcal{B})$, assuming $A$ big
enough. Since $z_{i}$ lies on the right edges of both $\mathcal{Y}_{1}$ and
$\mathcal{B}$, this immediately gives
$\pi_{0}(\mathcal{Y}_{1})\subset\pi_{0}(\mathrm{int}(\mathcal{B}))$. On the
other hand, recall that $z_{i}\in\mathcal{G}_{i}$ and
$\pi_{0}^{\perp}(\mathcal{B})=\pi_{0}^{\perp}(\mathcal{G}_{i-1})\cup\pi_{0}^{\perp}(\mathcal{G}_{i})\cup\pi_{0}^{\perp}(\mathcal{G}_{i+1}),$
see Figure 9.4. It follows that
$\pi_{0}^{\perp}(\mathcal{Y}_{1})=(\pi_{0}^{\perp}(z_{i})-\mathscr{L}(\mathcal{G}_{i}),\
\pi_{0}^{\perp}(z_{i})+\mathscr{L}(\mathcal{G}_{i}))\subset\pi_{0}^{\perp}(\mathrm{int}(\mathcal{B})).$
Since both $\mathcal{Y}_{1}$ and $\mathrm{int}(\mathcal{B})$ are open
rectangles with sides parallel to the axes, we conclude that
$\mathcal{Y}_{1}\subset\mathrm{int}(\mathcal{B})$.
We move on to (9.13). First, observe that for $z\in\mathcal{Y}_{2}$ we have,
by the definition of $\mathcal{Y}$,
$|z-z_{i}|\leq(A^{-2}\ell(Q)^{2}+4A^{2}\mathscr{L}(R)^{2})^{1/2}\leq
3A\mathscr{L}(R),$
and also, since $z\notin\mathcal{Y}_{1}$,
$|z-z_{i}|\geq|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(z_{i})|\geq\mathscr{L}(\mathcal{G}_{i})=\frac{\mathscr{L}(\mathcal{G})}{2N+1}\overset{\eqref{eq:ELLPi}}{\sim}\frac{\mathscr{L}(Q)}{N}\overset{\eqref{eq:Ndef}}{\sim}\frac{\mathscr{L}(Q)}{MC_{0}}.$
Thus, assuming $A=A(M,C_{0})$ large enough, we have
$z\in B(z_{i},A^{2}\mathscr{L}(R))\setminus B(z_{i},A^{-1}\mathscr{L}(Q)).$
It remains to show $z\in X(z_{i},0.5J)$. Note that
$|\pi_{0}(z)-\pi_{0}(z_{i})|\leq
A^{-1}\ell(Q)=A^{-1}\mathcal{H}^{1}(J)\mathscr{L}(Q)\\\
=MC_{0}A^{-1}\mathcal{H}^{1}(J)\frac{\mathscr{L}(Q)}{MC_{0}}\lesssim
MC_{0}A^{-1}\mathcal{H}^{1}(J)\,|\pi_{0}^{\perp}(z)-\pi_{0}^{\perp}(z_{i})|.$
Assuming $A=A(M,C_{0})$ large enough, this gives $z\in X(z_{i},0.5J)$. ∎
###### Lemma 9.9.
We have $\mathrm{int}(\mathcal{Y})\cap E=\varnothing$.
###### Proof.
Recall that $z_{i}\in\mathcal{G}\cap E$, and $\mathcal{G}\subset
A\mathcal{R}_{Q}$. Thus, $z_{i}\in A\mathcal{R}_{Q}\cap E$, and so we get from
(9.1) that
$X(z_{i},0.5J,A^{-1}\mathscr{L}(Q),A^{2}\mathscr{L}(R))\cap E=\varnothing.$
We also have $\text{int}(\mathcal{B})\cap E=\varnothing$ by (9.9). Hence, it
follows from (9.11) that
$\text{int}(\mathcal{Y})\cap E=\varnothing.$
∎
### 9.5. Mind the gap
We are finally ready to find the gap $K\in\mathsf{Gap}(R,\ell(Q))$ with
$\pi_{0}(\mathcal{R}_{Q})\subset A^{3}K$.
First, note that $z_{i}\in A\mathcal{R}_{Q}\subset A\mathcal{R}_{R}$. Since
$\mathscr{L}(\mathcal{Y})=4A\mathscr{L}(R)$ and $z_{i}$ is the mid-point of
the right edge of $\mathcal{Y}$, it follows that
$\{z\in A\mathcal{R}_{R}\,:\,\pi_{0}(z)\in\pi_{0}(\text{int}(\mathcal{Y}))\}\subset\text{int}(\mathcal{Y}).$
Together with Lemma 9.9, this gives
$\{z\in A\mathcal{R}_{R}\cap E\,:\,\pi_{0}(z)\in\pi_{0}(\text{int}(\mathcal{Y}))\}\subset\text{int}(\mathcal{Y})\cap E=\varnothing.$
Hence,
$\pi_{0}(A\mathcal{R}_{R}\cap
E)\cap\pi_{0}(\text{int}(\mathcal{Y}))=\varnothing.$
This means that the open interval
$\pi_{0}(\text{int}(\mathcal{Y}))=(\pi_{0}(z_{i})-A^{-1}\ell(Q),\,\pi_{0}(z_{i}))$
is contained in some gap $K\in\mathsf{Gap}(R)$. We have
$\mathcal{H}^{1}(K)\geq\mathcal{H}^{1}(\pi_{0}(\text{int}(\mathcal{Y})))=A^{-1}\ell(Q).$
Note that $x,z_{i}\in A\mathcal{R}_{R}\cap E$. Thus,
$\pi_{0}(x),\pi_{0}(z_{i})\notin K$, and $\pi_{0}(z_{i})$ is the right
end-point of $K$. By Lemma 9.7,
$\pi_{0}(z_{i})-\pi_{0}(x)=\ell(\mathcal{B})>A^{-1}\ell(Q)=\mathcal{H}^{1}(\pi_{0}(\text{int}(\mathcal{Y}))),$
so that
$\pi_{0}(x)\leq\pi_{0}(z_{i})-\mathcal{H}^{1}(\pi_{0}(\text{int}(\mathcal{Y}))).$
This means that $\pi_{0}(x)$ lies “to the left” of the interval
$\pi_{0}(\text{int}(\mathcal{Y}))$, and in consequence, “to the left” of the
gap $K$. Since $\pi_{0}(z_{i})$ is the right end-point of $K$, it follows from
Lemma 9.7 that
$\mathcal{H}^{1}(K)\leq|\pi_{0}(x)-\pi_{0}(z_{i})|=\ell(\mathcal{B})\sim\ell(Q).$
So we have $A^{-1}\ell(Q)\leq\mathcal{H}^{1}(K)\lesssim\ell(Q)$. In
particular, $K\in\mathsf{Gap}(R,\ell(Q))$.
Finally, we have
$\operatorname{dist}(\pi_{0}(\mathcal{R}_{Q}),K)\leq\operatorname{dist}(\pi_{0}(x),K)\leq|\pi_{0}(x)-\pi_{0}(z_{i})|\lesssim\ell(Q)\leq
A\mathcal{H}^{1}(K),$
and so $\pi_{0}(\mathcal{R}_{Q})\subset A^{3}K$. This finishes the proof of
Lemma 9.6, and of the key geometric lemma.
## Appendix A Proof of Corollary 3.2
In this section we prove Corollary 3.2, which we repeat below for the
reader's convenience.
###### Corollary A.1.
Let $E\subset\mathbb{R}^{2}$ and $G\subset\mathbb{T}$ be as in Theorem 1.7,
and let $\mu=\mathcal{H}^{1}|_{E}$. Then,
$\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu(X(x,G^{\perp},r))}{r}\,\frac{dr}{r}d\mu(x)\lesssim
M\mathcal{H}^{1}(G)\mu(E),$
where $G^{\perp}=G+1/4$.
###### Proof.
If the set $G$ is open, then we can immediately apply Proposition 3.1 to
estimate
$\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu(X(x,G^{\perp},r))}{r}\,\frac{dr}{r}d\mu(x)\lesssim\int_{G}\|\pi_{\theta}\mu\|_{2}^{2}\,d\theta=\int_{G}\int_{\mathbb{R}}|\pi_{\theta}\mu(x)|^{2}\,dx\,d\theta\\\
\overset{\eqref{eq:projbdd}}{\leq}M\int_{G}\int_{\mathbb{R}}\pi_{\theta}\mu(x)\,dx\,d\theta=M\mathcal{H}^{1}(G)\mu(E),$
(A.1)
which is the desired inequality.
The general case will follow from the classical Besicovitch projection theorem
and approximation. Suppose that $G$ is not open. Note that the assumption
(1.3) implies that $\mathcal{H}^{1}(\pi_{\theta}(E))>0$ for all $\theta\in G$,
and even $\mathcal{H}^{1}(\pi_{\theta}(F))>0$ for all $F\subset E$ with
$\mathcal{H}^{1}(F)>0$. Since $\mathcal{H}^{1}(G)>0$, we get from the
classical Besicovitch projection theorem, Theorem A, that $E$ is rectifiable,
so that
$E=\bigcup_{i=1}^{\infty}\Gamma_{i}\cup Z,$
where $\Gamma_{i}$ is a measurable subset of a graph of a $C^{1}$-function,
and $\mathcal{H}^{1}(Z)=0$. For $N\geq 1$ set
$E_{N}\coloneqq\bigcup_{i=1}^{N}\Gamma_{i},$
and $\mu_{N}=\mathcal{H}^{1}|_{E_{N}}$.
Fix $\theta\in G$. Since $\|\pi_{\theta}\mu\|_{\infty}\leq M$, we have that
for each $i\in\mathbb{N}$ and $\mathcal{H}^{1}$-a.e. point $x\in\Gamma_{i}$
the line tangent to $\Gamma_{i}$ at $x$ cannot be perpendicular to
$\ell_{\theta}$, and even
$\measuredangle(T_{x}\Gamma_{i},\ell_{\theta})\leq\frac{\pi}{2}-CM^{-1}$
for some absolute constant $0<C<1$. Hence, if $|\theta^{\prime}-\theta|\leq
cM^{-1}$ for some small absolute constant $0<c<1$, then we have
$\measuredangle(T_{x}\Gamma_{i},\ell_{\theta^{\prime}})\leq\frac{\pi}{2}-C^{\prime}M^{-1}.$
It follows that if $|\theta^{\prime}-\theta|\leq cM^{-1}$, then for any
$i\in\mathbb{N}$ we have
$\|\pi_{\theta^{\prime}}\mathcal{H}^{1}|_{\Gamma_{i}}\|_{\infty}\lesssim M$.
Thus,
$\|\pi_{\theta^{\prime}}\mu_{N}\|_{\infty}\leq\sum_{i=1}^{N}\|\pi_{\theta^{\prime}}\mathcal{H}^{1}|_{\Gamma_{i}}\|_{\infty}\lesssim
NM.$
By the outer regularity of Lebesgue measure, there exists a sequence of open
sets $G_{k}\supset G$ such that
$\mathcal{H}^{1}(G_{k}\setminus G)\leq\frac{1}{k}.$
Without loss of generality we may assume that each $G_{k}$ is contained in a
$cM^{-1}$-neighbourhood of $G$, so that for all $\theta\in G$ we have
$\|\pi_{\theta}\mu_{N}\|_{\infty}\leq\|\pi_{\theta}\mu\|_{\infty}\leq M$ and
for all $\theta\in G_{k}\setminus G$ we have
$\|\pi_{\theta}\mu_{N}\|_{\infty}\lesssim NM$. Then, repeating the computation
from (A.1) yields
$\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu_{N}(X(x,G_{k},r))}{r}\,\frac{dr}{r}d\mu_{N}(x)\lesssim\int_{G_{k}}\|\pi_{\theta}\mu_{N}\|_{2}^{2}\,d\theta\\\
\leq M\mathcal{H}^{1}(G)\mu_{N}(E)+MN\mathcal{H}^{1}(G_{k}\setminus
G)\mu_{N}(E).$ (A.2)
Note that $\mu_{N}(X(x,G,r))\leq\liminf_{k}\mu_{N}(X(x,G_{k},r))$, and so by
Fatou’s lemma
$\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu_{N}(X(x,G,r))}{r}\,\frac{dr}{r}d\mu_{N}(x)\leq\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\liminf_{k\to\infty}\frac{\mu_{N}(X(x,G_{k},r))}{r}\,\frac{dr}{r}d\mu_{N}(x)\\\
\leq\liminf_{k\to\infty}\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu_{N}(X(x,G_{k},r))}{r}\,\frac{dr}{r}d\mu_{N}(x)\\\
\lesssim\liminf_{k\to\infty}\big{(}M\mathcal{H}^{1}(G)\mu_{N}(E)+MN\mathcal{H}^{1}(G_{k}\setminus
G)\mu(E)\big{)}\\\ =M\mathcal{H}^{1}(G)\mu_{N}(E)\leq
M\mathcal{H}^{1}(G)\mu(E).$ (A.3)
Now, fix $0<r<\infty$. We claim that
$f_{N}(r)\coloneqq\int_{\mathbb{R}^{2}}\mu_{N}(X(x,G,r))\,d\mu_{N}(x)\xrightarrow{N\to\infty}\int_{\mathbb{R}^{2}}\mu(X(x,G,r))\,d\mu(x)\eqqcolon
f(r).$
Indeed, we have
$|f(r)-f_{N}(r)|=\int_{\mathbb{R}^{2}}\mu(X(x,G,r))\,d\mu(x)-\int_{\mathbb{R}^{2}}\mu_{N}(X(x,G,r))\,d\mu_{N}(x)\\\
=\int_{E\setminus
E_{N}}\mu(X(x,G,r))\,d\mu(x)-\int_{E_{N}}\mu_{N}(X(x,G,r))-\mu(X(x,G,r))\,d\mu_{N}(x)\\\
\leq\mu(E)\cdot\mu(E\setminus E_{N})+\mu(E_{N})\cdot\mu(E\setminus
E_{N})\xrightarrow{N\to\infty}0.$
Hence, by Fatou’s lemma and Fubini’s theorem
$\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu(X(x,G,r))}{r}\,\frac{dr}{r}d\mu(x)=\int_{0}^{\infty}f(r)\frac{dr}{r^{2}}=\int_{0}^{\infty}\liminf_{N\to\infty}f_{N}(r)\frac{dr}{r^{2}}\\\
\leq\liminf_{N\to\infty}\int_{0}^{\infty}f_{N}(r)\frac{dr}{r^{2}}=\liminf_{N\to\infty}\int_{\mathbb{R}^{2}}\int_{0}^{\infty}\frac{\mu_{N}(X(x,G,r))}{r}\,\frac{dr}{r}d\mu_{N}(x)\\\
\overset{\eqref{eq:12}}{\lesssim}\liminf_{N\to\infty}M\mathcal{H}^{1}(G)\mu(E)=M\mathcal{H}^{1}(G)\mu(E).$
∎
# Quadapter: Adapter for GPT-2 Quantization
Minseop Park, Jaeseong You, Markus Nagel, Simyung Chang
Qualcomm AI Research
{minspark, jaeseong, markusn<EMAIL_ADDRESS>
Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.
###### Abstract
Transformer language models such as GPT-2 are difficult to quantize because of
outliers in activations leading to a large quantization error. To adapt to the
error, one must use quantization-aware training, which entails a fine-tuning
process based on the dataset and the training pipeline identical to those for
the original model. Pretrained language models, however, often do not grant
access to their datasets and training pipelines, forcing us to rely on
arbitrary ones for fine-tuning. In that case, it is observed that
quantization-aware training overfits the model to the fine-tuning data. For
quantization without overfitting, we introduce a quantization adapter
(Quadapter), a small set of parameters that are learned to make activations
quantization-friendly by scaling them channel-wise. It keeps the model
parameters unchanged. By applying our method to the challenging task of
quantizing GPT-2, we demonstrate that it effectively prevents the overfitting
and improves the quantization performance.
## 1 Introduction
Quantizing a transformer model is not a simple matter due to numerous channel-
dependent outliers in activations Bondarenko et al. (2021). They lead to a
large quantization error Zhao et al. (2019), and we observe that the problem
is worse in the decoder-only transformers like GPT-2. One solution to the
difficulty is quantization-aware training (QAT), an approach that fine-tunes
the model parameters in response to the numerical error arising from
quantization. Post-training quantization (PTQ) – a counterpart of QAT that
performs quantization without modifying model parameters – is not powerful
enough to cope with the outliers.
Figure 1: Average perplexity (PPL) of the full-precision (FP) model and the
models quantized with PTQ and QAT on 5 datasets (left). We use the PTB dataset
as the fine-tuning data (F-ID) for QAT. The FP model and the QAT model are
evaluated on the F-ID and the other 4 datasets (F-OOD) (right).
While QAT is effective, it requires the dataset and the training pipeline, and
the problem is that they are often inaccessible when dealing with the original
pretrained model without any downstream task. One then cannot but use
arbitrary fine-tuning data for QAT.
However, the fine-tuning returns worse accuracies for distributions unseen
during training (out-of-distribution with regard to fine-tuning; F-OOD)
despite improving for the training distribution (in-distribution with regard
to fine-tuning; F-ID) Kumar et al. (2022). This is consistent with our
observation that QAT overfits the model to the fine-tuning data as in Figure
1. The resulting quantized model therefore has its generality impaired. This
violates the premise of a general-purpose language model, which must operate
well across various texts of the target language.
Figure 2: Quadapter performs a linear scaling and its inversion before and
after Q, the quantizer for the target activation (left). In the transformer
block of GPT-2, Quadapters can be installed in two different locations
(right).
Our hypothesis is that QAT incurs the overfitting because it changes all the
parameters of the model. This difficulty is much like the research topic of
continual learning, where it is important that a model should not forget its
past capability when transferring to a new task Zhang et al. (2021). Adapter
is a strategy for adapting to a new distribution by training only a small
number of parameters, and a popular method to lessen catastrophic forgetting.
We borrow this concept to propose Quadapter, a lightweight module to adapt to
the quantization error on behalf of the intact original model.
The contribution of this work is that we successfully quantize GPT-2,
overcoming the large inter-channel variance and the QAT overfitting issue with
Quadapter. To the best of our knowledge, this is the first work to quantize
both weights and activations of GPT-2 without the complete training pipeline.
## 2 Related Works
Adapters Extensive research has been conducted on how to steer a large
pretrained model with a few adapter parameters. The concept of adapter has
proven its usefulness in language models for transfer learning Houlsby et al.
(2019), multi-task learning Stickland and Murray (2019), and domain adaptation
Zhang et al. (2021). Several works apply adapters to the visual domain as well
Li and Hoiem (2016); Perez et al. (2018).
Transformer Quantization In comparison to GPT-2, BERT is easier to quantize.
It can be quantized with PTQ under a limited performance drop Shen et al.
(2020). QAT on BERT for a given downstream task recovers full-precision (FP)
performance even with ultra-low precision Zafrir et al. (2019); Bondarenko et
al. (2021), or with integer-only operations for non-linear layers Kim et al.
(2021). On the other hand, quantization studies on autoregressive transformers
are relatively limited in their scope, using weight-only quantization Chung et
al. (2020) or requiring full-fledged training Prato et al. (2020); Tao et al.
(2022). Please note that these works focus on quantizing GPT-2 that is
finetuned on a downstream task whereas ours quantizes the original pretrained
GPT-2.
Quantization techniques Directly relevant to our work are cross-layer-
equalization (CLE) Nagel et al. (2019) and adaptive rounding (AdaRound) Nagel
et al. (2020). Similarly to CLE, Quadapter rescales associated model weights to
lessen the quantization burden. AdaRound and our proposed method are alike in
training foldable helper parameters to minimize the block-wise quantization
error. In addition, learned step size (LSQ) Esser et al. (2020) and its
extension (LSQ+) Bhalgat et al. (2020) train the quantization-related
parameters during QAT, to which Quadapter bears similarity.
| | | GPT-2 | | | | | DistilGPT-2 | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Data | Method | Wikitext2 | PTB | LAMBADA | CBT_CN | CBT_NE | Wikitext2 | PTB | LAMBADA | CBT_CN | CBT_NE |
| - | FP32 | 29.27 | 41.31 | 48.39 | 27.29 | 30.53 | 44.36 | 59.73 | 74.94 | 42.54 | 47.09 |
| Calib. data | PTQ | 915.58 | 751.23 | 827.06 | 655.31 | 759.83 | 87.52 | 114.42 | 205.35 | 93.16 | 104.94 |
| | AdaRound | 507.07 | 478.29 | 685.98 | 319.74 | 309.11 | 84.94 | 104.94 | 164.98 | 107.89 | 92.59 |
| | CLE | 40.28 | 59.33 | 74.61 | 38.92 | 43.69 | 69.81 | 86.66 | 144.06 | 68.78 | 76.80 |
| | Quadapter BC | 34.53 | 50.65 | 63.51 | 32.47 | 36.46 | 52.79 | 70.43 | 102.75 | 51.97 | 57.81 |
| Wikitext2 | QAT | 32.51 | 100.75 | 125.40 | 54.94 | 63.94 | 35.04 | 109.40 | 129.19 | 67.03 | 76.55 |
| | Quadapter BC+QAT | 21.61 | 57.06 | 63.65 | 33.80 | 38.40 | 28.50 | 80.52 | 86.57 | 50.64 | 57.05 |
| | Quadapter (ours) | 29.34 | 47.30 | 57.28 | 30.37 | 34.05 | 43.05 | 66.28 | 85.42 | 47.66 | 52.49 |
| PTB | QAT | 331.61 | 33.94 | 330.10 | 212.12 | 252.03 | 347.25 | 37.44 | 308.22 | 214.14 | 257.44 |
| | Quadapter BC+QAT | 79.74 | 24.10 | 106.32 | 59.90 | 69.79 | 121.62 | 29.65 | 146.48 | 91.73 | 106.31 |
| | Quadapter (ours) | 33.69 | 39.46 | 55.68 | 31.45 | 35.16 | 50.73 | 56.63 | 87.02 | 49.43 | 54.35 |
Table 1: Performance evaluation of the quantized GPT-2 and DistilGPT-2 on
various datasets. The metric is PPL (lower is better). In the case of
Quadapter BC+QAT, QAT initiates after the block-wise calibration of Quadapter.
For Quadapter (ours), both training phases are completed. Underlining
indicates the results on F-ID.
## 3 Methods
Quadapter is simply a set of learnable parameters. On the other hand, the
Quadapter block represents the actual working mechanism of Quadapter,
involving two consecutive layers of linear relations, their quantizers, and
their associated Quadapter instance. The effectiveness of Quadapter comes from
the interaction amongst the involved components, and from the two-phase
training procedure.
### 3.1 Quadapter Design
Quadapter linearly scales the input channels and inverts the scaling after
quantization. Were it not for the quantizers, this composition would be the
identity, making it possible to keep the model parameters intact (Figure 2
left).
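To see why channel-wise scaling can help, consider an activation matrix with a single outlier channel. The following toy NumPy sketch (our own illustration with synthetic data and a simple hand-picked $\alpha$, not the paper's code) compares per-tensor 8-bit quantization error with and without the scale-and-invert step:

```python
import numpy as np

def fake_quant(t, n_bits=8):
    """Per-tensor uniform quantization over the tensor's own min/max range."""
    lo, hi = float(t.min()), float(t.max())
    scale = (hi - lo) / (2 ** n_bits - 1)
    return np.round((t - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 8))
x[:, 0] *= 100.0                          # one channel with large outliers

# Plain per-tensor quantization: the outlier channel dictates the range,
# so the remaining channels are rounded very coarsely.
err_plain = np.abs(fake_quant(x) - x).mean()

# Quadapter-style scale -> quantize -> inverse-scale, with a simple
# hand-picked alpha that maps every channel into a common range.
alpha = 1.0 / np.abs(x).max(axis=0)
err_scaled = np.abs(fake_quant(x * alpha) / alpha - x).mean()

assert err_scaled < err_plain             # scaling shrinks the mean error
```

In the actual method, $\alpha$ is learned rather than hand-picked, since the scaling must also balance the quantization error it induces in the surrounding weights.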
The scaling and the inverse-scaling of an activation are, in practice, folded
to the weight and the bias of the preceding layer and to the weight of the
following layer. For example, given a forward pass of two linear layers:
$\displaystyle\mathbf{y}$
$\displaystyle=\mathbf{W}_{2}(\mathbf{W}_{1}{\mathbf{x}}+\mathbf{b}_{1})+\mathbf{b}_{2},$
(1)
the Quadapter block output $\hat{\mathbf{y}}$ is as follows:
$\displaystyle\hat{\mathbf{y}}=\textit{Q}_{{\boldsymbol{\theta}}_{2}}(\mathbf{W}_{2}{\mathbf{A}}^{-1})\,\textit{Q}_{{\boldsymbol{\theta}}_{a}}\big(\textit{Q}_{{\boldsymbol{\theta}}_{1}}({\mathbf{A}}\mathbf{W}_{1}){\mathbf{x}}+{\mathbf{A}}\mathbf{b}_{1}\big)+\mathbf{b}_{2}$ (2)
$\displaystyle\phantom{\hat{\mathbf{y}}}=\textit{Q}_{{\boldsymbol{\theta}}_{2}}(\mathbf{W}_{2}^{\prime})\,\textit{Q}_{{\boldsymbol{\theta}}_{a}}\big(\textit{Q}_{{\boldsymbol{\theta}}_{1}}(\mathbf{W}_{1}^{\prime}){\mathbf{x}}+\mathbf{b}_{1}^{\prime}\big)+\mathbf{b}_{2}.$ (3)
Here, ${\mathbf{A}}=\rm{diag}(\mathbf{\alpha})$ is a diagonal matrix with
${\mathbf{A}}_{ii}=\mathbf{\alpha}_{i}$, where $\alpha\in\mathbb{R}^{d}$ is
the learnable Quadapter parameter with the intermediate activation dimension
$d$. $\textit{Q}_{{\boldsymbol{\theta}}_{1}}$ and
$\textit{Q}_{{\boldsymbol{\theta}}_{2}}$ are the weight quantizers, and
$\textit{Q}_{{\boldsymbol{\theta}}_{a}}$ is the activation quantizer. Each
quantizer $\textit{Q}_{\boldsymbol{\theta}}$ quantizes its input values based
on the quantization parameter ${\boldsymbol{\theta}}=(\theta_{{\rm
min}},\theta_{{\rm max}})$ Krishnamoorthi (2018). The Quadapter parameter
$\mathbf{\alpha}$ is learned during training and fused into the adjacent
weights at inference time (Equation 3).
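For concreteness, a uniform quantizer of this kind, parametrized by ${\boldsymbol{\theta}}=(\theta_{\rm min},\theta_{\rm max})$, can be sketched in NumPy as follows (a minimal illustration of the standard asymmetric scheme; the function name and the specific range are ours):

```python
import numpy as np

def quantize(x, theta_min, theta_max, n_bits=8):
    """Uniform asymmetric fake quantization Q_theta: map x onto an n_bits
    integer grid spanning [theta_min, theta_max], then de-quantize."""
    levels = 2 ** n_bits - 1                    # 255 for 8 bits
    scale = (theta_max - theta_min) / levels    # quantization step size
    zero_point = np.round(-theta_min / scale)   # integer code of 0.0
    x_int = np.clip(np.round(x / scale) + zero_point, 0, levels)
    return (x_int - zero_point) * scale         # back to real values

x = np.array([-0.7, 0.0, 0.313, 2.4])
x_q = quantize(x, theta_min=-1.02, theta_max=1.53)
# values inside [theta_min, theta_max] are reproduced to within half a
# step (here scale = 2.55/255 = 0.01); 2.4 lies outside the range and is
# clipped to theta_max
```

This also makes visible why outliers are costly: they stretch $(\theta_{\rm min},\theta_{\rm max})$ and hence the step size for all other values.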
As Equation 2 shows, the forward scaling and the inverse scaling must cancel
through three nested quantizers, which are strongly nonlinear operations.
Therefore $\mathbf{\alpha}$ should be learned rather than set analytically as
in Nagel et al. (2019); a single analytical solution is not sufficient to
balance the quantization burden between the two layers.
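The folding itself is exact in full precision: with the quantizers removed, $\mathbf{W}_{2}\mathbf{A}^{-1}(\mathbf{A}\mathbf{W}_{1}\mathbf{x}+\mathbf{A}\mathbf{b}_{1})+\mathbf{b}_{2}$ reduces to Equation 1. A minimal NumPy check of this identity (shapes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d, d_out = 4, 6, 3
W1, b1 = rng.normal(size=(d, d_in)), rng.normal(size=d)
W2, b2 = rng.normal(size=(d_out, d)), rng.normal(size=d_out)
alpha = rng.uniform(0.5, 2.0, size=d)   # Quadapter parameters (one per channel)
x = rng.normal(size=d_in)

# Original forward pass (Equation 1).
y = W2 @ (W1 @ x + b1) + b2

# Fold A = diag(alpha) into the surrounding layers, as in Equation 3
# but with the quantizers removed.
W1_f = alpha[:, None] * W1              # A W1
b1_f = alpha * b1                       # A b1
W2_f = W2 / alpha[None, :]              # W2 A^{-1}
y_f = W2_f @ (W1_f @ x + b1_f) + b2

assert np.allclose(y, y_f)              # identity holds in full precision
```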
### 3.2 Quadapter Training
The learning of Quadapter is comprised of two phases: the block-wise
calibration and the end-to-end fine-tuning.
Phase 1: Block-wise Calibration Each of the Quadapter instances is
initialized to $\vec{\mathbf{1}}$ and trained with the calibration data,
independently per Quadapter block. The local objective for each block is the L2
loss:
$\displaystyle\operatornamewithlimits{\arg\min}_{\mathbf{\alpha}}||\mathbf{y}-\hat{\mathbf{y}}||_{2}^{2},$
(4)
which Nagel et al. (2020) shows to be effectively complementary to the task
loss.
$\hat{\mathbf{y}}$ is computed in the dynamic quantization mode Zafrir et al.
(2019), where the statistics are obtained per batch. Quadapter resulting from
the calibration phase is a PTQ method that is independent of the fine-tuning
process. We therefore denote such Quadapter by Quadapter BC.
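A toy sketch of the block-wise calibration follows. This is our own illustration with synthetic data: we substitute a per-channel coordinate search over a log-spaced grid for the paper's gradient-based training, which in practice backpropagates through the quantizers with a straight-through estimator in an autodiff framework.

```python
import numpy as np

def fake_quant(t, n_bits=8):
    """Dynamic-mode uniform quantizer: range taken from the tensor itself."""
    lo, hi = float(t.min()), float(t.max())
    if hi == lo:
        return t
    scale = (hi - lo) / (2 ** n_bits - 1)
    return np.round((t - lo) / scale) * scale + lo

def block_loss(alpha, W1, b1, W2, b2, X, Y):
    """L2 reconstruction error of one Quadapter block (Equation 4),
    with A = diag(alpha) folded into both layers as in Equation 2."""
    W1_f = fake_quant(alpha[:, None] * W1)       # Q(A W1)
    H = fake_quant(X @ W1_f.T + alpha * b1)      # Q_a(activation)
    W2_f = fake_quant(W2 / alpha[None, :])       # Q(W2 A^{-1})
    Y_hat = H @ W2_f.T + b2
    return float(np.mean((Y - Y_hat) ** 2))

def calibrate(W1, b1, W2, b2, X, sweeps=2):
    """Phase 1 sketch: fit alpha per block against the full-precision
    block output, accepting only moves that lower the loss."""
    Y = (X @ W1.T + b1) @ W2.T + b2              # full-precision target
    alpha = np.ones(W1.shape[0])                 # initialize to 1
    grid = np.logspace(-2.0, 2.0, 41)
    for _ in range(sweeps):
        for i in range(alpha.size):
            best = block_loss(alpha, W1, b1, W2, b2, X, Y)
            for g in grid:
                cand = alpha.copy()
                cand[i] = g
                loss = block_loss(cand, W1, b1, W2, b2, X, Y)
                if loss < best:
                    best, alpha = loss, cand
    return alpha

# Toy demo: the first row of W1 carries large outliers.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 4)); W1[0] *= 50.0
b1, W2, b2 = rng.normal(size=6), rng.normal(size=(3, 6)), rng.normal(size=3)
X = rng.normal(size=(32, 4))
Y = (X @ W1.T + b1) @ W2.T + b2
alpha = calibrate(W1, b1, W2, b2, X)
# the calibrated alpha never does worse than the identity scaling
```

In the paper's Phase 1 (Algorithm 1), $\mathbf{\alpha}$ is instead updated by gradient descent on this same block-wise loss.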
Phase 2: End-to-end Fine-tuning The subsequent fine-tuning starts with more
accommodating quantization parameters (i.e. the min/max statistics) since they
have moved to moderate values from extreme outliers during the first phase.
The fine-tuning therefore converges much more quickly.
In the second phase, the statistics for quantization are computed in the
fashion of static quantization Zafrir et al. (2019), based on the same
calibration data as in the first phase. Quadapter is then trained to minimize
the end-to-end task loss. During the course, the quantization parameters are
jointly learned as in Bhalgat et al. (2020) while the model parameters stay
fixed. Algorithm 1 details the full flow of the Quadatper training.
input: pretrained model $M$, Quadapter blocks, calibration data $D_{1}$,
fine-tuning data $D_{2}$, learning rates $\eta_{1}$, $\eta_{2}$.
output: learned Quadapter parameters
$\{\mathbf{\alpha}_{1},\mathbf{\alpha}_{2},\dots\}$ and quantization
parameters ${\boldsymbol{\theta}}^{*}=\{{\boldsymbol{\theta}}^{1},{\boldsymbol{\theta}}^{2},\dots\}$.

/* Phase 1 */
foreach $i$-th Quadapter block do
    initialize $\mathbf{\alpha}_{i}=\vec{\mathbf{1}}$
    from $M$ and $D_{1}$, gather block input ${\mathbf{x}}_{i}$ and output $\mathbf{y}_{i}$
    while not converged do
        $\hat{\mathbf{y}}_{i}\leftarrow$ Eq. 2
        $\mathbf{\alpha}_{i}\leftarrow\mathbf{\alpha}_{i}-\eta_{1}\nabla_{\mathbf{\alpha}_{i}}||\hat{\mathbf{y}}_{i}-\mathbf{y}_{i}||^{2}_{2}$
    end while
end foreach

/* Phase 2 */
apply the learned Quadapters to $M$
initialize ${\boldsymbol{\theta}}^{*}$ with $D_{1}$ to obtain the quantized model $M_{Q}$
while not converged do
    for ${\mathbf{x}},\mathbf{y}\in D_{2}$ do
        compute $L_{\rm{task}}(M_{Q}({\mathbf{x}}),\mathbf{y})$
        foreach $i$-th Quadapter block do
            $\mathbf{\alpha}_{i}\leftarrow\mathbf{\alpha}_{i}-\eta_{2}\nabla_{\mathbf{\alpha}_{i}}L_{\rm{task}}$
        end foreach
        update $\boldsymbol{\theta}^{*}$ with LSQ+
    end for
end while

Algorithm 1 Quadapter training
## 4 Experiments
Models We quantize GPT-2 Radford et al. (2019) and DistilGPT-2 Sanh et al.
(2019) based on their huggingface pretrained models111huggingface.co/gpt2,
huggingface.co/distilgpt2. Our quantization configuration follows Siddegowda
et al. (2022), doing uniform asymmetric 8-bit quantization for both
activations and weights. All the weights and activations are quantized, except
for biases, non-linear operations, and additions Zafrir et al. (2019); Kim
et al. (2021). For every transformer block, the Quadapter instances are
installed in between the first layer norm and the linear projection for
key/query/value as well as between the second layer norm and the first feed-
forward network (Figure 2 right). One additional instance is applied between
the final layer norm and the logit projection.
Baseline methods Our implementation of LSQ+ follows the original formulation
of Bhalgat et al. (2020), except that the min/max parameters are updated for
training stability Siddegowda et al. (2022). It is applied in all the QAT
experiments. We use AI Model Efficiency
Toolkit222https://github.com/quic/aimet to obtain AdaRound performance. The
CLE metrics are computed with an untrained Quadapter, initialized analytically
as in Nagel et al. (2019).
Datasets We employ WikiText-2 Merity et al. (2016), the English Penn Treebank
(PTB) corpus Marcus et al. (1993), the LAMBADA dataset Paperno et al. (2016),
and the named-entity subset (CBT_NE) as well as the common-noun subset
(CBT_CN) of Children’s Book Test Hill et al. (2016). We follow the datasets’
default divisions as to training/validation/test splits.
Experiment design To test the overfitting resiliency, GPT-2 and DistilGPT-2
are quantized with various PTQ and QAT methods on one of the five datasets.
The resulting quantized model is evaluated on its F-ID and on the other four
datasets (F-OOD). In addition, we expose the models to varying amounts of
fine-tuning data during quantization to compare the changing behaviors of QAT
and Quadapter.
Figure 3: GPT-2 quantization performance when fine-tuned on F-ID of varying
sizes. Both axes are logarithmic.
Results In Table 1, Quadapter outperforms the baseline methods on the F-OOD in
both GPT-2 and DistilGPT-2. This observation evinces the general capability of
Quadapter to reduce overfitting across different models. The comparison
between Quadapter (ours) and Quadapter BC+QAT is the ablation of the end-to-end fine-tuning, and the result proves its importance.
Noteworthy is that Quadapter is a powerful stand-alone PTQ technique. Even
without QAT fine-tuning, the F-OOD metrics are better than those of the QAT
baselines. In addition, the effectiveness of the calibration phase is shown by
the comparison between CLE and Quadapter BC.
Another advantage of Quadapter is that it is a viable quantization option in
data-scarce situations. As shown in Figure 3, Quadapter outperforms QAT
throughout different amounts of fine-tuning data, and the gap is most evident
when only a small amount of data is available.
Aside from the convincing metrics reported above, we further explore if
Quadapter does the intended job of transforming an activation into a more
uniform distribution. Figure 4 shows the per-channel statistics before and
after the Quadapter training. In most activation dimensions, apart from a few
outliers, values have small magnitudes around 0; such dimensions lose
precision when quantized because the total min/max range is dominated by the
outliers before Quadapter is applied. The illustration verifies that the
effect of Quadapter indeed aligns with our expectation, reducing the ranges of
outlier-ridden channels while enlarging the ranges of the others.
Figure 4: Visualization of the per-channel (x-axis) min/max (y-axis) values of
the final layer norm output activation in GPT-2. The solid/dotted lines
represent per-channel/total min and max.
## 5 Limitations
One limitation of Quadapter is that it requires two consecutive layers of
linear relations. In other words, it can be a mediator only for convolution
layers, linear layers, or normalization layers (when followed by a linear or
convolution layer), but not if residual connections or nonlinear activation
functions intervene.
## 6 Conclusions
We identify two challenges in quantizing autoregressive transformer language
models: the overfitting issue of QAT and the inter-channel variance in
activations. Through experiments, we demonstrate that Quadapter not only
mitigates the two problems but also serves as an effective PTQ technique.
## References
* Bhalgat et al. (2020) Yash Bhalgat, Jinwon Lee, Markus Nagel, Tijmen Blankevoort, and Nojun Kwak. 2020. LSQ+: Improving low-bit quantization through learnable offsets and better initialization. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_.
* Bondarenko et al. (2021) Yelysei Bondarenko, Markus Nagel, and Tijmen Blankevoort. 2021. Understanding and overcoming the challenges of efficient transformer quantization. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, (EMNLP)_.
* Chung et al. (2020) Insoo Chung, Byeongwook Kim, Yoonjung Choi, Se Jung Kwon, Yongkweon Jeon, Baeseong Park, Sangha Kim, and Dongsoo Lee. 2020. Extremely low bit transformer quantization for on-device neural machine translation. In _Findings of the Association for Computational Linguistics (EMNLP)_, volume abs/2009.07453.
* Esser et al. (2020) Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S. Modha. 2020. Learned step size quantization. In _8th International Conference on Learning Representations (ICLR)_.
* Hill et al. (2016) Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2016. The goldilocks principle: Reading children’s books with explicit memory representations.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In _Proceedings of the 36th International Conference on Machine Learning (ICML)_.
* Kim et al. (2021) Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney, and Kurt Keutzer. 2021. I-BERT: integer-only BERT quantization. In _Proceedings of the 38th International Conference on Machine Learning, (ICML)_.
* Krishnamoorthi (2018) Raghuraman Krishnamoorthi. 2018. Quantizing deep convolutional networks for efficient inference: A whitepaper. _arXiv preprint_, abs/1806.08342.
* Kumar et al. (2022) Ananya Kumar, Aditi Raghunathan, Robbie Jones, Tengyu Ma, and Percy Liang. 2022. Fine-tuning can distort pretrained features and underperform out-of-distribution. _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_.
* Li and Hoiem (2016) Zhizhong Li and Derek Hoiem. 2016. Learning without forgetting. In _European Conference on Computer Vision (ECCV)_.
* Marcus et al. (1993) Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. _Comput. Linguistics_, 19(2):313–330.
* Merity et al. (2016) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models.
* Nagel et al. (2020) Markus Nagel, Rana Ali Amjad, Mart van Baalen, Christos Louizos, and Tijmen Blankevoort. 2020. Up or down? adaptive rounding for post-training quantization. In _Proceedings of the 37th International Conference on Machine Learning, (ICML)_.
* Nagel et al. (2019) Markus Nagel, Mart Van Baalen, Tijmen Blankevoort, and Max Welling. 2019. Data-free quantization through weight equalization and bias correction. In _IEEE/CVF International Conference on Computer Vision (ICCV)_.
* Paperno et al. (2016) Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL_.
* Perez et al. (2018) Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. 2018. Film: Visual reasoning with a general conditioning layer. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence_.
* Prato et al. (2020) Gabriele Prato, Ella Charlaix, and Mehdi Rezagholizadeh. 2020. Fully quantized transformer for machine translation. In _Findings of the Association for Computational Linguistics (EMNLP)_.
* Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. In _NeurIPS EMC2 Workshop_.
* Shen et al. (2020) Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. 2020. Q-BERT: hessian based ultra low precision quantization of BERT. In _The Thirty-Fourth Conference on Artificial Intelligence (AAAI)_.
* Siddegowda et al. (2022) Sangeetha Siddegowda, Marios Fournarakis, Markus Nagel, Tijmen Blankevoort, Chirag Patel, and Abhijit Khobare. 2022. Neural network quantization with AI model efficiency toolkit (AIMET). _arXiv_, abs/2201.08442.
* Stickland and Murray (2019) Asa Cooper Stickland and Iain Murray. 2019. BERT and pals: Projected attention layers for efficient adaptation in multi-task learning. In _Proceedings of the 36th International Conference on Machine Learning (ICML)_.
* Tao et al. (2022) Chaofan Tao, Lu Hou, Wei Zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, and Ngai Wong. 2022. Compression of generative pre-trained language models via quantization. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, ACL_.
* Zafrir et al. (2019) Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. 2019. Q8BERT: quantized 8bit BERT. In _Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing (EMC2@NeurIPS)_.
* Zhang et al. (2021) Rongsheng Zhang, Yinhe Zheng, Xiaoxi Mao, and Minlie Huang. 2021. Unsupervised domain adaptation with adapter. In _35th Conference on Neural Information Processing Systems (NeurIPS)_.
* Zhao et al. (2019) Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, and Zhiru Zhang. 2019. Improving neural network quantization without retraining using outlier channel splitting. In _Proceedings of the 36th International Conference on Machine Learning, (ICML)_.
## Appendix
## Appendix A Additional Experiments
F-ID Expansion In Table 1, we limit the F-ID to one amongst the five available
datasets. Here, we perform an additional experiment by expanding the F-ID to
include four of them, limiting the F-OOD to the one remaining dataset. The
results are in Table 2. Comparing the metrics on WikiText2 when the QAT model
is fine-tuned on PTB (Table 1) and when fine-tuned on all but WikiText2 (Table
2), we can observe the improvement of Quadapter’s F-OOD performance. On the
other hand, QAT still suffers from overfitting despite the expanded fine-
tuning data.
Ablation of LSQ+ In Bhalgat et al. (2020), LSQ+ is a composite method that
includes initialization of weight quantizer, model parameter training, and
quantization parameter (${\boldsymbol{\theta}}$) training. However, in our
work, we isolate the quantization parameter training and denote it by LSQ+. In
Table 1, QAT is accompanied by LSQ+, thus training both the model parameters
and the quantization parameters. We ablate LSQ+ in Table 3. The results show
that LSQ+ tends to improve the quantization performance in general,
particularly in conjunction with our proposed method.
Quadapter BC effectiveness on QAT As discussed in the main text, QAT makes the
model overfit to F-ID and perform poorly on F-OOD. However, when employed with
Quadapter BC (i.e. Quadapter BC+QAT), the QAT training process is stabilized,
and the quantized model approaches the upper bound set by the fine-tuned FP
model (Figure 5). This shows that Quadapter facilitates QAT.
| Wikitext2 | PTB | LAMBADA | CBT_CN | CBT_NE | F-ID | F-OOD
---|---|---|---|---|---|---|---
FP | 29.28 | 41.31 | 48.39 | 27.29 | 30.53 | 36.88 | 29.28
QAT | 61.57 | 31.95 | 39.60 | 22.67 | 24.83 | 29.76 | 61.57
Quadapter BC+QAT | 41.13 | 25.37 | 34.05 | 19.79 | 21.56 | 25.19 | 41.13
Quadapter (ours) | 31.71 | 44.83 | 45.79 | 27.23 | 30.07 | 36.98 | 31.71
Table 2: PPL measurements of GPT-2 for expanded F-ID. Wikitext2 is the F-OOD,
and the other 4 datasets are the F-ID. The average PPL is reported in the
columns, F-ID and F-OOD.
Data | Method | Wikitext2 | PTB | LAMBADA | CBT_CN | CBT_NE
---|---|---|---|---|---|---
Wikitext2 | QAT | 36.28 (3.77) | 101.81 (1.07) | 134.80 (9.40) | 58.77 (3.83) | 68.67 (4.73)
Quadapter BC+QAT | 21.63 (0.02) | 57.40 (0.34) | 61.72 (-1.93) | 33.48 (-0.32) | 38.02 (-0.38)
Quadapter (ours) | 32.69 (3.35) | 49.01 (1.71) | 59.59 (2.31) | 31.44 (1.07) | 35.27 (1.22)
PTB | QAT | 275.80 (-55.81) | 37.87 (3.93) | 284.61 (-45.49) | 165.75 (-46.37) | 194.71 (-57.32)
Quadapter BC+QAT | 80.00 (0.26) | 23.93 (-0.17) | 101.61 (-4.71) | 59.95 (0.05) | 69.78 (-0.01)
Quadapter (ours) | 33.79 (0.10) | 46.98 (7.52) | 59.46 (3.78) | 31.54 (0.09) | 35.37 (0.21)
Table 3: PPL measurements of GPT-2 trained without LSQ+. The differences from
the counterparts with LSQ+ in Table 1 are noted in parentheses (a positive
value indicates LSQ+'s performance gain).
## Appendix B Hyperparameter Details
Block-wise Calibration We use 10 calibration samples with a maximum time
length (block size) of 512, yielding 5,120 text tokens in total. The same
calibration data is used for all the PTQ and QAT experiments. The initial
learning rate is 0.1, and it decays at a rate of 0.2 every 100 steps. The
training takes place for 500 steps with the Adam optimizer. Figure 6 shows the
convergence of training in two of the GPT-2 Quadapter blocks: the very first
layer norm and the final layer norm. The total block-wise calibration phase
takes approximately 2 minutes on an RTX 2080 Ti when training all the
Quadapter blocks in GPT-2 sequentially from bottom to top.
Figure 5: Comparison of the fine-tuned FP model (FP finetuned) with other
methods. PTB is the F-ID, and the other 4 datasets are the F-OOD. Figure 6:
Block-wise calibration learning curves of the two selected Quadapter blocks in
GPT-2.
End-to-end Fine-tuning We use initial learning rates of 1e-5, 1e-3, and 1e-3
for the model parameters, the Quadapter $\mathbf{\alpha}$, and the quantization parameters
${\boldsymbol{\theta}}$, respectively. The learning rate linearly decreases to
0 over 10,000 training steps. The batch size is set to 4 with the max time
length of 512. All the QAT methods follow this training scheme. After the
completion of training, PPL is measured with the max time length of 1024. All
of the PPL metrics reported in this work share this configuration.
## Appendix C Implementation Details
Quantization Implementation The quantizer function
$\textit{Q}_{\boldsymbol{\theta}}$ is defined as follows:
$\displaystyle\textit{Q}_{\boldsymbol{\theta}}(x)=s\cdot(clip(\left\lfloor\frac{x}{s}+o\right\rceil,0,2^{b}-1)-o),$
(5) $\displaystyle s=\frac{\theta_{max}-\theta_{min}}{2^{b}-1},\quad
o=\left\lfloor-\frac{\theta_{min}}{s}\right\rceil,$ (6)
where $s$ and $o$ are the scale and offset, and $b$ is the target bit depth
(8 bits in our case). $\lfloor\cdot\rceil$ is a round-to-nearest function, and
$clip({\mathbf{x}},n,p)$ clamps the values of the input ${\mathbf{x}}$ to the
interval $[n,p]$.
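The quantizer of Eqs. (5)-(6) can be rendered directly in NumPy; the function name below is ours.

```python
import numpy as np

def quantize(x, theta_min, theta_max, b=8):
    """Uniform asymmetric fake-quantizer of Eqs. (5)-(6)."""
    s = (theta_max - theta_min) / (2 ** b - 1)    # scale
    o = np.round(-theta_min / s)                  # offset (zero point)
    q = np.clip(np.round(x / s + o), 0, 2 ** b - 1)
    return s * (q - o)

x = np.linspace(-1.0, 2.0, 7)
xq = quantize(x, x.min(), x.max())
# Inside the range, the round-trip error is at most half a quantization step.
print(np.max(np.abs(xq - x)) <= (3.0 / 255) / 2 + 1e-12)  # True
```

With dynamic min/max, as in the calibration phase, the endpoints of the range are represented exactly and every in-range value is off by at most $s/2$.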
In the first calibration phase, $\theta_{min}$ and $\theta_{max}$ are obtained
from the batch statistics at each inference step (i.e. dynamic quantization).
In the second fine-tuning phase, the parameters are initially set based on the
calibration data (i.e. static quantization), and trained afterwards. When
${\boldsymbol{\theta}}$ is trained, the gradients are passed through the
rounding function via straight-through estimation.
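A minimal sketch of the straight-through behaviour: the forward pass applies Eq. (5), while the backward pass treats the rounding as the identity, so the gradient with respect to the input is 1 inside the clipping range and 0 outside (the exact masking used in the actual LSQ+ implementation may differ).

```python
import numpy as np

def quant_ste(x, s, o, b=8):
    """Forward: Eq. (5). Backward (STE): round() is treated as identity,
    so dy/dx is 1 inside the clipping range and 0 where values are clipped."""
    z = x / s + o
    q = np.clip(np.round(z), 0, 2 ** b - 1)
    y = s * (q - o)
    inside = (z >= 0) & (z <= 2 ** b - 1)
    grad_mask = inside.astype(float)   # dy/dx under STE
    return y, grad_mask

y, g = quant_ste(np.array([-300.0, 0.3, 100.0]), s=1.0, o=0.0)
print(g.tolist())  # [0.0, 1.0, 1.0] -> clipped values stop the gradient
```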
Quadapter Implementation The details of the actual application of Quadapter to
GPT-2 are slightly different from the general form of the Quadapter block in
Equation 2. In the transformer block of GPT-2, $\mathbf{W}_{1}$ denotes only
the affine transformation, but not the preceding normalization of the layer
norm operation. For example, we can define the layer norm operation as
follows:
$\displaystyle\mathbf{y}_{l}=\frac{{\mathbf{x}}_{l}-\mu({\mathbf{x}}_{l})}{\sigma({\mathbf{x}}_{l})}\odot\gamma+\beta,$
(7)
then $\mathbf{W}_{1}=\gamma,\mathbf{b}_{1}=\beta$, and the input of Quadapter
block is $({\mathbf{x}}_{l}-\mu({\mathbf{x}}_{l}))/\sigma({\mathbf{x}}_{l})$.
The fused weight is computed as
$\mathbf{W}_{1}^{\prime}=\mathbf{\alpha}\odot\gamma$ with element-wise
multiplication $\odot$.
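The fusion can be checked numerically: scaling the layer-norm output by $\mathbf{\alpha}$ is identical to replacing $(\gamma,\beta)$ with $(\mathbf{\alpha}\odot\gamma,\mathbf{\alpha}\odot\beta)$; the bias folds in the same way as the weight, which the text leaves implicit.

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # Eq. (7): normalize over the last axis, then apply the affine transform.
    mu = x.mean(-1, keepdims=True)
    sig = x.std(-1, keepdims=True)
    return (x - mu) / (sig + eps) * gamma + beta

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))
gamma, beta = rng.normal(size=8), rng.normal(size=8)
alpha = rng.uniform(0.5, 2.0, 8)            # per-channel Quadapter scale

# Applying alpha after the layer norm ...
a = alpha * layer_norm(x, gamma, beta)
# ... equals folding it into the affine parameters: W1' = alpha ⊙ gamma.
b = layer_norm(x, alpha * gamma, alpha * beta)
print(np.allclose(a, b))  # True
```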
## Appendix D Constraint of Two Linear Layers
While Section 5 states that Quadapter is applicable only to two consecutive
layers of linear nature, it omits, for brevity, the discussion of piece-wise
linear activation functions. For an operation $f$ that meets the
following scaling-invariant condition:
$\displaystyle f(sx)=sf(x),$ (8)
the identity relation between the scaling and the inverse-scaling of Quadapter
still holds. Therefore, it is possible to install Quadapter through piece-wise
linear functions (e.g. ReLU, leaky ReLU, PReLU, etc.) as well.
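Eq. (8) is easy to verify numerically for ReLU-family functions; note that it holds for positive scales, which is the relevant case for Quadapter's $\mathbf{\alpha}$.

```python
import numpy as np

relu = lambda x: np.maximum(x, 0)
leaky = lambda x: np.where(x > 0, x, 0.01 * x)   # leaky ReLU, slope 0.01

x = np.linspace(-3, 3, 13)
s = 2.5  # a positive per-channel scale
print(np.allclose(relu(s * x), s * relu(x)),
      np.allclose(leaky(s * x), s * leaky(x)))   # True True
```

By contrast, a smooth nonlinearity such as GELU or tanh violates Eq. (8), which is exactly why it blocks the insertion of Quadapter, motivating the extension discussed below.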
Our future goal is to further expand the Quadapter applicability. We thus plan
to investigate if Quadapter would be applicable even with an intervening
nonlinear activation function (e.g. GeLU, tanh, etc.) by enhancing
expressivity (i.e. setting up additional learnable scalar variables for the
inverse scaling). In addition, by scaling all the tensors involved in a
residual connection, we expect to apply Quadapter even in the presence of a
residual connection in between the two target layers.
# Electronic properties of twisted bilayer graphene suspended and encapsulated
with hexagonal boron nitride
Min Long Key Laboratory of Artificial Micro- and Nano-structures of Ministry
of Education and School of Physics and Technology, Wuhan University, Wuhan
430072, China Zhen Zhan <EMAIL_ADDRESS> Key Laboratory of Artificial
Micro- and Nano-structures of Ministry of Education and School of Physics and
Technology, Wuhan University, Wuhan 430072, China Pierre A. Pantaleón IMDEA
Nanociencia, C/Faraday 9, 28049 Madrid, Spain Jose Ángel Silva-Guillén IMDEA
Nanociencia, C/Faraday 9, 28049 Madrid, Spain Francisco Guinea IMDEA
Nanociencia, C/Faraday 9, 28049 Madrid, Spain Donostia International Physics
Center, Paseo Manuel de Lardizábal 4, 20018 San Sebastián, Spain Ikerbasque,
Basque Foundation for Science, 48009 Bilbao, Spain Shengjun Yuan Key
Laboratory of Artificial Micro- and Nano-structures of Ministry of Education
and School of Physics and Technology, Wuhan University, Wuhan 430072, China
Wuhan Institute of Quantum Technology, Wuhan 430206, China
###### Abstract
The recently observed anomalous Hall effect in magic-angle twisted bilayer
graphene (TBG) aligned to hexagonal boron nitride (hBN) and the unconventional
ferroelectricity in Bernal bilayer graphene sandwiched by hBN present a new
platform to tune the correlated properties in graphene systems. In these
graphene-based moiré superlattices, the aligned hBN substrate plays an
important role. In this paper, we analyze the effects of the hBN substrate on
the band structure of TBG. By means of an atomistic tight-binding model, we
calculate the electronic properties of TBG suspended and encapsulated with
hBN. Interestingly, we find that the physical properties of TBG are extremely
sensitive to the presence of hBN and may be completely different depending on
whether TBG is suspended or encapsulated. We quantify these differences by
analysing the electronic properties, optical conductivity and band topology.
We find that the narrow bandwidth, band gap, local density of states and
optical conductivity are significantly modified by the aligned hBN substrates.
Interestingly, these electronic properties can be used as a signature of the
alignment in experiments. Moreover, TBG/hBN superlattices with and without
two-fold rotation symmetry respond differently to an external electric field.
For TBG suspended on hBN, application of an electric field results in charge
being unevenly distributed between the graphene layers, which can be used to
tune the strength of the valley Hall effect or the anomalous Hall effect. Such
a rich topological phase diagram in these systems may be useful for
experiments.
## I Introduction
Since the discovery of unconventional superconductivity and correlation
effects in twisted bilayer graphene (TBG) near the magic angle by Cao _et al._
[1, 2], the so-called field of “twistronics” has become of great interest to
the condensed matter community [3]. At this magic angle of approximately
$1.1^{\circ}$ [4], the system possesses flat bands near charge neutrality,
which are responsible for most of these exotic behaviours. Interestingly,
twisting is not restricted to two graphene layers: transition metal
dichalcogenides [5, 6], monochalcogenides [7], hexagonal boron nitride (hBN)
[8, 9] and black phosphorus [10], among others, have also been twisted.
Similar to TBG, flat bands are observed in most of these twisted
two-dimensional (2D) materials.
Experimentally, the devices are usually supported on a substrate, typically a
thin sample of hBN [11]. Although hBN has a large gap and has been thought to
have a small impact on the electronic properties of the materials it supports,
this is not the case. As we have shown recently [12] (see
also [13, 14, 15, 16]), hBN has an important effect on the electronic
properties of TBG, when the sample is either supported or encapsulated. We
have found that when the sample is placed on top of hBN a gap opens due to the
appearance of a mass term as a consequence of the breaking of
$\mathcal{C}_{2}$ symmetry [12, 17]. Furthermore, a splitting in the bands
appears due to layer degeneracy breaking. In fact, hBN affects the electronic
properties of TBG even when the angle between TBG and hBN is far from
alignment. When TBG is encapsulated between two layers of hBN the layer
degeneracy can be recovered for certain angles, while the gap still appears.
In a typical experimental setup, an electric field is usually applied to the
samples in order to change their doping, which can also modulate the
electronic structure of the samples [18, 19]. Therefore, the study of the electronic and
optical properties such as the local density of states (LDOS) and the optical
conductivity of TBG in combination with a substrate and when an electric field
is applied is very compelling.
In this work, we extend our previous study to investigate the aforementioned
properties in both the case of TBG supported on and encapsulated between hBN
layers. In Sec. II we describe the atomic structure of our system and the
methods that we employ to perform our calculations. Then, in Sec. III, the
layer degeneracy in electronic properties such as the LDOS and the charge
density distribution is studied. We discuss the optical conductivity of the twisted
bilayer graphene-boron nitride superstructure in Sec. IV. In Sec. V we explore
the response of these systems to an external field. In Sec. VI we study the
effect of encapsulating TBG on its band topology. Finally, we give a summary
and discussion of our work.
## II Atomic structures and numerical methods
### II.1 The Atomic structures
In this work, we mainly focus on two structures. The first one is a trilayer
system composed of TBG and a hBN layer lying on the bottom (TBG/hBN). The
second one is a sandwich-like system where TBG is encapsulated by two hBN
layers (hBN/TBG/hBN). TBG and hBN can be stacked in different ways to have
commensurate structures [20]. In our case, we start by constructing an
unrotated trilayer structure, where the graphene and hBN bonds are parallel
and the center of the supercell has an AAA stacking, with a carbon site of
graphene and a nitrogen (N) site of hBN sharing the same in-plane position,
$(x,y)=(0,0)$. Then we rotate the top layer graphene ($G_{top}$) and bottom
respect to the bottom layer graphene ($G_{bot}$), respectively. In our
notation, positive angles correspond to counterclockwise rotations.
We define the lattice vectors of a hexagonal lattice as
$a_{1}=a(\sqrt{3}/2,1/2)$ and $a_{2}=a(\sqrt{3}/2,-1/2)$,
with $a$ the lattice constant. For graphene $a_{g}=0.246$ nm, and for hBN
$a_{hBN}=0.2503$ nm. The lattice mismatch between graphene and hBN is
$\delta\approx 1.8$%. The twist angle of TBG is solely determined by a
coprime integer pair $(m,n)$
$\theta_{tbg}=2\arcsin\frac{(m-n)}{2\sqrt{m^{2}+mn+n^{2}}},$ (1)
with moiré length
$L_{tbg}=\frac{a_{g}|m-n|}{2|\sin{(\theta_{tbg}/2)}|}=a_{g}\sqrt{m^{2}+mn+n^{2}}.$
(2)
The moiré length of the $G_{bot}$/$hBN_{bot}$ superlattice is [11]
$L_{hBN}=\frac{(1+\delta)a_{g}}{\sqrt{\delta^{2}+2(1+\delta)(1-\cos{\theta_{bot}})}}.$
(3)
A commensurate structure of TBG/hBN is obtained when
$L_{tbg/hBN}=L_{tbg}=qL_{hBN}$ (4)
where $q$ is an integer. We focus on a system where
$\theta_{tbg}=1.05^{\circ}$ and $\theta_{bot}=0.53^{\circ}$. For this angle
combination, the periodicity of the moiré pattern of TBG is identical to that
constructed by hBN and $G_{bot}$, and therefore a single moiré unit cell can
be defined for the combined system [17, 21]. That is, the three moiré lengths
have the same value $L_{tbg/hBN}=L_{tbg}=L_{hBN}=13.4$ nm, and $q=1$. To
construct the hBN/TBG/hBN structure, we just add a second hBN layer
($hBN_{top}$) on the top of the trilayer structure, the twist angle between
$hBN_{top}$ and $G_{bot}$ is also $\theta_{top}=0.53^{\circ}$. It is important
to note that, in order to keep the periodicity of these structures, the
lattice constant of hBN is slightly modified. In our case, this implies a
strain of about $0.14\%$. Such a small value of strain will not affect the
structural or electronic properties of hBN, which, therefore, will not change
the properties of the TBG/hBN systems that we study in this work [12]. The
schematics of TBG/hBN and hBN/TBG/hBN are shown in Fig. 1(a) and Fig. 2(a),
respectively.
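Eqs. (1)-(4) can be checked numerically. The pair $(m,n)=(32,31)$ is our assumption for reproducing $\theta_{tbg}\approx 1.05^{\circ}$; with the nominal hBN lattice constant, $L_{hBN}(0.53^{\circ})$ does not quite equal $L_{tbg}$, and a bisection recovers the small hBN strain ($\approx 0.13\%$, consistent with the $\approx 0.14\%$ quoted above) needed for commensuration with $q=1$.

```python
import numpy as np

a_g, a_hbn0 = 0.246, 0.2503          # graphene / hBN lattice constants, nm

def theta_tbg(m, n):                 # Eq. (1), in degrees
    return np.degrees(2 * np.arcsin((m - n) / (2 * np.sqrt(m*m + m*n + n*n))))

def L_tbg(m, n):                     # Eq. (2), in nm
    return a_g * np.sqrt(m*m + m*n + n*n)

def L_hbn(theta_deg, a_hbn):         # Eq. (3), in nm
    delta = a_hbn / a_g - 1
    th = np.radians(theta_deg)
    return (1 + delta) * a_g / np.sqrt(delta**2 + 2*(1 + delta)*(1 - np.cos(th)))

m, n = 32, 31                        # assumed (m, n) matching the quoted angle
theta, L = theta_tbg(m, n), L_tbg(m, n)      # ~1.05 deg, ~13.4 nm

# Bisect for the slightly strained hBN lattice constant enforcing Eq. (4), q = 1.
lo, hi = 0.249, a_hbn0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if L_hbn(0.53, mid) > L:
        lo = mid
    else:
        hi = mid
a_strained = 0.5 * (lo + hi)
strain = (a_hbn0 - a_strained) / a_hbn0 * 100
print(round(theta, 2), round(L, 1), round(strain, 2))  # ~1.05, ~13.4, ~0.13 (%)
```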
### II.2 The tight-binding model
We adopt a combination of semi-classical molecular dynamics and a tight-
binding (TB) model to investigate the electronic properties of TBG/hBN and
hBN/TBG/hBN structures. After constructing a commensurate supercell, we use
semi-classical molecular dynamics, which is implemented in LAMMPS [22], to
fully (both in-plane and out-of-plane) relax the graphene layers in the two
systems. For intralayer interaction between graphene, we use the reactive
empirical bond order potential (REBO) [23]. For interlayer interaction between
graphene, we use the registry-dependent Kolmogorov-Crespi (RDKC) potential
developed for graphite [24]. We use the same RDKC potential for the C-B and
C-N interlayer interaction, but with different strength. The interaction
strength of C-B and C-N are 60% and 200% with respect to the original C-C
interaction, respectively [25]. The hBN layers are fixed in a flat
configuration to mimic a bulk or a few layers substrate. We assume that the
relaxed structures keep the periodicity of the rigid cases.
The full TB Hamiltonian for graphene and hBN heterostructure can be written as
[12]
$\displaystyle\hat{H}=$
$\displaystyle-\sum_{i,j}t(\mathbf{R}_{i}-\mathbf{R}_{j})\ket{\mathbf{R}_{i}}\bra{\mathbf{R}_{j}}+\sum_{i}\epsilon(\mathbf{R}_{i})\ket{\mathbf{R}_{i}}\bra{\mathbf{R}_{i}}$
(5)
$\displaystyle+\sum_{i}V_{D}(\mathbf{R}_{i})\ket{\mathbf{R}_{i}}\bra{\mathbf{R}_{i}},$
where $\mathbf{R}_{i}$ and $\ket{\mathbf{R}_{i}}$ represent the atom position
and the atomic state at site $i$, respectively,
$t(\mathbf{R}_{i}-\mathbf{R}_{j})$ is the transfer integral between the atomic
states at sites $i$ and $j$, $\epsilon(\mathbf{R}_{i})$ encodes the carbon,
boron and nitrogen onsite energies and $V_{D}(\mathbf{R_{i}})$ is the
deformation potential resulting from the structural relaxation. For the onsite
energy of boron, nitrogen and carbon atoms, we assume [26]:
$\displaystyle\epsilon_{B}=3.34\;\text{eV},\;\epsilon_{N}=-1.40\;\text{eV},\;\epsilon_{C}=0\;\text{eV}$
(6)
The lattice deformation leads to the emergence of periodic scalar and gauge
potentials [27, 28, 29, 30, 31, 32, 33]. All these effects can be accurately
considered in our TB model. To incorporate the relaxation effect into
Hamiltonian in Eq. (5), we introduce the deformation potential term as [25]:
$V_{D}(\mathbf{R}_{i})=g_{1}\frac{S(\mathbf{R}_{i})-S_{0}}{S_{0}},$ (7)
where the screened deformation potential $g_{1}=4$ eV [34], $S(\bm{R_{i}})$ is
the effective area of site $i$ that is modulated by local deformations, and
$S_{0}=\sqrt{3}a^{2}/4$ is the effective area per atom in equilibrium. For the transfer
integral, we simply adopt the common Slater-Koster-type function for any
combination of atomic species [26]:
$\displaystyle-t(\mathbf{R})=V_{pp\pi}[1-\left(\frac{\mathbf{R}\cdot\mathbf{e}_{z}}{R}\right)^{2}]+V_{pp\sigma}\left(\frac{\mathbf{R}\cdot\mathbf{e}_{z}}{R}\right)^{2},$
(8)
where
$\displaystyle V_{pp\pi}=V_{pp\pi}^{0}e^{-\frac{R-a_{0}}{r_{0}}},$ (9)
$\displaystyle V_{pp\sigma}=V_{pp\sigma}^{0}e^{-\frac{R-d_{0}}{r_{0}}}.$ (10)
In the above equation $\mathbf{e}_{z}$ is the unit vector perpendicular to the
graphene plane, $R=|\mathbf{R}|$, $a_{0}=a_{g}/\sqrt{3}\approx 0.142$ nm is
the C-C distance, $d_{0}=0.335$ nm is the interlayer distance, $V_{pp\pi}^{0}$
and $V_{pp\sigma}^{0}$ are the intralayer and interlayer transfer integrals
between nearest neighbor atoms, respectively. We take
$V_{pp\pi}^{0}\approx-2.7$ eV, $V_{pp\sigma}^{0}\approx 0.48$ eV. The
parameter $r_{0}$ is the decay length of the transfer integral, and is chosen
as $0.184a_{g}$ so that the next nearest intralayer coupling becomes
$0.1V_{pp\pi}^{0}$. For atoms whose distance is more than $0.6$ nm, we set
$t(\mathbf{R})=0$ since for larger distances the value of hopping energy is
small enough to be safely neglected.
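The Slater-Koster parametrization of Eqs. (8)-(10) is compact enough to verify directly: the in-plane nearest-neighbour hopping reduces to $-V_{pp\pi}^{0}$, the vertical interlayer hopping to $-V_{pp\sigma}^{0}$, and the next-nearest intralayer coupling to about $0.1V_{pp\pi}^{0}$, as stated in the text.

```python
import numpy as np

a_g = 0.246                      # graphene lattice constant, nm
a0 = a_g / np.sqrt(3)            # C-C distance, ~0.142 nm
d0 = 0.335                       # interlayer distance, nm
r0 = 0.184 * a_g                 # decay length of the transfer integral
V_pi0, V_sigma0 = -2.7, 0.48     # eV

def hopping(R):
    """Slater-Koster transfer integral t(R) of Eqs. (8)-(10); R in nm."""
    R = np.asarray(R, dtype=float)
    r = np.linalg.norm(R)
    if r > 0.6:                  # cutoff used in the paper
        return 0.0
    cos2 = (R[2] / r) ** 2       # squared direction cosine along e_z
    V_pi = V_pi0 * np.exp(-(r - a0) / r0)
    V_sig = V_sigma0 * np.exp(-(r - d0) / r0)
    return -(V_pi * (1 - cos2) + V_sig * cos2)

print(round(hopping([a0, 0, 0]), 2),    # 2.7  (in-plane nearest neighbour)
      round(hopping([0, 0, d0]), 2),    # -0.48 (vertical interlayer)
      round(hopping([a_g, 0, 0]), 2))   # ~0.27 (next-nearest intralayer)
```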
### II.3 The electronic properties
Once the TB Hamiltonian is constructed, we can calculate the electronic
properties of the TBG/hBN superlattices. Since the TBG/hBN and hBN/TBG/hBN
structures contain tens of thousands of atoms, we use a tight-binding
propagation method (TBPM) to obtain the density of states (DOS) and optical
conductivity. The TBPM is based on the numerical solution of the time-
dependent Schrödinger equation and requires no diagonalization processes,
which is implemented in our home-made TBPLaS simulator [35, 36, 37]. In TBPM,
a random initial state $|\phi_{0}\rangle$ is used with
$\langle\phi_{0}|\phi_{0}\rangle=1$. The density of states is calculated as a
Fourier transform of the time-dependent correlation function
$D(E)=\frac{1}{2\pi}\displaystyle\int_{-\infty}^{+\infty}e^{iE\tau}\langle\phi_{0}|e^{-iH\tau/\hbar}|\phi_{0}\rangle
d\tau$ (11)
The optical conductivity is calculated by combining the Kubo formula with TBPM
[35]. The real part of the optical conductivity matrix,
$\sigma_{\alpha\beta}$, at temperature $T$ reads
$\begin{split}\mathrm{Re}\left\\{\sigma_{\alpha\beta}(\omega)\right\\}=&\lim_{E\to
0^{+}}\frac{e^{-\hbar\omega/k_{B}T}-1}{\hbar\omega
A}\int_{0}^{\infty}e^{-E\tau}\sin\left(\omega\tau\right)\\\ &\times
2\,\mathrm{Im}\left\\{\langle\phi_{2}(\tau)|J_{\alpha}|\phi_{1}(\tau)\rangle_{\beta}\right\\}\mathrm{d}\tau,\end{split}$
(12)
with $A$ the area of the unit cell, and $|\phi_{1}(\tau)\rangle$ and
$|\phi_{2}(\tau)\rangle$ read
$\begin{split}|\phi_{1}(\tau)\rangle_{\alpha}&=e^{-iH\tau/\hbar}[1-f(H)]J_{\alpha}|\phi_{0}\rangle,\\\
|\phi_{2}(\tau)\rangle&=e^{-iH\tau/\hbar}f(H)|\phi_{0}\rangle,\\\ \end{split}$
(13)
where $f(H)=1/(e^{(H-\mu)/k_{B}T}+1)$ is the Fermi-Dirac distribution operator
and $\mu$ is the electronic chemical potential. In this work, all optical
conductivities are calculated at $T=300$ K and $\mu=0$. In the TBPM, the
convergence can be guaranteed by averaging over different initial states
$|\phi_{0}\rangle$. For large enough systems, the results are converged with
only one random initial state. The LDOS is obtained via the recursion method
in real space based on the Lanczos algorithm [38]. The eigenstates and
eigenvalues are obtained by direct diagonalization of the Hamiltonian in Eq.
(5).
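Eq. (11) can be illustrated on a toy Hamiltonian. The sketch below evaluates the correlation function $C(\tau)=\langle\phi_{0}|e^{-iH\tau/\hbar}|\phi_{0}\rangle$ by eigendecomposition, which is exactly what TBPM avoids for large systems (TBPLaS instead time-steps the Schrödinger equation); the one-dimensional ring Hamiltonian is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny nearest-neighbour tight-binding ring as a stand-in for the TBG Hamiltonian.
N = 64
H = np.zeros((N, N))
for i in range(N):
    H[i, (i + 1) % N] = H[(i + 1) % N, i] = -2.7   # eV

# Random normalized initial state, as in TBPM.
phi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
phi0 /= np.linalg.norm(phi0)

# C(tau) = <phi0| e^{-iH tau} |phi0> = sum_n |c_n|^2 e^{-i E_n tau}  (hbar = 1)
E, V = np.linalg.eigh(H)
c = V.conj().T @ phi0
taus = np.linspace(0, 40, 4096)
C = (np.abs(c) ** 2 * np.exp(-1j * np.outer(taus, E))).sum(axis=1)

# Eq. (11): the DOS is the Fourier transform of C(tau), with peaks at the E_n.
print(np.isclose(C[0], 1.0))   # True: C(0) = <phi0|phi0> = 1
```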
Figure 1: (a) The top (upper panel) and side (lower panel) views of the
trilayer structure. The center of the top view is the AAA high-symmetry
stacking, in which the carbon atoms of graphene and the nitrogen atoms of hBN
share the same in-plane position. The sublattices A and B of the graphene are
specified. The high-symmetry stackings of AAA, ABC and ACB are outlined with
black, red and purple circles, respectively. (b) Band structure and density of
states of the trilayer structure. The black arrows indicate various
significant optical transitions. (c) Local density of states of sublattices A
and B in the AAA stacking region. The LDOS of atoms from top (upper panel) and
bottom (lower panel) graphene layers are plotted separately. (d) Eigenstates
$|\psi|^{2}$ in real space. The eigenstates are calculated by diagonalizing
the Hamiltonian in Eq. (5). The corresponding energies $C1$, $C2$ and $C3$ of
these eigenstates are illustrated in (b). Figure 2: (a) The top (upper panel)
and side (lower panel) views of the tetralayer structure. The center of the
top view is the AAAA high-symmetry stacking, in which the carbon atoms of
graphene and the nitrogen atoms of hBN share the same in-plane position. The high-
symmetry stackings of AAAA, ABCA and ACBA are outlined by purple, red and
green circles, respectively. (b) Band structure and density of states of
tetralayer structure. The black arrows indicate various significant optical
transitions. (c) Local density of states of sites in the AAAA stacking region.
(d) Eigenstates $|\psi|^{2}$ in real space. The corresponding energies, $C1$,
$C2$ and $C3$ of these eigenstates are illustrated in (b).
## III TBG Heterostructures
### III.1 TBG supported on hBN
In free standing TBG, the layer degree of freedom is disentangled from spin
and valley, forming an eight-fold degeneracy in the low-energy flat bands. As
shown in Fig. 6, the conduction and valence flat bands are connected by Dirac
points at the K and K’ points of the moiré Brillouin zones (mBZ). These Dirac
points are protected by $\mathcal{C}_{2}\mathcal{T}$ symmetry, where
$\mathcal{C}_{2}$ is the two-fold rotation operator and $\mathcal{T}$ is the
time–reversal operator. If $\mathcal{C}_{2}\mathcal{T}$ is broken, the Dirac
points will become gapped. As we discussed in our previous work [12], in a
TBG/hBN superlattice, the hBN substrate introduces two contributions to the
TBG system: the first one is the energy difference between nitrogen and boron
atoms. This gives rise to different adhesion energies in the TBG/hBN system
and breaks the $\mathcal{C}_{2}$ inversion symmetry, which is the main source
of this mass gap. The second contribution is the lattice relaxation effects
that give rise to a deformation potential and a pseudo-magnetic field [39].
Such relaxation effect ensures the persistence of a gap opening in the Dirac
point for angles between TBG and hBN far from alignment [40].
The substrate effects can be clearly observed in the band structure shown in
Fig. 1(b). Narrow bands are separated by a gap of around 30 meV due to the
breaking of $\mathcal{C}_{2}$ symmetry [40, 41, 42, 43, 44, 45, 46, 47, 48,
49, 50, 51]. This first gap (energy difference between the flat bands at K) is
reduced when the angle $\theta_{bot}$ is increased [12]. Moreover, the presence
of a substrate acting on a single graphene layer breaks the mirror symmetry
between layers and their Dirac cones are shifted in energy. This layer
degeneracy breaking is responsible for the observed splitting between narrow
bands. This splitting is more obvious in the remote bands located at around
$\pm 0.1$ eV. As we will discuss later, a perpendicular electric field will
further increase this splitting. The narrow bands show a significant electron-
hole asymmetry due to the strong superlattice potential induced by the hBN.
Compared to TBG (see Fig. 6), the flat bands become dispersive and the peaks in
the DOS of the TBG at charge neutrality are smoothed, giving rise to an
insulating structure in Fig. 1(b). In graphene/hBN superlattices, secondary
Dirac cones appear at higher energies, which can be attributed to the moiré
potential [25]. In the TBG/hBN structure that we study, the periodicity of the
moiré pattern is 14 nm. This places the secondary Dirac cones
at around $0.14$ eV (outlined by a dashed rectangle in Fig. 1(b)). The states
of the induced Dirac cones are mainly localized at the ABC or ACB stacking
regions (results not shown), which is similar to the case of
graphene/hBN superlattices [25].
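This energy scale can be checked with a back-of-the-envelope estimate (our own, not from Ref. [25]): taking $E_{\mathrm{SDC}}\approx\hbar v_{F}|G|/2$, with $|G|=4\pi/(\sqrt{3}\lambda)$ the shortest moiré reciprocal vector and $\hbar v_{F}\approx 0.66$ eV nm for graphene,

```python
import numpy as np

hbar_vF = 0.658                       # eV*nm, assuming v_F ~ 1e6 m/s in graphene
lam = 14.0                            # moire period in nm (from the text)
G = 4 * np.pi / (np.sqrt(3) * lam)    # shortest moire reciprocal vector, 1/nm
E_sdc = hbar_vF * G / 2               # estimated secondary-Dirac-cone energy
```

which gives roughly $0.17$ eV, of the same order as the $\sim 0.14$ eV read off from the band structure.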
The hBN substrate also significantly modifies the local properties of the
TBG/hBN superlattice. In free standing TBG (see Fig. 6), the flat band states
are mainly localized in both sublattices A and B around the AA stacking
regions. If we look at the calculated LDOS of sublattices A and B at the AAA
stacking region, as shown in Fig. 1(c), the peak in the sublattice B is lower
than that of the sublattice A in the bottom layer. This is due to the role
that hBN is playing as a substrate. The hBN substrate breaks the sublattice
symmetry in the $G_{bot}$, making the LDOS peaks in the sublattice B lower
than the sublattice A. On the contrary, the hBN substrate has negligible
influence on the top layer. The states from $G_{top}$ and $G_{bot}$ contribute
unequally to the conduction band and valence band, which is the natural result
of the breaking of the layer degeneracy. In some works, the hBN substrate
effect on the TBG is introduced via an effective periodic potential acting on
the nearest graphene layer [26, 27]. Our results confirm that this
approach is justified. We also plot the real-space wave functions of the
narrow bands. In contrast to the TBG case shown in Fig. 6, where the states in the
narrow bands are localized around the AA centers, in the presence of hBN we
observe some states localized in the ABC or ACB stacking regions. That is, a
small part of states from the C1 band are localized in the ACB region (see
labels in Fig. 1(b)), whereas some states from the C2 band are in the ABC
region. This difference is a consequence of the substrate potential, which
redistributes the charges within the moiré unit cell. The states at the higher
energy C3 are mainly localized in the ACB region of the bottom layer.
### III.2 TBG encapsulated in hBN
We now consider a tetralayer structure where, as shown in Fig. 2(a), TBG is
encapsulated by two hBN layers. As described in our previous work [12] there
are several possible stacking configurations for the tetralayer structure (see
also [20]). In this work we choose the twist angle and stacking configuration
such that the mirror symmetry between layers is recovered, which is given by
the condition $|\theta_{bot}|=|\theta_{top}|$. Therefore, we can tune the
layer degeneracy by adding or removing one of the hBN layers (as we will
describe in the following section, an electric field can also be used to tune
the layer degeneracy). It is important to mention that the layer degeneracy is
broken because the Dirac cones in each graphene layer are
affected by their nearest hBN layer. A direct consequence of recovering the
layer degeneracy is the disappearance of the splitting in the flat bands, as
we can see from the comparison between the band structure in Fig. 1(b) with
that in Fig. 2(b). The splitting disappears not only in the narrow bands
near the Fermi level, but also in the high-energy bands, indicating an
intrinsic change in the electronic structure of the encapsulated structure.
Compared with TBG, both middle bands become narrower. The gap between the
lower narrow bands has a value of around 50 meV. Moreover, as we can see from
the LDOS calculation, Fig. 2(c), the states from $G_{top}$ and $G_{bot}$
contribute equally to both narrow bands. Another noteworthy result is that the
states of both bands have different contributions from different sublattices.
That is, both the top and bottom layers have a different sublattice charge
distribution, with the LDOS peaks of sublattice B lower than those of
sublattice A. Regarding the charge density maps in Fig. 2(d), and compared to
the TBG/hBN superlattice, the states of the narrow bands become more localized
in the AAAA regions, forming a triangular shape with smaller area. Moreover,
the states of the two upper narrow bands have a quite similar localization in
real space.
## IV Optical conductivity
In the previous sections, we have shown that the hBN strongly modifies the
electronic structure of TBG. In this part, we discuss how the substrate
modifies the optical conductivity, which can be conveniently explored, for
example, by means of infrared and terahertz spectroscopies [52]. The real part
of the longitudinal optical conductivity of different TBG moiré systems
computed from Eq. (12) are shown in Fig. 3. In a graphene/hBN superlattice,
the moiré period leads to the emergence of minibands around the extra Dirac
point and the main signatures of the optical excitations are found between the
valence and conduction minibands [25]. In TBG/hBN there are multiple
transitions, as shown in Fig. 3. The black line is the optical conductivity of
pristine TBG, for which we assign
the peak around 0.03 eV (P1) to the transition between middle narrow bands.
This peak is shifted to higher energies when TBG is supported (red line) or
encapsulated (blue line) with hBN. For the encapsulated case, the shift is
larger. Peaks P2/P3 and P4 around 0.075 eV and 0.15 eV, respectively,
correspond to transitions between the narrow and high energy bands. Peaks
P2/P3 have the same behaviour as peak P1 when TBG is either supported or
encapsulated. These energy shifts in the optical conductivity may allow one to
identify the degree of alignment of hBN with TBG. In an experiment, if two
different regions or two samples have an hBN substrate they may give different
signals in the optical conductivity. If the energy difference of the maximum
peaks is large as in Fig. 3 then the hBN is modifying the band structure and
this may be a signature of a near alignment situation. If the difference
between signals is negligible, then the hBN substrate is unaligned and is not
affecting the TBG bands [12].
Figure 3: Calculated optical conductivity of TBG (black line), TBG/hBN (red
line) and hBN/TBG/hBN (blue line). The chemical potential and temperature are
$\mu=0$ and $T=300$ K, respectively. The optical conductivity has been
normalized to the universal optical conductivity of graphene $\sigma_{0}=\pi
e^{2}/2h$. Significant optical peaks P1, P2, P3 and P4 are illustrated by a
dashed rectangle. Figure 4: (a)(b)(d)(e) Band structures and density of states
of the trilayer and tetralayer structures with different electric fields,
respectively; (c) and (f) eigenstates $|\psi|^{2}$ calculated by
diagonalization of the Hamiltonian for the trilayer and tetralayer structures,
respectively. The corresponding energies of these eigenstates are illustrated
in (b) and (e). In the density plots, red is for the maximum and dark blue for
the minimum charge density.
## V The effect of a perpendicular electric field
It has been shown that a perpendicular electric field in graphene/hBN
superlattices allows the tunability of their physical properties [53], such as
the case of Bernal bilayer graphene on hBN (BG/hBN) [54], ABC stacked graphene
on hBN (TG/hBN) [55] or twisted double bilayer graphene (TDBG) [56], among
others. Interestingly, the bandwidth of the narrow bands in the first three
systems is reduced by increasing the electric field [53, 56, 57] while in the
latter, the bandwidth is increased [58, 59]. In TBG a perpendicular electric
field shifts the degeneracy of the Dirac cones [60, 61, 62] and does not open
a gap between the narrow bands. The cones are protected by
$\mathcal{C}_{2}\mathcal{T}$ which is preserved even in the presence of the
electric field [62, 63]. The gap between the narrow bands in TBG with
hBN arises because the substrate breaks $\mathcal{C}_{2}$. To introduce an
electric field in our TB model, we add an on-site potential with different
sign on each graphene layer. The electrostatic potential is given by $\Delta
V=d_{0}\mathcal{E}\cdot\mathbf{e}_{z}$, where $\mathcal{E}$ is the vertical
electric field, which is negative if its direction is opposite to
$\mathbf{e}_{z}$. Then, an extra on-site term is added to the Hamiltonian in
Eq. (5) as
$H_{V}=\pm\frac{\Delta V}{2}\ket{\bm{R_{i}}}\bra{\bm{R_{i}}}.$ (14)
Here +(-) corresponds to $G_{bot}$($G_{top}$).
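A minimal sketch of how Eq. (14) enters a tight-binding Hamiltonian (illustrative function and variable names; in the paper this term is added to the full Hamiltonian of Eq. (5)):

```python
import numpy as np

def add_interlayer_bias(H, layer, delta_v):
    """Add the on-site term of Eq. (14): +dV/2 on G_bot sites (layer == 0)
    and -dV/2 on G_top sites (layer == 1)."""
    onsite = np.where(np.asarray(layer) == 0, +delta_v / 2, -delta_v / 2)
    return H + np.diag(onsite)

# Two-site interlayer dimer with hopping t = 1: the bias renormalizes the
# spectrum to +/- sqrt(t**2 + (dV/2)**2), splitting the layer-resolved levels.
H0 = np.array([[0.0, 1.0], [1.0, 0.0]])
Hb = add_interlayer_bias(H0, layer=[0, 1], delta_v=1.0)
```

For the dimer example the biased eigenvalues are $\pm\sqrt{t^{2}+(\Delta V/2)^{2}}$, illustrating how the bias shifts states on the two layers in opposite directions.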
In pristine TBG, mirror symmetry ensures the degeneracy of the Dirac cones. In
the presence of the electric field this symmetry is broken because at each
layer an on-site potential with different sign is introduced and this is the
main source of the gap between narrow bands (see Ref. [12] for further
details). Indeed, this effect can be clearly seen in Fig. 4. In the TBG/hBN
system (top row of Fig. 4) the degeneracy of the Dirac cones (or layer
degeneracy) is broken because the hBN is acting on a single graphene layer.
Notice that the energy bands corresponding to each valley are shifted in
opposite directions. The effect of the electric field is a further shifting of
the energy bands and this effectively looks like a gap closing as shown in the
DOS in Fig. 4(b) and (e). For the encapsulated system (bottom row of Fig. 4)
we can see that the electric field shifts the energy bands, however, the
effect of the field is less drastic because the layer degeneracy is preserved
(see Fig. 2(b)). The effects of the layer degeneracy breaking can be observed
in the density maps in Fig. 4(c) and (f). As expected in TBG systems, the
charge is strongly localized around the AA centers even with an hBN substrate.
In the presence of both hBN and an electric field, the charge is unevenly
distributed between layers inducing a polarization. The latter is stronger in
the suspended situation, as shown in Fig. 4(c). We have found that the layer
polarization is a physical phenomenon resulting from the layer degeneracy
breaking. While this degeneracy can be recovered in TBG by encapsulation with
hBN, cf. Fig. 2, it is always broken by an electric field. Our results indicate
that in TBG samples suspended or encapsulated with hBN, the presence of an
electric field polarizes the charges within the layers. From time–reversal
symmetry, the band energy shift is opposite in each valley and therefore the
charge polarization is a valley phenomenon. The total charge distribution in
each layer is the same if both valleys are considered. This effect can be used
to tune the strength of the valley Hall effect or the anomalous Hall effect in
the presence of a magnetic field.
## VI Stacking dependent band topology
Theoretical studies analysing the topology and correlated effects in the
presence of a substrate usually introduce the effect of the hBN by considering
only the mass term [21, 64, 65]. While this approach is valid, in experiments
the TBG samples are supported on [66] or encapsulated [67, 68] with hBN and
this may result in different band topology. In this section we show that the
topological properties of TBG/hBN and hBN/TBG/hBN can be completely different.
In both systems, the breaking of inversion symmetry in the TB model allows for
a non-zero Berry curvature with opposite signs in each valley. Because of
time–reversal symmetry, the Berry curvature in each valley has opposite sign
and hence the total Chern number of a given band is zero. However, the
topological invariants can be defined for each valley [31, 69]. The finite
Berry curvature for a single valley is given by
$\Omega_{\vec{k},l}=2~{}\imaginary\left\\{\innerproduct{\partial_{k_{x}}\psi_{\vec{k},l}}{\partial_{k_{y}}\psi_{\vec{k},l}}\right\\},$
(15)
where $l$ is a band index with energy $E_{l}(\vec{k})$, momentum
$\vec{k}=\\{k_{x},k_{y}\\}$ and wavefunctions $\psi_{\vec{k},l}$. For the
different stacking configurations, the bands for each valley are isolated and
their Berry curvature is well defined. Therefore, we can assign a valley Chern
number, ${{\cal C}_{l}}$, to the band $l$ which is given by the integral of
the Berry curvature in the moiré Brillouin zone
${\cal
C}_{l}=\frac{1}{2\pi}\int_{\mathrm{mBZ}}d^{2}\vec{k}\Omega_{\vec{k},l}.$ (16)
We use the algorithm in Ref. [70] and the continuum model in Ref. [12] to
compute the Berry curvature. To simplify our analysis we consider only
the mass terms $\Delta_{b/t}$, acting on the bottom and
top graphene layers, respectively, which mimic an hBN substrate. We also consider
the bands only at charge neutrality, because the band topology may also depend
on the filling fraction [71]. Figure 5 displays the set of valley Chern
numbers for the systems considered in this work. We can distinguish three
topological phases with Chern numbers for each band,
$\\{\mathcal{C}_{b},\mathcal{C}_{t}\\}$, identified as $P_{1}=\\{-1,1\\}$,
$P_{2}=\\{0,0\\}$ and $P_{3}=\\{1,-1\\}$. The phases $P_{1}$ and $P_{3}$ are
the usual phases found in the presence of a single hBN layer. However, for the
encapsulated situation there are different possibilities, such as the case of
Fig. 2(b), where the corresponding phase is $P_{3}$ because the mass
terms have the same sign and magnitude ($\sim 30$ meV). Additional stacking
configurations may give different topological phases; for example, if we swap
the boron and nitrogen atoms in the stack in Fig. 2(a), the mass terms change sign
and the phase can be $P_{1}$ or $P_{2}$. Interestingly, in the encapsulated
situation, if each hBN layer is nearly aligned with its adjacent graphene layer, the
topological phase is $P_{2}$ when the mass terms have different signs and similar
magnitudes. This situation can be achieved, for example, by aligning both hBN
layers with respect to each other or by rotating them in opposite directions
by the same angle. The dominant contributions in TBG/hBN structures are the
mass terms. If the angle between hBN and its adjacent graphene layer is
increased, these terms decrease and may survive up to $3^{\circ}$ of alignment
(see Ref. [12] for further details). Therefore, Fig. 5 is a general result and
indicates the rich band topology of suspended and encapsulated samples of TBG
with hBN that can be obtained near alignment.
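The valley Chern numbers of Eqs. (15)-(16) can be computed on a discretized mBZ with the lattice algorithm of Ref. [70], in which gauge-invariant plaquette products of link variables give the Berry curvature. A minimal sketch follows; for brevity it is demonstrated on the standard two-band Qi-Wu-Zhang model rather than on the continuum model of Ref. [12] (function names are illustrative):

```python
import numpy as np

def chern_number(h_k, band=0, n=40):
    """Chern number of one band via the lattice field-strength algorithm
    (Fukui-Hatsugai-Suzuki, Ref. [70]); BZ taken as [0, 2*pi)^2."""
    ks = np.linspace(0, 2 * np.pi, n, endpoint=False)
    d = h_k(0.0, 0.0).shape[0]
    u = np.empty((n, n, d), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            u[i, j] = np.linalg.eigh(h_k(kx, ky))[1][:, band]
    c = 0.0
    for i in range(n):
        for j in range(n):
            ip, jp = (i + 1) % n, (j + 1) % n
            # gauge-invariant product of link variables around one plaquette
            U = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                 * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            c += np.angle(U)
    return round(c / (2 * np.pi))

# Qi-Wu-Zhang two-band model: |C| = 1 for 0 < |m| < 2, C = 0 for |m| > 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def qwz(m):
    return lambda kx, ky: (np.sin(kx) * sx + np.sin(ky) * sy
                           + (m + np.cos(kx) + np.cos(ky)) * sz)
```

Because every plaquette product is gauge invariant, no smooth gauge fixing of the eigenvectors is needed, and the summed field strength is quantized to an integer once the band is gapped on the grid.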
Figure 5: Topological phases of the two narrow middle bands of TBG with hBN as
a function of the mass terms. We distinguish three topological phases,
$\\{\mathcal{C}_{b},\mathcal{C}_{t}\\}$, identified as $P_{1}=\\{-1,1\\}$,
$P_{2}=\\{0,0\\}$ and $P_{3}=\\{1,-1\\}$.
## VII Conclusions
We have studied the effects of a hexagonal boron nitride substrate on the
electronic and topological properties of TBG. In particular, we have studied
TBG on hBN and TBG encapsulated with hBN. By using a real space tight-binding
method in combination with semi-classical molecular dynamics, we calculate the
band structure, DOS, LDOS, optical conductivity and analyze the stacking-
dependent topological properties of these systems. We find that the substrate
significantly modifies the electronic properties of the TBG due to the broken
$\mathcal{C}_{2}$ symmetry. Compared to free standing TBG, the narrow
bands are strongly modified and separated by a gap of around 30 meV. In
the TBG/hBN system we found that the substrate induces a layer degeneracy
breaking, which results in a splitting of the TB band structure and an uneven
distribution of the LDOS within a single layer. In the hBN-encapsulated system
we found that the layer degeneracy is recovered if the twist angles between
each graphene layer and its nearest hBN layer are of the same magnitude. In addition,
we calculated the optical conductivity of both systems and found that the peaks
corresponding to the different transitions are remarkably different in both
energy position and magnitude. Such energy shifts in the optical peaks may
provide a method to identify if TBG is aligned with the hBN substrate. Both
considered heterostructures are also strongly sensitive to a perpendicular
electric field. A direct consequence of this field is the shifting of the
energy bands which may look as a gap closing in the DOS. Interestingly, by
mapping the real space distribution of the wavefunctions we found that the
electric field polarizes the charge in each valley which can be used to tune
the strength of the valley Hall effect or the anomalous Hall effect in the
presence of a magnetic field. Finally, by calculating the valley Chern
numbers, we found that depending on the induced mass gap, different
topological phases can be obtained. Because the mass gap is a consequence of
the degree of the hBN alignment, this result, combined with the optical
conductivity peaks may give a simple methodology to identify the encapsulating
conditions in TBG/hBN samples.
## ACKNOWLEDGMENTS
This work was supported by the National Natural Science Foundation of China
(Grant No. 11774269) and the National Key R&D Program of China (Grant No.
2018YFA0305800). IMDEA Nanociencia acknowledges support from the “Severo
Ochoa” Programme for Centres of Excellence in R&D (Grant No. SEV-2016-0686).
P.A.P and F.G. acknowledge funding from the European Commission, within the
Graphene Flagship, Core 3, grant number 881603 and from grants NMAT2D
(Comunidad de Madrid, Spain), SprQuMat (Ministerio de Ciencia e Innovación,
Spain). Numerical calculations presented in this paper have been performed on
the supercomputing system in the Supercomputing Center of Wuhan University.
Figure 6: (a) Band structure and density of states, (b) LDOS of the sublattice
A and B in the AA stacking region and (c) LDOS mapping in real space for free
standing twisted bilayer graphene with $\theta=1.05^{\circ}$. The LDOS mappings
are obtained via the TBPM. The corresponding energies of the
eigenstates are illustrated in the DOS.
## Appendix A The electronic properties of free standing twisted bilayer
graphene
In this section, we briefly describe the properties of pristine TBG. The
hopping parameters in Eq. (10) are given such that the “magic” angle is at
$1.21^{\circ}$. Figure 6(a) shows the band structure of TBG with a twist angle
$\theta_{tbg}=1.05^{\circ}$. The DOS displays the van Hove singularities (vHs)
due to the two narrow bands. Figure 6(b) shows that the contribution of each
sublattice to the LDOS is identical due to the preserved sublattice, layer and
valley symmetries. In Fig. 6(c) we show the LDOS maps in real space for
energies at the vHs. They have the familiar “fidget-spinner” shape [63] with
the states localized around the AA region.
## References
* Cao _et al._ [2018a] Y. Cao, V. Fatemi, A. Demir, S. Fang, S. L. Tomarken, J. Y. Luo, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, R. C. Ashoori, and P. Jarillo-Herrero, Nature 556, 80 (2018a).
* Cao _et al._ [2018b] Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Nature 556, 43 (2018b).
* Carr _et al._ [2017] S. Carr, D. Massatt, S. Fang, P. Cazeaux, M. Luskin, and E. Kaxiras, Physical Review B 95 (2017).
* Bistritzer and MacDonald [2011] R. Bistritzer and A. H. MacDonald, PNAS 108, 12233 (2011).
* Zhang _et al._ [2020] Z. Zhang, Y. Wang, K. Watanabe, T. Taniguchi, K. Ueno, E. Tutuc, and B. J. LeRoy, Nature Physics 16, 1093 (2020).
* Zhan _et al._ [2020] Z. Zhan, Y. Zhang, P. Lv, H. Zhong, G. Yu, F. Guinea, J. Á. Silva-Guillén, and S. Yuan, Phys. Rev. B 102 (2020).
* Kennes _et al._ [2020] D. M. Kennes, L. Xian, M. Claassen, and A. Rubio, Nat. Commun. 11 (2020).
* Woods _et al._ [2021] C. R. Woods, P. Ares, H. Nevison-Andrews, M. J. Holwill, R. Fabregas, F. Guinea, A. K. Geim, K. S. Novoselov, N. R. Walet, and L. Fumagalli, Nat. Commun. 12 (2021).
* Walet and Guinea [2021] N. R. Walet and F. Guinea, Phys. Rev. B 103 (2021).
* Cao _et al._ [2016] T. Cao, Z. Li, D. Y. Qiu, and S. G. Louie, Nano Lett. 16, 5542 (2016).
* Wang _et al._ [2019] L. Wang, S. Zihlmann, M.-H. Liu, P. Makk, K. Watanabe, T. Taniguchi, A. Baumgartner, and C. Schönenberger, Nano Lett. 19, 2371 (2019).
* Long _et al._ [2022] M. Long, P. A. Pantaleón, Z. Zhan, F. Guinea, J. Á. Silva-Guillén, and S. Yuan, Npj Comput. Mater. 8 (2022).
* Shi _et al._ [2021] J. Shi, J. Zhu, and A. H. MacDonald, Phys. Rev. B 103, 075122 (2021).
* Cea _et al._ [2020a] T. Cea, P. A. Pantaleón, and F. Guinea, Phys. Rev. B 102, 155136 (2020a).
* Shin _et al._ [2021a] J. Shin, Y. Park, B. L. Chittari, J.-H. Sun, and J. Jung, Phys. Rev. B 103, 075423 (2021a).
* Mao and Senthil [2021] D. Mao and T. Senthil, Phys. Rev. B 103, 115110 (2021).
* Cea _et al._ [2020b] T. Cea, P. A. Pantaleón, and F. Guinea, Phys. Rev. B 102, 155136 (2020b).
* Chittari _et al._ [2019] B. L. Chittari, G. Chen, Y. Zhang, F. Wang, and J. Jung, Phys. Rev. Lett. 122, 016401 (2019).
* Mukai _et al._ [2021] F. Mukai, K. Horii, R. Ebisuoka, K. Watanabe, T. Taniguchi, and R. Yagi, Commun. Phys. 4 (2021).
* Shin _et al._ [2021b] J. Shin, Y. Park, B. L. Chittari, J.-H. Sun, and J. Jung, Phys. Rev. B 103, 075423 (2021b).
* Zhang _et al._ [2019] Y.-H. Zhang, D. Mao, and T. Senthil, Phys. Rev. Res. 1 (2019).
* Plimpton [1995] S. Plimpton, J. Comput. Phys. 117, 1 (1995).
* Brenner _et al._ [2002] D. W. Brenner, O. A. Shenderova, J. A. Harrison, S. J. Stuart, B. Ni, and S. B. Sinnott, J. Phys. Condens. Matter 14, 783 (2002).
* Kolmogorov and Crespi [2005] A. N. Kolmogorov and V. H. Crespi, Phys. Rev. B 71, 235415 (2005).
* Slotman _et al._ [2015] G. J. Slotman, M. M. van Wijk, P.-L. Zhao, A. Fasolino, M. I. Katsnelson, and S. Yuan, Phys. Rev. Lett. 115, 186801 (2015).
* Moon and Koshino [2014] P. Moon and M. Koshino, Phys. Rev. B 90 (2014).
* San-Jose _et al._ [2014a] P. San-Jose, A. Gutiérrez-Rubio, M. Sturla, and F. Guinea, Phys. Rev. B 90, 075428 (2014a).
* Jung _et al._ [2017] J. Jung, E. Laksono, A. M. Dasilva, A. H. Macdonald, M. Mucha-Kruczyński, and S. Adam, Phys. Rev. B 96, 085442 (2017).
* Sachs _et al._ [2011] B. Sachs, T. O. Wehling, M. I. Katsnelson, and A. I. Lichtenstein, Phys. Rev. B 84, 195414 (2011).
* Kindermann _et al._ [2012] M. Kindermann, B. Uchoa, and D. L. Miller, Phys. Rev. B 86, 115415 (2012).
* San-Jose _et al._ [2014b] P. San-Jose, A. Gutiérrez-Rubio, M. Sturla, and F. Guinea, Phys. Rev. B 90, 115152 (2014b).
* Wallbank _et al._ [2013] J. R. Wallbank, A. A. Patel, M. Mucha-Kruczyński, A. K. Geim, and V. I. Fal’Ko, Phys. Rev. B 87, 245408 (2013).
* Lin and Ni [2019] X. Lin and J. Ni, Phys. Rev. B 100, 195413 (2019).
* Ochoa _et al._ [2011] H. Ochoa, E. V. Castro, M. I. Katsnelson, and F. Guinea, Phys. Rev. B 83 (2011).
* Yuan _et al._ [2010] S. Yuan, H. D. Raedt, and M. I. Katsnelson, Phys. Rev. B 82 (2010).
* Li _et al._ [2022] Y. Li, Z. Zhan, Y. Li, and S. Yuan, arXiv preprint arXiv:2209.00806 (2022).
* TBPLaS [2022] TBPLaS, http://www.tbplas.net/index.html (2022).
* Haydock _et al._ [1972] R. Haydock, V. Heine, and M. Kelly, J. Phys. C: Solid State Phys. 5, 2845 (1972).
* Shi _et al._ [2020] H. Shi, Z. Zhan, Z. Qi, K. Huang, E. van Veen, J. Á. Silva-Guillén, R. Zhang, P. Li, K. Xie, H. Ji, M. I. Katsnelson, S. Yuan, S. Qin, and Z. Zhang, Nat. Commun. 11, 1 (2020).
* Jung _et al._ [2015] J. Jung, A. M. DaSilva, A. H. MacDonald, and S. Adam, Nat. Commun. 6 (2015).
* Hunt _et al._ [2013] B. Hunt, J. D. Sanchez-Yamagishi, A. F. Young, M. Yankowitz, B. J. LeRoy, K. Watanabe, T. Taniguchi, P. Moon, M. Koshino, P. Jarillo-Herrero, and R. C. Ashoori, Science 340, 1427 (2013).
* Song _et al._ [2013] J. C. W. Song, A. V. Shytov, and L. S. Levitov, Phys. Rev. Lett. 111, 266801 (2013).
* Amet _et al._ [2013] F. Amet, J. R. Williams, K. Watanabe, T. Taniguchi, and D. Goldhaber-Gordon, Phys. Rev. Lett. 110, 216601 (2013).
* Gorbachev _et al._ [2014] R. V. Gorbachev, J. C. W. Song, G. L. Yu, A. V. Kretinin, F. Withers, Y. Cao, A. Mishchenko, I. V. Grigorieva, K. S. Novoselov, L. S. Levitov, and A. K. Geim, Science 346, 448 (2014).
* Chen _et al._ [2014] Z.-G. Chen, Z. Shi, W. Yang, X. Lu, Y. Lai, H. Yan, F. Wang, G. Zhang, and Z. Li, Nat. Commun. 5, 4461 (2014).
* Yankowitz _et al._ [2014] M. Yankowitz, J. Xue, and B. J. LeRoy, J. Phys. Condens. Matter 26, 303201 (2014).
* Wong _et al._ [2015] D. Wong, Y. Wang, J. Jung, S. Pezzini, A. M. DaSilva, H.-Z. Tsai, H. S. Jung, R. Khajeh, Y. Kim, J. Lee, S. Kahn, S. Tollabimazraehno, H. Rasool, K. Watanabe, T. Taniguchi, A. Zettl, S. Adam, A. H. MacDonald, and M. F. Crommie, Phys. Rev. B 92, 155409 (2015).
* Lee _et al._ [2016] M. Lee, J. R. Wallbank, P. Gallagher, K. Watanabe, T. Taniguchi, V. I. Fal’ko, and D. Goldhaber-Gordon, Science 353, 1526 (2016).
* Wang _et al._ [2016] E. Wang, X. Lu, S. Ding, W. Yao, M. Yan, G. Wan, K. Deng, S. Wang, G. Chen, L. Ma, J. Jung, A. V. Fedorov, Y. Zhang, G. Zhang, and S. Zhou, Nat. Phys. 12, 1111 (2016).
* Yankowitz _et al._ [2018] M. Yankowitz, J. Jung, E. Laksono, N. Leconte, B. L. Chittari, K. Watanabe, T. Taniguchi, S. Adam, D. Graf, and C. R. Dean, Nature 557, 404 (2018).
* Kim _et al._ [2018] H. Kim, N. Leconte, B. L. Chittari, K. Watanabe, T. Taniguchi, A. H. Macdonald, J. Jung, and S. Jung, Nano Lett. 18, 7732 (2018).
* Ni _et al._ [2015] G. X. Ni, H. Wang, J. S. Wu, Z. Fei, M. D. Goldflam, F. Keilmann, B. Özyilmaz, A. H. C. Neto, X. M. Xie, M. M. Fogler, and D. N. Basov, Nat. Mater. 14, 1217 (2015).
* Pantaleón _et al._ [2021] P. A. Pantaleón, T. Cea, R. Brown, N. R. Walet, and F. Guinea, 2D Mater. 8, 044006 (2021).
* Moriyama _et al._ [2019] S. Moriyama, Y. Morita, K. Komatsu, K. Endo, T. Iwasaki, S. Nakaharai, Y. Noguchi, Y. Wakayama, E. Watanabe, D. Tsuya, _et al._ , arXiv preprint arXiv:1901.09356 (2019).
* Chen _et al._ [2019] G. Chen, A. L. Sharpe, P. Gallagher, I. T. Rosen, E. J. Fox, L. Jiang, B. Lyu, H. Li, K. Watanabe, T. Taniguchi, J. Jung, Z. Shi, D. Goldhaber-Gordon, Y. Zhang, and F. Wang, Nature 572, 215 (2019).
* He _et al._ [2020] M. He, Y. Li, J. Cai, Y. Liu, K. Watanabe, T. Taniguchi, X. Xu, and M. Yankowitz, Nat. Phys. 17, 26 (2020).
* Carr _et al._ [2020] S. Carr, C. Li, Z. Zhu, E. Kaxiras, S. Sachdev, and A. Kruchkov, Nano Lett. 20, 3030 (2020).
* Lei _et al._ [2021] C. Lei, L. Linhart, W. Qin, F. Libisch, and A. H. MacDonald, Phys. Rev. B 104, 035139 (2021).
* Lopez-Bezanilla and Lado [2020] A. Lopez-Bezanilla and J. L. Lado, Phys. Rev. Res. 2 (2020).
* Tsim _et al._ [2020] B. Tsim, N. N. T. Nam, and M. Koshino, Phys. Rev. B 101, 125409 (2020).
* San-Jose and Prada [2013] P. San-Jose and E. Prada, Phys. Rev. B 88, 121408 (2013).
* Moon _et al._ [2014] P. Moon, Y.-W. Son, and M. Koshino, Phys. Rev. B 90, 155427 (2014).
* Po _et al._ [2018] H. C. Po, L. Zou, A. Vishwanath, and T. Senthil, Phys. Rev. X. 8 (2018).
* Repellin and Senthil [2020] C. Repellin and T. Senthil, Phys. Rev. Research 2, 023238 (2020).
* Bultinck _et al._ [2020] N. Bultinck, S. Chatterjee, and M. P. Zaletel, Phys. Rev. Lett. 124, 166601 (2020).
* Serlin _et al._ [2019] M. Serlin, C. L. Tschirhart, H. Polshyn, Y. Zhang, J. Zhu, K. Watanabe, T. Taniguchi, L. Balents, and A. F. Young, Science 367, 900 (2019).
* Sharpe _et al._ [2019] A. L. Sharpe, E. J. Fox, A. W. Barnard, J. Finney, K. Watanabe, T. Taniguchi, M. A. Kastner, and D. Goldhaber-Gordon, Science 365, 605 (2019).
* Ma _et al._ [2020] C. Ma, Q. Wang, S. Mills, X. Chen, B. Deng, S. Yuan, C. Li, K. Watanabe, T. Taniguchi, X. Du, F. Zhang, and F. Xia, Nano Lett. 20, 6076 (2020).
* Song _et al._ [2015] J. C. Song, P. Samutpraphoot, and L. S. Levitov, PNAS 112, 10879 (2015).
* Fukui _et al._ [2005] T. Fukui, Y. Hatsugai, and H. Suzuki, JPSJ 74, 1674 (2005).
* Pantaleón _et al._ [2022] P. A. Pantaleón, V. T. Phong, G. G. Naumis, and F. Guinea, Phys. Rev. B 106 (2022).
# Sensing the position of a single scatterer in an opaque medium by mutual
scattering
Minh Duy Truong, Ad Lagendijk, and Willem L. Vos Complex Photonic
Systems (COPS), MESA+ Institute for Nanotechnology, University of Twente,
P.O. Box 217, 7500 AE Enschede, The Netherlands * Email<EMAIL_ADDRESS>
###### Abstract
We investigate the potential of mutual scattering, i.e., light scattering with
multiple properly phased incident beams, as a method to extract structural
information from inside an opaque object. In particular, we study how
sensitively the displacement of a single scatterer is detected in an optically
dense sample of many (up to $N=1000$) similar scatterers. By performing exact
calculations on ensembles of many point scatterers, we compare the mutual
scattering (from two beams) and the well-known differential cross-section
(from one beam) in response to the change of location of a single dipole
inside a configuration of randomly distributed similar dipoles. Our numerical
examples show that mutual scattering provides speckle patterns with an angular
sensitivity at least 10 times higher than the traditional one-beam techniques.
By studying the “susceptivity” of mutual scattering, we demonstrate the
possibility to determine the original depth relative to the incident surface
of the displaced dipole in an opaque sample. Furthermore, we show that mutual
scattering offers a new approach to determine the complex scattering
amplitude.
Journal: oe. Article type: Research Article.
Website: https://nano-cops.com/
## 1 Introduction
Traditional scattering experiments are associated with sending a single beam
of waves to a target [1, 2, 3, 4]. If the target interacts only weakly with
the incoming waves, as is typical for X-ray and light scattering [1, 2, 3, 4],
the detailed structure of the target including the positions of constituting
particles can be retrieved from the characteristics of the scattered waves. If
a medium interacts more strongly with the waves [5, 6], it becomes
increasingly opaque, hence the usual single scattering approaches break down,
and only limited structural information can be retrieved by methods such as
diffusion wave spectroscopy [7, 8, 9]. Beyond the traditional case of a single
incident wave [1, 2, 3, 4], the recent development of multiple-beam techniques
such as wavefront shaping has opened the new potential for research of
strongly interacting opaque samples [10, 11, 12].
Recently, it was realized that interference of multiple incident beams gives
rise to a new scattering phenomenon, called mutual scattering [13]. From a
generalized optical theorem, it follows that the total extinction of even only
two incident waves is either greatly enhanced by up to $100\%$ compared to the
usual single-beam extinction, which is called mutual extinction, or greatly
reduced, which is called mutual transparency. Subsequently, the experimental
demonstration of mutual scattering has been made for biological, silicon, and
carbon samples [14].
Following the theoretical and experimental demonstration, we explore here a
new practical application of mutual scattering, namely: how sensitive is
mutual scattering to detecting the displacement of a single nanoparticle deep
inside a sample with an ensemble of many (up to 1000) similar nanoparticles?
In other words, the nanoparticle we wish to track is not a tracer particle
with properties different from the ensemble, like a dyed nanosphere standing
out in a sea of undyed ones [15]. The underlying hypothesis is that the cross-
interference of two beams is more sensitive to changes of a scatterer located
deep within the sample than conventional scattering methods. An enhanced
sensitivity would boost the potential of mutual scattering for applications in
biomedical imaging inside tissue, in semiconductor metrology, in
communications for non-line-of-sight links [16], or in the study of the
structure of free-form samples [17, 18].
In addition, we will also show that mutual scattering allows one to extract
the modulus and imaginary part of the scattering amplitude in an experiment.
By combining this new information with the traditional one-beam scattering, we
obtain the complex scattering amplitude at all angles, while this was up to
now only possible for forward scattering [19]. In other words, mutual
scattering of multiple beams is an interferometric technique to measure both
the amplitude and phase of the complex scattering amplitudes.
## 2 Schematic of mutual scattering
Figure 1: Schematic of mutual scattering: the sample (white box) at the
centre is illuminated from the left by two incident (purple and orange) plane
waves. The interference between the incident plane waves and the scattered
spherical waves of different colors, observed at detectors $\mathcal{A}$ or
$\mathcal{B}$, gives rise to the mutual scattering phenomena.
To study the characteristics of mutual scattering, we introduce the sample
geometry consisting of a box-like sample as shown in Fig. 1. The sample has
thus finite support in three dimensions (3D) [20]. We illuminate the left side
of the sample with two incident beams with wave vectors
$\mathbf{k}_{\text{in},1}$ and $\mathbf{k}_{\text{in},2}$, respectively, and
with equal amplitudes $A_{1}=A_{2}=A$. The incident plane is chosen to be
perpendicular to the $x$-axis. The angles of incidence of these two beams,
denoted $\gamma_{1}$ and $\gamma_{2}$, respectively, are defined as the angles
between the $z$-axis and $\mathbf{k}_{\text{in},1}$ or
$\mathbf{k}_{\text{in},2}$. The two incident beams subtend an angle
$\gamma=\gamma_{1}+\gamma_{2}$. After exiting the sample on the right, having undergone mutual scattering, the two beams propagate to two detectors located at $\mathcal{A}$ and $\mathcal{B}$. Detector $\mathcal{A}$
measures the mutual scattering $F^{\text{MS}}_{1}(\gamma)$ along
$\mathbf{k}_{\text{in},1}$, and detector $\mathcal{B}$ measures
$F^{\text{MS}}_{2}(\gamma)$, see Appendix 8 for details.
To model the optical properties of opaque objects, we consider an ensemble of
$N_{\text{dipole}}$ scatterers, shown in Fig. 2(a,b), that are randomly
distributed in a rectangular box with a volume $V=4\times 4\times 4$
$\lambda^{3}$, with $\lambda$ the wavelength of light. The scatterers are
dipoles, whose optical properties are described in Appendix 8.4, where their
interaction strength with light [5, 6] is illustrated by the volume of the
polarizability spheres in Figs. 2(a,b). The sample in Fig. 2(a) holds $N=1000$
scatterers and has a substantial photonic strength that corresponds to a mean
free path less than the wavelength $l_{\text{scat}}/\lambda=0.2011$ or less
than the sample dimensions $V^{1/3}/l_{\text{scat}}=20$, typical of a highly
opaque sample. The sample in Fig. 2(b) interacts less strongly with light, as
it has $l_{\text{scat}}/\lambda=0.8044$ or $V^{1/3}/l_{\text{scat}}=5$ and is
thus still opaque.
Figure 2: Schematic of the numerical samples: (a) A cube with
$N_{\text{dipole}}=1000$ dipoles, and (b) a cube with $N_{\text{dipole}}=250$
dipoles, whose polarizability is shown by the extent of the blue spheres. The
target dipole (red sphere) at position $\mathbf{r}_{0}$ is moved along a
chosen direction, for example, the blue line shows the movement of the red
scatterer in the $x$-direction, while the positions of all other
($N_{\text{dipole}}-1$) scatterers are preserved.
To compute the scattering amplitude of the sample, we evaluate the T-matrix of the many-scatterer system exactly, see Appendix 8.4 for details. The scattering amplitude $f$ is obtained from
$\displaystyle
f(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}})=-\dfrac{1}{4\pi}T(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}}),$
(1)
where $T(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}})$ is the transition
matrix or T-matrix in scattering theory [21].
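As an illustration of this machinery, the coupled-dipole (Foldy-Lax) evaluation can be sketched for scalar waves as follows; the scalar model, polarizability normalization, and function names are assumptions of this sketch, not the authors' implementation (their dipole model is defined in Appendix 8.4):

```python
import numpy as np

def scattering_amplitude(positions, alpha, k_in, k_out, k=2 * np.pi):
    """Sketch of a scalar coupled-dipole (Foldy-Lax) T-matrix.

    positions: (N, 3) dipole coordinates in units of wavelength (lambda = 1)
    alpha:     scalar polarizability (hypothetical normalization)
    k_in/out:  incident/outgoing wave vectors of magnitude k
    Returns f(k_out, k_in) = -T(k_out, k_in) / (4*pi), cf. Eq. (1).
    """
    n = len(positions)
    # scalar free-space Green's function coupling every dipole pair
    g = np.zeros((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(positions[i] - positions[j])
                g[i, j] = np.exp(1j * k * r) / (4 * np.pi * r)
    # multiple scattering resummed to all orders: (1 - alpha*G)^-1 * alpha
    m = np.linalg.solve(np.eye(n) - alpha * g, alpha * np.eye(n))
    phase_in = np.exp(1j * positions @ k_in)     # incident plane wave at each dipole
    phase_out = np.exp(-1j * positions @ k_out)  # projection on the outgoing wave
    return -(phase_out @ m @ phase_in) / (4 * np.pi)
```

Because the resummed matrix is symmetric, this model obeys the reciprocity relation $f(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}})=f(-\mathbf{k}_{\text{in}},-\mathbf{k}_{\text{out}})$ invoked in Sec. 5.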
We note in passing that when the sample is opaque and significantly larger than the
wavelength, the scattering amplitude is accurately described with Fraunhofer
diffraction theory [22], as shown in Ref. [14]. However, since the sample is
considered to be impenetrable in this description, this approximation is not
helpful to study the internal structure.
## 3 Results I - Mutual scattering of static configuration
We study the properties of mutual scattering in response to varying the static
configuration of the sample. We start with an ensemble of
$N_{\text{dipole}}=250$ dipoles randomly distributed in a rectangular box and
create new configurations by randomly changing the positions of all dipoles.
### 3.1 Symmetric incident beams
Based on the theory in Eq. (8.2), we find that the mutual scattering depends
on the phase difference $(\phi_{1}-\phi_{2})$ between two beams. By tuning the
phase difference, we obtain the maximum and minimum mutual scattering for each
angle $\gamma$ between the two incident beams.
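A hypothetical numerical sketch of this phase scan (the explicit form of the interference term assumed below is an illustration; the exact expression is derived in Appendix 8):

```python
import numpy as np

def extremal_mutual_scattering(f_forward, c_interference, n_phase=720):
    """Scan the relative phase (phi_1 - phi_2) between the two beams.

    Assumes the mutual scattering varies as
        F(dphi) = f_forward + Re[c_interference * exp(1j * dphi)],
    so the extrema bracket the forward term symmetrically.
    """
    dphi = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    f_ms = f_forward + np.real(c_interference * np.exp(1j * dphi))
    return f_ms.max(), f_ms.min()
```

Consistent with Eq. (3), the average of the two returned extrema recovers the forward-scattering term.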
We start with the symmetric case, where the two incident beams with mutual
angle $\gamma$ symmetrically illuminate the sample, such that the angles of
incidence of both beam 1 and beam 2 are equal to $\gamma/2$. For each sample
configuration, the mutual scattering amplitudes $F^{\text{MS}}_{1}$ and
$F^{\text{MS}}_{2}$ are not equal because a single random structure is always
asymmetric. If we average the mutual scattering over many configurations,
however, the statistics of the mutual scattering of beam 1 and beam 2 and the
mutual scattering of both beams will be the same:
$\displaystyle\langle F^{\text{MS}}_{1}\rangle=\langle
F^{\text{MS}}_{2}\rangle=\langle F^{\text{MS}}_{12}\rangle,$ (2)
where the notation $\langle x\rangle$ represents the average (mean) value of
$x$ calculated over $N_{\text{realization}}$ realizations. Therefore, we will
show only the result of $F^{\text{MS}}_{12}$ in this symmetric case.
Fig. 3 shows the statistics of the maximum and minimum mutual scattering,
denoted by $F^{\text{MS}}_{\text{max}}$ and $F^{\text{MS}}_{\text{min}}$,
respectively, as a function of angle $\gamma$ between the two incident beams
for $N_{\text{dipole}}=250$. The results are calculated based on
$N_{\text{realization}}=1000$ realizations of different configurations of the
location of scatterers.
Figure 3: Mutual scattering $F^{\text{MS}}_{12}$ for $N_{\text{dipole}}=250$
with respect to angle $\gamma$ between two symmetric incident beams. The
results are averaged over $N_{\text{realization}}=1000$ realizations of
different configurations of the location of dipoles. (a) The blue and orange
lines correspond to the maximum and minimum scattering obtained from phase
variations, respectively. The average self-extinction containing only the sum
of the forward scattering is given by the black dotted line. (b) The
percentile distribution of the maximum and minimum mutual scattering.
Fig. 3(a) shows both the average value of the maximum $\langle
F^{\text{MS}}_{\text{max}}\rangle$ and minimum mutual scattering $\langle
F^{\text{MS}}_{\text{min}}\rangle$ over all the realizations as functions of
angle $\gamma$. The average value of the forward-scattering of both beams
$\langle F^{\text{forward}}_{12}(\gamma)\rangle$ is given by
$\displaystyle\langle F^{\text{forward}}_{12}(\gamma)\rangle=\dfrac{\langle
F^{\text{MS}}_{\text{max}}(\gamma)\rangle+\langle
F^{\text{MS}}_{\text{min}}(\gamma)\rangle}{2}.$ (3)
At $\gamma=0\degree$, the forward scattering equals 1; it increases gradually and peaks at $\gamma=90\degree$, when the incidence angles of both beams equal $45\degree$. The average maximum and minimum mutual scattering, i.e.,
$\langle F^{\text{MS}}_{\text{max}}(\gamma)\rangle$ and $\langle
F^{\text{MS}}_{\text{min}}(\gamma)\rangle$, have a shape like a sine cardinal
function near $\langle F^{\text{forward}}_{12}(\gamma)\rangle$. At small
angles, the mutual scattering is strong, with modulations up to 100%. The
mutual scattering quickly decreases with increasing angle, being up to 20% at
$\gamma=20\degree$ and then further decreasing. The maximum and minimum mutual
scattering vary according to the distribution of scatterers inside the box.
The statistics of that variation are shown in Fig. 3(b), notably the
50-percentile and 90-percentile distribution of both the maximum and minimum
mutual scattering. It is apparent that the mutual scattering amplitudes vary
strongly with varying configurations. For example, at $\gamma=30\degree$,
there is a $90\%$ chance out of 1000 realizations that the value of the
maximum mutual scattering is found in the range between 1 and 1.2. The
variations increase with increasing angle, such that at angles in excess of
$\gamma=60\degree$ the variation ranges of maximum and minimum start to
overlap.
Figure 4: 15-bin percentage histogram of maximum (blue) and minimum (orange)
mutual scattering of symmetric incident beams for $N_{\text{dipole}}=250$ and
$N_{\text{realization}}=1000$ at angle $\gamma=30\degree$. The black circle in
the centre of the double-headed arrow represents the mean value, and the
standard deviation $s$ of the maximum and minimum mutual scattering over all
realizations is indicated by the black double-headed arrows.
The mutual scattering statistics are more readily apparent at a constant angle
as shown in Fig. 4(a,b) for $\gamma=30\degree$, which shows the histograms of
the maxima and minima for symmetric incident beams for $N_{\text{dipole}}=250$
and $N_{\text{realization}}=1000$. In this representation, it is apparent that
both the maximum and minimum mutual scattering amplitudes show considerable
variations over all realizations. In extreme realizations, the maximum and minimum mutual scattering can both take the value 1.0, in which case the mutual scattering effect is completely washed out. When averaged over all realizations, however, there is significant mutual scattering, with mean values of 1.1 (maximum) and 0.9 (minimum), a difference that exceeds the indicated standard deviations. Furthermore, Fig. 3(b)
and Fig. 4(a,b) show that the probability distributions of both maximum and
minimum mutual scattering at a fixed angle $\gamma$ are neither symmetric nor
normally distributed. The maximum scattering is right-skewed, while the
minimum is left-skewed.
Figure 5: (a) Inverse standard deviation $s^{-1}$ of the distribution of the
maximum mutual scattering of two symmetric incident beams versus the number of
dipoles $N_{\text{dipole}}$. The results are averaged over
$N_{\text{realization}}=50$ realizations. (b) Inverse standard error
$\hat{s}^{-1}$ of the distribution of the maximum mutual scattering of two
symmetric incident beams versus the number of realizations
$N_{\text{realization}}$. The results are computed for a given number of
dipole scatterers $N_{\text{dipole}}=250$. The black slopes in the bottom
right corners of (a,b) indicate three different power laws ($y\propto x^{k}$
where $k=0,0.5,$ and $1$).
Figure 5(a) shows the inverse standard deviation as a function of the number
of dipoles, and Figure 5(b) shows the inverse standard error of the mean
$\hat{s}$ versus the number of realizations, for three representative angles
($\gamma=30\degree,90\degree$, and $180\degree$). Naively, we expect that an
increase in the number of dipoles, corresponding to an increase in the number
of scattering events, would increase the standard deviation of mutual
scattering. However, the inverse standard deviation overall increases with the
number of dipoles with power between 0.5 and 1.0 (Fig. 5(a)). On the other
hand, as a function of the number of realizations, all three curves of the
inverse standard error $\hat{s}^{-1}\coloneqq s^{-1}\,N_{\text{realization}}^{1/2}$ in Fig. 5(b) are closely consistent with
a power 0.5. This implies that the standard deviation $s$ is almost
independent of the number of realizations.
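The distinction between $s$ and $\hat{s}$ can be illustrated with synthetic data (Gaussian samples here, purely for illustration; the actual distributions in Fig. 4 are skewed):

```python
import numpy as np

def inverse_standard_error(samples):
    """Inverse standard error of the mean, s_hat^-1 = s^-1 * N^(1/2),
    the quantity plotted in Fig. 5(b); s is the sample standard deviation."""
    s = np.std(samples, ddof=1)
    return np.sqrt(len(samples)) / s

# synthetic Gaussian "realizations", purely to illustrate the scaling
rng = np.random.default_rng(42)
observable = 1.0 + 0.1 * rng.standard_normal(1600)
```

Quadrupling $N_{\text{realization}}$ roughly doubles $\hat{s}^{-1}$ while leaving $s$ essentially unchanged, which is the power-0.5 behaviour seen in Fig. 5(b).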
The standard deviation $s$ of mutual scattering in Fig. 5(a), and the standard
error of the mean $\hat{s}$ of mutual scattering in Fig. 5(b), increase as the
angle between the two incident beams increases. Indeed, the blue line (for
$\gamma=30\degree$) is always above the orange line (for $\gamma=90\degree$),
which is, in turn, higher than the green line (for $\gamma=180\degree$). This
is consistent with Fig. 3(b), where mutual scattering fluctuates with larger
amplitude at larger angles.
Fig. 5(a) shows that as the number of scatterers, and hence their density, increases, the distribution of mutual scattering becomes statistically less dispersive. The mutual scattering at high density is then
less sensitive to the internal configurations of the sample. While the density
of the scatterers increases, the sample becomes increasingly opaque and the
mutual scattering converges to its average values, that is, the blue and
orange lines in Fig. 3(b) converge, with a sine cardinal shape. Therefore, we
associate the sine cardinal behaviour of the average value of mutual
scattering with the external shape of the sample. At the same time, Fig. 3(b)
shows that the larger the angle $\gamma$, the wider the distribution of
maximum and minimum mutual scattering, corresponding to blue and orange zones
in Fig. 3(b), are. Consequently, as the angle $\gamma$ increases, the mutual
scattering is more influenced by the internal structure of the sample, and the
sine cardinal shape of mutual scattering gradually becomes fainter.
Summarizing this section, from a practical point of view, the mutual
scattering at large angles holds information on the internal structure of the
sample, and vice versa the mutual scattering at small angles characterizes a
sample’s external shape.
### 3.2 Asymmetric incident beams
For the case of asymmetric incident beams, the incoming direction of the first
beam $\mathbf{k}_{\text{in},1}$ is fixed. The angles of incidence of the first and second beams are no longer equal and are denoted as $\gamma_{1}=0\degree$ and $\gamma_{2}=\gamma$, respectively. As a result, the symmetry of mutual
scattering between beam 1 and beam 2 no longer holds. For example, the
difference between the mutual scattering of beam 1 ($F_{1}^{\text{MS}}$
detected at the sensor $\mathcal{A}$) and beam 2 ($F_{2}^{\text{MS}}$ detected
at the sensor $\mathcal{B}$) is illustrated by Fig. 6(a,b).
Figure 6: (a) Mutual scattering of the first beam $F^{\text{MS}}_{1}$ from two
asymmetric incident beams with respect to the angle ($\gamma$) for
$N_{\text{dipole}}=250$ and $N_{\text{realization}}=1000$. (b) Mutual
scattering of the second beam $F^{\text{MS}}_{2}$.
The most obvious difference between Fig. 6(a) and Fig. 6(b) is the symmetry of
Fig. 6(a). In particular, the average lines of the maximum and minimum mutual
scattering are perfectly symmetrical about the black dash-dotted line (the
forward scattering line $\langle F^{\text{forward}}_{1}(\gamma)\rangle$),
which has a value of 1 at all angles $\gamma$. This is explained by the fact
that the angle of incidence of the first beam is $\gamma_{1}=0\degree$. On the
other hand, for beam 2 in Fig. 6(b), the forward scattering line $\langle
F^{\text{forward}}_{2}(\gamma)\rangle$ reaches local maxima at $\gamma_{2}=45\degree$ and $\gamma_{2}=135\degree$.
## 4 Results II - Comparison between two-beam mutual scattering and one-beam
techniques
The goal of this section is to compare the two-beam mutual scattering and the
traditional one-beam scattering. If the second beam in Fig. 1 is turned off
and the incoming direction of the first beam $\mathbf{k}_{\text{in},1}$ is
fixed at $0\degree$ ($\gamma_{1}=0\degree$), we have the conventional one-beam
experiment. Then, the sensor at $\mathcal{B}$ in Fig. 1 only detects the
current of the field scattered from direction $\mathbf{k}_{\text{in},1}$ into
direction $\mathbf{k}_{\text{in},2}$. This scattered current can be considered
as the angular “speckle” of the object [23], which is proportional to the
differential cross-section from direction $\mathbf{k}_{\text{in},1}$ into
direction $\mathbf{k}_{\text{in},2}$:
$\displaystyle\dfrac{d\sigma}{d\Omega}(\gamma)\equiv\dfrac{d\sigma}{d\Omega}(\mathbf{k}_{\text{in},2},\mathbf{k}_{\text{in},1})=|f(\mathbf{k}_{\text{in},2},\mathbf{k}_{\text{in},1})|^{2},$
(4)
where $f(\mathbf{k}_{\text{in},2},\mathbf{k}_{\text{in},1})$ is the scattering
amplitude of the object. On the other hand, the mutual scattering of the
second beam $F^{\text{MS}}_{2}$ contains the interference between the incident
field in the direction $\mathbf{k}_{\text{in},2}$ and the field scattered from
direction $\mathbf{k}_{\text{in},1}$ into direction
$\mathbf{k}_{\text{in},2}$.
Fig. 7(top, bottom) shows the comparison between the one-beam differential
cross-section and the maximum mutual scattering for a fixed “reference”
configuration of 250 dipoles. It is worth noting that the two reference blue lines in Fig. 7(top, bottom) tend to fluctuate up and down similarly at the same angles, but with different magnitudes. As is apparent in Fig. 7(bottom), when the angle $\gamma$ between the two beams increases, the differential cross-section $d\sigma/d\Omega$ decreases sharply and varies in the range from $10^{-4}$ to $10^{-2}$. On the other hand, the mutual scattering in Fig. 7(top) varies more mildly, as the value $F^{\text{MS}}_{\text{max},2}-1$ of the reference configuration mostly fluctuates around $10^{-1}$. The 10 to 1000
times larger magnitude of mutual scattering compared to the differential
cross-section is understood mathematically from the fact that the interference
part of $F^{\text{MS}}_{2}$ is proportional to the imaginary part of the
scattering amplitude ${\rm
Im}\left[f(\mathbf{k}_{\text{in},1},\mathbf{k}_{\text{in},2})\right]$ (see
Appendix 8 for more details), while the differential cross-section Eq. (4)
gives the modulus squared of the scattering amplitude
$|f(\mathbf{k}_{\text{in},1},\mathbf{k}_{\text{in},2})|^{2}$. In short, we
consider the mutual scattering as a quantity that not only captures but also
magnifies the behaviour of differential cross-section $d\sigma/d\Omega$,
corresponding to the traditional speckle, and thus leads to a greater
sensitivity that we will explore in the next section.
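This order-of-magnitude argument can be made concrete with a hypothetical weak scattering amplitude (the value of $f$ below is illustrative only):

```python
import numpy as np

# hypothetical weak scattering amplitude at a large angle
f = 0.03 * np.exp(1j * 0.7)

one_beam = abs(f) ** 2   # differential cross-section ~ |f|^2, Eq. (4)
mutual = abs(f.imag)     # interference part of the mutual scattering ~ Im[f]

# for |f| << 1 the linear-in-f interference dwarfs the quadratic speckle
magnification = mutual / one_beam
```

For this illustrative amplitude the interference term exceeds the one-beam speckle by more than an order of magnitude, in line with the 10 to 1000 times enhancement quoted above.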
Figure 7: Comparison in a semi-log scale between (top) the maximum mutual
scattering of the second beam $F^{\text{MS}}_{\text{max},2}$ in the case of
asymmetric incidence and (bottom) the differential cross-section
$|f_{21}|^{2}\equiv|f(\mathbf{k}_{2},\mathbf{k}_{1})|^{2}$ with
$N_{\text{dipole}}=250$, and $N_{\text{realization}}=50$. The blue zones show
how the maximum mutual scattering and the differential cross-section vary as
the configurations of the dipoles are changed.
## 5 Results III - Sensing the position of a single displaced scatterer
Knowing that two-beam mutual scattering is more sensitive than conventional
one-beam scattering with respect to internal position variations of scatterers
inside an opaque sample, we now turn to how the displacement of a single
nanoparticle deep inside a sample is sensed with mutual scattering.
### 5.1 Setup
In brief, the procedure of our numerical setup is as follows:
1. 1.
We start with a “reference” configuration: a selected dipole at position
$\mathbf{r}_{0}$, and the other $(N_{\text{dipole}}-1)$ dipoles randomly
distributed in the box, as illustrated in Fig. 2(b).
2. 2.
The selected dipole at $\mathbf{r}_{0}$ is displaced in the $x$-direction, as
shown in Fig. 2(b), while the positions of all other ($N_{\text{dipole}}-1$)
scatterers are fixed.
3. 3.
We calculate the difference in mutual scattering between the reference
configuration and the new configuration obtained after the displacement of the
red-sphere dipole.
4. 4.
We repeat the process with a new reference configuration where the selected
dipole is still located at $\mathbf{r}_{0}$ but the other
$(N_{\text{dipole}}-1)$ dipoles have randomly changed positions. We compute
the resulting statistic after $N_{\text{realization}}$ different
configurations.
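Steps 1-2 of the procedure above can be sketched as follows (box size, units, and function names are illustrative):

```python
import numpy as np

def displaced_configuration(rng, n_dipole=250, box=4.0, dx=0.5):
    """Draw a reference configuration with the selected dipole at the
    origin r0 = (0, 0, 0), then displace only that dipole along x
    (steps 1-2); all other n_dipole - 1 scatterers stay fixed."""
    reference = np.zeros((n_dipole, 3))
    reference[1:] = rng.uniform(-box / 2, box / 2, size=(n_dipole - 1, 3))
    displaced = reference.copy()
    displaced[0, 0] += dx  # move only the selected (red-sphere) dipole
    return reference, displaced
```

Steps 3-4 then evaluate the mutual scattering for both configurations and repeat with freshly drawn reference configurations to build the statistics.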
We quantify the variation of mutual scattering relative to the displacement of
a single scatterer by a mathematical quantity called “susceptivity”, which is
defined based on Fig. 8(a,b).
Figure 8: Illustration of the susceptivity of mutual scattering for
$N_{\text{dipole}}=250$: (a) The solid blue line represents the reference
maximum scattering before moving the selected dipole, i.e., the red sphere in
Figure 2(b). The dashed orange line is the maximum scattering computed when the
selected dipole has been displaced to the new position. The light blue zone
covers all the values of maximum scattering for $N_{\text{realization}}=50$
realizations. (b) The susceptivity of mutual scattering (5) is defined as the
ratio of the variation upon moving one scatterer (the orange zone) to the
variation upon moving all $N_{\text{dipole}}$ scatterers (the light blue
zone).
The susceptivity of two-beam mutual scattering is defined as the ratio of the variation upon moving one scatterer, i.e., the orange slashed zone in Fig. 8(b), to the variation upon a complete change of configuration, i.e., moving all $N_{\text{dipole}}$ scatterers, shown as the light blue zone in Fig. 8(b):
$\displaystyle\tilde{\delta}F^{\text{MS}}\coloneqq\dfrac{\left|F^{\text{MS}}-F^{\text{MS}}_{\text{ref}}\right|}{F^{\text{MS}}_{\text{sup}}-F^{\text{MS}}_{\text{inf}}},$
(5)
where $F^{\text{MS}}_{\text{ref}}$ stands for the mutual scattering of the reference configuration and $F^{\text{MS}}$ for that of the configuration with the selected dipole displaced; their normalized difference is the susceptivity $\tilde{\delta}F^{\text{MS}}$. $F^{\text{MS}}_{\text{sup}}$ and
$F^{\text{MS}}_{\text{inf}}$ refer to the limit superior and limit inferior
which bound the variation of mutual scattering with respect to all the
possible configurations of scatterers. The susceptivity of the one-beam differential cross-section is calculated in a similar way.
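Eq. (5) reduces to a one-line computation once the ensemble of fully reshuffled values is available (array names are illustrative):

```python
import numpy as np

def susceptivity(f_ms, f_ms_ref, f_ms_ensemble):
    """Eq. (5): the one-dipole change |F_MS - F_MS_ref|, normalized by
    the spread (sup - inf) of F_MS over full reconfigurations of all
    dipoles; f_ms_ensemble is that array of ensemble values."""
    spread = np.max(f_ms_ensemble) - np.min(f_ms_ensemble)
    return np.abs(f_ms - f_ms_ref) / spread
```

A susceptivity of 1 would mean that moving a single dipole changes the signal as much as reshuffling the entire sample.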
### 5.2 Results and discussion
Figure 9: Comparison between (a) the susceptivity of differential cross-
section $\tilde{\delta}|f_{12}|^{2}$ and (b) the susceptivity of the maximum
scattering of the second beam $\tilde{\delta}F^{\text{MS}}_{\text{max},2}$
with respect to the displacement in the $x$-direction of moving dipole,
measured at angle $\gamma=90\degree$ for $N_{\text{dipole}}=250$ and
$N_{\text{realization}}=50$.
We express the susceptivity of mutual scattering in Eq. 5 as a function of
displacement of a single dipole originally located at position
$\mathbf{r}_{0}=(0,0,0)$, i.e., the centre of our box sample. The selected
dipole is shifted along the $x$-axis. The statistics of the susceptivity of
the maximum scattering of the second beam
$\tilde{\delta}F^{\text{MS}}_{\text{max},2}$ and the differential cross-
section
$\tilde{\delta}|f_{21}|^{2}\equiv\tilde{\delta}|f(\mathbf{k}_{\text{in},2},\mathbf{k}_{\text{in},1})|^{2}$
with respect to the displacement $\Delta x$ at angle $\gamma=90\degree$ are
shown in Fig. 9(a,b). Data are calculated with $N_{\text{dipole}}=250$
dipoles, based on $N_{\text{realization}}=50$ different reference
configurations of the location of dipoles. The solid blue lines stand for the
mean value of the susceptivity, while the dark and light blue zones are the
60-percentile and 90-percentile of the probability distribution of the
susceptivity, respectively. The displacement value $\Delta x$ is normalized
based on wavelength $\lambda$ and scattering mean free path $l_{\text{scat}}$.
Let us take a closer look at the susceptivity of the maximum scattering of the second beam ($\tilde{\delta}F^{\text{MS}}_{\text{max},2}$) in Fig. 9(b). The susceptivity increases sharply in the range $\Delta x/\lambda\in[0,0.5]$, then reaches an asymptotic value of about 0.025 over the range $\Delta x/\lambda\in[0.5,1.5]$. Intuitively, when
changing one dipole out of $N_{\text{dipole}}$ dipoles, the susceptivity is expected to be of the order of the ratio $1/N_{\text{dipole}}$. In other words, we expect
$\tilde{\delta}F^{\text{MS}}_{\text{max},2}\times N_{\text{dipole}}\approx 1$.
However, as seen on the right twin $y$-axis in Fig. 9(b), the asymptotic value of the susceptivity at large displacement is on average 6 times this expected value.
In practice, the change of the “one-beam speckle”, corresponding to the differential cross-section, as a function of the displacement of a single scatterer has never been experimentally measured, because the magnitude of the differential cross-section at large angles is too small. However, we notice
that the susceptivity of differential cross-section
$\tilde{\delta}|f_{12}|^{2}$ in Fig. 9(a) behaves very similarly to the
susceptivity of the maximum scattering
$\tilde{\delta}F^{\text{MS}}_{\text{max},2}$ in Fig. 9(b). This allows us to
study the dependence of speckles on the structure of the sample for the first
time through the property of multi-beam mutual scattering.
After expressing the susceptivity of the maximum mutual scattering of the
second beam $\tilde{\delta}F^{\text{MS}}_{\text{max},2}$ as a function of the
displacement of the dipole from the centre of the sample, the next step is to
find out how the susceptivity changes if the starting point of the moving
scatterer is located elsewhere in the sample. In particular, let the original
starting point of the moving (red) dipole be $\mathbf{r}_{1}=(0,0,z)$, where
$z$ represents how deep the dipole lies inside of the sample relative to the
incident surface. We choose 3 values of $z$:
* •
$z=-1.75$: The dipole is located very close to the incident surface.
* •
$z=0$: The dipole is at the center of the box.
* •
$z=1.75$: The dipole is located very far from the incident surface and near
the exit surface.
Figure 10: Comparison between (a) the susceptivity of differential cross-
section $|f_{21}|^{2}\equiv|f(\mathbf{k}_{2},\mathbf{k}_{1})|^{2}$ and (b) the
susceptivity of the maximum mutual scattering
$\tilde{\delta}F^{\text{MS}}_{\text{max},2}$ of beam 2 in the case of
asymmetric incidence with respect to the displacement of the moving dipole at
angle $\gamma=90\degree$ for $N_{\text{dipole}}=250$ and
$N_{\text{realization}}=50$. The figure depicts 3 different depths
$z=-1.75,z=0$, and $z=1.75$ corresponding to 3 colors blue, orange, and green
respectively. The solid lines stand for the mean value, while the colored
zones represent the standard error of the mean.
We plot the susceptivity of the maximum mutual scattering of the second beam
$\tilde{\delta}F^{\text{MS}}_{\text{max},2}$ with respect to the $x$-direction
displacement of the moving dipole from $\mathbf{r}_{1}=(0,0,z)$ for all three
values of $z$ in Fig. 10(b). Similar to Fig. 9(b), the susceptivity functions
increase and then fluctuate around a fixed value, which depends on the
original depth $z$ of the displaced dipole. We see that the blue line is
always above the orange line, which is, in turn, above the green line. In
other words, the susceptivity of the dipole located close to the incident
surface ($z=-1.75$) fluctuates at a larger asymptote than the susceptivity of
the dipole located far from the incident surface ($z=1.75$). In addition, the
standard error of the mean of the susceptivity tends to decrease as the
original location of the displaced dipole is further away from the incident
surface.
There is also a surprising similarity between the susceptivity of the maximum
mutual scattering in Fig. 10(b) and the differential cross-section in Fig.
10(a). This consolidates the fact that we can use mutual scattering as a
substitute for differential cross-section, i.e., the modulus squared of
scattering amplitude, in order to detect the location of a single scatterer.
Figure 11: Average susceptivity of the maximum mutual scattering $\tilde{\delta}F^{\text{MS}}_{\text{max}}$ of beam 1 (the blue line) and beam 2 (the orange line) with respect to the depth of the moving dipole.
The final conclusion of this paper is summarized by Fig. 11, which expresses
the average susceptivity of the maximum mutual scattering of beam 1, $\langle\tilde{\delta}F^{\text{MS}}_{\text{max},1}\rangle$ (the blue line), and of beam 2, $\langle\tilde{\delta}F^{\text{MS}}_{\text{max},2}\rangle$ (the orange line), as functions of the depth of the displaced dipole in the case of
asymmetric incidence. The result is computed for $N_{\text{dipole}}=250$ and
$N_{\text{realization}}=50$ at the displacement $\Delta x=1.5\lambda$ and
averaged over all the angles $\gamma$. In agreement with Fig. 10(b), Fig. 11 shows that the susceptivity of mutual scattering of beam 2 decreases as the depth $z$ (relative to the incident surface) of the dipole displaced in the $x$-direction increases. This property is understood through the scattering
amplitude $f(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}})$ of multiple
scatterers. For a fixed $\mathbf{k}_{\text{in},1}$, we conjecture that the
scattering amplitude $f(\mathbf{k}_{\text{in},2},\mathbf{k}_{\text{in},1})$,
i.e., the scattering strength from $\mathbf{k}_{\text{in},1}$ to
$\mathbf{k}_{\text{in},2}$, will be more affected by scatterers located closer
to the incident surface (in the direction $\mathbf{k}_{\text{in},1}$) than
ones located further away. On the other hand, the susceptivity of mutual
scattering of beam 1 tends to grow with the depth $z$. This is explained as
follows:
1.
Since the scattering amplitude
$f(\mathbf{k}_{\text{in},2},\mathbf{k}_{\text{in},1})$ is more affected by the
scatterers located closer to the incident surface (in the direction
$\mathbf{k}_{\text{in},1}$), by the reciprocity of light, for a fixed
$\mathbf{k}_{\text{in},1}$, the scattering amplitude
$f(-\mathbf{k}_{\text{in},1},-\mathbf{k}_{\text{in},2})$ (the scattering
strength from $-\mathbf{k}_{\text{in},2}$ to $-\mathbf{k}_{\text{in},1}$) will
be more affected by scatterers located further from the incident surface (in
the direction $-\mathbf{k}_{\text{in},1}$).
2.
Since $f(-\mathbf{k}_{\text{in},1},-\mathbf{k}_{\text{in},2})$ is more
affected by scatterers located further from the incident surface (in the
direction $-\mathbf{k}_{\text{in},1}$), by the symmetry of our box sample, for
a fixed $\mathbf{k}_{\text{in},1}$, the scattering amplitude
$f(\mathbf{k}_{\text{in},1},\mathbf{k}_{\text{in},2})$ will be more affected
by scatterers located further from the incident surface (in the direction
$\mathbf{k}_{\text{in},1}$).
All these statistical properties are used to determine the location of the
moving dipole or, going further, to detect motion inside opaque objects.
## 6 Discussion
The superior sensitivity of the two-beam mutual scattering over the one-beam
differential cross-section with respect to subtle internal variations of the
sample opens up considerable potential to explore structural information that
is very difficult to extract from traditional one-beam speckle. One of the most
interesting applications is in imaging of the movement of tissues and body
fluids, which until now still relies on the Doppler effect in ultrasonography
[24]. The complex structure of biological tissues requires expensive high-tech
tools to bypass the multiple scattering problem. Thus, if possible,
statistically detecting the movement of tissues and body fluids through mutual
scattering could open up new possibilities for optical applications in
medicine.
Verifying the current theoretical predictions in experiments will guide the
further development of research on the applications of mutual scattering. Our
near-future goal is to measure, in upcoming experiments, the statistical
difference in the susceptivity of mutual scattering with respect to a change
in the displacement of an internal part of an opaque medium.
One of the further research directions of this topic is to fully develop a
method and schematic diagram based on mutual scattering for extracting a
complete profile of the complex scattering amplitude of a given sample for all
incoming and outgoing wave vectors. Since mutual scattering allows us to
extract the imaginary part of the scattering amplitude ${\rm
Im}\left[f(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}})\right]$, it is
possible to reconstruct the full complex profile of the scattering amplitude
$f(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}})$ of the object for all
incoming and outgoing directions.
Finally, it is worth investigating the potential of speckle correlations of
“two-beam speckle patterns”, a technique commonly used with single-beam
speckle [25, 26], as another way to extract more information about the shape
and movement of the object.
## 7 Conclusion
In this paper, we discuss potential applications of mutual scattering in the
study of the internal structure of matter. In particular, the mutual
scattering of incoming beams, which is of the same order of magnitude as the
scattering amplitude itself, is measured more easily and with much higher
accuracy than the scattering current from one beam, which is proportional to
the modulus squared of the scattering amplitude. Adding an extra beam (on top
of traditional one-beam techniques) and measuring “two-beam speckle patterns”
allows us to more precisely extract statistical data regarding the internal
structure of objects, information that is usually under-appreciated and
treated as noise to be eliminated.
Moreover, we demonstrate through a numerical example how information about the
depth of a displaced dipole in an opaque box is obtained through the
susceptivity of mutual scattering. In detail, the optical characteristics of
the boxed sample are simulated by the multiple scattering problem of
$N_{\text{dipole}}$ scatterers. The calculation results show that, at
different depths, the susceptivity function of mutual scattering increases
with displacement distance and fluctuates around different values. Studying
these statistical data shows that the susceptivity of mutual scattering of
beam 2 (or beam 1) tends to decline (or increase) as the depth of the
displaced dipole increases, which in turn reveals the location of the moving
dipole.
## 8 Appendix - Theory
### 8.1 Scattering with a single incoming wave
We will limit ourselves to scalar waves, for mathematical convenience. In
scattering theory, when scalar light is scattered from an object, the scalar
field is partitioned into an unperturbed part and a scattered part:
$\displaystyle\psi=\psi_{\text{in}}+\psi_{\text{scat}},$ (6)
where, in the case of a single incident plane wave, the unperturbed part (the
incident part) is given as follows:
$\displaystyle\psi_{\text{in}}(\mathbf{r},\mathbf{k}_{\text{in}})$
$\displaystyle=A\exp\left(i\mathbf{k}_{\text{in}}\cdot\mathbf{r}-i\omega
t+i\phi\right).$ (7)
The real-valued $A$ is the amplitude, and $\omega$ and $\phi$ are the
frequency and phase of the incoming plane wave.
$\mathbf{k}_{\text{in}}=(\omega/c)\hat{\mathbf{k}}_{\text{in}}\equiv
k\hat{\mathbf{k}}_{\text{in}}$ is the incoming wave vector. Then, the
amplitude of the total wave in the far-field is given by:
$\displaystyle\lim_{r\to\infty}\psi(\mathbf{r})$
$\displaystyle=\lim_{r\to\infty}\left(\psi_{\text{in}}+\psi_{\text{scat}}\right)$
$\displaystyle=A\exp\left(i\mathbf{k}_{\text{in}}\cdot\mathbf{r}-i\omega
t+i\phi\right)+\dfrac{A}{r}f\left(\mathbf{r},\mathbf{k}_{\text{in}}\right)\exp\left(ikr-i\omega
t+i\phi\right),$ (8)
where the scattering amplitude
$f\left(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}}\right)$ is the
scattering strength from incoming direction $\mathbf{k}_{\text{in}}$ to
outgoing direction $\mathbf{k}_{\text{out}}$, and $r=|\mathbf{r}|$. The
scattering amplitude, as shown in Section 3 and Section 5, is an essential
ingredient in extracting the internal structure of the sample.
We recall that, experimentally, the quantity measured at far-field detectors
is usually the current $\mathbf{J}$ of the scalar wave instead of the
amplitude of the wave [21]:
$\displaystyle\mathbf{J}\equiv-{\rm
Re}\left[(\partial_{t}\psi)^{\ast}\nabla\psi\right],$ (9)
where we express the real and imaginary part of a complex value $x$ as $\rm
Re[x]$ and $\rm Im[x]$, respectively. Then, the extinction of the wave is
given by the interference of the incoming beams and the scattered beams:
$\displaystyle\mathbf{J}_{\text{ext}}=-{\rm
Re}[(\partial_{t}\psi_{\text{in}})^{\ast}\nabla\psi_{\text{scat}}]-{\rm
Re}[(\partial_{t}\psi_{\text{scat}})^{\ast}\nabla\psi_{\text{in}}].$ (10)
In the one-incoming-beam case, the observed scattering current measured at the
direction $\mathbf{r}$ is given as follows:
$\displaystyle\lim_{r\to\infty}\mathbf{J}_{\text{scat}}(\mathbf{r},\mathbf{k}_{\text{in}})=$
$\displaystyle\dfrac{\omega^{2}}{r^{2}c}A^{2}\left|f(\mathbf{r},\mathbf{k}_{\text{in}})\right|^{2}.$
(11)
### 8.2 Scattering with multiple waves
Mutual scattering arises when there are multiple incoming waves. We focus on
the scenario of two incoming plane waves:
$\displaystyle\psi_{\text{in}}(\mathbf{r},\mathbf{k}_{\text{in}})$
$\displaystyle=\psi_{\text{in},1}(\mathbf{r},\mathbf{k}_{\text{in},1})+\psi_{\text{in},2}(\mathbf{r},\mathbf{k}_{\text{in},2})$
$\displaystyle=A_{1}\exp\left(i\mathbf{k}_{\text{in},1}\cdot\mathbf{r}-i\omega
t+i\phi_{1}\right)+A_{2}\exp\left(i\mathbf{k}_{\text{in},2}\cdot\mathbf{r}-i\omega
t+i\phi_{2}\right),$ (12)
where we assume that the two beams have the same frequency $\omega$. Here
$A_{1}$ (or $A_{2}$) stands for the amplitude, $\mathbf{k}_{\text{in},1}$ (or
$\mathbf{k}_{\text{in},2}$) the incoming direction, and $\phi_{1}$ (or
$\phi_{2}$) the phase of the first (or second) incoming plane wave.
The far-field amplitude of the waves in the two-beam case reads:
$\displaystyle\lim_{r\to\infty}\psi(\mathbf{r})=$
$\displaystyle\lim_{r\to\infty}\left(\psi_{\text{in}}+\psi_{\text{scat}}\right)$
$\displaystyle=$
$\displaystyle\,A_{1}\exp\left(i\mathbf{k}_{\text{in},1}\cdot\mathbf{r}-i\omega
t+i\phi_{1}\right)+\dfrac{A_{1}}{r}f\left(\mathbf{r},\mathbf{k}_{\text{in},1}\right)\exp\left(ikr-i\omega
t+i\phi_{1}\right)$
$\displaystyle+A_{2}\exp\left(i\mathbf{k}_{\text{in},2}\cdot\mathbf{r}-i\omega
t+i\phi_{2}\right)+\dfrac{A_{2}}{r}f\left(\mathbf{r},\mathbf{k}_{\text{in},2}\right)\exp\left(ikr-i\omega
t+i\phi_{2}\right).$ (13)
The current of self-extinction of beam 1 along the forward scattering
direction is given by:
$\displaystyle\lim_{r\to\infty}\mathbf{J}_{\text{in},1}(\mathbf{r},\mathbf{k}_{\text{in},1})=$
$\displaystyle-\dfrac{2\omega}{r^{2}}A_{1}^{2}{\rm
Im}\left[f(\mathbf{k}_{\text{in},1},\mathbf{k}_{\text{in},1})\right]\delta\left\\{1-\cos(\mathbf{r},\mathbf{k}_{\text{in},1})\right\\},$
(14)
where $\cos(\mathbf{r},\mathbf{k}_{\text{in},1})$ denotes the cosine of the
angle between the two vectors $\mathbf{r}$ and
$\mathbf{k}_{\text{in},1}$.
$\delta\left\\{1-\cos(\mathbf{r},\mathbf{k}_{\text{in},1})\right\\}$ is the
Dirac delta function, whose value is zero everywhere except at
$1-\cos(\mathbf{r},\mathbf{k}_{\text{in},1})=0$, and whose integral over all
values of $\cos(\mathbf{r},\mathbf{k}_{\text{in},1})$ is equal to one.
Then, the current of extinction detected along the direction
$\mathbf{k}_{\text{in},1}$ consists of the incoming current of the first beam
and the extinction of the field scattered from the direction
$\mathbf{k}_{\text{in},2}$ into $\mathbf{k}_{\text{in},1}$:
$\displaystyle\lim_{r\to\infty}\mathbf{J}_{\text{ext},1}(\mathbf{k}_{\text{in},2},\mathbf{k}_{\text{in},1})=$
$\displaystyle-\dfrac{2\omega}{r^{2}}A^{2}_{1}{\rm
Im}\left[f(\mathbf{k}_{\text{in},1},\mathbf{k}_{\text{in},1})\right]\delta\left\\{1-\cos(\mathbf{r},\mathbf{k}_{\text{in},1})\right\\}$
$\displaystyle-\dfrac{2\omega}{r^{2}}A_{1}A_{2}{\rm
Im}\left[f(\mathbf{k}_{\text{in},2},\mathbf{k}_{\text{in},1})e^{i(\phi_{2}-\phi_{1})}\right]\delta\left\\{1-\cos(\mathbf{r},\mathbf{k}_{\text{in},1})\right\\}.$
(15)
We note that the value of $\lim_{r\to\infty}\mathbf{J}_{\text{ext},1}$ depends
on the angle between two vectors $\mathbf{k}_{\text{in},1}$ and
$\mathbf{k}_{\text{in},2}$. Letting $\gamma$ denote this angle, we express
the total extinction current of beam 1 at far field as a function of
$\gamma$: $\lim_{r\to\infty}\mathbf{J}_{\text{ext},1}(\gamma)$. Similarly, we
express the self-extinction
$\lim_{r\to\infty}\mathbf{J}_{\text{in},1}(\gamma)$ as a function of
$\gamma$.
### 8.3 Mutual scattering
We define the normalized mutual scattering of beam $i$, $i\in\\{1,2\\}$, as
follows:
$\displaystyle
F^{\text{MS}}_{i}(\gamma)=\dfrac{\lim_{r\to\infty}\mathbf{J}_{\text{ext},i}(\gamma)}{\lim_{r\to\infty}\mathbf{J}_{\text{in},i}(\gamma=0)}\>,$
(16)
where the denominator normalizes the value of the mutual
scattering. In fact, at $\gamma=0$, by tuning the phase difference
$(\phi_{1}-\phi_{2})$, the mutual scattering $F^{\text{MS}}_{i}(\gamma=0)$ has
a maximum value of 2 and a minimum value of 0 (see for instance Fig. 3).
The self-extinction of beam $i$ containing only the forward scattering,
denoted by $F^{\text{forward}}_{i}(\gamma)$, is normalized by the following
equation:
$\displaystyle
F^{\text{forward}}_{i}(\gamma)\equiv\dfrac{\lim_{r\to\infty}\mathbf{J}_{\text{in},i}(\gamma)}{\lim_{r\to\infty}\mathbf{J}_{\text{in},i}(\gamma=0)}\;.$
(17)
The normalized mutual scattering of both beams
$F^{\text{MS}}_{12}(\gamma)$ is given by:
$\displaystyle
F^{\text{MS}}_{12}(\gamma)=\dfrac{\lim_{r\to\infty}[\mathbf{J}_{\text{ext},1}(\gamma)+\mathbf{J}_{\text{ext},2}(\gamma)]}{\lim_{r\to\infty}[\mathbf{J}_{\text{in},1}(\gamma=0)+\mathbf{J}_{\text{in},2}(\gamma=0)]}\;.$
(18)
Equations (16) and (18) will be used throughout this paper to calculate the
mutual scattering. Finally, the forward-scattering (self-extinction) of both
beams is given by
$\displaystyle
F^{\text{forward}}_{12}(\gamma)=\dfrac{\lim_{r\to\infty}(\mathbf{J}_{\text{in},1}+\mathbf{J}_{\text{in},2})}{\lim_{r\to\infty}[\mathbf{J}_{\text{in},1}(\gamma=0)+\mathbf{J}_{\text{in},2}(\gamma=0)]}.$
(19)
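The statement below Eq. (16), that $F^{\text{MS}}_{i}(\gamma=0)$ sweeps between 0 and 2 as the phase difference is tuned, can be reproduced in a toy limit. The sketch below assumes (purely as an illustration, not the full far-field calculation) that at $\gamma=0$ the two co-propagating beams simply interfere, so the normalized signal reduces to $|A_{1}e^{i\phi_{1}}+A_{2}e^{i\phi_{2}}|^{2}/(A_{1}^{2}+A_{2}^{2})$.

```python
import numpy as np

# Toy model (assumption): at gamma = 0 the two incident beams co-propagate,
# so the normalized signal is plain two-beam interference,
# F(0) = |A1 e^{i phi1} + A2 e^{i phi2}|^2 / (A1^2 + A2^2).

def F_gamma0(A1, A2, dphi):
    """Normalized two-beam interference signal at gamma = 0."""
    return np.abs(A1 + A2 * np.exp(1j * dphi))**2 / (A1**2 + A2**2)

dphi = np.linspace(0.0, 2.0 * np.pi, 1001)  # phase difference phi2 - phi1
F = F_gamma0(1.0, 1.0, dphi)
print(F.max(), F.min())  # sweeps the full range [0, 2] for equal amplitudes
```

For equal amplitudes the signal reaches its maximum of 2 at $\phi_{2}-\phi_{1}=0$ and its minimum of 0 at $\phi_{2}-\phi_{1}=\pi$, consistent with the range quoted for $F^{\text{MS}}_{i}(\gamma=0)$.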
### 8.4 T-matrix for multiple scattering problem
Given a complex scattering medium consisting of $N_{\text{dipole}}$
scatterers, calculating the T-matrix of the whole sample requires summing all
the scattering events, order by order up to infinite order, over the
collection of all $N_{\text{dipole}}$ scatterers. The scattering off each
point scatterer is, in turn, characterized by its single-particle t-matrix.
We recall the t-matrix of a single point dipole (see [27] for more details):
$\displaystyle t(\omega)=-\dfrac{4\pi
c}{\omega_{0}Q}\dfrac{\omega_{0}^{2}}{\omega_{0}^{2}-\omega^{2}-i\frac{\omega^{3}}{Q\omega_{0}}},$
(20)
where $\omega_{0}$ is the resonance frequency and $Q$ is the quality factor
of the resonance. For simplicity, frequencies are expressed in units where
the wavelength is $\lambda=1$. The resonance frequency is set very close to
the frequency of the light, $\omega_{0}=1.0001\omega$. The speed of light is
set to $c=1$, and the quality factor is chosen to be $Q=10$.
The polarizability $\alpha(\omega)$ in Fig. 2 is obtained from the t-matrix
of a single point scatterer:
$\displaystyle\alpha(\omega)=-t(\omega)\dfrac{c^{2}}{\omega^{2}}.$ (21)
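Equations (20) and (21) are easy to evaluate numerically with the parameters quoted above ($c=1$, $\lambda=1$ so $\omega=2\pi$, $Q=10$, $\omega_{0}=1.0001\omega$). The observation in the final comment, that $|t|$ sits near the value $4\pi c/\omega=2$ so close to resonance, is our own numerical remark, not a claim from the text.

```python
import numpy as np

# Parameters quoted in the text: c = 1, lambda = 1 (so omega = 2*pi),
# Q = 10, omega0 = 1.0001 * omega.
c, Q = 1.0, 10.0
omega = 2.0 * np.pi
omega0 = 1.0001 * omega

def t_single(w):
    """Single-dipole t-matrix, Eq. (20)."""
    return (-4.0 * np.pi * c / (omega0 * Q)
            * omega0**2 / (omega0**2 - w**2 - 1j * w**3 / (Q * omega0)))

def alpha(w):
    """Polarizability, Eq. (21)."""
    return -t_single(w) * c**2 / w**2

# This close to resonance the t-matrix magnitude approaches the unitary
# limit 4*pi*c/omega = 2 (in these units).
print(abs(t_single(omega)), alpha(omega))
```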
From the t-matrix of a single point scatterer, the full T-matrix
$T(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}})$ of the sample is obtained
by inversion of an $N_{\text{dipole}}\times N_{\text{dipole}}$ matrix;
calculation details are given in [13].
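The matrix inversion described above can be sketched with a scalar coupled-dipole model. The single-dipole t-matrix value, the box geometry, and the Green's-function normalization below are illustrative assumptions (conventions for factors of $4\pi$ and signs differ between references, e.g. [13, 27]); as a by-product, the sketch checks numerically the reciprocity $f(\mathbf{k}_{\text{out}},\mathbf{k}_{\text{in}})=f(-\mathbf{k}_{\text{in}},-\mathbf{k}_{\text{out}})$ invoked earlier.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2.0 * np.pi                 # wave number for lambda = 1
N = 40                          # number of point scatterers (illustrative)
pos = rng.uniform(0.0, 4.0, size=(N, 3))   # random cloud in a 4-lambda box

t = 0.1 + 0.05j                 # placeholder single-dipole t-matrix (assumption)

# Pairwise free-space scalar propagator exp(ikr)/r, zero on the diagonal
# (no self-interaction term in this sketch).
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
d_safe = np.where(d > 0.0, d, 1.0)
G = np.where(d > 0.0, np.exp(1j * k * d_safe) / d_safe, 0.0)

def amplitude(k_out, k_in):
    """Scattering amplitude up to prefactors: solve for the self-consistent
    dipole response b = (I - t G)^{-1} t psi_in, then project it onto the
    outgoing plane wave."""
    b = np.linalg.solve(np.eye(N) - t * G, t * np.exp(1j * pos @ k_in))
    return np.exp(-1j * pos @ k_out) @ b

k_in = k * np.array([0.0, 0.0, 1.0])
k_out = k * np.array([0.0, 1.0, 0.0])
f1 = amplitude(k_out, k_in)
f2 = amplitude(-k_in, -k_out)   # reciprocity partner
print(f1, f2)                   # the two agree: reciprocity of light
```

Because the propagator matrix $G$ is symmetric, the two projections agree to machine precision, which is the algebraic content of the reciprocity argument used in the depth-sensing discussion.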
Funding This work is supported by the NWO-TTW program P15-36 “Free-Form
Scattering Optics” (FFSO) and the MESA+ Institute section Applied
Nanophotonics (ANP).
Acknowledgments It is a great pleasure to thank Lars Corbijn van Willenswaard,
and Alfredo Rates for their helpful discussions. AL and WLV are grateful to
the staff of the Institut Langevin (Paris) for hospitality during their recent
visits.
Disclosures The authors declare no conflicts of interest.
Data availability Data underlying the results presented in this paper are
available in Ref. [28].
## References
* [1] B. Warren, _X-ray Diffraction_ , Addison-Wesley Series in Metallurgy and Materials (Addison-Wesley Publishing, Reading, MA, 1969).
* [2] A. Ishimaru, _Wave propagation and scattering in random media_ (Academic Press, New York, NY, 1978).
* [3] J. Als-Nielsen and D. McMorrow, _Elements of Modern X-ray Physics_ (John Wiley and Sons, New York, NY, 2001).
* [4] B. Chu, _Laser light scattering_ (Dover, Mineola, NY, 2007).
* [5] W. L. Vos, R. Sprik, A. van Blaaderen, A. Imhof, A. Lagendijk, and G. H. Wegdam, “Strong effects of photonic band structures on the diffraction of colloidal crystals,” Phys. Rev. B 53, 16231–16235 (1996).
* [6] W. L. Vos and L. A. Woldering, “Cavity quantum electrodynamics with three-dimensional photonic bandgap crystals,” in _Light Localisation and Lasing,_ M. Ghulinyan and L. Pavesi, eds. (Cambridge Univ. Press, Cambridge, 2015), pp. 180–216.
* [7] D. J. Pine, D. A. Weitz, P. M. Chaikin, and E. Herbolzheimer, “Diffusing wave spectroscopy,” Physical Review Letters 60, 1134–1137 (1988).
* [8] G. Maret and P. E. Wolf, “Multiple light scattering from disordered media. The effect of brownian motion of scatterers,” Zeitschrift für Physik B Condensed Matter 65, 409–413 (1987).
* [9] J. Stetefeld, S. A. McKenna, and T. R. Patel, “Dynamic light scattering: a practical guide and applications in biomedical sciences,” Biophysical Reviews 8, 409–427 (2016).
* [10] A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nature Photonics 6, 283–292 (2012).
* [11] S. Rotter and S. Gigan, “Light fields in complex media: Mesoscopic scattering meets wave control,” Reviews of Modern Physics 89, 015005 (2017).
* [12] H. Cao, A. P. Mosk, and S. Rotter, “Shaping the propagation of light in complex media,” Nature Physics 18, 994–1007 (2022).
* [13] A. Lagendijk, A. P. Mosk, and W. L. Vos, “Mutual extinction and transparency of multiple incident light waves,” EPL 130 (2020).
* [14] A. Rates, A. Lagendijk, O. Akdemir, A. P. Mosk, and W. L. Vos, “Observation of mutual extinction and transparency in light scattering,” Physical Review A 104 (2021).
* [15] P. Hong, O. S. Ojambati, A. Lagendijk, A. P. Mosk, and W. L. Vos, “Three-dimensional spatially resolved optical energy density enhanced by wavefront shaping,” Optica 5, 844–849 (2018).
* [16] N. Raptis, E. Pikasis, and D. Syvridis, “Performance evaluation of modulation and multiple access schemes in ultraviolet optical wireless connections for two atmosphere thickness cases,” Journal of the Optical Society of America A 33, 1628 (2016).
* [17] J. Haberko, N. Muller, and F. Scheffold, “Direct laser writing of three-dimensional network structures as templates for disordered photonic materials,” Physical Review A 88, 043822 (2013).
* [18] K. Falaggis, J. Rolland, F. Duerr, and A. Sohn, “Freeform Optics: Introduction,” Optics Express 30, 6450–6455 (2022).
* [19] N. Moteki, “Measuring the complex forward-scattering amplitude of single particles by self-reference interferometry: CAS-v1 protocol,” Optics Express 29, 20688 (2021).
* [20] S. B. Hasan, A. P. Mosk, W. L. Vos, and A. Lagendijk, “Finite-size scaling of the density of states in photonic band gap crystals,” Phys. Rev. Lett. 120, 237402 (2018).
* [21] A. Lagendijk and B. A. Van Tiggelen, “Resonant multiple scattering of light,” Physics Report 29, 143–215 (1996).
* [22] M. Born, E. Wolf, A. B. Bhatia, P. C. Clemmow, D. Gabor, A. R. Stokes, A. M. Taylor, P. A. Wayman, and W. L. Wilcock, _Principles of Optics_ (Cambridge University Press, Cambridge, UK, 1999).
* [23] J. W. Goodman, _Speckle Phenomena in Optics: Theory and Applications, Second Edition_ (SPIE, Bellingham, WA, 2020).
* [24] A. Srivastav, K. Bhogi, S. Mandal, and M. Sharad, “An adaptive low-complexity abnormality detection scheme for wearable ultrasonography,” IEEE Transactions on Circuits and Systems II: Express Briefs 66, 1466–1470 (2019).
* [25] J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
* [26] Y. Jauregui-Sánchez, H. Penketh, and J. Bertolotti, “Tracking moving objects through scattering media via speckle correlations,” Nature Communications 13, 5779 (2022).
* [27] P. De Vries, D. V. Van Coevorden, and A. Lagendijk, “Point scatterers for classical waves,” Reviews of Modern Physics 70, 447–466 (1998).
* [28] M. D. Truong, “DATASET sensing the position of a single scatterer in an opaque medium by mutual scattering, https://doi.org/10.5281/zenodo.7362231,” (2022).
# Topological magnons driven by the Dzyaloshinskii-Moriya interaction in the
centrosymmetric ferromagnet Mn5Ge3
M. dos Santos Dias<EMAIL_ADDRESS>Peter Grünberg Institut
and Institute for Advanced Simulation, Forschungszentrum Jülich $\&$ JARA,
D-52425 Jülich, Germany Faculty of Physics, University of Duisburg-Essen and
CENIDE, D-47053 Duisburg, Germany Scientific Computing Department, STFC
Daresbury Laboratory, Warrington WA4 4AD, United Kingdom N. Biniskos
<EMAIL_ADDRESS>Forschungszentrum Jülich GmbH, Jülich Centre
for Neutron Science at MLZ, Lichtenbergstr. 1, D-85748 Garching, Germany
Current address: Charles University, Faculty of Mathematics and Physics,
Department of Condensed Matter Physics, Ke Karlovu 5, 121 16, Praha, Czech
Republic F. J. dos Santos<EMAIL_ADDRESS>Laboratory for Materials
Simulations, Paul Scherrer Institut, 5232 Villigen PSI, Switzerland Theory
and Simulation of Materials (THEOS), and National Centre for Computational
Design and Discovery of Novel Materials (MARVEL), École Polytechnique Fédérale
de Lausanne, 1015 Lausanne, Switzerland K. Schmalzl Forschungszentrum Jülich
GmbH, Jülich Centre for Neutron Science at ILL, 71 Avenue des Martyrs, F-38000
Grenoble, France J. Persson Forschungszentrum Jülich GmbH, Jülich Centre for
Neutron Science (JCNS-2) and Peter Grünberg Institut (PGI-4), JARA-FIT,
D-52425 Jülich, Germany F. Bourdarot Université Grenoble Alpes, CEA, IRIG,
MEM, MDN, F-38000 Grenoble, France N. Marzari Theory and Simulation of
Materials (THEOS), and National Centre for Computational Design and Discovery
of Novel Materials (MARVEL), École Polytechnique Fédérale de Lausanne, 1015
Lausanne, Switzerland Laboratory for Materials Simulations, Paul Scherrer
Institut, 5232 Villigen PSI, Switzerland S. Blügel Peter Grünberg Institut
and Institute for Advanced Simulation, Forschungszentrum Jülich $\&$ JARA,
D-52425 Jülich, Germany T. Brückel Forschungszentrum Jülich GmbH, Jülich
Centre for Neutron Science (JCNS-2) and Peter Grünberg Institut (PGI-4), JARA-
FIT, D-52425 Jülich, Germany S. Lounis Peter Grünberg Institut and Institute
for Advanced Simulation, Forschungszentrum Jülich $\&$ JARA, D-52425 Jülich,
Germany Faculty of Physics, University of Duisburg-Essen and CENIDE, D-47053
Duisburg, Germany
###### Abstract
The phase of the quantum-mechanical wave function can encode a topological
structure with wide-ranging physical consequences, such as anomalous transport
effects and the existence of edge states robust against perturbations. While
this has been exhaustively demonstrated for electrons, properties associated
with the elementary quasiparticles in magnetic materials are still
underexplored. Here, we show theoretically and via inelastic neutron
scattering experiments that the bulk ferromagnet Mn5Ge3 hosts gapped
topological Dirac magnons. Although inversion symmetry prohibits a net
Dzyaloshinskii-Moriya interaction in the unit cell, it is locally allowed and
is responsible for the gap opening in the magnon spectrum. This gap is
predicted and experimentally verified to close by rotating the magnetization
away from the $c$-axis with an applied magnetic field. Hence, Mn5Ge3 realizes
a gapped Dirac magnon material in three dimensions. Its tunability by chemical
doping or by thin film nanostructuring defines an exciting new platform to
explore and design topological magnons. More generally, our experimental route
to verify and control the topological character of the magnons is applicable
to bulk centrosymmetric hexagonal materials, which calls for systematic
investigation.
## I Introduction
Recent breakthroughs in the physics of electrons in solids resulted from the
application of topological concepts to the quantum-mechanical wave function,
highlighting the role of the Berry phase Berry (1984). For instance, the
modern understanding of the integer quantum Hall effect in a two-dimensional
(2D) system is that of a gapped bulk with non-zero Chern numbers which imply
the existence of chiral edge states responsible for the quantized conduction
Thouless _et al._ (1982). Three-dimensional (3D) topological insulators are
gapped by the spin-orbit interaction, leading to Dirac-like surface states
with a linear dispersion and spin-momentum locking that underpin the quantum
spin Hall effect Hasan and Kane (2010). There is now a systematic
classification of the possible topological phases in electronic systems,
encompassing also gapless systems such as Weyl semimetals, where the
dimensionality but also spatial and magnetic symmetries are prominent Bradlyn
_et al._ (2017); Po _et al._ (2017); Kruthoff _et al._ (2017).
Topology is agnostic to whether the (quasi)particles are fermions or bosons,
so that magnons can also be responsible for novel physical effects McClarty
(2022). Perhaps the first example is the Hall effect experienced by a
thermally-induced magnon current in ferromagnetic (FM) Lu2V2O7, with the
Dzyaloshinskii-Moriya interaction (DMI) playing a similar role to the one of
spin-orbit coupling (SOC) for electronic systems Onose _et al._ (2010);
Matsumoto and Murakami (2011); Mena _et al._ (2014). Magnetic materials with
an hexagonal lattice generally exhibit a Dirac-like magnon dispersion at the
K-point in the Brillouin zone, which if gapped signals the existence of non-
trivial topology Zhang _et al._ (2013); Mook _et al._ (2014); Chisnell _et
al._ (2015); Fransson _et al._ (2016); Pershoguba _et al._ (2018). Such
topological magnon insulators were experimentally identified in 2D FM
materials Chen _et al._ (2018, 2021); Zhu _et al._ (2021), while gapless
Dirac magnons were characterized in 3D magnetic materials such as
antiferromagnetic (AFM) Cu3TeO6 Yao _et al._ (2018) and CoTiO3 Yuan _et al._
(2020); Elliot _et al._ (2021), and the elemental FM Gd Scheie _et al._
(2022). While a symmetry-based approach has been proposed to predict materials
hosting topological magnons Karaki _et al._ (2023), experimentally confirming
the topological character is challenging, as the magnon Hall effect is
difficult to measure and other signatures such as a characteristic winding of
the scattering intensity have only recently been detected Elliot _et al._
(2021).
In this article, we study the 3D centrosymmetric ferromagnet Mn5Ge3 Forsyth
and Brown (1990); Maraytta _et al._ (2020). This material exhibits
significant anomalous Hall Zeng _et al._ (2006) and Nernst Kraft _et al._
(2020) effects which are signatures of large electronic Berry phases. Its
Curie temperature ($T_{\text{C}}$) can be enhanced by carbon doping Gajdzik
_et al._ (2000) as successfully described by a previous theoretical work
Slipukhina _et al._ (2009), but its magnonic properties remain unexplored.
Here, we theoretically predict and experimentally confirm the existence of
gapped Dirac magnons at the K-point due to the DMI. This gap can be closed by
rotating the magnetization direction with an external magnetic field, thus
validating the proposed gap mechanism and confirming the topological character
of the magnons at the K-point. Our experimental route to verify and control
the topological character of the magnons is not limited to Mn5Ge3 and should
also be applicable to other bulk centrosymmetric hexagonal materials.
## II Results
### II.1 Basic properties of Mn5Ge3
A high-quality Mn5Ge3 single crystal of about $10\text{\,}\mathrm{g}$ has been
grown using the Czochralski method. The space group is $P6_{3}/mcm$ and the
unit cell contains 10 Mn atoms (and 6 Ge atoms), with Mn1 and Mn2 occupying
the Wyckoff positions 4$d$ and 6$g$, respectively Forsyth and Brown (1990). In
the $ab$-plane, Mn1 adopts a honeycomb lattice while Mn2 adopts an hexagonal
arrangement, Fig. 1a. Along the $c$-axis, Mn1 forms chains and Mn2 columns of
face-sharing octahedra, Fig. 1b. The Curie temperature ($T_{\mathrm{C}}$) is
around $300\text{\,}\mathrm{K}$ and the magnetic moments of the Mn1 and Mn2
atoms are 1.96(3) $\mu_{\mathrm{B}}$ and 3.23(2) $\mu_{\mathrm{B}}$,
respectively, aligned along the $c$-axis Maraytta _et al._ (2020); Forsyth
and Brown (1990).
### II.2 Magnetic properties from first principles
Density functional theory (DFT) calculations were performed prior to the
inelastic neutron scattering measurements in order to provide an initial
picture of the expected magnon excitations and to identify interesting regions
in ($\mathbf{Q}$,$E$) space. The theoretical magnetic moments from juKKR (2.11
$\mu_{\mathrm{B}}$ and 3.14 $\mu_{\mathrm{B}}$ for Mn1 and Mn2, respectively)
and the computed Heisenberg exchange interactions are comparable to the ones
previously reported Slipukhina _et al._ (2009), as seen in Table 1. Spin-
orbit coupling leads to significant DMI (c.f. Table 1), much weaker symmetric
anisotropic exchange (not included in Eq. (1)), and a small uniaxial magnetic
anisotropy energy ($K\approx-0.1\text{\,}\mathrm{meV}$), so that the
relevant spin Hamiltonian (with $|\mathbf{S}_{i}|=1$) reads:
$\mathcal{H}=K\sum_{i}(S_{i}^{z})^{2}-\sum_{i,j}J_{ij}\,\mathbf{S}_{i}\cdot\mathbf{S}_{j}-\sum_{i,j}\mathbf{D}_{ij}\cdot(\mathbf{S}_{i}\times\mathbf{S}_{j})\;.$
(1)
Here $J_{ij}$ are the Heisenberg exchange interactions and $\mathbf{D}_{ij}$
are the DMI vectors which mostly align along the $c$-axis, with the strongest
shown in Fig. 1b. We discovered that some of the magnetic interactions, namely
the AFM ones, are quite sensitive to small changes in the unit cell parameter
and the atomic positions. The impact of this can be seen in the theoretical
magnon dispersions shown in Fig. 1c, where we compare the results obtained
with the experimental crystal structure parameters and with the theoretically
optimized ones. In both cases there are two magnon bands in the energy range
of experimental interest, with an energy gap at the K-point where otherwise a
Dirac-like crossing of the bands would be expected by symmetry. This is
straightforwardly verified to arise from the DMI, as omitting it from the
magnon calculation leads to the closing of the gap.
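The gap-opening mechanism can be illustrated with a deliberately stripped-down model: a two-band linear spin-wave calculation for a 2D honeycomb ferromagnet with nearest-neighbour exchange $J$ and a Haldane-like DMI $D$ on next-nearest-neighbour bonds, pointing along the magnetization. This is an analogue of the effect, not the ten-atom Mn5Ge3 Hamiltonian of Eq. (1); the parameter values are arbitrary and linear-spin-wave conventions vary between references.

```python
import numpy as np

J, D, S = 1.0, 0.05, 1.0   # illustrative exchange, DMI, and spin length

# Honeycomb geometry (bond length 1): nearest-neighbour vectors delta and
# next-nearest-neighbour vectors a.
delta = np.array([[0.0, 1.0], [np.sqrt(3)/2, -0.5], [-np.sqrt(3)/2, -0.5]])
a = np.array([[np.sqrt(3), 0.0], [-np.sqrt(3)/2, 1.5], [-np.sqrt(3)/2, -1.5]])

def magnon_bands(kvec, Dz):
    """Two magnon bands of the honeycomb FM with NNN DMI component Dz
    along the magnetization (linear spin-wave theory, one convention)."""
    f = np.exp(1j * delta @ kvec).sum()   # NN structure factor
    m = np.sin(a @ kvec).sum()            # NNN (Haldane-like) factor
    H = S * np.array([[3*J + 2*Dz*m, -J*f],
                      [-J*np.conj(f), 3*J - 2*Dz*m]])
    return np.linalg.eigvalsh(H)

K = np.array([4*np.pi/(3*np.sqrt(3)), 0.0])  # Dirac point of the honeycomb
gap_with = np.diff(magnon_bands(K, D))[0]
gap_without = np.diff(magnon_bands(K, 0.0))[0]
print(gap_with, gap_without)   # gap opens with D, closes without
```

With $D\neq 0$ the Dirac crossing at K is gapped by $6\sqrt{3}\,DS$ in this model; setting $D=0$, which mimics removing the DMI component along the magnetization, closes it, mirroring the gap closing discussed for Fig. 1c.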
Figure 1: Crystal structure and theoretical magnon bands of Mn5Ge3. a Top view of the crystal structure, showing bonds with length up to $4.3\text{\,}\mathrm{\SIUnitSymbolAngstrom}$. The unit cell is indicated by the thin black lines. b Perspective view of the crystal structure, omitting the Ge atoms. The first few magnetic exchange interactions are marked with the respective non-vanishing Dzyaloshinskii-Moriya interaction (DMI) vectors located in the bond centers. The values are collected in Table 1. c Computed magnon bands using the magnetic exchange interactions extracted from DFT calculations performed for the DFT optimized structure (blue) and for the experimental structure reported in Ref. Forsyth and Brown (1990) (red). The solid and dashed lines show the magnon bands obtained with and without the DMI, respectively. With DMI, a gap opens at the K-point. The magnetization is set along the $c$-axis, which is the easy-axis of the material. The inset indicates the location of the high-symmetry points in the hexagonal Brillouin zone.
| | Ref. Slipukhina _et al._ (2009) | Experimental structure Forsyth and Brown (1990) | DFT structure
---|---|---|---|---
| Type | Distance (Å) | Value (meV) | Distance (Å) | Value (meV) | Distance (Å) | Value (meV)
$J_{1}$ | Mn1-Mn1 | 2.526 | 29.1 | 2.527 | 30.87 | 2.485 | 31.59
$J_{2}$ | Mn1-Mn2 | 3.068 | 8.0 | 3.065 | 8.65 | 3.021 | 7.82
$J_{3}$ | Mn2-Mn2 | 2.974 | -2.0 | 2.983 | -1.27 | 3.013 | -0.21
$J_{4}$ | Mn2-Mn2 | 3.055 | 6.9 | 3.058 | 6.84 | 3.033 | 6.10
$J_{5}$ | Mn1-Mn1 | 4.148 | -1.4 | 4.148 | -3.86 | 4.112 | -2.52
$J_{6}$ | Mn2-Mn2 | 4.263 | 9.4 | 4.271 | 9.97 | 4.276 | 9.86
$|\mathbf{D}_{2}|$ | Mn1-Mn2 | — | — | 3.065 | 0.57 | 3.021 | 0.59
$|\mathbf{D}_{3}|$ | Mn2-Mn2 | — | — | 2.983 | 0.50 | 3.013 | 0.45
Table 1: Magnetic exchange interactions in Mn5Ge3 from DFT. We compare our
results using the experimental crystal structure Forsyth and Brown (1990) with
those of Ref. Slipukhina _et al._ (2009), and to our results using the
theoretically optimized crystal structure. Positive (negative) values
characterize FM (AFM) coupling. The corresponding pairs are indicated in Fig.
1b. The calculated magnetic interactions are long-ranged and only the first
few values are given in the table. The listed values are significant to the
displayed decimal precision.
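The sensitivity of the computed couplings to the structural parameters can be read off by comparing the two DFT columns of Table 1 directly. A minimal tabulation of the shifts (exchange values in meV copied from the table):

```python
# Exchange constants (meV) copied from Table 1: DFT results for the
# experimental crystal structure vs the DFT-optimized structure.
J_exp = {"J1": 30.87, "J2": 8.65, "J3": -1.27, "J4": 6.84, "J5": -3.86, "J6": 9.97}
J_dft = {"J1": 31.59, "J2": 7.82, "J3": -0.21, "J4": 6.10, "J5": -2.52, "J6": 9.86}
for key in J_exp:
    shift = J_dft[key] - J_exp[key]
    print(f"{key}: {J_exp[key]:7.2f} -> {J_dft[key]:7.2f} meV  (shift {shift:+.2f})")
```

Even sub-percent changes of the lattice parameters move some couplings by around 1 meV, which is relevant for the quantitative comparison with experiment made later in the text.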
### II.3 Dzyaloshinskii-Moriya interaction in centrosymmetric systems
The DMI is the key magnetic interaction for the subsequent interpretation of
our experimental findings, so before we continue we wish to clarify how it can
be present and have an effect in a centrosymmetric material. In his seminal
paper Moriya (1960), Moriya established the symmetry rules that the
interaction named after Dzyaloshinskii and himself must obey. The most famous
of these rules is that if two spins are connected by an inversion center then
the respective DMI must identically vanish. This directly explains why we
find finite DMI in our calculations for Mn5Ge3: it is finite for those spin
pairs that do not contain an inversion center in the midpoint of the
corresponding bond, such as those illustrated in Fig. 1b.
Centrosymmetry does ensure that the net DMI of the unit cell is zero, which is
also verified in our simulations. This means that the ferromagnetic domain
walls are not chiral and that magnetic skyrmions cannot form, in agreement
with Neumann’s principle. In contrast, magnons can still be influenced by the
local DMI. In a semiclassical picture, the spins at different sites precess
with different phases and/or amplitudes, so that certain pairs of spins are
noncollinear and can be affected by the torque arising from the DMI. This is a
strong effect at the K-point, where two magnon modes with opposite chirality
cross and the degeneracy is lifted in a non-perturbative way by the DMI. We
now report the experimental observation of this effect and its implications.
### II.4 Magnons from inelastic neutron scattering
Inspired by these theoretical predictions, the experimental magnon spectrum of
Mn5Ge3 was investigated by INS. Several constant-$\mathbf{Q}$ and constant-$E$
scans have been performed at $T=10$ K along the reciprocal space directions
$(h,0,0)$, $(h,0,2)$, $(h,h,0)$, and $(0,0,l)$, with representative examples
shown in Fig. 2a. The measured scattering intensity (circles) was fitted with
Gaussian line shapes on top of a sloping background (lines).
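The fitting procedure just described can be sketched on synthetic data. The peak position, width, and background parameters below are invented for illustration and do not correspond to any measured scan; the weighting by $\sqrt{N}$ reflects Poisson counting statistics, as used for the error bars in Fig. 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(E, A, E0, sigma, a, b):
    """Gaussian line shape on a sloping (linear) background."""
    return A * np.exp(-0.5 * ((E - E0) / sigma) ** 2) + a * E + b

# Synthetic constant-Q scan: one magnon peak at 12 meV (invented numbers).
rng = np.random.default_rng(0)
E = np.linspace(5.0, 20.0, 60)
truth = (200.0, 12.0, 0.8, -1.5, 60.0)
counts = model(E, *truth) + rng.normal(0.0, np.sqrt(model(E, *truth)))

# Weight each point by its counting error, sqrt(N), as for neutron counts.
popt, pcov = curve_fit(model, E, counts, p0=(150.0, 11.0, 1.0, 0.0, 50.0),
                       sigma=np.sqrt(np.maximum(counts, 1.0)))
perr = np.sqrt(np.diag(pcov))  # one-standard-deviation parameter uncertainties
print(f"peak energy = {popt[1]:.2f} +/- {perr[1]:.2f} meV")
```

The diagonal of the covariance matrix supplies the peak-position uncertainties of the kind plotted as error bars in Fig. 2b.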
Figure 2: Magnon bands of Mn5Ge3 determined by inelastic neutron scattering.
a Representative measurements obtained at IN22 with constant-$\mathbf{Q}$
scans (symbols) at $T=10$ K with $|\mathbf{k}_{f}|=2.662$ Å$^{-1}$ and fits used to
determine the excitation energies (lines). The purple arrows and rectangles
indicate the peak positions in the corresponding ($\mathbf{q}$,$E$) region of
the dispersion. The dashed lines show the individual contributions of the
various Gaussian peaks and of the linear background. The error bars indicate
one standard deviation (square root of the neutron counts). b Experimentally
determined magnon bands (symbols) and fit to a simplified model described in
the text (lines). The solid and the dashed lines show the magnon bands
obtained with and without the Dzyaloshinskii-Moriya interaction (DMI),
respectively. The error bars indicate the uncertainty in the peak location
from least-squares fitting.
The magnetic nature of the excitations has been confirmed through their
temperature dependence (see Supplementary Figs. 4-8 com and Fig. 3a below),
and the obtained $\mathbf{q}$ and $E$ positions are given as red symbols in Fig.
2b. For the in-plane directions ($\Gamma-\text{M}-\text{K}-\Gamma$) one can
distinguish the presence of three modes: two acoustic-like modes dispersing
upwards in energy away from $\Gamma$, and a third higher-energy mode that
disperses steeply along $\Gamma-\text{M}$ and only weakly along
$\text{K}-\text{M}$. In contrast, in the same investigated $E$ range a
single stiff spin-wave mode is observed for the out-of-plane direction
($\Gamma-\text{A}$).
We now turn to the theoretical interpretation of these measurements. Comparing
the experimental results of Fig. 2b with Fig. 1c, we find that the spin model
derived from the DFT calculations qualitatively reproduces several features.
The absence of a clearly visible gap in the spin-wave spectrum near zero
energy transfer ($\Gamma$-point) agrees with the expected weakness of the
uniaxial magnetic anisotropy Maraytta _et al._ (2020), as also computed from
DFT. The theoretical dispersion along $\Gamma-\text{A}$ is slightly stiffer
than the experimental one, and a simulation of the INS intensity reveals that
the second, higher-energy mode should be invisible (see
Supplementary Fig. 14 com ). We find rather poor quantitative agreement
between theory and experiment in the $\Gamma-\text{K}-\text{M}$ plane, which
is probably related to the already-identified strong dependence of the
magnetic interactions computed from DFT on small changes of the structural
parameters. We verified that this dependence is systematic by considering
various deformations of the unit cell (see Supplementary Figs. 11 and 12 com
). However, the most crucial feature is observed both in theory and in
experiment, that is the existence of a gap at the K-point where two magnon
bands should otherwise cross.
In order to provide a more quantitative description of the experimentally
obtained magnon bands for the in-plane directions, we constructed a simplified
effective spin model (additional details are given in the Supplementary
Information, Section II.E com ). We replace each Mn2 triangle at a constant
height in the unit cell with a single effective spin $S=9/2$ ($S=1$ for Mn1),
so that the column of Mn2 octahedra is replaced by a spin chain. Seen from the
$c$-axis, the unit cell for this model thus contains two Mn1 chains and one
effective Mn2 chain, and we determine the model parameters using the measured
magnon energies at the high-symmetry points. The lines in Fig. 2b show that
the results of this model approach indeed provide a realistic band dispersion,
and confirm once more that the gap at the K-point is a consequence of a finite
DMI. The model results also highlight a peculiarity of the measured magnon
energies in the vicinity of $\Gamma$, to which we shall briefly return in the
Discussion.
### II.5 Closing the topological magnon gap
Although there are strong theoretical arguments in favor of the topological
character of the magnon gap at the K-point, a convincing experimental
demonstration is in order. The gap is expected to arise from the DMI, but such
a microscopic interaction cannot easily be manipulated experimentally. However,
it is known that the impact of the DMI on the FM magnon spectrum depends on
the relative alignment between the vectors that characterize this interaction
and the FM magnetization Udvardi and Szunyogh (2009); Zakeri _et al._ (2010);
dos Santos _et al._ (2020). Adapting these arguments to the hexagonal
symmetry of Mn5Ge3 leads to the prediction that the magnitude of the gap
should depend on the relative alignment of the FM magnetization and the
crystalline $c$-axis, and in particular should vanish if the two are
perpendicular. We have verified that the magnon gap closes both in the DFT-
based spin model and in the one fitted to the experimental measurements when
the magnetization is perpendicular to the $c$-axis (as it does when
disregarding the DMI). The magnon dispersions computed with DMI and setting
the magnetization along the $a^{*}$-axis are identical to the dashed lines
shown in Fig. 1c and Fig. 2b, which were obtained by excluding the DMI from
the calculations.
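The direction dependence of the gap can be illustrated with a deliberately simple toy model (this is NOT the Mn5Ge3 spin model): a honeycomb ferromagnet with nearest-neighbour exchange $J$ and second-neighbour DMI of strength $D$, both with invented values. In linear spin-wave theory only the DMI component parallel to the magnetization enters the magnon Hamiltonian, so rotating the magnetization from the out-of-plane axis into the plane sends that component to zero and closes the K-point gap, mimicking the behaviour described above.

```python
import numpy as np

S, J = 1.0, 1.0  # illustrative spin length and exchange (arbitrary units)

# Nearest- and second-neighbour bond vectors of the honeycomb lattice
# (lattice constant = 1).
delta = [np.array([0.0, 1.0 / np.sqrt(3)]),
         np.array([0.5, -0.5 / np.sqrt(3)]),
         np.array([-0.5, -0.5 / np.sqrt(3)])]
a_vecs = [np.array([1.0, 0.0]),
          np.array([-0.5, np.sqrt(3) / 2]),
          np.array([-0.5, -np.sqrt(3) / 2])]

def magnon_bands(k, D_par):
    """Eigenvalues of the 2x2 Bloch spin-wave Hamiltonian at wave-vector k.
    D_par is the DMI component along the magnetization direction."""
    f = sum(np.exp(1j * (k @ d)) for d in delta)
    g = sum(np.sin(k @ v) for v in a_vecs)
    H = np.array([[3 * J * S + 2 * D_par * S * g, -J * S * np.conj(f)],
                  [-J * S * f, 3 * J * S - 2 * D_par * S * g]])
    return np.linalg.eigvalsh(H)

K = np.array([4 * np.pi / 3, 0.0])
gap_c = np.diff(magnon_bands(K, 0.1))[0]   # magnetization out of plane: gap opens
gap_ab = np.diff(magnon_bands(K, 0.0))[0]  # magnetization in plane: D_par = 0, gap closes
print(f"gap (m out of plane): {gap_c:.3f},  gap (m in plane): {gap_ab:.3e}")
```

The exchange term $f(\mathbf{k})$ vanishes at K by symmetry, so the splitting there is set entirely by the DMI term, which is why the in-plane magnetization reproduces the DMI-free dashed bands.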
Figure 3: Temperature and magnetic field dependence of the magnon peaks at
the K-point determined by inelastic neutron scattering. a Temperature
dependence. At $T=10$ K three peaks are seen between 8 meV and 22 meV. At
$T=398$ K (above $T_{\mathrm{C}}$) only a broad quasielastic signal is
observed. Neutron intensity for the data at 10 K and 398 K is given on the
right and left axis, respectively. b Dependence on magnetic field applied
along the $c$-axis at $T=10$ K. The field has almost no effect on the location
of the two peaks. c Dependence on magnetic field applied along the
$a^{*}$-axis at $T=10$ K. In zero field the magnetization is along the
$c$-axis and two peaks are visible. The application of a transverse magnetic
field saturates the magnetization along the $a^{*}$-axis and merges the two
peaks into one, demonstrating the closure of the magnon energy gap. The data
shown in a were obtained at IN22 ($|\mathbf{k}_{f}|=2.662$ Å$^{-1}$) and the
data in b-c at IN12 ($|\mathbf{k}_{f}|=1.971$ Å$^{-1}$). The error bars
indicate one standard deviation (square root of the neutron counts).
This hypothesis can be experimentally tested by applying an external magnetic
field. First we rule out the possibility of a phonon contribution to the
inelastic peaks at the K-point, by heating the sample above its
$T_{\mathrm{C}}$, as seen in Fig. 3a, and verifying that the peaks disappear,
confirming their magnetic origin. The measurements reported so far in this
work were performed in zero field, for which the magnetization is parallel to
the $c$-axis due to the uniaxial magnetic anisotropy. Applying a magnetic
field along the $c$-axis should lead to a very small Zeeman shift of the
magnon energies, and this is indeed what we observed, as seen in Fig. 3b. If a
magnetic field of similar magnitude is applied along the $a^{*}$-axis it
overcomes the magnetic anisotropy energy and saturates the magnetization
Maraytta _et al._ (2020). The results obtained in this way are shown in Fig.
3c. Now the effect of the field cannot be explained by a simple Zeeman shift,
and instead we find the anticipated closing of the magnon gap. The two peaks
observed in zero field coalesce into a single one with an integrated intensity
approximately matching the sum of intensities for the two peaks in zero field
(see also Supplementary Fig. 10 com ), although shifted to slightly higher
energy than anticipated. The distinct response of the magnon excitation to a
magnetic field applied to orthogonal crystal directions is consistent with the
DMI mechanism, and so the gapped Dirac magnons at the K-point should
consequently have a topological nature.
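The smallness of the longitudinal-field effect in Fig. 3b is what one expects from a simple Zeeman estimate: for a field along the easy axis the ferromagnetic magnon energies shift rigidly by $g\mu_{\mathrm{B}}B$. A quick order-of-magnitude check, assuming $g=2$ (the field values below are illustrative, not the ones used in the experiment):

```python
# Zeeman shift of ferromagnetic magnon energies for a field along the easy axis.
mu_B = 5.788381806e-2  # Bohr magneton in meV/T

def zeeman_shift_meV(B_tesla, g=2.0):
    """Rigid shift g * mu_B * B of the magnon branch, in meV."""
    return g * mu_B * B_tesla

for B in (1.0, 5.0, 10.0):
    print(f"B = {B:4.1f} T -> shift = {zeeman_shift_meV(B):.2f} meV")
```

Even a 10 T field shifts the magnons by only about 1 meV, small compared with the 10-20 meV peak energies, consistent with the nearly field-independent peaks in Fig. 3b.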
### II.6 Ruling out alternative explanations for a gap at the K-point
Next we rule out potential alternative mechanisms to the DMI that could lead
to a magnon gap opening at the K-point, such as dipolar, Kitaev and magnon-
phonon interactions:
(i) Dipolar interactions are long-ranged but much weaker than the magnetic
exchange interactions, so their effect is usually seen for rather small wave
vectors in the vicinity of the $\Gamma$-point. Even if they did lift the
magnon degeneracy at the K-point, their intrinsic weakness could not account
for the observed magnitude of the gap.
(ii) Kitaev interactions were proposed for instance in Ref. Lee _et al._
(2020) to explain measurements on CrI3 but are ruled out for Mn5Ge3 both by
our simulations and by general considerations. The interactions extracted from
our DFT calculations include the Heisenberg exchange, the DMI, and the
symmetric anisotropic exchange (SAE), which includes the Kitaev interaction.
The SAE was found to be rather weak and unable to open the observed magnon gap
at the K-point. The weakness of the SAE (and so of potential Kitaev
interactions) could be anticipated from the weak magnetic anisotropy measured
for this system. This reflects the lack of heavy elements in the material that
could supply a strong spin-orbit interaction, which is a key ingredient for
obtaining a sizeable Kitaev interaction. We are also not aware of any Kitaev
material candidates containing only elements from the first four rows of the
periodic table (i.e. $Z\leq 36$), likely due to the preceding reason. To make this
argument more quantitative, we employ the theory of magnetic exchange
interactions for systems with weak SOC presented by Moriya Moriya (1960), Eqs.
2.3 and 2.4. The DMI is first-order in the weak SOC, while the Kitaev
interaction is part of the SAE which is second-order in SOC and so is much
weaker than the DMI. The Kitaev interaction, if present, would contribute to
the magnetic anisotropy energy, which is about 1 meV/f.u. for Mn5Ge3 (the DMI
does not contribute to the magnetic anisotropy energy of the ferromagnetic
state). To give an estimate of the potential magnitude of the Kitaev
interaction using only experimental input, we distribute the magnetic
anisotropy energy on one of the set of bonds for which we identified the DMI,
bond #2 indicated in Fig. 1b. This set of bonds occurs four times in the unit
cell, as it connects each Mn1 site to its six Mn2 neighbours, and so could
have 1 meV/24 = 0.04 meV maximum Kitaev strength. The SAE obtained directly
from the DFT calculations is about 0.02 meV in magnitude, which is in line
with this estimate, and is one order of magnitude smaller than the values
found for the DMI (0.57 meV for the set of bonds #2), as expected from the
theory of the magnetic exchange interactions for systems with weak SOC.
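The back-of-the-envelope bound quoted above follows from distributing the measured magnetic anisotropy energy over the bonds of set #2:

```python
# Upper bound on the Kitaev strength from the magnetic anisotropy energy,
# distributed over the bonds of set #2 (numbers from the text).
mae_per_fu = 1.0        # meV, magnetic anisotropy energy per formula unit
bonds_per_cell = 4 * 6  # four Mn1 sites, each with six Mn2 neighbours
kitaev_max = mae_per_fu / bonds_per_cell
print(f"maximum Kitaev strength ~ {kitaev_max:.3f} meV")
```

This yields roughly 0.04 meV per bond, an order of magnitude below the 0.57 meV DMI on the same bonds, in line with the second-order-in-SOC scaling of the SAE.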
(iii) Magnon-phonon interactions can result in gaps at the crossing points
between the magnon and phonon branches. However, our INS data ruled out this
possibility. The measured excitations at different $\mathbf{Q}$ vectors around
and at the K-point are solely of magnetic origin. This has been verified
through their temperature dependence. All the peaks observed in the energy
range from 10 to 20 meV at $T=10$ K are replaced by a broad quasi-elastic
signal (centered at 0 meV) above the ordering temperature as shown in Fig. 3a.
Hence no phonon modes were detected in the vicinity of the K-point with an
energy compatible with that of the magnons, which is a requirement for the gap
opening mechanism through magnon-phonon interactions.
Therefore, we can assert that the only reasonable mechanism for the gap
opening at the K-point is the DMI.
### II.7 Simulation of magnon surface states
Figure 4: Magnon surface states in a $(01\bar{1}0)$-oriented slab of Mn5Ge3.
a Relation between the primitive cell and the rectangular cell used to define
the slab, viewed along the $c$-axis, using the simplified model that fits the
experimental data in Fig. 2b. The Mn1 chains are indicated by the two
different shades of red, while the effective spin taken to represent the Mn2
sites is shown in blue. b Path in reciprocal space used for the magnon
calculation. c Magnon bands of a 20-unit-cell-wide slab computed with periodic
boundary conditions (PBC). d Magnon bands of a 20-unit-cell-wide slab computed
by changing periodic to open boundary conditions (OBC) along the
$(01\bar{1}0)$ orientation. Results obtained excluding/including the
Dzyaloshinskii-Moriya interaction (DMI) are shown by dashed/solid lines. The
green arrow in (d) indicates the DMI-enabled magnon surface state.
Here we explore the expectation that if the bulk magnon band structure has
some topologically non-trivial character it should be accompanied by magnon
surface states. To do so, we compute the magnon band structure of slabs which
are finite in one direction and periodic (i.e., infinite) in the other two
directions. Comparing the simulations performed for the same slab with
periodic and open boundary conditions along the chosen surface normal enables
us to identify the energy range corresponding to the surface projection of the
bulk magnon bands. Surface magnons are then expected to appear in the regions
of $(E,\mathbf{q})$ where bulk magnon bands are absent.
To illustrate this point, we performed simulations using the simplified spin
model depicted in Fig. 4a with parameters fitted to the experimental
measurements in Fig. 2b. We created a rectangular supercell and extended it by
20 unit cells along the $(01\bar{1}0)$ direction of the original hexagonal
lattice. The chosen path in reciprocal space is perpendicular to the surface
normal and shown in Fig. 4b. Fig. 4c shows that indeed there is a pocket in
the K–M–K path and with energies between 10 and 12 meV from which bulk magnons
are absent, with or without DMI. Fig. 4d shows the modified magnon band
structure upon truncating the crystal along the $(01\bar{1}0)$ direction,
i.e., making a horizontal cut through the lattice shown in Fig. 4a. We indeed
find that magnon surface states do appear in the identified region where bulk
magnon bands were absent. Without the DMI, these surface magnons are
disconnected and gapped from each other. The DMI restructures the band
connectivity, leading to a distorted Weyl-like crossing. The crossing is not
located at the M-point, as the slab loses the $ac$-mirror plane, retaining
instead a twofold rotation around the $c$-axis.
There are other surface magnons at around 3 meV and 18 meV that are only
weakly affected by the DMI, and result from the reduced coordination number
introduced by creating the surface. Lastly, the identified surface magnons
were found to extend only a couple of unit cells towards the interior of the
slab, confirming their localization at the surface.
The experimental detection of the predicted surface magnons is quite
challenging, as the scattering volume is too small for a straightforward
detection using INS. Other techniques such as Brillouin light scattering (for
magnons near the $\Gamma$-point) or electron energy loss spectroscopy could be
considered for this purpose Zakeri (2016).
## III Discussion
We now briefly discuss some outstanding points. Firstly, we return to the
quantitative disagreement between theory and experiment concerning the magnon
bands. The main issue seems to be an overestimation of the magnitude of the
magnetic interactions in the DFT calculations, which has also been found in
other studies. A recent example from the literature is Co3Sn2S2 Zhang _et
al._ (2021), where two different DFT approaches are compared with experiment,
with disagreements also in the $\Gamma-\text{M}-\text{K}-\Gamma$ plane but not
in the $\Gamma-\text{A}$ direction. Another issue is that the simplified spin
model does not adequately capture the relative intensities of the two peaks
around the K-point, despite providing a reasonable description of the
experimental magnon energies. This is likely due to the model assumptions,
namely treating the Mn2 sites as a single effective spin and neglecting the
atomic magnetic form factor, as well as employing a simple Lorentzian
broadening for the peaks. This shows the need for further developments on the
theoretical side. We also noticed that the ratio of the intensities of the two
peaks varies slightly in different experimental setups, see Figs. 3b-c. This
might be due to changes in the domain state of the sample arising from the
measurement history.
Our INS measurements for Mn5Ge3 revealed a peculiar dispersion for the second
mode in the $\Gamma-\text{K}$ direction. Such a steep linear-like dispersion
close to the $\Gamma$-point resembles an AFM magnon or a phonon mode. We
restate that the material is FM and the measured excitations are of magnetic
nature as verified by measurements above $T_{\text{C}}$ (Fig. 3a and
Supplementary Figs. 4-8 com ), so that both simple explanations are ruled out.
However, it is possible to have formation of hybrid collective modes. INS can
identify the magnetic and phononic components of such excitations, which are
referred to as magnon-polarons (magnetoelastic modes) and have been reported in
several magnetic materials Petit _et al._ (2007); Sukhanov _et al._ (2019).
Avoided crossings are the common signature of magnetoelastic interactions, but
they also underpin the magnetovibrational scattering term in the INS
scattering cross section Squires (2012); Raymond _et al._ (2019). Additional
discussion is given in the Supplementary Information, Section I.C com . We
propose that Mn5Ge3 is also an interesting 3D FM candidate material for
detailed investigation of magnetoelastic effects Zhang _et al._ (2019); Go
_et al._ (2019).
The key interaction responsible for opening the magnon gap at the K-point and
thus endowing the Dirac magnons with a topological character is the DMI. There
are several possible routes by which this interaction could be engineered, so
that the magnon properties can be tuned for specific purposes for magnonic
devices operating in a broad temperature range. Firstly, chemical substitution
of Ge by Si has been explored in the literature to connect to the
multifunctional AFM Mn5Si3 Sürgers _et al._ (2014); Biniskos _et al._
(2018); dos Santos _et al._ (2021); Biniskos _et al._ (2022). Substituting
Ge by Si reduces $T_{\text{C}}$ Zhao _et al._ (2006), but the impact on the
magnons and on the DMI is unknown. On the other hand, carbon implantation is
demonstrated to enhance $T_{\text{C}}$ Kraft _et al._ (2020); Michez _et
al._ (2022) for which the imprint on the Heisenberg exchange interactions has
been theoretically established Slipukhina _et al._ (2009), but once again the
effect on the magnons and the DMI remains unexplored. Lastly, Mn5Ge3 can also
be grown in thin film form Kraft _et al._ (2020); Michez _et al._ (2022).
This could modify the DMI by epitaxial strain, which would also be interesting
in connection to potential magnetoelastic effects, by interfaces to other
materials, or by quantum confinement effects if the thickness is just a few
nanometers.
In conclusion, we presented a combined theoretical and experimental study of
the magnons in the centrosymmetric 3D FM Mn5Ge3. Despite the inversion
symmetry, significant DMI has been theoretically identified on Mn-Mn bonds
which do not contain an inversion center. This DMI is responsible for opening
a gap in the magnon spectrum at the K-point, where otherwise symmetry would
enforce a Dirac-like crossing of the magnon bands. INS measurements of the
magnon spectrum show qualitative agreement with the main points predicted by
theory, and confirm the expected gap at the K-point. We experimentally observe
the closing of the gap by rotating the magnetization from the $c$-axis to the
$a^{*}$-axis with a magnetic field. This validates both the gap-generation
mechanism and the topological nature of the magnons at the K-point, thus
establishing Mn5Ge3 as a realization of a gapped Dirac magnon material in
three dimensions. The ability to control the gap at the K-point with an
external magnetic field will also impact topological magnon surface states,
and deserves further study. As the macroscopic magnetic properties of Mn5Ge3
can be tuned by chemical substitution of Ge with Si or by carbon implantation,
and it can also be grown as thin films in spintronics heterostructures, we
foresee that the features of the newly-discovered topological magnons can be
engineered and subsequently integrated in novel device concepts for magnonic
applications. Looking beyond Mn5Ge3, the physical mechanism leading to the
formation of topological magnons at the K-point should be present in many
other bulk centrosymmetric hexagonal materials, which opens an exciting avenue
for future investigations.
## IV Methods
### IV.1 Experimental methods
Inelastic neutron scattering (INS) experiments have been carried out on a cold
(IN12) and a thermal (IN22) triple-axis spectrometer at the Institut Laue-
Langevin, in Grenoble, France. We use the hexagonal coordinate system and the
scattering vector $\mathbf{Q}$ is given in reciprocal lattice units (r.l.u.).
The wave-vector $\mathbf{q}$ is related to $\mathbf{Q}$ through
$\mathbf{Q}=\mathbf{q}+\mathbf{G}$, where $\mathbf{G}$ is a Brillouin zone
center. Inelastic scans were performed with constant $|\mathbf{k}_{f}|$, where
$\mathbf{k}_{f}$ is the wave-vector of the scattered neutron beam. Data were
collected holding either the energy (constant-$E$) or the scattering vector
(constant-$\mathbf{Q}$) fixed. Further details on the experimental procedures
and additional measurements can be found in the Supplementary Information,
Section I com .
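The $\mathbf{Q}=\mathbf{q}+\mathbf{G}$ bookkeeping described above can be sketched as follows. We adopt the common convention that each component of the reduced wave-vector $\mathbf{q}$ lies in $[-1/2,1/2]$ in r.l.u.; note this is an assumption of the sketch, and that the zone centre nearest in r.l.u. need not be the nearest one in the Cartesian (hexagonal) metric.

```python
import numpy as np

def split_Q(Q):
    """Split a scattering vector Q (r.l.u.) into a Brillouin-zone centre G
    and a reduced wave-vector q with components in [-1/2, 1/2]."""
    Q = np.asarray(Q, dtype=float)
    G = np.round(Q)  # nearest reciprocal-lattice point, component-wise
    return G, Q - G

G, q = split_Q([1.33, 0.0, 2.0])
print("G =", G, " q =", q)
```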
### IV.2 Theoretical methods
The theoretical results were obtained from DFT calculations and a spin
Hamiltonian extracted from them. The unit cell parameters and the atomic positions were
optimized with the DFT code Quantum Espresso Giannozzi _et al._ (2009). The
magnetic parameters were computed with the DFT code juKKR Bauer (2014); juk ,
and then used to solve a spin Hamiltonian in the linear spin-wave
approximation dos Santos _et al._ (2018). Further details on all these
aspects can be found in the Supplementary Information, Section II com .
## V Data availability
The authors declare that the data supporting the findings of this study are
available within the paper, its supplementary information file and in the
following repositories dat (2021a, b, c, d); dos Santos Dias _et al._ (2023).
## VI Code availability
The DFT simulation packages Quantum Espresso and juKKR are publicly available
(see Methods). The code for the solution of the linear spin wave problem is
available from the corresponding authors upon request.
## References
* Berry (1984) M. V. Berry, “Quantal phase factors accompanying adiabatic changes,” Proc. R. Soc. Lond. A 392, 45–57 (1984).
* Thouless _et al._ (1982) D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, “Quantized Hall Conductance in a Two-Dimensional Periodic Potential,” Phys. Rev. Lett. 49, 405–408 (1982).
* Hasan and Kane (2010) M. Z. Hasan and C. L. Kane, “Colloquium: Topological insulators,” Rev. Mod. Phys. 82, 3045–3067 (2010).
* Bradlyn _et al._ (2017) Barry Bradlyn, L. Elcoro, Jennifer Cano, M. G. Vergniory, Zhijun Wang, C. Felser, M. I. Aroyo, and B. Andrei Bernevig, “Topological quantum chemistry,” Nature 547, 298–305 (2017).
* Po _et al._ (2017) Hoi Chun Po, Ashvin Vishwanath, and Haruki Watanabe, “Symmetry-based indicators of band topology in the 230 space groups,” Nat. Commun. 8, 50 (2017).
* Kruthoff _et al._ (2017) Jorrit Kruthoff, Jan de Boer, Jasper van Wezel, Charles L. Kane, and Robert-Jan Slager, “Topological Classification of Crystalline Insulators through Band Structure Combinatorics,” Phys. Rev. X 7, 041069 (2017).
* McClarty (2022) Paul A. McClarty, “Topological Magnons: A Review,” Annu Rev Conden Ma P 13, 171–190 (2022).
* Onose _et al._ (2010) Y. Onose, T. Ideue, H. Katsura, Y. Shiomi, N. Nagaosa, and Y. Tokura, “Observation of the Magnon Hall Effect,” Science 329, 297–299 (2010).
* Matsumoto and Murakami (2011) Ryo Matsumoto and Shuichi Murakami, “Theoretical Prediction of a Rotating Magnon Wave Packet in Ferromagnets,” Phys. Rev. Lett. 106, 197202 (2011).
* Mena _et al._ (2014) M. Mena, R. S. Perry, T. G. Perring, M. D. Le, S. Guerrero, M. Storni, D. T. Adroja, Ch. Rüegg, and D. F. McMorrow, “Spin-Wave Spectrum of the Quantum Ferromagnet on the Pyrochlore Lattice ${\mathrm{Lu}}_{2}{\mathrm{V}}_{2}{\mathrm{O}}_{7}$,” Phys. Rev. Lett. 113, 047202 (2014).
* Zhang _et al._ (2013) Lifa Zhang, Jie Ren, Jian-Sheng Wang, and Baowen Li, “Topological magnon insulator in insulating ferromagnet,” Phys. Rev. B 87, 144101 (2013).
* Mook _et al._ (2014) Alexander Mook, Jürgen Henk, and Ingrid Mertig, “Edge states in topological magnon insulators,” Phys. Rev. B 90, 024412 (2014).
* Chisnell _et al._ (2015) R. Chisnell, J. S. Helton, D. E. Freedman, D. K. Singh, R. I. Bewley, D. G. Nocera, and Y. S. Lee, “Topological Magnon Bands in a Kagome Lattice Ferromagnet,” Phys. Rev. Lett. 115, 147201 (2015).
* Fransson _et al._ (2016) J. Fransson, A. M. Black-Schaffer, and A. V. Balatsky, “Magnon Dirac materials,” Phys. Rev. B 94, 075401 (2016).
* Pershoguba _et al._ (2018) Sergey S. Pershoguba, Saikat Banerjee, J. C. Lashley, Jihwey Park, Hans Ågren, Gabriel Aeppli, and Alexander V. Balatsky, “Dirac Magnons in Honeycomb Ferromagnets,” Phys. Rev. X 8, 011010 (2018).
* Chen _et al._ (2018) Lebing Chen, Jae-Ho Chung, Bin Gao, Tong Chen, Matthew B. Stone, Alexander I. Kolesnikov, Qingzhen Huang, and Pengcheng Dai, “Topological Spin Excitations in Honeycomb Ferromagnet CrI3,” Phys. Rev. X 8, 041028 (2018).
* Chen _et al._ (2021) Lebing Chen, Jae-Ho Chung, Matthew B. Stone, Alexander I. Kolesnikov, Barry Winn, V. Ovidiu Garlea, Douglas L. Abernathy, Bin Gao, Mathias Augustin, Elton J. G. Santos, and Pengcheng Dai, “Magnetic Field Effect on Topological Spin Excitations in CrI3,” Phys. Rev. X 11, 031047 (2021).
* Zhu _et al._ (2021) Fengfeng Zhu, Lichuan Zhang, Xiao Wang, Flaviano José dos Santos, Junda Song, Thomas Mueller, Karin Schmalzl, Wolfgang F. Schmidt, Alexandre Ivanov, Jitae T. Park, Jianhui Xu, Jie Ma, Samir Lounis, Stefan Blügel, Yuriy Mokrousov, Yixi Su, and Thomas Brückel, “Topological magnon insulators in two-dimensional van der Waals ferromagnets CrSiTe3 and CrGeTe3: Toward intrinsic gap-tunability,” Sci. Adv. 7, eabi7532 (2021).
* Yao _et al._ (2018) Weiliang Yao, Chenyuan Li, Lichen Wang, Shangjie Xue, Yang Dan, Kazuki Iida, Kazuya Kamazawa, Kangkang Li, Chen Fang, and Yuan Li, “Topological spin excitations in a three-dimensional antiferromagnet,” Nat Phys 14, 1011–1015 (2018).
* Yuan _et al._ (2020) Bo Yuan, Ilia Khait, Guo-Jiun Shu, F. C. Chou, M. B. Stone, J. P. Clancy, Arun Paramekanti, and Young-June Kim, “Dirac Magnons in a Honeycomb Lattice Quantum $\mathit{XY}$ Magnet CoTiO3,” Phys. Rev. X 10, 011062 (2020).
* Elliot _et al._ (2021) M. Elliot, P. A. McClarty, D. Prabhakaran, R. D. Johnson, H. C. Walker, P. Manuel, and R. Coldea, “Order-by-disorder from bond-dependent exchange and intensity signature of nodal quasiparticles in a honeycomb cobaltate,” Nat Commun 12, 3936 (2021).
* Scheie _et al._ (2022) A. Scheie, Pontus Laurell, P. A. McClarty, G. E. Granroth, M. B. Stone, R. Moessner, and S. E. Nagler, “Dirac Magnons, Nodal Lines, and Nodal Plane in Elemental Gadolinium,” Phys. Rev. Lett. 128, 097201 (2022).
* Karaki _et al._ (2023) Mohammed J. Karaki, Xu Yang, Archibald J. Williams, Mohamed Nawwar, Vicky Doan-Nguyen, Joshua E. Goldberger, and Yuan-Ming Lu, “An efficient material search for room-temperature topological magnons,” Science Advances 9, eade7731 (2023).
* Forsyth and Brown (1990) J B Forsyth and P J Brown, “The spatial distribution of magnetisation density in Mn5Ge3,” J. Phys.: Condens. Matter 2, 2713–2720 (1990).
* Maraytta _et al._ (2020) N. Maraytta, J. Voigt, C. Salazar Mejía, K. Friese, Y. Skourski, J. Perßon, S. M. Salman, and Th. Brückel, “Anisotropy of the magnetocaloric effect: Example of Mn5Ge3,” J. Appl. Phys. 128, 103903 (2020).
* Zeng _et al._ (2006) Changgan Zeng, Yugui Yao, Qian Niu, and Hanno H. Weitering, “Linear Magnetization Dependence of the Intrinsic Anomalous Hall Effect,” Phys. Rev. Lett. 96, 037204 (2006).
* Kraft _et al._ (2020) R. Kraft, S. Srichandan, G. Fischer, and C. Sürgers, “Anomalous Nernst effect in ferromagnetic Mn5Ge3Cx thin films on insulating sapphire,” J Appl Phys 128, 033905 (2020).
* Gajdzik _et al._ (2000) M Gajdzik, C Sürgers, M.T Kelemen, and H.v Löhneysen, “Strongly enhanced Curie temperature in carbon-doped Mn5Ge3 films,” J. Magn. Magn. Mater. 221, 248–254 (2000).
* Slipukhina _et al._ (2009) I. Slipukhina, E. Arras, Ph. Mavropoulos, and P. Pochet, “Simulation of the enhanced Curie temperature in Mn5Ge3Cx compounds,” Appl Phys Lett 94, 192505 (2009).
* Moriya (1960) Toru Moriya, “Anisotropic Superexchange Interaction and Weak Ferromagnetism,” Phys. Rev. 120, 91–98 (1960).
* (31) See Supplemental Material for supporting experimental and theoretical information, which includes Refs. Garrity _et al._ (2014); Perdew _et al._ (1996); Vosko _et al._ (1980); Ebert and Mankovsky (2009); Schmalzl _et al._ (2016); Lisher and Forsyth (1971); Mangold _et al._ (2020); Fernandez-Baca _et al._ (1991); Steinsvoll _et al._ (1981); Brown _et al._ (1996); Papanikolaou _et al._ (2002); Wildberger _et al._ (1995); Biniskos _et al._ (2017); Dordevic _et al._ (2009); Ndiaye _et al._ (2013).
* Udvardi and Szunyogh (2009) L Udvardi and L Szunyogh, “Chiral Asymmetry of the Spin-Wave Spectra in Ultrathin Magnetic Films,” Phys. Rev. Lett. 102, 207204 (2009).
* Zakeri _et al._ (2010) Kh. Zakeri, Y. Zhang, J. Prokop, T.-H. Chuang, N. Sakr, W. X. Tang, and J. Kirschner, “Asymmetric Spin-Wave Dispersion on Fe(110): Direct Evidence of the Dzyaloshinskii-Moriya Interaction,” Phys. Rev. Lett. 104, 137203 (2010).
* dos Santos _et al._ (2020) Flaviano José dos Santos, Manuel dos Santos Dias, and Samir Lounis, “Nonreciprocity of spin waves in noncollinear magnets due to the Dzyaloshinskii-Moriya interaction,” Phys. Rev. B 102, 104401 (2020).
* Lee _et al._ (2020) Inhee Lee, Franz G. Utermohlen, Daniel Weber, Kyusung Hwang, Chi Zhang, Johan van Tol, Joshua E. Goldberger, Nandini Trivedi, and P. Chris Hammel, “Fundamental Spin Interactions Underlying the Magnetic Anisotropy in the Kitaev Ferromagnet ${\mathrm{CrI}}_{3}$,” Phys. Rev. Lett. 124, 017201 (2020).
* Zakeri (2016) Khalil Zakeri, “Probing of the interfacial Heisenberg and Dzyaloshinskii–Moriya exchange interaction by magnon spectroscopy,” J. Phys.: Condens. Matter 29, 013001 (2016).
* Zhang _et al._ (2021) Qiang Zhang, Satoshi Okamoto, German D. Samolyuk, Matthew B. Stone, Alexander I. Kolesnikov, Rui Xue, Jiaqiang Yan, Michael A. McGuire, David Mandrus, and D. Alan Tennant, “Unusual Exchange Couplings and Intermediate Temperature Weyl State in Co3Sn2S2,” Phys. Rev. Lett. 127, 117201 (2021).
* Petit _et al._ (2007) S. Petit, F. Moussa, M. Hennion, S. Pailhès, L. Pinsard-Gaudart, and A. Ivanov, “Spin Phonon Coupling in Hexagonal Multiferroic YMnO3,” Phys. Rev. Lett. 99, 266604 (2007).
* Sukhanov _et al._ (2019) A. S. Sukhanov, M. S. Pavlovskii, Ph. Bourges, H. C. Walker, K. Manna, C. Felser, and D. S. Inosov, “Magnon-polaron excitations in the noncollinear antiferromagnet Mn3Ge,” Phys. Rev. B 99, 214445 (2019).
* Squires (2012) G. L. Squires, _Introduction to the Theory of Thermal Neutron Scattering_ (Cambridge University Press, 2012).
* Raymond _et al._ (2019) S. Raymond, N. Biniskos, K. Schmalzl, J. Persson, and T. Brückel, “Total interference between nuclear and magnetovibrational one-phonon scattering cross sections,” J. Phys.: Conf. Ser. 1316, 012018 (2019).
* Zhang _et al._ (2019) Xiaoou Zhang, Yinhan Zhang, Satoshi Okamoto, and Di Xiao, “Thermal Hall Effect Induced by Magnon-Phonon Interactions,” Phys. Rev. Lett. 123, 167202 (2019).
* Go _et al._ (2019) Gyungchoon Go, Se Kwon Kim, and Kyung-Jin Lee, “Topological Magnon-Phonon Hybrid Excitations in Two-Dimensional Ferromagnets with Tunable Chern Numbers,” Phys. Rev. Lett. 123, 237207 (2019).
* Sürgers _et al._ (2014) Christoph Sürgers, Gerda Fischer, Patrick Winkel, and Hilbert v Löhneysen, “Large topological Hall effect in the non-collinear phase of an antiferromagnet,” Nat. Commun. 5, 3400 (2014).
* Biniskos _et al._ (2018) N. Biniskos, K. Schmalzl, S. Raymond, S. Petit, P. Steffens, J. Persson, and T. Brückel, “Spin Fluctuations Drive the Inverse Magnetocaloric Effect in Mn5Si3,” Phys. Rev. Lett. 120, 257205 (2018).
* dos Santos _et al._ (2021) F. J. dos Santos, N. Biniskos, S. Raymond, K. Schmalzl, M. dos Santos Dias, P. Steffens, J. Persson, S. Blügel, S. Lounis, and T. Brückel, “Spin waves in the collinear antiferromagnetic phase of Mn5Si3,” Phys. Rev. B 103, 024407 (2021).
* Biniskos _et al._ (2022) N. Biniskos, F. J. dos Santos, K. Schmalzl, S. Raymond, M. dos Santos Dias, J. Persson, N. Marzari, S. Blügel, S. Lounis, and T. Brückel, “Complex magnetic structure and spin waves of the noncollinear antiferromagnet Mn5Si3,” Phys. Rev. B 105, 104404 (2022).
* Zhao _et al._ (2006) F.Q. Zhao, W. Dagula, O. Tegus, and K.H.J. Buschow, “Magnetic-entropy change in Mn5Ge3-xSix alloys,” Journal of Alloys and Compounds 416, 43–45 (2006).
* Michez _et al._ (2022) L.-A. Michez, M. Petit, V. Heresanu, V. Le Thanh, E. Prestat, F. d’Acapito, Q. Ramasse, F. Boscherini, P. Pochet, and M. Jamet, “Unveiling the atomic position of C in Mn5Ge3Cx thin films,” Phys. Rev. Mater. 6, 074404 (2022).
* Giannozzi _et al._ (2009) Paolo Giannozzi, Stefano Baroni, Nicola Bonini, Matteo Calandra, Roberto Car, Carlo Cavazzoni, Davide Ceresoli, Guido L. Chiarotti, Matteo Cococcioni, Ismaila Dabo, Andrea Dal Corso, Stefano de Gironcoli, Stefano Fabris, Guido Fratesi, Ralph Gebauer, Uwe Gerstmann, Christos Gougoussis, Anton Kokalj, Michele Lazzeri, Layla Martin-Samos, Nicola Marzari, Francesco Mauri, Riccardo Mazzarello, Stefano Paolini, Alfredo Pasquarello, Lorenzo Paulatto, Carlo Sbraccia, Sandro Scandolo, Gabriele Sclauzero, Ari P. Seitsonen, Alexander Smogunov, Paolo Umari, and Renata M. Wentzcovitch, “QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials,” J. Phys.: Condens. Matter 21, 395502 (2009).
* Bauer (2014) David Siegfried Georg Bauer, _Development of a relativistic full-potential first-principles multiple scattering Green function method applied to complex magnetic textures of nano structures at surfaces_ , Ph.D. thesis, RWTH Aachen (2014).
* (52) https://jukkr.fz-juelich.de/.
* dos Santos _et al._ (2018) Flaviano José dos Santos, Manuel dos Santos Dias, Filipe Souza Mendes Guimarães, Juba Bouaziz, and Samir Lounis, “Spin-resolved inelastic electron scattering by spin waves in noncollinear magnets,” Phys. Rev. B 97, 024431 (2018).
* dat (2021a) https://doi.ill.fr/10.5291/ILL-DATA.4-01-1682 (2021a).
* dat (2021b) https://doi.ill.fr/10.5291/ILL-DATA.CRG-2853 (2021b).
* dat (2021c) https://doi.ill.fr/10.5291/ILL-DATA.INTER-547 (2021c).
* dat (2021d) https://doi.ill.fr/10.5291/ILL-DATA.CRG-2755 (2021d).
* dos Santos Dias _et al._ (2023) M. dos Santos Dias, N. Biniskos, F. J. dos Santos, K. Schmalzl, J. Persson, F. Bourdarot, N. Marzari, S. Blügel, and S. Lounis, “Topological magnons driven by the Dzyaloshinskii-Moriya interaction in the centrosymmetric ferromagnet Mn5Ge3,” Materials Cloud Archive 2023.12 (2023), 10.24435/materialscloud:xq-5d.
* Garrity _et al._ (2014) Kevin F. Garrity, Joseph W. Bennett, Karin M. Rabe, and David Vanderbilt, “Pseudopotentials for high-throughput DFT calculations,” Comput. Mater. Sci. 81, 446–452 (2014).
* Perdew _et al._ (1996) John P. Perdew, Kieron Burke, and Matthias Ernzerhof, “Generalized Gradient Approximation Made Simple,” Phys. Rev. Lett. 77, 3865–3868 (1996).
* Vosko _et al._ (1980) SH Vosko, L Wilk, and M Nusair, “Accurate spin-dependent electron liquid correlation energies for local spin density calculations: a critical analysis,” Can. J. Phys. 58, 1200–1211 (1980).
* Ebert and Mankovsky (2009) H Ebert and S Mankovsky, “Anisotropic exchange coupling in diluted magnetic semiconductors: Ab initio spin-density functional theory,” Phys. Rev. B 79, 045209 (2009).
* Schmalzl _et al._ (2016) K. Schmalzl, W. Schmidt, S. Raymond, H. Feilbach, C. Mounier, B. Vettard, and T. Brückel, “The upgrade of the cold neutron three-axis spectrometer IN12 at the ILL,” Nucl. Instrum. Methods Phys. Res. A 819, 89–98 (2016).
* Lisher and Forsyth (1971) E. J. Lisher and J. B. Forsyth, “Analytic approximations to form factors,” Acta Crystallographica Section A 27, 545–549 (1971).
* Mangold _et al._ (2020) Claudia Mangold, Shunda Chen, Giuseppe Barbalinardo, Jörg Behler, Pascal Pochet, Konstantinos Termentzidis, Yang Han, Laurent Chaput, David Lacroix, and Davide Donadio, “Transferability of neural network potentials for varying stoichiometry: Phonons and thermal conductivity of MnxGey compounds,” J. Appl. Phys. 127, 244901 (2020).
* Fernandez-Baca _et al._ (1991) J. A. Fernandez-Baca, R. M. Nicklow, Z. Tun, and J. J. Rhyne, “Neutron-scattering study of the magnetic excitations of thulium metal,” Phys. Rev. B 43, 3188–3191 (1991).
* Steinsvoll _et al._ (1981) O. Steinsvoll, R. M. Moon, W. C. Koehler, and C. G. Windsor, “Magnetic form factor of metallic iron and nickel as seen by inelastic neutron scattering from phonons,” Phys. Rev. B 24, 4031–4040 (1981).
* Brown _et al._ (1996) P J Brown, B Roessli, J G Smith, K-U Neumann, and K R A Ziebeck, “Determination of the wavevector and temperature dependence of the ‘forbidden’ mode in Fe65Ni35 Invar using inelastic neutron scattering,” J. Phys.: Condens. Matter 8, 1527–1538 (1996).
* Papanikolaou _et al._ (2002) N Papanikolaou, R Zeller, and P H Dederichs, “Conceptual improvements of the KKR method,” J. Phys.: Condens. Matter 14, 2799 (2002).
* Wildberger _et al._ (1995) K. Wildberger, P. Lang, R. Zeller, and P. H. Dederichs, “Fermi-Dirac distribution in ab initio Green-function calculations,” Phys. Rev. B 52, 11502 (1995).
* Biniskos _et al._ (2017) N. Biniskos, S. Raymond, K. Schmalzl, A. Schneidewind, J. Voigt, R. Georgii, P. Hering, J. Persson, K. Friese, and T. Brückel, “Spin dynamics of the magnetocaloric compound MnFe4Si3,” Phys. Rev. B 96, 104407 (2017).
* Dordevic _et al._ (2009) S. V. Dordevic, L. W. Kohlman, N. Stojilovic, Rongwei Hu, and C. Petrovic, “Signatures of electron-boson coupling in the half-metallic ferromagnet Mn5Ge3: Study of electron self-energy $\Sigma(\omega)$ obtained from infrared spectroscopy,” Phys. Rev. B 80, 115114 (2009).
* Ndiaye _et al._ (2013) W. Ndiaye, M. C. Richter, O. Heckmann, P. De Padova, J.-M. Mariot, A. Stroppa, S. Picozzi, W. Wang, A. Taleb-Ibrahimi, P. Le Fèvre, F. Bertran, C. Cacho, M. Leandersson, T. Balasubramanian, and K. Hricovini, “Bulk electronic structure of Mn5Ge3/Ge(111) films by angle-resolved photoemission spectroscopy,” Phys. Rev. B 87, 165137 (2013).
* Jülich Supercomputing Centre (2021) Jülich Supercomputing Centre, “JURECA: Data Centric and Booster Modules implementing the Modular Supercomputing Architecture at Jülich Supercomputing Centre,” Journal of large-scale research facilities 7, A182 (2021).
## VII Acknowledgements
We thank S. Raymond for discussions and comments. The work of M.d.S.D. made
use of computational support by CoSeC, the Computational Science Centre for
Research Communities, through CCP9. N.B. acknowledges the support of JCNS
through the Tasso Springer fellowship. F.J.d.S. acknowledges support of the
European H2020 Intersect project (Grant No. 814487), and N.M. of the Swiss
National Science Foundation (SNSF) through its National Centre of Competence
in Research (NCCR) MARVEL. This work was also supported by the Brazilian
funding agency CAPES under Project No. 13703/13-7 and the Deutsche
Forschungsgemeinschaft (DFG) through SPP 2137 “Skyrmionics” (Project LO
1659/8-1). We gratefully acknowledge the computing time granted through JARA
on the supercomputer JURECA Jülich Supercomputing Centre (2021) at
Forschungszentrum Jülich GmbH and by RWTH Aachen University. The neutron data
collected at the Institut Laue-Langevin are available at Refs. dat (2021a, b,
c, d).
## VIII Author contributions
MdSD, NB and FJdS contributed equally to this work. MdSD and NB conceived the
project together with TB and SL. MdSD performed most DFT calculations and the
spin-wave modelling, with additional calculations performed by FJdS. JP grew
the Mn5Ge3 single crystal. NB performed the experimental measurements and the
corresponding data analysis. KS and FB were local contacts of IN12 and IN22
and provided instrument support. The theoretical aspects of the work were
discussed with NM, SB and SL. All authors participated in the discussion of
the results. MdSD, NB and FJdS wrote the manuscript with input from all
authors.
## IX Competing interests
The authors declare no competing interests.
# Knowledge Distillation based Degradation Estimation for Blind Super-
Resolution
Bin Xia1, Yulun Zhang2, Yitong Wang3, Yapeng Tian4,
Wenming Yang1, Radu Timofte5, and Luc Van Gool2
1Tsinghua University 2ETH Zürich 3ByteDance Inc
4University of Texas at Dallas 5University of Würzburg
Corresponding Author: Wenming Yang <EMAIL_ADDRESS>
###### Abstract
Blind image super-resolution (Blind-SR) aims to recover a high-resolution (HR)
image from its corresponding low-resolution (LR) input image with unknown
degradations. Most of the existing works design an explicit degradation
estimator for each degradation to guide SR. However, it is infeasible to
provide concrete labels of multiple degradation combinations (e.g., blur,
noise, jpeg compression) to supervise the degradation estimator training. In
addition, these special designs for certain degradation, such as blur, impedes
the models from being generalized to handle different degradations. To this
end, it is necessary to design an implicit degradation estimator that can
extract discriminative degradation representation for all degradations without
relying on the supervision of degradation ground-truth. In this paper, we
propose a Knowledge Distillation based Blind-SR network (KDSR). It consists of
a knowledge distillation based implicit degradation estimator network (KD-IDE)
and an efficient SR network. To learn the KDSR model, we first train a teacher
network: KD-IDET. It takes paired HR and LR patches as inputs and is optimized
with the SR network jointly. Then, we further train a student network KD-IDES,
which only takes LR images as input and learns to extract the same implicit
degradation representation (IDR) as KD-IDET. In addition, to fully use
extracted IDR, we design a simple, strong, and efficient IDR based dynamic
convolution residual block (IDR-DCRB) to build an SR network. We conduct
extensive experiments under classic and real-world degradation settings. The
results show that KDSR achieves SOTA performance and can generalize to various
degradation processes. The code is available at https://github.com/Zj-BinXia/KDSR.
## 1 Introduction
Single image super-resolution (SISR) aims to recover details of a high-
resolution (HR) image from its low-resolution (LR) counterpart, which has a
variety of downstream applications (Dong et al., 2014; Zhang et al., 2019; Xia
et al., 2022d; Fritsche et al., 2019; Xia et al., 2022c; b). These state-of-
the-art methods (Kim et al., 2016; Lim et al., 2017; Lai et al., 2017; Xia et
al., 2022a; Wang et al., 2018b) usually assume that there is an ideal bicubic
downsampling kernel to generate LR images. However, this simple degradation is
different from more complex degradations existing in real-world LR images.
This degradation mismatch will lead to severe performance drops.
To address the issue, blind super-resolution (Blind-SR) methods are developed.
Some Blind-SR works (Wang et al., 2021a; Luo et al., 2022) use the classical
image degradation process, given by Eq. 1. Recently, some works (Cai et al.,
2019; Bulat et al., 2018) attempted to develop a new and complex degradation
process to better cover real-world degradation space, which forms a variant of
Blind-SR called real-world super-resolution (Real-SR). The representative
works include BSRGAN (Zhang et al., 2021) and Real-ESRGAN (Wang et al.,
2021b), which introduce comprehensive degradation operations such as blur,
noise, down-sampling, and JPEG compression, and control the severity of each
operation by randomly sampling the respective hyper-parameters. To better
simulate the complex degradations in real-world, they also apply random
shuffle of degradation orders (Zhang et al., 2021) and second-order
degradation (Wang et al., 2021b) respectively.
Since Blind-SR faces almost infinite degradations, introducing prior
degradation information to SR networks can help to constrain the solution
space and boost SR performance. As shown in Fig. 1, the way to obtain
degradation information can be divided into three categories: (1) Several Non-
Blind SR methods (Zhang et al., 2018a; Shocher et al., 2018; Zhang et al.,
2020; Soh et al., 2020; Xu et al., 2020) directly take the known degradation
information as prior (Fig. 1 (a)). (2) Most of Blind-SR methods (Gu et al.,
2019; Luo et al., 2020; Wang et al., 2021a; Liang et al., 2022; Luo et al.,
2022) adopt explicit degradation estimators, which are trained with ground-
truth degradation (Fig. 1 (b)). However, these explicit degradation estimators
are elaborately designed for specific degradation processes. The
specialization makes them hard to transfer to handle other degradation
processes. In addition, it is challenging to annotate precise ground-truth
labels representing combinations of multiple degradations (Zhang et al., 2021;
Wang et al., 2021b) for supervised degradation learning. Therefore, developing
implicit degradation representation (IDR) based methods is important. (3)
Recently, as shown in Fig. 1 (c), DASR (Wang et al., 2021a) and MM-RealSR (Mou
et al., 2022) use metric learning to estimate IDR and to quantize degradation
severity, respectively. However, metric learning methods only roughly
distinguish degradations by pushing features apart or pulling them together,
which is unstable and cannot fully capture discriminative degradation
characteristics for Blind-SR.
Figure 1: The illustration of different degradation estimators. (a) Non-blind
SR methods directly use known degradation information to guide SR networks,
such as SRMD (Zhang et al., 2018a). (b) Many Blind-SR methods estimate the
explicit degradation with the supervision of ground-truth degradation. (c)
Several methods use metric learning to distinguish degradation roughly. (d)
Our knowledge distillation (KD) based implicit degradation estimator can
estimate accurate implicit degradation representation to guide SR without
ground-truth degradation supervision.
In this paper, we aim to design an efficient implicit degradation
representation (IDR) learning SR framework that can easily adapt to any
degradation process. To this end, we develop a novel knowledge distillation
based Blind-SR Network (KDSR). Specifically, as shown in Fig. 1 (d), KDSR uses
a knowledge distillation based implicit degradation estimator (KD-IDE) to
predict accurate IDR. Furthermore, we propose a strong yet efficient SR
network based on our newly developed IDR based Dynamic Convolution Residual
Blocks (IDR-DCRB) to reconstruct the HR image with the guidance of IDR. For
the training process, we first input HR and LR images to the teacher KD-IDET,
which is optimized with the SR network together. Given the paired HR and LR
images, teacher KD-IDET can easily extract the latent degradation information
in LR images. Then, we use a student KD-IDES to learn to extract the same IDR
as that of KD-IDET from LR images directly. Extensive experiments can
demonstrate the effectiveness of the proposed KDSR. Our main contributions are
threefold:
* •
We propose KDSR, a strong, simple, and efficient baseline for Blind-SR that can
generalize to any degradation process, addressing the weaknesses of explicit
degradation estimation.
* •
We propose a novel KD based implicit degradation representation (IDR)
estimator. To the best of our knowledge, the design of IDR estimation has
received little attention so far. Besides, we propose an efficient IDR-based
SR network to fully utilize IDR to guide SR.
* •
Extensive experiments show that the proposed KDSR can achieve excellent Blind-
SR performance in different degradation settings from simple to complex.
## 2 Related Work
### 2.1 Blind Super-Resolution
In the past few years, numerous Non-Blind SISR methods (Dong et al., 2014; Lim
et al., 2017; Zhang et al., 2018a; Ledig et al., 2017; Johnson et al., 2016;
Ma et al., 2020; Xia et al., 2023) have achieved promising performance and
have been widely studied. However, the performance of these methods declines
dramatically when there is a degradation gap between training and test
data. As a remedy, SRMD (Zhang et al., 2018a), USRNet (Zhang et al., 2020) and
some other methods (Zhang et al., 2019; Xu et al., 2020; Shocher et al., 2018;
Soh et al., 2020) utilize the blur kernel and noise level as additional input.
Although these methods can deal with multiple degradations with a single
model, they require accurate degradation estimation, which is also a
challenging task.
To handle unknown degradation, a few Blind-SR methods have been proposed. Some
methods, such as IKC (Gu et al., 2019) and DAN (Luo et al., 2020), use the
classical Blind-SR degradation process and combine a blur kernel estimator
with SR networks, which can be adaptive to images degraded from various blur
kernels (Kim et al., 2021; Cornillere et al., 2019). Besides, KMSR (Zhou &
Susstrunk, 2019) constructs a kernel pool by utilizing a generative
adversarial network (Goodfellow et al., 2014). Recently, some works like
BSRGAN (Zhang et al., 2021) and Real-ESRGAN (Wang et al., 2021b) design more
complex and comprehensive degradation processes to better cover the real-world
degradation space, which becomes a variant of Blind-SR called Real-SR.
For each degradation type and process, previous Blind-SR methods (Gu et al.,
2019; Liang et al., 2022) tend to specially design an explicit degradation
estimator. This is quite complex, and it is hard to provide ground-truth labels
for multiple degradation combinations. Recently, DASR (Wang et al., 2021a) and
MM-RealSR (Mou et al., 2022) use metric learning to distinguish various
degradations, but only roughly, which is not accurate enough to provide a
degradation representation to guide SR. In this paper, we propose to estimate
IDR accurately and to fully use it for SR in an efficient way.
### 2.2 Knowledge Distillation
The purpose of knowledge distillation (KD) (Hinton et al., 2015) is to
transfer the representation ability of a teacher network to a student network.
KD has been widely used to compress models, typically for classification
tasks. Specifically, they (Ahn et al., 2019) compress the classification
models by enforcing the output distribution between the teacher and student
networks to be close. Recently, some works extend KD to feature distillation,
such as intermediate feature learning (Romero et al., 2014) and pairwise
relations in intermediate feature learning (Liu et al., 2019). For the SISR
task, SRKD (Gao et al., 2018) and FAKD (He et al., 2020) apply the KD between
intermediate feature maps to compress models. To obtain more efficient SR
networks, PISR (Lee et al., 2020) further introduces variational information
distillation (Ahn et al., 2019) to maximize the mutual information between
intermediate feature maps of teacher and student SR networks. Different from
previous works adopting KD for model compression, we develop KDSR to obtain
accurate IDR.
## 3 Methodology
In this section, we present our knowledge distillation based Blind-SR network,
i.e., KDSR. As shown in Fig. 2, KDSR mainly consists of a knowledge
distillation based implicit degradation estimator network (KD-IDE) and a SR
network. In the following sections, we first introduce the blind-SR problems
in Sec. 3.1. Then, we provide the details of the proposed KD-IDE and SR
networks of KDSR in Sec. 3.2 and Sec. 3.3, respectively. In Sec. 3.4, we
introduce the two-stage training details.
### 3.1 Overview
Blind-SR methods can be grouped into two categories: classic Blind-SR and
real-world Blind-SR.
For classic blind-SR, some Blind-SR works (Wang et al., 2021a; Luo et al.,
2022) use the classical image degradation process, given by
$\mathbf{y}=\left(\mathbf{x}\otimes\mathbf{k}\right)\downarrow_{s}+\mathbf{n},$
(1)
where $\otimes$ denotes the convolution operation, $\mathbf{x}$ and
$\mathbf{y}$ are the HR and corresponding LR images, respectively,
$\mathbf{k}$ is the blur kernel, $\mathbf{n}$ is additive white Gaussian
noise, and $\downarrow_{s}$ denotes downsampling with scale factor $s$. The
severities of blur and noise are unknown: the respective hyper-parameters are
randomly sampled, which forms an almost infinite degradation space.
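The degradation process of Eq. 1 is simple enough to sketch directly. The following is a minimal single-channel pure-Python illustration (the function and parameter names are ours, not from the paper; real pipelines would operate on RGB tensors with a deep learning framework):

```python
import random

def degrade(hr, kernel, scale, noise_sigma, seed=0):
    """Classic Blind-SR degradation y = (x * k) downsampled by s, plus n (Eq. 1).

    hr: 2D list (single-channel HR image); kernel: 2D blur kernel;
    scale: integer downsampling factor s; noise_sigma: Gaussian noise level.
    """
    rng = random.Random(seed)
    h, w = len(hr), len(hr[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    # Step 1: convolve the HR image with the blur kernel (zero padding).
    blurred = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    ii, jj = i + u - ph, j + v - pw
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += hr[ii][jj] * kernel[u][v]
            blurred[i][j] = acc
    # Step 2: downsample by keeping every s-th pixel;
    # Step 3: add white Gaussian noise.
    return [[blurred[i][j] + rng.gauss(0.0, noise_sigma)
             for j in range(0, w, scale)]
            for i in range(0, h, scale)]
```

Randomly sampling `kernel` and `noise_sigma` for each training pair is what produces the near-infinite degradation space described above.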
Given an input LR image $\mathbf{y}$, classic Blind-SR methods (Gu et al.,
2019; Luo et al., 2020; 2022) pretrain an explicit degradation estimator to
estimate the blur kernel $\mathbf{k}$ applied to $\mathbf{y}$, with the
supervision of the ground-truth $\mathbf{k}$. Then, their SR network uses the
estimated blur kernel to perform SR on the LR image $\mathbf{y}$. The SR
network is trained with the loss function $\mathcal{L}_{rec}$.
$\mathcal{L}_{rec}=\left\|I_{HR}-I_{SR}\right\|_{1},$ (2)
where $I_{HR}$ and $I_{SR}$ are the ground-truth HR and super-resolved images, respectively.
Real-world blind-SR is a variant of classic blind-SR, in which more
complicated degradation is adopted. The real-world blind-SR approaches (Wang
et al., 2021b; Zhang et al., 2021) introduce comprehensive degradation
operations such as blur, noise, down-sampling, and JPEG compression, and
control the severity of each operation by randomly sampling the respective
hyper-parameters. Moreover, they apply random shuffle of degradation orders
and second-order degradation to increase degradation complexity. Since
degradation is complex and cannot provide specific degradation labels, they
directly use SR networks without degradation estimators. Their SR networks
emphasize visual quality trained with $\mathcal{L}_{vis}$.
$\mathcal{L}_{vis}=\lambda_{rec}\mathcal{L}_{rec}+\lambda_{per}\mathcal{L}_{per}+\lambda_{adv}\mathcal{L}_{adv},$
(3)
where $\mathcal{L}_{per}$ and $\mathcal{L}_{adv}$ are perceptual (Johnson et
al., 2016) and adversarial loss (Wang et al., 2021b).
As shown in Fig. 2, we propose KDSR, consisting of a KD-IDE and an efficient
SR network. Different from previous explicit degradation estimation based
Blind-SR methods (Gu et al., 2019; Luo et al., 2020; 2022; Liang et al., 2022),
our KD-IDE does not require degradation labels for training and can thus
generalize to any degradation process. Moreover, the design of our SR network
is neat and efficient, which makes it practical and able to fully use
degradation information for SR. KDSR has a two-stage training process,
consisting of teacher KDSRT and student KDSRS training. We first train the
KDSRT: we input the
paired HR and LR images to the KD-IDET and obtain the implicit degradation
representation (IDR) $\mathbf{D}_{T}$ and $\mathbf{D}_{T}^{\prime}$, where
$\mathbf{D}_{T}$ is used to guide the SR. After that, we move on to KDSRS
training. We initialize the KDSRS with the KDSRT’s parameters and make the
KDSRS learn to directly extract $\mathbf{D}_{S}^{\prime}$ same as
$\mathbf{D}_{T}^{\prime}$ from LR images.
### 3.2 Knowledge Distillation based Implicit Degradation Estimator
Figure 2: The overview of our proposed knowledge distillation based Blind-SR
network (KDSR), which consists of a KD based implicit degradation estimator
network (KD-IDE) and a SR network mainly formed by the IDR based Depthwise
Dynamic convolution (IDR-DDC).
Most Blind-SR methods elaborately design an explicit degradation estimator for
each degradation type and process. Explicit degradation estimators have
several limitations: (1) These special designs for specific degradation
processes make the explicit estimator hard to transfer to other degradation
settings. (2) It is complex to provide the various degradation labels needed
for explicit degradation estimator training, especially for random
combinations of multiple degradations (Wang et al., 2021b). Therefore, we
develop a KD
based implicit degradation estimator (KD-IDE), which can distinguish various
degradations accurately without the supervision of degradation ground-truth.
As shown in Fig. 2 (c), we can divide KD-IDE into several parts: (1) We take
the LR images and the concatenation of LR and HR images as input for KD-IDES
and KD-IDET, respectively. Specifically, the KD-IDET (Fig. 2 (d)) can easily
extract the degradation that maps the HR image to the LR image, because it is
provided with the paired HR and LR images and is jointly optimized with the SR
network. Since there is a spatial size difference between HR and LR images, we
perform the Pixel-Unshuffle operation on HR images
$I_{HR}\in\mathbb{R}^{3\times 4H\times 4W}$ to be
$I_{HR^{\prime}}\in\mathbb{R}^{48\times H\times W}$ and then concatenate it
with LR images to obtain an input $I\in\mathbb{R}^{51\times H\times W}$. (2)
The input passes through the first convolution to become feature maps. It is
noticeable that the input channels in the first convolution are 3 and 51 for
KD-IDES and KD-IDET, respectively. (3) After that, we use numerous residual
blocks to further extract features and obtain a rough degradation vector by
Average Pooling operation. (4) We use the two linear layers to refine the
degradation vector and obtain IDR $\mathbf{D^{\prime}}\in\mathbb{R}^{4C}$,
which is used for KD. (5) Although $\mathbf{D^{\prime}}$ has $4C$ channels to
accurately present degradation and can give more degradation information for
the student network to learn, it will consume a large number of computational
resources used in IDR-DDC. Hence, we need to further compress it with a linear
layer and obtain another IDR $\mathbf{D}\in\mathbb{R}^{C}$ to guide SR. More
details of KD training are given in Sec. 3.4.
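The Pixel-Unshuffle step above can be sketched as follows. This is an illustrative pure-Python version (a real implementation would use a framework's pixel-unshuffle layer), and the channel ordering shown is one common convention:

```python
def pixel_unshuffle(x, r):
    """Rearrange a C x (r*H) x (r*W) image into a (C*r*r) x H x W one.

    Each r x r spatial block moves into the channel dimension, so an RGB HR
    image of shape 3 x 4H x 4W becomes 48 x H x W for r = 4 and can then be
    concatenated channel-wise with the 3-channel LR input (51 channels).
    """
    c = len(x)
    H, W = len(x[0]) // r, len(x[0][0]) // r
    out = []
    for ch in range(c):
        for dy in range(r):
            for dx in range(r):
                out.append([[x[ch][i * r + dy][j * r + dx]
                             for j in range(W)] for i in range(H)])
    return out
```

The rearrangement is lossless, so the teacher sees the full HR content while both network inputs share the same spatial size.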
### 3.3 Image Super-Resolution Network
For the design of the SR network, we consider three points: (1) Once the IDR
is obtained, it is important to design an SR network that can fully use the
estimated degradation prior for SR. (2) An ideal Blind-SR network should have
a simple structure so that it can be used in practice. Thus, we build the
network from a single type of simple yet strong module. (3) Huge computational
consumption usually limits the application of models, especially on edge
devices. Thus, it is necessary to design an efficient model.
As shown in Fig. 2 (a), (b), and (d), our SR network can be divided into three
hierarchies. (1) We first propose a convolution unit called IDR based
Depthwise Dynamic Convolution (IDR-DDC). Motivated by UDVD (Xu et al., 2020),
we adopt the dynamic convolution to use IDR to guide SR. Specifically, to
fully use the estimated IDR, as displayed in Fig. 2 (a), we generate specific
convolution weights according to the IDR $\mathbf{D}$. However, if we generate
ordinary convolution weights, the computational cost will be quite large and
affect the efficiency of the network. Thus, we further introduce depthwise
convolution (Howard et al., 2017), which merely consumes about $\frac{1}{C}$
computation and parameters of ordinary convolution. The IDR-DDC can be
mathematically expressed as:
$\mathbf{W}=\operatorname{Reshape}\left(\phi\left(\mathbf{D}\right)\right),$
(4)
$\mathbf{F}_{out}[i,:,:]=\mathbf{F}_{in}[i,:,:]\otimes\mathbf{W}[i,:,:,:],i\in[0,C),$
(5)
where $\phi(.)$ denotes two linear layers and $\otimes$ denotes the
convolution operation; $\mathbf{D}\in\mathbb{R}^{C}$ is the IDR,
$\phi\left(\mathbf{D}\right)\in\mathbb{R}^{CK_{h}K_{w}}$ is the output of
$\phi(.)$, and $\mathbf{W}\in\mathbb{R}^{C\times 1\times K_{h}\times K_{w}}$
contains the weights of the dynamic convolution; $\mathbf{F}_{in}$ and
$\mathbf{F}_{out}\in\mathbb{R}^{C\times H\times W}$ are the input and output
feature maps, respectively. (2) As shown in Fig. 2 (b), motivated by EDSR (Lim
et al., 2017), we develop IDR based Dynamic Convolution Residual Blocks
(IDR-DCRB) to build a deep model. For the first convolution of the IDR-DCRB,
we use the IDR-DDC to utilize degradation information. However, the IDR-DDC
lacks interaction between different channels; thus, we adopt an ordinary
convolution as the second convolution. (3) For simplicity, as shown in Fig. 2
(d) or (e), we mainly stack IDR-DCRBs to form the SR network.
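As a sketch of Eqs. 4 and 5, the illustrative pure-Python function below generates one $K_h\times K_w$ kernel per channel from the IDR $\mathbf{D}$ and applies it depthwise. A single weight matrix `phi_weight` stands in for the two linear layers $\phi(.)$, and all names are ours:

```python
def idr_ddc(f_in, d, phi_weight, kh, kw):
    """IDR-based Depthwise Dynamic Convolution (Eqs. 4-5).

    f_in: C x H x W feature maps; d: length-C IDR vector D;
    phi_weight: (C*kh*kw) x C matrix standing in for the linear layers phi.
    """
    C, H, W = len(f_in), len(f_in[0]), len(f_in[0][0])
    # W = Reshape(phi(D)): one kh x kw kernel per channel (Eq. 4).
    flat = [sum(phi_weight[o][i] * d[i] for i in range(len(d)))
            for o in range(C * kh * kw)]
    kernels = [[[flat[(c * kh + u) * kw + v] for v in range(kw)]
                for u in range(kh)] for c in range(C)]
    ph, pw = kh // 2, kw // 2
    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for c in range(C):  # depthwise: channel c is convolved only with kernel c (Eq. 5)
        for i in range(H):
            for j in range(W):
                acc = 0.0
                for u in range(kh):
                    for v in range(kw):
                        ii, jj = i + u - ph, j + v - pw
                        if 0 <= ii < H and 0 <= jj < W:
                            acc += f_in[c][ii][jj] * kernels[c][u][v]
                out[c][i][j] = acc
    return out
```

Because each input channel is convolved with only its own kernel, the cost is roughly $1/C$ that of an ordinary dynamic convolution, which is the efficiency argument made above.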
### 3.4 Training Process
KDSR has a two-stage training process. (1) As shown in Fig. 2 (d), we first
train the teacher KDSRT: we input the paired LR and HR images to the KD-IDET
to obtain the IDRs $\mathbf{D_{T}}$ and $\mathbf{D^{\prime}_{T}}$. Then,
$\mathbf{D_{T}}$ is used to generate degradation-specific weights for the
dynamic convolutions. After that, the degradation-specific SR network restores
the LR images. By jointly optimizing the teacher SR network and KD-IDET with
the $\mathcal{L}_{1}$ loss (Eq. 2), the KD-IDE learns to extract an accurate
IDR to guide the SR network. (2) After finishing KDSRT training, we move on to
train KDSRS. As shown in Fig. 2 (e), different from KD-IDET, we input only
the LR images to the KD-IDES, obtaining the IDRs $\mathbf{D_{S}}$ and
$\mathbf{D^{\prime}_{S}}$. The other steps are the same as in KDSRT training
except for the adopted loss functions. Specifically, we introduce a knowledge
distillation (KD) loss function (Eq. 6) to enforce that the KD-IDES directly
extracts the same accurate IDR as the KD-IDET from LR images alone. In
addition, for the classic
degradation model (Eq. 1), following previous Blind-SR works (Gu et al., 2019;
Wang et al., 2021a), we adopt $\mathcal{L}_{rec}$ (Eq. 2) and can set the
total loss function as $\mathcal{L}_{classic}$ (Eq. 7). For more complex
degradation processes (Real-SR), following $\mathcal{L}_{vis}$ (Eq. 3) of
Real-ESRGAN (Wang et al., 2021b), we propose $\mathcal{L}_{real}$ (Eq. 8).
More details are given in appendix.
$\mathcal{L}_{kl}=\sum_{j=0}^{4C-1}\mathbf{D}_{Tnorm}^{\prime}(j)\log\left(\frac{\mathbf{D}_{Tnorm}^{\prime}(j)}{\mathbf{D}_{Snorm}^{\prime}(j)}\right),$
(6)
$\mathcal{L}_{classic}=\lambda_{rec}\mathcal{L}_{rec}+\lambda_{kl}\mathcal{L}_{kl},$
(7)
$\mathcal{L}_{real}=\lambda_{rec}\mathcal{L}_{rec}+\lambda_{kl}\mathcal{L}_{kl}+\lambda_{per}\mathcal{L}_{per}+\lambda_{adv}\mathcal{L}_{adv},$
(8)
where $\mathbf{D}_{Tnorm}^{\prime}$ and $\mathbf{D}_{Snorm}^{\prime}$ are the
softmax-normalized versions of $\mathbf{D}_{T}^{\prime}$ and
$\mathbf{D}_{S}^{\prime}$, respectively. $\mathcal{L}_{per}$ and
$\mathcal{L}_{adv}$ are the perceptual and adversarial losses. $\lambda_{kl}$,
$\lambda_{per}$, and $\lambda_{adv}$ denote the balancing parameters.
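A minimal NumPy sketch of Eqs. 6 and 7 may help: the distillation term is a KL divergence between softmax-normalized IDRs, added to the L1 reconstruction term. Function names are ours, and the default `lam_kl = 0.15` follows the classic-degradation setting reported in Sec. 4.1 (the appendix uses different weights for the Real-SR setting).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def kd_kl_loss(d_t, d_s):
    """Eq. 6: KL divergence between softmax-normalized teacher/student IDRs."""
    p, q = softmax(d_t), softmax(d_s)
    return float(np.sum(p * np.log(p / q)))

def classic_loss(hr, sr, d_t, d_s, lam_rec=1.0, lam_kl=0.15):
    """Eq. 7: L1 reconstruction term plus the weighted distillation term."""
    l_rec = float(np.mean(np.abs(hr - sr)))
    return lam_rec * l_rec + lam_kl * kd_kl_loss(d_t, d_s)
```

When the student reproduces the teacher's IDR exactly and the SR output matches the HR image, both terms vanish.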
## 4 Experiments
### 4.1 Settings
We train and test our method on classic and real-world degradation settings.
For the classic degradation, following previous works (Gu et al., 2019; Luo et
al., 2022), we combine the 800 images in DIV2K (Agustsson & Timofte, 2017) and
the 2,650 images in Flickr2K (Timofte et al., 2017) into the DF2K training
set. The batch size is set to 64, and the LR patch size is 64$\times$64. We
use the Adam optimizer with $\beta_{1}=0.9$, $\beta_{2}=0.99$. We train both
the teacher and student networks for 600 epochs, setting their initial
learning rate to $10^{-4}$ and halving it every 150 epochs. The loss
coefficients $\lambda_{rec}$ and $\lambda_{kl}$ are set to 1 and 0.15,
respectively. The SR results are evaluated with PSNR and SSIM on the Y channel
in the YCbCr space. (1) In Sec. 4.2, we train and test on isotropic Gaussian
kernels following the setting in Gu et al. (2019). Specifically, the kernel
size is fixed to 21$\times$21. In training, the kernel width $\sigma$ ranges
over [0.2, 4.0] for scale factor 4, and we sample it uniformly from this
range. For testing, we adopt the Gaussian8 (Gu et al., 2019) kernel setting to
generate evaluation datasets: Gaussian8 uniformly chooses 8 kernel widths from
the range [1.80, 3.20] for scale 4. The LR images are obtained by blurring and
downsampling the HR images with the selected kernels.
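This blur-and-downsample pipeline can be sketched in NumPy as follows (function names are ours; a real pipeline would use a library convolution, and the exact padding mode is an assumption):

```python
import numpy as np

def isotropic_gaussian_kernel(size=21, sigma=2.0):
    """Normalized isotropic Gaussian blur kernel (size x size)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def degrade(hr, kernel, scale=4):
    """Blur the HR image with the kernel, then downsample by striding."""
    size = kernel.shape[0]
    pad = size // 2
    padded = np.pad(hr, pad, mode='reflect')
    H, W = hr.shape
    blurred = np.empty_like(hr)
    for i in range(H):
        for j in range(W):
            blurred[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return blurred[::scale, ::scale]
```

Sampling `sigma` uniformly from [0.2, 4.0] before each call reproduces the training-time degradation sampling described above.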
(2) In Sec. 4.3, we also validate our method on anisotropic Gaussian kernels
and noise, following the setting in Wang et al. (2021a). Specifically, we set
the kernel size to 21$\times$21 for scale factor 4. In training, we use
additive Gaussian noise with noise level $\sigma=25$ and adopt anisotropic
Gaussian kernels characterized by the Gaussian probability density function
$N(0,\Sigma)$ with zero mean and varying covariance matrix $\Sigma$. The
covariance matrix $\Sigma$ is determined by two random eigenvalues
$\lambda_{1},\lambda_{2}\sim U(0.2,4)$ and a random rotation angle $\theta\sim
U(0,\pi)$.
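The construction of such a kernel can be sketched as below, treating $\lambda_1,\lambda_2$ directly as the eigenvalues of $\Sigma$ and $\theta$ as the rotation of its eigenbasis (an assumption about the exact parameterization; the function name is ours):

```python
import numpy as np

def anisotropic_gaussian_kernel(size=21, lam1=0.5, lam2=3.0, theta=0.7):
    """Kernel from the density N(0, Sigma) with
    Sigma = R(theta) diag(lam1, lam2) R(theta)^T."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    sigma = R @ np.diag([lam1, lam2]) @ R.T
    inv = np.linalg.inv(sigma)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    pts = np.stack([xx, yy], axis=-1)          # (size, size, 2) grid offsets
    expo = np.einsum('...i,ij,...j->...', pts, inv, pts)
    k = np.exp(-0.5 * expo)
    return k / k.sum()
```

Drawing `lam1, lam2 ~ U(0.2, 4)` and `theta ~ U(0, pi)` per training sample reproduces the random covariance described above; equal eigenvalues recover the isotropic case.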
For the real-world degradation, in Sec. 4.4, similar to Real-ESRGAN (Wang et
al., 2021b), we adopt the DF2K and OutdoorSceneTraining (Wang et al., 2018a)
datasets for training. We set the learning rate of KDSRT to $2\times 10^{-4}$
and pre-train it with only Eq. 2 for 1000K iterations. Then, we optimize KDSRS
with Eq. 7 for 1000K iterations and continue to train it with Eq. 8 for 400K
iterations, with the learning rate fixed at $10^{-4}$. For optimization, we
use Adam with $\beta_{1}=0.9$, $\beta_{2}=0.99$. In both training stages, we
set the batch size to 48 and the input patch size to 64.
Table 1: 4$\times$ SR quantitative comparison on datasets with Gaussian8
kernels. The bottom three methods, marked in rose, use IDR to guide blind SR.
The FLOPs and runtime are computed for an LR size of $180\times 320$. The best
and second-best performance are in red and blue, respectively.
Methods | Param (M) | FLOPs (G) | Time (ms) | Set5 PSNR | Set5 SSIM | Set14 PSNR | Set14 SSIM | BSD100 PSNR | BSD100 SSIM | Urban100 PSNR | Urban100 SSIM | Manga109 PSNR | Manga109 SSIM
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Bicubic | - | - | - | 24.57 | 0.7108 | 22.79 | 0.6032 | 23.29 | 0.5786 | 20.35 | 0.5532 | 21.50 | 0.6933
RCAN | 15.59 | 1082.41 | 556.21 | 26.60 | 0.7598 | 24.85 | 0.6513 | 25.01 | 0.6170 | 22.19 | 0.6078 | 23.52 | 0.7428
Bicubic+ZSSR | 0.23 | - | 30946.60 | 26.45 | 0.7279 | 24.78 | 0.6268 | 24.97 | 0.5989 | 21.11 | 0.5805 | 23.53 | 0.724
IKC | 5.32 | 2528.03 | 1053.79 | 31.67 | 0.8829 | 28.31 | 0.7643 | 27.37 | 0.7192 | 25.33 | 0.7504 | 28.91 | 0.8782
DANv1 | 4.33 | 1098.33 | 201.04 | 31.89 | 0.8864 | 28.42 | 0.7687 | 27.51 | 0.7248 | 25.86 | 0.7721 | 30.50 | 0.9037
DANv2 | 4.71 | 1088.14 | 201.51 | 32.00 | 0.8885 | 28.50 | 0.7715 | 27.56 | 0.7277 | 25.94 | 0.7748 | 30.45 | 0.9037
AdaTarget | 16.70 | 1032.59 | 109.77 | 31.58 | 0.8814 | 28.14 | 0.7626 | 27.43 | 0.7216 | 25.72 | 0.7683 | 29.97 | 0.8955
DCLS | 13.63 | - | 175.84 | 32.12 | 0.8890 | 28.54 | 0.7728 | 27.60 | 0.7285 | 26.15 | 0.7809 | 30.86 | 0.9086
DASR | 5.84 | 185.66 | 44.14 | 31.46 | 0.8789 | 28.11 | 0.7603 | 27.44 | 0.7214 | 25.36 | 0.7506 | 29.39 | 0.8861
KDSRS-M (Ours) | 5.80 | 191.42 | 38.74 | 32.02 | 0.8892 | 28.46 | 0.7761 | 27.52 | 0.7281 | 25.96 | 0.7760 | 30.58 | 0.9026
KDSRS-L (Ours) | 14.19 | 623.61 | 149.14 | 32.11 | 0.8933 | 28.68 | 0.7867 | 27.64 | 0.7300 | 26.15 | 0.7830 | 30.99 | 0.9069
Figure 3: Visual comparison (4$\times$) of Blind-SR methods on isotropic
Gaussian kernels.
### 4.2 Evaluation with Isotropic Gaussian Kernels
We first evaluate our KDSR on degradation with isotropic Gaussian kernels. We
compare the KDSR with several SR methods, including RCAN (Zhang et al.,
2018c), ZSSR (Shocher et al., 2018), IKC (Gu et al., 2019), DAN (Luo et al.,
2020), AdaTarget (Jo et al., 2021), and DASR (Wang et al., 2021a). Note that
RCAN is a state-of-the-art SR method for bicubic degradation. For a fair
comparison on different model sizes, we develop KDSRS-M and KDSRS-L by
adjusting the depth and channels of the network. We apply Gaussian8 (Gu et
al., 2019) kernel setting on five datasets, including Set5 (Bevilacqua et al.,
2012), Set14 (Zeyde et al., 2010), B100 (Martin et al., 2001), Urban100 (Huang
et al., 2015), and Manga109 (Matsui et al., 2017), to generate evaluation
datasets.
The quantitative results are shown in Tab. 1. Our KDSRS-M surpasses DASR by
0.6dB, 0.39dB, 0.67dB, and 1.24dB on the Set5, Set14, Urban100, and Manga109
datasets, respectively. In addition, compared with the Blind-SR method DANv2,
our KDSRS-M achieves better performance while consuming only $21\%$ of the
FLOPs of DANv2. This is because DANv2 uses an iterative strategy to estimate
accurate explicit blur kernels, which requires many computations. Besides,
compared with the SOTA Blind-SR method DCLS, our KDSRS-L achieves better
performance on almost all datasets while consuming less time. Notably, DCLS
specially designs an explicit degradation estimator for blur kernels, while
the KD-IDE in our KDSR is simple and can adapt to any degradation process. The
qualitative results are shown in Fig. 3: our KDSRS-L recovers clearer textures
than the other methods, and our KDSRS-M also achieves better visual results
than DANv2.
Table 2: PSNR results achieved on Set14 (Zeyde et al., 2010) under anisotropic
Gaussian blur and noise. The bottom two methods, marked in rose, use IDR to
guide blind SR. The best results are marked in bold. The runtime is measured
on an LR size of $180\times 320$.
Method | Params | Time | Noise $\sigma$ | Kernel 1 | Kernel 2 | Kernel 3 | Kernel 4 | Kernel 5 | Kernel 6 | Kernel 7 | Kernel 8 | Kernel 9
---|---|---|---|---|---|---|---|---|---|---|---|---
DnCNN + RCAN | 650K+15.59M | 556.21ms | 0 | 24.28 | 24.47 | 24.6 | 24.64 | 24.58 | 24.47 | 24.31 | 23.97 | 23.01
10 | 23.88 | 24.03 | 24.16 | 24.21 | 24.13 | 24.03 | 23.88 | 23.62 | 22.76
20 | 23.45 | 23.58 | 23.70 | 23.73 | 23.69 | 23.57 | 23.42 | 23.23 | 22.46
DnCNN +IKC | 650K+5.32M | 1053.79ms | 0 | 24.76 | 25.55 | 26.54 | 27.33 | 26.55 | 25.55 | 24.64 | 25.99 | 25.49
10 | 24.20 | 24.54 | 24.86 | 24.96 | 24.78 | 24.52 | 24.23 | 24.19 | 23.14
20 | 23.62 | 23.87 | 24.07 | 24.15 | 24.06 | 23.86 | 23.65 | 23.59 | 22.71
DnCNN +DCLS | 650K+19.05M | 192.83ms | 0 | 25.80 | 26.20 | 26.45 | 26.46 | 26.30 | 26.20 | 26.39 | 25.57 | 23.96
10 | 24.05 | 24.28 | 24.44 | 24.50 | 24.40 | 24.27 | 24.09 | 23.85 | 22.90
20 | 23.58 | 23.75 | 23.88 | 23.93 | 23.88 | 23.72 | 23.56 | 23.40 | 22.58
DASR | 5.84M | 44.14ms | 0 | 27.20 | 27.62 | 27.74 | 27.85 | 27.82 | 27.62 | 27.38 | 27.44 | 26.27
10 | 25.26 | 25.57 | 25.64 | 25.69 | 25.62 | 25.54 | 25.42 | 25.20 | 24.37
20 | 23.68 | 23.87 | 24.20 | 24.32 | 24.09 | 23.91 | 23.76 | 23.81 | 22.87
KDSRS (Ours) | 5.80M | 38.74ms | 0 | 27.67 | 27.99 | 28.14 | 28.20 | 28.12 | 27.99 | 27.80 | 27.87 | 26.52
10 | 25.74 | 25.91 | 25.97 | 26.00 | 25.96 | 25.88 | 25.75 | 25.50 | 24.67
20 | 24.72 | 24.89 | 24.92 | 24.89 | 24.92 | 24.82 | 24.70 | 24.59 | 23.84
Figure 4: 4$\times$ visual comparison. Noise levels are set to 10 and 20 for
these two images separately.
### 4.3 Evaluation with Anisotropic Gaussian Kernels and Noises
We evaluate our KDSR on degradation with anisotropic Gaussian kernels and
noises by adopting 9 typical blur kernels and different noise levels. We
compare our KDSR with SOTA blind-SR methods, including RCAN (Zhang et al.,
2018c), IKC (Gu et al., 2019), DCLS (Luo et al., 2022) and DASR (Wang et al.,
2021a). Since RCAN, IKC, and DCLS cannot deal with noise degradation, we use
DnCNN (Zhang et al., 2017), a SOTA denoising method, to denoise images for
them.
The quantitative results are shown in Tab. 2. Compared with DCLS, the SOTA
Blind-SR method based on explicit degradation estimation, our KDSRS surpasses
it by over 1 dB under almost all degradation settings while consuming $29.4\%$
of its parameters and $5.1\%$ of its runtime. Furthermore, at $\sigma=20$, our
KDSRS surpasses DASR by about 1dB with fewer parameters and less runtime. This
shows the superiority of our knowledge-distillation-based IDR estimation and
efficient SR network structure. In addition, we provide a visual comparison in
Fig. 4: KDSRS produces sharper edges, more realistic details, and fewer
artifacts than the other methods. More visual results are given in the
appendix.
### 4.4 Evaluation on Real-World SR
We further validate the effectiveness of KDSR on real-world datasets. As
described in Sec. 4.1, we introduce GAN (Goodfellow et al., 2014) and
perceptual (Johnson et al., 2016) losses to train our network with the same
high-order complex degradation process as Real-ESRGAN (Wang et al., 2021b),
obtaining KDSRS-GAN. We compare our method with state-of-the-art GAN-based SR
methods, including Real-ESRGAN, BSRGAN (Zhang et al., 2021), MM-RealSR (Mou et
al., 2022), and ESRGAN (Wang et al., 2018b). We evaluate all methods on the
datasets provided in the Real-World Super-Resolution challenges AIM19 Track2
(Lugmayr et al., 2019) and NTIRE2020 Track1 (Lugmayr et al., 2020). Since the
AIM19 and NTIRE2020 datasets provide paired validation sets, we use LPIPS
(Zhang et al., 2018b), PSNR, and SSIM for evaluation.
The quantitative results are shown in Tab. 3. Compared with the recent
real-world SR method MM-RealSR, our KDSRS-GAN performs better while consuming
only about 50$\%$ of its runtime. In addition, KDSRS-GAN outperforms the SOTA
real-world SR method Real-ESRGAN on LPIPS, PSNR, and SSIM while consuming only
75$\%$ of its FLOPs. Furthermore, we provide qualitative results in Fig. 5:
our KDSRS-GAN produces more visually promising results with clearer details
and textures. More qualitative results are provided in the appendix.
Table 3: 4$\times$ SR quantitative comparison on real-world SR competition
benchmarks. The FLOPs and runtime are computed for an LR size of $180\times
320$. The best results are marked in bold.
Methods | Params (M) | FLOPs (G) | Runtime (ms) | AIM2019 LPIPS$\downarrow$ | AIM2019 PSNR$\uparrow$ | AIM2019 SSIM$\uparrow$ | NTIRE2020 LPIPS$\downarrow$ | NTIRE2020 PSNR$\uparrow$ | NTIRE2020 SSIM$\uparrow$
---|---|---|---|---|---|---|---|---|---
ESRGAN | 16.69 | 871.25 | 236.04 | 0.5558 | 23.17 | 0.6192 | 0.5938 | 21.14 | 0.3119
BSRGAN | 16.69 | 871.25 | 236.04 | 0.4048 | 24.20 | 0.6904 | 0.3691 | 26.75 | 0.7386
Real-ESRGAN | 16.69 | 871.25 | 236.04 | 0.3956 | 23.89 | 0.6892 | 0.3471 | 26.40 | 0.7431
MM-RealSR | 26.13 | 930.54 | 290.64 | 0.3948 | 23.45 | 0.6889 | 0.3446 | 25.19 | 0.7404
KDSRS-GAN (Ours) | 18.85 | 640.84 | 154.62 | 0.3758 | 24.22 | 0.7038 | 0.3198 | 27.12 | 0.7614
Figure 5: 4$\times$ visual comparison on real-world SR competition benchmarks.
## 5 Ablation Study
Knowledge Distillation Based Blind-SR Network. In this part, we validate the
effectiveness of the components in KDSR, such as KD and IDR-DDC (Tab. 4).
KDSRS4 is the KDSRS-M adopted in Tab. 1, and KDSRT is its corresponding
teacher network. (1) We directly input the ground-truth degradation blur
kernels into KDSRS4, obtaining KDSRS3. KDSRS4, which estimates the IDR
instead, achieves similar performance to KDSRS3, which demonstrates that
KDSRS4 can estimate quite accurate IDRs to guide Blind-SR. (2) We cancel the
KD in KDSRS4 to obtain KDSRS2, which means that KDSRS2 cannot learn IDR
extraction from KDSRT. Comparing KDSRS4 and KDSRS2, we see that the KD scheme
brings a 0.42dB improvement, which demonstrates that KD effectively helps
KDSRS4 learn the IDR extraction ability from KDSRT. (3) Based on KDSRS2, we
replace the IDR-DDC in IDR-DCRB with ordinary convolution to obtain KDSRS1.
KDSRS2 is 0.17dB higher than KDSRS1, which demonstrates the effectiveness of
IDR-DDC. (4) Besides, KDSRS4 is 0.2dB lower than its teacher KDSRT, which
means KDSRS4 cannot completely learn the IDR extraction ability from KDSRT.
Table 4: PSNR results evaluated on Urban100 with Gaussian8 (Gu et al., 2019)
kernels for $4\times$ SR. The FLOPs and runtime are both measured on an LR
size of $180\times 320$.
Method | Oracle Degradation | KD | IDR-DDC | Param (M) | FLOPs (G) | Time (ms) | PSNR (dB)
---|---|---|---|---|---|---|---
KDSRT (Ours) | ✗ | ✗ | ✓ | 5.82 | 192.77 | 39.05 | 26.16
KDSRS1 | ✗ | ✗ | ✗ | 5.58 | 293.46 | 47.80 | 25.37
KDSRS2 | ✗ | ✗ | ✓ | 5.80 | 191.42 | 38.74 | 25.54
KDSRS3 | ✓ | ✗ | ✓ | 6.13 | 166.26 | 41.06 | 26.08
KDSRS4 (Ours) | ✗ | ✓ | ✓ | 5.80 | 191.42 | 38.74 | 25.96
Table 5: Comparison between different KD loss functions.
Loss | PSNR (dB)
---|---
$\mathcal{L}_{1}$ (Eq. 9) | 25.92
$\mathcal{L}_{2}$ (Eq. 10) | 25.88
$\mathcal{L}_{kl}$ (Eq. 6) | 25.96
The Loss Functions for Knowledge Distillation. We explore which KD loss
function best guides KDSRS in learning to directly extract the same IDR as
KDSRT from LR images. Although some works (Gao et al., 2018; He et al., 2020)
have explored KD functions for SR, they take intermediate feature maps
$F\in\mathbb{R}^{C\times H\times W}$ as learning objects to compress SR
models. In contrast, we take the IDR $\mathbf{D^{\prime}_{T}}\in\mathbb{R}^{4C}$
as the learning object to learn the ability to extract IDRs from LR images.
Therefore, we cannot directly apply the experience from these previous works
to our model. Here, we define three classic KD functions: (1) We use the
Kullback-Leibler divergence to measure distribution similarity
($\mathcal{L}_{kl}$, Eq. 6). (2) We define $\mathcal{L}_{1}$ for optimization
(Eq. 9). (3) Motivated by the KD loss in SR model compression (Gao et al.,
2018), we define $\mathcal{L}_{2}$ (Eq. 10).
$\mathcal{L}_{1}=\frac{1}{4C}\sum_{i=1}^{4C}\left|\mathbf{D^{\prime}_{S}}(i)-\mathbf{D^{\prime}_{T}}(i)\right|,$
(9)
$\mathcal{L}_{2}=\frac{1}{4C}\sum_{i=1}^{4C}\left(\mathbf{D^{\prime}_{S}}(i)-\mathbf{D^{\prime}_{T}}(i)\right)^{2},$
(10)
where $\mathbf{D^{\prime}_{T}}$ and
$\mathbf{D^{\prime}_{S}}\in\mathbb{R}^{4C}$ are the IDRs extracted by KDSRT-M
and KDSRS-M, respectively. We apply these three loss functions to KDSRS-M
separately to learn the IDR from KDSRT-M. Then, we evaluate them on 4$\times$
Urban100 with Gaussian8 kernels. The results are shown in Tab. 5: the
performance of $\mathcal{L}_{kl}$ is better than that of $\mathcal{L}_{1}$
and $\mathcal{L}_{2}$. This suggests that the degradation information is
mainly contained in the distribution of the IDR $\mathbf{D}$ rather than in
its absolute values.
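To make the distinction between Eqs. 6, 9, and 10 concrete, the following NumPy sketch (function names are ours) shows that the softmax-based KL loss ignores a constant shift of the student IDR, since it compares distributions rather than absolute values, whereas $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ penalize the offset:

```python
import numpy as np

def l1_kd(d_t, d_s):                      # Eq. 9
    return float(np.mean(np.abs(d_s - d_t)))

def l2_kd(d_t, d_s):                      # Eq. 10
    return float(np.mean((d_s - d_t) ** 2))

def kl_kd(d_t, d_s):                      # Eq. 6, on softmax-normalized IDRs
    p = np.exp(d_t - d_t.max()); p /= p.sum()
    q = np.exp(d_s - d_s.max()); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Softmax is shift-invariant, so a constant offset leaves kl_kd at zero
# while l1_kd and l2_kd both report the full offset.
d_t = np.array([0.1, 0.5, 0.2, 0.9])
d_s = d_t + 1.0          # same distribution, shifted absolute values
```

This behavior is one plausible reading of why $\mathcal{L}_{kl}$ performs best in Tab. 5.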
The Visualization of KD-IDE. To further validate the effectiveness of our
KD-IDE, we use t-SNE (Van der Maaten & Hinton, 2008) to visualize the
distribution of extracted IDRs. Specifically, we generate LR images from
BSD100 (Martin et al., 2001) with different isotropic Gaussian kernels and
feed them to KDSRT, KDSRS, KDSRS without KD, and DASR (Wang et al., 2021a) to
generate the IDRs $\mathbf{D}$ shown in Fig. 6 (a), (b), (c), and (d),
respectively. Figs. 6 (a) and (b) show that KDSRT can distinguish different
degradations and that KDSRS learns this ability from KDSRT well. In addition,
comparing Figs. 6 (b) and (c), we see that KDSRS, which obtains IDR extraction
knowledge from KDSRT, distinguishes various degradations better than KDSRS
without KD, which further demonstrates the effectiveness of our KD-IDE.
Furthermore, comparing KDSRS and DASR (Figs. 6 (b) and (d)), KDSRS
distinguishes various degradations more clearly than DASR, which shows the
superiority of KD-based IDE over metric-learning-based IDE.
(a) IDR extracted by KDSRT. (b) IDR extracted by KDSRS. (c) IDR extracted by KDSRS without KD. (d) IDR extracted by DASR.
Figure 6: Visualization of IDR with different isotropic Gaussian blur kernels
$\sigma$ on various methods.
## 6 Conclusion
Most Blind-SR methods tend to elaborately design an explicit degradation
estimator for a specific type of degradation to guide SR. Nevertheless, it is
difficult to provide the labels of multiple degradation combinations to train
explicit degradation estimators, and these specific designs for certain
degradation make them hard to transfer to other degradation processes. To
address these issues, we develop a knowledge distillation based Blind-SR
(KDSR) network, consisting of a KD-IDE and an efficient SR network that is
stacked by IDR-DCRBs. We use KD to make KD-IDES directly extract the same
accurate IDR as KD-IDET from LR images. IDR-DCRBs of SR network use IDR based
depthwise dynamic convolution to fully and efficiently utilize the extracted
IDR to guide SR. Extensive experiments on classic and complex real-world
degradation processes demonstrate that the proposed KDSR can achieve a general
state-of-the-art Blind SR performance.
## Acknowledgments
This work was partly supported by the Alexander von Humboldt Foundation, the
National Natural Science Foundation of China (No. 62171251), the Natural
Science Foundation of Guangdong Province (No. 2020A1515010711), the Special
Foundations for the Development of Strategic Emerging Industries of Shenzhen
(Nos. JCYJ20200109143010272 and CJGJZD20210408092804011), and the Oversea
Cooperation Foundation of Tsinghua.
## References
* Agustsson & Timofte (2017) Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In _CVPRW_ , 2017.
* Ahn et al. (2019) Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D Lawrence, and Zhenwen Dai. Variational information distillation for knowledge transfer. In _CVPR_ , 2019.
* Bevilacqua et al. (2012) Marco Bevilacqua, Aline Roumy, Christine Guillemot, and Marie line Alberi Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In _BMVC_ , 2012.
* Bulat et al. (2018) Adrian Bulat, Jing Yang, and Georgios Tzimiropoulos. To learn image super-resolution, use a gan to learn how to do image degradation first. In _ECCV_ , 2018.
* Cai et al. (2019) Jianrui Cai, Hui Zeng, Hongwei Yong, Zisheng Cao, and Lei Zhang. Toward real-world single image super-resolution: A new benchmark and a new model. In _ICCV_ , 2019.
* Cornillere et al. (2019) Victor Cornillere, Abdelaziz Djelouah, Wang Yifan, Olga Sorkine-Hornung, and Christopher Schroers. Blind image super-resolution with spatially variant degradations. _TOG_ , 2019.
* Dong et al. (2014) Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Learning a deep convolutional network for image super-resolution. In _ECCV_ , 2014.
* Fritsche et al. (2019) Manuel Fritsche, Shuhang Gu, and Radu Timofte. Frequency separation for real-world super-resolution. In _ICCVW_ , 2019.
* Gao et al. (2018) Qinquan Gao, Yan Zhao, Gen Li, and Tong Tong. Image super-resolution using knowledge distillation. In _ACCV_ , 2018.
* Goodfellow et al. (2014) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. _NeurIPS_ , 2014.
* Gu et al. (2019) Jinjin Gu, Hannan Lu, Wangmeng Zuo, and Chao Dong. Blind super-resolution with iterative kernel correction. In _CVPR_ , 2019.
* He et al. (2020) Zibin He, Tao Dai, Jian Lu, Yong Jiang, and Shu-Tao Xia. Fakd: Feature-affinity based knowledge distillation for efficient image super-resolution. In _ICIP_ , 2020.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. _arXiv preprint arXiv:1503.02531_ , 2015.
* Howard et al. (2017) Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. _arXiv preprint arXiv:1704.04861_ , 2017.
* Huang et al. (2015) Jia-Bin Huang, Abhishek Singh, and Narendra Ahuja. Single image super-resolution from transformed self-exemplars. In _CVPR_ , 2015.
* Jo et al. (2021) Younghyun Jo, Seoung Wug Oh, Peter Vajda, and Seon Joo Kim. Tackling the ill-posedness of super-resolution through adaptive target generation. In _CVPR_ , 2021.
* Johnson et al. (2016) Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In _ECCV_ , 2016.
* Kim et al. (2016) Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In _CVPR_ , 2016.
* Kim et al. (2021) Soo Ye Kim, Hyeonjun Sim, and Munchurl Kim. Koalanet: Blind super-resolution using kernel-oriented adaptive local adjustment. In _CVPR_ , 2021.
* Lai et al. (2017) Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. Deep laplacian pyramid networks for fast and accurate super-resolution. In _CVPR_ , 2017.
* Ledig et al. (2017) Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In _CVPR_ , 2017.
* Lee et al. (2020) Wonkyung Lee, Junghyup Lee, Dohyung Kim, and Bumsub Ham. Learning with privileged information for efficient image super-resolution. In _ECCV_ , 2020.
* Liang et al. (2022) Jie Liang, Hui Zeng, and Lei Zhang. Efficient and degradation-adaptive network for real-world image super-resolution. _arXiv preprint arXiv:2203.14216_ , 2022.
* Lim et al. (2017) Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In _CVPRW_ , 2017.
* Liu et al. (2019) Yifan Liu, Ke Chen, Chris Liu, Zengchang Qin, Zhenbo Luo, and Jingdong Wang. Structured knowledge distillation for semantic segmentation. In _CVPR_ , 2019.
* Lugmayr et al. (2019) Andreas Lugmayr, Martin Danelljan, Radu Timofte, Manuel Fritsche, Shuhang Gu, Kuldeep Purohit, Praveen Kandula, Maitreya Suin, AN Rajagoapalan, Nam Hyung Joon, et al. Aim 2019 challenge on real-world image super-resolution: Methods and results. In _ICCVW_ , 2019.
* Lugmayr et al. (2020) Andreas Lugmayr, Martin Danelljan, and Radu Timofte. Ntire 2020 challenge on real-world image super-resolution: Methods and results. In _CVPRW_ , 2020.
* Luo et al. (2020) Zhengxiong Luo, Yan Huang, Shang Li, Liang Wang, and Tieniu Tan. Unfolding the alternating optimization for blind super resolution. _arXiv preprint arXiv:2010.02631_ , 2020.
* Luo et al. (2022) Ziwei Luo, Haibin Huang, Lei Yu, Youwei Li, Haoqiang Fan, and Shuaicheng Liu. Deep constrained least squares for blind image super-resolution. In _CVPR_ , 2022.
* Ma et al. (2020) Cheng Ma, Yongming Rao, Yean Cheng, Ce Chen, Jiwen Lu, and Jie Zhou. Structure-preserving super resolution with gradient guidance. In _CVPR_ , 2020.
* Martin et al. (2001) David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In _ICCV_ , 2001.
* Matsui et al. (2017) Yusuke Matsui, Kota Ito, Yuji Aramaki, Azuma Fujimoto, Toru Ogawa, Toshihiko Yamasaki, and Kiyoharu Aizawa. Sketch-based manga retrieval using manga109 dataset. _Multimedia Tools and Applications_ , 2017.
* Miyato et al. (2018) Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. _arXiv preprint arXiv:1802.05957_ , 2018.
* Mou et al. (2022) Chong Mou, Yanze Wu, Xintao Wang, Chao Dong, Jian Zhang, and Ying Shan. Metric learning based interactive modulation for real-world super-resolution. _ECCV_ , 2022.
* Romero et al. (2014) Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. _arXiv preprint arXiv:1412.6550_ , 2014.
* Shocher et al. (2018) Assaf Shocher, Nadav Cohen, and Michal Irani. “zero-shot” super-resolution using deep internal learning. In _CVPR_ , 2018.
* Simonyan & Zisserman (2014) Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_ , 2014.
* Soh et al. (2020) Jae Woong Soh, Sunwoo Cho, and Nam Ik Cho. Meta-transfer learning for zero-shot super-resolution. In _CVPR_ , 2020.
* Timofte et al. (2017) Radu Timofte, Eirikur Agustsson, Luc Van Gool, Ming-Hsuan Yang, and Lei Zhang. Ntire 2017 challenge on single image super-resolution: Methods and results. In _CVPRW_ , 2017.
* Van der Maaten & Hinton (2008) Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. _Journal of machine learning research_ , 2008.
* Wang et al. (2021a) Longguang Wang, Yingqian Wang, Xiaoyu Dong, Qingyu Xu, Jungang Yang, Wei An, and Yulan Guo. Unsupervised degradation representation learning for blind super-resolution. In _CVPR_ , 2021a.
* Wang et al. (2018a) Xintao Wang, Ke Yu, Chao Dong, and Chen Change Loy. Recovering realistic texture in image super-resolution by deep spatial feature transform. In _CVPR_ , 2018a.
* Wang et al. (2018b) Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In _ECCVW_ , 2018b.
* Wang et al. (2021b) Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan. Real-esrgan: Training real-world blind super-resolution with pure synthetic data. In _ICCV_ , 2021b.
* Wei et al. (2020) Pengxu Wei, Ziwei Xie, Hannan Lu, Zongyuan Zhan, Qixiang Ye, Wangmeng Zuo, and Liang Lin. Component divide-and-conquer for real-world image super-resolution. In _ECCV_ , 2020.
* Xia et al. (2022a) Bin Xia, Yucheng Hang, Yapeng Tian, Wenming Yang, Qingmin Liao, and Jie Zhou. Efficient non-local contrastive attention for image super-resolution. _AAAI_ , 2022a.
* Xia et al. (2022b) Bin Xia, Jingwen He, Yulun Zhang, Yucheng Hang, Wenming Yang, and Luc Van Gool. Structured sparsity learning for efficient video super-resolution. _arXiv preprint arXiv:2206.07687_ , 2022b.
* Xia et al. (2022c) Bin Xia, Yapeng Tian, Yucheng Hang, Wenming Yang, Qingmin Liao, and Jie Zhou. Coarse-to-fine embedded patchmatch and multi-scale dynamic aggregation for reference-based super-resolution. In _AAAI_ , 2022c.
* Xia et al. (2022d) Bin Xia, Yapeng Tian, Yulun Zhang, Yucheng Hang, Wenming Yang, and Qingmin Liao. Meta-learning based degradation representation for blind super-resolution. _arXiv preprint arXiv:2207.13963_ , 2022d.
* Xia et al. (2023) Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu Timofte, and Luc Van Gool. Basic binary convolution unit for binarized image restoration network. _ICLR_ , 2023.
* Xu et al. (2020) Yu-Syuan Xu, Shou-Yao Roy Tseng, Yu Tseng, Hsien-Kai Kuo, and Yi-Min Tsai. Unified dynamic convolutional network for super-resolution with variational degradations. In _CVPR_ , 2020.
* Zeyde et al. (2010) Roman Zeyde, Michael Elad, and Matan Protter. On single image scale-up using sparse-representations. In _International conference on curves and surfaces_ , 2010.
* Zhang et al. (2017) Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. _TIP_ , 2017.
* Zhang et al. (2018a) Kai Zhang, Wangmeng Zuo, and Lei Zhang. Learning a single convolutional super-resolution network for multiple degradations. In _CVPR_ , 2018a.
* Zhang et al. (2019) Kai Zhang, Wangmeng Zuo, and Lei Zhang. Deep plug-and-play super-resolution for arbitrary blur kernels. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 1671–1681, 2019.
* Zhang et al. (2020) Kai Zhang, Luc Van Gool, and Radu Timofte. Deep unfolding network for image super-resolution. In _CVPR_ , 2020.
* Zhang et al. (2021) Kai Zhang, Jingyun Liang, Luc Van Gool, and Radu Timofte. Designing a practical degradation model for deep blind image super-resolution. _arXiv preprint arXiv:2103.14006_ , 2021.
* Zhang et al. (2018b) Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In _CVPR_ , 2018b.
* Zhang et al. (2018c) Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In _Proceedings of the European conference on computer vision (ECCV)_ , pp. 286–301, 2018c.
* Zhou & Susstrunk (2019) Ruofan Zhou and Sabine Susstrunk. Kernel modeling super-resolution on real low-resolution images. In _ICCV_ , 2019.
## Appendix A Appendix
### A.1 Loss Functions
Reconstruction loss ${\cal L}_{rec}$ aims to reduce image distortion. In this
paper, we adopt the Mean Absolute Error (MAE) loss:
$\mathcal{L}_{rec}=\left\|I_{HR}-I_{SR}\right\|_{1},$ (11)
where $I_{HR}$ and $I_{SR}$ denote the ground-truth image and the network
output.
Perceptual loss ${\cal L}_{per}$ helps improve visual quality (Johnson et al.,
2016). It computes the difference between the predicted image and the target
image in feature space:
$\mathcal{L}_{per}=\left\|\phi\left(I_{HR}\right)-\phi\left(I_{SR}\right)\right\|_{2},$
(12)
where $\phi$ denotes features from Conv layers of the VGG19 model (Simonyan &
Zisserman, 2014). Here we use the $\{conv1,\dots,conv5\}$ feature maps (with
weights $\{0.1,0.1,1,1,1\}$) before activation in the pre-trained VGG19
network for the perceptual loss.
Adversarial loss ${\cal L}_{adv}$ improves the visual quality of synthesized
images. We adopt the discriminator of Real-ESRGAN (Wang et al., 2021b), which
uses a U-Net design with skip connections to obtain greater discriminative
power on complex training outputs. Besides, following Real-ESRGAN, we also
employ spectral normalization regularization (Miyato et al., 2018) to
stabilize the training dynamics. The adversarial loss is defined as follows:
$\mathcal{L}_{adv}=-D\left(I_{SR}\right).$ (13)
The loss for training discriminator $D$ is defined as follows:
$\mathcal{L}_{D}=D\left(I_{SR}\right)-D\left(I_{HR}\right).$ (14)
The overall loss functions of our model are ultimately designed as:
$\mathcal{L}_{classic}=\lambda_{rec}\mathcal{L}_{rec}+\lambda_{kl}\mathcal{L}_{kl},$
(15)
$\mathcal{L}_{real}=\lambda_{rec}\mathcal{L}_{rec}+\lambda_{kl}\mathcal{L}_{kl}+\lambda_{per}\mathcal{L}_{per}+\lambda_{adv}\mathcal{L}_{adv},$
(16)
where $\mathcal{L}_{classic}$ is used to train the reconstruction-based KDSRS,
and $\mathcal{L}_{real}$, following Real-ESRGAN, is used to train the GAN-based
KDSRS-GAN. In these two overall losses, we set $\lambda_{rec}$, $\lambda_{kl}$,
$\lambda_{per}$ and $\lambda_{adv}$ to 1, 1, 1 and 0.1, respectively.
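Assembling Eqs. (13)–(16) can be sketched as below. `d_sr`/`d_hr` stand in for the discriminator scores $D(I_{SR})$/$D(I_{HR})$, and the partial losses are treated as precomputed numbers; only the $\lambda$ values (1, 1, 1, 0.1) come from the paper.

```python
def adv_loss(d_sr):
    # Eq. (13): L_adv = -D(I_SR), the generator's adversarial term.
    return -d_sr

def disc_loss(d_sr, d_hr):
    # Eq. (14): L_D = D(I_SR) - D(I_HR), trains the discriminator.
    return d_sr - d_hr

def total_real_loss(l_rec, l_kl, l_per, d_sr,
                    lam_rec=1.0, lam_kl=1.0, lam_per=1.0, lam_adv=0.1):
    # Eq. (16): weighted sum used to train the GAN-based model.
    return (lam_rec * l_rec + lam_kl * l_kl
            + lam_per * l_per + lam_adv * adv_loss(d_sr))
```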
### A.2 Visual comparison on Images from Camera
We compare our KDSRS-GAN with other real-world SR methods, including
BSRGAN (Zhang et al., 2021), Real-ESRGAN (Wang et al., 2021b) and MM-RealSR
(Mou et al., 2022) on real-world benchmarks: NTIRE2020 Track2 and DRealSR (Wei
et al., 2020), whose LR images are captured by smartphone cameras. The results
are shown in Figs. 7 and LABEL:fig:real_SR_show_track2V2. Our KDSRS-GAN
achieves better visual quality than the compared methods.
### A.3 Additional Visual Results
Visual Comparison on Isotropic Gaussian Kernels. We provide more $4\times$
Blind-SR visual results on isotropic Gaussian kernels in Fig. 8. Our KDSRS-L
achieves the best visual quality among all compared methods.
Visual Comparison on Anisotropic Gaussian Kernels and Noises. We provide more
$4\times$ Blind-SR visual results on anisotropic Gaussian kernels and noises
in Fig. 9. Our KDSRS produces sharper textures and more visually pleasant
results than the other methods.
Visual Comparison on Real-World SR. We provide more $4\times$ Real-SR visual
results achieved on real-world SR competition benchmarks: AIM2019 and
NTIRE2020 in Fig. 11. Compared with other SOTA Real-SR methods, our KDSRS-GAN
produces more visually pleasant results with clearer details and textures.
Figure 7: 4$\times$ visual comparison on real-world SR competition benchmarks
(NTIRE2020 Track2, LR images of which come straight from the smartphone
camera). Columns: BSRGAN, Real-ESRGAN, MM-RealSR, KDSRS-GAN.
Figure 8: 4$\times$ visual comparison on isotropic Gaussian kernels. Columns:
HR, RCAN, DCLS, KDSRS-M, Bicubic, DANv2, DASR, KDSRS-L.
Figure 9: 4$\times$ visual comparison on anisotropic Gaussian kernels and
noises. Noise levels are set to 10 for these images. Columns: HR, DCLS, DASR,
Bicubic, RCAN, KDSRS.
Figure 10: 4$\times$ visual comparison on anisotropic Gaussian kernels and
noises. Noise levels are set to 20 for these images. Columns: HR, DCLS, DASR,
Bicubic, RCAN, KDSRS.
Figure 11: 4$\times$ visual comparison on real-world SR competition
benchmarks: AIM2019 (top 2 examples) and NTIRE2020 (bottom 2 examples).
Columns: HR, BSRGAN, MM-RealSR, Bicubic, Real-ESRGAN, KDSRS-GAN.
# Adjunction of roots, algebraic $K$-theory and chromatic redshift
Christian Ausoni, Haldun Özgür Bayındır, Tasos Moulinos
###### Abstract.
Given an $E_{1}$-ring $A$ and a class $a\in\pi_{mk}(A)$ satisfying a suitable
hypothesis, we define a map of $E_{1}$-rings $A\to A(\sqrt[m]{a})$ realizing
the adjunction of an $m$th root of $a$. We define a form of logarithmic THH
for $E_{1}$-rings, and show that root adjunction is log-THH-étale for suitably
tamely ramified extension, which provides a formula for
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ in terms of THH and log-THH of
$A$. If $A$ is connective, we prove that the induced map $K(A)\to
K(A(\sqrt[m]{a}))$ in algebraic $K$-theory is the inclusion of a wedge
summand. Using this, we obtain $V(1)_{*}K(ko_{p})$ for $p>3$, and we deduce
that if $K(A)$ exhibits chromatic redshift, so does
$K(A(\sqrt[m]{a}))$. We interpret several extensions of ring spectra as
examples of root adjunction, and use this to obtain a new proof of the fact
that Lubin-Tate spectra satisfy the redshift conjecture.
## 1\. Introduction
Let $A$ be an $E_{1}$-algebra spectrum, and let $a\in\pi_{mk}(A)$ with $m>0$
and even $k\geq 0$. In this paper, we define, under a certain Hypothesis 4.4,
an $E_{1}$-algebra extension $A\to A(\sqrt[m]{a})$ realizing the adjunction of
an $m$th-root of $a$ in homotopy rings,
$\pi_{*}A\to\pi_{*}\big{(}A(\sqrt[m]{a})\big{)}\cong\pi_{*}(A)[z]/(z^{m}-a),$
and then study how the algebraic $K$-theory of $A(\sqrt[m]{a})$, or its
topological Hochschild and cyclic homology, relates to that of $A$. Hypothesis
4.4 holds for example if $A$ is an $E_{2}$-ring for which $\pi_{*}A$ is
concentrated in even degrees.
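The homotopy-ring formula $\pi_{*}(A)[z]/(z^{m}-a)$ can be illustrated with a small, purely algebraic toy model in plain Python; this is bookkeeping for the quotient ring only, not the paper's spectrum-level construction. An element is a dict $\{z\text{-exponent}: \text{coefficient}\}$, where each coefficient is itself a dict $\{a\text{-exponent}: \text{integer}\}$ recording a polynomial in the class $a$; multiplication reduces $z^{m}$ to $a$.

```python
def mul_mod(x, y, m):
    # Multiply two elements of R[z]/(z^m - a), with R = Z[a] as a toy
    # stand-in for pi_*(A).  Since z-exponents are < m, the carry from
    # (i + j) // m is 0 or 1, and z^m contributes one power of a.
    out = {}
    for i, ci in x.items():
        for j, cj in y.items():
            k, carry = (i + j) % m, (i + j) // m  # z^(i+j) = z^k * a^carry
            coeff = out.setdefault(k, {})
            for p, u in ci.items():
                for q, v in cj.items():
                    e = p + q + carry
                    coeff[e] = coeff.get(e, 0) + u * v
    return out

z = {1: {0: 1}}      # the adjoined root z
zm1 = {4: {0: 1}}    # z^(m-1) for m = 5
```

For instance, $z^{m-1}\cdot z = z^{m} = a$, matching the relation in the quotient.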
In general, the existence of a suitable root adjunction to a ring spectrum and
its effect on such invariants is an intriguing question; for example it has
been shown by Schwänzl, Vogt and Waldhausen [51, Proposition 2], precisely by
considering topological Hochschild homology, that it is not possible, in
$E_{\infty}$-ring spectra, to adjoin a fourth root of unity $i$ to the sphere
spectrum $\mathbb{S}$ (in a sense made precise in loc. cit.). Nevertheless,
Lawson [28] introduces a construction that allows one, under some assumptions,
to adjoin roots of a homotopy degree zero unit in $E_{\infty}$-ring spectra.
For classes in positive homotopy degrees, examples exist and have been shown
to be relevant, in particular in studying redshift for algebraic $K$-theory.
We have
the Adams splitting of connective complex $K$-theory $ku$ completed at an odd
prime $p$,
$ku_{p}\simeq\bigvee_{1\leq i<p-1}\Sigma^{2i}\ell_{p}\,,$
and the extension $\ell_{p}\to ku_{p}$ can be interpreted, on homotopy rings,
as the root adjunction
$\pi_{*}\ell_{p}\cong\mathbb{Z}_{p}[v_{1}]\to\mathbb{Z}_{p}[u]\cong\pi_{*}ku_{p}\,,$
where $v_{1}\mapsto u^{p-1}$, giving
$\pi_{*}ku_{p}\cong(\pi_{*}\ell_{p})[u]/(u^{p-1}-v_{1}).$
Sagave showed in [49, 4.15] that $ku_{p}$ can be constructed as an extension
of $\ell_{p}$, establishing how $\ell_{p}\to ku_{p}$ qualifies as a tamely
ramified extension in $E_{\infty}$-rings. In [4, 7], Rognes and the first
author computed the algebraic $K$-theory of $\ell_{p}$ and $ku_{p}$ with
coefficients in a Smith-Toda complex $V(1)=\mathbb{S}/(p,v_{1})$, for $p\geq
5$. Taking $T(2)=V(1)[v_{2}^{-1}]$, one has the formula
$T(2)_{*}K(ku_{p})\cong\big{(}T(2)_{*}K(\ell_{p})\big{)}[b]/(b^{p-1}+v_{2})$
relating the two computations, hinting at a chromatic shift (or redshift) of
this tamely ramified root adjunction.
After our construction of root adjunction in $E_{1}$-rings, we offer an
investigation of how the obtained extension is reflected in algebraic
$K$-theory. In particular, we have the following spectrum-level splitting of
algebraic $K$-theory in the tamely ramified case, which applies to a wide
array of examples.
###### Theorem 1.1 (Theorem 5.8).
Assume Hypothesis 4.4 with $p\nmid m$ and $\lvert a\rvert>0$. Furthermore,
assume that $A$ is $p$-local and connective. In this situation, the map in
algebraic $K$-theory
$K(A)\to K(A(\sqrt[m]{a}))$
induced by the extension $A\to A(\sqrt[m]{a})$ is the inclusion of a wedge
summand.
This is deduced from the corresponding result for topological cyclic homology,
see Theorem 5.6. An analogous splitting result for algebraic $K$-theory in the
non-connective case is provided in Corollary 5.11.
For an integer $n>0$, we say that a spectrum $E$ is of _height_ $n$ if
$L_{T(n)}E\not\simeq 0$ and $L_{T(m)}E\simeq 0$ for $m>n$, where $T(n)$
denotes a height $n$ telescope. We say an $E_{1}$-ring $A$ of height $n$
_exhibits redshift_ if $K(A)$ is of height $n+1$. Due to [31, Purity Theorem],
$K(A)$ is of height at most $n+1$, so $A$ will exhibit redshift if
$L_{T(n+1)}K(A)\not\simeq 0$. The following result is thus an immediate
consequence of our splitting results, Theorem 5.8 and Corollary 5.11:
###### Corollary 1.2.
Assume Hypothesis 4.4 with $p\nmid m$ and $\lvert a\rvert>0$. If $A$ exhibits
redshift, then so does $A(\sqrt[m]{a})$.
A key feature of the defined root adjunction is that $A(\sqrt[m]{a})$ is
endowed with the structure of an $E_{1}$-algebra in the symmetric monoidal
category $\operatorname{Fun}(\mathbb{Z}/m,\operatorname{Sp})$ of $m$-graded
spectra, which is reflected in an Adams’ type splitting of spectra
$A(\sqrt[m]{a})\simeq\bigvee_{0\leq i<m}\Sigma^{ik}A\,.$
Such a grading on a spectrum, compatible with further additional algebraic
structures, has already proven to be very useful: let us mention the
computations of Hesselholt-Madsen [19] of the $K$-theory of truncated
polynomial algebras, and of the second and third authors [11] of the
$K$-theory of the free $E_{1}$-algebras in degree $2$ over finite fields. In
the present paper, the grading corresponding to the root adjunction, and the
induced splitting of $\operatorname{\textup{THH}}$ as developed in [2,
Appendix A], are essential ingredients in the proofs of our various splitting
results.
We use the theory of _logarithmic topological Hochschild homology_ for an
in-depth study of the THH of $A(\sqrt[m]{a})$. Hesselholt and Madsen [20]
introduced logarithmic topological Hochschild homology for studying the
algebraic $K$-theory of complete discrete valuation rings in mixed
characteristic, and proved a descent property of log THH in the case of tamely
ramified extensions. Rognes [46] then initiated a study of logarithmic
structures, logarithmic André-Quillen homology and of
$\mathrm{log}\operatorname{\textup{THH}}$ in the context of $E_{\infty}$ ring
spectra. With Sagave and Schlichtkrull [47, 48], they then established the
existence of localization sequences for
$\mathrm{log}\operatorname{\textup{THH}}$ and proved that it satisfies tamely
ramified descent in the example of the extension $\ell_{p}\to ku_{p}$.
Here, we offer an alternative definition of log THH that applies to a more
general class of ring spectra. More precisely, we define log THH for a given
$E_{1}$-ring spectrum $A$ and a class $a\in\pi_{mk}(A)$ satisfying Hypothesis
4.4, associated to the pre-log structure given by the monoid generated, under
multiplication, by $a\in\pi_{*}(A)$ (see Definition 6.6); it is denoted
$\operatorname{\textup{THH}}(A\mid a)$. For instance, this applies to the
Morava $K$-theory spectrum $k(n)$ for $v_{n}\in\pi_{*}k(n)$. We prove the
following form of tamely ramified descent (see also Theorem 6.23):
###### Theorem 1.3 (Theorem 6.33).
If $A$ is $p$-local and $p\nmid m$, there is an equivalence of spectra:
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))\simeq\operatorname{\textup{THH}}(A)\vee\big{(}\bigvee_{0<i<m}\Sigma^{ik}\operatorname{\textup{THH}}(A\mid
a)\big{)}.$
We also prove the existence of a localization cofibre sequence
$\operatorname{\textup{THH}}(A)\to\operatorname{\textup{THH}}(A\mid
a)\to\Sigma\operatorname{\textup{THH}}(A/a),$
see Theorem 6.28. This is an analogue in the present setting of the
localization sequences constructed in [47], but note that our definition only
applies to the case of a pre-log structure given by a monoid on a single
generator. We would like to point out also the recent preprint of Binda,
Lundemo, Park and Østvær [10], where a version of logarithmic Hochschild
homology for simplicial commutative rings is constructed.
We now mention some applications of the above results.
### Topological $K$-theory
We prove in Theorems 4.10 and 7.6 that, at an odd prime $p$, there are
equivalences of $E_{1}$-ring spectra
$ku_{p}\simeq\ell_{p}(\sqrt[p-1]{v_{1}})\ \ \textup{and}\ \ \
ko_{p}\simeq\ell_{p}(\sqrt[\frac{p-1}{2}]{v_{1}})\,.$
These equivalences upgrade the splitting result of Adams into a
$\mathbb{Z}/(p-1)$-graded, respectively $\mathbb{Z}/(\frac{p-1}{2})$-graded
$E_{1}$-ring structure. We prove that when $p=1$ in $\mathbb{Z}/m$, the
splitting in Theorem 1.1 can be improved to a more refined splitting of
$K(A(\sqrt[m]{a}))$. In the case of $ku_{p}\simeq\ell_{p}(\sqrt[p-1]{v_{1}})$,
this more refined splitting reads as
(1.4) $K(ku_{p})\simeq\bigvee_{0\leq i<p-1}K(ku_{p})_{i}\,.$
Here $K(ku_{p})_{0}\simeq K(\ell_{p})$, and for $p>3$ we can compute the
$V(1)$-homotopy groups of each $i$-th graded piece
$V(1)_{*}K(ku_{p})_{i}$ using the first author's computation of
$V(1)_{*}K(ku_{p})$.
Using this refined splitting, the second author makes a simplified computation
of $T(2)_{*}K(ku)$ in [8]. We also obtain complete descriptions of
$V(1)_{*}K(ko_{p})$ and $T(2)_{*}K(ko_{p})$, see Theorem 7.10.
We note that the splitting (1.4) can be considered as a version of Adams’
splitting for the cohomology theory represented by $K(ku)$, with classes
corresponding to $2$-categorical complex vector bundles, as developed in [9].
### Johnson-Wilson spectra and Morava $E$-theory
Let $n\geq 1$ be an integer, and let $E(n)$ and $E_{n}$ be the Johnson-Wilson
and Morava $E$-theory spectra. Let $\widehat{E(n)}$ be the $K(n)$-localization
of $E(n)$. These spectra have coefficient rings given as
$\pi_{*}E(n)\cong\mathbb{Z}_{p}[v_{1},\dots,v_{n-1},v_{n}^{\pm 1}]\,,\qquad\pi_{*}E_{n}\cong W(\mathbb{F}_{p^{n}})[\![u_{1},\dots,u_{n-1}]\!][u^{\pm 1}]$
and $\pi_{*}\widehat{E(n)}\cong\pi_{*}E(n)^{\wedge}_{I_{n}}$, where $\lvert
u_{i}\rvert=0$, $\lvert u\rvert=-2$, and $I_{n}=(p,v_{1},\dots,v_{n})$. The
Galois group $Gal=\text{Gal}(\mathbb{F}_{p^{n}}|\mathbb{F}_{p})$ acts on
$E_{n}$; we let $E_{n}^{hGal}$ denote the homotopy fixed point spectrum, with
coefficients $\pi_{*}E_{n}^{hGal}\cong\mathbb{Z}_{p}[\![u_{1},\dots,u_{n-1}]\!][u^{\pm 1}]$.
We prove in Theorem 9.6 that there are equivalences of $E_{1}$-rings
(1.5) $E_{n}\simeq\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})\quad\textup{and}\quad E_{n}^{hGal}\simeq\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})\,.$
This promotes the $E_{1}$-ring structure on Morava $E$-theories to a non-
trivial $\mathbb{Z}/(p^{n}-1)$-graded $E_{1}$-ring structure.
Using this description of $E_{n}$ and the log THH étaleness of root
adjunction, we show in Theorem 9.15 that the canonical map
$\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\widehat{E(n)}}E_{n}\xrightarrow{\simeq_{p}}\operatorname{\textup{THH}}(E_{n}),$
is an equivalence after $p$-completion. The relationship between such
equivalences and the Galois descent question for THH is studied in [39].
Applying Corollary 5.11 to these root adjunctions of non-connective spectra,
we deduce the following result.
###### Theorem 1.6 (Theorem 9.9).
The canonical maps
$K(E(n))\to K(E_{n}^{hGal})\qquad\textup{and}\qquad K(\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge E(n))\to K(E_{n})$
are inclusions of wedge summands after $T(n+1)$-localization.
### Lubin-Tate spectra
We can also apply our results to Lubin-Tate spectra that can be constructed,
in several steps, from the truncated Brown-Peterson spectra $BP\langle
n\rangle$, with coefficients $\pi_{*}BP\langle
n\rangle\cong\mathbb{Z}_{(p)}[v_{1},\dots,v_{n}]$. In more precise terms, we
consider an $E_{3}$ $MU[\sigma_{2(p^{n}-1)}]$-algebra form of $BP\langle
n\rangle$ as constructed by Hahn and Wilson [25, Remark 2.1.2]. In this
situation, we can construct $BP\langle n\rangle(\sqrt[p^{n}-1]{v_{n}})$ as an
$E_{3}$ $MU[\sigma_{2}]$-algebra (Remark 4.11).
Let $k$ be a perfect field of characteristic $p$, and let $\mathbb{S}_{W(k)}$
denote the spherical Witt vectors spectrum. We prove in Proposition 8.6 that
the $MU[\sigma_{2}]$-orientation above provides a formal group law $\Gamma$ of
height $n$ over $k$, and that there is an equivalence of $E_{3}$-rings
$L_{K(n)}(\mathbb{S}_{W(k)}\wedge BP\langle
n\rangle)(\sqrt[p^{n}-1]{v_{n}})\simeq E_{(k,\Gamma)}\,,$
where $E_{(k,\Gamma)}$ denotes the Lubin-Tate spectrum corresponding to
$\Gamma$. Due to [31, Purity Theorem], we have an equivalence
$L_{T(n+1)}K\big{(}\mathbb{S}_{W(k)}\wedge BP\langle
n\rangle(\sqrt[p^{n}-1]{v_{n}})\big{)}\simeq L_{T(n+1)}K(E_{(k,\Gamma)}).$
By [25], $BP\langle n\rangle$ satisfies the redshift conjecture; following an
argument suggested to us by Hahn, we show that $\mathbb{S}_{W(k)}\wedge
BP\langle n\rangle$ also satisfies the redshift conjecture, cf. Proposition
8.2. By Corollary 1.2, we deduce that $E_{(k,\Gamma)}$ satisfies the redshift
conjecture. From this we deduce (see Theorem 8.9) a new proof of
Yuan’s result [52] that all Lubin-Tate spectra satisfy the redshift
conjecture. We also obtain the following from Corollary 5.11.
###### Theorem 1.7 (Theorem 8.8).
The induced map
$L_{T(n+1)}K(\mathbb{S}_{W(k)}\wedge BP\langle n\rangle)\to
L_{T(n+1)}K(E_{(k,\Gamma)})$
is the inclusion of a _non-trivial_ wedge summand.
We expect the above result to be relevant also for explicit computations. For
example, in [1], the authors compute
$V(2)_{*}\operatorname{\textup{TC}}(BP\langle 2\rangle)$ for $p\geq 7$, which
provides an explicit description of $T(3)_{*}K(BP\langle 2\rangle)$. Through
the inclusion above, we deduce that $T(3)_{*}K(BP\langle 2\rangle)$ maps
isomorphically to a summand of $T(3)_{*}K(E_{(\mathbb{F}_{p},\Gamma)})$ for a
height $2$ formal group law $\Gamma$ over $\mathbb{F}_{p}$. To our knowledge,
this is the first explicit, quantitative result on the algebraic $K$-theory
groups of Lubin-Tate spectra for height larger than $1$.
Note that it is not known if the $E_{3}$ $MU$-algebra forms of $BP\langle
n\rangle$ constructed in [25] map into the Morava $E$-theories mentioned in
the preceding sub-section.
###### Remark 1.8.
In [12, Theorem G], the authors construct an $E_{\infty}$-map
$MU[\sigma_{2}]\to E_{(\overline{\mathbb{F}}_{p},\Gamma^{\prime})}$ for the
unique height $n$ formal group law $\Gamma^{\prime}$ on
$\overline{\mathbb{F}}_{p}$ solving an open question on the existence of
orientations on Lubin-Tate spectra. Our constructions provide a similar
orientation for a “smaller” form of Lubin-Tate spectra. Namely, we obtain that
$E_{(\mathbb{F}_{p},\Gamma)}$ is an $E_{3}$ $MU[\sigma_{2}]$-algebra where
$\sigma_{2}$ acts through $u^{-1}\in\pi_{*}E_{(\mathbb{F}_{p},\Gamma)}$; see
Example 8.7. Moreover, there is a grading on $E_{(\mathbb{F}_{p},\Gamma)}$
that respects this structure. To be precise, $E_{(\mathbb{F}_{p},\Gamma)}$ is
an $E_{3}$ $MU[\sigma_{2}]$-algebra in the $\infty$-category of
$\mathbb{Z}/(p^{n}-1)$-graded spectra where
$u^{-1}\in\pi_{*}E_{(\mathbb{F}_{p},\Gamma)}$ is of weight $1$.
###### Remark 1.9.
The above construction of Lubin-Tate spectra by root adjunction is used in an
essential way in the construction of a counter-example to the telescope
conjecture in forthcoming work by Burklund, Hahn, Levy and Schlank.
### Morava $K$-theory
In Section 9.3, we construct two-periodic Morava $K$-theories from Morava
$K$-theories through root adjunction. By Corollary 1.2, we deduce that two-
periodic Morava $K$-theories satisfy the redshift conjecture if the redshift
conjecture holds for Morava $K$-theories (Corollary 9.11).
For $p>3$, the $V(1)$-homotopy of $K(k(1))$ is computed by the first author
and Rognes in [5], where it is also shown that $k(1)$ satisfies the redshift
conjecture. From this, we deduce that the two-periodic first Morava $K$-theory
$ku/p$ also satisfies the redshift conjecture (Corollary 9.12). Moreover,
through the interpretation of $ku/p$ as $k(1)(\sqrt[p-1]{v_{1}})$, the second
author makes the first computation of $T(2)_{*}K\big{(}ku/p\big{)}$ in [8].
###### Outline.
We begin with a quick introduction of graded objects in Section 2. In Section
3, we construct a family of graded $E_{2}$ “polynomial” algebras and establish
their even cell decompositions. In Section 4, we provide our central
construction for root adjunctions (Construction 4.6) and prove our first
splitting result on the THH of ring spectra obtained via a root adjunction. In
Section 5 we prove Theorem 1.1. Section 6 is devoted to studying the variant
of log THH we set forth, as well as the logarithmic THH-étaleness of root
adjunctions. Section 7 contains our results on the algebraic $K$-theory of
real and complex topological $K$-theories. We apply our results to Lubin-Tate
spectra in Section 8. In Section 9, we study the THH and the algebraic
$K$-theory of Morava $E$-theories.
###### Notation 1.10.
1. (1)
We work freely in the setting of $\infty$-categories and higher algebra from
[33, 36].
2. (2)
For an $E_{2}$-algebra $R$ in a symmetric monoidal $\infty$-category, when we
say $T$ is an $R$-algebra (or an $E_{1}$ $R$-algebra), we mean that it is an
$E_{1}$-algebra in right $R$-modules. If we mean an $E_{1}$-algebra in left
$R$-modules, we call this a left $E_{1}$ $R$-algebra. If $R$ is $E_{\infty}$,
we do not need to denote the distinction.
3. (3)
When we say $E_{n}$-ring, we mean an $E_{n}$-algebra in the $\infty$-category
of spectra $\operatorname{Sp}$.
###### Acknowledgements.
We would like to thank Sanath Devalapurkar for many of the ideas in Section 3.
We also thank Jeremy Hahn for showing us the proof of the redshift conjecture
for $BP\langle n\rangle$ with Witt vector coefficients. We benefited from
various conversations with Andrew Baker and Robert Burklund and we would like
to thank them as well. We are very grateful to Robert Burklund for pointing
out a mistake in the proof of the claim, in a previous version of this paper,
that the Morava $K$-theories satisfy redshift; the claim has been removed from
the present version. We would like to thank Oscar Randal-Williams for
explaining to us in depth the constructions in [17]. Finally, we would like to
thank the anonymous referees for many very useful suggestions and corrections.
The first and second authors acknowledge support from the project
ANR-16-CE40-0003 ChroK. The second author acknowledges support from the
Engineering and Physical Sciences Research Council (EPSRC) grant EP/T030771/1.
The third author acknowledges support from the grant NEDAG
ERC-2016-ADG-741501.
## 2\. Recollections on graded objects
Let $m\geq 0$ be an integer and let $\mathbb{Z}/m$ denote the discrete
$\infty$-groupoid whose objects are the elements of the set of integers modulo
$m$. For a presentably symmetric monoidal $\infty$-category $\mathcal{V}$, we
define the $\infty$-category of $m$-graded objects in $\mathcal{V}$ to be the
functor category $\operatorname{Fun}(\mathbb{Z}/m,\mathcal{V})$. For a functor
$F$ in $\operatorname{Fun}(\mathbb{Z}/m,\mathcal{V})$, we denote $F(i)$ by
$F_{i}$ for every $i\in\mathbb{Z}/m$. Since $\mathbb{Z}/m$ is discrete, we
have:
$\operatorname{Fun}(\mathbb{Z}/m,\mathcal{V})\simeq\prod_{i\in\mathbb{Z}/m}\mathcal{V}.$
For $m=0$, this is given by $\operatorname{Fun}(\mathbb{Z},\mathcal{V})$ where
$\mathbb{Z}$ is the corresponding discrete $\infty$-groupoid. In this case, we
omit $m$ and we call $\operatorname{Fun}(\mathbb{Z},\mathcal{V})$ the
$\infty$-category of graded objects in $\mathcal{V}$. We are mainly interested
in the case $\mathcal{V}=\operatorname{Sp}$. We call an object of
$\operatorname{Fun}(\mathbb{Z}/m,\operatorname{Sp})$ an $m$-graded spectrum;
for $m=0$, we drop $m$ and call it a graded spectrum.
Using the symmetric monoidal structure on $\mathbb{Z}/m$ given by addition, we
equip $\operatorname{Fun}(\mathbb{Z}/m,\mathcal{V})$ with the Day convolution
closed symmetric monoidal structure [18]. Since $\mathbb{Z}/m$ is discrete,
this boils down to the following.
$(F\otimes_{\textup{Day}}G)_{k}=\coprod_{i+j=k\textup{\ in\
}\mathbb{Z}/m}F_{i}\otimes G_{j}$
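Since $\mathbb{Z}/m$ is discrete, the Day convolution above is a finite coproduct, and its effect on underlying pieces can be sketched in a few lines of Python. This is an illustrative toy model only: graded objects are dicts $\{\text{weight}: \text{list of labels}\}$, and the tensor product of pieces is modeled by pairing labels.

```python
def day_convolve(F, G, m):
    # (F (x) G)_k = coproduct over i + j = k (mod m) of F_i (x) G_j,
    # with the coproduct modeled by list concatenation and the tensor
    # product of pieces modeled by forming pairs of labels.
    out = {k: [] for k in range(m)}
    for i, fi in F.items():
        for j, gj in G.items():
            out[(i + j) % m].extend((x, y) for x in fi for y in gj)
    return out
```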
### 2.1. Algebras in graded spectra
We are interested in $E_{n}$-algebras in the $\infty$-category of $m$-graded
spectra and the algebras over these $E_{n}$-algebras.
###### Definition 2.1.
An $m$-graded $E_{n}$-ring $A$ is an $E_{n}$-algebra in
$\operatorname{Fun}(\mathbb{Z}/m,\operatorname{Sp})$. For $k<n$, an $m$-graded
$E_{k}$ $A$-algebra is an $E_{k}$ $A$-algebra in
$\operatorname{Fun}(\mathbb{Z}/m,\operatorname{Sp})$. Similarly, an $m$-graded
(left) right $A$-module is a (left) right $A$-module in
$\operatorname{Fun}(\mathbb{Z}/m,\operatorname{Sp})$.
###### Remark 2.2.
Note that the notion of an $m$-graded $E_{k}$-algebra in $\operatorname{Sp}$
is in general different from the notion of an $m$-graded object in the
$\infty$-category of $E_{k}$-algebras in $\operatorname{Sp}$.
### 2.2. Manipulations on graded objects
For a symmetric monoidal functor $\mathbb{Z}/m\to\mathbb{Z}/m^{\prime}$, there
is an induced adjunction between
$\operatorname{Fun}(\mathbb{Z}/m,\operatorname{Sp})$ and
$\operatorname{Fun}(\mathbb{Z}/m^{\prime},\operatorname{Sp})$ where the left
adjoint is symmetric monoidal and given by left Kan extension [40, Corollary
3.8]. The right adjoint is given by restriction. This provides the following
adjunctions which allow us to move between various gradings. Let $n>0$ and
$s\geq 0$ be integers.
* •
We let $D^{n}_{sn}\dashv Q$ denote the adjunction induced by the quotient map
$\mathbb{Z}/sn\to\mathbb{Z}/n$ sending $1$ to $1$.
* •
We often use $D^{n}_{sn}$ for $s=0$ which allows us to obtain an $n$-graded
object out of a graded object $X$ in $\operatorname{Sp}$. We let $D^{n}$
denote
$D^{n}_{0}\colon\thinspace\operatorname{Fun}(\mathbb{Z},\operatorname{Sp})\to\operatorname{Fun}(\mathbb{Z}/n,\operatorname{Sp})$
and we have
$D^{n}(X)_{i}\simeq\bigvee_{j\in\mathbb{Z}\mid j\equiv i\textup{\ mod
n}}X_{j}.$
* •
For $n=1$, we denote $D^{1}_{s}$ by $D$. In this case,
$D\colon\thinspace\operatorname{Fun}(\mathbb{Z}/s,\operatorname{Sp})\to\operatorname{Sp}$
is given by $D(X)\simeq\vee_{j\in\mathbb{Z}/s}X_{j}$, i.e. left Kan extension
along $\mathbb{Z}/s\to 0$. For an $s$-graded (spectrum) $E_{n}$-ring $X$, we
call $D(X)$ the underlying (spectrum) $E_{n}$-ring of $X$. We often omit $D$
in our notation.
* •
For $s\in\mathbb{Z}$, let $L_{s}\dashv R_{s}$ denote the adjunction on
$\operatorname{Fun}(\mathbb{Z},\operatorname{Sp})$ induced by the map
$\mathbb{Z}\xrightarrow{\cdot s}\mathbb{Z}$ given by multiplication by $s$.
For a graded spectrum $X$, we have
$L_{s}(X)_{si}\simeq X_{i}$
for every $i$ and $L_{s}(X)_{j}\simeq 0$ whenever $s\nmid j$.
* •
Let $F\dashv G$ denote the adjunction induced by the trivial map
$0\to\mathbb{Z}/m$. We have $G(X)=X_{0}$. For an $m$-graded $E_{n}$-ring $A$,
$F(G(A))$ is given by $A_{0}$ in weight $0$ and it is trivial on the other
degrees. Therefore, we sometimes abuse notation and denote the $m$-graded
$E_{n}$-ring $F(G(A))$ by $A_{0}$. The counit of this adjunction provides a
map
$A_{0}\to A$
of $m$-graded $E_{n}$-rings. If $A_{i}\simeq 0$ for $i\neq 0$, then this map
is an equivalence and we say that $A$ is concentrated in weight zero. The
following lemma states that in this situation, there is an equivalence of
$E_{n}$-rings between the underlying $E_{n}$-ring of $A$ and the weight zero
piece $G(A)=A_{0}$ of $A$. Therefore, we often do not distinguish between $A$,
$G(A)=A_{0}$ and $D(A)$ in our notation when $A$ is concentrated in weight
zero.
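The effect of the regrading functors above on graded pieces can be sketched in a toy model. This is plain-Python bookkeeping only: a graded spectrum is a dict $\{\text{weight}: \text{list of summands}\}$, $D^{n}$ collapses a $\mathbb{Z}$-grading to a $\mathbb{Z}/n$-grading by wedging over residues, and $L_{s}$ pushes weight $i$ to weight $si$; the module and ring structures are not modeled.

```python
def D(X, n):
    # D^n: Z-graded -> Z/n-graded, with
    # D^n(X)_i = wedge of X_j over j congruent to i mod n,
    # the wedge modeled by list concatenation.
    out = {i: [] for i in range(n)}
    for j, pieces in X.items():
        out[j % n].extend(pieces)
    return out

def L(X, s):
    # L_s: left Kan extension along multiplication by s, so
    # L_s(X)_{si} = X_i and L_s(X)_j = 0 when s does not divide j
    # (absent keys stand for the zero spectrum).
    return {s * i: pieces for i, pieces in X.items()}
```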
###### Lemma 2.3.
Let $A$ be an $m$-graded $E_{n}$-ring concentrated in weight zero. There is an
equivalence of $E_{n}$-rings
$A_{0}\simeq D(A)$
where $A_{0}$ denotes $G(A)$. In particular, we have $F(D(A))\simeq A$ as
$m$-graded $E_{n}$-rings.
###### Proof.
Since $A$ is concentrated in weight zero, we have $D(A)\simeq DFG(A)$. As
$D\circ F$ Kan extends through the composite $0\to\mathbb{Z}/m\to 0$, it is
equivalent to the identity functor. We obtain that $DFG(A)\simeq G(A)\simeq
A_{0}$.
For the second statement, note that $F(D(A))\simeq F(G(A))$ due to the first
statement. Since $A$ is concentrated in weight $0$, we have $F(G(A))\simeq A$.
∎
## 3\. A family of $E_{2}$ polynomial rings in graded spectra
In this section, we introduce the construction of a family of $E_{2}$-algebras
in graded spectra. These have appeared in the work of Hahn and Wilson in [25]
and are also studied in greater depth in [14]. These will be central to our
constructions. For every $r,w\in\mathbb{Z}$, one constructs a graded
$E_{2}$-ring $\mathbb{S}[\sigma_{2r}]$ which may be thought of as a
“polynomial” algebra with a generator in homotopical degree $2r$ and grading
weight $w$. However, these are not polynomial algebras in the precise sense,
as they are only demonstrated to admit $E_{2}$ structures. By mapping these
$E_{2}$-rings into each other, we will be able to construct $E_{2}$-ring
extensions.
### 3.1. Shearing preliminaries
The main mechanism underlying this construction is that of shearing, which we
now briefly review. It has appeared in [42], and is also studied in [14]. In
what follows, $\operatorname{Gr}(\operatorname{Sp})$ denotes
$\operatorname{Fun}(\mathbb{Z},\operatorname{Sp})$, i.e. the $\infty$-category
of graded spectra.
###### Proposition 3.1.
There exists an endofunctor on graded spectra
$\operatorname{sh}:\operatorname{Gr}(\operatorname{Sp})\to\operatorname{Gr}(\operatorname{Sp})$
given by
$\operatorname{sh}(M)_{i}:=M_{i}[-2i]$
with the following properties:
* •
$\operatorname{sh}$ is an equivalence, with inverse given by
$\operatorname{sh}^{-1}(M)_{i}=M_{i}[2i]$
* •
$\operatorname{sh}$ admits an $E_{2}$-monoidal structure, with respect to the
Day convolution product on $\operatorname{Gr}(\operatorname{Sp})$.
###### Proof.
This appears in the $\mathbb{Z}$-linear setting in [42] and is also studied in
[14]. However, for the sake of completeness, we sketch the basic ideas
underlying the construction. In [35], Lurie constructs an $E_{2}$-monoidal map
of spaces
$\phi:\mathbb{Z}\to\operatorname{Pic}(\operatorname{Sp})$
sending $n\mapsto\mathbb{S}^{-2n}$. We now define $\operatorname{sh}$ as the
functor, obtained by adjunction, from the assignment
$\mathbb{Z}\times\operatorname{Gr}(\operatorname{Sp})\to\operatorname{Sp}$
given by the composition
$\mathbb{Z}\times\operatorname{Gr}(\operatorname{Sp})\xrightarrow{(\phi,\operatorname{ev})}\operatorname{Pic}(\operatorname{Sp})\times\operatorname{Sp}\xrightarrow{\otimes}\operatorname{Sp}.$
Here, the first map sends $(n,M)\mapsto(\phi(n),M_{n})$. The fact that this
latter composition is $E_{2}$ follows from the fact that $\phi$ is itself
$E_{2}$. This further implies that $\operatorname{sh}$ is itself $E_{2}$
monoidal. To see that this is an equivalence, one displays, as in [42], an
inverse in the same way by precomposing $\phi$ with the map
$\mathbb{Z}\xrightarrow{-1\cdot}\mathbb{Z}$. ∎
###### Variant 3.2.
One can precompose the map
$\phi:\mathbb{Z}\to\operatorname{Pic}(\operatorname{Sp})$ with the map
$\cdot(k):\mathbb{Z}\to\mathbb{Z}$. We denote the composition by
$\phi^{k}:\mathbb{Z}\to\operatorname{Pic}(\operatorname{Sp})$
As in the above, we use this to define an endofunctor
$\operatorname{sh}^{k}:\operatorname{Gr}(\operatorname{Sp})\to\operatorname{Gr}(\operatorname{Sp}).$
This has the same formal properties as above, e.g. it is an
$E_{2}$-monoidal autoequivalence of $\operatorname{Gr}(\operatorname{Sp})$.
Furthermore, one has the description $(\operatorname{sh}^{k}M)_{i}\simeq
M_{i}[-2ki]$.
### 3.2. Sheared polynomial algebras
Recall that there exists an $E_{\infty}$ algebra
$\mathbb{S}[t]\in\operatorname{Gr}(\operatorname{Sp})$, which gives a graded
enhancement of the “flat” polynomial algebra. One can obtain this, for
example, by observing that the restriction map from filtered spectra to graded
spectra
$\operatorname{Res}:\operatorname{Fil}(\operatorname{Sp})\to\operatorname{Gr}(\operatorname{Sp})$
is lax symmetric monoidal. In more detail, this will be the restriction
$\operatorname{Fil}(\operatorname{Sp})=\operatorname{Fun}((\mathbb{Z},\leq),\operatorname{Sp})\to\operatorname{Fun}(\mathbb{Z}^{\operatorname{ds}},\operatorname{Sp})=\operatorname{Gr}(\operatorname{Sp})$
along $\mathbb{Z}\hookrightarrow(\mathbb{Z},\leq)$ so that in particular we
forget the structure maps of the filtration, cf [35]. We remind the reader
that this is different from the associated graded functor. One then sets
$\mathbb{S}[t]:=\operatorname{Res}(\mathbf{1})$, where $\mathbf{1}$ denotes the unit of
the symmetric monoidal structure on $\operatorname{Fil}(\operatorname{Sp})$.
Thus, $\mathbb{S}[t]$ (which is given by $\mathbb{S}$ in nonpositive weights
and $0$ in positive weights) acquires the structure of an $E_{\infty}$-algebra
in graded spectra.
###### Construction 3.3.
As described in Proposition 3.1, there exists an $E_{2}$-monoidal
autoequivalence $\operatorname{sh}$. We set
$\mathbb{S}[\sigma_{2}]:=\operatorname{sh}(\mathbb{S}[t]),$
and more generally for $k>0$,
$\mathbb{S}[\sigma_{2k}]:=\operatorname{sh}^{k}(\mathbb{S}[t]);$
that is, one applies $\operatorname{sh}^{k}$ to $\mathbb{S}[t]$ to obtain a
family of $E_{2}$-algebras in graded spectra. For $k=0$, we set
$\mathbb{S}[\sigma_{0}]:=\mathbb{S}[t]$. It follows by inspection that the
underlying graded $E_{1}$-ring of $\mathbb{S}[\sigma_{2k}]$ is the free graded
$E_{1}$-ring on $\mathbb{S}^{2k}(-1)$ where $\mathbb{S}^{2k}(-1)$ is
$\mathbb{S}^{2k}$ concentrated in weight $-1$.
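As a consistency check of the last claim (our own unwinding of the definitions, using the formula $(\operatorname{sh}^{k}M)_{i}\simeq M_{i}[-2ki]$), the weight pieces of $\mathbb{S}[\sigma_{2k}]$ can be computed directly:

```latex
% S[t] is the sphere S in each nonpositive weight and 0 otherwise, so
(\operatorname{sh}^{k}\mathbb{S}[t])_{i}
  \simeq \mathbb{S}[t]_{i}[-2ki]
  \simeq
  \begin{cases}
    \mathbb{S}^{-2ki} & i \leq 0,\\
    0 & i > 0.
  \end{cases}
% In particular, the weight -1 piece is S^{2k}, matching the free
% generator S^{2k}(-1) of S[sigma_{2k}] in degree 2k and weight -1.
```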
###### Remark 3.4.
For $w\in\mathbb{Z}$ and even $k\geq 0$, we apply the functor $L_{-w}$ which
left Kan extends along the multiplication map
$\cdot(-w):\mathbb{Z}\to\mathbb{Z}$
to obtain weight-shifted variants of $\mathbb{S}[\sigma_{k}]$. We often omit
$L_{-w}$ when the weight of $\sigma_{k}$ is clear from the context but when we
wish to be explicit, we write $\mathbb{S}[\sigma_{k,w}]$ for the graded
$E_{2}$-ring $L_{-w}\mathbb{S}[\sigma_{k}]$ where $\sigma_{k,w}$ is in weight
$w$. As before, the underlying graded $E_{1}$-ring of
$\mathbb{S}[\sigma_{k,w}]$ is the free graded $E_{1}$-ring
$\operatorname{Free}_{E_{1}}(\mathbb{S}^{k}(w))$.
To adjoin roots, we often start with $\mathbb{S}[\sigma_{mk}]$ and
$\mathbb{S}[\sigma_{k}]$ with $\sigma_{mk}$ and $\sigma_{k}$ in weights $m$
and $1$ respectively where $m>0$ and $k>0$ is even.
###### Proposition 3.5.
In the situation above, there exists a map of graded $E_{2}$-rings
$\mathbb{S}[\sigma_{mk}]\to\mathbb{S}[\sigma_{k}]$
that carries $\sigma_{mk}$ to $\sigma_{k}^{m}$ in homotopy. This provides a
map of $m$-graded $E_{2}$-rings
$D^{m}(\mathbb{S}[\sigma_{mk}])\to D^{m}(\mathbb{S}[\sigma_{k}])$
where $\sigma_{k}\in\pi_{*}D^{m}(\mathbb{S}[\sigma_{k}])$ is of weight $1$ and
$D^{m}(\mathbb{S}[\sigma_{mk}])$ is concentrated in weight $0$.
Furthermore, we have $D^{m}(\mathbb{S}[\sigma_{mk}])\simeq
F(D(\mathbb{S}[\sigma_{mk}]))$ as $m$-graded $E_{2}$-rings; here $F$ Kan
extends through $0\to\mathbb{Z}/m$.
###### Proof.
We first remind the reader that there is an identification
$\mathbb{S}[\sigma_{mk,-1}]:=\operatorname{sh}^{mk/2}(\mathbb{S}[t])\simeq
R_{m}(\mathbb{S}[\sigma_{k,-1}])$
of graded $E_{2}$-rings. This follows from the definition of the shearing
functor $\operatorname{sh}^{k/2}$, in particular that it sends $\mathbb{S}[t]$
to the negative weight part of the graded spectrum given by the map
$\phi^{k/2}:\mathbb{Z}\xrightarrow{\times(k/2)}\mathbb{Z}\to\operatorname{Pic}(\operatorname{Sp}).$
Thus we may identify $\mathbb{S}[\sigma_{mk,-1}]$ with
$R_{m}\mathbb{S}[\sigma_{k,-1}]\simeq
R_{m}\operatorname{sh}^{k/2}(\mathbb{S}[t])$, the negative weight part of the
graded spectrum given by the map
$\phi^{mk/2}:\mathbb{Z}\to\operatorname{Pic}(\operatorname{Sp}).$
Therefore, the counit of the adjunction $L_{m}\dashv R_{m}$ provides a map of
graded $E_{2}$-rings
$\mathbb{S}[\sigma_{mk,-m}]\to\mathbb{S}[\sigma_{k,-1}].$
Applying the functor $L_{-1}$ to this map, we obtain the map of graded
$E_{2}$-rings $\mathbb{S}[\sigma_{mk}]\to\mathbb{S}[\sigma_{k}]$ claimed in
the proposition.
The functor $D^{m}$ gives the desired map
$D^{m}(\mathbb{S}[\sigma_{mk}])\to D^{m}(\mathbb{S}[\sigma_{k}])$
of $m$-graded $E_{2}$-algebras. Note that
$\sigma_{mk}\in\pi_{*}D^{m}(\mathbb{S}[\sigma_{mk}])$ is of weight $0$, which
ensures that $D^{m}(\mathbb{S}[\sigma_{mk}])$ is concentrated in weight $0$.
To see the last statement, we have
(3.6) $FDD^{m}(\mathbb{S}[\sigma_{mk}])\simeq D^{m}(\mathbb{S}[\sigma_{mk}])$
due to Lemma 2.3, since $D^{m}(\mathbb{S}[\sigma_{mk}])$ is concentrated in
weight $0$. Furthermore, $DD^{m}$ is Kan extension along the composite
$\mathbb{Z}\to\mathbb{Z}/m\to 0$, which is the same as Kan extension along
$\mathbb{Z}\to 0$. Therefore, $DD^{m}(\mathbb{S}[\sigma_{mk}])\simeq
D(\mathbb{S}[\sigma_{mk}])$. Together with (3.6), this provides the desired
equivalence $D^{m}(\mathbb{S}[\sigma_{mk}])\simeq
F(D(\mathbb{S}[\sigma_{mk}]))$. ∎
We often omit the functor $D^{m}$ in our notation and denote the map of
$m$-graded $E_{2}$-rings $D^{m}(\mathbb{S}[\sigma_{mk}])\to
D^{m}(\mathbb{S}[\sigma_{k}])$ as
$\mathbb{S}[\sigma_{mk}]\to\mathbb{S}[\sigma_{k}]$.
###### Remark 3.7.
To adjoin a root to a degree $0$ class, we need the $k=0$ case of the
proposition above. In other words, we need an analogous $E_{2}$-map
$\mathbb{S}[\sigma_{mk}]\to\mathbb{S}[\sigma_{k}]$ for $k=0$. For this, we
start with the graded $E_{2}$-map
$\mathbb{S}[\sigma_{m2}]\to\mathbb{S}[\sigma_{2}]$ and apply the functor
$\operatorname{sh}$. This procedure provides a graded $E_{2}$-map
$\mathbb{S}[\sigma_{0,m}]\to\mathbb{S}[\sigma_{0,1}]$ that carries
$\sigma_{0,m}$ to $\sigma_{0,1}^{m}$, as desired.
### 3.3. Cell structures on sheared polynomial algebras
A key technical result for us will be the even cell decomposition of
$\mathbb{S}[\sigma_{k}]$ _as an $E_{2}$-algebra_. As we will see in the
remainder of this section, this is what will allow us to define
$E_{2}$-algebra maps to a given $E_{2}$-algebra $A$, along which we will then
adjoin roots.
###### Remark 3.8.
In the second arXiv version of [25], Hahn and Wilson also construct $E_{2}$
even cell decompositions on the free $E_{1}$-algebra $\mathbb{S}[\sigma_{k}]$
using a Koszul duality argument for even $k\geq 0$. However, they removed this
result in later versions of their paper, since they found a simpler argument
for their redshift results that avoids the use of these even cell
decompositions. Since this does not appear in the published version of [25],
we give a proof of the $E_{2}$ even cell decompositions on
$\mathbb{S}[\sigma_{k}]$ that we use. We would like to note that our methods
are different from the ones used in [25, arXiv version 2].
Before doing so, we make precise what exactly we mean by an even cell
decomposition. The following notions are heavily inspired by Section 6.3 of
[17].
###### Definition 3.9.
Let $f:S\to R\in\operatorname{Alg}_{E_{2}}(\operatorname{Sp}^{\mathbb{Z}})$ be
a map of $E_{2}$-algebras in graded spectra. We say $f$ has a _filtered
cellular decomposition_ if there exists a tower in
$\operatorname{Alg}_{E_{2}}(\operatorname{Fil}(\operatorname{Fun}(\mathbb{Z},\operatorname{Sp})))$
for which
$S=\operatorname{sk}_{-1}(f)\to\operatorname{sk}_{0}(f)\to\operatorname{sk}_{1}(f)\to\cdots\to\operatorname{colim}_{i}\operatorname{sk}_{i}(f)=:\operatorname{sk}(f)\simeq
R$
such that each $\operatorname{sk}_{i}(f)$ is obtained from
$\operatorname{sk}_{i-1}(f)$ via the following pushout diagram:
${\operatorname{Free}_{E_{2}}(\bigsqcup_{\alpha\in I_{n_{i}}}\partial
D^{g_{\alpha},n_{i}}[i-1])}$${\operatorname{sk}_{i-1}(f)}$${\operatorname{Free}_{E_{2}}(\bigsqcup_{\alpha\in
I_{n_{i}}}D^{g_{\alpha},n_{i}}[i])}$${\operatorname{sk}_{i}(f).}$
The notation $X[n]$ in the above means that the object $X$ is placed in
filtering degree $n$.
In particular, at each stage $i$ of the tower we add cells of increasing
dimension $n_{i}$: if $i\leq j$, then $n_{i}\leq n_{j}$, and $I_{n_{i}}$
refers to the set of $n_{i}$-cells of $R$. It may be the case that $n_{i}=i$,
but we do not require this, for the sake of flexibility of the definition,
which is a point of departure from the notion in [17]. If
$f:\mathbf{1}\to R$ is the map from the unit, we call this a filtered cellular
decomposition of $R$.
###### Definition 3.10.
A map $f\colon\thinspace S\to R$ of graded $E_{2}$-rings admits a cell
decomposition if it is the colimit of a tower
$S=\operatorname{sk}_{-1}(f)\to\operatorname{sk}_{0}(f)\to\operatorname{sk}_{1}(f)\to\cdots\to\operatorname{colim}_{i}\operatorname{sk}_{i}(f)\simeq
R$
in graded $E_{2}$-rings where each stage is obtained from the previous via a
cell attachment in graded $E_{2}$-rings. In particular, if $f$ admits a
filtered cellular decomposition, taking levelwise colimits provides a cellular
decomposition of $f$.
Our first step is to establish the decomposition for $\mathbb{S}[t]$.
###### Proposition 3.11.
As an $E_{2}$-algebra in graded spectra, $\mathbb{S}[t]$ admits a (filtered)
cellular decomposition with cells in even degrees.
###### Proof.
The two key inputs for our argument are Theorem 11.21 and Theorem 13.7 of
[17]. The former result applied to the map $f:0\to I$ of non-unital
$E_{2}$-algebras in graded spectra (where $I$ is the augmentation ideal of the
map $\mathbb{S}[t]\to\mathbb{S}$) says that there exists a relative CW
decomposition
$0\to\operatorname{colim}\operatorname{sk}_{n}(f)\simeq I.$
Moreover, the proof of this fact in loc. cit. constructs a minimal cell
structure, one that has the smallest possible number of cells in a given
bidegree (the extra degree here arises since we are working in graded
spectra). In particular, the colimit they construct,
$\operatorname{colim}\operatorname{sk}_{n}(f)$, will have precisely
$b^{E_{2}}_{g,d}(\mathbb{S}[t]):=\operatorname{dim}_{k}H^{E_{2}}_{g,d}(\mathbb{S}[t],\mathbb{S};k)\in\mathbb{N}\cup\{\infty\}$
cells in bidegree $(g,d)$, where $H^{\mathcal{O}}_{g,d}(R;k)$ is the
$\mathcal{O}$-homology of an $\mathcal{O}$-algebra $R$ with coefficients in
the ring $k$; we will in fact set $k=\mathbb{Z}$.
We sketch the argument given there applied to our particular case for the sake
of completeness. One proceeds by inductively constructing a factorization
$0=\operatorname{sk}_{-1}\xrightarrow{h_{0}}\cdots\xrightarrow{h_{\epsilon}}\operatorname{sk}_{\epsilon}\xrightarrow{f_{\epsilon}}I,$
in the $\infty$-category of (increasingly) filtered objects of
$\operatorname{Fun}(\mathbb{Z},\operatorname{Sp})$. Here
$h_{e}:\operatorname{sk}_{e-1}\to\operatorname{sk}_{e}$ comes with the
structure of a (filtered) CW attachment of dimension $e$, where
$\operatorname{sk}_{i}$ denotes the $i$th skeleton equipped with the skeletal
filtration leading up to that degree. Taking the colimit along $\epsilon$
gives an induced map $f_{\infty}:\operatorname{colim}\operatorname{sk}_{\epsilon}(f)\to I$,
which is an equivalence.
For the inductive step in their argument, they show that the Hurewicz map
$\pi_{*,\epsilon}(\mathbb{S}[t],\operatorname{sk}_{\epsilon-1})\to
H_{*,\epsilon}^{E_{2}}(\mathbb{S}[t],\operatorname{sk}_{\epsilon-1};k)$
from relative homotopy to relative $E_{2}$-homology with coefficients in $k$
is surjective. Using this, one is able to choose a set of maps
$\{E_{\alpha}:(D^{\epsilon},\partial
D^{\epsilon})\to(\mathbb{S}[t](g),\operatorname{sk}_{\epsilon-1}(g))\},$
whose images generate
$H_{*,\epsilon}^{E_{2}}(\mathbb{S}[t],\operatorname{sk}_{\epsilon-1};k)$ as a
$k$-module. The boundary maps are then used to attach filtered cells
$(g,\epsilon)$ to $\operatorname{sk}_{\epsilon-1}$ to form
$\operatorname{sk}_{\epsilon}$ and the corresponding $E_{\alpha}$ is used to
extend $f_{\epsilon-1}$ to $f_{\epsilon}$. Putting all this together, we see
that the attachment of the cells is parameterized by the dimensions of the
$E_{2}$-algebra homology groups with coefficients in $k$. To see, in our
particular setup, that this cell decomposition is concentrated in even
degrees, it is therefore enough to verify that
$H^{E_{2}}_{g,e}(\mathbb{S}[t],k)$ vanishes whenever $e\equiv 1\pmod 2$. For
this we apply [17, Theorem 13.7] which states that the $k$-fold iterated bar
construction of an $E_{k}$ algebra is equivalent to the $k$-suspension of the
$E_{k}$-cotangent complex. By [35, Proposition 5.4.9], there is an equivalence
$\operatorname{Bar}^{(2)}(\mathbb{S}[t])\simeq\operatorname{gr}(\mathbb{S}[\mathbb{C}P^{n}]_{n\geq
0})$ in graded spectra, where the right hand side is the associated graded of
the filtration on spherical chains on $\mathbb{C}P^{\infty}$, with filtration
induced by the skeletal filtration on infinite projective space. Tensoring
this with $k=\mathbb{Z}$ in our particular situation, we obtain (a 2-fold
shift of) chains on $\mathbb{C}P^{\infty}$ with coefficients in $\mathbb{Z}$,
which has a cell in each bidegree $(-n,2n-2)$. By taking into account units,
we conclude that $\mathbb{S}[t]$ may be constructed from $\mathbb{S}$ by
attaching the same cells. ∎
###### Remark 3.12.
We remark that we may take the levelwise colimit in the above filtered
cellular decomposition to obtain an $E_{2}$ cellular decomposition for
$\mathbb{S}\to\mathbb{S}[t]$ in the sense of Definition 3.10.
###### Corollary 3.13.
The degree zero piece of the above cellular decomposition is the free algebra
$\operatorname{Free}_{E_{2}}(\mathbb{S}(-1))$, i.e. the free $E_{2}$-algebra
on a generator in degree $0$ and weight $-1$. Moreover, the map
$f_{0}:\operatorname{Free}_{E_{2}}(\mathbb{S}(-1))\to\mathbb{S}[t]$ itself
admits a cellular decomposition with even cells of positive dimension.
###### Proof.
In degree zero, we have the following pushout square in
$\operatorname{Alg}_{E_{2}}(\operatorname{Fun}(\mathbb{Z},\operatorname{Sp}))$:
${\mathbb{S}\simeq\operatorname{Free}_{E_{2}}(\emptyset)}$${\operatorname{sk}_{-1}(\mathbb{S}[t])\simeq\mathbb{S}}$${\operatorname{Free}_{E_{2}}(\mathbb{S}(-1))\simeq\operatorname{Free}_{E_{2}}(D^{0})}$${\operatorname{sk}_{0}(\mathbb{S}[t]).}$
Since this is a pushout square, we obtain an equivalence
$\operatorname{sk}_{0}(\mathbb{S}[t])\simeq\operatorname{Free}_{E_{2}}(\mathbb{S}(-1)).$
Moreover, by starting in degree zero with the zero cells already attached, we
may conclude that the map
$f_{0}:\operatorname{Free}_{E_{2}}(\mathbb{S}(-1))\to\mathbb{S}[t]$
itself admits a cellular decomposition with even cells of positive dimension.
∎
By the above corollary, we have a cellular decomposition on the map of graded
$E_{2}$-rings $\operatorname{Free}_{E_{2}}(\mathbb{S}(-1))\to\mathbb{S}[t]$.
We can apply shearing to this map to obtain a map
$\operatorname{Free}_{E_{2}}(\mathbb{S}^{k}(-1))\to\mathbb{S}[\sigma_{k}].$
###### Proposition 3.14.
Let $k>0$ be even. The map
$f_{0}:\operatorname{Free}_{E_{2}}(\mathbb{S}^{k}(-1))\to\mathbb{S}[\sigma_{k}]$
admits a cell decomposition with cells concentrated in even degrees. Left Kan
extending along the multiplication map
$\mathbb{Z}\xrightarrow{\times(-w)}\mathbb{Z}$, we conclude that
$f_{0}:\operatorname{Free}_{E_{2}}(\mathbb{S}^{k}(w))\to\mathbb{S}[\sigma_{k,w}]$
admits a cell decomposition with cells concentrated in even degrees.
###### Proof.
By construction, $\mathbb{S}[t]$ may be written as a filtered colimit of a
diagram of $E_{2}$ algebras,
$\operatorname{Free}_{E_{2}}(\mathbb{S}(-1))\to\operatorname{sk}_{1}(f)\to\cdots\to\operatorname{sk}_{i-1}(f)\to\operatorname{sk}_{i}(f)\to\cdots$
where each $\operatorname{sk}_{i}(f)$ is formed as a pushout from
$\operatorname{sk}_{i-1}(f)$ along an attaching map
$\operatorname{Free}_{E_{2}}(\mathbb{S}^{2n+1})\to\operatorname{sk}_{i-1}(f)$.
We may apply $\operatorname{sh}^{k/2}$ to this diagram, noting that this
commutes with colimits along the filtered diagram and with the free
$E_{2}$-algebra functor. Thus we conclude with an even cell presentation for
the induced map:
$\operatorname{sh}^{k/2}(\operatorname{Free}_{E_{2}}(\mathbb{S}(-1)))\simeq\operatorname{Free}_{E_{2}}(\mathbb{S}^{k}(-1))\to\mathbb{S}[\sigma_{k}].$
By left Kan extending along the multiplication by $-w$ map on $\mathbb{Z}$
(i.e. applying $L_{-w}$), we conclude analogously for the map
$\operatorname{Free}_{E_{2}}(\mathbb{S}^{k}(w))\to\mathbb{S}[\sigma_{k,w}].$
∎
###### Proposition 3.15.
Let $A$ be a (graded) $E_{2}$-ring whose homotopy groups are concentrated in
even degrees and let $a\in\pi_{k}A$ be a weight $w$ class for some even $k\geq
0$. Then there is a (graded) $E_{2}$-ring map
$\mathbb{S}[\sigma_{k,w}]\to A$
which carries $\sigma_{k}$ to $a$.
###### Proof.
By Proposition 3.14, the map
$f:\operatorname{Free}_{E_{2}}(\mathbb{S}^{k}(w))\to\mathbb{S}[\sigma_{k,w}]$
admits an even cell decomposition. Let $a\in\pi_{k}(A)$ be as in the
hypothesis of the proposition. This induces an $E_{2}$-algebra map
$\operatorname{Free}_{E_{2}}(\mathbb{S}^{k}(w))\to A$, which we would like to
extend inductively along the above tower. In order to do this, it is enough to
note that in degree $i$ we need to trivialize the induced map
$\operatorname{Free}_{E_{2}}(\mathbb{S}^{k+2i-1})\to A$. Using the
free/forgetful adjunction between
$\operatorname{Alg}_{E_{2}}(\operatorname{Fun}(\mathbb{Z},\operatorname{Sp}))$
and graded spectra $\operatorname{Fun}(\mathbb{Z},\operatorname{Sp})$, this
now follows from the fact that $\pi_{2n-1}(A)=0$ for all $n$. ∎
###### Remark 3.16.
In Remark 3.7, we mentioned that we are going to use
$\operatorname{sh}(\mathbb{S}[\sigma_{m2,m}])$ to adjoin roots to degree $0$
classes. We remark that $\operatorname{sh}(\mathbb{S}[\sigma_{m2,m}])$ also
satisfies the lifting property in the proposition above. This follows from the
fact that the even cell decomposition for $\mathbb{S}[\sigma_{m2,m}]$ provides
an even cell decomposition for $\operatorname{sh}(\mathbb{S}[\sigma_{m2,m}])$,
since $\operatorname{sh}$ is an $E_{2}$-monoidal left adjoint functor.
###### Remark 3.17.
We remark that another way to construct an $E_{2}$-algebra map
$\mathbb{S}[t]\to A$ comes from the filtration on $\mathbb{S}[t]$ given its
filtered cell decomposition. The mapping space
$\operatorname{Map}_{\operatorname{Alg}_{E_{2}}}(\mathbb{S}[t],A)$
obtains a filtration from the filtration on the source; this will have
associated graded
$\operatorname{gr}\operatorname{Map}_{\operatorname{Alg}_{E_{2}}}(\mathbb{S}[t],A)\simeq\operatorname{Map}_{\operatorname{Alg}_{E_{2}}}(\operatorname{Free}_{E_{2}}(\bigoplus_{n\geq
1}\mathbb{S}^{2n}),A)\simeq\operatorname{Map}(\bigoplus_{n\geq
1}\mathbb{S}^{2n},A).$
Now if $A$ has even homotopy groups, then so does the associated graded, so
that the resulting spectral sequence computing the homotopy groups of the
limit collapses. Thus, $a\in\pi_{0}(A)$ gives a class
$x\in\pi_{0}\operatorname{Map}(\mathbb{S}^{2k},A)\subset\pi_{0}\operatorname{Map}(\bigoplus_{n\geq
1}\mathbb{S}^{2n+2nk-2},A)$, which will be an infinite cycle, and thus
corresponds to an $E_{2}$-algebra map $\mathbb{S}[t]\to A$. We remark that
this approach should allow for one to define maps $\mathbb{S}[\sigma_{k}]\to
A$ in the general case as well. We thank Oscar Randal-Williams for suggesting
this approach.
## 4. Adjoining roots and THH
Here, we introduce our construction for adjoining roots to ring spectra and
prove our first results on the THH of ring spectra obtained through this
construction.
### 4.1. Background on algebras over $E_{n}$-algebras
We give a quick background on some of the standard facts from [36] that we
often use.
For an $E_{\infty}$-algebra $R$ in a presentably symmetric monoidal
$\infty$-category $\mathcal{C}$, the $\infty$-category of $E_{n}$ $R$-algebras
is a symmetric monoidal $\infty$-category with the pointwise tensor product
[36, Example 3.2.4.4]. Therefore, for two $E_{n}$ $R$-algebras $A$ and $B$,
$A\otimes_{R}B$ is an $E_{n}$ $R$-algebra.
In this work, we often consider algebras over an $E_{n}$-algebra $R$, and in
this case the $\infty$-category of $E_{m}$ $R$-algebras (for $m\leq n-1$) is
not known to carry an appropriate $E_{n-1}$-monoidal structure. To work around
this problem, we use the following facts, which can be extracted from [33,
Corollary 4.8.5.20].
The $\infty$-category of (left) right $R$-modules is an $E_{n-1}$-monoidal
$\infty$-category. We call an $E_{m}$-algebra in the $\infty$-category of
right $R$-modules an $E_{m}$ $R$-algebra where $m\leq n-1$.
Furthermore, for a map $f\colon\thinspace R\to S$ of $E_{n}$-algebras in
$\mathcal{C}$, one obtains an $E_{n-1}$-monoidal functor $-\otimes_{R}S$
between the respective $\infty$-categories of modules. For every $m\leq n-1$,
this induces a functor:
(4.1)
$-\otimes_{R}S\colon\thinspace\operatorname{Alg}_{E_{m}}(\operatorname{RMod}_{R})\to\operatorname{Alg}_{E_{m}}(\operatorname{RMod}_{S}).$
In particular, for an $E_{m}$ $R$-algebra $A$, $A\otimes_{R}S$ is an $E_{m}$
$S$-algebra. Furthermore, the forgetful functor induced by $f$, i.e. the right
adjoint of $-\otimes_{R}S$, is lax $E_{n-1}$-monoidal and therefore induces
a functor:
(4.2)
$\operatorname{Alg}_{E_{m}}(\operatorname{RMod}_{S})\to\operatorname{Alg}_{E_{m}}(\operatorname{RMod}_{R}).$
The unit of this adjunction provides a map of $E_{m}$ $R$-algebras:
(4.3) $A\to A\otimes_{R}S.$
Since $S$ is the monoidal unit in $\operatorname{RMod}_{S}$, $S$ is an
$E_{n-1}$ $S$-algebra, and forgetting through (4.2), it is an $E_{n-1}$
$R$-algebra. In summary, an $E_{n}$-algebra map $R\to S$ equips $S$ with the
structure of an $E_{n-1}$ $R$-algebra.
### 4.2. A construction for adjoining roots to ring spectra
We now introduce our construction for adjoining roots to ring spectra. For
this we use the following hypothesis. Recall that we often omit the functor
$D$ and let $\mathbb{S}[\sigma_{k}]$ denote the underlying $E_{2}$-ring of the
graded $E_{2}$-ring $\mathbb{S}[\sigma_{k}]$.
###### Hypothesis 4.4 (Root adjunction hypothesis).
Given an $E_{1}$-ring $A$ with a class $a\in\pi_{mk}A$, there is a chosen
$\mathbb{S}[\sigma_{mk}]$-algebra structure on $A$ for which the structure map
$\mathbb{S}[\sigma_{mk}]\to A$ carries $\sigma_{mk}$ to $a\in\pi_{mk}A$. Here,
$m>0$ and $k\geq 0$ is even. See Proposition 4.5 for the cases of interest
where this is satisfied.
The hypothesis above may not seem very natural but it is satisfied in the
following general situations.
###### Proposition 4.5.
Let $k\geq 0$ be even and $m>0$. An $E_{1}$-ring $A$ satisfies Hypothesis 4.4
for $a\in\pi_{mk}A$ if:
1. (1)
$A$ is an $E_{2}$-ring for which $\pi_{*}A$ is concentrated in even degrees,
or
2. (2)
$A$ is an $R$-algebra for an $E_{2}$-ring $R$ where $\pi_{*}R$ is concentrated
in even degrees and $a$ is in the image of the map $\pi_{*}R\to\pi_{*}A$.
###### Proof.
Assume that $A$ is as in (2) and let $r\in\pi_{mk}R$ be a class that maps to
$a$ under $\pi_{*}R\to\pi_{*}A$. We choose an $E_{2}$-ring map
$g\colon\thinspace\mathbb{S}[\sigma_{mk}]\to R$ that carries $\sigma_{mk}$ to
$r$, see Proposition 3.15. Forgetting through $g$, see (4.2), one obtains an
$\mathbb{S}[\sigma_{mk}]$-algebra structure on $A$. Indeed, through this
structure, $\sigma_{mk}$ acts through $a$, as desired.
If $A$ is as in (1), then $A$ is an $A$-algebra and $A$ satisfies the
assumption in (2). Therefore, $A$ satisfies Hypothesis 4.4. ∎
For instance, the Morava $K$-theory spectrum $K(n)$ and all $E_{1}$
$MU_{(p)}$-algebra forms of $BP\langle n\rangle$ satisfy Hypothesis 4.4 with
respect to their non-negative degree homotopy classes.
Notice that we are not assuming any preexisting non-trivial grading on $A$;
this allows us to view $A$ as an $m$-graded spectrum concentrated in weight
zero. Given $a\in\pi_{mk}A$, the following construction adjoins an $m$-root to
$a$.
###### Construction 4.6.
Assume Hypothesis 4.4. We consider $\mathbb{S}[\sigma_{mk}]$ as an $m$-graded
$E_{2}$-ring and $A$ as an $m$-graded $\mathbb{S}[\sigma_{mk}]$-algebra, both
concentrated in weight $0$, using the functor $F$ from Section 2.2; we omit
$F$ in our notation.
Due to Proposition 3.5 (Remark 3.7 for $k=0$), there is a map
$\phi:\mathbb{S}[\sigma_{mk}]\to\mathbb{S}[\sigma_{k}]$
of $m$-graded $E_{2}$-rings that carries $\sigma_{mk}$ to $\sigma_{k}^{m}$ in
homotopy where $\sigma_{k}$ is of weight $1$ and $\sigma_{mk}$ is of weight
$0$. Note that we omit the functor $D^{m}$ in our notation. Considering the
corresponding extension of scalars functor
$-\wedge_{\mathbb{S}[\sigma_{mk}]}\mathbb{S}[\sigma_{k}]$ between the
$\infty$-categories of $m$-graded $\mathbb{S}[\sigma_{mk}]$-algebras and
$m$-graded $\mathbb{S}[\sigma_{k}]$-algebras, (see (4.1)), we define the
$m$-graded $E_{1}$ $\mathbb{S}[\sigma_{k}]$-algebra $A(\sqrt[m]{a})$ through:
$A(\sqrt[m]{a}):=A\wedge_{\mathbb{S}[\sigma_{mk}]}\mathbb{S}[\sigma_{k}].$
This comes equipped with a map $A\to A(\sqrt[m]{a})$ of $m$-graded $E_{1}$
$\mathbb{S}[\sigma_{mk}]$-algebras, see (4.3).
Since $\pi_{*}(\mathbb{S}[\sigma_{k}])$ is free as a
$\pi_{*}(\mathbb{S}[\sigma_{mk}])$-module, one obtains an isomorphism of
rings:
$\pi_{*}A(\sqrt[m]{a})\cong\pi_{*}(A)[z]/(z^{m}-a).$
Therefore, we say $A(\sqrt[m]{a})$ is obtained from $A$ by adjoining an
$m$-root to $a$.
When $A$ is $p$-local, observe that we have an equivalence of $m$-graded
$E_{1}$ $\mathbb{S}[\sigma_{k}]$-algebras:
$A(\sqrt[m]{a})\simeq
A\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}],$
where $\mathbb{S}_{(p)}[\sigma_{i}]$ denotes the $p$-localization of
$\mathbb{S}[\sigma_{i}]$.
It follows that the weight pieces of $A(\sqrt[m]{a})$ are given by the
following
(4.7) $A(\sqrt[m]{a})_{i}\simeq\Sigma^{ik}A$
for each $0\leq i<m$.
Note that $A(\sqrt[m]{a})$ might depend on the
$\mathbb{S}[\sigma_{mk}]$-algebra structure chosen on $A$. Therefore, every
time we apply Construction 4.6, we fix an $\mathbb{S}[\sigma_{mk}]$-algebra
structure on $A$.
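To illustrate the isomorphism $\pi_{*}A(\sqrt[m]{a})\cong\pi_{*}(A)[z]/(z^{m}-a)$ in a case treated below (our own computation; compare Theorem 4.10), take $A=\ell_{p}$ with $\pi_{*}\ell_{p}\cong\mathbb{Z}_{p}[v_{1}]$ and $|v_{1}|=2(p-1)$, and adjoin a $(p-1)$-root to $v_{1}$, so $m=p-1$ and $k=2$:

```latex
\pi_{*}\,\ell_{p}\bigl(\sqrt[p-1]{v_{1}}\bigr)
  \cong \mathbb{Z}_{p}[v_{1}][z]/(z^{p-1}-v_{1})
  \cong \mathbb{Z}_{p}[z], \qquad |z| = 2,
% which agrees with pi_* ku_p, with z playing the role of the Bott class.
```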
###### Remark 4.8.
If $A$ is an $E_{3}$-ring with even homotopy, one may start with an
$E_{2}$-map $\mathbb{S}[\sigma_{mk}]\to A$ for a given $a\in\pi_{mk}A$ with
$k\geq 0$. By extending scalars, one obtains an $E_{2}$-ring map
$A\wedge\mathbb{S}[\sigma_{mk}]\to A$ that equips $A$ with the structure of an
$A\wedge\mathbb{S}[\sigma_{mk}]$-algebra. Through this, $A(\sqrt[m]{a})$ is
weakly equivalent as an $m$-graded $\mathbb{S}[\sigma_{k}]$-algebra to
$A\wedge_{A\wedge\mathbb{S}[\sigma_{mk}]}A\wedge\mathbb{S}[\sigma_{k}].$
In particular, $A(\sqrt[m]{a})$ admits the structure of an $m$-graded
$A\wedge\mathbb{S}[\sigma_{k}]$-algebra.
###### Remark 4.9.
In general, we do not expect the root adjunction $A\to A(\sqrt[m]{a})$ to
satisfy a universal property. On the other hand, if $A$ is an $E_{3}$-ring,
$A(\sqrt[m]{a})$ is an $A$-algebra and the map
$\pi_{*}A\to\pi_{*}A(\sqrt[m]{a})$ is étale, then it follows by [23, Theorem
1.10] that there is a bijection between homotopy classes of $A$-algebra maps
$A(\sqrt[m]{a})\to B$ and $\pi_{*}A$-algebra maps
$\pi_{*}A(\sqrt[m]{a})\to\pi_{*}B$ for any étale $A$-algebra $B$.
For the following, we fix an $E_{2}$-map
$\mathbb{S}_{(p)}[\sigma_{2(p-1)}]\to\ell$ carrying $\sigma_{2(p-1)}$ to
$v_{1}$.
###### Theorem 4.10.
There is an equivalence
$ku_{p}\simeq\ell_{p}(\sqrt[p-1]{v_{1}})$
of $E_{1}$ $\ell_{p}$-algebras.
###### Proof.
By Remark 4.8 above, $\ell_{p}(\sqrt[p-1]{v_{1}})$ is an $\ell_{p}$-algebra.
Let $L_{p}$ denote the non-connective $p$-completed Adams summand. The $E_{1}$
$L_{p}$-algebra
$\ell_{p}(\sqrt[p-1]{v_{1}})\wedge_{\ell_{p}}L_{p}$
is an étale $E_{1}$ $L_{p}$-algebra in the sense of Hesselholt-Pstragowski
[23] and there is an isomorphism of $\pi_{*}(L_{p})$-algebras
$\pi_{*}(\ell_{p}(\sqrt[p-1]{v_{1}})\wedge_{\ell_{p}}L_{p})\cong\pi_{*}(KU_{p}).$
It follows by [23, Theorem 1.10] that there is an equivalence of
$L_{p}$-algebras
$\ell_{p}(\sqrt[p-1]{v_{1}})\wedge_{\ell_{p}}L_{p}\simeq KU_{p}.$
Through this, $\ell_{p}(\sqrt[p-1]{v_{1}})$ serves as the connective cover of
$KU_{p}$ in $E_{1}$ $\ell_{p}$-algebras. Hence, there is an equivalence of
$E_{1}$ $\ell_{p}$-algebras $ku_{p}\simeq\ell_{p}(\sqrt[p-1]{v_{1}})$. ∎
In Theorem 9.6 we show that the Morava $E$-theory spectrum $E_{n}$ is given by
$\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})$
as an $E_{1}$-ring where $\widehat{E(n)}$ is the $K(n)$-localized Johnson-
Wilson spectrum.
###### Remark 4.11.
In certain cases, it is possible to equip $A(\sqrt[m]{a})$ with the structure
of an $E_{n}$-algebra for $n>1$. For this, one may use the graded $E_{\infty}$
$MU$-algebra $MU[\sigma_{k}]$ [25, Construction 2.6.1] where $k>0$ is even.
Indeed, $MU[\sigma_{k}]$ is the free graded $E_{1}$ $MU$-algebra over
$\Sigma^{k}MU$. There is a map of graded $E_{\infty}$-rings
$MU[\sigma_{2(p^{n}-1)}]:=L_{p^{n}-1}R_{p^{n}-1}MU[\sigma_{2}]\to
MU[\sigma_{2}]$
and Proposition 3.15 provides maps $\mathbb{S}[\sigma_{k}]\to MU[\sigma_{k}]$
of graded $E_{2}$-rings.
It follows by [25, Remark 2.1.2] that a form of $BP\langle n\rangle$ admits
the structure of an $E_{3}$ $MU[\sigma_{2(p^{n}-1)}]$-algebra where
$\sigma_{2(p^{n}-1)}$ acts through $v_{n}\in\pi_{*}BP\langle n\rangle$. We
obtain an equivalence of $(p^{n}-1)$-graded $E_{1}$
$\mathbb{S}[\sigma_{2}]$-algebras
$BP\langle n\rangle(\sqrt[p^{n}-1]{v_{n}}):=BP\langle
n\rangle\wedge_{\mathbb{S}[\sigma_{2(p^{n}-1)}]}\mathbb{S}[\sigma_{2}]\simeq
BP\langle n\rangle\wedge_{MU[\sigma_{2(p^{n}-1)}]}MU[\sigma_{2}].$
This equips $BP\langle n\rangle(\sqrt[p^{n}-1]{v_{n}})$ with the structure of
a $(p^{n}-1)$-graded $E_{3}$ $MU[\sigma_{2}]$-algebra.
### 4.3. The weight zero piece of THH
Here, we prove our first result regarding the topological Hochschild homology
of the ring spectra obtained via root adjunction. Namely, we show that
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ contains
$\operatorname{\textup{THH}}(A)$ as a summand whenever $A$ is $p$-local and
$p\nmid m$.
It follows by [2, Example A.10] that for an $m$-graded $E_{1}$-ring $Y$, the
$m$-grading on $\operatorname{\textup{THH}}(Y)$ is obtained by applying the
cyclic bar construction $b_{\bullet}(Y)$ of $Y$ in the $\infty$-category of
$m$-graded spectra. In simplicial level $s$ and weight $i$, the $m$-graded
cyclic bar construction of $Y$ is given by the following.
$b_{s}(Y)_{i}\simeq\bigvee_{k_{0}+\cdots+k_{s}=i\in\mathbb{Z}/m}Y_{k_{0}}\wedge\cdots\wedge
Y_{k_{s}}$
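For example, an immediate consequence of this formula (our own observation): if $Y$ is concentrated in weight $0$, then a summand indexed by $k_{0}+\cdots+k_{s}=i$ vanishes unless every $k_{j}=0$, so

```latex
b_{s}(Y)_{i}
  \simeq
  \begin{cases}
    Y^{\wedge(s+1)} & i = 0,\\
    0 & i \neq 0,
  \end{cases}
% and hence THH(Y) is again concentrated in weight 0.
```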
Due to [2, Corollary A.15], one has the following equivalence
(4.12) $\operatorname{\textup{THH}}(D(Y))\simeq
D(\operatorname{\textup{THH}}(Y))$
where the functor $D(-)$ provides the underlying spectrum as usual; we often
omit $D$ in our notation. Furthermore, $\operatorname{\textup{THH}}(Y)$ is an
$S^{1}$-equivariant $m$-graded spectrum in a canonical way and the equivalence
above is an equivalence of $S^{1}$-equivariant spectra.
###### Construction 4.13.
Let $R$ be an $E_{2}$-ring and let $S$ be an $E_{1}$ $R$-algebra. For us this
will mean that the pair
$(R,S)\in\operatorname{RMod}^{(2)}(\operatorname{Sp})$, where
$\operatorname{RMod}^{(2)}(\mathcal{C})=\operatorname{Alg}(\operatorname{RMod}(\mathcal{C}))$
for an arbitrary symmetric monoidal $\infty$-category $\mathcal{C}$. Here,
$\operatorname{RMod}(\mathcal{C})$ is the $\infty$-category of pairs $(A,M)$
where $A$ is an $E_{1}$-algebra and
$M\in\operatorname{RMod}_{A}(\mathcal{C})$. Thus, objects in
$\operatorname{RMod}^{(2)}(\mathcal{C})$ may be identified with pairs $(A,M)$
where $A$ is an $E_{2}$-algebra, and $M$ is an $E_{1}$ $A$-algebra in
$\mathcal{C}$.
We remark that in general, $\operatorname{RMod}^{(2)}(\mathcal{C})$ may be
written as $\operatorname{Alg}_{\mathcal{O}}(\mathcal{C})$ where $\mathcal{O}$
is the _tensor product_ of operads
$\mathcal{O}:=\operatorname{RMod}\otimes E_{1}.$
This tensor product of operads, studied in depth in [36, Section 2.2.5], is
symmetric and satisfies the following universal property at the level of
algebra objects:
$\operatorname{Alg}_{\mathcal{O}}(\mathcal{C})\simeq\operatorname{Alg}_{E_{1}}(\operatorname{Alg}_{\operatorname{RMod}}(\mathcal{C}))\simeq\operatorname{Alg}_{\operatorname{RMod}}(\operatorname{Alg}_{E_{1}}(\mathcal{C}))$
Hence, applying the discussion to $\mathcal{C}=\operatorname{Sp}$ and $R$ and
$S$ as above, we may view $S$ as a right $R$-module in $E_{1}$-algebras.
Since $\operatorname{\textup{THH}}$ is a symmetric monoidal functor from
$E_{1}$-rings to spectra [41, Section IV.2] we deduce that
$\operatorname{\textup{THH}}(S)$ is a right
$\operatorname{\textup{THH}}(R)$-module.
###### Proposition 4.14.
Let $F$ be an $m$-graded $E_{1}$ $E$-algebra and $E\to F^{\prime}$ be a map of
$m$-graded $E_{2}$-rings. There is a natural equivalence of $m$-graded right
$\operatorname{\textup{THH}}(F^{\prime})$-modules in $S^{1}$-equivariant
spectra:
$\operatorname{\textup{THH}}(F\wedge_{E}F^{\prime})\simeq\operatorname{\textup{THH}}(F)\wedge_{\operatorname{\textup{THH}}(E)}\operatorname{\textup{THH}}(F^{\prime}),$
whose underlying ungraded equivalence is that of right
$\operatorname{\textup{THH}}(F^{\prime})$-modules in cyclotomic spectra. If
$E$ and $F$ are concentrated in weight zero, then we have the following.
$\operatorname{\textup{THH}}(F\wedge_{E}F^{\prime})_{i}\simeq\operatorname{\textup{THH}}(F)\wedge_{\operatorname{\textup{THH}}(E)}(\operatorname{\textup{THH}}(F^{\prime})_{i})$
###### Proof.
Let us recall that the functor
$\operatorname{\textup{THH}}:\operatorname{Alg}_{\operatorname{Sp}}\to\operatorname{CycSp}$
is symmetric monoidal. Furthermore, it commutes with sifted colimits; indeed,
this can be seen from the fact that it decomposes into a composition of
functors consisting of tensor products and realizations of simplicial
objects, both of which commute with sifted colimits. Thus there is a
natural equivalence
$\displaystyle\operatorname{\textup{THH}}(F\wedge_{E}F^{\prime})$
$\displaystyle\simeq\operatorname{\textup{THH}}(||\operatorname{Bar}_{\bullet}(F,E,F^{\prime})||)$
$\displaystyle\simeq||\operatorname{Bar}_{\bullet}(\operatorname{\textup{THH}}(F),\operatorname{\textup{THH}}(E),\operatorname{\textup{THH}}(F^{\prime}))||$
$\displaystyle\simeq\operatorname{\textup{THH}}(F)\wedge_{\operatorname{\textup{THH}}(E)}\operatorname{\textup{THH}}(F^{\prime}).$
Here, the relative tensor product is computed by the double-sided bar
construction, which $\operatorname{\textup{THH}}$ preserves as a sifted
colimit; at the level of underlying spectra, this bar construction is given
by the bilinear pairing
${}_{F}\operatorname{BMod}_{E}\times{}_{E}\operatorname{BMod}_{F^{\prime}}\to{}_{F}\operatorname{BMod}_{F^{\prime}}$
corresponding to the relative tensor product. Since
$\operatorname{\textup{THH}}$ preserves the sifted colimits computing this
relative tensor product, the equivalence above is compatible with right
$\operatorname{\textup{THH}}(F^{\prime})$-module structures. The analogous
claims all hold when accounting for additional gradings, by recalling that
$\operatorname{\textup{THH}}$ promotes to a sifted colimit preserving
symmetric monoidal functor from algebras in graded spectra to
$S^{1}$-equivariant objects in graded spectra. In particular, if $E$ and $F$
are concentrated in weight zero, we deduce the equivalence
$\operatorname{\textup{THH}}(F\wedge_{E}F^{\prime})_{i}\simeq\operatorname{\textup{THH}}(F)\wedge_{\operatorname{\textup{THH}}(E)}(\operatorname{\textup{THH}}(F^{\prime})_{i})$
of graded $\operatorname{\textup{THH}}(F)$-modules. ∎
###### Remark 4.15.
The $m=1$ case of the proposition above recovers the ungraded statement.
One may consider $\mathbb{S}[\sigma_{k}]$ as an $E_{1}$-ring obtained by
adjoining an $m$th root to $\mathbb{S}[\sigma_{mk}]$. Proposition 4.17
identifies the weight zero piece of
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])$. Before
Proposition 4.17, we state and prove a well known fact.
###### Lemma 4.16.
Let $\varphi\colon\thinspace M\to N$ be a map between bounded below spectra.
Then $\varphi$ is an equivalence if and only if $H\mathbb{Z}\wedge\varphi$ is
an equivalence. If furthermore $M$ and $N$ are $p$-local, then $\varphi$ is an
equivalence if and only if $H\mathbb{Z}_{(p)}\wedge\varphi$ is an equivalence.
###### Proof.
Let $K$ be the fiber of $\varphi$ and, assuming $K\not\simeq 0$, let $i$ be the smallest integer such that
$\pi_{i}K\neq 0$. Due to the Tor spectral sequence of [15, Theorem IV.4.1], we
have $\pi_{i}(H\mathbb{Z}\wedge K)=\pi_{i}K$. Therefore, if $H\mathbb{Z}\wedge
K\simeq 0$ then $K\simeq 0$ and $\varphi$ is an equivalence.
If $M$ and $N$ are $p$-local, then $\varphi$ is an equivalence if and only if
$\mathbb{S}_{(p)}\wedge\varphi$ is an equivalence. It follows by the previous
result that $\varphi$ is an equivalence if and only if
$H\mathbb{Z}\wedge\mathbb{S}_{(p)}\wedge\varphi\simeq
H\mathbb{Z}_{(p)}\wedge\varphi$ is an equivalence. ∎
For the following, let $k\geq 0$ be even and let $m>1$. Furthermore, fix a map
of $m$-graded $E_{2}$-rings
$\mathbb{S}_{(p)}[\sigma_{mk}]\to\mathbb{S}_{(p)}[\sigma_{k}]$ provided by
Proposition 3.5 (Remark 3.7 for $k=0$).
###### Proposition 4.17.
In the situation above, assume that $p\nmid m$. The induced map
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}])_{0}\xrightarrow{\simeq}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])_{0}$
is an equivalence of $E_{1}$-rings. Since $\mathbb{S}_{(p)}[\sigma_{mk}]$ is
concentrated in weight zero, we obtain the following chain of equivalences
$D(\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]))\simeq\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}])_{0}\xrightarrow{\simeq}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])_{0}$
of $E_{1}$-rings using Lemma 2.3.
###### Proof.
It suffices to prove that the map
$H\mathbb{Z}_{(p)}\wedge\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}])_{0}\to
H\mathbb{Z}_{(p)}\wedge\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])_{0}$
is an equivalence, see Lemma 4.16.
By the base change formula for THH, this is equivalent to the following map
$\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}(H\mathbb{Z}_{(p)}[\sigma_{mk}])\to\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}(H\mathbb{Z}_{(p)}[\sigma_{k}])$
being an equivalence in weight zero. Here, $H\mathbb{Z}_{(p)}[\sigma_{k}]$
denotes the free $H\mathbb{Z}_{(p)}$-algebra on $\mathbb{S}_{(p)}^{k}$ given
by $H\mathbb{Z}_{(p)}\wedge\mathbb{S}_{(p)}[\sigma_{k}]$.
The map $H\mathbb{Z}_{(p)}[\sigma_{mk}]\to H\mathbb{Z}_{(p)}[\sigma_{k}]$
induces a map
$\phi^{r}\colon\thinspace E^{r}\to{F}^{r}$
from the Bökstedt spectral sequence computing
$\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}(H\mathbb{Z}_{(p)}[\sigma_{mk}])$
to the Bökstedt spectral sequence computing
$\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}(H\mathbb{Z}_{(p)}[\sigma_{k}])$.
Since the weight grading on the THH of an $m$-graded ring spectrum comes from
a weight grading on the corresponding cyclic bar construction, the Bökstedt
spectral sequence computing THH of an $m$-graded ring spectrum admits an
$m$-grading, i.e. it splits into $m$ summands in a canonical way. Therefore,
in our situation, it is sufficient to show that $\phi^{2}$ is an isomorphism
on weight zero.
We have
(4.18)
$\phi^{2}\colon\thinspace\mathbb{Z}_{(p)}[\sigma_{mk}]\otimes\Lambda_{\mathbb{Z}_{(p)}}(d(\sigma_{mk}))\to\mathbb{Z}_{(p)}[\sigma_{k}]\otimes\Lambda_{\mathbb{Z}_{(p)}}(d(\sigma_{k}))$
where $d$ denotes the Connes operator. The degrees of the classes above are
given by the following.
$\text{deg}(\sigma_{mk})=(0,mk)\qquad\text{deg}(d(\sigma_{mk}))=(1,mk)$
$\text{deg}(\sigma_{k})=(0,k)\qquad\text{deg}(d(\sigma_{k}))=(1,k)$
Furthermore, $\sigma_{mk}$ and $d(\sigma_{mk})$ are in weight $0$ and
$\sigma_{k}$ and $d(\sigma_{k})$ are in weight $1$. In particular, all of
$E^{2}$ is weight zero and the weight zero piece of $F^{2}$ is the
$\mathbb{Z}_{(p)}$-module generated by the classes $\sigma_{k}^{im}$ and
$\sigma_{k}^{(i+1)m-1}d(\sigma_{k})$ over $i\geq 0$.
Since $\phi^{2}(\sigma_{mk})=\sigma_{k}^{m}$, we obtain that
$\phi^{2}(d(\sigma_{mk}))=d(\phi^{2}(\sigma_{mk}))=d(\sigma_{k}^{m})=m\sigma_{k}^{m-1}d(\sigma_{k}).$
Therefore, we have
$\phi^{2}(\sigma_{mk}^{i})=\sigma_{k}^{im}\qquad\text{and}\qquad\phi^{2}(\sigma_{mk}^{i}d(\sigma_{mk}))=m\sigma_{k}^{(i+1)m-1}d(\sigma_{k}).$
Since $p\nmid m$, $m$ is a unit in $\mathbb{Z}_{(p)}$. Using this, one
observes that $\phi^{2}$ is an isomorphism after restricting and
corestricting to weight zero, as desired. ∎
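As an illustrative sanity check, one can specialize the formulas in the proof above to $m=2$ with $p$ odd (a sketch; the generators are exactly those listed in the proof):

```latex
% Take m = 2 and p odd, so that 2 is a unit in \mathbb{Z}_{(p)}.
% The weight-zero part of F^2 is generated over \mathbb{Z}_{(p)} by
% \sigma_k^{2i} and \sigma_k^{2i+1} d(\sigma_k), and the map \phi^2 sends
\phi^{2}(\sigma_{2k}^{i}) = \sigma_{k}^{2i},
\qquad
\phi^{2}\bigl(\sigma_{2k}^{i}\, d(\sigma_{2k})\bigr)
  = 2\,\sigma_{k}^{2i+1}\, d(\sigma_{k}),
% carrying the generators of E^2 onto the weight-zero generators of F^2
% up to the unit 2; hence \phi^2 is an isomorphism in weight zero.
```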
In the situation of Hypothesis 4.4, $A\to A(\sqrt[m]{a})$ is a map of
$m$-graded $E_{1}$-rings and $A$ is concentrated in weight zero. Therefore,
there is a map
(4.19)
$\operatorname{\textup{THH}}(A)\to\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}$
where $A$ above denotes the underlying $E_{1}$-ring of $A$.
###### Theorem 4.20.
Assume Hypothesis 4.4 and that $A$ is $p$-local and $p\nmid m$. Then (4.19) is
an equivalence.
###### Proof.
Recall that $A(\sqrt[m]{a})$ is given by
$A\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}]$
where $A$ and $\mathbb{S}_{(p)}[\sigma_{mk}]$ are concentrated in weight zero.
Due to Proposition 4.14, we have
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}\simeq\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}])}(\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])_{0})$
and it follows by Proposition 4.17 that the map
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}])\xrightarrow{\simeq}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])_{0}$
is an equivalence. This identifies
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}$ with
$\operatorname{\textup{THH}}(A)$ as desired. ∎
## 5\. Adjoining roots and algebraic $K$-theory
We now prove Theorem 1.1 from the introduction. For the rest of this section,
assume Hypothesis 4.4. We established that $A(\sqrt[m]{a})$ is an $m$-graded
ring spectrum and therefore $\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ is
an $S^{1}$-equivariant $m$-graded spectrum, see (4.12). One might define
$\operatorname{\textup{TC}^{-}}(A(\sqrt[m]{a}))$ as an $m$-graded spectrum
given by:
$\operatorname{\textup{TC}^{-}}(A(\sqrt[m]{a}))_{i}\simeq\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{i}^{hS^{1}}.$
Since $m$ is finite, the underlying spectrum of an $m$-graded spectrum,
provided by the functor $D$, is given by a finite coproduct which is
equivalent to the corresponding finite product. In particular, $D$ commutes
with all limits and colimits. Because of this, we have
$D(\operatorname{\textup{TC}^{-}}(A(\sqrt[m]{a})))\simeq\operatorname{\textup{TC}^{-}}(D(A(\sqrt[m]{a})))$
and therefore, we often omit $D$ in our notation.
Similarly, $\operatorname{\textup{TP}}(A(\sqrt[m]{a}))$ and
$(\operatorname{\textup{THH}}(A(\sqrt[m]{a}))^{tC_{p}})^{hS^{1}}$ admit the
structure of $m$-graded spectra and these constructions commute with the
functor $D$ as above. Combining this, with Theorem 4.20, we obtain the
following result.
###### Theorem 5.1.
Assume Hypothesis 4.4, that $A$ is $p$-local, and that $p\nmid m$. The maps
$\displaystyle\operatorname{\textup{TC}^{-}}(A)\xrightarrow{\simeq}\operatorname{\textup{TC}^{-}}(A(\sqrt[m]{a}))_{0}$
$\displaystyle\operatorname{\textup{TP}}(A)\xrightarrow{\simeq}\operatorname{\textup{TP}}(A(\sqrt[m]{a}))_{0}$
$\displaystyle(\operatorname{\textup{THH}}(A)^{tC_{p}})^{hS^{1}}\xrightarrow{\simeq}((\operatorname{\textup{THH}}(A(\sqrt[m]{a}))^{tC_{p}})^{hS^{1}})_{0}$
induced by $A\to A(\sqrt[m]{a})$ are all equivalences. ∎
When $A$ is connective and $p$-local, $A(\sqrt[m]{a})$ is also connective and
$p$-local. By [41, Corollary 1.5] (and the discussion afterwards), the
topological cyclic homology of $A(\sqrt[m]{a})$ is defined via the following
fiber sequence.
(5.2)
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))\to\operatorname{\textup{THH}}(A(\sqrt[m]{a}))^{hS^{1}}\xrightarrow{\varphi_{p}^{hS^{1}}-can}(\operatorname{\textup{THH}}(A(\sqrt[m]{a}))^{tC_{p}})^{hS^{1}}$
As mentioned above, the middle term and the third term above admit canonical
splittings into $m$-cofactors. Furthermore, $can$ respects this splitting
since it only depends on the $S^{1}$-equivariant structure of
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$.
However, $\operatorname{\textup{TC}}(A(\sqrt[m]{a}))$ does not necessarily split
into $m$-cofactors. This is due to the fact that the Frobenius map does not
necessarily respect the grading. Indeed, the Frobenius is given by maps
$\varphi_{p}\colon\thinspace\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{i}\to\operatorname{\textup{THH}}(A(\sqrt[m]{a}))^{tC_{p}}_{ip},$
see [2, Corollary A.9]. On the other hand, we obtain the following splitting
of $\operatorname{\textup{TC}}(A(\sqrt[m]{a}))$.
###### Construction 5.3.
Assume Hypothesis 4.4 and that $A$ is connective and $p$-local where $p\nmid
m$. In this situation, $p$ is a non-zero divisor in $\mathbb{Z}/m$. Therefore,
the Frobenius map on $\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ carries
pieces of non-zero weight to non-zero weight pieces. Moreover, $\varphi_{p}$
carries weight zero to weight zero. Therefore, the map $\varphi_{p}-can$
splits as a coproduct of its restriction to weight zero and its restriction
to non-zero weight. In particular, the fiber sequence in (5.2)
admits a splitting as follows.
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{0}\vee\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{1}\to\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}^{hS^{1}}\vee\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{>0}^{hS^{1}}\\\
\xrightarrow{(\varphi_{p})_{0}-can_{0}\vee(\varphi_{p})_{>0}-can_{>0}}(\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}^{tC_{p}})^{hS^{1}}\vee(\operatorname{\textup{THH}}(A(\sqrt[m]{a}))^{tC_{p}})_{>0}^{hS^{1}}$
Here, $(-)_{>0}$ denotes restriction to weight not equal to $0$. We have
(5.4)
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))\simeq\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{0}\vee\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{1}$
where $\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{0}$ denotes the fiber of
the map $(\varphi_{p})_{0}-can_{0}$ and
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{1}$ denotes the fiber of the map
$(\varphi_{p})_{>0}-can_{>0}$.
###### Remark 5.5.
There are interesting cases where one obtains further splittings of the
topological cyclic homology spectrum
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))$. For instance, if $p=1$ in
$\mathbb{Z}/m$, then one obtains that
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))$ splits into $m$-summands. This
happens to be the case when $m=p-1$ or when $p$ is odd and $m=2$. We exploit
this in Construction 7.3 to obtain a splitting of
$\operatorname{\textup{TC}}(ku_{p})$ into $p-1$ summands. Moreover, if
$m=p^{n}-1$, then one obtains an underlying $(p-1)$-grading of
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ by Kan extending through
$\mathbb{Z}/(p^{n}-1)\to\mathbb{Z}/(p-1)$. This provides a $(p-1)$-grading for
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))$.
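The first splitting claim in the remark above can be spelled out (a sketch, using only the weight behaviour of $\varphi_{p}$ recorded after (5.2)):

```latex
% If p = 1 in \mathbb{Z}/m, then the Frobenius
%   \varphi_p : THH(A(\sqrt[m]{a}))_i \to (THH(A(\sqrt[m]{a}))^{tC_p})_{ip}
% satisfies ip = i in \mathbb{Z}/m, so \varphi_p - can restricts to each
% weight i separately, and the fiber sequence (5.2) splits weightwise:
\operatorname{TC}(A(\sqrt[m]{a}))
  \simeq \bigvee_{i \in \mathbb{Z}/m}
    \operatorname{fib}\Bigl(
      \operatorname{THH}(A(\sqrt[m]{a}))_{i}^{hS^{1}}
      \xrightarrow{\ \varphi_{p}-can\ }
      \bigl(\operatorname{THH}(A(\sqrt[m]{a}))^{tC_{p}}\bigr)_{i}^{hS^{1}}
    \Bigr).
```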
###### Theorem 5.6.
Assume Hypothesis 4.4 with $p\nmid m$ and that $A$ is $p$-local and
connective. Under the equivalence (5.4), the canonical map
$\operatorname{\textup{TC}}(A)\to\operatorname{\textup{TC}}(A(\sqrt[m]{a}))$
is equivalent to the inclusion of the first wedge summand.
###### Proof.
Since $A$ is concentrated in weight zero, the map
$\operatorname{\textup{TC}}(A)\to\operatorname{\textup{TC}}(A(\sqrt[m]{a}))$
factors through the map
(5.7)
$\operatorname{\textup{TC}}(A)\to\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{0}$
induced by the canonical map
$\operatorname{\textup{THH}}(A)\to\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}$.
The map
$\operatorname{\textup{THH}}(A)\to\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}$
of cyclotomic spectra is an equivalence due to Theorem 4.20. Considering the
construction of $\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{0}$, one observes
that this equivalence induces an equivalence between the fiber sequences
defining $\operatorname{\textup{TC}}(A)$ and
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{0}$. In other words, (5.7) is an
equivalence as desired. ∎
Finally, we obtain the desired splitting for $K(A(\sqrt[m]{a}))$.
###### Theorem 5.8 (Theorem 1.1).
Assume Hypothesis 4.4 with $p\nmid m$ and $k>0$. Furthermore, assume that $A$
is $p$-local and connective. In this situation, the following map
$K(A)\to K(A(\sqrt[m]{a}))$
is the inclusion of a wedge summand.
###### Proof.
Since $\lvert a\rvert=mk$ and since $k>0$, we have
(5.9) $\pi_{0}A(\sqrt[m]{a})=\pi_{0}A.$
We start by constructing a map of $m$-graded $E_{1}$-algebras
(5.10) $A(\sqrt[m]{a})\to H\pi_{0}A$
that induces an isomorphism on $\pi_{0}$ where $H\pi_{0}A$ is concentrated in
weight $0$. Weight $0$ Postnikov truncation [25, Lemma B.0.6] provides a map
of graded $E_{2}$-rings $\mathbb{S}[\sigma_{k}]\to\mathbb{S}$ that we consider
as a map of $m$-graded $E_{2}$-rings by left Kan extending through
$\mathbb{Z}\to\mathbb{Z}/m$.
This provides a map of $m$-graded $E_{1}$-rings
$A(\sqrt[m]{a})\simeq
A\wedge_{\mathbb{S}[\sigma_{mk}]}\mathbb{S}[\sigma_{k}]\to
A\wedge_{\mathbb{S}[\sigma_{mk}]}\mathbb{S}$
(see (4.3)) where the right hand side is concentrated in weight $0$.
Postcomposing with the ordinary Postnikov truncation, we obtain (5.10).
Due to the Dundas-Goodwillie-McCarthy theorem [13], there is a pullback square
${K(A(\sqrt[m]{a}))}$${\operatorname{\textup{TC}}(A(\sqrt[m]{a}))\simeq\operatorname{\textup{TC}}(A)\vee\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{1}}$${K(H\pi_{0}A)}$${\operatorname{\textup{TC}}(H\pi_{0}A)}$
provided by the map (5.10). The equivalence in the upper right corner follows
by Construction 5.3 and Theorem 5.6.
The map $A(\sqrt[m]{a})\to H\pi_{0}A$ induces a map of $m$-graded spectra
$f\colon\thinspace\operatorname{\textup{THH}}(A(\sqrt[m]{a}))\to\operatorname{\textup{THH}}(H\pi_{0}A).$
Since $H\pi_{0}A$ is concentrated in weight zero,
$\operatorname{\textup{THH}}(H\pi_{0}A)$ is also concentrated in weight zero.
Therefore, the map $f$ is trivial on
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{>0}$. This shows that the right
vertical map above induces the trivial map on
$\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{1}$. Using this, we obtain that
the pullback square above splits as a coproduct of the pullback squares
${K(A)}$${\operatorname{\textup{TC}}(A)}$${K(H\pi_{0}A)}$${\operatorname{\textup{TC}}(H\pi_{0}A)}$
and
${\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{1}}$${\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{1}}$${*}$${*.}$$\scriptstyle{\simeq}$
This shows that
$K(A(\sqrt[m]{a}))\simeq
K(A)\vee\operatorname{\textup{TC}}(A(\sqrt[m]{a}))_{1}$
as desired.
∎
Using the Purity theorem for algebraic $K$-theory, we obtain the following for
non-connective $A$.
###### Corollary 5.11.
Assume Hypothesis 4.4 with $p\nmid m$ and $k>0$. If $A$ is $p$-local, then the
map
$L_{T(i)}K(A)\to L_{T(i)}K(A(\sqrt[m]{a}))$
is the inclusion of a wedge summand for every $i\geq 2$. In particular, if $A$
is of height larger than $0$ and $A$ satisfies the redshift conjecture, then
$A(\sqrt[m]{a})$ also satisfies the redshift conjecture.
###### Proof.
Let $cA$ denote the connective cover of $A$ in
$\mathbb{S}[\sigma_{mk}]$-algebras. We consider the following commuting
diagram of $m$-graded $E_{1}$-rings.
${cA}$${A}$${(cA)(\sqrt[m]{a})}$${A(\sqrt[m]{a})}$
Every spectrum with bounded-above homotopy is $T(i)$-locally trivial for every
$i\geq 1$. Taking fibers, one obtains that the horizontal arrows above are
$T(i)$-equivalences for every $i\geq 1$, see (4.7).
It follows by [31, Purity Theorem] that the horizontal maps above induce
$T(i)$-equivalences in algebraic $K$-theory for every $i\geq 2$. The result
follows by applying Theorem 5.8 to the left vertical map. ∎
## 6\. A variant of logarithmic THH
Here, we introduce our definition of logarithmic THH and identify
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ using
$\operatorname{\textup{THH}}(A)$ and logarithmic THH of $A$ whenever $A$ is
$p$-local and $p\nmid m$. Through our definition, logarithmic THH admits a
canonical structure of a cyclotomic spectrum; in upcoming work, Devalapurkar
and the third author develop a very general notion of logarithmic structures
for $E_{2}$-algebras and a corresponding theory of log THH which subsumes the
definition we use here. This will in particular recover the variant due to
Rognes, which is defined by way of the replete bar construction, cf. [46, 47].
Our definition of log THH starts with a definition of the log THH of the free
algebra $\mathbb{S}[\sigma_{k}]$ where $k\geq 0$ is even as before. We
consider $\sigma_{k}$ to be in weight 1.
For a graded $E_{n}$-ring spectrum $E$, we denote the _weight connective
cover_ of $E$ by $E_{\geq 0}$. Indeed, the weight connective cover is obtained
by restricting and then left Kan extending through the inclusion
$\mathbb{N}\to\mathbb{Z}$. The counit of this adjunction provides a map
$E_{\geq 0}\to E$ of graded $E_{n}$-algebras.
###### Construction 6.1.
Analogous to Construction 3.3, let $\mathbb{S}[\sigma_{k}^{{\pm
1}}]:=\operatorname{sh}^{k}(\mathbb{S}[t^{\pm 1}])$. The graded
$E_{\infty}$-map $\mathbb{S}[t]\to\mathbb{S}[t^{\pm 1}]$ provides a graded
$E_{2}$-map $\mathbb{S}[\sigma_{k}]\to\mathbb{S}[\sigma_{k}^{\pm 1}]$.
Furthermore, by the definition of the shearing functor,
$\mathbb{S}[\sigma_{k}^{\pm 1}]$ is indeed given by $\phi^{k}$ of Variant 3.2;
in particular, $\mathbb{S}[\sigma_{mk}^{\pm 1}]$ is the restriction of
$\mathbb{S}[\sigma_{k}^{\pm 1}]$ along $\mathbb{Z}\xrightarrow{\cdot
m}\mathbb{Z}$. Applying the adjunction $L_{m}\dashv R_{m}$ induced by $\cdot
m$ to the map $\mathbb{S}[\sigma_{mk}]\to\mathbb{S}[\sigma_{k}]$, one observes
that $\mathbb{S}[\sigma_{mk}]$ is also the restriction of
$\mathbb{S}[\sigma_{k}]$ along $\cdot m$. Therefore, the counit of
$L_{m}\dashv R_{m}$ provides a commutative diagram of graded $E_{2}$-rings:
${\mathbb{S}[\sigma_{mk}]}$${\mathbb{S}[\sigma_{mk}^{\pm
1}]}$${\mathbb{S}[\sigma_{k}]}$${\mathbb{S}[\sigma_{k}^{\pm 1}].}$
###### Remark 6.2.
One may also take weight connective covers in the $\infty$-category of graded
$S^{1}$-equivariant spectra by using the left Kan extension/restriction
adjunction induced by $\mathbb{N}^{ds}\to\mathbb{Z}$. This provides a map
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}^{\pm
1}])_{\geq 0}$
of graded $E_{1}$-algebras in $S^{1}$-equivariant spectra factoring the map
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}^{\pm
1}])$.
The following is analogous to the description of the replete bar construction
of commutative $\mathcal{J}$-space monoids generated by a single element; cf.
[46, Proposition 3.21], [50, Section 8.5] and [47, Sections 6 and 7].
###### Definition 6.3.
Let $k\geq 0$ be even. The logarithmic THH of $\mathbb{S}[\sigma_{k}]$ with
respect to $\sigma_{k}\in\pi_{k}\mathbb{S}[\sigma_{k}]$ is the weight
connective cover of the topological Hochschild homology of
$\mathbb{S}[\sigma_{k}^{\pm 1}]$. In other words, it is the
$S^{1}$-equivariant $E_{1}$-algebra:
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k}):=\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}^{\pm
1}])_{\geq 0}.$
Similarly, the $p$-local counterpart is defined as follows.
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k}):=\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}^{\pm
1}])_{\geq 0}$
The following example provides a justification for this definition of
logarithmic THH by showing that its $H\mathbb{Z}$-homology provides what
should be the logarithmic Hochschild homology of the free algebra
$\mathbb{Z}[\sigma_{k}]$, cf. [27, Example 10.3].
###### Example 6.4.
Considering $H\mathbb{Z}$ as a graded $E_{\infty}$-algebra concentrated in
weight $0$, we deduce that
$H\mathbb{Z}\wedge(\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}^{\pm
1}])_{\geq
0})\simeq(H\mathbb{Z}\wedge\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}^{\pm
1}]))_{\geq 0}.$
Therefore,
$H\mathbb{Z}_{*}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$
is given by the weight connective cover of
(6.5)
$\operatorname{\textup{THH}}^{H\mathbb{Z}}_{*}(H\mathbb{Z}[\sigma_{k}^{\pm
1}])\cong\mathbb{Z}[\sigma_{k}^{\pm 1}]\otimes\Lambda(d\sigma_{k})$
where $d\sigma_{k}$ is of weight $1$ and degree $k+1$ and $\sigma_{k}$ is of
weight $1$ and degree $k$. The isomorphism above follows from the usual
Bökstedt spectral sequence considerations together with the HKR theorem.
Taking the weight connective cover of (6.5), we obtain:
$H\mathbb{Z}_{*}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})\cong\mathbb{Z}[\sigma_{k}]\otimes\Lambda(\textup{dlog}\sigma_{k})$
where $\text{dlog}\sigma_{k}$ is of weight $0$ and homotopical degree $1$ and
it corresponds to $(d\sigma_{k})/\sigma_{k}$. Furthermore, the map
$H\mathbb{Z}_{*}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to
H\mathbb{Z}_{*}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$
carries $d\sigma_{k}$ to $d\sigma_{k}=\sigma_{k}\text{dlog}\sigma_{k}$.
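A quick bookkeeping check on the identification above (using the weights and degrees assigned in this example):

```latex
% d\sigma_k has (weight, degree) = (1, k+1); on the right-hand side,
% \sigma_k has (1, k) and \mathrm{dlog}\,\sigma_k has (0, 1), so
\operatorname{wt}(\sigma_{k}\,\mathrm{dlog}\,\sigma_{k}) = 1 + 0 = 1,
\qquad
\deg(\sigma_{k}\,\mathrm{dlog}\,\sigma_{k}) = k + 1,
% consistent with d\sigma_k = \sigma_k\,\mathrm{dlog}\,\sigma_k.
```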
Recall from Construction 4.13 that when $A$ is an
$\mathbb{S}[\sigma_{k}]$-algebra, $\operatorname{\textup{THH}}(A)$ admits the
structure of a right
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-module. We use this
structure in the following definition. Recall that Proposition 4.5 provides
various cases of interest where the assumptions on $A$ in the following
definition are satisfied.
###### Definition 6.6.
Let $A$ be an $E_{1}$ $\mathbb{S}[\sigma_{k}]$-algebra and assume that the
unit map $\mathbb{S}[\sigma_{k}]\to A$ carries
$\sigma_{k}\in\pi_{k}\mathbb{S}[\sigma_{k}]$ to $a\in\pi_{k}A$ with even
$k\geq 0$. We define the logarithmic THH of $A$ relative to $a$ as the
following $S^{1}$-equivariant spectrum.
$\operatorname{\textup{THH}}(A\mid
a):=\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$
If $A$ is assumed to be $p$-local, we use the following equivalent definition
$\operatorname{\textup{THH}}(A\mid
a):=\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k}).$
The definition of logarithmic THH we provide above is analogous to the
definitions used in [46, 50, 47].
###### Remark 6.7.
We remark that
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$ should be
a cyclotomic spectrum as the Frobenius maps of THH multiply the weight by $p$
and this should provide $\operatorname{\textup{THH}}(A\mid a)$ above with the
structure of a cyclotomic spectrum. However, since we do not explicitly need
this for our application, establishing it would take us too far afield, and
so we leave the details to the future work of Devalapurkar and the third
author.
Since the definition of logarithmic THH is given by the extension of scalars
functor:
$-\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})\colon\thinspace\operatorname{RMod}_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}\to\operatorname{RMod}_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})},$
corresponding to the $E_{1}$-algebra map
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$,
we deduce that $\operatorname{\textup{THH}}(A\mid a)$ is equipped with the
structure of a right
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$-module.
Furthermore, the unit of the adjunction given by the extension of scalars
functor above and the corresponding forgetful functor provides a map
(6.8) $\operatorname{\textup{THH}}(A)\to\operatorname{\textup{THH}}(A\mid a)$
of right $\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-modules, see
(4.3).
###### Remark 6.9.
Using $MU[\sigma_{k}]$ mentioned in Remark 4.11, it is possible to equip
logarithmic THH with the structure of an $E_{n}$-algebra for $n>0$ in
favorable cases. For instance, for the $E_{3}$
$MU[\sigma_{2(p^{n}-1)}]$-algebra form of $BP\langle n\rangle$ constructed in
[25], $\operatorname{\textup{THH}}(BP\langle n\rangle\mid v_{n})$ admits the
structure of an $E_{1}$-ring. Indeed, using the map of $E_{2}$-rings
$\operatorname{\textup{THH}}(MU[\sigma_{2(p^{n}-1)}])\to\operatorname{\textup{THH}}(BP\langle
n\rangle)$, we obtain an $E_{1}$-ring:
$\operatorname{\textup{THH}}(BP\langle
n\rangle)\wedge_{\operatorname{\textup{THH}}(MU[\sigma_{2(p^{n}-1)}])}\operatorname{\textup{THH}}(MU[\sigma_{2(p^{n}-1)}^{\pm
1}])_{\geq 0},$
equivalent to $\operatorname{\textup{THH}}(BP\langle n\rangle\mid v_{n})$.
This equivalence follows by the following chain of equivalences
$\begin{split}\operatorname{\textup{THH}}&(BP\langle
n\rangle)\wedge_{\operatorname{\textup{THH}}(MU[\sigma_{2(p^{n}-1)}])}\operatorname{\textup{THH}}(MU[\sigma_{2(p^{n}-1)}^{\pm
1}])_{\geq 0}\\\ &\simeq\operatorname{\textup{THH}}(BP\langle
n\rangle)\wedge_{\operatorname{\textup{THH}}(MU)\wedge\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{2(p^{n}-1)}])}\operatorname{\textup{THH}}(MU)\wedge\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{2(p^{n}-1)}^{\pm
1}])_{\geq 0}\\\ &\simeq\operatorname{\textup{THH}}(BP\langle
n\rangle)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{2(p^{n}-1)}])}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{2(p^{n}-1)}])\end{split}$
obtained from the equivalence of $E_{2}$ $MU$-algebras
$MU[\sigma_{2(p^{n}-1)}]\simeq MU\wedge\mathbb{S}[\sigma_{2(p^{n}-1)}]$
mentioned in Remark 4.11.
Furthermore, Hahn and Yuan [26, 1.11 and 1.12] show that there is an
$E_{\infty}$-map $MU[\sigma_{2}]\to ku_{p}$ for $p=2$ and claim that their
methods provide such a map for odd primes too. In this situation,
$\operatorname{\textup{THH}}(ku_{p}\mid u_{2})$ is equipped with the structure
of an $E_{\infty}$-ring (by arguing as above) where $u_{2}$ denotes the Bott
element. Note that the logarithmic THH of $ku_{p}$ relative to $u_{2}$ is also
constructed as an $E_{\infty}$-ring in [48].
###### Remark 6.10.
In work in progress, S. Devalapurkar and the second author show in a general
context, that for every $E_{2}$-ring with even homotopy, logarithmic THH, as
in our definition, may be equipped with a canonical $E_{1}$-algebra structure
in cyclotomic spectra.
To study the log THH of $E_{1}$-rings obtained via root adjunctions, we use
the following constructions.
###### Construction 6.11.
Using Construction 6.1 and the weight connective cover adjunction mentioned in
Remark 6.2, we obtain the following commuting diagram of graded
$E_{1}$-algebras.
(6.12)
${\mathbb{S}[\sigma_{mk}]}$${\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{mk}])}$${\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{mk}]\mid\sigma_{mk})}$${\mathbb{S}[\sigma_{k}]}$${\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}$${\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})}$
###### Construction 6.13.
Assume Hypothesis 4.4. By Proposition 4.14, there is an equivalence:
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))\simeq\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{mk}])}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]).$
This equips $\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ with the structure
of a right $\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-module in
$m$-graded spectra. Considering the map
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$
as a map of $m$-graded $E_{1}$-ring spectra, Definition 6.6 may be employed at
the level of $m$-graded spectra. This shows that
$\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})$ admits a
canonical structure of a right
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$-module in
$m$-graded spectra. Furthermore, the map
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))\to\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})$
is a map of $m$-graded
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-modules.
### 6.1. Logarithmic THH-étale root adjunctions
Here, our goal is to show that when $A$ is $p$-local and $p\nmid m$, root
adjunction is logarithmic THH-étale. In other words, we show that there is an
equivalence of $m$-graded spectra:
$\operatorname{\textup{THH}}(A\mid
a)\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}]\simeq\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a}).$
###### Remark 6.14.
A notion of logarithmic THH-étaleness is already defined in [48]. In the
language of Rognes, Sagave and Schlichtkrull [48], logarithmic THH-étaleness
of $A\to A(\sqrt[m]{a})$ would be expressed by an equivalence:
(6.15) $\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})\simeq
A(\sqrt[m]{a})\wedge_{A}\operatorname{\textup{THH}}(A\mid a).$
Since we only assume $A$ to be $E_{1}$, $\operatorname{\textup{THH}}(A\mid a)$
may not admit an $A$-module structure and therefore, the right hand side above
may not be defined in our generality. On the other hand, if one starts with an
$E_{3}$-algebra $A$ with even homotopy, the logarithmic THH of $A$ may be
given an $A$-module structure and we obtain that $A\to A(\sqrt[m]{a})$ is
logarithmic THH-étale in the sense of (6.15) whenever $A$ is $p$-local and
$p\nmid m$.
###### Proposition 6.16.
For $k\geq 0$, the spectra
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$ and
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})$ are
connective in homotopy.
###### Proof.
This follows by the fact that the weight connective part of the cyclic bar
construction on $\mathbb{S}[\sigma_{k}^{\pm 1}]$
($\mathbb{S}_{(p)}[\sigma_{k}^{\pm 1}]$) is connective in homotopy in each
simplicial degree. ∎
We start by proving a logarithmic THH-étaleness result for the $p$-localized
free $E_{1}$-algebra $\mathbb{S}_{(p)}[\sigma_{mk}]$.
###### Proposition 6.17.
Let $k\geq 0$ be even and let $m>0$ with $p\nmid m$. In this situation, there
is an equivalence of left
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})$-modules
in $m$-graded spectra:
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}]\simeq\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})$
where the left
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})$-module
structure on the right hand side is provided by Construction 6.11.
###### Proof.
We start by constructing the desired map. First, there is a composite map of
$m$-graded $E_{1}$-ring spectra,
(6.18)
$\mathbb{S}_{(p)}[\sigma_{k}]\to\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})$
which is in particular a map of left $\mathbb{S}_{(p)}[\sigma_{mk}]$-modules
in $m$-graded spectra by forgetting structure through the $m$-graded
$E_{1}$-ring map
$\mathbb{S}_{(p)}[\sigma_{mk}]\to\mathbb{S}_{(p)}[\sigma_{k}]$. Using the
extension of scalars functor induced by the map
(6.19)
$\mathbb{S}_{(p)}[\sigma_{mk}]\to\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})$
of $m$-graded $E_{1}$-algebras, we obtain the desired map:
$f\colon\thinspace\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}]\to\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k}),$
of left
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})$-modules
in $m$-graded spectra from (6.18). Here, we used the fact that the left
$\mathbb{S}_{(p)}[\sigma_{mk}]$-module structure on
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})$ used
in (6.18) is compatible with the one obtained by forgetting the canonical left
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})$-module
structure on
$\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})$
through (6.19); this follows by the $p$-local version of Diagram (6.12).
What remains is to show that $f$ is an equivalence. Since $f$ is a map between
$p$-local connective spectra (Proposition 6.16), it is sufficient to show that
$H\mathbb{Z}_{(p)}\wedge f$ is an equivalence, see Lemma 4.16.
By inspection on the two-sided bar construction defining relative smash
products, one obtains that
$\displaystyle
H\mathbb{Z}_{(p)}\wedge\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})$
$\displaystyle\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}]\simeq$
$\displaystyle(H\mathbb{Z}_{(p)}\wedge\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk}))\wedge_{H\mathbb{Z}_{(p)}[\sigma_{mk}]}H\mathbb{Z}_{(p)}[\sigma_{k}].$
Using the base change formula for THH, we obtain that $H\mathbb{Z}_{(p)}\wedge
f$ is given by the canonical map
$H\mathbb{Z}_{(p)}\wedge
f\colon\thinspace\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}(H\mathbb{Z}_{(p)}[\sigma_{mk}^{\pm
1}])_{\geq
0}\wedge_{H\mathbb{Z}_{(p)}[\sigma_{mk}]}H\mathbb{Z}_{(p)}[\sigma_{k}]\to\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}(H\mathbb{Z}_{(p)}[\sigma_{k}^{\pm
1}])_{\geq 0}.$
To prove that $H\mathbb{Z}_{(p)}\wedge f$ is an equivalence, we argue as in
the proof of Proposition 4.17. The map of Bökstedt spectral sequences
computing the map
(6.20)
$\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}_{*}(H\mathbb{Z}_{(p)}[\sigma_{mk}^{\pm
1}])\to\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}_{*}(H\mathbb{Z}_{(p)}[\sigma_{k}^{\pm
1}])$
is given on the second page, due to the HKR theorem, by the ring map
(6.21) $\phi\colon\thinspace\mathbb{Z}_{(p)}[\sigma_{mk}^{\pm
1}]\otimes\Lambda_{\mathbb{Z}_{(p)}}(d(\sigma_{mk}))\to\mathbb{Z}_{(p)}[\sigma_{k}^{\pm
1}]\otimes\Lambda_{\mathbb{Z}_{(p)}}(d(\sigma_{k}))$
satisfying
$\phi(\sigma_{mk})=\sigma_{k}^{m}\textup{ \ and \
}\phi(d(\sigma_{mk}))=m\sigma_{k}^{m-1}d(\sigma_{k})$
where $m$ is a unit as $p\nmid m$. In particular,
(6.22)
$\phi(\sigma_{mk}^{-1}d(\sigma_{mk}))=\sigma_{k}^{-m}m\sigma_{k}^{m-1}d(\sigma_{k})=m\sigma_{k}^{-1}d(\sigma_{k}).$
Here, $\sigma_{mk}$ and $\sigma_{k}$ are in degrees $(0,mk)$ and $(0,k)$
respectively and $d(\sigma_{mk})$ and $d(\sigma_{k})$ are in degrees $(1,mk)$
and $(1,k)$ respectively. Furthermore, $\sigma_{mk}$ and $d(\sigma_{mk})$ are
of weight $m$ and $\sigma_{k}$ and $d(\sigma_{k})$ are of weight $1$. In
particular, both Bökstedt spectral sequences degenerate on the second page and
the map $\phi$ provides the map (6.20).
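As an illustrative sanity check (the specific numbers are ours, not part of the argument), take $p=5$, $m=2$ and $k=2$, so that $mk=4$. Then $\phi(\sigma_{4})=\sigma_{2}^{2}$ and $\phi(d(\sigma_{4}))=2\sigma_{2}d(\sigma_{2})$, and hence
$\phi(\sigma_{4}^{-1}d(\sigma_{4}))=\sigma_{2}^{-2}\cdot 2\sigma_{2}d(\sigma_{2})=2\sigma_{2}^{-1}d(\sigma_{2})$
where $2$ is indeed a unit in $\mathbb{Z}_{(5)}$ as $5\nmid 2$.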
Taking connective covers in weight direction and identifying
$\sigma_{mk}^{-1}d(\sigma_{mk})$ as $\text{dlog}\sigma_{mk}$ and
$\sigma_{k}^{-1}d(\sigma_{k})$ as $\text{dlog}\sigma_{k}$, we obtain that the
map
$\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}_{*}(H\mathbb{Z}_{(p)}[\sigma_{mk}^{\pm
1}])_{\geq
0}\to\operatorname{\textup{THH}}^{H\mathbb{Z}_{(p)}}_{*}(H\mathbb{Z}_{(p)}[\sigma_{k}^{\pm
1}])_{\geq 0}$
is given by a map
$\mathbb{Z}_{(p)}[\sigma_{mk}]\otimes\Lambda_{\mathbb{Z}_{(p)}}(\textup{dlog}\sigma_{mk})\to\mathbb{Z}_{(p)}[\sigma_{k}]\otimes\Lambda_{\mathbb{Z}_{(p)}}(\textup{dlog}\sigma_{k})$
that carries $\sigma_{mk}$ to $\sigma_{k}^{m}$ and $\text{dlog}\sigma_{mk}$ to
$\text{dlog}\sigma_{k}$ up to a unit due to (6.22) as $p\nmid m$.
Upon extending scalars with respect to the map
$\mathbb{Z}_{(p)}[\sigma_{mk}]\to\mathbb{Z}_{(p)}[\sigma_{k}]$, this map
becomes an isomorphism. In other words, $\pi_{*}(H\mathbb{Z}_{(p)}\wedge f)$
is an isomorphism and therefore, $f$ is an equivalence.
∎
The following provides the logarithmic THH-étaleness of root adjunction in
ring spectra.
###### Theorem 6.23.
Assume Hypothesis 4.4 with $p\nmid m$ and that $A$ is $p$-local. In this
situation, there is an equivalence of $m$-graded spectra
$\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})\simeq\operatorname{\textup{THH}}(A\mid
a)\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}].$
In other words, as an $m$-graded spectrum,
$\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})$ is given by
$\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})_{i}\simeq\Sigma^{ik}\operatorname{\textup{THH}}(A\mid
a)$
for every $0\leq i<m$.
###### Proof.
We have the following chain of equivalences
$\begin{split}&\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})\simeq\operatorname{\textup{THH}}(A(\sqrt[m]{a}))\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})\\\
&\simeq\big{(}\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}])}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])\big{)}\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}])}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})\\\
&\simeq\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}])}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})\\\
&\simeq\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}])}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})\\\
&\simeq\operatorname{\textup{THH}}(A\mid
a)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{k}]\mid\sigma_{k})\\\
&\simeq\operatorname{\textup{THH}}(A\mid
a)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})}\big{(}\operatorname{\textup{THH}}(\mathbb{S}_{(p)}[\sigma_{mk}]\mid\sigma_{mk})\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}]\big{)}\\\
&\simeq\operatorname{\textup{THH}}(A\mid
a)\wedge_{\mathbb{S}_{(p)}[\sigma_{mk}]}\mathbb{S}_{(p)}[\sigma_{k}]\\\
\end{split}$
The first and the fifth equivalences follow by the definition of logarithmic
THH, the second equivalence follows by our definition of root adjunction and
Proposition 4.14 and the sixth equivalence follows by Proposition 6.17. ∎
###### Remark 6.24.
In [48], the authors show that $\ell\to ku_{(p)}$ is logarithmic THH-étale.
This is consistent with our result above, since we show in Theorem 4.10 that
$ku_{p}\simeq\ell_{p}(\sqrt[p-1]{v_{1}})$.
### 6.2. Relating THH and logarithmic THH
The goal of this section is to show that there is a fiber sequence
$\operatorname{\textup{THH}}(A)\to\operatorname{\textup{THH}}(A\mid
a)\to\Sigma\operatorname{\textup{THH}}(A/a)$
under our usual assumptions. The $E_{1}$-ring $A/a$ above is described in the
following construction which is analogous to [47, Lemmas 6.14 and 6.15].
###### Construction 6.25.
Let $A$ be an $\mathbb{S}[\sigma_{k}]$-algebra where $\sigma_{k}$ acts through
$a\in\pi_{k}A$ where $k\geq 0$ is even. The weight $0$ Postnikov section of
$\mathbb{S}[\sigma_{k}]$ provides a map $\mathbb{S}[\sigma_{k}]\to\mathbb{S}$
of $E_{2}$-rings [25, B.0.6]. Considering the extension of scalars functor
$-\wedge_{\mathbb{S}[\sigma_{k}]}\mathbb{S}$ from the $\infty$-category of
$E_{1}$ $\mathbb{S}[\sigma_{k}]$-algebras to the $\infty$-category of $E_{1}$
$\mathbb{S}$-algebras, one equips
$A/a:=A\wedge_{\mathbb{S}[\sigma_{k}]}\mathbb{S}$
with the structure of an $E_{1}$-ring spectrum. Since $\mathbb{S}$ is the
cofiber of the map
$\mathbb{S}[\sigma_{k}]\xrightarrow{\cdot\sigma_{k}}\mathbb{S}[\sigma_{k}]$,
$A/a$ is indeed the cofiber of the map $A\xrightarrow{\cdot a}A$.
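As a standard example (not needed for the argument), take $A=ku$ with $a=u\in\pi_{2}ku$, so that $k=2$. The cofiber sequence $ku\xrightarrow{\cdot u}ku\to ku/u$ then recovers the familiar equivalence
$ku/u\simeq H\mathbb{Z},$
since $\pi_{*}ku\cong\mathbb{Z}[u]$ and the cofiber of multiplication by $u$ has homotopy concentrated in degree $0$.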
Considering $\mathbb{S}[\sigma_{k}]$ as a graded $E_{2}$-ring, we have
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])_{0}\simeq\mathbb{S}.$
This can be observed by inspection on the cyclic bar construction on
$\mathbb{S}[\sigma_{k}]$ or by computing the $H\mathbb{Z}$-homology of the
left hand side above. This is used in the statement of the following
proposition.
###### Remark 6.26.
The following proposition is analogous to [47, Proposition 6.11]. We remark
that unlike in loc. cit., we do not take $S^{1}$-equivariance into account,
which leads to a simpler proof.
###### Proposition 6.27.
The cofiber of the map
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})$
is given by $\Sigma\mathbb{S}$ concentrated in weight $0$ as a left
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-module in graded
spectra. Here, the left
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-module structure on
$\mathbb{S}$ is given by the weight-Postnikov truncation map of graded
$E_{1}$-rings
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])_{0}\simeq\mathbb{S}.$
###### Proof.
Let $M$ be the cofiber of the map $f$ below in left
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-modules in graded
spectra.
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\xrightarrow{f}\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})\to
M$
We start by computing $H\mathbb{Z}_{*}M$. The map
$H\mathbb{Z}_{*}f\colon\thinspace\operatorname{\textup{THH}}^{H\mathbb{Z}}_{*}(H\mathbb{Z}[\sigma_{k}])\to\operatorname{\textup{THH}}^{H\mathbb{Z}}_{*}(H\mathbb{Z}[\sigma_{k}^{\pm
1}])_{\geq 0}$
is given by the ring map
$\mathbb{Z}[\sigma_{k}]\otimes\Lambda(d(\sigma_{k}))\to\mathbb{Z}[\sigma_{k}]\otimes\Lambda(\textup{dlog}\sigma_{k})$
that carries $\sigma_{k}$ to $\sigma_{k}$ and $d(\sigma_{k})$ to
$\sigma_{k}\text{dlog}\sigma_{k}$; this follows by the Bökstedt spectral
sequences in (4.18) and (6.21). This map is injective and the only class that
is not in the image is $\text{dlog}\sigma_{k}$. We obtain,
$H\mathbb{Z}\wedge M\simeq\Sigma H\mathbb{Z}$
where the right hand side is concentrated in weight $0$. Due to Proposition
6.16, $f$ is a map between connective spectra. In particular, $M$ is
connective and we obtain an equivalence of spectra
$M\simeq\Sigma\mathbb{S}.$
We need to improve this to an equivalence of left
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-modules in graded
spectra. Since $M$ is a left
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-module in graded
spectra, there is a map
$\Sigma\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to M$
of left $\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-modules in
graded spectra carrying $1$ to $1$ in homotopy. Taking weight $0$ Postnikov
sections [25, B.0.6], we obtain an equivalence of left
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-modules in graded
spectra
$\Sigma\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])_{0}\xrightarrow{\simeq}M.$
This map is an equivalence because it carries $1$ to $1$ in homotopy by
construction and since both sides are equivalent as spectra to
$\Sigma\mathbb{S}$.
∎
We are ready to provide the cofiber sequence relating THH to logarithmic THH.
###### Theorem 6.28.
Let $A$ be an $\mathbb{S}[\sigma_{k}]$-algebra where $\sigma_{k}$ acts through
$a\in\pi_{k}A$ with even $k\geq 0$. In this situation, there is a cofiber
sequence of spectra:
$\operatorname{\textup{THH}}(A)\to\operatorname{\textup{THH}}(A\mid
a)\to\Sigma\operatorname{\textup{THH}}(A/a).$
The corresponding cofiber sequence for
$\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})$ is a cofiber
sequence of $m$-graded spectra.
###### Proof.
Proposition 6.27 provides the following cofiber sequence of left
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-modules in graded
spectra.
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}]\mid\sigma_{k})\to\Sigma\mathbb{S}$
Applying the functor
$\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}-$
to this cofiber sequence, we obtain the following cofiber sequence
(6.29) $\operatorname{\textup{THH}}(A)\to\operatorname{\textup{THH}}(A\mid
a)\to\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}\Sigma\mathbb{S}.$
What is left is to identify the cofiber above as
$\operatorname{\textup{THH}}(A/a)$. We have
(6.30)
$\begin{split}\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}\Sigma\mathbb{S}&\simeq\Sigma\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}\mathbb{S}.\\\
\end{split}$
Here, $\mathbb{S}$ on the right hand side denotes the de-suspension of
$\Sigma\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])_{0}$ as a left
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])$-module in graded
spectra, see Proposition 6.27. This is
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])_{0}$ which admits the
structure of a graded $E_{1}$-ring spectrum equipped with a map
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])_{0}$
of graded $E_{1}$-ring spectra given by the relevant weight $0$ Postnikov
section map. Indeed, due to the universal property of Postnikov sections, this
weight $0$ Postnikov section map factors the map of graded $E_{1}$-rings
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S})$
induced by the weight $0$ Postnikov section map
$\mathbb{S}[\sigma_{k}]\to\mathbb{S}$; i.e. we have a factorization of this
map of graded $E_{1}$-algebras as
$\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])\to\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])_{0}\xrightarrow{\simeq}\operatorname{\textup{THH}}(\mathbb{S}).$
The second map above is an equivalence as its domain and codomain are
equivalent to $\mathbb{S}$ as spectra and it carries the unit to the unit by
construction. In particular, we can replace $\mathbb{S}$ on the right hand
side of (6.30) with $\operatorname{\textup{THH}}(\mathbb{S})$. This provides
the first equivalence below.
(6.31)
$\begin{split}\Sigma\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}\mathbb{S}&\simeq\Sigma\operatorname{\textup{THH}}(A)\wedge_{\operatorname{\textup{THH}}(\mathbb{S}[\sigma_{k}])}\operatorname{\textup{THH}}(\mathbb{S})\\\
&\simeq\Sigma\operatorname{\textup{THH}}(A\wedge_{\mathbb{S}[\sigma_{k}]}\mathbb{S})\\\
&\simeq\Sigma\operatorname{\textup{THH}}(A/a)\end{split}$
The second equivalence follows by Proposition 4.14 and the third equivalence
follows by our description of the $E_{1}$-algebra $A/a$ in Construction 6.25.
Equations (6.30) and (6.31) identify the cofiber in (6.29) as
$\operatorname{\textup{THH}}(A/a)$ providing the cofiber sequence claimed in
the theorem.
The statement regarding the cofiber sequence in $m$-graded spectra for
$A(\sqrt[m]{a})$ follows by the same arguments. ∎
###### Remark 6.32.
The above localization sequence is of fundamental importance in the theory of
log THH. A proof of the above localization sequence for general $E_{2}$ log
structures, using more general methods, will be supplied in [14].
### 6.3. THH after root adjunction
Here, we identify $\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ in terms of
$\operatorname{\textup{THH}}(A)$ and $\operatorname{\textup{THH}}(A\mid a)$.
###### Theorem 6.33 (Theorem 1.3).
Assume Hypothesis 4.4 with $p\nmid m$ and that $A$ is $p$-local. In this
situation, the $m$-graded spectrum
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))$ is given by
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}\simeq\operatorname{\textup{THH}}(A)$
and
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{i}\simeq\Sigma^{ik}\operatorname{\textup{THH}}(A\mid
a)\textup{ \ for \ }0<i<m.$
In particular, there is an equivalence of spectra:
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))\simeq\operatorname{\textup{THH}}(A)\vee\big{(}\bigvee_{0<i<m}\Sigma^{ik}\operatorname{\textup{THH}}(A\mid
a)\big{)}.$
###### Proof.
The identification of $\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{0}$ is
provided by Proposition 4.20. Therefore, it is sufficient to provide the
identification of $\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{i}$ for $i\neq
0$.
Due to Theorem 6.23,
(6.34)
$\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})_{i}\simeq\Sigma^{ik}\operatorname{\textup{THH}}(A\mid
a).$
Therefore, it is sufficient to show that
(6.35)
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))_{i}\simeq\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})_{i}$
whenever $i\neq 0$. This follows once we show that the cofiber of the map
(6.36)
$\operatorname{\textup{THH}}(A(\sqrt[m]{a}))\to\operatorname{\textup{THH}}(A(\sqrt[m]{a})\mid\sqrt[m]{a})$
of $m$-graded spectra is concentrated in weight $0$. Due to Theorem 6.28, the
cofiber of this map is given by
$\operatorname{\textup{THH}}(A(\sqrt[m]{a})/\sqrt[m]{a})$
where $A(\sqrt[m]{a})/\sqrt[m]{a}$ is defined to be
$A(\sqrt[m]{a})\wedge_{\mathbb{S}[\sigma_{k}]}\mathbb{S}$. Therefore, we have
$A(\sqrt[m]{a})/\sqrt[m]{a}:=A(\sqrt[m]{a})\wedge_{\mathbb{S}[\sigma_{k}]}\mathbb{S}\simeq
A\wedge_{\mathbb{S}[\sigma_{mk}]}\mathbb{S}[\sigma_{k}]\wedge_{\mathbb{S}[\sigma_{k}]}\mathbb{S}\simeq
A\wedge_{\mathbb{S}[\sigma_{mk}]}\mathbb{S}.$
Since $A$, $\mathbb{S}[\sigma_{mk}]$ and $\mathbb{S}$ are concentrated in
weight $0$, we obtain that $A(\sqrt[m]{a})/\sqrt[m]{a}$ and therefore
$\operatorname{\textup{THH}}(A(\sqrt[m]{a})/\sqrt[m]{a})$ are also
concentrated in weight $0$. This proves that the cofiber of (6.36) is
concentrated in weight $0$ which proves (6.35) and this, together with (6.34)
proves the theorem. ∎
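To illustrate (this specialization is our own, under the identification of Theorem 4.10), take $A=\ell_{p}$, $a=v_{1}$, $m=p-1$ and $k=2$, so that $A(\sqrt[m]{a})\simeq ku_{p}$. Theorem 6.33 then yields an equivalence of spectra
$\operatorname{\textup{THH}}(ku_{p})\simeq\operatorname{\textup{THH}}(\ell_{p})\vee\big{(}\bigvee_{0<i<p-1}\Sigma^{2i}\operatorname{\textup{THH}}(\ell_{p}\mid v_{1})\big{)},$
which underlies the $p-1$-graded structure on $\operatorname{\textup{THH}}(ku_{p})$ used in Section 7.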
## 7. Algebraic $K$-theory of complex and real topological $K$-theories
Here, we start by showing that $K(ku_{p})$ splits into $p-1$ non-trivial
summands. Afterwards, we show that $ku_{p}$ may be constructed from $ko_{p}$
via root adjunction. We use this to obtain an explicit description of the
$V(1)$-homotopy of $K(ko_{p})$ from the first author's computation of
$V(1)_{*}K(ku_{p})$ [7].
### 7.1. Adams’ splitting result for $2$-vector bundles
Recall that $\pi_{*}ku_{p}\cong\mathbb{Z}_{p}[u]$ and
$\pi_{*}\ell_{p}\cong\mathbb{Z}_{p}[v_{1}]$ where $\lvert u\rvert=2$ and
$\lvert v_{1}\rvert=2p-2$. The $E_{\infty}$-map $\ell_{p}\to ku_{p}$ carries
$v_{1}$ to $u^{p-1}$ in homotopy. For the rest of this section, we fix a map
$\mathbb{S}[\sigma_{2(p-1)}]\to\ell_{p}$ of $E_{2}$-algebras carrying
$\sigma_{2(p-1)}$ to $v_{1}$ and perform root adjunction using this map.
Recall from Theorem 4.10 that there is an equivalence of $E_{1}$
$\ell_{p}$-algebras
$\ell_{p}(\sqrt[p-1]{v_{1}})\simeq ku_{p}.$
This equips $ku_{p}$ with the structure of a $p-1$-graded $E_{1}$
$\ell_{p}$-algebra which further equips $\operatorname{\textup{THH}}(ku_{p})$
with the structure of a $p-1$-graded $S^{1}$-equivariant spectrum.
Let $p>3$ be a prime and let $V(1)$ denote the type-$2$ finite spectrum used
in [6]; $V(1)$ is a homotopy ring spectrum.
There is another grading on $V(1)_{*}\operatorname{\textup{THH}}(ku_{p})$ that
the first author calls the $\delta$-grading [6]. The group
$\Delta:=\mathbb{Z}/(p-1)$ acts on the $E_{\infty}$-ring $ku_{p}$ through
Adams operations. Let $\delta\in\Delta$ be a chosen generator and let
$\alpha\in\mathbb{F}_{p}^{\times}$ satisfy
$\pi_{*}(\mathbb{S}/p\wedge\delta)(u)=\alpha u$ where
$\pi_{*}(\mathbb{S}/p\wedge\delta)\colon\thinspace\pi_{*}(\mathbb{S}/p\wedge
ku_{p})\to\pi_{*}(\mathbb{S}/p\wedge ku_{p})\cong\mathbb{F}_{p}[u].$
We say $u^{i}$ has $\delta$-weight $i$ as
$\pi_{*}(\mathbb{S}/p\wedge\delta)(u^{i})=\alpha^{i}u^{i}$. Similarly, one
says $x\in V(1)_{*}\operatorname{\textup{THH}}(ku_{p})$ has $\delta$-weight
$i$ if the self map of $V(1)_{*}\operatorname{\textup{THH}}(ku_{p})$ induced
by $\delta$ carries $x$ to $\alpha^{i}x$. One defines $\delta$-weight in a
similar way on other invariants of $ku_{p}$ [6, Definition 8.2].
###### Proposition 7.1.
The group $V(1)_{*}\operatorname{\textup{THH}}(ku_{p})_{i}$ is given by the
classes of $\delta$-weight $i$ in
$V(1)_{*}\operatorname{\textup{THH}}(ku_{p})$.
###### Proof.
Since $H\mathbb{F}_{p}\wedge ku_{p}$ is a $p-1$-graded $E_{1}$
$H\mathbb{F}_{p}$-algebra, there is a $p-1$-grading on
$\textup{HH}^{\mathbb{F}_{p}}_{*}(H{\mathbb{F}_{p}}_{*}ku_{p})$. By inspection
on the Hochschild complex, one observes that the $\delta$-weight grading on
$\textup{HH}^{\mathbb{F}_{p}}_{*}(H{\mathbb{F}_{p}}_{*}ku_{p})$ agrees with
the weight grading. In particular, the $\delta$-weight grading and the weight
grading agree on the second page of the Bökstedt spectral sequence computing
$\operatorname{\textup{HH}^{\mathbb{F}_{p}}}(H\mathbb{F}_{p}\wedge ku_{p})$.
Due to [6, Section 9], this shows that the $\delta$-weight grading and the
weight grading agree on
$\textup{HH}_{*}^{\mathbb{F}_{p}}(H\mathbb{F}_{p}\wedge ku_{p})$. Furthermore,
there is a basis of $\textup{HH}_{*}^{\mathbb{F}_{p}}(H\mathbb{F}_{p}\wedge
ku_{p})$ as an $\mathbb{F}_{p}$-module where $\delta$-weight is defined for
each basis element. Therefore, the $H\mathbb{F}_{p}$-module
$\operatorname{\textup{HH}^{H\mathbb{F}_{p}}}(H\mathbb{F}_{p}\wedge ku_{p})$
splits as a coproduct of suspensions of $H\mathbb{F}_{p}$ in a way that the
map
$\operatorname{\textup{HH}^{H\mathbb{F}_{p}}}(H\mathbb{F}_{p}\wedge\delta)$ is
given by the respective multiplication map corresponding to the
$\delta$-weight on each cofactor. Using this, one observes that the
$\delta$-weight and the weight grading agree on
$H_{*}(V(1)\wedge\operatorname{\textup{THH}}(ku_{p});\mathbb{F}_{p})$.
The Hurewicz map
$V(1)_{*}\operatorname{\textup{THH}}(ku_{p})\to
H_{*}(V(1)\wedge\operatorname{\textup{THH}}(ku_{p});\mathbb{F}_{p})$
is injective and this map preserves both gradings. From this, we deduce that
the weight grading and the $\delta$-weight grading agree on
$V(1)_{*}\operatorname{\textup{THH}}(ku_{p})$. ∎
In general, THH of $m$-graded ring spectra may not result in an $m$-graded
cyclotomic spectrum as the Frobenius map does not preserve the grading; it
multiplies the grading by $p$. On the other hand, for $ku_{p}$,
$\operatorname{\textup{THH}}(ku_{p})$ is $p-1$-graded and $p=1$ in
$\mathbb{Z}/(p-1)$. In particular, the Frobenius map preserves the grading and
one obtains that $\operatorname{\textup{THH}}(ku_{p})$ is a $p-1$-graded
cyclotomic spectrum.
###### Proposition 7.2.
The $S^{1}$-equivariant structure on $\operatorname{\textup{THH}}(ku_{p})_{i}$
lifts to a cyclotomic structure for which there is an equivalence
$\operatorname{\textup{THH}}(ku_{p})\simeq\prod_{i\in\mathbb{Z}/(p-1)}\operatorname{\textup{THH}}(ku_{p})_{i}$
of cyclotomic spectra.
###### Proof.
The monoid $\mathbb{Z}/(p-1)$ satisfies the conditions in [2, Appendix A]
needed to endow $\operatorname{\textup{THH}}(ku_{p})$ with an $L_{p}$-twisted
cyclotomic structure. However, since $p\equiv 1\mod p-1$, this ends up being
the identity functor on $\mathbb{Z}/(p-1)$-graded spectra. Thus one obtains a
sequence of $S^{1}$-equivariant maps
$\operatorname{\textup{THH}}(ku_{p})_{i}\to\operatorname{\textup{THH}}(ku_{p})^{tC_{p}}_{i}$
for each $i\in\mathbb{Z}/(p-1)$, which is precisely the relevant additional
piece of structure needed to view this as a cyclotomic object. ∎
###### Construction 7.3.
Here, we construct a splitting of $K(ku_{p})$ using Proposition 7.2. Since the
product mentioned in Proposition 7.2 is a finite product, it is at the same
time a coproduct. In particular, it commutes with all limits and colimits.
Therefore, the fiber sequence defining $\operatorname{\textup{TC}}(ku_{p})$
splits into a product of fiber sequences
$\operatorname{\textup{TC}}(ku_{p})_{i}\to\operatorname{\textup{THH}}(ku_{p})_{i}^{hS^{1}}\xrightarrow{(\varphi_{p})_{i}-can_{i}}(\operatorname{\textup{THH}}(ku_{p})_{i}^{tC_{p}})^{hS^{1}}.$
Hence, there is a splitting of $\operatorname{\textup{TC}}(ku_{p})$:
$\operatorname{\textup{TC}}(ku_{p})\simeq\prod_{i\in\mathbb{Z}/(p-1)}\operatorname{\textup{TC}}(ku_{p})_{i}$
where
$\operatorname{\textup{TC}}(ku_{p})_{i}:=\operatorname{\textup{TC}}(\operatorname{\textup{THH}}(ku_{p})_{i})$.
Arguing as in the proof of Theorem 5.8, one obtains a map $ku_{p}\to
H\mathbb{Z}_{p}$ of $p-1$-graded $E_{1}$-rings where $H\mathbb{Z}_{p}$ is
concentrated in weight $0$. Therefore, the induced map
$\operatorname{\textup{THH}}(ku_{p})\to\operatorname{\textup{THH}}(\mathbb{Z}_{p})$
of $p-1$-graded spectra is trivial in non-zero weight. By inspection on the
product splitting of the fiber sequence defining
$\operatorname{\textup{TC}}(ku_{p})$, we consider
$\operatorname{\textup{TC}}(ku_{p})\to\operatorname{\textup{TC}}(\mathbb{Z}_{p})$
as a map of $p-1$-graded spectra where
$\operatorname{\textup{TC}}(\mathbb{Z}_{p})$ is concentrated in weight $0$.
Again, as in the proof of Theorem 5.8, this splits the pull-back square (from
Dundas-Goodwillie-McCarthy theorem) relating
$\operatorname{\textup{TC}}(ku_{p})$ to $K(ku_{p})$ resulting in a splitting
of $K(ku_{p})$ that we denote by
$K(ku_{p})\simeq\bigvee_{i\in\mathbb{Z}/(p-1)}K(ku_{p})_{i}.$
Here, $K(ku_{p})_{0}\simeq K(\ell_{p})$ due to Theorem 5.8.
To understand the resulting splitting of $K(ku_{p})$, we identify the
$V(1)$-homotopy of each weight piece. The computation of $V(1)_{*}K(ku_{p})$
is due to the first author [7, Theorem 8.1] and these groups are given below.
(7.4)
$\begin{split}V(1)_{*}K(ku_{p})\cong&\mathbb{F}_{p}[b]\otimes\Lambda(\lambda_{1},a_{1})\oplus\mathbb{F}_{p}[b]\otimes\mathbb{F}_{p}\{\partial\lambda_{1},\partial b,\partial a_{1},\partial\lambda_{1}a_{1}\}\\\
&\oplus\mathbb{F}_{p}[b]\otimes\Lambda(a_{1})\otimes\mathbb{F}_{p}\{t^{d}\lambda_{1}\mid 0<d<p\}\\\
&\oplus\mathbb{F}_{p}[b]\otimes\Lambda(\lambda_{1})\otimes\mathbb{F}_{p}\{\sigma_{n},\lambda_{2}t^{p^{2}-p}\mid 1\leq n\leq p-2\}\\\
&\oplus\mathbb{F}_{p}\{s\}\end{split}$
Here, $\lvert b\rvert=2p+2$, $\lvert\partial\rvert=-1$,
$\lvert\lambda_{1}\rvert=2p-1$, $\lvert a_{1}\rvert=2p+3$,
$\lvert\sigma_{n}\rvert=2n+1$, $\lvert t\rvert=-2$,
$\lvert\lambda_{2}\rvert=2p^{2}-1$ and $\lvert s\rvert=2p-3$. We assign
weights to these classes in a way that turns $V(1)_{*}K(ku_{p})$ into a
$p-1$-graded abelian group. The weights of $\sigma_{n}$, $b$, $a_{1}$,
$\partial$, $\lambda_{1}$, $t$, $\lambda_{2}$ and $s$ are given by $n$, $1$,
$1$, $0$, $0$, $0$, $0$ and $0$ respectively. Classes denoted by tensor
products or products above have the canonical degrees and weights.
Furthermore, the isomorphism above is that of $\mathbb{F}_{p}[b]$-modules and
$b^{p-1}=-v_{2}$.
###### Theorem 7.5.
For the equivalence of spectra
$K(ku_{p})\simeq\bigvee_{i\in\mathbb{Z}/(p-1)}K(ku_{p})_{i}$
provided by Construction 7.3, there is an equivalence:
$K(ku_{p})_{0}\simeq K(\ell_{p})$
and there are isomorphisms
$V(1)_{*}(K(ku_{p})_{i})\cong\big{(}V(1)_{*}K(ku_{p})\big{)}_{i}$
for each $i\in\mathbb{Z}/(p-1)$ where the right hand side denotes the weight
$i$ piece of the $p-1$-grading on $V(1)_{*}K(ku_{p})$ described above.
###### Proof.
The identification of $K(ku_{p})_{0}$ is given in Construction 7.3. This
provides the identification of $V(1)_{*}K(ku_{p})_{0}$ as
$\big{(}V(1)_{*}K(ku_{p})\big{)}_{0}$ since this is precisely the image of the
map
$V(1)_{*}K(\ell_{p})\to V(1)_{*}K(ku_{p}),$
see [6, Theorem 10.2]. The identification of $V(1)_{*}(K(ku_{p})_{i})$ for
$i\neq 0$ follows by noting from Proposition 7.1 that it is sufficient to keep
track of the contribution of $\delta$-weight $i$ classes in
$V(1)_{*}\operatorname{\textup{THH}}(ku_{p})$ to
$V(1)_{*}\operatorname{\textup{TC}}(ku_{p})$. This follows by inspection on
[7, Section 7] and [7, Section 5]. ∎
### 7.2. Algebraic $K$-theory of real $K$-theory
Let $p>3$. Using Theorem 5.8, the splitting of $K(ku_{p})$ discussed above and
our root adjunction formalism, we obtain a straightforward computation of
$V(1)_{*}K(ko_{p})$ from our knowledge of $V(1)_{*}K(ku_{p})$ from [7]. Here,
$ko_{p}$ denotes the connective cover of the $p$-completed real topological
$K$-theory spectrum $KO_{p}$. We have
$\pi_{*}KO_{p}\cong\mathbb{Z}_{p}[\alpha^{\pm 1}]$ with
$\lvert\alpha\rvert=4$.
There is a subgroup $C_{2}$ of $\Delta\cong\mathbb{Z}/(p-1)$ such that
$KO_{p}\simeq KU_{p}^{hC_{2}}$. Through this, the induced map $KO_{p}\to
KU_{p}$ carries $\alpha$ to $u^{2}$ up to a unit that we are going to omit.
Since $L_{p}\simeq(KU_{p})^{h\Delta}$, we obtain a sequence of $E_{\infty}$-maps
$L_{p}\to KO_{p}\to KU_{p}$
where the first map carries $v_{1}$ to $\alpha^{\frac{p-1}{2}}$ in homotopy.
###### Theorem 7.6.
For $p>3$, there is an equivalence
$ko_{p}\simeq\ell_{p}(\sqrt[\frac{p-1}{2}]{v_{1}})$
of $E_{1}$ $\ell_{p}$-algebras.
###### Proof.
This follows as in the proof of Theorem 4.10 by noting that
$p\nmid\frac{p-1}{2}$. ∎
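As a quick degree check (our own remark, not part of the argument): the adjoined root must live in degree $\lvert v_{1}\rvert$ divided by $\frac{p-1}{2}$, and this matches $\lvert\alpha\rvert$:

```latex
\lvert v_{1}\rvert = 2(p-1),
\qquad
\Bigl\lvert \sqrt[\frac{p-1}{2}]{v_{1}} \Bigr\rvert
  = \frac{2(p-1)}{(p-1)/2} = 4 = \lvert \alpha \rvert .
```

This is consistent with the map $L_{p}\to KO_{p}$ carrying $v_{1}$ to $\alpha^{\frac{p-1}{2}}$ in homotopy.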
Furthermore, $ku_{p}$ may also be obtained from $ko_{p}$ via root adjunction;
for this root adjunction, we use the $\mathbb{S}_{p}[\sigma_{4}]$-algebra
structure on $ko_{p}$ provided by Theorem 7.6. To identify the resulting
$2$-graded $E_{1}$-ring structure on $ku_{p}$, we use the symmetric monoidal
functor
$D^{\prime}\colon\thinspace\operatorname{Fun}(\mathbb{Z}/(p-1),\operatorname{Sp})\to\operatorname{Fun}(\mathbb{Z}/2,\operatorname{Sp})$
given by left Kan extension through the canonical map
$\mathbb{Z}/(p-1)\to\mathbb{Z}/2$.
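Since left Kan extension along a map of discrete index categories is computed by coproducts over fibers, $D^{\prime}$ admits an explicit description; the following formula is our paraphrase of this standard fact, in the notation above:

```latex
D^{\prime}(X)_{j} \;\simeq\; \bigvee_{\substack{i \in \mathbb{Z}/(p-1) \\ i \equiv j \bmod 2}} X_{i},
\qquad j \in \mathbb{Z}/2 .
```

In particular, the weight-$0$ piece of $D^{\prime}(X)$ collects the even-weight pieces of $X$; this is the mechanism behind the product decompositions appearing below.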
###### Proposition 7.7.
For $p>3$, there is an equivalence
$ko_{p}(\sqrt[2]{\alpha})\simeq D^{\prime}(ku_{p})$
of $2$-graded $E_{1}$-algebras where $D^{\prime}$ is defined above and the
$p-1$-grading on $ku_{p}$ is given by Theorem 4.10.
###### Proof.
Due to Theorem 7.6, $ko_{p}$ is an $\mathbb{S}[\sigma_{4}]$-algebra given by
$\ell_{p}\wedge_{\mathbb{S}[\sigma_{2(p-1)}]}\mathbb{S}[\sigma_{4}].$
To adjoin a root to $ko_{p}$ using this structure, we use the sequence of maps
$\mathbb{S}[\sigma_{2(p-1)}]\to\mathbb{S}[\sigma_{4}]\to
D^{\prime}(\mathbb{S}[\sigma_{2}])$
of $2$-graded $E_{2}$-ring spectra where $\mathbb{S}[\sigma_{2(p-1)}]$ and
$\mathbb{S}[\sigma_{4}]$ are concentrated in weight $0$ and
$\mathbb{S}[\sigma_{2}]$ above is given its canonical $p-1$-grading so that
$\sigma_{2}$ in $D^{\prime}(\mathbb{S}[\sigma_{2}])$ lies in weight $1$.
We obtain the following equivalences of $2$-graded $E_{1}$-rings.
(7.8)
$ko_{p}(\sqrt[2]{\alpha})\simeq\ell_{p}\wedge_{\mathbb{S}[\sigma_{2(p-1)}]}\mathbb{S}[\sigma_{4}]\wedge_{\mathbb{S}[\sigma_{4}]}D^{\prime}(\mathbb{S}[\sigma_{2}])\simeq\ell_{p}\wedge_{\mathbb{S}[\sigma_{2(p-1)}]}D^{\prime}(\mathbb{S}[\sigma_{2}])$
The functor $D^{\prime}$ is a left adjoint and it is symmetric monoidal.
Therefore, it commutes with the two sided bar construction defining relative
smash products. This provides the second equivalence in the following
equivalences of $2$-graded $E_{1}$-algebras.
(7.9) $D^{\prime}(ku_{p})\simeq
D^{\prime}(\ell_{p}\wedge_{\mathbb{S}[\sigma_{2(p-1)}]}\mathbb{S}[\sigma_{2}])\simeq
D^{\prime}(\ell_{p})\wedge_{D^{\prime}(\mathbb{S}[\sigma_{2(p-1)}])}D^{\prime}(\mathbb{S}[\sigma_{2}])$
The first equivalence above follows by Theorem 4.10 and the relative smash
product in the middle is taken in $p-1$-graded spectra. Since $\ell_{p}$ and
$\mathbb{S}[\sigma_{2(p-1)}]$ are concentrated in weight $0$, the right hand
side above is equivalent to the right hand side of (7.8); this follows by
Lemma 2.3. In other words, (7.8) and (7.9) agree. ∎
Recall that the spectra $K(ku_{p})_{i}$ are given in Construction 7.3 and the
groups $V(1)_{*}K(ku_{p})_{i}$ are identified in Theorem 7.5.
###### Theorem 7.10.
For $p>3$, there is an equivalence of spectra:
$K(ko_{p})\simeq\bigvee_{0\leq i<(p-1)/2}K(ku_{p})_{2i}.$
Therefore, we have
$V(1)_{*}K(ko_{p})\cong\bigoplus_{0\leq i<(p-1)/2}V(1)_{*}K(ku_{p})_{2i},$
and $V(1)_{*}K(ko_{p})$, as an abelian group, is given by:
$\begin{split}V(1)_{*}K(ko_{p})\cong&\mathbb{F}_{p}[b^{2}]\otimes\Lambda(\lambda_{1},ba_{1})\oplus\mathbb{F}_{p}[b^{2}]\otimes\mathbb{F}_{p}\{\partial\lambda_{1},b\partial b,b\partial a_{1},b\partial\lambda_{1}a_{1}\}\\ &\oplus\mathbb{F}_{p}[b^{2}]\otimes\Lambda(ba_{1})\otimes\mathbb{F}_{p}\{t^{d}\lambda_{1}\mid 0<d<p\}\\ &\oplus\mathbb{F}_{p}[b^{2}]\otimes\Lambda(\lambda_{1})\otimes\mathbb{F}_{p}\{b^{\epsilon(n)}\sigma_{n},\lambda_{2}t^{p^{2}-p}\mid 1\leq n\leq p-2\}\\ &\oplus\mathbb{F}_{p}\{s\},\end{split}$
where $\epsilon(n)=1$ if $n$ is odd and $\epsilon(n)=0$ if $n$ is even. Here,
the class denoted by $(b^{2})^{(p-1)/2}$ is $-v_{2}$.
As a consequence, we have an isomorphism of abelian groups:
$T(2)_{*}K(ko)\cong T(2)_{*}K(\ell_{p})[b^{2}]/((b^{2})^{(p-1)/2}+v_{2}).$
###### Proof.
We start by identifying $\operatorname{\textup{THH}}(ko_{p})$ as a cyclotomic
spectrum. We have the following chain of equivalences
$\begin{split}\operatorname{\textup{THH}}(ko_{p})&\simeq\operatorname{\textup{THH}}(ko_{p}(\sqrt[2]{\alpha}))_{0}\\ &\simeq\operatorname{\textup{THH}}(D^{\prime}(ku_{p}))_{0}\\ &\simeq\big{(}D^{\prime}(\operatorname{\textup{THH}}(ku_{p}))\big{)}_{0}\\ &\simeq\prod_{0\leq i<(p-1)/2}\operatorname{\textup{THH}}(ku_{p})_{2i}\end{split}$
The first equivalence above follows by Theorem 4.20, the second equivalence
follows by Proposition 7.7 and the third equivalence is a consequence of [2,
Corollary A.15]. The last equivalence above follows by the description of
$D^{\prime}$ as a left Kan extension, see Section 2.2. Indeed, this shows that
the following composite map of cyclotomic spectra is an equivalence.
$\operatorname{\textup{THH}}(ko_{p})\to\operatorname{\textup{THH}}(ku_{p})\simeq\prod_{0\leq
i<p-1}\operatorname{\textup{THH}}(ku_{p})_{i}\to\prod_{0\leq
i<(p-1)/2}\operatorname{\textup{THH}}(ku_{p})_{2i}$
Here, the equivalence in the middle follows by Proposition 7.2. The last map
above is the canonical projection.
The composite equivalence of cyclotomic spectra above shows that
$\operatorname{\textup{TC}}(\operatorname{\textup{THH}}(ko_{p}))\simeq\operatorname{\textup{TC}}(\prod_{0\leq
i<(p-1)/2}\operatorname{\textup{THH}}(ku_{p})_{2i})\simeq\prod_{0\leq
i<(p-1)/2}\operatorname{\textup{TC}}(ku_{p})_{2i}.$
Considering the Dundas-Goodwillie-McCarthy theorem with respect to the
composite $ko_{p}\to ku_{p}\to H\mathbb{Z}_{p}$, we obtain that the splitting
of the pullback square relating $K(ku_{p})$ with
$\operatorname{\textup{TC}}(ku_{p})$ (mentioned in Construction 7.3) provides
a splitting for $K(ko_{p})$ given by
(7.11) $K(ko_{p})\simeq\prod_{0\leq i<(p-1)/2}K(ku_{p})_{2i}.$
The first and the second statements in the theorem follow from this splitting.
The third statement follows from this, together with an inspection of (7.4).
For the last statement, note that $T(2)_{*}K(ko)\cong T(2)_{*}K(ko_{p})$ due
to the purity of algebraic $K$-theory and [31, Lemma 2.2 (vi)]. It follows by
Theorem 7.5, that the map
$T(2)_{*}K(ku_{p})\xrightarrow{\cdot b^{i}}T(2)_{*}K(ku_{p})$
carries $T(2)_{*}K(ku_{p})_{0}$ to $T(2)_{*}K(ku_{p})_{i}$ for $i<p-1$. This fact, together with [7, Proposition 1.2 (b)], provides isomorphisms
$T(2)_{*}K(\ell_{p})\cong
T(2)_{*}K(ku_{p})_{0}\xrightarrow{\cong}T(2)_{*}K(ku_{p})_{i}$
given by $\cdot b^{i}$ for $i<p-1$. This, together with (7.11) provides the
desired identification of $T(2)_{*}K(ko)\cong T(2)_{*}K(ko_{p})$ as
$T(2)_{*}K(\ell_{p})[b^{2}]/((b^{2})^{(p-1)/2}+v_{2})$. ∎
## 8. Root adjunction and Lubin-Tate spectra
Recall that in [25], Hahn and Wilson prove that there are $E_{3}$ $MU$-algebra
forms of $BP\langle n\rangle$. Furthermore, their constructions provide an
$E_{3}$ $MU[\sigma_{2(p^{n}-1)}]$-algebra form of $BP\langle n\rangle$ where
$\sigma_{2(p^{n}-1)}$ acts through $v_{n}$ [25, Remark 2.1.2].
To relate particular forms of $BP\langle n\rangle$ to Lubin-Tate spectra, we
use the spherical Witt vectors constructed by Lurie [37, Example 5.2.7]. For a
given discrete perfect $\mathbb{F}_{p}$-algebra $B_{0}$, this provides an
$E_{\infty}$-ring $\mathbb{S}_{W(B_{0})}$ that is flat over $\mathbb{S}$ in
the sense of [36, Definition 7.2.2.10]. Therefore, it follows by [36,
Proposition 7.2.2.13] and [38, Proposition 2.7] that
(8.1) $\pi_{n}(\mathbb{S}_{W(B_{0})}\wedge F)\cong W(B_{0})\otimes\pi_{n}F$
for every spectrum $F$. We would like to thank Jeremy Hahn for showing us the
proof of the following proposition.
###### Proposition 8.2.
Fix an $E_{3}$ $MU$-algebra form of $BP\langle n\rangle$. Then
$\mathbb{S}_{W(B_{0})}\wedge BP\langle n\rangle$ satisfies the redshift
conjecture for every discrete perfect $\mathbb{F}_{p}$-algebra $B_{0}$.
###### Proof.
There is an equivalence of $E_{2}$-algebras in $S^{1}$-equivariant spectra
$\mathbb{S}_{W(B_{0})}\wedge\operatorname{\textup{THH}}^{MU}(BP\langle
n\rangle)\simeq\operatorname{\textup{THH}}^{\mathbb{S}_{W(B_{0})}\wedge
MU}(\mathbb{S}_{W(B_{0})}\wedge BP\langle n\rangle).$
Therefore, it follows by (8.1) that we have
(8.3) $\operatorname{\textup{THH}}^{\mathbb{S}_{W(B_{0})}\wedge
MU}_{*}(\mathbb{S}_{W(B_{0})}\wedge BP\langle n\rangle)\cong
W(B_{0})\otimes\operatorname{\textup{THH}}^{MU}_{*}(BP\langle n\rangle)$
and this is a polynomial algebra over $W(B_{0})[v_{1},\dots,v_{n}]$ due to [25, Theorem 2.5.4], where one of the generators is denoted by $\sigma^{2}v_{n+1}$.
The rest of the argument follows as in the proofs of [25, Theorems 2.5.4 and
5.0.1, Corollary 5.0.2] by considering
(8.4) $\pi_{*}\big{(}\operatorname{\textup{THH}}^{\mathbb{S}_{W(B_{0})}\wedge
MU}(\mathbb{S}_{W(B_{0})}\wedge BP\langle n\rangle)^{hS^{1}}\big{)}$
instead of $\pi_{*}\big{(}\operatorname{\textup{THH}}^{MU}(BP\langle
n\rangle)^{hS^{1}}\big{)}$.
Namely, we obtain from [25, Theorem 2.5.4] that (8.3) is concentrated in even
degrees. Therefore, the corresponding homotopy fixed point spectral sequence
degenerates on the second page providing the
$W(B_{0})[v_{1},...,v_{n}]$-algebra (8.4) as
$W(B_{0})\otimes\operatorname{\textup{THH}}^{MU}_{*}(BP\langle
n\rangle)\llbracket t\rrbracket$
since (8.3) is polynomial. Using the map from
$\pi_{*}\big{(}\operatorname{\textup{THH}}^{MU}(BP\langle
n\rangle)^{hS^{1}}\big{)}$ to (8.4), we deduce from [25, Theorem 5.0.1] that
$v_{n+1}$ in (8.4) is represented by the class $t\sigma^{2}v_{n+1}$.
Considering the action of $v_{0},\dots,v_{n+1}$ on (8.4) described above, one
observes that
$L_{T(n+1)}\operatorname{\textup{THH}}^{\mathbb{S}_{W(B_{0})}\wedge
MU}(\mathbb{S}_{W(B_{0})}\wedge BP\langle n\rangle)^{hS^{1}}\not\simeq*.$
Using this and the $E_{2}$-map
$L_{T(n+1)}K(\mathbb{S}_{W(B_{0})}\wedge BP\langle n\rangle)\to
L_{T(n+1)}\operatorname{\textup{THH}}^{\mathbb{S}_{W(B_{0})}\wedge
MU}(\mathbb{S}_{W(B_{0})}\wedge BP\langle n\rangle)^{hS^{1}},$
we deduce that $L_{T(n+1)}K(\mathbb{S}_{W(B_{0})}\wedge BP\langle
n\rangle)\not\simeq*$ as desired. ∎
###### Construction 8.5.
Fix an $E_{3}$ $MU[\sigma_{2(p^{n}-1)}]$-algebra form of $BP\langle n\rangle$
where $\sigma_{2(p^{n}-1)}$ acts through $v_{n}$ and let $k$ be a perfect
field of characteristic $p$. Recall that in this situation, adjoining a degree $p^{n}-1$ root of $v_{n}$ provides a $p^{n}-1$-graded $E_{3}$ $MU[\sigma_{2}]$-algebra, see Remark 4.11. We consider the $E_{3}$
$MU$-algebra:
$E:=\big{(}L_{K(n)}(\mathbb{S}_{W(k)}\wedge BP\langle
n\rangle)\big{)}(\sqrt[p^{n}-1]{v_{n}}).$
It follows by [22, Theorem 1.5.4] and [21, Theorem 1.9] that
$\pi_{*}E\cong W(k)[\lvert u_{1},\dots,u_{n-1}\rvert][u^{\pm 1}]$
where $\lvert u_{i}\rvert=0$ and $\lvert u\rvert=-2$. Furthermore, the
resulting $E_{3}$-map $MU\to BP\langle n\rangle\to E$ provides a formal group
law over $\pi_{*}E$.
For a given perfect $\mathbb{F}_{p}$-algebra $B_{0}$ and a height $n$ formal
group law $\Gamma$ over $B_{0}$, we let $E_{(B_{0},\Gamma)}$ denote the
corresponding Lubin-Tate spectrum. By Lurie’s generalization [37, Section 5]
of the Goerss-Hopkins-Miller theorem [44, 16], $E_{(B_{0},\Gamma)}$ is an
$E_{\infty}$-ring.
###### Proposition 8.6.
In the setting of Construction 8.5, $E$ is equivalent to $E_{(k,\Gamma)}$ as
an $E_{1}$-ring for some height $n$ formal group law $\Gamma$ over $k$.
###### Proof.
By construction, the map $\pi_{*}MU_{(p)}\to\pi_{*}E$ carries $v_{i}$ to
$u_{i}u^{-(p^{i}-1)}$ for $0<i<n$ and $v_{n}$ to $u^{-(p^{n}-1)}$. Therefore,
the corresponding formal group law on $\pi_{0}E$ is the universal deformation
of the resulting height $n$ formal group law $\Gamma$ over $k$. This follows
by [34, Theorem 5 in Lecture 21]. Alternatively, one can directly check the
conditions given in [32, Proposition 1.1]. It then follows from the Hopkins-Miller theorem that there is an equivalence of $E_{1}$-rings $E\simeq E_{(k,\Gamma)}$ [44, Theorem 7.1].
∎
Burklund, Hahn, Levy and Schlank are going to use the following example in
their construction of a counterexample to the telescope conjecture.
###### Example 8.7.
Let $k$ be a perfect algebraic extension of $\mathbb{F}_{p}$. We know from
[43, Corollary 4.31] that the $E_{1}$-algebra structure on $E_{(k,\Gamma)}$
lifts to a unique $E_{d}$-algebra structure for every $1\leq d\leq\infty$.
Since $E$ in Construction 8.5 is an $E_{3}$-ring, it follows from Proposition
8.6 that there is an $E_{3}$-equivalence
$E\simeq E_{(k,\Gamma)}$
for $\Gamma$ as in Proposition 8.6. Through this equivalence, we may equip
$E_{(k,\Gamma)}$ with the structure of an $E_{3}$ $MU$-algebra. In particular,
we obtain a map of $E_{3}$ $MU$-algebras
$BP\langle n\rangle\to E_{(k,\Gamma)}.$
Furthermore, $E_{(k,\Gamma)}$ is a $\mathbb{Z}/(p^{n}-1)$-graded $E_{3}$
$MU[\sigma_{2}]$-algebra where the weight $1$ class $\sigma_{2}$ acts through
$u^{-1}$.
###### Theorem 8.8.
In the setting of Construction 8.5 and Proposition 8.6, the canonical map
$L_{T(n+1)}K(\mathbb{S}_{W(k)}\wedge BP\langle n\rangle)\to
L_{T(n+1)}K(E_{(k,\Gamma)})$
is the inclusion of a non-trivial wedge summand.
###### Proof.
It follows by Corollary 5.11 and Proposition 8.6 that
$L_{T(n+1)}K\big{(}L_{K(n)}(\mathbb{S}_{W(k)}\wedge BP\langle
n\rangle)\big{)}\to L_{T(n+1)}K(E_{(k,\Gamma)})$
is the inclusion of a wedge summand. Since
$\mathbb{S}_{W(k)}\wedge BP\langle n\rangle\to
L_{K(n)}(\mathbb{S}_{W(k)}\wedge BP\langle n\rangle)$
is a $T(n)\vee T(n+1)$-equivalence, the result follows by [31, Purity
Theorem]. ∎
We finally prove the following theorem of Yuan.
###### Theorem 8.9 ([52]).
For every perfect $\mathbb{F}_{p}$-algebra $B_{0}$ and height $n$ formal group
law $\Gamma$ over $B_{0}$, the Lubin-Tate spectrum $E_{(B_{0},\Gamma)}$
satisfies the redshift conjecture.
###### Proof.
There is an $E_{\infty}$-map $E_{(B_{0},\Gamma)}\to E_{(k,\Gamma^{\prime})}$
for some algebraically closed field $k$ of characteristic $p$ and
$\Gamma^{\prime}$ is the corresponding height $n$ formal group law on $k$.
Since there is an induced $E_{\infty}$-map $K(E_{(B_{0},\Gamma)})\to
K(E_{(k,\Gamma^{\prime})})$, it suffices to prove the redshift conjecture for
$E_{(k,\Gamma^{\prime})}$.
Since $k$ is algebraically closed, there is a unique formal group law of
height $n$ over $k$ [29, Theorem IV]. Using Proposition 8.6, we deduce that
$E_{(k,\Gamma^{\prime})}\simeq E$ for $E$ as in Construction 8.5. Combining
Proposition 8.2 and Theorem 8.8 we obtain that $E_{(k,\Gamma^{\prime})}$
satisfies the redshift conjecture as desired. ∎
We remark that it should be possible to prove the redshift conjecture for all
$E_{1}$ $MU$-algebra forms of $BP\langle n\rangle$ by constructing maps
$BP\langle n\rangle\to E_{(k,\Gamma)}$ through root adjunction.
## 9. Algebraic $K$-theory and THH of Morava $E$-theories
In this section, we work with a particular form of Lubin-Tate spectra: the Morava $E$-theory spectrum $E_{n}$, which is central in the Ausoni-Rognes program for the computation of $K(\mathbb{S})$. When we say Morava
$E$-theory $E_{n}$, we mean the Lubin-Tate spectrum corresponding to the
height $n$ Honda formal group. This formal group is characterized by its
$p$-series
$[p]_{n}(x)=x^{p^{n}},$
and admits a canonical form over $\mathbb{F}_{p^{n}}$, in the sense that all
of its endomorphisms are defined over this field. In this section, we prove a
splitting result for the algebraic $K$-theory of the Morava $E$-theory $E_{n}$
and the corresponding two-periodic Morava $K$-theory. Furthermore, we show
that the THH of $E_{n}$ may be obtained from the THH of the $K(n)$-localized
Johnson-Wilson spectrum through base change.
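For orientation at height $n=1$ (a standard computation, recorded here for convenience): the multiplicative formal group law over $\mathbb{F}_{p}$ realizes the Honda $p$-series, since

```latex
x +_{\widehat{\mathbb{G}}_{m}} y = x + y + xy,
\qquad
[p](x) = (1+x)^{p} - 1 = \sum_{k=1}^{p} \binom{p}{k} x^{k} \equiv x^{p} \pmod{p},
```

and the associated Lubin-Tate spectrum is the familiar $E_{1}\simeq KU_{p}$.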
### 9.1. An identification of Morava $E$-theory
Here, we provide an alternate description of $E_{n}$ in terms of its fixed
points and spherical Witt vectors. We have
$\pi_{*}E_{n}\cong W(\mathbb{F}_{p^{n}})[\lvert
u_{1},\dots,u_{n-1}\rvert][u^{\pm 1}]$
where $\lvert u_{i}\rvert=0$ and $\lvert u\rvert=-2$.
###### Proposition 9.1.
The map
$\pi_{*}\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\cong
W(\mathbb{F}_{p^{n}})\otimes_{\mathbb{Z}_{p}}\pi_{*}\mathbb{S}_{p}\to\pi_{*}E_{n}$
obtained via the map $\pi_{*}\mathbb{S}_{p}\to\pi_{*}E_{n}$ and the canonical
$W(\mathbb{F}_{p^{n}})$-module structure on $\pi_{*}E_{n}$ lifts to a map of
$E_{\infty}$ $\mathbb{S}_{p}$-algebras
$\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\to E_{n}.$
###### Proof.
This is a consequence of Lurie’s theory of thickenings of relatively perfect
morphisms [37, Section 5.2]. Indeed,
$\mathbb{S}_{p}\to\mathbb{S}_{W(\mathbb{F}_{p^{n}})}$ is an
$\mathbb{S}_{p}$-thickening of $H\mathbb{F}_{p}\to H\mathbb{F}_{p^{n}}$ in the
sense of [37, Definition 5.2.1], see [37, Example 5.2.7].
In particular, this implies that the space of $E_{\infty}$
$\mathbb{S}_{p}$-algebra maps from $\mathbb{S}_{W(\mathbb{F}_{p^{n}})}$ to the
connective cover $cE_{n}$ of $E_{n}$ is given by the set of
$\mathbb{F}_{p}$-algebra maps
(9.2)
$\hom_{\mathbb{F}_{p}\textup{-}\mathcal{A}\textup{lg}}(\mathbb{F}_{p^{n}},\mathbb{F}_{p^{n}}[\lvert
u_{1},\dots,u_{n-1}\rvert])$
where this correspondence is given by the functor $\pi_{0}(-)/p$.
Let $f\colon\thinspace\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\to cE_{n}$ be the map
of $E_{\infty}$ $\mathbb{S}_{p}$-algebras corresponding to the canonical
$\mathbb{F}_{p}$-algebra map in (9.2); in particular, $\pi_{0}(f)/p$ is the
canonical map in (9.2). We first show that $\pi_{0}f$ is given by the
canonical inclusion
$W(\mathbb{F}_{p^{n}})\to W(\mathbb{F}_{p^{n}})[\lvert
u_{1},\dots,u_{n-1}\rvert].$
Since $W(\mathbb{F}_{p^{n}})$ is the ring of integers of the unique unramified extension $\mathbb{Q}_{p}(\mu_{p^{n}-1})$ of $\mathbb{Q}_{p}$ of degree $n$, $W(\mathbb{F}_{p^{n}})$ is generated as a $\mathbb{Z}_{p}$-algebra by a primitive $(p^{n}-1)$-st root of unity. Since the roots of unity of $\pi_{0}cE_{n}$ all lie in the image of the canonical inclusion
$W(\mathbb{F}_{p^{n}})\to W(\mathbb{F}_{p^{n}})[\lvert
u_{1},\dots,u_{n-1}\rvert]$, one observes that the map $\pi_{0}f$ has to
factor through the canonical inclusion $W(\mathbb{F}_{p^{n}})\to
W(\mathbb{F}_{p^{n}})[\lvert u_{1},\dots,u_{n-1}\rvert]$. Furthermore, there
is a unique ring map $W(\mathbb{F}_{p^{n}})\to W(\mathbb{F}_{p^{n}})$ that
lifts the identity map on $\mathbb{F}_{p^{n}}$. This shows that $\pi_{0}f$ is
given by the canonical inclusion $W(\mathbb{F}_{p^{n}})\to
W(\mathbb{F}_{p^{n}})[\lvert u_{1},\dots,u_{n-1}\rvert]$. Since $\pi_{*}f$ is
a map of $\pi_{*}\mathbb{S}_{p}$-modules, it follows that the composition of
$f$ with the map $cE_{n}\to E_{n}$ provides the map claimed in the
proposition.
∎
Let $Gal$ denote the Galois group $\operatorname{Gal}(\mathbb{F}_{p^{n}}/\mathbb{F}_{p})$. Due to the Goerss-Hopkins-Miller theorem, there is an action of $Gal$ on the $E_{\infty}$-algebra $E_{n}$ for which
$\pi_{*}E_{n}^{hGal}\cong\mathbb{Z}_{p}[\lvert u_{1},\dots,u_{n-1}\rvert][u^{\pm 1}]$
where the degrees of the generators are as in $\pi_{*}E_{n}$.
###### Proposition 9.3.
There is an equivalence of $E_{\infty}$-$\mathbb{S}_{p}$-algebras:
$\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}E_{n}^{hGal}\simeq
E_{n}.$
###### Proof.
This equivalence is given by the following composite map of $E_{\infty}$
$\mathbb{S}_{p}$-algebras
(9.4)
$\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}E_{n}^{hGal}\to
E_{n}\wedge_{\mathbb{S}_{p}}E_{n}\to E_{n}$
where the first map is induced by the map provided by Proposition 9.1 and the
second map is given by the multiplication map of $E_{n}$. Due to the flatness
of $\mathbb{S}_{W(\mathbb{F}_{p^{n}})}$, this map induces the canonical map
(9.5) $W(\mathbb{F}_{p^{n}})\otimes_{\mathbb{Z}_{p}}\mathbb{Z}_{p}[\lvert
u_{1},\dots,u_{n-1}\rvert][u^{\pm 1}]\to W(\mathbb{F}_{p^{n}})[\lvert
u_{1},\dots,u_{n-1}\rvert][u^{\pm 1}].$
at the level of homotopy groups [36, 7.2.2.13]. Since $W(\mathbb{F}_{p^{n}})$
is a free $\mathbb{Z}_{p}$-module of finite rank, the functor
$W(\mathbb{F}_{p^{n}})\otimes_{\mathbb{Z}_{p}}-$ is given by taking an $n$-fold
product of $-$. In particular, the functor
$W(\mathbb{F}_{p^{n}})\otimes_{\mathbb{Z}_{p}}-$ commutes with completions.
This shows that (9.5) is an isomorphism as desired. ∎
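To illustrate the finite-rank argument above in the smallest case (our example): for $n=2$ one has $W(\mathbb{F}_{p^{2}})\cong\mathbb{Z}_{p}[\zeta]$ for a primitive $(p^{2}-1)$-st root of unity $\zeta$, free as a $\mathbb{Z}_{p}$-module on $\{1,\zeta\}$, so that

```latex
W(\mathbb{F}_{p^{2}}) \otimes_{\mathbb{Z}_{p}} M \;\cong\; M \times M
```

for every $\mathbb{Z}_{p}$-module $M$; finite products commute with completions, which is the point used in the proof.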
### 9.2. Algebraic $K$-theory of Morava $E$-theories
There is a finite subgroup $\mathbb{F}_{p^{n}}^{\times}$ of the Morava
stabilizer group such that $K=\mathbb{F}_{p^{n}}^{\times}\rtimes Gal$ acts on
the $E_{\infty}$-algebra $E_{n}$. Furthermore,
$E_{n}^{hK}\simeq\widehat{E(n)}$
where $\widehat{E(n)}$ denotes the $K(n)$-localization of the Johnson-Wilson
spectrum $E(n)$, see [45, Section 5.4.7]. We have
$\pi_{*}E(n)\cong\mathbb{Z}_{(p)}[v_{1},\dots,v_{n-1}][v_{n}^{\pm 1}]\textup{\
\ and \
}\pi_{*}\widehat{E(n)}\cong\mathbb{Z}_{(p)}[v_{1},\dots,v_{n-1}][v_{n}^{\pm
1}]_{I}^{\land}$
where $I$ denotes the ideal $(p,v_{1},\dots,v_{n-1})$. Since $\widehat{E(n)}$
is given by $E_{n}^{hK}$, there is a $K$-equivariant map of
$E_{\infty}$-algebras $\widehat{E(n)}\to E_{n}$. In particular, this provides
a map
$\widehat{E(n)}\to E_{n}^{hGal}$
of $E_{\infty}$-algebras. This map carries $v_{n}$ to $u^{-(p^{n}-1)}$ and
$v_{i}$ to $u_{i}u^{-(p^{i}-1)}$ for $0<i<n$.
For the following, we fix an $E_{2}$-map
$\mathbb{S}[\sigma_{2(p^{n}-1)}]\to\widehat{E(n)}$ to adjoin roots. Recall
from Remark 4.8 that in this situation,
$\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})$ is an $\widehat{E(n)}$-algebra.
###### Theorem 9.6.
There are equivalences of $E_{1}$ $\widehat{E(n)}$-algebras:
$\displaystyle E_{n}^{hGal}\simeq$ $\displaystyle\
\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})$ $\displaystyle E_{n}\simeq$
$\displaystyle\
\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})$
where the class $u^{-1}$ corresponds to $\sqrt[p^{n}-1]{v_{n}}$ at the level
of homotopy groups for both of these equivalences.
In particular, $E_{n}^{hGal}$ and $E_{n}$ are $p^{n}-1$-graded $E_{1}$
$\widehat{E(n)}$-algebras with
$(E_{n}^{hGal})_{i}\simeq\Sigma^{2i}\widehat{E(n)}$
and
$(E_{n})_{i}\simeq\Sigma^{2i}\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}\widehat{E(n)}$
for every $0\leq i<p^{n}-1$.
###### Proof.
By inspection, one observes that
$\pi_{*}(\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}}))\cong\pi_{*}E_{n}^{hGal},$
see [45, 5.4.9]. Furthermore, the map of rings
$\pi_{*}\widehat{E(n)}\to\pi_{*}(\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}}))\cong(\pi_{*}\widehat{E(n)})[z]/(z^{p^{n}-1}-v_{n})$
is étale, as $v_{n}$ and $p^{n}-1$ are invertible in
$\pi_{*}\widehat{E(n)}$. Through [23, Theorem 1.10], we obtain the first
equivalence in the theorem. The second equivalence follows by the first
equivalence and Proposition 9.3. The statement on graded ring structures
follows by the fact that root adjunction results in $m$-graded ring spectra,
see Construction 4.6. ∎
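The étaleness used in the proof above can be verified by the Jacobian criterion; the following check is our own remark. With $A=\pi_{*}\widehat{E(n)}$ and $B=A[z]/(z^{p^{n}-1}-v_{n})$, the derivative

```latex
\frac{d}{dz}\bigl(z^{p^{n}-1}-v_{n}\bigr) = (p^{n}-1)\, z^{p^{n}-2}
```

is a unit in $B$: the integer $p^{n}-1$ is prime to $p$, hence invertible in the $\mathbb{Z}_{(p)}$-algebra $A$, and $z$ is a unit in $B$ because $z^{p^{n}-1}=v_{n}$ is.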
We are ready to prove our result on the $K$-theory of Morava $E$-theories. For
this, we use the following composite map
(9.7)
$E(n)\to\widehat{E(n)}\to\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})\xrightarrow{\simeq}E_{n}^{hGal}$
of $E_{1}$-rings where the last map above is given by Theorem 9.6. Using
Proposition 9.3, we obtain the following composite:
(9.8) $\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge
E(n)\to\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge\widehat{E(n)}\to\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge
E_{n}^{hGal}\to\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}E_{n}^{hGal}\xrightarrow{\simeq}E_{n}.$
###### Theorem 9.9.
The maps
$\displaystyle K(E(n))\to$ $\displaystyle\ K(E_{n}^{hGal})$ $\displaystyle
K(\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge E(n))\to$ $\displaystyle\ K(E_{n})$
induced by those above are inclusions of wedge summands after
$T(n+1)$-localization.
###### Proof.
The first map in (9.7) is a $T(n)\vee T(n+1)$-equivalence and hence induces a
$T(n+1)$-equivalence in algebraic $K$-theory [31, Purity Theorem]. Therefore,
the first statement of the theorem follows by applying Corollary 5.11 to
$\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})$.
Similarly, the first and the third maps in (9.8) induce $T(n+1)$-equivalences
in algebraic $K$-theory. The second statement of the theorem follows by
observing that there is an equivalence of $E_{1}$-algebras:
$\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge\big{(}\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}})\big{)}\simeq\big{(}\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge\widehat{E(n)}\big{)}(\sqrt[p^{n}-1]{v_{n}})$
and then applying Corollary 5.11. ∎
### 9.3. Two-periodic Morava $K$-theories
We obtain analogous results for two-periodic Morava $K$-theories. Taking a
quotient with respect to a regular sequence in
$\pi_{*}\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}\widehat{E(n)}$,
one obtains an $\widehat{E(n)}$-algebra $K(n)$ [30, 3, 24]. Here, $K(n)$ is
the Morava $K$-theory spectrum with coefficients in $\mathbb{F}_{p^{n}}$. We
have
$\pi_{*}K(n)\cong\mathbb{F}_{p^{n}}[v_{n}^{{\pm 1}}].$
Using the $\widehat{E(n)}$-algebra structure on $K(n)$, we adjoin roots and define the two-periodic Morava $K$-theory as follows:
$K_{n}:=K(n)(\sqrt[p^{n}-1]{v_{n}}).$
In this case,
$\pi_{*}K_{n}\cong\mathbb{F}_{p^{n}}[u^{{\pm 1}}]$
where $\lvert u\rvert=-2$. Together with Theorem 9.6, this provides a
commuting diagram of $E_{1}$-rings
${\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\wedge_{\mathbb{S}_{p}}\widehat{E(n)}}$${E_{n}}$${K(n)}$${K_{n}}$
which justifies our definition of $K_{n}$. In particular, $K_{n}$ is a
$p^{n}-1$-graded $E_{1}$-ring in a non-trivial way and we obtain the following
from Corollary 5.11.
###### Theorem 9.10.
The following map
$K(K(n))\to K(K_{n})$
is the inclusion of a wedge summand after $T(i)$-localization for every $i\geq
2$.
###### Corollary 9.11.
If $K(n)$ satisfies the redshift conjecture, then so does $K_{n}$.
The $V(1)$-homotopy of $K(k(1))$ is computed by Ausoni and Rognes in [5] for
$p>3$. In particular, their computation shows that $K(1)$ satisfies the
redshift conjecture. We obtain the following.
###### Corollary 9.12.
The two-periodic Morava $K$-theory $K_{1}$ of height one satisfies the redshift conjecture for $p>3$.
### 9.4. THH descent for Morava $E$-theories
Theorem 6.28 identifies THH of various periodic ring spectra with their
logarithmic THH. For instance, the Morava $E$-theory spectrum $E_{n}$ is
periodic with a unit $u$ in degree $-2$. Since $E_{n}/(u^{-1})\simeq 0$, the
canonical map
$\operatorname{\textup{THH}}(E_{n})\xrightarrow{\simeq}\operatorname{\textup{THH}}(E_{n}\mid
u^{-1})$
is an equivalence. Using this, together with our result on logarithmic THH-
étaleness of root adjunction, we show that
$\operatorname{\textup{THH}}(E_{n})$ may be obtained from
$\operatorname{\textup{THH}}(\widehat{E(n)})$ via base-change up to
$p$-completion. Such base-change formulas and their relationship with Galois
descent problems for THH were studied by Mathew in [39].
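The vanishing $E_{n}/(u^{-1})\simeq 0$ is immediate from the defining cofiber sequence (our remark):

```latex
\Sigma^{2} E_{n} \xrightarrow{\;\cdot\, u^{-1}\;} E_{n} \longrightarrow E_{n}/(u^{-1}),
```

where the first map is multiplication by the unit $u^{-1}\in\pi_{2}E_{n}$ and is therefore an equivalence, so its cofiber is contractible.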
###### Theorem 9.13.
The canonical map:
$\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\widehat{E(n)}}E_{n}^{hGal}\xrightarrow{\simeq}\operatorname{\textup{THH}}(E_{n}^{hGal}),$
is an equivalence.
###### Proof.
Recall from Theorem 9.6 that there is an equivalence of
$\widehat{E(n)}$-algebras
$E_{n}^{hGal}\simeq\widehat{E(n)}(\sqrt[p^{n}-1]{v_{n}}).$
Therefore, it follows by Construction 4.6 that
(9.14)
$\begin{split}\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\widehat{E(n)}}E_{n}^{hGal}&\simeq\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\widehat{E(n)}}\widehat{E(n)}\wedge_{\mathbb{S}_{(p)}[\sigma_{2(p^{n}-1)}]}\mathbb{S}_{(p)}[\sigma_{2}]\\\
&\simeq\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\mathbb{S}_{(p)}[\sigma_{2(p^{n}-1)}]}\mathbb{S}_{(p)}[\sigma_{2}].\end{split}$
Since $v_{n}$ is a unit in $\widehat{E(n)}$ and $u^{-1}$ is a unit in
$E_{n}^{hGal}$, Theorem 6.28 provides the equivalences:
$\operatorname{\textup{THH}}(\widehat{E(n)})\xrightarrow{\simeq}\operatorname{\textup{THH}}(\widehat{E(n)}\mid
v_{n})\textup{\ \ and \
}\operatorname{\textup{THH}}(E_{n}^{hGal})\xrightarrow{\simeq}\operatorname{\textup{THH}}(E_{n}^{hGal}\mid
u^{-1}).$
Using these equivalences together with Theorem 6.23, we obtain that the
following canonical map is an equivalence.
$\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\mathbb{S}_{(p)}[\sigma_{2(p^{n}-1)}]}\mathbb{S}_{(p)}[\sigma_{2}]\xrightarrow{\simeq}\operatorname{\textup{THH}}(E_{n}^{hGal})$
This, together with (9.14), provides the desired result. ∎
###### Theorem 9.15.
The canonical map:
$\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\widehat{E(n)}}E_{n}\xrightarrow{\simeq_{p}}\operatorname{\textup{THH}}(E_{n}),$
is an equivalence after $p$-completion.
###### Proof.
The first equivalence below follows by Proposition 9.3 and the second follows by Theorem 9.13.
$\begin{split}\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\widehat{E(n)}}E_{n}&\simeq\operatorname{\textup{THH}}(\widehat{E(n)})\wedge_{\widehat{E(n)}}(E_{n}^{hGal}\wedge_{\mathbb{S}_{p}}\mathbb{S}_{W(\mathbb{F}_{p^{n}})})\\ &\simeq\operatorname{\textup{THH}}(E_{n}^{hGal})\wedge_{\mathbb{S}_{p}}\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\end{split}$
Therefore, it is sufficient to show that the canonical map
$\operatorname{\textup{THH}}(E_{n}^{hGal})\wedge_{\mathbb{S}_{p}}\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\to\operatorname{\textup{THH}}(E_{n}^{hGal}\wedge_{\mathbb{S}_{p}}\mathbb{S}_{W(\mathbb{F}_{p^{n}})})$
is an equivalence after $p$-completion. This follows by the following
canonical diagram of $\mathbb{S}/p$-equivalences.
${\operatorname{\textup{THH}}(E_{n}^{hGal})\wedge_{\mathbb{S}_{p}}\mathbb{S}_{W(\mathbb{F}_{p^{n}})}}$${\operatorname{\textup{THH}}(E_{n}^{hGal}\wedge_{\mathbb{S}_{p}}\mathbb{S}_{W(\mathbb{F}_{p^{n}})})}$${\operatorname{\textup{THH}}^{\mathbb{S}_{p}}(E_{n}^{hGal})\wedge_{\mathbb{S}_{p}}\mathbb{S}_{W(\mathbb{F}_{p^{n}})}}$${\operatorname{\textup{THH}}^{\mathbb{S}_{p}}(E_{n}^{hGal}\wedge_{\mathbb{S}_{p}}\mathbb{S}_{W(\mathbb{F}_{p^{n}})})}$${\operatorname{\textup{THH}}^{\mathbb{S}_{p}}(E_{n}^{hGal})\wedge_{\mathbb{S}_{p}}\operatorname{\textup{THH}}^{\mathbb{S}_{p}}(\mathbb{S}_{W(\mathbb{F}_{p^{n}})})}$$\scriptstyle{\simeq_{p}}$$\scriptstyle{\simeq_{p}}$$\scriptstyle{\simeq_{p}}$$\scriptstyle{\simeq}$
The right hand vertical map and the upper left vertical map are
$\mathbb{S}/p$-equivalences due to [38, Lemma 5.20]. The fact that the lower
left vertical map is an $\mathbb{S}/p$-equivalence follows by [38, proof of
Lemma 5.20] and the fact that the composite
$\mathbb{S}_{W(\mathbb{F}_{p^{n}})}\to\operatorname{\textup{THH}}(\mathbb{S}_{W(\mathbb{F}_{p^{n}})})\to\mathbb{S}_{W(\mathbb{F}_{p^{n}})}$
is an equivalence. This shows that the upper horizontal map is an
$\mathbb{S}/p$-equivalence proving the theorem. ∎
# No-signaling-in-time as a condition for macrorealism: the case of neutrino
oscillations
Massimo Blasone<EMAIL_ADDRESS>Dipartimento di Fisica, Università di
Salerno, Via Giovanni Paolo II 132, 84084 Fisciano (SA), Italy INFN Sezione
di Napoli, Gruppo collegato di Salerno, Italy Fabrizio Illuminati
<EMAIL_ADDRESS>INFN Sezione di Napoli, Gruppo collegato di Salerno,
Italy Dipartimento di Ingegneria Industriale, Università di Salerno, Via
Giovanni Paolo II 132, 84084 Fisciano (SA), Italy Luciano Petruzziello
<EMAIL_ADDRESS>INFN Sezione di Napoli, Gruppo collegato di Salerno,
Italy Dipartimento di Ingegneria Industriale, Università di Salerno, Via
Giovanni Paolo II 132, 84084 Fisciano (SA), Italy Institut für Theoretische
Physik, Albert-Einstein-Allee 11, Universität Ulm, 89069 Ulm, Germany Kyrylo
Simonov<EMAIL_ADDRESS>s7 rail technology GmbH, Lastenstraße 36,
4020 Linz, Austria Luca Smaldone<EMAIL_ADDRESS>Faculty of
Physics, University of Warsaw, ul. Pasteura 5, 02-093 Warsaw, Poland
###### Abstract
We consider two recently proposed conditions for macrorealism, known as the
no-signaling-in-time and arrow-of-time conditions, which together are
necessary and sufficient, and study them in the context of neutrino flavor
transitions, within both the plane wave description and the wave packet
approach. We then compare the outcome of the above investigation with the
implication of various formulations of Leggett–Garg inequalities. In
particular, we show that the fulfillment of the addressed conditions for
macrorealism in neutrino oscillations implies the fulfillment of Leggett–Garg
inequalities, whereas the converse is not true. Finally, in the framework of
wave packet approach, we also prove that, for distances longer than the
coherence length, the no-signaling-in-time condition is always violated whilst
Leggett–Garg inequalities are not.
## I Introduction
Neutrino mixing and oscillations represent the main indications of physics
beyond the Standard Model [1, 2, 3, 4, 5]. Among the multifaceted aspects of
the above phenomenon, in recent years the quantum informational properties of
mixed flavor states have been widely investigated [6, 7, 8, 9, 10, 11, 12, 13,
14, 15]. An important achievement along this direction is the characterization
of the intrinsic quantum nature of neutrino oscillations, which has been
probed with the data available from the MINOS experiment by means of the
Leggett–Garg inequalities (LGIs) [16].
Loosely speaking, LGIs are typically regarded as the temporal analogues of
Bell inequalities; whilst the latter quantify the quantumness of a given
system via spatially-separated tests (thus dealing with quantum nonlocality),
the former rely on the notion of macroscopic coherence based upon temporal
auto-correlation functions [17, 18, 19, 20, 21]. Indeed, LGIs are closely
related to the concept of _macrorealism_ , an intuitive view of our classical
macroscopic world according to which measurements do not perturb the state of
the probed system and reveal a pre-existing, observable quantity.
Because of their relevance, LGIs have been extensively employed in
experimental verifications [16, 22, 23, 24, 25]. On the same footing, in the
last decades systems revealing phenomena of mixing and flavor oscillations
have become the subject of an emergent exploration dealing with classicality
and macroscopic superpositions [26, 27, 28, 29, 30, 30, 31, 32, 33, 34, 35,
36, 37, 14, 38, 39]. As a matter of fact, it is no coincidence that neutrinos
provide a promising probe for testing the validity of LGIs, since their flavor
oscillations exhibit quantum coherence even after the particles have traveled
macroscopic distances [34, 35, 36, 37, 14].
Despite the pivotal role played by LGIs, experiments centered around
macrorealism reveal a more complex structure than tests based upon
local realism [40]. The crucial difference lies in the fact that, whilst Bell
inequalities are both necessary and sufficient conditions for local realism
[41], the fulfillment of LGIs is not in a one-to-one correspondence with
macrorealism. Indeed, the validity of the standard LGIs and their variants
such as the _Wigner form_ of LGIs (WLGIs) [42] turns out not to be sufficient
for macrorealism [43, 40, 44]. For this reason, it is essential to introduce
another set of conditions for macrorealism which would be both necessary and
sufficient; such a set has already been developed, and it is given by a
combination of _no-signaling-in-time_ (NSIT) (which is an alternative
necessary condition for macrorealism [19, 43]) and _arrow-of-time_ (AoT)
conditions [45, 43]. Being equalities for joint probabilities rather than
inequalities, these requirements are more suitable to be interpreted as
quantum witnesses.
In this paper, we study the NSIT and AoT conditions in the case of two-flavor
neutrino oscillations. We find that, while AoT conditions are always trivially
satisfied, neutrino oscillations violate NSIT at all times except on a discrete
set of isolated points. However, if a wave-packet treatment is considered and the
measurements are performed at sufficiently large intervals of time
(corresponding to distances longer than the coherence length), the NSIT
conditions are always violated. This fact confirms that, even after the
occurrence of wave-packet decoherence, neutrinos still retain their intrinsic
quantum nature, thereby preventing a macrorealistic interpretation of flavor
transitions even at late times. In conjunction with that, we also compare the
validity of LGIs (WLGIs) with the validity of NSIT and AoT conditions; in so
doing, we find that LGIs (WLGIs) are never violated when NSIT and AoT are not,
and that for large-time intervals all the LGIs (WLGIs) are fulfilled.
The remainder of the paper is organized as follows: in Section II, we review
the notion of macrorealism and the related quantifiers we will employ to
support our reasoning (namely, LGIs, WLGIs and NSIT/AoT). In Section III, we
provide the necessary tools to investigate neutrino oscillations and analyze
the ensuing NSIT conditions in the two-flavor approximation; with these
results, we then establish a comparison with the predictions stemming from
LGIs and WLGIs. Finally, Section IV contains conclusions and future
perspectives.
## II Macrorealism and NSIT conditions
According to our daily experience, we do not observe macroscopic objects
around us being in two different positions at the same time. Furthermore, a
motionless object with a net vanishing force acting on it stays at all times
in a given place which can be determined by simply looking at it.
_Macrorealism_ aims at formalizing this knowledge by relying on the following
basic assumptions (often, a third condition of _induction_ is also
considered [20], stating that the outcome of a measurement on a system
cannot be affected by what will or will not be measured on it later):
* •
_macrorealism per se_ : given a set of available macroscopically distinct
states, a macroscopic object is in one of them at any given time;
* •
_non-invasive measurability_ : it is possible in principle to determine the
state of the macroscopic object without affecting either its state or its
dynamical evolution.
Similarly to the celebrated Bell inequalities in the framework of local
realism, one can derive a set of inequalities (known as LGIs) that have to be
satisfied by any physical system abiding by the above macrorealistic
prescriptions. To show this in a simple case, let us consider a system with a
dichotomous macroscopic observable $O$ with associated values $\pm 1$ which is
consecutively measured $N$ times by an observer at fixed time points
$\\{t_{0},t_{1},...,t_{N-1}\\}$. Assuming for simplicity $N=3$ (_i.e._ , three
measurements at times $t_{0},t_{1},t_{2}$), the measurement statistics with
respect to the 2-time correlation functions $C_{ij}=\langle
O(t_{i})O(t_{j})\rangle$ has to satisfy the LGIs [20, 44]
$\displaystyle\mathcal{L}_{1}(t_{0},t_{1},t_{2})=1+C_{01}+C_{12}+C_{02}\geq
0\,,$ (1)
$\displaystyle\mathcal{L}_{2}(t_{0},t_{1},t_{2})=1-C_{01}-C_{12}+C_{02}\geq
0\,,$ (2)
$\displaystyle\mathcal{L}_{3}(t_{0},t_{1},t_{2})=1+C_{01}-C_{12}-C_{02}\geq
0\,,$ (3)
$\displaystyle\mathcal{L}_{4}(t_{0},t_{1},t_{2})=1-C_{01}-C_{12}-C_{02}\geq
0\,,$ (4)
if macrorealism holds true. Hence, as for Bell inequalities, these relations
can be used to explore the quantumness of a system and the existence of
macroscopic superpositions. Indeed, in quantum mechanics the LGIs (1)-(4) can
be (and are) violated, in particular by the systems coherently oscillating
between the states on which $O=\pm 1$, respectively [20].
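As an illustrative sketch (not part of the original argument), the four combinations (1)-(4) can be evaluated numerically for a generic two-level system, assuming the standard oscillatory correlator $C_{ij}=\cos\omega(t_{j}-t_{i})$:

```python
import math

def lgi_values(omega, tau):
    """Evaluate the LGI combinations L_1..L_4 of Eqs. (1)-(4) for a
    two-level system with correlators C_ij = cos(omega*(t_j - t_i)),
    measured at equally spaced times t0 = 0, t1 = tau, t2 = 2*tau."""
    c01 = math.cos(omega * tau)
    c12 = math.cos(omega * tau)
    c02 = math.cos(2.0 * omega * tau)
    return (1 + c01 + c12 + c02,   # L_1
            1 - c01 - c12 + c02,   # L_2
            1 + c01 - c12 - c02,   # L_3
            1 - c01 - c12 - c02)   # L_4

# For 0 < omega*tau < pi/2 one finds L_2 = 2c(c - 1) < 0 with
# c = cos(omega*tau): coherent oscillations violate Eq. (2).
L1, L2, L3, L4 = lgi_values(1.0, math.pi / 3.0)
```

At $\omega\tau=\pi/3$ this gives $\mathcal{L}_{2}=-1/2$, the largest violation attainable with this correlator.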
The Leggett–Garg inequalities (1)-(4) can then be regarded as the temporal
counterpart of the Bell inequalities, and just like the latter they are not
unique. As a matter of fact, alternative forms of LGIs can be found by
focusing solely on the joint probabilities $P(O_{i},O_{j})$ of finding
outcomes $O_{i}$ and $O_{j}$ after measuring $O$ at times $t_{i}$ and $t_{j}$,
respectively (instead of evaluating the functions $C_{ij}$ [21, 42]). Indeed,
macrorealism entails the existence of an overall joint probability
distribution $P(O_{0},O_{1},O_{2})$ of definite outcomes at all measurement
times $t_{0},t_{1},t_{2}$. Thus, the two-time probabilities $P(O_{i},O_{j})$
can be straightforwardly calculated as marginals of the overall joint
probability distribution. The requirement of positivity
$P(O_{0},O_{1},O_{2})\geq 0$ demands specific constraints on $P(O_{i},O_{j})$;
the shape of such constraints can be summarized in the so-called WLGIs [35]
$\displaystyle\mathcal{W}_{1}(t_{0},t_{1},t_{2})$ $\displaystyle=$
$\displaystyle P(O_{1},O_{2})-P(-O_{0},O_{1})-P(O_{0},O_{2})\ \leq\ 0\,,$ (5)
$\displaystyle\mathcal{W}_{2}(t_{0},t_{1},t_{2})$ $\displaystyle=$
$\displaystyle P(O_{0},O_{2})-P(O_{0},-O_{1})-P(O_{1},O_{2})\ \leq\ 0\,,$ (6)
$\displaystyle\mathcal{W}_{3}(t_{0},t_{1},t_{2})$ $\displaystyle=$
$\displaystyle P(O_{0},O_{1})-P(O_{1},-O_{2})-P(O_{0},O_{2})\ \leq\ 0\,.$ (7)
As it occurs for LGIs (1)-(4), WLGIs (5)-(7) can be violated by quantum
mechanical probabilities.
Interestingly, it has been pointed out that all forms of LGIs represent only a
necessary (but not a sufficient) condition for macrorealism, which can still
be violated even if LGIs are satisfied [43, 40]. This raises the need to seek
alternative conditions that could signal a quantum behavior for the cases in
which LGIs provide an incomplete description. A necessary and sufficient
condition is given by a set of equalities [43] consisting of two classes that
constrain signaling from past to future (known as no-signaling-in-time
conditions, or NSIT) and from future to past (known as arrow-of-time
conditions, or AoT). In the case $N=3$ (the measurements considered in the
present work), one can identify three NSIT conditions
$\displaystyle\mathrm{NSIT}^{(1)}:\ \ P(O_{2})\ =\
\sum_{O_{1}}P(O_{1},O_{2})\,,$ (8) $\displaystyle\mathrm{NSIT}^{(2)}:\ \
P(O_{0},O_{2})\ =\ \sum_{O_{1}}P(O_{0},O_{1},O_{2})\,,$ (9)
$\displaystyle\mathrm{NSIT}^{(3)}:\ \ P(O_{1},O_{2})\ =\
\sum_{O_{0}}P(O_{0},O_{1},O_{2})\,,$ (10)
and three AoT conditions
$\displaystyle\mathrm{AoT}^{(1)}:\ \ P(O_{0},O_{1})\ =\
\sum_{O_{2}}P(O_{0},O_{1},O_{2})\,,$ (11) $\displaystyle\mathrm{AoT}^{(2)}:\ \
P(O_{0})\ =\ \sum_{O_{1}}P(O_{0},O_{1})\,,$ (12)
$\displaystyle\mathrm{AoT}^{(3)}:\ \ P(O_{1})\ =\
\sum_{O_{2}}P(O_{1},O_{2})\,.$ (13)
Remarkably, it can be proved that NSIT conditions imply all possible forms of
LGIs.
In the following, we apply the notions introduced above in the context of
neutrino flavor transitions to compare the different conditions for
macrorealism.
## III Macrorealism in neutrino oscillations
### III.1 Phenomenology of neutrino oscillations
Neutrinos provide a paradigmatic example of mixed particles, whose physical
(flavor) states distinguishable in a weak process do not coincide with the
(mass) eigenstates of their Hamiltonian, which propagate with frequencies that
depend on the corresponding masses. In the relativistic regime, neutrino mass
eigenstates evolve according to
$\displaystyle|\nu_{j}(t)\rangle\ $ $\displaystyle=$ $\displaystyle\
e^{-iE_{j}t}|\nu_{j}(0)\rangle,$ (14) $\displaystyle E_{j}$ $\displaystyle=$
$\displaystyle\sqrt{p^{2}+m_{j}^{2}}\;\approx\;E+\frac{m_{j}^{2}}{2E},$ (15)
where the masses $m_{j}$ are taken to be much smaller than the momentum $p$ and
$E=p$ is the energy of a massless neutrino. On the other hand, flavor states
are well-described as superpositions of the mass eigenstates [1, 2]
$|\nu_{\sigma}(t)\rangle\ =\ \sum_{j}\,U^{*}_{\sigma j}|\nu_{j}(t)\rangle\,,$
(16)
with coefficients given by the elements $U_{\sigma j}$ of the mixing matrix
$U$. The non-equivalence of physical flavor states and mass eigenstates of the
particle Hamiltonian ascribed to the mixing phenomenon is responsible for the
oscillation between distinct flavor states. If a neutrino is produced in a
weak process at time $t=0$ with a given flavor $\sigma$, it evolves into a
superposition of flavor states at $t>0$ in such a way that the probability of
detecting another flavor $\rho$ is
$\displaystyle P_{\sigma\rightarrow\rho}(t)$ $\displaystyle=$
$\displaystyle\left|\langle\nu_{\rho}(t)|\nu_{\sigma}(0)\rangle\right|^{2}$ (17)
$\displaystyle=$ $\displaystyle\sum_{j,k}\,U_{\rho j}U_{\sigma k}U^{*}_{\rho
k}U^{*}_{\sigma j}\exp\left(-i\frac{\Delta m^{2}_{jk}}{2E}t\right)\,,$
where $\Delta m^{2}_{jk}\equiv m_{j}^{2}-m_{k}^{2}$. In particular, for the
two-flavor case (a typical approximation that successfully describes many
experiments with good accuracy [46]), the mixing matrix is given by
$U\ =\ \begin{pmatrix}\cos\theta&\sin\theta\\\
-\sin\theta&\cos\theta\end{pmatrix}\,,$ (18)
with $\theta$ being the _mixing angle_. Under these assumptions, the flavor
oscillation probability is given by the Pontecorvo formula
$\displaystyle P_{\sigma\rightarrow\rho}(t)$ $\displaystyle=$
$\displaystyle\sin^{2}(2\theta)\,\sin^{2}\left(\frac{\Delta
m^{2}}{4E}t\right)\,,\quad\sigma\neq\rho\,,$ (19) $\displaystyle
P_{\sigma\rightarrow\sigma}(t)$ $\displaystyle=$ $\displaystyle
1-P_{\sigma\rightarrow\rho}(t)\,,$ (20)
and $\Delta m^{2}\equiv\Delta m^{2}_{12}$. In light of these features, flavor
neutrinos resemble the behavior of two-level systems such as spin-${1}/{2}$
states and polarized photons; hence, they are naturally liable to be studied
in the framework of macrorealism.
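A minimal numerical sketch of the two-flavor formulas (the oscillation phase is written here as $\Delta m^{2}t/4E$; phase conventions differ by factors of two across references, and the parameter values are purely illustrative):

```python
import math

def p_transition(t, theta, dm2, energy):
    """Pontecorvo two-flavor transition probability P_{sigma->rho}(t),
    in natural units (hbar = c = 1)."""
    return math.sin(2.0 * theta) ** 2 * math.sin(dm2 * t / (4.0 * energy)) ** 2

def p_survival(t, theta, dm2, energy):
    """Survival probability, fixed by unitarity as 1 - P_transition."""
    return 1.0 - p_transition(t, theta, dm2, energy)

# Illustrative numbers: sin^2(theta) ~ 0.314, dm2 in eV^2, E in eV.
theta = math.asin(math.sqrt(0.314))
dm2, energy = 7.92e-5, 1.0e10
```

By construction the two probabilities sum to one at every time, and the transition probability is bounded by $\sin^{2}(2\theta)$.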
Note that, in the scenario described so far, mass eigenstates possess a
definite momentum $p$; therefore, they are considered as propagating plane
waves. Nevertheless, the above picture still manages to fit most of neutrino
physics phenomenology that is probed in actual experiments. However, a more
realistic investigation of neutrinos requires a treatment of mass eigenstates
in terms of wave packets. To this aim, let us now consider a neutrino
propagating along the $x$-direction
$|\nu_{\sigma}(x,t)\rangle\ =\ \sum_{j}\,U^{*}_{\sigma
j}\,\psi_{j}(t,x)\,|\nu_{j}\rangle\,,$ (21)
where the wave packets $\psi_{j}(t,x)$ can be chosen as being Gaussian
functions [47]
$\displaystyle\psi_{j}(t,x)$ $\displaystyle=$
$\displaystyle\left(\sqrt{2\pi}\sigma_{x}\right)^{-\frac{1}{2}}e^{i(px-
E_{j}t)}e^{-\frac{(x-v_{j}t)^{2}}{4\sigma^{2}_{x}}}\,.$
Here, $p$ is the average momentum of the wave packet (to keep our
considerations simple and without loss of generality, we impose the same
average momentum for all mass eigenstates $|\nu_{j}\rangle$), while
$\sigma_{x}$ is the spatial spread and
$\displaystyle v_{j}=\frac{p}{E_{j}}\;\approx\;1-\frac{m_{j}^{2}}{2E^{2}}\,,$
(22)
where $v_{j}$ are the group velocities. The flavor oscillation formula is thus
given by
$\displaystyle P_{\sigma\to\rho}(t,x)$ $\displaystyle=$
$\displaystyle\left(\sqrt{2\pi}\sigma_{x}\right)^{-1}\,\sum_{j,k}\,U_{\rho
j}U_{\sigma k}U^{*}_{\rho k}U^{*}_{\sigma j}e^{-i\frac{\Delta
m^{2}_{jk}}{2E}t}$ (23) $\displaystyle\times$ $\displaystyle
e^{-\frac{(x-v_{j}t)^{2}}{4\sigma^{2}_{x}}-\frac{(x-v_{k}t)^{2}}{4\sigma^{2}_{x}}}\,.$
In neutrino experiments, oscillation times are not measured directly; rather,
the distance between the source and the detector is known, so in
phenomenological studies time is typically traded for space. Since our aim is
to test macrorealism with measurements taken at different times, the reverse
conversion of space into time is required. This procedure does not affect the
oscillation formula, which remains essentially unchanged because time and
space are interchangeable in the relativistic regime [47].
Now, we can integrate (23) over $x$ and normalize it in order to obtain a
consistent probabilistic description (_i.e._ ,
$\sum_{\sigma}P_{\sigma\to\rho}(t)=1$). Eventually, one obtains the following
oscillation formula:
$\displaystyle P_{\sigma\rightarrow\rho}(t)$ $\displaystyle=$
$\displaystyle\sum_{j,k}\,U_{\rho j}U_{\sigma k}U^{*}_{\rho k}U^{*}_{\sigma
j}\exp\left(-i\frac{\Delta m^{2}_{jk}}{2E}t\right)$ (24) $\displaystyle\times$
$\displaystyle\exp\left(-\frac{(\Delta
m^{2}_{jk})^{2}\,t^{2}}{32E^{4}\sigma^{2}_{x}}\right)\,.$
The exponential damping factor accounts for the progressive separation of the
mass-eigenstate wave packets and, in turn, for the decoherence mechanism that
washes out the oscillations over long times (distances). Therefore, it is
possible to identify a characteristic space/time scale at which the
decoherence occurs, namely the so-called _coherence length_
$L^{coh}_{jk}\ =\ \frac{4\sqrt{2}\,E^{2}}{\left|\Delta
m_{jk}^{2}\right|}\sigma_{x}\,.$ (25)
Finally, by specializing Eq. (24) for the two-flavor case, the oscillation
formula reads
$P_{\sigma\rightarrow\rho}(t)\ =\
\frac{\sin^{2}(2\theta)}{2}\,\left(1-e^{-\left(\frac{t}{L^{coh}}\right)^{2}}\cos\left(\frac{\Delta
m^{2}}{2E}t\right)\right)\,,$ (26)
with $L^{coh}=\frac{4\sqrt{2}\,E^{2}}{\left|\Delta m^{2}\right|}\sigma_{x}$.
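A sketch of the damped formula, showing how the interference term dies out beyond the coherence length (illustrative parameters; the cosine phase follows the $\Delta m^{2}t/2E$ convention, which may differ by factors of two in other references):

```python
import math

def p_wavepacket(t, theta, dm2, energy, sigma_x):
    """Two-flavor transition probability with Gaussian wave packets:
    the oscillating term is suppressed by exp(-(t/L_coh)^2)."""
    l_coh = 4.0 * math.sqrt(2.0) * energy ** 2 * sigma_x / abs(dm2)  # Eq. (25)
    damping = math.exp(-(t / l_coh) ** 2)
    return 0.5 * math.sin(2.0 * theta) ** 2 * (
        1.0 - damping * math.cos(dm2 * t / (2.0 * energy)))

# Beyond the coherence length the oscillation averages out and the
# probability saturates at sin^2(2*theta)/2.
theta = math.asin(math.sqrt(0.314))
dm2, energy = 7.92e-5, 1.0e10          # eV^2 and eV
sigma_x = 5.0e-10                      # 0.5 GeV^-1 expressed in eV^-1
saturation = 0.5 * math.sin(2.0 * theta) ** 2
```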
We are now ready to introduce the necessary and sufficient conditions for
macrorealism in neutrino oscillations. For this purpose, both the plane-wave
and the wave-packet description of two-flavor Dirac neutrinos will be
considered.
### III.2 Necessary and sufficient NSIT/AoT conditions for macrorealism in
neutrino oscillations
In order to test macrorealism in neutrino oscillations using the combined
NSIT/AoT conditions (8)–(13), we choose neutrino flavor to be the macroscopic
dichotomous observable $O(t)$. Since we work within the two-flavor
approximation (where the flavor can be either electronic $e$ or muonic $\mu$),
we define it as
$O(t)=|\nu_{e}(t)\rangle\langle\nu_{e}(t)|-|\nu_{\mu}(t)\rangle\langle\nu_{\mu}(t)|$,
which thus represents a dichotomous variable with values $\pm 1$ corresponding
to $e$- and $\mu$-neutrino flavors, respectively. The ensuing joint
probabilities in the NSIT/AoT conditions (8)–(13) for the measurement outcomes
can be straightforwardly rewritten in terms of flavor oscillating
probabilities using the conditional probability rule
$\displaystyle P(O_{i},O_{j})$ $\displaystyle=$ $\displaystyle
P(O_{i})P(O_{j}|O_{i})$ (27) $\displaystyle=$ $\displaystyle
P_{O_{0}\rightarrow O_{i}}(t_{i})P_{O_{i}\rightarrow O_{j}}(t_{j}-t_{i}).$
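The construction above can be sketched as follows: given an initial distribution and one-step transition probabilities (purely illustrative numbers, not a specific experiment), the three-time joint distribution is obtained by chaining conditional probabilities, and the AoT equalities then amount to marginalizing over the last outcome:

```python
def make_joint(p0, trans):
    """Three-time joint distribution P(O_0, O_1, O_2) built via the
    conditional rule P(O_i, O_j) = P(O_i) * P(O_j | O_i)."""
    outcomes = (+1, -1)
    return {(a, b, c): p0[a] * trans[(a, b)] * trans[(b, c)]
            for a in outcomes for b in outcomes for c in outcomes}

# Illustrative dichotomous process: start in the +1 ("e") state, with
# a transition matrix whose rows sum to one.
p0 = {+1: 1.0, -1: 0.0}
trans = {(+1, +1): 0.7, (+1, -1): 0.3,
         (-1, +1): 0.3, (-1, -1): 0.7}
joint = make_joint(p0, trans)

# AoT^(1): summing over the last outcome recovers P(O_0, O_1).
p01 = {(a, b): joint[(a, b, +1)] + joint[(a, b, -1)]
       for a in (+1, -1) for b in (+1, -1)}
```

Because each row of the transition matrix sums to one, the AoT equalities (11)-(13) hold automatically here, in line with Eqs. (28)-(30) below.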
Without loss of generality, we assume that an electronic neutrino is produced
at time $t_{0}=0$ and its flavor is subsequently measured at $t_{1}=t$ and
$t_{2}=2t$. When the measurement outcomes $O_{i}$ are fixed, we assume
$O_{0}=+1\equiv e$, $O_{1}=-1\equiv\mu$, and $O_{2}=-1\equiv\mu$. Therefore,
the full set of NSIT/AoT conditions in neutrino oscillations is
$\displaystyle\mathrm{NSIT}^{(1)}:\ \ P_{e\rightarrow\mu}(2t)\ =\ P_{e\rightarrow e}(t)P_{e\rightarrow\mu}(t)+P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t)\,,$
$\displaystyle\mathrm{NSIT}^{(2)}:\ \ P_{e\rightarrow e}(0)P_{e\rightarrow\mu}(2t)\ =\ P_{e\rightarrow e}(0)P_{e\rightarrow e}(t)P_{e\rightarrow\mu}(t)+P_{e\rightarrow e}(0)P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t)\,,$
$\displaystyle\mathrm{NSIT}^{(3)}:\ \ P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t)\ =\ P_{e\rightarrow e}(0)P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t)+P_{e\rightarrow\mu}(0)P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t)\,,$
$\displaystyle\mathrm{AoT}^{(1)}:\ \ P_{e\rightarrow e}(0)P_{e\rightarrow\mu}(t)\ =\ P_{e\rightarrow e}(0)P_{e\rightarrow\mu}(t)P_{\mu\rightarrow e}(t)+P_{e\rightarrow e}(0)P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t)\,,$
$\displaystyle\mathrm{AoT}^{(2)}:\ \ P_{e\rightarrow e}(0)\ =\ P_{e\rightarrow e}(0)P_{e\rightarrow e}(t)+P_{e\rightarrow e}(0)P_{e\rightarrow\mu}(t)\,,$
$\displaystyle\mathrm{AoT}^{(3)}:\ \ P_{e\rightarrow\mu}(t)\ =\ P_{e\rightarrow\mu}(t)P_{\mu\rightarrow e}(t)+P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t)\,.$
Interestingly, by suitably manipulating the AoT conditions, one ends up with
three relations which are identically satisfied at all times (note that this
might no longer hold in the three-flavor scenario, owing to the presence of a
non-vanishing CP-violating phase), that is
$\displaystyle\mathrm{AoT}^{(1)}:\ \ 1\ =\ P_{\mu\rightarrow
e}(t)+P_{\mu\rightarrow\mu}(t)\,,$ (28) $\displaystyle\mathrm{AoT}^{(2)}:\ \
1\ =\ P_{e\rightarrow e}(t)+P_{e\rightarrow\mu}(t)\,,$ (29)
$\displaystyle\mathrm{AoT}^{(3)}:\ \ 1\ =\ P_{\mu\rightarrow
e}(t)+P_{\mu\rightarrow\mu}(t)\,.$ (30)
This is somewhat expected, because the AoT conditions are usually satisfied in
standard quantum mechanics [40]. Thus, for neutrino oscillations, AoT
conditions can be safely neglected, thereby leaving the NSIT conditions as the
relevant ones. Accounting for the symmetry of flavor oscillation probabilities
under exchange of flavors, _i.e._ , $P_{e\rightarrow
e}(t)=P_{\mu\rightarrow\mu}(t)$ and $P_{e\rightarrow\mu}(t)=P_{\mu\rightarrow
e}(t)$, the NSIT are then given by
$\displaystyle\mathrm{NSIT}^{(1)}:\ \ P_{e\rightarrow\mu}(2t)\ =\
2P_{e\rightarrow\mu}(t)P_{e\rightarrow e}(t)\,,$ (31)
$\displaystyle\mathrm{NSIT}^{(2)}:\ \ P_{e\rightarrow\mu}(2t)\ =\
2P_{e\rightarrow\mu}(t)P_{e\rightarrow e}(t)\,,$ (32)
$\displaystyle\mathrm{NSIT}^{(3)}:\ \
P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t)\ =\
P_{e\rightarrow\mu}(t)P_{\mu\rightarrow\mu}(t).$ (33)
It is straightforward to check that $\mathrm{NSIT}^{(3)}$ is a trivial
relation, whilst $\mathrm{NSIT}^{(1)}$ and $\mathrm{NSIT}^{(2)}$ coincide.
Consequently, macrorealism in neutrino oscillations can be witnessed by a
single necessary and sufficient NSIT condition:
$\mathcal{N}(t)\ \equiv\
P_{e\rightarrow\mu}(2t)\,-\,2P_{e\rightarrow\mu}(t)\,P_{e\rightarrow e}(t)\ =\
0.$ (34)
The function $\mathcal{N}$ is plotted in Fig. 1 as a function of time for both
the plane-wave and the Gaussian wave-packet flavor oscillation probabilities
(19) and (26), respectively. It is worth stressing that, for the plane-wave
description, the NSIT condition (34) is periodically fulfilled in isolated
points. On the other hand, in the realistic wave-packet scenario, Eq. (34) is
fulfilled at far fewer instants of time than in the previous case; this
happens because $\mathcal{N}(t)$ resembles the plane-wave result only for
small $t$ (_i.e._ , as long as the damping exponential remains close to
unity). Nevertheless, it is crucial to observe that, for large $t$,
$\mathcal{N}(t)$ approaches a constant value which in general is different
from zero, thereby preventing flavor transitions to be interpreted in a
macrorealistic way. This occurs because, at late times, one can check that
$\lim_{t\to+\infty}\mathcal{N}(t)=-\frac{\sin^{2}(4\theta)}{8}\,,$ (35)
which vanishes only when the mixing angle $\theta$ is an integer multiple of
the maximal mixing angle $\pi/4$.
The above picture can be easily explained in quantum informational terms.
Indeed, if neutrinos with different flavors are viewed as being qubits of a
two-level system [6, 7, 8, 9], it can be shown that, despite the decoherence
due to the wave-packet spreading, the amount of quantum correlations shared by
the qubits is always non-vanishing, thus allowing for the constant presence of
a signature of quantum behavior [48, 49]. In turn, this fact entails that,
regardless of the distance traveled and of the wave-packet separation, for
realistic values of the mixing angle (such as the ones used in Fig. 1 [50])
the phenomenon of flavor transitions is under no circumstances compatible with
macrorealism.
Figure 1: $\mathcal{N}(t)$ for the plane-wave (red) and the Gaussian wave-
packet (blue) approach as a function of time expressed in eV$^{-1}$. The values
used to generate the plot have been taken from the MINOS experiment [50], with
$\sin^{2}\theta=0.314$, $\Delta m^{2}=7.92\times 10^{-5}$ eV$^{2}$, $E=10$ GeV and
$\sigma_{x}=0.5$ GeV$^{-1}$.
Before concluding this section, an important remark is in order: the results
obtained for the NSIT/AoT conditions for macrorealism in neutrino
oscillations are independent of the choice of the initial condition
(namely, the neutrino flavor at $t=0$) and the values of the outcomes $O_{0}$,
$O_{1}$, and $O_{2}$. In fact, by following the same steps as above, one can
easily prove that any arbitrary choice for $O_{0}$, $O_{1}$, and $O_{2}$ leads
to the same necessary and sufficient condition (34). This statement further
corroborates the reliability of neutrino oscillations as a suitable instrument
to investigate macrorealism.
### III.3 Comparison of NSIT/AoT with other conditions for macrorealism
In order to compare the condition (34) for macrorealism in neutrino
oscillations with the predictions obtained from the LGIs, we have to adapt the
latter to the problem at hand. To this aim, we first investigate the LGIs in
their standard formulations (1)-(4), which require the evaluation of the
correlation functions in terms of flavor oscillation probabilities:

$C_{ij}=\langle O(t_{i})O(t_{j})\rangle=P_{e\rightarrow e}(t_{i})\Bigl(P_{e\rightarrow e}(t_{j}-t_{i})-P_{e\rightarrow\mu}(t_{j}-t_{i})\Bigr)+P_{e\rightarrow\mu}(t_{i})\Bigl(P_{\mu\rightarrow\mu}(t_{j}-t_{i})-P_{\mu\rightarrow e}(t_{j}-t_{i})\Bigr)\,.$ (36)
Bearing this in mind, we have

$C_{01}=P_{e\rightarrow e}(0)\Bigl(P_{e\rightarrow e}(t)-P_{e\rightarrow\mu}(t)\Bigr)+P_{e\rightarrow\mu}(0)\Bigl(P_{\mu\rightarrow\mu}(t)-P_{\mu\rightarrow e}(t)\Bigr)\,,$
$C_{12}=P_{e\rightarrow e}(t)\Bigl(P_{e\rightarrow e}(t)-P_{e\rightarrow\mu}(t)\Bigr)+P_{e\rightarrow\mu}(t)\Bigl(P_{\mu\rightarrow\mu}(t)-P_{\mu\rightarrow e}(t)\Bigr)\,,$
$C_{02}=P_{e\rightarrow e}(0)\Bigl(P_{e\rightarrow e}(2t)-P_{e\rightarrow\mu}(2t)\Bigr)+P_{e\rightarrow\mu}(0)\Bigl(P_{\mu\rightarrow\mu}(2t)-P_{\mu\rightarrow e}(2t)\Bigr)\,.$
Finally, invoking the symmetry of the flavor oscillation probabilities under
the exchange of the flavor subscripts, one immediately verifies that

$C_{01}=P_{e\rightarrow e}(t)-P_{e\rightarrow\mu}(t)\,,$ (37)
$C_{12}=P_{e\rightarrow e}(t)-P_{e\rightarrow\mu}(t)\,,$ (38)
$C_{02}=P_{e\rightarrow e}(2t)-P_{e\rightarrow\mu}(2t)\,.$ (39)
Now, plugging the correlation functions (37)-(39) into the definitions
(1)-(4), we obtain the expressions of the LGIs in the framework of neutrino
oscillations, namely

$\mathcal{L}_{1}(t)=2P_{e\rightarrow e}(t)+2P_{e\rightarrow e}(2t)-2P_{e\rightarrow\mu}(t)\geq 0\,,$ (40)
$\mathcal{L}_{2}(t)=2P_{e\rightarrow\mu}(t)-P_{e\rightarrow\mu}(2t)\geq 0\,,$ (41)
$\mathcal{L}_{3}(t)=2P_{e\rightarrow e}(2t)\geq 0\,,$ (42)
$\mathcal{L}_{4}(t)=2P_{e\rightarrow\mu}(t)+2P_{e\rightarrow\mu}(2t)-2P_{e\rightarrow e}(t)\geq 0\,.$ (43)
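To make the role of these inequalities concrete, the sketch below evaluates Eqs. (40)-(43) in the undamped plane-wave description, assuming the standard two-flavour probabilities $P_{e\to\mu}(t)=\sin^{2}(2\theta)\sin^{2}(\Delta m^{2}t/4E)$ and $P_{e\to e}=1-P_{e\to\mu}$ with the MINOS parameters of Fig. 1 (the wave-packet damping of Eq. (26) is deliberately omitted in this illustration). It verifies that Eq. (42) is trivially satisfied, while Eq. (41) is violated at some times.

```python
import numpy as np

# Plane-wave two-flavour oscillation probabilities (standard textbook form,
# assumed as input; the wave-packet damping of Eq. (26) is omitted here).
THETA = np.arcsin(np.sqrt(0.314))   # MINOS mixing angle from Fig. 1
DM2 = 7.92e-5                        # Delta m^2 in eV^2
E = 10e9                             # neutrino energy in eV (10 GeV)

def p_emu(t):
    """P_{e -> mu}(t) for time t in eV^-1."""
    return np.sin(2 * THETA) ** 2 * np.sin(DM2 * t / (4 * E)) ** 2

def p_ee(t):
    return 1.0 - p_emu(t)

# Leggett-Garg functions of Eqs. (40)-(43); macrorealism requires all >= 0.
def L1(t): return 2 * p_ee(t) + 2 * p_ee(2 * t) - 2 * p_emu(t)
def L2(t): return 2 * p_emu(t) - p_emu(2 * t)
def L3(t): return 2 * p_ee(2 * t)
def L4(t): return 2 * p_emu(t) + 2 * p_emu(2 * t) - 2 * p_ee(t)

ts = np.linspace(1e12, 1.5e15, 5000)  # times in eV^-1, as in Fig. 2
print(L2(ts).min() < 0)   # True: the LGI (41) is violated at some times
print(L3(ts).min() >= 0)  # True: Eq. (42) is trivially satisfied
```

In the plane-wave regime the violations recur periodically; the damped wave-packet behavior discussed in the text suppresses them at late times.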
It is evident that Eq. (42) is trivially satisfied. However, it is interesting
to compare the other Leggett–Garg conditions with the NSIT (34). A plot of the
above functions together with $\mathcal{N}(t)$ for the flavor oscillation
probability (26) in the wave-packet description is displayed in Fig. 2. It can
be immediately observed that the entire set of LGIs (40)-(43) is always
fulfilled at late times, whilst $\mathcal{N}(t)\neq 0$. This discrepancy
between the predictions could have been anticipated, since the NSIT (34) is a
necessary and sufficient condition for macrorealism, whereas the LGIs
(40)-(43) are not.
Figure 2: $\mathcal{N}(t)$ (blue) vs $\mathcal{L}_{1}(t)$ (brown),
$\mathcal{L}_{2}(t)$ (red), and $\mathcal{L}_{4}(t)$ (black) as functions of
time expressed in eV$^{-1}$. The former witnesses violation of the NSIT
condition whenever it is not equal to zero, while the latter witness violation
of LGIs whenever they are negative. It is worth highlighting that, for large
$t$, all $\mathcal{L}_{j}(t)$ are always non-negative while $\mathcal{N}(t)$
differs from zero. The values used to generate the plot have been taken from
the MINOS experiment [50], with $\sin^{2}\theta=0.314$, $\Delta
m^{2}=7.92\times 10^{-5}$ eV$^{2}$, $E=10$ GeV and $\sigma_{x}=0.5$
GeV$^{-1}$.
Turning our attention to the Wigner formulation of the LGIs (5)-(7), we
observe that they are already cast in terms of probabilities of measurement
outcomes, and hence the identification with flavor oscillation probabilities
is more natural. Indeed, we obtain

$\mathcal{W}_{1}(t)=P_{e\rightarrow e}(t)\,P_{\mu\rightarrow e}(t)-P_{\mu\rightarrow e}(2t)\leq 0\,,$ (44)
$\mathcal{W}_{2}(t)=P^{2}_{e\rightarrow\mu}(t)-P_{e\rightarrow e}(2t)\leq 0\,,$ (45)
$\mathcal{W}_{3}(t)=P_{e\rightarrow e}(t)\,P_{\mu\rightarrow e}(t)-P_{\mu\rightarrow e}(2t)\leq 0\,,$ (46)
from which we deduce that the first and the last conditions coincide, thus
leading to two non-trivial inequalities for neutrino oscillations. In Fig. 3,
we compare the relevant WLGIs with the NSIT condition (34).
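As a concrete illustration, the sketch below evaluates the two inequivalent WLGIs (44) and (45) in the undamped plane-wave description (an assumption of this illustration; by the two-flavour symmetry noted above, $P_{\mu\to e}=P_{e\to\mu}$), showing that both can become positive, i.e. violated, at suitable times.

```python
import numpy as np

# Plane-wave two-flavour probabilities (assumed inputs for this illustration);
# in two flavours, P_{mu -> e}(t) = P_{e -> mu}(t).
THETA = np.arcsin(np.sqrt(0.314))   # MINOS mixing angle from Fig. 1
DM2 = 7.92e-5                        # eV^2
E = 10e9                             # eV

def p_emu(t):
    return np.sin(2 * THETA) ** 2 * np.sin(DM2 * t / (4 * E)) ** 2

def p_ee(t):
    return 1.0 - p_emu(t)

# Wigner-form Leggett-Garg functions of Eqs. (44)-(45);
# macrorealism requires both to be <= 0.
def W1(t): return p_ee(t) * p_emu(t) - p_emu(2 * t)
def W2(t): return p_emu(t) ** 2 - p_ee(2 * t)

ts = np.linspace(1e12, 1.5e15, 5000)  # times in eV^-1, as in Fig. 3
print(W1(ts).max() > 0)  # True: WLGI (44) is violated at some times
print(W2(ts).max() > 0)  # True: WLGI (45) is violated at some times
```

Again, these plane-wave violations recur periodically, whereas the decohered wave-packet dynamics leaves the WLGIs satisfied at late times.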
Figure 3: $\mathcal{N}(t)$ (blue) vs $\mathcal{W}_{1}(t)$ (red, first plot)
and $\mathcal{W}_{2}(t)$ (red, second plot) as functions of time expressed in
eV$^{-1}$. The former witnesses violation of the NSIT condition whenever it is
not equal to zero, while the latter witness violation of WLGIs whenever they
are positive. Note that for large $t$, all $\mathcal{W}_{j}(t)$ are always
non-positive while $\mathcal{N}(t)$ differs from zero. The values used to
generate the plot have been taken from the MINOS experiment [50], with
$\sin^{2}\theta=0.314$, $\Delta m^{2}=7.92\times 10^{-5}$ eV$^{2}$, $E=10$ GeV
and $\sigma_{x}=0.5$ GeV$^{-1}$.
As in the case of the standard LGIs (40)–(43), the WLGIs (44)–(46) are
satisfied for large $t$, where according to the NSIT condition no
macrorealistic interpretation is admissible, thereby confirming the previous
considerations. In fact, the obtained results for both formulations of LGIs
reveal that a macrorealistic description is not necessarily valid in the
regime where such inequalities are satisfied.
## IV Conclusions
In this paper, we have provided a preliminary analysis of necessary and
sufficient conditions for macrorealism in neutrino flavor transitions. In
particular, we have unambiguously found that the set of necessary and
sufficient NSIT/AoT conditions derived in Ref. [43] reduces to a single, non-
trivial NSIT relation for macrorealism which can be potentially probed in two-
flavor neutrino experiments. Moreover, concerning the wave-packet approach, we
have seen that the effect of decoherence for long detection times/distances
allows for a net deviation from a macrorealistic interpretation, thereby
unambiguously attributing a quantum nature to the phenomenon of neutrino
oscillations. For this reason, neutrinos can never be described in a
macrorealistic way, even when quantum coherence is apparently degraded because
of the wave packet spreading.
Additionally, we have compared the aforementioned NSIT condition for
macrorealism with the LGIs in their standard and Wigner formulations. In both
scenarios, we have discovered that, as long as the NSIT requirement is met,
the LGIs are satisfied. However, at late times, we have shown that the LGIs
are not faithful quantifiers of the macrorealistic description, since they are
fulfilled whilst the NSIT condition is always violated.
Our research paves the way toward a more accurate study of macrorealism for
neutrino flavor transitions. Although the phenomenology of neutrino
oscillations can be effectively studied in the framework of quantum mechanics
(QM), a proper treatment of neutrinos demands the application of quantum field
theory (QFT) due to their relativistic nature [5]. As a preliminary analysis
along this direction, in Ref. [14] violations of the WLGIs in neutrino
oscillations have been compared in the context of QM and QFT. Interestingly,
it turns out that QFT violates the WLGIs more frequently than QM, which is in
agreement with the results obtained for the Bell tests of local realism within
the general framework of algebraic QFT [51, 52, 53]. As further evidence of
the same tendency, it has been proven that even vacuum correlations in QFT can
lead to a maximal quantum violation of Bell inequalities [13]. Therefore, both
studies seem to indicate that QFT is _less classical_ than QM, and these
results should be revisited by means of the NSIT/AoT conditions for
macrorealism.
## Acknowledgements
F.I. and L.P. acknowledge support by MUR (Ministero dell’Università e della
Ricerca) via the project PRIN 2017 “Taming complexity via QUantum Strategies:
a Hybrid Integrated Photonic approach” (QUSHIP) Id. 2017SRNBRK. L.S.
acknowledges support by the Polish National Science Center grant
2018/31/D/ST2/02048. L.P. acknowledges networking support by the COST Action
CA18108 and is grateful to the “Angelo Della Riccia” foundation for the
awarded fellowship received to support the study at Universität Ulm.
## References
* Bilenky and Pontecorvo [1978] S. M. Bilenky and B. Pontecorvo, Phys. Rept. 41, 225 (1978).
* Bilenky and Petcov [1987] S. M. Bilenky and S. T. Petcov, Rev. Mod. Phys. 59, 671 (1987), [Erratum: Rev. Mod. Phys. 61, 169 (1989), Erratum: Rev. Mod. Phys. 60, 575–575 (1988)].
* Beuthe [2003] M. Beuthe, Phys. Rept. 375, 105 (2003).
* Giunti and Kim [2007] C. Giunti and C. Kim, _Fundamentals of Neutrino Physics and Astrophysics_ (OUP Oxford, 2007).
* Smaldone and Vitiello [2021] L. Smaldone and G. Vitiello, Universe 7, 504 (2021), arXiv:2111.11809 [hep-th] .
* Blasone _et al._ [2009] M. Blasone, F. Dell’Anno, S. De Siena, and F. Illuminati, EPL 85, 50002 (2009).
* Blasone _et al._ [2008] M. Blasone, F. Dell’Anno, S. De Siena, M. Di Mauro, and F. Illuminati, Phys. Rev. D 77, 096002 (2008).
* Blasone _et al._ [2014] M. Blasone, F. Dell’Anno, S. De Siena, and F. Illuminati, Adv. High Energy Phys. 2014, 359168 (2014).
* Alok _et al._ [2016] A. K. Alok, S. Banerjee, and S. U. Sankar, Nucl. Phys. B 909, 65 (2016).
* Dixit _et al._ [2019a] K. Dixit, J. Naikoo, S. Banerjee, and A. Kumar Alok, Eur. Phys. J. C 79, 96 (2019a).
* Simonov _et al._ [2019] K. Simonov, A. Capolupo, and S. M. Giampaolo, Eur. Phys. J. C 79, 902 (2019).
* Dixit _et al._ [2019b] K. Dixit, J. Naikoo, B. Mukhopadhyay, and S. Banerjee, Phys. Rev. D 100, 055021 (2019b).
* Blasone _et al._ [2021a] M. Blasone, F. Illuminati, G. G. Luciano, and L. Petruzziello, Phys. Rev. A 103, 032434 (2021a).
* Blasone _et al._ [2021b] M. Blasone, F. Illuminati, L. Petruzziello, and L. Smaldone, arXiv:2111.09979 (2021b).
* Simonov _et al._ [2021] K. Simonov, A. Capolupo, and S. M. Giampaolo, J. Phys.: Conf. Ser. 1919, 012001 (2021).
* Formaggio _et al._ [2016] J. Formaggio, D. Kaiser, M. Murskyj, and T. Weiss, Phys. Rev. Lett. 117 (2016).
* Leggett and Garg [1985] A. J. Leggett and A. Garg, Phys. Rev. Lett. 54, 857 (1985).
* Kofler and Brukner [2007] J. Kofler and C. Brukner, Phys. Rev. Lett. 99, 180403 (2007).
* Kofler and Brukner [2008] J. Kofler and C. Brukner, Phys. Rev. Lett. 101, 090403 (2008).
* Emary _et al._ [2014] C. Emary, N. Lambert, and F. Nori, Rep. Prog. Phys. 77, 016001 (2014).
* Kumari and Pan [2017] S. Kumari and A. K. Pan, Europhys. Lett. 118, 50002 (2017).
* Wang _et al._ [2017] K. Wang, C. Emary, X. Zhan, Z. Bian, J. Li, and P. Xue, Optics Express 25, 31462 (2017).
* Zhang _et al._ [2021] W. Zhang, R. K. Saripalli, J. M. Leamer, R. T. Glasser, and D. I. Bondar, Phys. Rev. A 104, 043711 (2021).
* Santini and Vitale [2022] A. Santini and V. Vitale, Phys. Rev. A 105, 032610 (2022).
* Majidy _et al._ [2021] S. Majidy, J. J. Halliwell, and R. Laflamme, Phys. Rev. A 103, 062212 (2021).
* Bertlmann _et al._ [2003] R. A. Bertlmann, K. Durstberger, and B. C. Hiesmayr, Phys. Rev. A 68, 012111 (2003).
* Donadi _et al._ [2013a] S. Donadi, A. Bassi, C. Curceanu, A. Di Domenico, and B. C. Hiesmayr, Found. Phys. 43, 813 (2013a).
* Donadi _et al._ [2013b] S. Donadi, A. Bassi, C. Curceanu, and L. Ferialdi, Found. Phys. 43, 1066 (2013b).
* Bahrami _et al._ [2013] M. Bahrami, S. Donadi, L. Ferialdi, A. Bassi, C. Curceanu, A. Di Domenico, and B. C. Hiesmayr, Sci. Rep. 3, 1952 (2013).
* Simonov and Hiesmayr [2016] K. Simonov and B. C. Hiesmayr, Phys. Lett. A 380, 1253 (2016).
* Simonov and Hiesmayr [2018] K. Simonov and B. C. Hiesmayr, EPJ Web of Conferences 166, 00006 (2018).
* Simonov [2020] K. Simonov, Phys. Rev. A 102, 022226 (2020).
* Capolupo _et al._ [2019] A. Capolupo, S. M. Giampaolo, and G. Lambiase, Phys. Lett. B 792, 298 (2019).
* Gangopadhyay and Roy [2017] D. Gangopadhyay and A. S. Roy, Eur. Phys. J. C 77, 260 (2017), arXiv:1702.04646 [quant-ph] .
* Naikoo _et al._ [2020a] J. Naikoo, S. Kumari, S. Banerjee, and A. Pan, J. Phys. G 47, 095004 (2020a).
* Naikoo _et al._ [2020b] J. Naikoo, A. K. Alok, S. Banerjee, S. Uma Sankar, G. Guarnieri, C. Schultze, and B. C. Hiesmayr, Nucl. Phys. B 951, 114872 (2020b), arXiv:1710.05562 [hep-ph] .
* Dixit and Alok [2021] K. Dixit and A. K. Alok, Eur. Phys. J. Plus 136, 334 (2021).
* Wang and Ma [2022] X.-Z. Wang and B.-Q. Ma, Eur. Phys. J. C 82, 133 (2022).
* Simonov [2022] K. Simonov, Phys. Lett. A 452, 128413 (2022).
* Clemente and Kofler [2016] L. Clemente and J. Kofler, Phys. Rev. Lett. 116, 150401 (2016).
* Fine [1982] A. Fine, Phys. Rev. Lett. 48, 291 (1982).
* Saha _et al._ [2015] D. Saha, S. Mal, P. K. Panigrahi, and D. Home, Phys. Rev. A 91 (2015).
* Clemente and Kofler [2015] L. Clemente and J. Kofler, Phys. Rev. A 91, 062103 (2015).
* Halliwell [2016] J. J. Halliwell, Phys. Rev. A 93, 022123 (2016).
* Kofler and Brukner [2013] J. Kofler and C. Brukner, Phys. Rev. A 87, 052115 (2013).
* Bilenky _et al._ [2008] S. M. Bilenky, F. von Feilitzsch, and W. Potzel, J. Phys. G: Nucl. Part. Phys. 35, 095003 (2008).
* Giunti _et al._ [1991] C. Giunti, C. W. Kim, and U. W. Lee, Phys. Rev. D 44, 3635 (1991).
* Blasone _et al._ [2021c] M. Blasone, S. De Siena, and C. Matrella, Eur. Phys. J. C 81, 660 (2021c), arXiv:2104.03166 [quant-ph] .
* Bittencourt _et al._ [2022] V. A. S. V. Bittencourt, M. Blasone, S. De Siena, and C. Matrella, Eur. Phys. J. C 82, 566 (2022), arXiv:2205.01601 [quant-ph] .
* Fogli _et al._ [2007] G. L. Fogli, E. Lisi, A. Marrone, A. Melchiorri, A. Palazzo, P. Serra, J. Silk, and A. Slosar, Phys. Rev. D 75, 053001 (2007), arXiv:hep-ph/0608060 .
* Summers and Werner [1987a] S. J. Summers and R. Werner, J. Math. Phys. 28, 2440 (1987a).
* Summers and Werner [1987b] S. J. Summers and R. Werner, J. Math. Phys. 28, 2448 (1987b).
* Summers and Werner [1987c] S. J. Summers and R. Werner, Commun. Math. Phys. 110, 247 (1987c).
# Tensor network study of the spin-$1/2$ Heisenberg anti-ferromagnet on the
Shuriken lattice
Philipp Schmoll Dahlem Center for Complex Quantum Systems and Institut für
Theoretische Physik, Freie Universität Berlin, Arnimallee 14, 14195 Berlin,
Germany Augustine Kshetrimayum Dahlem Center for Complex Quantum Systems and
Institut für Theoretische Physik, Freie Universität Berlin, Arnimallee 14,
14195 Berlin, Germany Helmholtz-Zentrum Berlin für Materialien und Energie,
Hahn-Meitner-Platz 1, 14109 Berlin, Germany Theory Division, Saha Institute
of Nuclear Physics, 1/AF Bidhannagar, Kolkata 700 064, India Jan Naumann
Dahlem Center for Complex Quantum Systems and Institut für Theoretische
Physik, Freie Universität Berlin, Arnimallee 14, 14195 Berlin, Germany Jens
Eisert Dahlem Center for Complex Quantum Systems and Institut für
Theoretische Physik, Freie Universität Berlin, Arnimallee 14, 14195 Berlin,
Germany Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner-
Platz 1, 14109 Berlin, Germany Yasir Iqbal Department of Physics and Quantum Centers in Diamond and Emerging Materials
(QuCenDiEM) group, Indian Institute of Technology Madras, Chennai 600036,
India
###### Abstract
We investigate the ground state of the spin $S=1/2$ Heisenberg anti-
ferromagnet on the Shuriken lattice, also in the presence of an external
magnetic field. To this end, we employ two-dimensional tensor network
techniques based on infinite projected entangled pair and simplex states
considering states with different sizes of the unit cells. We show that a
valence bond crystal with resonances over length six loops emerges as the
ground state (at any given finite bond dimension) yielding the lowest reported
estimate of the ground state energy $E_{0}/J=-0.4410\pm 0.0001$ for this
model, estimated in the thermodynamic limit. We also study the model in the
presence of an external magnetic field and find the emergence of $0$, $1/3$
and $2/3$ magnetization plateaus with states respecting translation and point
group symmetries that feature loop-four plaquette resonances instead.
## I Introduction
Systems of anti-ferromagnetically interacting quantum spins decorated on
corner sharing arrangements of triangles continue to attract much interest as
promising platforms for realizing novel quantum phases [1]. Indeed, the
arrival of candidate quantum spin liquid materials based on the iconic Kagome
lattice such as the celebrated Herbertsmithite [2, 3, 4] and Kapellasite [5]
has provided an impetus to the field of frustrated magnetism. Their
intriguing properties have triggered a flurry of experimental and theoretical
studies which established the Kagome lattice as a fertile host for a myriad of
exotic states. The parameter space of its Heisenberg Hamiltonian in the
presence of long-range interactions is known to be host to quantum spin
liquids including chiral states, spin and lattice nematics, and valence bond
crystals. Recently, a class of materials based on a different corner sharing
arrangement of triangles — the so called _Shuriken lattice_ (also called
square-Kagome, Squagome, and squa-Kagome lattice) — have come into the limelight
as promising candidate _quantum spin liquid materials_ [6, 7]. No sign of
magnetic ordering down to 50 mK has been observed in the spin $S=1/2$ Cu2+
based materials KCu6AlBiO4(SO4)5Cl [8] and Na6Cu7BiO4(PO4)4[Cl,(OH)]3 [9]
despite them having large negative Curie-Weiss temperatures of
$-237\text{\,}\mathrm{K}$ and $-212\text{\,}\mathrm{K}$, respectively. This
reveals a scenario similar to Herbertsmithite for which dominant anti-
ferromagnetic interactions on this highly frustrated lattice prevent the onset
of magnetic order. Such studies can be traced back to early work hinting at
quantum materials featuring that lattice structure [10]. On the theoretical
front, previous investigations into the nature of the ground state of the
$S=1/2$ Heisenberg anti-ferromagnet have provided compelling evidence for a
magnetically disordered ground state while revealing a subtle competition
between different types of nonmagnetic ground states which remains debated
[11, 12, 13, 14, 15, 16, 17, 18, 19].
In this work, we employ instances of two-dimensional _tensor network_ (TN)
algorithms formulated directly in the thermodynamic limit towards resolving
the nature of the ground state. Tensor networks are quantum-information
inspired tools that use entanglement as a resource for studying strongly
correlated quantum many-body systems [20, 21, 22, 23]. They naturally build in
quantum correlations and are suited to capture non-local entanglement by
construction. They do not suffer from the sign problem plaguing quantum Monte
Carlo simulations on frustrated systems. Moreover, these techniques can be
used to study large system sizes including the thermodynamic limit, thus
mitigating finite size effects. In two spatial dimensions, they are known as
projected entangled pair states (PEPS) or iPEPS [24, 25] in their infinite
instance and have become a state-of-the-art numerical tool for studying
strongly interacting systems. These techniques have recently proven to be
quite successful in studying frustrated model Hamiltonians [26, 27, 28, 29,
30], real materials [31, 32, 33], open systems [34, 35, 36, 37, 38, 39] and
non-equilibrium phenomena [40, 41, 42, 43, 44, 45].
Figure 1: Nearest-neighbour spin-spin correlations for the ground state
configuration of the pinwheel VBC (a) and the loop-six VBC (b) states at
$\chi_{B}=12$. The expectation values of the spin-$z$ component are typically
$<10^{-3}$ and only shown for completeness. (c) Shuriken lattice with
spin-$1/2$ degrees of freedom on the sites. The elementary unit cell (gray
rectangles) consists of six sites, which are coarse-grained to map the
Shuriken lattice to a regular square lattice. Straight lines denote virtual
bond indices, curly lines denote physical indices in the TN structure.
Recent theoretical works investigating the ground state of the isotropic
$S=1/2$ Heisenberg anti-ferromagnet on the Shuriken lattice have identified
two competing _valence bond crystals_ (VBCs) involving resonating loops of
different lengths: (i) a pinwheel VBC which maximizes the number of smallest
possible loops of length four [see Fig. 1(a)] and (ii) a VBC pattern
comprising only loops of length six [see Fig. 1(b)]. Surprisingly, it has
been shown within an effective _resonating valence bond_ (RVB) theory that the
tunneling processes can be renormalized in such a way that the smallest loops
are not always the most relevant in capturing the correct ground state
correlations [15]. Indeed, based on a quantum dimer model approach it has been
shown in Ref. [15] that the loop-six VBC is more stable energetically compared
to the pinwheel VBC when non-local processes outside the nearest-neighbor
valence bond basis were invoked. Complementing this, based on energetic
considerations alone, the pinwheel VBC should conventionally be the expected
ground state as found in a recent variational Monte Carlo study [11]. This
opens a delicate question on how to properly account for such non-local
quantum correlations and patterns of long-range entanglement in highly
degenerate frustrated systems.
Here, we use a _tensor network_ approach to simulate the model directly in the
thermodynamic limit. TNs represent the state vector of a many-body system,
e.g., reflecting the ground state, as a contraction of a network of local
tensors, that are connected by auxiliary indices (bond indices). This enables
efficient numerical simulations with only a polynomial scaling in the number
of constituents [23, 46, 20, 21]. In this work, we employ the _infinite
projected entangled pair state_ (iPEPS) [24] and _infinite projected entangled
simplex state_ (iPESS) (a variant of iPEPS) [47] techniques with an ansatz
based on different and specifically tailored unit cell sizes for optimizing
the ground state of our model. In this context, the TN is used as an ansatz
for the full many-body state vector, consisting of a unit cell of different
tensors that generates a translationally invariant state. The accuracy of the
ansatz can be systematically improved by increasing the bond dimension of the
TN, which is the dimension of the virtual indices connecting the local
tensors, see Fig. 1(c) [see Appendix]. It controls the number of variational
parameters in the ansatz and is a measure for the amount of quantum
entanglement that can be captured. We mainly employ the so-called simple
update [48] to optimize the ground state tensors, which is expected to work
well for the gapped model at hand [49, 50, 48]. In order to verify its
accurate functioning and ability to resolve the close competition between the
two candidate ground states, we additionally employ a variational update [51]
for this task. The corner transfer matrix renormalization group (CTMRG) [52,
53, 54, 55] is then used to compute the expectation values of the ground state
energy in a variational manner, the spin-$z$ operator as well as the two-point
correlations to decipher the nature of the ground state. We also employ
additional $SU(2)$-symmetric simulations [56, 30] for the model. Given the
flexibility of the framework, we apply an external magnetic field to study the
magnetization process of the model and provide a compelling picture of the
nature of phases corresponding to different magnetization plateaus.
## II Model and methods
The model we are considering is the $S=1/2$ Heisenberg anti-ferromagnet on the
Shuriken lattice
$\hat{H}=\sum_{\langle
i,j\rangle}\mathbf{\hat{S}}_{i}\cdot\mathbf{\hat{S}}_{j}-h\sum_{i}{\hat{S}}^{z}_{i}\
$ (1)
in the presence of an external magnetic field, where $\mathbf{\hat{S}}_{i}$
are the $S=1/2$ operators on site $i$ and $\langle i,j\rangle$ denotes
nearest-neighbours. The Shuriken lattice [see Fig. 1(c)] features corner-
sharing triangles, and thus leads to only a marginal alleviation of geometric
frustration in the presence of anti-ferromagnetic couplings. Being composed of
corner-sharing triangles, it is locally similar to the Kagome lattice. At the
same time, the Shuriken lattice features two inequivalent sublattices, rendering
this lattice ideal to study effects of lattice anisotropy, for which our
methods are ideally suited.
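To make the nearest-neighbour term of Eq. (1) concrete, the following sketch (a toy exact diagonalization, not the tensor-network machinery used in this work) builds $\hat{H}$ at $h=0$ on a single triangle, the elementary frustrated unit of the lattice, and recovers the well-known ground-state energy $E=-3/4$.

```python
import numpy as np

# Spin-1/2 operators.
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])

def site_op(op, site, n):
    """Embed a single-site operator at position `site` of an n-spin system."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, op if k == site else np.eye(2))
    return out

# Heisenberg Hamiltonian of Eq. (1) with h = 0 on one triangle (3 bonds).
n = 3
H = np.zeros((2**n, 2**n), dtype=complex)
for i, j in [(0, 1), (1, 2), (0, 2)]:
    for op in (sx, sy, sz):
        H += site_op(op, i, n) @ site_op(op, j, n)

e0 = np.linalg.eigvalsh(H)[0]
print(round(e0.real, 6))  # -0.75: frustrated-triangle ground state, E = -3/4
```

The fourfold degenerate $E=-3/4$ ground space of each triangle is the origin of the massive classical degeneracy that frustration leaves behind.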
We have applied two different TN structures for the simulation of the Shuriken
lattice. The first ansatz (iPEPS) uses a partial coarse-graining of the
Shuriken lattice to an irregular square lattice. Inspired by the success for
the $S=1/2$ Kagome Heisenberg anti-ferromagnet [26], the second structure is
based on the iPESS ansatz [47] that generalizes iPEPS to lattices with higher
simplices. For the Shuriken lattice, it is defined on its dual lattice, the
so-called $(4,8^{2})$ Archimedean lattice (also referred to as the square-
octagon, Fisher or CaVO lattice). While the simple update for iPESS
incorporates three lattice sites at each update step, it includes six sites
for iPEPS [see Appendix]. In order to compute expectation values, a unit cell
of six sites on the Shuriken lattice is coarse-grained into a single tensor on
the regular square lattice, as shown in Fig. 1(c). This approach is taken both
for the iPESS and the iPEPS simulations, starting from a $(4,8^{2})$
Archimedean lattice and a partially coarse-grained Shuriken lattice,
respectively. A directional CTMRG routine then computes effective environment
tensors for each coarse-grained iPEPS tensor, such that quantum correlations
are fully incorporated when computing expectation values (details are given in
the Appendix). This is achieved by a well chosen environment bond dimension
$\chi_{E}$ — a refinement parameter controlling the approximations in the
CTMRG routine, which is increased until the expectation values converge.
## III Results on the ground state energy and dimer orders
The ground state energy of the Shuriken Heisenberg model can be
straightforwardly evaluated in the TN representation. Since the Hamiltonian
consists of a sum of nearest-neighbour terms $h_{i,j}$, the ground state
energy per site reads

$E_{0}=\frac{1}{N}\sum_{\langle i,j\rangle}\langle\psi_{0}|h_{i,j}|\psi_{0}\rangle\,,$ (2)
where $N$ is the number of lattice sites and $|\psi_{0}\rangle$ is the
normalized ground state vector. The smallest possible geometrical unit cell of
the Shuriken lattice consists of six sites [the dashed regions in Fig. 1(c)].
This ansatz — which imposes translational invariance while being compatible
with quantum spin liquid and lattice nematic candidate ground states — would,
however, fail to capture translation symmetry broken VBC orders such as the
pinwheel and the loop-six VBCs, which are the prime competing ground state
candidates. Therefore, we use different unit cell sizes to search for
competitive TN ansätze with the lowest ground state energy. The different unit
cell configurations are then labeled by the size of the super-unit-cell on the
square lattice, denoted by $(L_{x},L_{y})$. A configuration $(L_{x},L_{y})$
hence corresponds to a TN state with $6\cdot L_{x}\cdot L_{y}$ spins in
total.
Figure 2: Ground state energy without magnetic field [$h=0$ in Eq. (1)] versus
the inverse of the bond dimension $\chi_{B}$. iPESS simulations are denoted by
$(S)$, iPEPS simulations by $(P)$.
In Fig. 2, we show the ground state energy for the iPEPS and iPESS simulations
and square lattice unit cells of $(1,1)$ and $(2,2)$ as a function of the
inverse of the iPEPS bond dimension $\chi_{B}$ (bulk bond dimension). Our
results are compared to a previous iPEPS study of the model in Ref. [11],
using a $(1,1)$ TN ansatz with a coarse-graining to a honeycomb lattice. Based
on our simulations with different sizes of the unit cells, we find the lowest
energy is obtained with a $(2,2)$ configuration (unit cell with 24 sites),
i.e., corresponding to a valence bond crystal ground state. A similar ground
state can be obtained by a $(1,2)$ configuration in a checkerboard
arrangement, which consists of only twelve spins. However, we use the more
general state vector ansatz with 24 spins to be able to incorporate possible
richer patterns of spin correlations.
Figure 3: Comparison of unconstrained and constrained ground state simulations
up to $\chi_{B}=12$ for the different configurations imprinted. A first-order
polynomial fit is used to extract the infinite bond dimension limit for the
three largest bond dimensions.
In our simulation, the main difference in the iPESS and iPEPS calculations is
in the simple update. It is more local in the iPESS with only three sites that
are updated at once, compared to six sites in the iPEPS ansatz [see Appendix].
This, along with a larger number of variational parameters in the iPEPS ansatz
is responsible for a better ground state approximation with lower energies.
For subsequent investigation of the model we therefore use the iPEPS with a
$(2,2)$ unit cell configuration. In addition to the unconstrained simulations,
we incorporate fully $SU(2)$-symmetric simulations of the model. By imposing
the symmetry, the simulated ground state is guaranteed to be in the spin-$0$
sector, i.e., a spin singlet. In contrast, an unconstrained simulation of the
ground state can spontaneously break $SU(2)$-symmetry, which would lower its
energy. An energy comparison is therefore another way to ascertain the nature
of the ground state. For large enough bond dimensions, the $SU(2)$-symmetric
simulations converge to the same energy as the unconstrained ones, confirming
a nonmagnetic VBC ground state with spin-$0$ of the model as previously
reported [11].
Within our iPEPS simulations, we use two ways to ascertain the nature of the
VBC ground state, (i) we prepare our initial state with the pinwheel and the
loop-six VBC patterns imprinted at a given low bond dimension and
progressively increase $\chi_{B}$ in a manner which uses the converged state
vectors at any given $\chi_{B}$ as initial states for the simulation with one
higher bond dimension and (ii) an unconstrained optimization starting from a
random state. In procedure (i) we observe that while both the pinwheel VBC and
loop-six VBC patterns remain stable up to $\chi_{B}=12$, the latter is always
lower in energy at any given finite bond dimension [see Fig. 3 and Table 1].
The resulting spatial spin-spin correlation profiles at $\chi_{B}=12$ are
shown in Fig. 1. Further compelling evidence supporting a loop-six VBC ground
state scenario for a finite bond dimension is provided by the unconstrained
optimization which at higher bond dimensions $(\chi_{B}\geqslant 4)$ already
converges to the energy of the loop-six VBC [see Fig. 3 and Table 1]. Notice
that the pinwheel and loop-six patterns are explicitly imprinted in the
simulations at $\chi_{B}=[2,3]$, so that those data points are not
representative. The
inset of Fig. 3 shows the meaningful, i.e., linear regime where the energy
differences between the two orders are small, highlighting the subtle
competition.
An extrapolation to the infinite bond dimension limit, using a linear fit of
the energies at the three largest values of $\chi_{B}$, yields a lower bound
for the energy $E_{l}$. The last data point at $\chi_{B}=12$
provides an upper bound $E_{u}$, such that the true ground state energy lies
in the interval $[E_{l},E_{u}]$ [57]. To estimate the final ground state
energy, we compute $E_{0}=(E_{u}+E_{l})/2$ with an error of $\Delta
E=(E_{u}-E_{l})/2$, which results in
$\displaystyle\begin{split}E_{0}(\text{pinwheel VBC})&=-0.4408\pm 0.0001\,,\\ E_{0}(\text{loop-six VBC})&=-0.4410\pm 0.0001\,,\end{split}$ (3)
both of which are lower than previous estimates of the ground state energy [11]. The
numerical values for the results in Fig. 3 are summarized in Table 1. Given
that the estimates of the ground state energy for the pinwheel and loop-six
VBC states evaluated in the limit $\chi_{B}\to\infty$ are very close,
variational iPEPS simulations have been employed to resolve which of these two
competing states wins in this limit. Up to the largest reachable bond
dimension of $\chi_{B}=7$, the variational energies lie below the presented
simple update energies and reinforce a loop-six VBC ground state
[51]. A direct comparison of the simple update and variational energies is
presented in Table 2.
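The interval construction described above can be sketched directly from the loop-six simple-update energies in Table 1. As a minimal illustration we assume the linear fit is performed in $1/\chi_{B}$ (the text does not specify the fit variable); under that assumption the numbers of Eq. (3) are reproduced:

```python
import numpy as np

# Loop-six VBC simple-update energies from Table 1 at the three largest
# bond dimensions. Fit variable 1/chi_B is our assumption; the text only
# states that the fit is linear.
chi = np.array([10, 11, 12])
E = np.array([-0.440859, -0.440886, -0.440908])

# Linear fit in 1/chi_B; the chi_B -> infinity intercept is a lower bound.
slope, E_lower = np.polyfit(1.0 / chi, E, 1)
E_upper = E[-1]                  # last finite-chi_B energy is an upper bound

E0 = 0.5 * (E_upper + E_lower)   # midpoint estimate
dE = 0.5 * (E_upper - E_lower)   # half-width as error bar

print(f"E0 = {E0:.4f} +/- {dE:.4f}")  # E0 = -0.4410 +/- 0.0001
```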
$\chi_{B}$ | Unconstrained | Pinwheel | Loop-six
---|---|---|---
2 | -0.428020 | -0.405664 | -0.397792
3 | -0.435105 | -0.405664 | -0.397792
4 | -0.439242 | -0.438734 | -0.439242
5 | -0.439665 | -0.439242 | -0.439665
6 | -0.440058 | -0.439738 | -0.440058
7 | -0.440375 | -0.440091 | -0.440376
8 | -0.440700 | -0.440522 | -0.440700
9 | -0.440776 | -0.440584 | -0.440776
10 | -0.440859 | -0.440646 | -0.440859
11 | -0.440886 | -0.440667 | -0.440886
12 | -0.440908 | -0.440689 | -0.440908
Table 1: Energy comparison for different ground state configurations of the $(2,2)$ iPEPS. Note that the states at $\chi_{B}=[2,3]$ cannot be used, since the ground state pattern is already imprinted.

$\chi_{B}$ | Simple update | Variational update
---|---|---
2 | -0.428020 | -0.433600
3 | -0.435105 | -0.437320
4 | -0.439242 | -0.439877
5 | -0.439665 | -0.440162
6 | -0.440058 | -0.440391
7 | -0.440375 | -0.440592
Table 2: Energy comparison between simple update and variational energies for the ground state of the loop-six VBC. The variational update uses a two-tensor checkerboard pattern on the coarse-grained Shuriken lattice.

Figure 4:
Magnetization curve of the Heisenberg model for $\chi_{B}=10$. Upon tuning the
magnetic field, two magnetization plateaus at $1/3$ and $2/3$ of the saturated
magnetization $m_{S}=1/2$ appear. Additionally, we find the presence of a
small plateau at $m_{z}=0$ indicative of the gapped nature of the ground
state.
## IV Results on magnetization plateaus
Finally, we study the Heisenberg model on the Shuriken lattice in the presence
of an external magnetic field. We compute the average magnetization over all
the sites in the lattice along the field axis
$\displaystyle m_{z}=\frac{1}{N}\sum_{i}\langle\psi_{0}(h)|\hat{S}_{i}^{z}|\psi_{0}(h)\rangle\,,$ (4)
where $|\psi_{0}(h)\rangle$ is the normalized ground state vector. In Fig. 4
we show the magnetization curve normalized to the saturation value of
$m_{S}=1/2$. The magnetization curve reveals the presence of three
magnetization plateaus, at $0$, $1/3$ and $2/3$ of the saturation value [18,
17, 58]. Furthermore, we observe a macroscopic jump from the $2/3$ plateau to
the saturation magnetization, as expected from the presence of a flat
one-magnon band, which leads to the appearance of localized multi-magnon
eigenstates [59, 60, 61, 62, 63, 64]. The plateau at $h\rightarrow 0$ is a
further indication of the fact that the ground state of the model at $h=0$ is
actually gapped. An estimate on the size of the spin gap $\Delta>0$ is given
by the width of the plateau $\Delta\sim 0.04J$, consistent with exact
diagonalization studies [16, 17]. The $1/3$ and $2/3$ plateaus can further be
characterized by the spatial pattern of spin-spin correlations and the
expectation values of the spin-$x$, -$y$ and -$z$ components. Those
expectation values are shown in Fig. 5 for both phases, at magnetic fields
$h=1.4$ and $h=2.8$ respectively. Interestingly, one observes that once the
magnetic field is turned on, a pattern governed by strong loop-four resonances
emerges.
Within error bars in the expectation values, the states at both magnetization
plateaus are invariant under translations of the original six-site
crystallographic unit cell and also under point group symmetries. It is also
visible that correlations are much stronger on the squares compared to the
triangle bonds in the lattice. For both plateau states the spins which are not
part of the squares are isolated and almost fully polarized [see Fig. 6],
implying that they are nearly aligned with the magnetic field. In contrast,
the spins on the squares, despite a finite magnetization, possess a nonzero singlet
density reminiscent of $h_{z}=0$ resonating plaquettes. This is also evidenced
by observing the different magnetization behaviours
$\langle\hat{S}_{i}^{z}\rangle$ of the two symmetry inequivalent sites over
the range of the magnetization as shown in Fig. 6. The three plateaus
appearing in Fig. 4 and the full saturation are shown with dotted lines. The
behaviour is in good agreement with previous numerical diagonalization studies
of the model on finite systems [17].
Figure 5: Ground state configurations of magnetic plateaus for $h=1.4$ (top)
and $h=2.8$ (bottom) at $\chi_{B}=10$. Both the $1/3$ plateau state and the
$2/3$ plateau state have strong spin-spin correlations on the squares, whose
spins appear to be in an entangled superposition. In contrast, spins on the
Shuriken sites are isolated and (almost) fully polarized.
Following the discussion of an analytic description for the magnetic plateau
states of the Heisenberg model on the Kagome lattice [65], one can similarly
construct the 2/3 plateau state on the Shuriken lattice. For the 2/3 case, one
can see in Fig. 5 and 6 that the expectation value
$\langle\hat{S}_{i}^{z}\rangle$ is roughly $0.25$ for the sites which are part
of the square and $0.5$ for the sites which are not part of the square. This
motivates the conclusion that the state vector for one unit cell consists of
an entangled state on the square and a product of this superposition with the
fully polarized non-square sites as
$\ket{\psi^{2/3}}=\ket{\uparrow}_{\times}\,\ket{\psi_{\square}}\,\ket{\uparrow}_{\times}$, where $\times$ labels the two polarized non-square (Shuriken) sites and $\square$ the four-site square. Using this ansatz, it is straightforward to find the ground state vector

$\displaystyle\ket{\psi_{\square}}=\frac{1}{2}\left(\ket{\downarrow\uparrow\uparrow\uparrow}-\ket{\uparrow\downarrow\uparrow\uparrow}-\ket{\uparrow\uparrow\downarrow\uparrow}+\ket{\uparrow\uparrow\uparrow\downarrow}\right)$ (5)
for the square terms, consisting of pairwise singlets on the four bonds. Since
the individual unit cells are not entangled in our ansatz, it is easy to check
with exact diagonalization (ED) that the full state is the ground state of
the subspace we have chosen. The per-site energy of the analytic construction
is $-2h/6$. This perfectly fits our simple update results, where for $h=2.8$
we find an energy of $-0.933331$ while the analytic result is $-0.933333$.
This construction, built out of one-magnon states on the squares and fully
polarized spins on the Shuriken sites, is the well-known magnon crystal state
[66, 18, 59].
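The one-magnon character of the square state in Eq. (5) can be checked with a few lines of exact diagonalization. The sketch below, assuming $J=1$ and a cyclic labeling of the four square sites (the sign pattern then alternates around the ring; the ordering in Eq. (5) corresponds to a different labeling of the corners), verifies that the alternating one-magnon combination is an eigenstate of the four-site Heisenberg ring:

```python
import numpy as np

# Spin-1/2 operators
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+
I2 = np.eye(2)

def site_op(op, i, n=4):
    """Embed a single-site operator at site i of an n-site system."""
    mats = [I2] * n
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Heisenberg Hamiltonian on a 4-site ring (J = 1), sites labeled cyclically
n = 4
H = np.zeros((2**n, 2**n))
for i in range(n):
    j = (i + 1) % n
    H += 0.5 * (site_op(sp, i) @ site_op(sp.T, j)
                + site_op(sp.T, i) @ site_op(sp, j))
    H += site_op(sz, i) @ site_op(sz, j)

def basis_state(spins):  # spins: tuple of 0 (up) / 1 (down), site 0 = MSB
    v = np.zeros(2**n)
    v[int("".join(map(str, spins)), 2)] = 1.0
    return v

# One-magnon state with alternating signs around the ring
psi = 0.5 * (basis_state((1, 0, 0, 0)) - basis_state((0, 1, 0, 0))
             + basis_state((0, 0, 1, 0)) - basis_state((0, 0, 0, 1)))

# Check it is an eigenstate (momentum-pi magnon, energy -1)
E = psi @ H @ psi
assert np.allclose(H @ psi, E * psi)
print(E)  # -1.0
```

The eigenvalue $-1$ (versus $+1$ for the fully polarized square) is the square-bond contribution that, together with the Zeeman and triangle-bond terms, reproduces the per-site energy $-2h/6$ quoted above.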
Figure 6: On-site magnetization of the triangle and square sites versus the
magnetic field $h$. Dotted vertical lines represent the four magnetization
plateaus.
For the 1/3 plateau the same approach is not successful. If we limit ourselves
to the subspace with fully polarized spins on the non-square sites and the
superposition on the square sites, we can find a ground state from ED, but this
time the analytic energy for $h=1.4$ of $-0.566667$ is clearly above the
result $-0.592928$ of the simple update calculations. This is expected,
however, since as one can see in Fig. 6 the spin expectation values
$\langle\hat{S}_{i}^{z}\rangle$ are neither exactly $0.5$ for the polarized
spins nor zero for the spins on the square. Therefore, a more sophisticated
ansatz would be needed to describe this state, but this is beyond the scope of
this work.
## V Conclusions
In this work we have studied the ground state properties of the $S=1/2$
Heisenberg antiferromagnet on the Shuriken lattice using the two-dimensional
tensor network techniques of iPEPS and iPESS. We find that the incorporation
of non-local correlations proves indispensable in accurately capturing the
nature of the ground state, which is shown to be a loop-six VBC at any given
finite bond dimension up to $\chi_{B}=12$. Indeed, here we are faced with a
scenario featuring a delicate energetic competition between states governed by
different stabilization mechanisms. In particular, we have (i) a pinwheel VBC
favored by energy gain from short-range loop resonances and (ii) a loop-six
VBC favored by strong resonances over longer-length loops which are amplified
by the dressing of virtual singlets on top of the nearest-neighbor basis. This
is precisely what we aim to address by employing the TN framework, which
naturally contains both these key ingredients and allows us to accurately
investigate their interplay in determining the nature of the ground state. Our
estimate of the ground state energy per site in the
thermodynamic limit obtained by extrapolating $\chi_{B}\to\infty$ is given by
$E_{0}=-0.4410\pm 0.0001$.
We have also investigated the effect of an external magnetic field in the
model and obtained its magnetization curve which shows three magnetization
plateaus at $0$, $1/3$ and $2/3$ of the saturation magnetization. The width of
the magnetization plateau at zero-field gives us an estimate of the spin gap
$\Delta\sim 0.04J$, consistent with exact diagonalization studies [16, 17].
The phases at these plateaus are not only partially polarized but also show
strong signatures of singlet correlations on the four-site plaquettes. These states
are found to respect the spatial symmetries (both translation and point group)
of the Shuriken lattice, in contrast to the pinwheel VBC state.
Our work paves the way for future investigation of the Heisenberg model on the
Shuriken lattice in more general settings. It would be interesting to study
the anisotropic model with different couplings on the square and the triangle
bonds, or longer range couplings which could potentially be of relevance in
describing the recently studied materials [6, 8, 9]. An investigation of the
excitation spectrum would be another promising route to revealing diverse
manifestations of frustration [67]. Similarly, the corresponding model for
higher spins, e.g., $S=1$, could be explored, which might host a trimerized
ground state and display a wealth of magnetization plateaus hosting exotic
phases. This perspective is all the more interesting as it seems plausible to
recreate frustrated systems on Shuriken lattices under the precisely
controlled conditions of _quantum simulations_ [68] involving ultracold atoms,
giving rise to the interesting situation of benchmarking quantum and classical
simulations against each other.
## VI Acknowledgments
We thank Arnaud Ralko, Ronny Thomale and Erik Weerda for insightful
discussions and helpful comments on the manuscript. Y. I. acknowledges support
from the Department of Science and Technology (DST), India through the MATRICS
Grant No. MTR/2019/001042, CEFIPRA Project No. 64T3-1, the ICTP through the
Associates Programme and from the Simons Foundation through grant number
284558FY19. This research was supported in part by the National Science
Foundation under Grant No. NSF PHY-1748958, IIT Madras through the Institute
of Eminence (IoE) program for establishing the QuCenDiEM group (Project No.
SB20210813PHMHRD002720), the International Centre for Theoretical Sciences
(ICTS), Bengaluru, India during a visit for participating in the program
“Frustrated Metals and Insulators” (Code: ICTS/frumi2022/9). Y. I.
acknowledges the use of the computing resources at HPCE, IIT Madras. The
Berlin team has been supported by the BMBF (MUNIQC-ATOMS), the DFG (CRC 183 on
“Entangled states of matter”, project No. 277101999), and the Helmholtz Center
Berlin. The authors would like to thank the HPC Service of ZEDAT, Freie
Universität Berlin for computing time [69].
## Appendix A Tensor networks
### A.1 Tensor network structures and algorithms
In this section, we present further technical details about the tensor network
structures and algorithms employed in the main article. We will start with the
_infinite projected entangled simplex state_ (iPESS) ansatz [70], which can be
straightforwardly extended from the original formulation on the Kagome lattice
to the Shuriken lattice. To this end, we consider the dual lattice (the
so-called $(4,8^{2})$ Archimedean lattice), where the spin-$1/2$s are located on
the lattice links instead of the lattice sites. This lattice is visualized in
Fig. 7 in green.
Figure 7: Shuriken lattice shown in black and its dual lattice (the so-called
$(4,8^{2})$ Archimedean lattice) shown in green. Since the spin-$1/2$s live on
the links of the dual lattice, additional three-index simplex tensors are
introduced on the vertices for the iPESS ansatz.
In order to connect the spins, additional purely virtual three-index simplex
tensors have to be introduced (shown in gray). An elementary iPESS unit cell
therefore consists of six lattice site tensors carrying the physical degree of
freedom, and four simplex tensors connecting them.
As an alternative approach we consider a modified version of the _infinite
projected entangled pair state_ (iPEPS). It is constructed by a partial
coarse-graining of the original lattice to a square lattice with missing
bonds. To this end we merge the four spins on each square configuration into a
single site, which is modeled by a tensor with physical dimension $p^{4}$. The
two remaining sites per unit cell are left unchanged. This mapping is shown in
Fig. 8.
Figure 8: Shuriken lattice shown on the left and iPEPS ansatz shown on the
right. The four sites on each square are coarse-grained into an effective
site, highlighted by the green dotted area. The resulting structure is a
square lattice with missing links.
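The coarse-graining step amounts to grouping the four physical indices of the square into one index of dimension $p^{4}$. A minimal sketch, with a hypothetical virtual bond dimension and a random tensor standing in for the actual network tensor:

```python
import numpy as np

p, chi = 2, 3   # physical dimension; chi is a hypothetical virtual bond dim

# A caricature of the merged square site: four physical spin legs plus four
# virtual legs pointing to the rest of the network.
T = np.random.default_rng(0).standard_normal((p, p, p, p, chi, chi, chi, chi))

# Group the four spins into one effective site of dimension p**4 = 16
T_merged = T.reshape(p**4, chi, chi, chi, chi)
print(T_merged.shape)  # (16, 3, 3, 3, 3)
```

The reshape is lossless, so the original four-spin structure can always be recovered by splitting the merged index again.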
One advantage that both TN structures share is the fact that one virtual bond
in the TN corresponds to only two links in the original Shuriken lattice.
Since the maximal entanglement shared between neighbouring lattice sites is
limited by the bond dimension of the TN ansatz, it is favourable to keep this
number small. Due to this property, we can directly compare results with the
same bond dimension. Note that a coarse-graining of the six spins per unit
cell in the Shuriken lattice directly results in a regular square lattice.
### A.2 Simple update
In order to obtain an approximation of the ground state wave function, we
employ the simple update technique [48] in both TNs. This method is based on
an imaginary-time evolution under the Hamiltonian [24], in which all the
tensors in the networks are updated sequentially. For a sufficiently long
evolution, the ground state is projected out. For the iPESS network we employ
a regular three-site update of the different simplex configurations [70]. In
order to restore the individual tensors after each update, a higher-order
singular value decomposition (SVD) is used. During this process, the singular
values are truncated to the fixed bulk bond dimension $\chi_{B}$ to keep the
simulations computationally feasible. The process is illustrated in Fig. 9 for
the update of a down simplex, denoted $\bigtriangledown$, together with the
three connected lattice tensors.
Figure 9: Simple update in the iPESS simulations of the Shuriken lattice. The
Trotterized three-body Hamiltonian gate is absorbed into a triangle
configuration. A truncated higher-order SVD is used to decompose the resulting
six-index tensor back into the separate iPESS tensors. The same procedure is
applied to the other simplex tensors along with the different lattice tensors.
The simple update for the iPEPS ansatz on the deformed square lattice follows
the same spirit. However, instead of only updating three sites (along with one
simplex tensor) as in the iPESS ansatz, we choose a six-site update across
corners in the lattice. A single update step is presented in Fig. 10.
Figure 10: Simple update in the iPEPS simulations of the Shuriken lattice. The
Trotterized six-body Hamiltonian gate is absorbed into a bottom-left corner of
the deformed square lattice. The individual tensors are restored using two
successive SVDs with truncation to a fixed bond dimension. The update of a
top-right corner is performed similarly.
Again, the virtual links of the network are kept at a maximal bond dimension
$\chi_{B}$, which is achieved by two successive SVDs. Notice that, for
clarity, we omit the additional diagonal tensors carrying the singular values
on each virtual link in both Fig. 9 and Fig. 10.
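The truncation logic common to these updates can be sketched on a minimal two-site example (the actual updates described above act on three or six sites; the bond dimension, time step, and random tensors below are purely illustrative): a Trotterized gate is absorbed into a pair of tensors, and the pair is split again by a truncated SVD.

```python
import numpy as np

p, chi = 2, 4      # physical dimension and bond dimension (illustrative)
tau = 0.01         # imaginary-time step

# Two-site Heisenberg gate exp(-tau * S_i.S_j), via eigendecomposition
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])
h = np.kron(sz, sz) + 0.5 * (np.kron(sp, sp.T) + np.kron(sp.T, sp))
w, v = np.linalg.eigh(h)
gate = (v * np.exp(-tau * w)) @ v.T              # shape (p*p, p*p)

rng = np.random.default_rng(1)
A = rng.standard_normal((chi, p, chi))           # left tensor  (l, s, r)
B = rng.standard_normal((chi, p, chi))           # right tensor (l, s, r)
lam = rng.random(chi) + 0.1                      # bond weights between A, B

# Absorb the bond weights, contract the pair, and apply the gate
theta = np.einsum('lsa,a,atr->lstr', A, lam, B)
theta = np.einsum('stuv,luvr->lstr', gate.reshape(p, p, p, p), theta)

# Restore the two tensors by a truncated SVD on the bond
m = theta.reshape(chi * p, p * chi)
u, s, vt = np.linalg.svd(m, full_matrices=False)
u, s, vt = u[:, :chi], s[:chi], vt[:chi, :]      # truncate to bond dim chi

A_new = u.reshape(chi, p, chi)
lam_new = s / np.linalg.norm(s)                  # renormalized bond weights
B_new = vt.reshape(chi, p, chi)
```

In a production simple update one would also divide out the environment weights on the outer legs before the SVD; this sketch only illustrates the gate absorption and truncation.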
Besides the two presented TN approaches, we also implemented a simple update
scheme on the original Shuriken lattice, using two-body gates on neighbouring
sites to evolve the wave function. As for the iPESS, the update is very local
and could not resolve all the magnetization plateaus present in the model, so
we discarded these simulations.
### A.3 Environments and expectation values
In order to compute accurate expectation values for the wave functions
obtained by the simple update, we employ a _corner transfer matrix
renormalization group_ (CTMRG) [52, 53, 54, 55] procedure to compute the
effective environments. To this end, the six lattice sites per unit cell are
coarse-grained into a single iPEPS site with local Hilbert space dimension
$2^{6}=64$. This maps the Shuriken lattice to a regular square lattice, for
which a directional CTMRG procedure can directly compute the approximate
contraction of the infinite lattice.
Figure 11: A CTMRG routine is used to approximate the contraction of the
infinite square lattice by a set of fixed-point environment tensors.
As shown in Fig. 11, the contraction of the infinite square lattice is
approximated by a set of eight fixed-point tensors surrounding every iPEPS
tensor in the unit cell. Expectation values can then be computed
straightforwardly by evaluating local operators
$\langle\psi|\hat{O}|\psi\rangle/\langle\psi|\psi\rangle$, where the
environment around the sites on which the operator acts and the norm of the
wave function are approximated by the CTMRG tensors.
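As a toy illustration of the coarse-graining and of evaluating $\langle\psi|\hat{O}|\psi\rangle/\langle\psi|\psi\rangle$, the sketch below merges six spin-$1/2$s into a single $2^{6}=64$-dimensional effective site and lifts a one-site operator to that space. A random vector stands in for the true contracted state; the CTMRG environment contraction itself is not sketched here.

```python
import numpy as np

sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

def lift(op, site, n=6):
    """Embed a one-site operator at position `site` of the n-spin cell."""
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

rng = np.random.default_rng(2)
psi = rng.standard_normal(64)                 # unnormalized toy state vector

O = lift(sz, site=3)                          # S^z on site 3, lifted to 64 dims
expval = (psi @ O @ psi) / (psi @ psi)        # <psi|O|psi> / <psi|psi>
print(O.shape)  # (64, 64)
```

Since $\hat{S}^{z}$ has eigenvalues $\pm 1/2$, the resulting expectation value necessarily lies in $[-1/2, 1/2]$, which serves as a quick sanity check.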
## References
* Savary and Balents [2016] L. Savary and L. Balents, Quantum spin liquids: A review, Rep. Prog. Phys. 80, 016502 (2016).
* Mendels _et al._ [2007] P. Mendels, F. Bert, M. A. de Vries, A. Olariu, A. Harrison, F. Duc, J. C. Trombe, J. S. Lord, A. Amato, and C. Baines, Quantum magnetism in the paratacamite family: Towards an ideal Kagomé lattice, Phys. Rev. Lett. 98, 077204 (2007).
* Han _et al._ [2012] T.-H. Han, J. S. Helton, S. Chu, D. G. Nocera, J. A. Rodriguez-Rivera, C. Broholm, and Y. S. Lee, Fractionalized excitations in the spin-liquid state of a Kagome-lattice antiferromagnet, Nature 492, 406 (2012).
* Khuntia _et al._ [2020] P. Khuntia, M. Velazquez, Q. Barthélemy, F. Bert, E. Kermarrec, A. Legros, B. Bernu, L. Messio, A. Zorko, and P. Mendels, Gapless ground state in the archetypal quantum Kagome antiferromagnet ZnCu3(OH)6Cl2, Nat. Phys. 16, 469 (2020).
* Fåk _et al._ [2012] B. Fåk, E. Kermarrec, L. Messio, B. Bernu, C. Lhuillier, F. Bert, P. Mendels, B. Koteswararao, F. Bouquet, J. Ollivier, A. D. Hillier, A. Amato, R. H. Colman, and A. S. Wills, Kapellasite: A Kagome quantum spin liquid with competing interactions, Phys. Rev. Lett. 109, 037208 (2012).
* Yakubovich _et al._ [2021] O. V. Yakubovich, L. V. Shvanskaya, G. V. Kiriukhina, A. S. Volkov, O. V. Dimitrova, and A. N. Vasiliev, Hydrothermal synthesis and a composite crystal structure of Na6Cu7BiO4(PO4)4[Cl(OH)]3 as a candidate for quantum spin liquid, Inorg. Chem. 60, 11450 (2021).
* Pohle _et al._ [2016] R. Pohle, O. Benton, and L. D. C. Jaubert, Reentrance of disorder in the anisotropic Shuriken Ising model, Phys. Rev. B 94, 014429 (2016).
* Fujihala _et al._ [2020] M. Fujihala, K. Morita, R. Mole, S. Mitsuda, T. Tohyama, S.-i. Yano, D. Yu, S. Sota, T. Kuwai, A. Koda, H. Okabe, H. Lee, S. Itoh, T. Hawai, T. Masuda, H. Sagayama, A. Matsuo, K. Kindo, S. Ohira-Kawamura, and K. Nakajima, Gapless spin liquid in a square-Kagome lattice antiferromagnet, Nat. Commun. 11, 3429 (2020).
* Liu _et al._ [2022] B. Liu, Z. Zeng, A. Xu, Y. Sun, O. Yakubovich, L. Shvanskaya, S. Li, and A. Vasiliev, Low-temperature specific-heat studies on two square-Kagome antiferromagnets, Phys. Rev. B 105, 155153 (2022).
* Wolf _et al._ [1972] W. P. Wolf, B. Schneider, D. P. Landau, and B. E. Keen, Magnetic and thermal properties of dysprosium aluminum garnet. II. Characteristic parameters of an Ising antiferromagnet, Phys. Rev. B 5, 4472 (1972).
* Astrakhantsev _et al._ [2021] N. Astrakhantsev, F. Ferrari, N. Niggemann, T. Müller, A. Chauhan, A. Kshetrimayum, P. Ghosh, N. Regnault, R. Thomale, J. Reuther, T. Neupert, and Y. Iqbal, Pinwheel valence bond crystal ground state of the spin-$\frac{1}{2}$ Heisenberg antiferromagnet on the Shuriken lattice, Phys. Rev. B 104, L220408 (2021).
* Lugan _et al._ [2019] T. Lugan, L. D. C. Jaubert, and A. Ralko, Topological nematic spin liquid on the square Kagome lattice, Phys. Rev. Research 1, 033147 (2019).
* Morita and Tohyama [2018] K. Morita and T. Tohyama, Magnetic phase diagrams and magnetization plateaus of the spin-1/2 antiferromagnetic Heisenberg model on a square-Kagome lattice with three nonequivalent exchange interactions, J. Phys. Soc. Jpn. 87, 043704 (2018).
* Hasegawa _et al._ [2018] Y. Hasegawa, H. Nakano, and T. Sakai, Metamagnetic jump in the spin-$\frac{1}{2}$ antiferromagnetic Heisenberg model on the square Kagome lattice, Phys. Rev. B 98, 014404 (2018).
* Ralko and Rousochatzakis [2015] A. Ralko and I. Rousochatzakis, Resonating-valence-bond physics is not always governed by the shortest tunneling loops, Phys. Rev. Lett. 115, 167202 (2015).
* Rousochatzakis _et al._ [2013] I. Rousochatzakis, R. Moessner, and J. van den Brink, Frustrated magnetism and resonating valence bond physics in two-dimensional Kagome-like magnets, Phys. Rev. B 88, 195109 (2013).
* Nakano and Sakai [2013] H. Nakano and T. Sakai, The two-dimensional S=1/2 Heisenberg antiferromagnet on the Shuriken lattice – a lattice composed of vertex-sharing triangles, J. Phys. Soc. Jpn. 82, 083709 (2013).
* Richter _et al._ [2009] J. Richter, J. Schulenburg, P. Tomczak, and D. Schmalfuß, The Heisenberg antiferromagnet on the square-Kagomé lattice, Condens. Matter Phys. 12, 507 (2009).
* Tomczak and Richter [2003] P. Tomczak and J. Richter, Specific heat of the spin- Heisenberg antiferromagnet on Squagome lattice, J. Phys. A 36, 5399 (2003).
* Orús [2014] R. Orús, A practical introduction to tensor networks: Matrix product states and projected entangled pair states, Ann. Phys. 349, 117 (2014).
* Eisert [2013] J. Eisert, Entanglement and tensor network states, (2013), arXiv:1308.3318 .
* Eisert _et al._ [2010] J. Eisert, M. Cramer, and M. B. Plenio, Area laws for the entanglement entropy, Rev. Mod. Phys. 82, 277 (2010).
* Verstraete _et al._ [2008] F. Verstraete, J. I. Cirac, and V. Murg, Matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems, Adv. Phys. 57, 143 (2008).
* Jordan _et al._ [2008] J. Jordan, R. Orús, G. Vidal, F. Verstraete, and J. I. Cirac, Classical simulation of infinite-size quantum lattice systems in two spatial dimensions, Phys. Rev. Lett. 101, 250602 (2008).
* Verstraete and Cirac [2004] F. Verstraete and I. Cirac, Renormalization algorithms for quantum-many body systems in two and higher dimensions, (2004), arXiv:cond-mat:0407066 .
* Liao _et al._ [2017] H. J. Liao, Z. Y. Xie, J. Chen, Z. Y. Liu, H. D. Xie, R. Z. Huang, B. Normand, and T. Xiang, Gapless spin-liquid ground state in the $S=1/2$ Kagome antiferromagnet, Phys. Rev. Lett. 118, 137202 (2017).
* Picot _et al._ [2016] T. Picot, M. Ziegler, R. Orús, and D. Poilblanc, Spin-$s$ kagome quantum antiferromagnets in a field with tensor networks, Phys. Rev. B 93, 060407 (2016).
* Picot and Poilblanc [2015] T. Picot and D. Poilblanc, Nematic and supernematic phases in kagome quantum antiferromagnets under the influence of a magnetic field, Phys. Rev. B 91, 064415 (2015).
* Kshetrimayum _et al._ [2016] A. Kshetrimayum, T. Picot, R. Orús, and D. Poilblanc, Spin-$\frac{1}{2}$ Kagome XXZ model in a field: Competition between lattice nematic and solid orders, Phys. Rev. B 94, 235146 (2016).
* Schmoll and Orús [2020] P. Schmoll and R. Orús, Benchmarking global $SU(2)$ symmetry in two-dimensional tensor network algorithms, Phys. Rev. B 102, 241101 (2020).
* Boos _et al._ [2019] C. Boos, S. P. G. Crone, I. A. Niesen, P. Corboz, K. P. Schmidt, and F. Mila, Competition between intermediate plaquette phases in ${\mathrm{SrCu}}_{2}$(${\mathrm{BO}}_{3}{)}_{2}$ under pressure, Phys. Rev. B 100, 140413 (2019).
* Kshetrimayum _et al._ [2020a] A. Kshetrimayum, C. Balz, B. Lake, and J. Eisert, Tensor network investigation of the double layer Kagome compound Ca10Cr7O28, Ann. Phys. (N.Y.) 421, 168292 (2020a).
* Schmoll _et al._ [2022] P. Schmoll, C. Balz, B. Lake, J. Eisert, and A. Kshetrimayum, Finite temperature tensor network algorithm for frustrated two-dimensional quantum materials, (2022), arXiv:2211.00121 .
* Kshetrimayum _et al._ [2017] A. Kshetrimayum, H. Weimer, and R. Orús, A simple tensor network algorithm for two-dimensional steady states, Nat. Commun. 8, 1291 (2017).
* Czarnik _et al._ [2012] P. Czarnik, L. Cincio, and J. Dziarmaga, Projected entangled pair states at finite temperature: Imaginary time evolution with ancillas, Phys. Rev. B 86, 245101 (2012).
* Czarnik and Dziarmaga [2015] P. Czarnik and J. Dziarmaga, Variational approach to projected entangled pair states at finite temperature, Phys. Rev. B 92, 035152 (2015).
* Czarnik _et al._ [2016] P. Czarnik, M. M. Rams, and J. Dziarmaga, Variational tensor network renormalization in imaginary time: Benchmark results in the Hubbard model at finite temperature, Phys. Rev. B 94, 235142 (2016).
* Kshetrimayum _et al._ [2019] A. Kshetrimayum, M. Rizzi, J. Eisert, and R. Orús, Tensor network annealing algorithm for two-dimensional thermal states, Phys. Rev. Lett. 122, 070502 (2019).
* Mondal _et al._ [2020] S. Mondal, A. Kshetrimayum, and T. Mishra, Two-body repulsive bound pairs in a multibody interacting Bose-Hubbard model, Phys. Rev. A 102, 023312 (2020).
* Czarnik _et al._ [2019] P. Czarnik, J. Dziarmaga, and P. Corboz, Time evolution of an infinite projected entangled pair state: An efficient algorithm, Phys. Rev. B 99, 035115 (2019).
* Hubig and Cirac [2019] C. Hubig and J. I. Cirac, Time-dependent study of disordered models with infinite projected entangled pair states, SciPost Phys. 6, 31 (2019).
* Kshetrimayum _et al._ [2020b] A. Kshetrimayum, M. Goihl, and J. Eisert, Time evolution of many-body localized systems in two spatial dimensions, Phys. Rev. B 102, 235132 (2020b).
* Kshetrimayum _et al._ [2021] A. Kshetrimayum, M. Goihl, D. M. Kennes, and J. Eisert, Quantum time crystals with programmable disorder in higher dimensions, Phys. Rev. B 103, 224205 (2021).
* Dziarmaga [2021] J. Dziarmaga, Time evolution of an infinite projected entangled pair state: Neighborhood tensor update, Phys. Rev. B 104, 094411 (2021).
* Dziarmaga [2022] J. Dziarmaga, Time evolution of an infinite projected entangled pair state: A gradient tensor update in the tangent space, Phys. Rev. B 106, 014304 (2022).
* Cirac _et al._ [2021] J. I. Cirac, D. Pérez-García, N. Schuch, and F. Verstraete, Matrix product states and projected entangled pair states: Concepts, symmetries, theorems, Rev. Mod. Phys. 93, 045003 (2021).
* Xie _et al._ [2014a] Z. Y. Xie, J. Chen, J. F. Yu, X. Kong, B. Normand, and T. Xiang, Tensor renormalization of quantum many-body systems using projected entangled simplex states, Phys. Rev. X 4, 011025 (2014a).
* Jiang _et al._ [2008] H. C. Jiang, Z. Y. Weng, and T. Xiang, Accurate determination of tensor network state of quantum lattice models in two dimensions, Phys. Rev. Lett. 101, 090603 (2008).
* Corboz _et al._ [2010] P. Corboz, R. Orús, B. Bauer, and G. Vidal, Simulation of strongly correlated fermions in two spatial dimensions with fermionic projected entangled-pair states, Phys. Rev. B 81, 165104 (2010).
* Bruognolo _et al._ [2021] B. Bruognolo, J.-W. Li, J. von Delft, and A. Weichselbaum, A beginner’s guide to non-Abelian iPEPS for correlated fermions, SciPost Phys. Lect. Notes , 25 (2021).
* [51] J. Naumann, E. Weerda, M. Rizzi, J. Eisert, and P. Schmoll, in preparation.
* Nishino and Okunishi [1996] T. Nishino and K. Okunishi, Corner transfer matrix renormalization group method, J. Phys. Soc. Jap. 65, 891 (1996).
* Nishino and Okunishi [1997] T. Nishino and K. Okunishi, Corner transfer matrix algorithm for classical renormalization group, J. Phys. Soc. Jap. 66, 3040 (1997).
* Orús and Vidal [2009] R. Orús and G. Vidal, Simulation of two-dimensional quantum systems on an infinite lattice revisited: Corner transfer matrix for tensor contraction, Phys. Rev. B 80, 094403 (2009).
* Orús [2012] R. Orús, Exploring corner transfer matrices and corner tensors for the classical simulation of quantum lattice systems, Phys. Rev. B 85, 205117 (2012).
* Schmoll _et al._ [2020] P. Schmoll, S. Singh, M. Rizzi, and R. Orús, A programming guide for tensor networks with global $SU(2)$ symmetry, Ann. Phys. (N.Y.) 419, 168232 (2020).
* Corboz [2016] P. Corboz, Improved energy extrapolation with infinite projected entangled-pair states applied to the 2D Hubbard model, Phys. Rev. B 93, 045116 (2016).
* Richter _et al._ [2022] J. Richter, O. Derzhko, and J. Schnack, Thermodynamics of the spin-half square Kagome lattice antiferromagnet, Phys. Rev. B 105, 144427 (2022).
* Schnack _et al._ [2001] J. Schnack, H. J. Schmidt, J. Richter, and J. Schulenburg, Independent magnon states on magnetic polytopes, Eur. Phys. J. B 24, 475 (2001).
* Schulenburg _et al._ [2002] J. Schulenburg, A. Honecker, J. Schnack, J. Richter, and H.-J. Schmidt, Macroscopic magnetization jumps due to independent magnons in frustrated quantum spin lattices, Phys. Rev. Lett. 88, 167207 (2002).
* Derzhko and Richter [2004] O. Derzhko and J. Richter, Finite low-temperature entropy of some strongly frustrated quantum spin lattices in the vicinity of the saturation field, Phys. Rev. B 70, 104415 (2004).
* Zhitomirsky and Tsunetsugu [2004] M. E. Zhitomirsky and H. Tsunetsugu, Exact low-temperature behavior of a Kagomé antiferromagnet at high fields, Phys. Rev. B 70, 100403 (2004).
* Derzhko _et al._ [2007] O. Derzhko, J. Richter, A. Honecker, and H.-J. Schmidt, Universal properties of highly frustrated quantum magnets in strong magnetic fields, Low Temp. Phys. 33, 745 (2007), https://doi.org/10.1063/1.2780166 .
* Mizoguchi _et al._ [2021] T. Mizoguchi, Y. Kuno, and Y. Hatsugai, Flat band, spin-1 Dirac cone, and Hofstadter diagram in the fermionic square Kagome model, Phys. Rev. B 104, 035161 (2021).
* Capponi _et al._ [2013] S. Capponi, O. Derzhko, A. Honecker, A. M. Läuchli, and J. Richter, Numerical study of magnetization plateaus in the spin-$\frac{1}{2}$ Kagome Heisenberg antiferromagnet, Phys. Rev. B 88, 144416 (2013).
* Richter _et al._ [2004] J. Richter, O. Derzhko, and J. Schulenburg, Magnetic-field induced Spin-Peierls instability in strongly frustrated quantum spin lattices, Phys. Rev. Lett. 93, 107206 (2004).
* McClarty _et al._ [2020] P. A. McClarty, M. Haque, A. Sen, and J. Richter, Disorder-free localization and many-body quantum scars from magnetic frustration, Phys. Rev. B 102, 224303 (2020).
* Glaetzle _et al._ [2014] A. W. Glaetzle, M. Dalmonte, R. Nath, I. Rousochatzakis, R. Moessner, and P. Zoller, Quantum spin-ice and dimer models with Rydberg atoms, Phys. Rev. X 4, 041037 (2014).
* Bennett _et al._ [2020] L. Bennett, B. Melchers, and B. Proppe, Curta: A general-purpose high-performance computer at ZEDAT, Freie Universität Berlin (2020).
* Xie _et al._ [2014b] Z. Y. Xie, J. Chen, J. F. Yu, X. Kong, B. Normand, and T. Xiang, Tensor renormalization of quantum many-body systems using projected entangled simplex states, Phys. Rev. X 4, 011025 (2014b).
# Quadrature-based Lattice Boltzmann model for non-equilibrium dense gas flows
S. Busuioc Department of Physics, West University of Timisoara Bd. Vasile
Parvan 4, 300223 Timisoara, Romania. Institute for Advanced Environmental
Research, West University of Timişoara, 300223 Timişoara, Romania
<EMAIL_ADDRESS>
###### Abstract
The Boltzmann equation becomes invalid as the size of gas molecules is
comparable with the average intermolecular distance. A better description is
provided by the Enskog collision operator, which takes into account the finite
size of gas molecules. This extension implies non-local collisions as well as
an increase in collision frequency, making it computationally expensive to
solve. An approximation of the Enskog collision operator, denoted the
simplified Enskog collision operator, is used in this work to develop a
quadrature-based Lattice Boltzmann model for non-ideal monatomic dense gases.
The Shakhov collision term is implemented in order to fine-tune the Prandtl
number. This kinetic model is shown to be able to tackle non-equilibrium flow
problems of dense gases, namely the sound wave and the shock wave propagation.
The results are compared systematically with the results of the more accurate
but computationally intensive particle method of solving the Enskog equation.
The model introduced in this paper is shown to have good accuracy for small to
moderate denseness of the fluid (defined as the ratio of the molecular
diameter to the mean free path) and, due to the efficiency in terms of the
computational time, it is suitable for practical applications.
## I Introduction
Over the past decades, flows at non-negligible values of the Knudsen number Kn
(defined as the ratio between the mean free path of the fluid particles in a
gas and the characteristic length of the domain), i.e. rarefied gas flows,
were successfully approached within the framework of the Boltzmann equation,
where the fluid constituents are point particles. The effect of the finite
molecular size must be considered when the mean free path of the fluid
particles is comparable to their molecular size Ferziger and Kaper (1972). This
is found in many applications, including high-pressure shock tubes Petersen and
Hanson (2001), flows through microfabricated nanomembranes Holt _et al._
(2006), single-bubble sonoluminescence Brenner, Hilgenfeldt, and Lohse (2002),
gas extraction in unconventional reservoirs Wu _et al._ (2016); Sander, Pan,
and Connell (2017) and the interfacial dynamics of liquid–vapour in high-
pressure liquid injection systems Dahms and Oefelein (2015).
In principle, the Enskog equation can be used to extend the kinetic theory
description of fluids to densities beyond the dilute-gas Boltzmann
limit Chapman and Cowling (1970); Ferziger and Kaper (1972); Kremer (2010).
While keeping binary collision dynamics, the gas molecules are no longer
treated as point-like particles, as in the Boltzmann approach, and the finite-
size effects are accounted for by including the spatial correlations between
colliding molecules, the molecular mutual shielding and the reduction in the
volume available to molecules. This equation can be solved numerically using a
probabilistic or deterministic method, just as in the case of the Boltzmann
equation. In the past years, the Enskog equation was solved deterministically
using different methods, such as the Monte Carlo quadrature method Frezzotti
and Sgarra (1993) (the 'direct method'), the fast spectral method Wu, Zhang, and
Reese (2015); Wu _et al._ (2016) and the Fokker-Planck approximation Sadr and
Gorji (2017, 2019). On the other hand, after the success of the Direct
Simulation Monte Carlo method (DSMC) Bird (1976), probabilistic methods have
been developed by Alexander et al. Alexander, Garcia, and Alder (1995),
Montanero et al. Montanero and Santos (1996) and Frezzotti Frezzotti (1997) in
the '90s. The Enskog equation has been used over the years to
study the properties of the hard-sphere dense gas near the solid walls of
micro- and nano-channels Davis (1987); Din and Michaelides (1997); Frezzotti
(1997); Nedea _et al._ (2006). Its extension to systems of weakly attracting
hard spheres has successfully been used to describe liquid–vapour flows of
monatomic Frezzotti, Gibelli, and Lorenzani (2005); Kon, Kobayashi, and
Watanabe (2014); Frezzotti, Barbante, and Gibelli (2019); Busuioc _et al._
(2020a) and polyatomic Bruno and Frezzotti (2019); Busuioc and Gibelli (2020)
fluids, of mixtures Kobayashi _et al._ (2017), as well as the formation and
breakage of liquid menisci in nanochannels Barbante, Frezzotti, and Gibelli (2015).
The methods mentioned above, albeit reliable and accurate, entail high
computational costs, which renders them impractical for many applications. In
order to reduce the computational costs, one can simplify the non-local Enskog
collision integral by expanding it into a Taylor series around the point
$\bm{x}$ in the coordinate space. The first term in this expansion recovers the
usual Boltzmann collision operator, while the second term is further
simplified by replacing the distribution function with the local equilibrium
distribution function, which is valid when the fluid is not far from
equilibrium Chapman and Cowling (1970); Kremer (2010). This simplification was
used in Lattice Boltzmann (LB) models to investigate non-ideal gases Luo (1998,
2000); Melchionna and Marconi (2007) and multiphase flows by adding the long-
range attractive force He and Doolen (2002). More recently, the simplified
Enskog collision operator was successfully implemented in a series of solvers,
namely the discrete velocity method Wang _et al._ (2020), the discrete unified
gas kinetic scheme (DUGKS) Chen _et al._ (2022), the double-distribution LB
model Huang, Wu, and Adams (2021) and the discrete Boltzmann method Zhang _et
al._ (2020); Gan _et al._ (2022). These were used to investigate the normal
shock wave structures, the rarefaction effects in head-on collisions of two
identical droplets and the liquid-vapour phase transition, respectively.
In this paper, we employ an LB model based on Gauss-Hermite quadratures Shan,
Yuan, and Chen (2006), where finite difference schemes are used for the
advection and time-stepping Piaud _et al._ (2014); Ambruş and Sofonea (2016a,
b); Sofonea _et al._ (2018); Ambruş, Sharipov, and Sofonea (2020); Busuioc
_et al._ (2020b). This finite-difference Lattice Boltzmann (FDLB) model belongs
to the family of off-lattice LB models, which also includes finite-volume and
interpolation schemes He (1997); Chen (1998). In this approach, the kinetic
equation is used to obtain an accurate evolution of the macroscopic moments of
$f$ Succi (2018), with less attention directed to the distribution $f$ itself.
This allows the momentum space to be optimally sampled for the recovery of the
moments of $f$ Shan, Yuan, and Chen (2006). By using the Gauss quadrature
method in the momentum space, off-lattice LB models of any order Shan, Yuan,
and Chen (2006); Ambruş and Sofonea (2016a, b) can be constructed to
accommodate the problem at hand.
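The quadrature idea can be sketched in one dimension with NumPy's Gauss-Hermite nodes (a minimal illustration, not the velocity set actually used in the paper): the low-order moments of a unit-temperature, zero-velocity Maxwellian are recovered exactly by the discrete sum.

```python
import numpy as np

# Gauss-Hermite nodes/weights for the weight function exp(-x^2).
nodes, weights = np.polynomial.hermite.hermgauss(8)

# Substituting p = sqrt(2)*x maps the weight onto a 1D Maxwellian
# (2*pi)^(-1/2) * exp(-p^2/2) at unit temperature and zero mean velocity.
p = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

density = np.sum(w)           # 0th moment -> 1
velocity = np.sum(w * p)      # 1st moment -> 0
energy = np.sum(w * p**2)     # 2nd moment -> 1 (unit temperature)
```

An 8-node rule integrates polynomials up to degree 15 exactly against the Gaussian weight, which is why these three moments come out exact rather than approximate.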
This paper is organised as follows. In Sec. II, the simplified Enskog equation
is presented along with the FDLB model used to solve it numerically. The
particle method of solving the Enskog equation Frezzotti (1997), which is used
to systematically benchmark the FDLB results in the case of the shock wave
propagation, is briefly presented in Sec. III. The simulation results are
reported in Sec. IV. In Sec. IV.1 the sound wave propagation results are
compared with the analytic solution, while in Sec. IV.2, the shock wave
results are compared with the results obtained using the particle method, as
well as with the inviscid limit solution. We conclude the paper in Sec. V. The
details regarding the numerical schemes employed in this paper, namely the
third-order TVD Runge-Kutta method for time-stepping, the fifth-order WENO-5
advection scheme and the sixth-order central difference scheme used for
gradient evaluation, are relegated to Appendix A.
## II The Enskog Lattice Boltzmann model
### II.1 Enskog equation
The Enskog equation describing the evolution of a system composed of rigid
spherical molecules was proposed by its author in 1922 Enskog (1922). Unlike
the Boltzmann equation, where molecules are assumed to be point-like
particles and collisions are local, Enskog took into account the volume
of the fluid particles (i.e., molecules), which reduces the free space
available to each particle and results in an increased number of
collisions. Moreover, the interparticle collisions are non-local, as the
positions of the two colliding molecules are one molecular diameter apart. The
Enskog equation can be written as Chapman and Cowling (1970); Kremer (2010):
$\frac{\partial f}{\partial
t}+\frac{\bm{p}}{m}\cdot\bm{\nabla}_{\bm{x}}f+\bm{F}\cdot\bm{\nabla}_{\bm{p}}f=J_{E}$
(1)
where $m$ is the particle mass, ${\bm{F}}$ is the external body force and
$f(\bm{x},\bm{p},t)$ is the single-particle distribution function, giving at
time $t$ the number of particles of momentum $\bm{p}$ located within the unit
phase space volume centered in the point whose position vector is $\bm{x}$.
The right-hand side is given by the Enskog collision operator $J_{E}$ which
reads:
$J_{E}=\sigma^{2}\int\left\\{\chi\left({\bm{x}}+\frac{\sigma}{2}{\bm{k}}\right)f({\bm{x}},\bm{p^{*}})f({\bm{x}}+\sigma{\bm{k}},\bm{p_{1}^{*}})\right.\\\
-\left.\chi\left({\bm{x}}-\frac{\sigma}{2}{\bm{k}}\right)f({\bm{x}},\bm{p})f({\bm{x}}-\sigma{\bm{k}},\bm{p_{1}})\right\\}({\bm{p_{r}}}\cdot{\bm{k}})d{\bm{k}}d{\bm{p_{1}}}$
(2)
where $\sigma$ is the molecular diameter. $\bm{p_{r}}=\bm{p_{1}}-\bm{p}$ is
the relative momentum and ${\bm{k}}$ is the unit vector giving the relative
position of the two colliding particles. In the equation above, the
distribution function dependence on time $t$ was dropped for brevity. The
superscript $*$ refers to the post-collision momenta.
The contact value of the pair correlation function $\chi$ accounts for the
effect of the molecular diameter $\sigma$ on the collision frequency. In the
standard Enskog theory (SET), $\chi$ is approximated by the value of the pair
correlation function at the contact point of two colliding particles in a
fluid in uniform equilibrium. An approximate but accurate expression
for $\chi_{\mbox{\tiny SET}}$, namely:
$\chi_{\text{\tiny
SET}}[n]=\frac{1}{b\rho}\left(\frac{P^{hs}}{nk_{B}T}-1\right)=\frac{1}{2}\frac{2-\eta}{(1-\eta)^{3}},$
(3)
is obtained from the equation of state of the hard-sphere fluid proposed by
Carnahan and Starling Carnahan and Starling (1969):
$P^{hs}=nk_{B}T\frac{1+\eta+\eta^{2}-\eta^{3}}{(1-\eta)^{3}}$ (4)
where $n$ is the particle number density, $\eta=b\rho/4$ is the reduced
particle density, with $b=2\pi\sigma^{3}/3m$ and $\rho=mn$ the mass density,
$P^{hs}$ is the pressure of a system of hard spheres, $k_{B}$ is the Boltzmann
constant and $T$ is the temperature. The square brackets in Eq. (3) denote a
functional dependence.
In the revised (modified) Enskog theory Van Beijeren and Ernst (1973), $\chi$
is given by the value of the pair correlation function at the contact point of
the two colliding particles in a fluid in non-uniform equilibrium. A good
approximation for the radial distribution function is obtained following the
Fischer-Methfessel (FM) prescription Fischer and Methfessel (1980). In this
approach, the actual value of the density at the contact point is replaced
with $\overline{n}({\bm{x}})$, which represents the value of the density field
averaged over a spherical volume of radius $\sigma$ centered in the point
${\bm{x}}$. Consequently, the contact value of the pair correlation function
is given by:
$\chi_{\mbox{\tiny RET-
FM}}\left(n\Big{(}\bm{x}-\frac{\sigma}{2}{\bm{\hat{k}}}\Big{)}\right)=\chi_{\mbox{\tiny
SET}}\left(\overline{n}\Big{(}\bm{x}-\frac{\sigma}{2}{\bm{\hat{k}}}\Big{)}\right),$
(5a) where
$\overline{n}(\bm{x})=\frac{3}{4\pi\sigma^{3}}\int_{\mathbb{R}^{3}}n(\bm{x}^{\prime})w(\bm{x},\bm{x}^{\prime})\,d\bm{x}^{\prime},$
(5b)
$w(\bm{x},\bm{x}^{\prime})=\left\\{\begin{array}[]{cc}1,&\qquad\|\bm{x}^{\prime}-\bm{x}\|<\sigma,\\\
0,&\qquad\|\bm{x}^{\prime}-\bm{x}\|>\sigma.\end{array}\right.$ (5c)
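The spherical average of Eq. (5b) can be sketched by Monte Carlo sampling of the ball of radius $\sigma$ (a toy illustration, not the scheme used in the paper): for a linear density field the average at a point reproduces the field value there by symmetry.

```python
import numpy as np

# Fischer-Methfessel average, Eq. (5b), estimated by Monte Carlo for a
# linear density field n(x) = n0 + g.x; by symmetry, n_bar(0) = n0.
rng = np.random.default_rng(2)
sigma, n0 = 1.0, 1.0
grad = np.array([0.3, -0.1, 0.2])        # illustrative density gradient

def n_field(x):
    return n0 + x @ grad

# Uniform samples in the ball of radius sigma centered at the origin:
# isotropic directions from Gaussians, radii ~ sigma * u^(1/3).
N = 200_000
pts = rng.standard_normal((N, 3))
radii = sigma * rng.random(N) ** (1.0 / 3.0)
pts *= (radii / np.linalg.norm(pts, axis=1))[:, None]

n_bar = n_field(pts).mean()              # estimate of n_bar at x = 0
```
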
The Enskog collision operator in Eq. (2) can be regarded as a generalisation
of the Boltzmann collision operator to particles that have spatial extent. By
taking the limit of molecular diameter $\sigma$ going to zero, the pair
correlation function goes to unity ($\chi\rightarrow 1$) and one obtains the
Boltzmann collision operator since the term $\sigma^{2}$ stems from the
scattering cross-section.
We base the non-dimensionalization procedure employed in this paper on
reference quantities Ambruş and Sofonea (2018), which we introduce as follows.
Let $L_{\text{ref}}$ be the reference length. The reference
values of the particle number density and the temperature are denoted
$n_{\text{ref}}$ and $T_{\text{ref}}$, respectively. Hence the reference value
of the momentum is $p_{\text{ref}}=\sqrt{m_{\text{ref}}k_{B}T_{\text{ref}}}$
and the reference time is
$t_{\text{ref}}=m_{\text{ref}}L_{\text{ref}}/p_{\text{ref}}$, where
$m_{\text{ref}}$ is the mass of a fluid particle.
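In dimensional terms the reference scales follow directly from these definitions (the numerical values below are illustrative placeholders, not values used in the paper):

```python
import numpy as np

# Illustrative reference quantities: an argon-like atomic mass and a
# micron-scale reference length (placeholder values, assumptions only).
kB = 1.380649e-23             # Boltzmann constant, J/K
m_ref = 6.63e-26              # kg
L_ref, T_ref = 1.0e-6, 300.0  # m, K

p_ref = np.sqrt(m_ref * kB * T_ref)   # reference momentum
t_ref = m_ref * L_ref / p_ref         # reference time
```
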
### II.2 Enskog-Shakhov equation using the simplified Enskog collision
operator
By assuming that the contact value of the pair correlation function $\chi$
(functional dependence dropped for brevity) and the distribution functions
$\\{f^{*}\equiv f({\bm{x}},\bm{p^{*}}),f_{1}^{*}\equiv
f({\bm{x}}+\sigma{\bm{k}},\bm{p_{1}^{*}}),f\equiv
f({\bm{x}},\bm{p}),f_{1}\equiv f({\bm{x}}-\sigma{\bm{k}},\bm{p_{1}})\\}$ are
smooth functions, one can approximate these functions in the Enskog collision
integral $J_{E}$ by a Taylor series around the point $\bm{x}$. The
resulting terms up to first-order gradients, $J_{E}\approx J_{0}+J_{1}$,
are Chapman and Cowling (1970); Kremer (2010):
$\displaystyle J_{0}(f,f)$ $\displaystyle=$
$\displaystyle\chi\int(f^{*}f_{1}^{*}-ff_{1})\sigma^{2}({\bm{p_{r}}}\cdot{\bm{k}})d{\bm{k}}d{\bm{p_{1}}}$
(6) $\displaystyle J_{1}(f,f)$ $\displaystyle=$
$\displaystyle\chi\sigma\int\bm{k}(f^{*}\bm{\nabla}f_{1}^{*}-f\bm{\nabla}f_{1})\sigma^{2}({\bm{p_{r}}}\cdot{\bm{k}})d{\bm{k}}d{\bm{p_{1}}}$
(7) $\displaystyle+$
$\displaystyle\frac{\sigma}{2}\int\bm{k}\bm{\nabla}\chi(f^{*}f_{1}^{*}-ff_{1})\sigma^{2}({\bm{p_{r}}}\cdot{\bm{k}})d{\bm{k}}d{\bm{p_{1}}}$
where all functions $f^{*},f_{1}^{*},f,f_{1}$ and $\chi$ are evaluated at the
point ${\bm{x}}$.
The collision term $J_{0}(f,f)$ is the usual collision term of the Boltzmann
equation multiplied by $\chi$, and is treated as such, by applying the usual
relaxation time approximation. In this paper we will employ the Shakhov
collision term Shakhov (1968a, b), namely:
$J_{0}(f,f)=-\frac{1}{\tau}(f-f^{S}),$ (8)
where $\tau$ is the relaxation time and $f^{S}$ is the equilibrium Maxwell-
Boltzmann distribution times a correction factor Shakhov (1968a, b); Graur and
Polikarpov (2009); Ambruş, Sharipov, and Sofonea (2020):
$f^{S}=f_{\text{\tiny
MB}}\left[1+\frac{1-\text{Pr}}{P_{i}k_{B}T}\left(\frac{\bm{\xi}^{2}}{5mk_{B}T}-1\right)\bm{\xi}\cdot\bm{q}\right]$
(9)
where $\bm{q}$ is the heat flux, obtained using:
$\bm{q}=\int d^{3}p\,f\,\frac{\bm{\xi}^{2}}{2m}\frac{\bm{\xi}}{m},$ (10)
$\bm{\xi}=\bm{p}-m\bm{u}$ is the peculiar momentum,
$\text{Pr}=c_{P}\mu/\lambda$ is the Prandtl number, $c_{P}=5k_{B}/2m$ is the
specific heat at constant pressure and $P_{i}=\rho RT=nk_{B}T$ is the ideal
gas pressure, with $R$ being the specific gas constant. The Maxwell-
Boltzmann distribution $f_{\text{\tiny MB}}$ is given by:
$f_{\text{\tiny MB}}=\frac{n}{(2m\pi
k_{B}T)^{3/2}}\exp{\left(-\frac{\bm{\xi}^{2}}{2mk_{B}T}\right)}$ (11)
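A useful property of the Shakhov distribution (9) is that the correction term leaves $n$, $\bm{u}$ and $T$ unchanged while carrying the heat flux $(1-\text{Pr})\bm{q}$; this can be verified on a discrete momentum grid (a sketch with $m=k_{B}=1$, $\bm{u}=0$ and illustrative values):

```python
import numpy as np

# Discretize Eqs. (9)-(11) on a uniform momentum grid (units m = kB = 1,
# u = 0; all numerical values are illustrative).
n0, T0, Pr = 1.0, 1.0, 2.0 / 3.0
q = np.array([0.05, 0.0, 0.0])           # prescribed heat flux

p1 = np.linspace(-6.0, 6.0, 81)
dp = p1[1] - p1[0]
PX, PY, PZ = np.meshgrid(p1, p1, p1, indexing="ij")
xi2 = PX**2 + PY**2 + PZ**2

f_mb = n0 / (2.0 * np.pi * T0)**1.5 * np.exp(-xi2 / (2.0 * T0))
# Shakhov correction, Eq. (9), with P_i = n0 * T0:
corr = (1.0 - Pr) / (n0 * T0 * T0) * (xi2 / (5.0 * T0) - 1.0) \
       * (PX * q[0] + PY * q[1] + PZ * q[2])
f_s = f_mb * (1.0 + corr)

density = f_s.sum() * dp**3                  # unchanged: -> n0
qx_s = (f_s * xi2 / 2.0 * PX).sum() * dp**3  # heat flux: -> (1 - Pr) * q_x
```
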
The second term of $J_{E}$, namely $J_{1}(f,f)$, can be approximated by
replacing the distribution functions ($f^{*},f_{1}^{*},f,f_{1}$) with the
corresponding equilibrium distribution functions. By using $f_{\text{\tiny
MB}}^{*}f_{\text{\tiny MB},1}^{*}=f_{\text{\tiny MB}}f_{\text{\tiny MB},1}$,
and integrating over $\bm{k}$ and $\bm{p_{1}}$, one obtains Chapman and Cowling
(1970); Kremer (2010):
$J_{1}(f,f)\approx J_{1}(f_{\text{\tiny MB}},f_{\text{\tiny MB}})=\\\
-b\rho\chi f_{\text{\tiny
MB}}\left\\{\bm{\xi}\left[\bm{\nabla}\ln(\rho^{2}\chi
T)+\frac{3}{5}\left(\zeta^{2}-\frac{5}{2}\right)\bm{\nabla}\ln
T\right]\right.\\\ \left.+\frac{2}{5}\left[2\bm{\zeta}\bm{\zeta}\bm{:\nabla
u}+\left(\zeta^{2}-\frac{5}{2}\right)\bm{\nabla\cdot u}\right]\right\\}$ (12)
where $\bm{\zeta}=\bm{\xi}/\sqrt{2RT}$. With the above approximations and
considering no external force, the Enskog equation (1) becomes:
$\frac{\partial f}{\partial
t}+\frac{\bm{p}}{m}\cdot\bm{\nabla}_{\bm{x}}f=-\frac{1}{\tau}(f-f^{S})+J_{1}(f_{\text{\tiny
MB}},f_{\text{\tiny MB}})$ (13)
The macroscopic quantities are evaluated as moments of the distribution
function:
$\begin{pmatrix}n\\\ \rho\bm{u}\\\ \frac{3}{2}nk_{B}T\end{pmatrix}=\int
d^{3}p\begin{pmatrix}1\\\ \bm{p}\\\ \frac{\bm{\xi}^{2}}{2m}\end{pmatrix}f$
(14)
where $\rho=mn$.
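On a discrete momentum grid, the moments of Eq. (14) reduce to weighted sums; the sketch below recovers $n$, $\bm{u}$ and $T$ from a shifted Maxwellian (units $m=k_{B}=1$, illustrative values):

```python
import numpy as np

# Recover n, u, T as discrete moments of a shifted Maxwellian, Eq. (14).
n0, T0, ux = 1.5, 0.8, 0.3
p1 = np.linspace(-7.0, 7.0, 71)
dp = p1[1] - p1[0]
PX, PY, PZ = np.meshgrid(p1, p1, p1, indexing="ij")

f = n0 / (2.0 * np.pi * T0)**1.5 \
    * np.exp(-((PX - ux)**2 + PY**2 + PZ**2) / (2.0 * T0))

n = f.sum() * dp**3                                   # number density
u = np.array([(f * PX).sum(), (f * PY).sum(),
              (f * PZ).sum()]) * dp**3 / n            # bulk velocity (m = 1)
xi2 = (PX - u[0])**2 + (PY - u[1])**2 + (PZ - u[2])**2
T = (f * xi2 / 2.0).sum() * dp**3 / (1.5 * n)         # from (3/2) n kB T
```
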
The Chapman-Enskog expansion of Eq. (13) yields the following conservation
equations for mass, momentum and energy Kremer (2010):
$\displaystyle\frac{D\rho}{Dt}+\rho\nabla\cdot\bm{u}$ $\displaystyle=0$ (15a)
$\displaystyle\rho\frac{D{\bm{u}}}{Dt}+\nabla P$
$\displaystyle=-\nabla\cdot\Pi$ (15b)
$\displaystyle\rho\frac{De}{Dt}+P\nabla\cdot\bm{u}$
$\displaystyle=-\nabla\cdot\bm{q}-\Pi\bm{:}\nabla\bm{u}$ (15c)
where $D/Dt=\partial_{t}+\bm{u}\cdot\nabla$ is the material derivative and
$P=P_{i}(1+b\rho\chi)$ is the equation of state of a non-ideal gas. The heat
flux and the viscous part of the stress tensor $\Pi_{\alpha\beta}$ are given
by:
$\displaystyle\bm{q}=-\lambda\nabla T,$ (16)
$\displaystyle\Pi=-\mu_{v}\mathcal{I}\bm{\nabla}\cdot\bm{u}-\mu\left(\nabla\bm{u}+(\nabla\bm{u})^{T}-\frac{2}{3}\mathcal{I}\bm{\nabla}\cdot\bm{u}\right)$ (17)
where $\mathcal{I}$ is the identity matrix and the bulk viscosity $\mu_{v}$,
the shear viscosity $\mu$ and the thermal conductivity $\lambda$ are given
by Kremer (2010):
$\mu_{v}=\frac{16}{5\pi}\mu_{0}b^{2}\rho^{2}\chi,$ (18a) $\mu=\tau
P_{i}=\mu_{0}b\rho\left[\frac{1}{b\rho\chi}+0.8+\frac{4}{25}\left(1+\frac{12}{\pi}\right)b\rho\chi\right],$
(18b) $\lambda=\frac{5k_{B}}{2m}\frac{\tau
P_{i}}{\text{Pr}}=\lambda_{0}b\rho\left[\frac{1}{b\rho\chi}+1.2+\frac{9}{25}\left(1+\frac{32}{9\pi}\right)b\rho\chi\right],$
(18c)
In these equations, $\mu_{0}=\mu_{\text{ref}}\sqrt{T/T_{0}}$ is the viscosity
coefficient for hard-sphere molecules, where $\mu_{\text{ref}}$ represents the
viscosity coefficient for dilute gases at temperature $T_{0}$ and
$\lambda_{0}\equiv\lambda_{\text{ref}}$ is the reference thermal conductivity
for dilute gases at temperature $T_{0}$. The reference values are Kremer
(2010):
$\mu_{\text{ref}}=\frac{5}{16\sigma^{2}}\sqrt{\frac{mk_{B}T_{0}}{\pi}},\quad\lambda_{\text{ref}}=\frac{75k_{B}}{64m\sigma^{2}}\sqrt{\frac{mk_{B}T_{0}}{\pi}}.$
(19)
For the dense gas the Prandtl number is:
$\text{Pr}=\frac{2}{3}\,\frac{1+\frac{4}{5}b\rho\chi+\frac{4}{25}\left(1+\frac{12}{\pi}\right)(b\rho\chi)^{2}}{1+\frac{6}{5}b\rho\chi+\frac{9}{25}\left(1+\frac{32}{9\pi}\right)(b\rho\chi)^{2}}$
(20)
with the dilute limit of $\text{Pr}=2/3$.
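As a quick consistency check on Eqs. (18)-(20), the following sketch (Python, with illustrative values of $x=b\rho\chi$) verifies that Eq. (20) coincides with $\text{Pr}=c_{p}\mu/\lambda$ evaluated from Eqs. (18b) and (18c), using $\lambda_{\text{ref}}/\mu_{\text{ref}}=15k_{B}/4m$ from Eq. (19):

```python
import math

def mu_bracket(x):
    # Bracket of Eq. (18b), with x = b*rho*chi
    return 1.0 / x + 0.8 + (4.0 / 25.0) * (1.0 + 12.0 / math.pi) * x

def lam_bracket(x):
    # Bracket of Eq. (18c)
    return 1.0 / x + 1.2 + (9.0 / 25.0) * (1.0 + 32.0 / (9.0 * math.pi)) * x

def prandtl(x):
    # Eq. (20)
    num = 1.0 + 0.8 * x + (4.0 / 25.0) * (1.0 + 12.0 / math.pi) * x**2
    den = 1.0 + 1.2 * x + (9.0 / 25.0) * (1.0 + 32.0 / (9.0 * math.pi)) * x**2
    return (2.0 / 3.0) * num / den

# Pr = (5 k_B / 2m) mu / lambda; the prefactors combine to 2/3 since
# lambda_ref / mu_ref = 15 k_B / (4 m), Eq. (19)
for x in (1e-3, 0.1, 1.0, 5.0):
    assert abs(prandtl(x) - (2.0 / 3.0) * mu_bracket(x) / lam_bracket(x)) < 1e-12
```

In the dilute limit $x\to 0$ the ratio of the two brackets tends to one and Eq. (20) recovers $\text{Pr}=2/3$.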
From here it follows directly that the relaxation time $\tau$ is given by:
$\tau=\frac{\mu}{P_{i}}$ (21)
Since $\mu$ accounts for both the kinetic contribution, associated with the
flow of the molecules, and the collisional (potential) contribution to the
transfer of momentum and energy Chapman and Cowling (1970); Kremer (2010),
the collisional transfer due to the nonlocal molecular collisions is well
described in the relaxation time approximation. Note that the viscosity of
the dense gas at a fixed reduced density $\eta$ can be changed by varying
the molecular diameter $\sigma$ and the number density $n$.
By using the reference mean free path $l=m/(\sqrt{2}\pi\sigma^{2}n\chi)$, one
can define the degree of denseness $E_{l}$ introduced by Frezzotti and
Sgarra Frezzotti and Sgarra (1993), given by the ratio of the molecular
diameter and the mean free path:
$E_{l}=\frac{\sigma}{l}=\frac{3}{\sqrt{2}}bn\chi.$ (22)
The relaxation time $\tau$ can be rewritten as the molecular diameter $\sigma$
times a functional $g$ of $\eta$:
$\tau=\sigma g[\eta]$ (23)
such that one can vary $\tau$ at constant reduced density $\eta$ by changing
$\sigma$. Furthermore, in the case of the standard Enskog theory (SET), one
can keep $\sigma$ and $n$ fixed (i.e. a constant $\eta$) and multiply $\tau$
with a relaxation scaling factor $\widetilde{\tau}$, which is equivalent to
setting $\sigma=\widetilde{\tau}$ and keeping $\eta$ constant. This is true
also for $J_{1}$ since all terms remain unchanged when $\eta=\text{const}$ and
varying $\sigma$ and $n$.
### II.3 Reduced distributions
In the context of the longitudinal waves and 1D shock waves considered in this
paper, the dynamics along the $y$ and $z$ directions is trivial and it is
convenient to integrate out the momentum space degrees of freedom at the level
of the model equation. The $y$ and $z$ degrees of freedom can be integrated
out and two reduced distribution functions, $\phi$ and $\theta$, can be
introduced as Li and Zhang (2004); Graur and Polikarpov (2009); Meng _et al._
(2013); Ambruş and Sofonea (2018); Ambrus and Sofonea (2019); Busuioc and
Ambruş (2019):
$\displaystyle\phi({\bm{x}},p_{x},t)$ $\displaystyle=\int
dp_{y}dp_{z}f({\bm{x}},{\bm{p}},t),$ (24)
$\displaystyle\theta({\bm{x}},p_{x},t)$ $\displaystyle=\int
dp_{y}dp_{z}\frac{p_{y}^{2}+p_{z}^{2}}{m}f({\bm{x}},{\bm{p}},t)$ (25)
In the following, all dependencies of the reduced distribution functions will
be dropped for brevity. The macroscopic moments can be evaluated as:
$\displaystyle\begin{pmatrix}n\\\ \rho u_{x}\\\ \Pi_{xx}\end{pmatrix}$
$\displaystyle=\int dp_{x}\begin{pmatrix}1\\\ p_{x}\\\
\frac{\xi_{x}^{2}}{m}\end{pmatrix}\phi,$ (26)
$\displaystyle\begin{pmatrix}\frac{3}{2}nk_{B}T\\\ q_{x}\end{pmatrix}$
$\displaystyle=\int dp_{x}\begin{pmatrix}1\\\
\frac{\xi_{x}}{m}\end{pmatrix}\left(\frac{\xi_{x}^{2}}{2m}\phi+\frac{1}{2}\theta\right)$
(27)
The evolution equations for the reduced distribution functions are:
$\frac{\partial}{\partial t}\begin{pmatrix}\phi\\\
\theta\end{pmatrix}+\frac{p_{x}}{m}\frac{\partial}{\partial
x}\begin{pmatrix}\phi\\\
\theta\end{pmatrix}=-\frac{1}{\tau}\begin{pmatrix}\phi-\phi_{S}\\\
\theta-\theta_{S}\end{pmatrix}+\begin{pmatrix}J_{1}^{\phi}\\\
J_{1}^{\theta}\end{pmatrix}$ (28)
In the above, $\phi_{S}$ and $\theta_{S}$ are given by:
$\displaystyle\phi_{S}=f^{x}_{\text{\tiny
MB}}\left[1+\frac{1-\text{Pr}}{5P_{i}mk_{B}T}\left(\frac{\xi_{x}^{2}}{mk_{B}T}-3\right)\xi_{x}q_{x}\right],$
(29) $\displaystyle\theta_{S}=2k_{B}Tf^{x}_{\text{\tiny
MB}}\left[1+\frac{1-\text{Pr}}{5P_{i}mk_{B}T}\left(\frac{\xi_{x}^{2}}{mk_{B}T}-1\right)\xi_{x}q_{x}\right]$
(30)
where
$f^{x}_{\text{\tiny MB}}=\frac{n}{(2m\pi
k_{B}T)^{1/2}}\exp{\left(-\frac{\xi_{x}^{2}}{2mk_{B}T}\right)}$ (31)
while the first order corrections $J_{1}^{\phi}$ and $J_{1}^{\theta}$ are:
$J_{1}^{\phi}=-\left[\xi_{x}\partial_{x}\ln\chi+2\xi_{x}\partial_{x}\ln\rho+\frac{3}{5}\left(\frac{\xi_{x}^{2}}{mk_{B}T}-1\right)\partial_{x}u_{x}\right.\\\
\left.+\frac{3}{10}\left(\frac{\xi_{x}^{3}}{m^{2}k_{B}T}+\frac{\xi_{x}}{3m}\right)\partial_{x}\ln
T\right]f_{\text{\tiny MB}}$ (32a)
$J_{1}^{\theta}=-\left[\xi_{x}\partial_{x}\ln\chi+2\xi_{x}\partial_{x}\ln\rho+\frac{3}{5}\left(\frac{\xi_{x}^{2}}{mk_{B}T}-\frac{1}{3}\right)\partial_{x}u_{x}\right.\\\
\left.+\frac{3}{10}\left(\frac{\xi_{x}^{3}}{m^{2}k_{B}T}+\frac{7\xi_{x}}{3m}\right)\partial_{x}\ln
T\right]2mk_{B}Tf_{\text{\tiny MB}}b\rho\chi$ (32b)
### II.4 The finite-difference Enskog Lattice Boltzmann model
By using the reduced distribution, one has to solve the 1D evolution equations
Eqs. (28). In the following, we will introduce the notation
$\psi\in\\{\phi,\,\theta\\}$ to represent the reduced distributions introduced
in Sec. II.3.
When the Shakhov collision term is used in an LB model, the moments of the
distribution function $\psi({x},{p},t)$ up to order $N\geq 6$ are needed in
order to get the evolution equations of the macroscopic fields Ambruş and
Sofonea (2018). Thus, the minimum number of momentum vectors in an LB
model based on the full-range Gauss-Hermite quadrature that ensures all the
moments of $\psi({x},{p_{x}},t)$ up to order $N_{\text{min}}=6$ is
$Q_{\text{min}}=(N_{\text{min}}+1)=7$ Shan, Yuan, and Chen (2006); Piaud _et
al._ (2014); Fede _et al._ (2015); Ambruş and Sofonea (2016a). Hence, the
momentum set $\\{{p}_{k}\\}$ has $Q\geq Q_{\text{min}}$ elements that belong
to the set $\\{{r}_{k}\\}$, $1\leq k\leq Q$, of the roots of the full-range
Hermite polynomial $H_{Q}(p)$ Shan, Yuan, and Chen (2006); Ambruş and Sofonea
(2016a), and their associated weights ${w}_{k}$ are given by Ambruş and
Sofonea (2016a, b); Hildebrand (1987); F.W.J. Olver (2010)
${w}_{k}=\frac{Q!}{\,[H_{Q+1}({r}_{k})]^{2}\,}.$ (33)
The full range Hermite polynomials $H_{\ell}(p)$ used in this paper are the
so-called _probabilistic_ Hermite polynomials, which are orthogonal with
respect to the weight function
$\omega(p)=\frac{1}{\,\sqrt{2\pi}\,}e^{-p^{2}/2},$ (34)
and their orthogonality relation readsHildebrand (1987)
$\int_{-\infty}^{+\infty}dp\,\omega(p)H_{\ell}(p)H_{\ell^{\prime}}(p)=\ell!\,\delta_{\ell,\ell^{\prime}}.$
(35)
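As an illustration of Eqs. (33)-(35), the sketch below (Python with NumPy; $Q=7$ is the minimal order quoted above) builds the roots and weights of the probabilists' Hermite polynomial and cross-checks them against NumPy's Gauss-Hermite rule, normalized to the weight function of Eq. (34):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials

Q = 7  # minimal quadrature order quoted above

# Roots r_k of H_Q and weights from Eq. (33): w_k = Q! / [H_{Q+1}(r_k)]^2
r = np.sort(He.hermeroots([0.0] * Q + [1.0]))
w = math.factorial(Q) / He.hermeval(r, [0.0] * (Q + 1) + [1.0]) ** 2

# Cross-check against NumPy's rule, whose weights are normalized to the
# weight exp(-p^2/2) rather than omega(p) of Eq. (34)
r_np, w_np = He.hermegauss(Q)
assert np.allclose(r, r_np)
assert np.allclose(w, w_np / math.sqrt(2.0 * math.pi))

# The rule integrates moments of omega(p) exactly up to degree 2Q-1
assert abs(w.sum() - 1.0) < 1e-12        # <1>   = 1
assert abs(w @ r**2 - 1.0) < 1e-12       # <p^2> = 1
assert abs(w @ r**4 - 3.0) < 1e-12       # <p^4> = 3
```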
The equilibrium functions $f_{\text{\tiny MB}}^{k}\equiv f_{\text{\tiny
MB}}(x,p_{k},t)$ are replaced by Ambruş and Sofonea (2016a, b):
$f_{\text{\tiny MB}}^{k}=ng_{k},$ (36a) where $g_{k}\equiv
g_{k}\left[u,T\right]=w_{k}\,\sum_{\ell=0}^{N}H_{\ell}(p_{k})\sum_{s=0}^{\lfloor\ell/2\rfloor}\frac{\,(mT-1)^{s}(mu)^{\ell-2s}\,}{\,2^{s}s!(\ell-2s)!\,},$
(36b)
and $\lfloor\ell/2\rfloor$ is the integer part of $\ell/2$.
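A minimal numerical check of the expansion (36), in nondimensional units with $m=1$ and illustrative values of $u$ and $T$: the truncated discrete equilibrium reproduces the density, momentum and energy moments of the Maxwellian exactly, up to quadrature precision.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

Q, N = 7, 6  # quadrature order and expansion order used in the text
p = np.sort(He.hermeroots([0.0] * Q + [1.0]))                          # abscissas p_k
w = math.factorial(Q) / He.hermeval(p, [0.0] * (Q + 1) + [1.0]) ** 2   # Eq. (33)

def g_eq(u, T):
    """Discrete equilibrium g_k[u, T] of Eq. (36b), with m = 1."""
    g = np.zeros_like(p)
    for l in range(N + 1):
        coeff = sum(
            (T - 1.0) ** s * u ** (l - 2 * s)
            / (2.0 ** s * math.factorial(s) * math.factorial(l - 2 * s))
            for s in range(l // 2 + 1)
        )
        g += coeff * He.hermeval(p, [0.0] * l + [1.0])
    return w * g

u, T = 0.2, 1.1  # illustrative velocity and temperature
g = g_eq(u, T)
assert abs(g.sum() - 1.0) < 1e-12               # density (n = 1)
assert abs(p @ g - u) < 1e-12                   # momentum
assert abs(p ** 2 @ g - (u ** 2 + T)) < 1e-12   # second moment
```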
The non-dimensionalized form of the evolution equation of the functions
$\phi_{k}$ and $\theta_{k}$ is:
$\frac{\partial}{\partial t}\begin{pmatrix}\phi_{k}\\\
\theta_{k}\end{pmatrix}+\frac{p_{k}}{m}\frac{\partial}{\partial
x}\begin{pmatrix}\phi_{k}\\\
\theta_{k}\end{pmatrix}=-\frac{1}{\tau}\begin{pmatrix}\phi_{k}-\phi_{S;k}\\\
\theta_{k}-\theta_{S;k}\end{pmatrix}+\begin{pmatrix}J_{1;k}^{\phi}\\\
J_{1;k}^{\theta}\end{pmatrix}.$ (37)
The macroscopic quantities are evaluated as:
$\displaystyle\begin{pmatrix}n\\\ \rho u\\\ \Pi\end{pmatrix}$
$\displaystyle=\sum_{k=1}^{Q}\begin{pmatrix}1\\\ p_{k}\\\
\frac{\xi_{k}^{2}}{m}\end{pmatrix}\phi_{k},$ (38)
$\displaystyle\begin{pmatrix}\frac{3}{2}nk_{B}T\\\ q\end{pmatrix}$
$\displaystyle=\sum_{k=1}^{Q}\begin{pmatrix}1\\\
\frac{\xi_{k}}{m}\end{pmatrix}\left(\frac{\xi_{k}^{2}}{2m}\phi_{k}+\frac{1}{2}\theta_{k}\right)$
(39)
## III Particle method for Enskog equation
The Enskog equation Eq. (1) is also solved numerically, using a particle
method. The method is an extension of the original Direct Simulation Monte
Carlo (DSMC) scheme to deal with the nonlocal structure of the Enskog collision
integral Frezzotti (1997). For a thorough description of the numerical scheme
and the analysis of its computational complexity please refer to Ref.
Frezzotti, Barbante, and Gibelli (2019). A brief description of the scheme is
outlined below.
The main framework of the DSMC scheme used to solve the Boltzmann equation is
preserved, with modifications occurring in the collision algorithm due to the
nonlocal structure of the Enskog collision operator. The distribution function
is represented by $N$ computational particles:
$f(\bm{x},\bm{p},t)=\frac{1}{m}\sum_{i=1}^{N}\delta{\left(\bm{x}-\bm{x}_{i}(t)\right)}\delta(\bm{p}-\bm{p}_{i}(t)),$
(40)
where $\bm{x}_{i}$ and $\bm{p}_{i}$ are the position and the momentum of the
$i$th particle at time $t$, respectively.
The distribution function is updated by a fractional-step method based on the
time-splitting of the evolution operator in two sub-steps, namely free
streaming and collision. In the first stage, the distribution function is
advanced from $t$ to $t+\Delta t$ by neglecting the collisions between
particles, i.e. by solving the equation:
$\frac{\partial f}{\partial t}+\frac{\bm{p}}{m}\cdot\nabla_{\bm{x}}f=0,$ (41)
which translates into updating the positions of the computational particles
according to:
$\bm{x}_{i}(t+\Delta t)=\bm{x}_{i}(t)+\frac{\bm{p}_{i}}{m}\Delta t,$ (42)
with the resulting distribution function denoted
$\tilde{f}(\bm{x},\bm{p},t+\Delta t)$.
In the second stage, the short-range hard-sphere interactions are evaluated
and the updating rule for the distribution function is given by:
$f(\bm{x},\bm{p},t+\Delta t)=\tilde{f}(\bm{x},\bm{p},t+\Delta
t)+J_{E}[\tilde{f}]\Delta t.$ (43)
During this stage, the $N$ particle positions $\bm{x}_{i}$ are unchanged while
their momenta $\bm{p}_{i}$ are modified according to stochastic rules which
essentially correspond to the Monte Carlo evaluation of the collision integral
given by Eq. (2), by selecting collision pairs accordingly. The macroscopic
quantities are obtained by time-averaging the particles’ microscopic states,
as well as by phase averaging, i.e. by running macroscopically identical but
statistically independent simulations (same initialisation but a different
random seed).
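For concreteness, the free-streaming sub-step, Eq. (42), can be sketched as follows. This is a minimal Python illustration; the periodic box, its length, the particle number and the omission of the collision sub-step are assumptions of the sketch, not part of the scheme of Ref. Frezzotti, Barbante, and Gibelli (2019).

```python
import numpy as np

rng = np.random.default_rng(0)
L, m, dt, n_part = 1.0, 1.0, 1e-3, 1000  # illustrative values

x = rng.uniform(0.0, L, n_part)    # particle positions x_i
p = rng.normal(0.0, 1.0, n_part)   # particle momenta p_i (Maxwellian, m = k_B T = 1)

def stream(x, p, dt):
    """Free-streaming sub-step, Eq. (42), with periodic wrap-around."""
    return (x + p / m * dt) % L

x_new = stream(x, p, dt)
# Positions stay inside the box and each displacement equals (p_i/m) dt mod L
assert np.all((x_new >= 0.0) & (x_new < L))
assert np.allclose((x_new - x) % L, (p / m * dt) % L)
```

The collision sub-step, Eq. (43), would then modify the momenta in place at fixed positions.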
## IV Results
### IV.1 Longitudinal waves
Figure 1: The evolution of the normalized density amplitude
$\delta\rho(t)/\delta\rho_{0}$ obtained numerically with $N_{x}=100$ and
$Q_{x}=8$, compared with the analytic prediction in Eq. (56), for two values
of the reduced density $\eta$.
#### IV.1.1 Problem statement
The study of longitudinal waves is an important topic in fluid mechanics Faber
(1995); Sharipov and Kalempa (2008); Wang and Xu (2012); Sharipov (2015);
Ambruş (2018); Shan (2019). The propagation of longitudinal waves induces
fluctuations in the macroscopic properties of the fluid, the amplitudes of
which decay due to viscous and thermal dissipation. The sound wave propagates
as a longitudinal wave through the compression and relaxation of the
neighboring fluid elements. For simplicity, we will consider small
perturbations of density and pressure around the constant values $\rho_{0}$
and $P_{0}$ in a fluid that is homogeneous along the $y$ and $z$ axes. The
wave propagates along the $x$ axis with a small velocity $u(x,t)$:
$\rho(x,t)=\rho_{0}[1+\delta\rho(x,t)],\quad P(x,t)=P_{0}[1+\delta P(x,t)]$
(44)
where the perturbations $\delta\rho$ and $\delta P$ are of the same order of
magnitude as $u$.
#### IV.1.2 Analytic solution
We will briefly go through the usual approach to sound wave propagation Faber
(1995); Kundu, Cohen, and Dowling (2015); White (2016). In the linearised
regime, the macroscopic equations reduce to:
$\displaystyle\partial_{t}\delta\rho+\partial_{x}u=0$ (45a)
$\displaystyle\partial_{t}u+\frac{P_{0}}{\rho_{0}}\partial_{x}\delta
P-\frac{1}{\rho_{0}}\partial_{x}\Pi=0$ (45b) $\displaystyle\partial_{t}\delta
T+\frac{\partial_{x}q}{\rho_{0}c_{V}T_{0}}+\frac{P_{0}}{\rho_{0}c_{V}T_{0}}\partial_{x}u=0$
(45c)
where the specific energy is $e=c_{V}T=c_{V}T_{0}(1+\delta T)$ and $\Pi=O(u)$.
Considering that the pressure $P$ depends on $x$ and $t$ only through the
variables $\rho$ and $T$, the derivative can be written as:
$P_{0}\partial_{x}\delta
P=\rho_{0}(\partial_{\rho}P)(\partial_{x}\delta\rho)+T_{0}(\partial_{T}P)(\partial_{x}\delta
T)$ (46)
By replacing the above results in Eq. (45b) and applying a time derivative to
the whole equation, one obtains:
$\partial_{t}^{2}u-\left(\partial_{\rho}P+\frac{P_{0}}{\rho_{0}^{2}c_{V}}\partial_{T}P\right)\partial_{x}^{2}u=\frac{1}{\rho_{0}^{2}c_{V}}(\partial_{T}P)(\partial_{x}^{2}q)+\frac{1}{\rho_{0}}\partial_{t}\partial_{x}\Pi$
(47)
By neglecting dissipative effects one can identify the square of the sound
speed as:
$c_{s}^{2}=\partial_{\rho}P+\frac{P_{0}}{\rho_{0}^{2}c_{V}}\partial_{T}P$ (48)
A harmonic decomposition can be performed with respect to the perturbation
amplitudes based on the linearity and homogeneity of Eqs. (45). Given a wave
number $k=2\pi/L$ of a longitudinal wave of length $L$, the following
relations can be established:
$\begin{pmatrix}\delta\rho\\\ \delta P\\\
\delta T\end{pmatrix}=\begin{pmatrix}\widetilde{\delta\rho}(t)\\\
\widetilde{\delta P}(t)\\\
\widetilde{\delta T}(t)\end{pmatrix}\cos(kx),\quad\begin{pmatrix}u\\\
q\end{pmatrix}=\begin{pmatrix}\widetilde{u}(t)\\\
\widetilde{q}(t)\end{pmatrix}\sin(kx)$ (49)
where the amplitudes
$\widetilde{A}\equiv\widetilde{A}(t)\,(\widetilde{A}\in\\{\widetilde{\delta\rho},\widetilde{\delta
P},\widetilde{\delta T},\widetilde{u},\widetilde{q}\\})$ depend only on time. These amplitudes can be
written in terms of independent modes:
$\widetilde{A}(t)=\sum_{\alpha}e^{-\alpha t}A_{\alpha}$ (50)
where $A_{\alpha}$ are constants. The viscous part of the stress tensor
$\Pi_{ij}$ can be written as:
$\Pi=\left(\frac{4\mu}{3}+\mu_{V}\right)\partial_{x}u\implies\Pi_{\alpha}=\left(\frac{4\mu}{3}+\mu_{V}\right)ku_{\alpha}$
(51)
From the energy equation (45c) one gets:
$\alpha\delta
T_{\alpha}=\frac{k}{\rho_{0}c_{V}T_{0}}(q_{\alpha}+P_{0}u_{\alpha})$ (52)
By virtue of the Fourier law $\bm{q}=-\lambda\nabla T$, one obtains:
$q_{\alpha}=\frac{\lambda k^{2}P_{0}}{\alpha\rho_{0}c_{V}-\lambda
k^{2}}u_{\alpha}.$ (53)
Replacing all into Eq. (47) we get:
$\alpha^{3}-\frac{\mu}{\rho_{0}}k^{2}\alpha^{2}\left(\frac{4}{3}+\frac{\mu_{V}}{\mu}+\frac{\gamma}{\text{Pr}}\right)+\\\
k^{2}c_{s}^{2}\alpha\left[1+\frac{\gamma
k^{2}\mu^{2}}{\rho_{0}^{2}c_{s}^{2}\text{Pr}}\left(\frac{4}{3}+\frac{\mu_{V}}{\mu}\right)\right]-\frac{\gamma\mu
k^{4}}{\rho_{0}\text{Pr}}\partial_{\rho}P=0$ (54)
where $\text{Pr}=c_{P}\mu/\lambda$ and $\gamma=c_{s}^{2}\rho/P$ is the
adiabatic index.
The above equation is cubic with respect to $\alpha$, thus it admits at least
one real solution, which corresponds to the thermal mode $\alpha_{t}$. The
other two roots $\alpha_{\pm}$, corresponding to the acoustic modes, must be
complex in order to allow the wave to propagate. Writing
$\alpha_{\pm}=\alpha_{a}\pm i\alpha_{s}$, we see that $\alpha_{a}$ induces
acoustic dissipation, while $\alpha_{s}=kc_{s}$ is related to the speed of
sound $c_{s}$ at the background parameters:
$\displaystyle\alpha_{t}=\frac{\gamma\mu
k^{2}}{\text{Pr}\rho_{0}c_{s}^{2}}\partial_{\rho}P,\quad\alpha_{s}=kc_{s}$
$\displaystyle\alpha_{a}=\frac{k^{2}\mu}{2\rho_{0}}\left[\frac{4}{3}+\frac{\mu_{V}}{\mu}+\frac{\gamma}{\text{Pr}}\left(1-\frac{\partial_{\rho}P}{c_{s}^{2}}\right)\right]$ (55)
In this paper, we will restrict our simulations to the case when the pressure
perturbation vanishes at initial time $\delta P(t_{0})=0$, since all other
combinations are equivalent. After some calculations one can write the full
solution of the density amplitude:
$\delta\rho(t)\approx\delta\rho_{0}\left[e^{-\alpha_{t}t}+\left(e^{-\alpha_{a}t}\cos(kc_{s}t)-e^{-\alpha_{t}t}\right)\frac{\partial_{\rho}P}{c_{s}^{2}}\right]$
(56)
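As a numerical sanity check, with illustrative dilute-gas parameters for which $\partial_{\rho}P=c_{s}^{2}/\gamma$, the roots of the dispersion cubic, Eq. (54), can be compared with the weak-damping modes of Eq. (55), written here with the dimensionless bracket $\gamma/\text{Pr}\,(1-\partial_{\rho}P/c_{s}^{2})$:

```python
import numpy as np

# Illustrative dilute-gas parameters (monatomic ideal gas)
rho0, k, mu, mu_V = 1.0, 2.0 * np.pi, 1.0e-3, 0.0
Pr, gamma, dPdrho = 2.0 / 3.0, 5.0 / 3.0, 1.0  # dP/drho = c_s^2 / gamma
cs2 = gamma * dPdrho
cs = np.sqrt(cs2)

# Dispersion cubic, Eq. (54): alpha^3 + a2*alpha^2 + a1*alpha + a0 = 0
a2 = -(mu / rho0) * k**2 * (4.0 / 3.0 + mu_V / mu + gamma / Pr)
a1 = k**2 * cs2 * (1.0 + gamma * k**2 * mu**2 / (rho0**2 * cs2 * Pr)
                   * (4.0 / 3.0 + mu_V / mu))
a0 = -gamma * mu * k**4 / (rho0 * Pr) * dPdrho
roots = np.roots([1.0, a2, a1, a0])

# Weak-damping modes, Eq. (55), with the dimensionless acoustic bracket
alpha_t = gamma * mu * k**2 / (Pr * rho0 * cs2) * dPdrho
alpha_a = k**2 * mu / (2.0 * rho0) * (4.0 / 3.0 + mu_V / mu
                                      + (gamma / Pr) * (1.0 - dPdrho / cs2))
alpha_s = k * cs

mask = np.abs(roots.imag) < 1e-6
thermal = roots[mask].real[0]   # real root: thermal mode
acoustic = roots[~mask]         # complex pair: acoustic modes
assert np.isclose(thermal, alpha_t, rtol=1e-2)
assert np.allclose(acoustic.real, alpha_a, rtol=1e-2)
assert np.allclose(np.abs(acoustic.imag), alpha_s, rtol=1e-3)
```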
#### IV.1.3 Computational setup
Figure 2: The sound speed $c_{s}$ obtained from the simulation results and the
analytic prediction Eq. (48).
All simulations are performed using a system of length equal to $L=1$
($k=2\pi/L$) and $N_{x}=100$ nodes. The quadrature order is chosen to be $Q=8$
as it proved to be more stable than the minimal order $Q=7$. The
molecular diameter was set to $\sigma=\widetilde{\tau}=10^{-6}$ in order to
maintain the viscosity at relatively low values over several orders of magnitude
of the reduced density $\eta$, for the comparison with the analytic solution.
The time step was set to $\Delta t=10^{-6}$, the temperature to $T=1$, and the
density and the density perturbation were set to $\rho_{0}=0.1$
and $\delta\rho_{0}=10^{-6}$, respectively. The contact value of the pair
correlation function $\chi$ is evaluated according to the Standard Enskog
theory using $\chi_{\text{\tiny SET}}$ given in Eq. (3).
The values of the density perturbation amplitude $\widetilde{\delta\rho}$ are
stored at intervals $T=100\Delta t$ using the following procedure:
$\widetilde{\delta\rho}(t_{s})=\frac{2}{N_{x}}\sum_{i=1}^{N_{x}}\rho(x_{i},t_{s})\cos(kx_{i})$
(57)
where $t_{s}=s\times T$.
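A minimal sketch of this projection (illustrative grid; cell-centred node placement is an assumption): for $\rho(x)=\rho_{0}[1+A\cos(kx)]$ the sum in Eq. (57) returns the absolute perturbation amplitude $\rho_{0}A$, since $\sum_{i}\cos(kx_{i})=0$ and $\sum_{i}\cos^{2}(kx_{i})=N_{x}/2$ on a uniform grid spanning one period.

```python
import numpy as np

Nx, L = 100, 1.0                      # grid size used in the text
k = 2.0 * np.pi / L
x = (np.arange(Nx) + 0.5) * L / Nx    # cell-centred nodes (an assumption)

rho0, A = 0.1, 1e-6                   # background density and relative amplitude
rho = rho0 * (1.0 + A * np.cos(k * x))

# Eq. (57): projection of the density field onto cos(kx)
amp = (2.0 / Nx) * np.sum(rho * np.cos(k * x))
assert np.isclose(amp, rho0 * A, rtol=1e-6)
```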
#### IV.1.4 Simulation results
Fig. 1 shows the time evolution of the amplitude $\delta\rho/\delta\rho_{0}$
obtained using our numerical method, compared with the analytic prediction for
the parameters $\alpha_{a}$, $\alpha_{t}$ (Eqs. (IV.1.2)) and $c_{s}$ (Eq.
(48)). Very good agreement can be observed between the numerical and analytic
results.
In order to assess the viability of the Simplified Enskog operator, we perform
a series of simulations over a couple of orders of magnitude of the reduced
density $\eta$. The simulation results are fitted using the analytic solution
with the damping coefficients as free parameters and the resulting values are
compared to the analytic prediction. The values of the parameters $c_{s}$,
$\alpha_{a}$ and $\alpha_{t}$, given by Eqs. (48) and (IV.1.2), were obtained
using the fitting function given in Eq. (56) and the non-linear least-squares
(NLLS) Marquardt-Levenberg fitting algorithm. Fig. 2 shows the fitted values
of the sound speed $c_{s}$ compared to the analytic prediction given by Eq.
(48). Excellent agreement is observed throughout the whole span of the reduced
density $\eta$.
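The fitting step can be sketched as follows (Python with SciPy's `curve_fit`, which uses the Marquardt-Levenberg algorithm by default for unbounded problems; all numerical values are illustrative, and synthetic data generated from Eq. (56), with decaying modes $e^{-\alpha t}$ as in Eq. (50), stand in for the simulation output):

```python
import numpy as np
from scipy.optimize import curve_fit

k, dPdrho = 2.0 * np.pi, 1.0  # illustrative wave number and dP/drho

def model(t, alpha_a, alpha_t, cs):
    """Normalized density amplitude delta_rho(t)/delta_rho_0, Eq. (56)."""
    ratio = dPdrho / cs**2
    return (np.exp(-alpha_t * t)
            + (np.exp(-alpha_a * t) * np.cos(k * cs * t)
               - np.exp(-alpha_t * t)) * ratio)

p_true = (0.046, 0.059, np.sqrt(5.0 / 3.0))   # alpha_a, alpha_t, c_s
t = np.linspace(0.0, 5.0, 2000)
data = model(t, *p_true)                      # clean synthetic "measurement"

# Recover the parameters from an initial guess close to the truth
popt, _ = curve_fit(model, t, data, p0=(0.05, 0.05, 1.29))
assert np.allclose(popt, p_true, rtol=1e-3)
```

With noisy data the quality of the recovered damping coefficients degrades first, since they enter only through slowly varying envelopes.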
On the other hand, in the case of the damping coefficients $\alpha_{a}$ and
$\alpha_{t}$ (Fig. 3) one can observe very good agreement when the molecular
diameter is small enough, i.e. for small values of the reduced density $\eta$.
However, the values of these coefficients diverge from the analytic prediction
as the reduced density increases: the thermal mode $\alpha_{t}$ is
underestimated starting around $\eta=0.1$ ($E_{l}=1.10576$), while the
acoustic mode $\alpha_{a}$ departs from the analytic
prediction at around $\eta=0.3$ ($E_{l}=6.3083$). Furthermore, we added in
Fig. 3 the analytic values of the damping coefficients in the case of the
dilute gas, evaluated according to Eq. (IV.1.2) with $P=P_{i}$ and the shear
viscosity given by Eq. (18b) at the corresponding $\eta$. As expected, the
curves converge at small values of $\eta$, as the Enskog collision operator
reduces to the Boltzmann one, and at around $\eta=0.01$ the first finite size
effects start to appear. This means that the approximation used for the Enskog
collision operator works very well for moderately dense gases, but should be
applied with care at large values of the reduced density. In the inset of Fig.
3 we plot the same values but on a linear scale. At the highest value of the
reduced density considered ($\eta\approx 0.49$) one can observe that the
relative error with respect to the analytic prediction in the case of the
acoustic mode $\alpha_{a}$ goes up to around $35\%$.
### IV.2 Shock wave propagation
Figure 3: The dependence of both the acoustic $\alpha_{a}$ and thermal
$\alpha_{t}$ modes with respect to the reduced density $\eta$. The points
denote the numerical values obtained using the present model and are fitted
using Eq. (56) with $\alpha_{a}$, $\alpha_{t}$ and $c_{s}$ as free parameters.
The analytic predictions in Eq. (IV.1.2) for the modes $\alpha_{a}$ and
$\alpha_{t}$ are plotted as solid lines, while their corresponding dilute gas
limit is shown as dashed lines. The values of $\alpha_{a}$ and $\alpha_{t}$ in
the dilute gas limit are evaluated at a viscosity value given by Eq. (18b) for
the corresponding $\eta$. Inset: the same values but on a linear scale. One
can observe that the relative error in the case of the acoustic mode
$\alpha_{a}$ goes up to around $35\%$ at the highest value of the reduced
density considered ($\eta\approx 0.49$).
(a) Reduced density
(b) Velocity
(c) Temperature
Figure 4: Shock wave propagation. (a) Density, (b) velocity and (c)
temperature profiles for constant reduced density $\eta_{i}=0.05$
($E_{l}=0.4825$) but with various values of the molecular diameter (implicitly
various values of the relaxation time $\tau$) obtained using the LB model
(solid lines) and the particle method PM (points). The dashed line represents
the inviscid limit, while in the case of (a) the thin dashed line also shows
the initial condition. An excellent agreement can be observed for all flow
regimes.
(a) Reduced density $\eta/\eta_{0}$
(b) Velocity $u$
(c) Temperature $T$
Figure 5: Shock wave propagation. (a) Density, (b) velocity and (c)
temperature profiles for constant molecular diameter $\sigma=0.01$ but with
various values of the reduced density $\eta\in\\{0.05,~{}0.15,~{}0.25\\}$
($E_{l}\in\\{0.4825,~{}1.917,~{}4.3998\\}$) obtained using the LB model (solid
lines) and the particle method PM (points). The dashed line represents the
inviscid limit, while in the case of (a) the thin dashed line also shows the
initial condition. Excellent agreement is observed for all values of the
reduced density.
#### IV.2.1 Problem statement: 1D Sod shock tube
The 1D Sod shock tube problem was proposed by G. A. Sod in 1978 Sod (1978).
Consider a membrane located at $x=x_{0}$ that separates two semi-infinite
domains. The fluid properties are homogeneous in each domain, while the
velocity is zero everywhere. At the initial time, the fluid properties are:
$\begin{pmatrix}\eta_{L}\\\ T_{L}\\\
u_{L}\end{pmatrix}=\begin{pmatrix}\eta_{i}\\\ 1.0\\\
0.0\end{pmatrix},\quad\begin{pmatrix}\eta_{R}\\\ T_{R}\\\
u_{R}\end{pmatrix}=\begin{pmatrix}\eta_{i}/8\\\ 1.0\\\ 0.0\end{pmatrix}$ (58)
where $\eta_{i}$ is the initial value of the reduced density in the left
domain.
#### IV.2.2 Inviscid limit
We describe here the standard approach to the solution in the inviscid regime
found in many textbooks Faber (1995); Kundu, Cohen, and Dowling (2015); White
(2016) and adapt it to the case of the dense gas.
Starting from the Euler equations:
$\displaystyle\frac{D\rho}{Dt}+\rho\nabla\cdot\bm{u}$ $\displaystyle=0$ (59a)
$\displaystyle\rho\frac{D\bm{u}}{Dt}+\nabla P$ $\displaystyle=0$ (59b)
$\displaystyle\rho\frac{De}{Dt}+P\nabla\cdot\bm{u}$ $\displaystyle=0$ (59c)
one can introduce the similarity variable:
$\xi=\frac{x-x_{0}}{t}.$ (60)
In this case the Eqs. (59) reduce to:
$\displaystyle\partial_{\xi}u-\frac{\xi-u}{\rho}\partial_{\xi}\rho$
$\displaystyle=0$ (61a)
$\displaystyle\partial_{\xi}P-(\xi-u)^{2}\partial_{\xi}\rho$ $\displaystyle=0$
(61b)
By replacing the above equations in Eq. (59c) and assuming that
$\partial_{\xi}\rho\neq 0$, the equations are satisfied either when $u=\xi$,
corresponding to the contact discontinuity, or when:
$u=\xi\pm c_{s}$ (62)
The ($+$) solution refers to the rarefaction head, travelling to the left,
while the ($-$) solution is the rarefaction tail. Since at the head of the
rarefaction wave $u=u_{L}=0$, the velocity of the head is constant and is
given by:
$\xi_{r}=-c_{s}$ (63)
while the tail of the rarefaction wave travels with the constant value on the
plateau $u=u_{c}$:
$\xi_{c}=u_{c}-c_{s}$ (64)
Replacing Eq. (62) in Eqs. (61), one obtains the system of equations for the
rarefaction wave:
$\displaystyle
1+\frac{1}{2c_{s}}\left(\partial_{\rho}c_{s}^{2}\partial_{\xi}\rho+\partial_{P}c_{s}^{2}\partial_{\xi}P\right)$
$\displaystyle=-c_{s}\partial_{\xi}\ln\rho$ (65a)
$\displaystyle\partial_{\xi}P$ $\displaystyle=c_{s}^{2}\partial_{\xi}\rho$
(65b)
where the sound speed in Eq. (48) is written in terms of $\rho$ and $P$ as:
$c_{s}^{2}(\rho,P)=\frac{P}{\rho}+\frac{2P(1+b\rho\chi)}{3\rho}+\frac{bP(\chi+\rho\partial_{\rho}\chi)}{1+b\rho\chi}$
(66)
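As a quick check (with hypothetical argument values), Eq. (66) reduces to the monatomic ideal-gas result $c_{s}^{2}=\gamma P/\rho$ with $\gamma=5/3$ in the dilute limit $b\to 0$:

```python
import math

def cs2_dense(rho, P, b, chi, dchi_drho):
    """Sound speed squared of Eq. (66) for the dense hard-sphere gas."""
    brc = b * rho * chi
    return (P / rho
            + 2.0 * P * (1.0 + brc) / (3.0 * rho)
            + b * P * (chi + rho * dchi_drho) / (1.0 + brc))

rho, P = 0.1, 0.1  # illustrative values
# b = 0: P/rho + 2P/(3 rho) = (5/3) P/rho, the monatomic ideal-gas value
assert math.isclose(cs2_dense(rho, P, 0.0, 1.0, 0.0), (5.0 / 3.0) * P / rho)
```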
This system of equations can be solved numerically in conjunction with the
Rankine-Hugoniot relations for the discontinuity (i.e. shock front) travelling
with velocity $\xi_{s}$, given by:
$\displaystyle\rho_{2}(u_{c}-\xi_{s})$ $\displaystyle=-\xi_{s}\rho_{R}$ (67a)
$\displaystyle\rho_{2}u_{c}(u_{c}-\xi_{s})+P_{c}$ $\displaystyle=P_{R}$ (67b)
$\displaystyle(e_{c}+\frac{1}{2}\rho_{2}u_{c}^{2})(u_{c}-\xi_{s})+u_{c}P_{c}$
$\displaystyle=e_{R}\xi_{s}$ (67c)
where the following notations have been introduced:
$\rho_{1}=\rho(\xi_{c}),\,\rho_{2}=\rho(\xi_{s}),\,e_{c}=e(\rho_{c},T_{c}),\,e_{R}=e(\rho_{R},T_{R})\\\
P_{c}=P(\rho_{1},T_{1})=P(\rho_{2},T_{2}),\,P_{R}=P(\rho_{R},T_{R})$ (68)
where the subscripts $1$ and $2$ refer to the left and right sides of the
contact discontinuity.
The solution is obtained using the high-precision numerical solver included in
the software package Mathematica®.
#### IV.2.3 Computational setup
The simulations are performed on a system of length $L=80$ and temperature
$T=1$. The contact value of pair correlation function $\chi$ is evaluated
according to the revised Enskog theory using $\chi_{\text{\tiny RET-FM}}$
given in Eq. (5a).
##### Lattice Boltzmann
The number of nodes varies depending on the molecular diameter $\sigma$, from
$N_{x}=8\times 10^{3}$ at $\sigma=10^{-3}$ to $N_{x}=160$ at $\sigma=1$. The
large number of nodes at small $\sigma$ is made equal to the number of
computational cells in the particle method and it offers sufficient resolution
to reveal the features of the shock wave. The quadrature order is set to
$Q_{x}=8$ for $\sigma<0.1$, $Q_{x}=20$ for $\sigma=0.1$, while for $\sigma=1$
a quadrature of $Q_{x}=200$ was necessary since the flow is close to the
ballistic regime. The time step was set at $\Delta t=10^{-3}$.
##### Particle method
The results for the particle method are obtained by averaging over 10 runs
comprised of $N_{p}=1.6\times 10^{7}$ particles per run, in a system of
$8\times 10^{3}$ computational cells. Also in this case, the time step was set
to $\Delta t=10^{-3}$.
#### IV.2.4 Numerical results
In this subsection, we compare the results obtained using the Lattice
Boltzmann implementation versus the results obtained using the particle method
presented in Sec. III.
##### Shock profiles at various relaxation times
(a) Reduced density $\eta/\eta_{0}$
(b) Velocity $u$
(c) Temperature $T$
Figure 6: Shock wave propagation: structure at the initial time. (a) Density,
(b) velocity and (c) temperature profiles for molecular diameter $\sigma=1$ at
reduced density $\eta_{i}=\\{0.05,0.25\\}$ ($E_{l}\in\\{0.4825,4.3998\\}$,
respectively) obtained using the LB model (solid lines) with quadrature
$Q_{x}=200$ and the PM method (points), at two time instances
$t\in\\{0.2,0.5\\}$. In the case of (a) the thin dashed line shows the initial
condition.
At first, we will consider the initial conditions listed in Eq. (58). Fig. 4
presents the results for 4 values of the molecular diameter
$\sigma=\\{10^{-3},10^{-2},10^{-1},10^{0}\\}$, while keeping the reduced
density $\eta_{i}=0.05$ constant, resulting in 4 different relaxation times
$\tau$, in a system of length $L=80$. This relatively large size of the system
is required due to the high computational costs associated with the particle
method at small values of the molecular diameter $\sigma$. The profiles of
reduced density $\eta$, velocity $u$ and temperature $T$ are presented
alongside the inviscid limit. Very good agreement can be observed for all flow
regimes, from hydrodynamic to the near ballistic regime. The LB results are
plotted using solid lines, the particle method results are represented by
solid circles and the dashed line represents the inviscid limit obtained by
numerically solving the equations in Sec. IV.2.2. Please refer to Sec. IV.2.4
for further results close to the inviscid regime obtained at $\sigma=10^{-6}$
and $\eta_{i}=0.05$ ($E_{l}=0.4825$), and Sec. IV.2.4 for details about the
choice of quadrature at the near ballistic regime ($\sigma=1$).
Next, we fixed the molecular diameter at $\sigma=0.01$ and varied the reduced
density $\eta$. Due to the high computational demand of the PM, scaling with
the particle number density, we have chosen the above value of the molecular
diameter since it is small enough to be compared to the inviscid limit. The
set of reduced densities on the left-hand side is
$\eta_{i}=\\{0.05,0.15,0.25\\}$ ($E_{l}\in\\{0.4825,~{}1.917,~{}4.3998\\}$).
Very good agreement between the LB and PM results is observed for all values
of the initial reduced density $\eta$, as well as for each considered
macroscopic quantity, namely the reduced density $\eta$, the velocity $u$ and
the temperature $T$.
In terms of computational time, it is expected that the LB method is much
faster than the PM. This is expressed quantitatively in Table 1, where the
running times for each method, namely $t_{\text{\tiny LB}}$ and
$t_{\text{\tiny PM}}$, are evaluated using a single core of an Intel® Xeon®
Gold 6330 CPU. The time ratio $t_{\text{\tiny PM}}/t_{\text{\tiny LB}}$ varies
from $10^{4}$ at $\sigma=0.001$ to $120$ at $\sigma=1$. As can be seen in
the table, the running times for the LB increase with $\sigma$ due to the
larger velocity set needed, while for the PM the number of collisions scales
with the inverse of the molecular diameter ($N_{c}\approx 1/\sigma$). As
expected, the ratio $t_{\text{\tiny PM}}/t_{\text{\tiny LB}}$ increases for
smaller relaxation time $\tau$ (at constant reduced density $\eta$ the
relaxation time is proportional to the molecular diameter $\sigma$). The
listed times for the PM method are for a single run; a series of 10 runs was
executed to obtain the results presented in Fig. 4.
| $\sigma$ | $Q_{x}$ (LB) | $N_{x}$ (LB) | $t_{\text{\tiny LB}}$ | $t_{\text{\tiny PM}}$ | $t_{\text{\tiny PM}}/t_{\text{\tiny LB}}$ |
|---|---|---|---|---|---|
| 0.001 | 8 | 1600 | 62 s | 186 h | $\approx 1.1\times 10^{4}$ |
| 0.01 | 8 | 800 | 32 s | 23 h | $\approx 2.5\times 10^{3}$ |
| 0.1 | 20 | 640 | 71 s | 7.25 h | $\approx 370$ |
| 1 | 200 | 160 | 176 s | 5.8 h | $\approx 120$ |
Table 1: Computational time comparison for the simulations presented in Fig.
4. As expected, the ratio $t_{\text{\tiny PM}}/t_{\text{\tiny LB}}$ increases
for smaller relaxation time $\tau$, since at constant reduced density $\eta$
the relaxation time is proportional to the molecular diameter $\sigma$ (Eq.
(23)).
##### Shock structure at the initial times
At first glance, the shock profiles presented in the above section look
qualitatively similar to the shock profiles for dilute gases, comprised of a
rarefaction wave, the two plateaus separated by the contact discontinuity and
the shock front. However, in their initial stage (i.e. close to the ballistic
regime, due to the self-similarity of the shock), the dense gas shock wave
deviates from the shape for dilute gases, at length scales comparable to the
molecular diameter $\sigma$. More precisely, the discrepancies become
negligible at the scales used in Sec. IV.2.4, as the system length is much
larger than the molecular diameter. Here we present the results for the shock
profiles at $t=\\{0.2,0.5\\}$ for $\sigma=1$ and for two values of the reduced
density $\eta_{i}\in\\{0.05,0.25\\}$ ($E_{l}\in\\{0.4825,4.3998\\}$,
respectively). To obtain these results, we employed the quadrature order
$Q_{x}=200$ and $N_{x}=400$ nodes at $\eta_{i}=0.05$ and $N_{x}=200$ nodes at
$\eta_{i}=0.25$, in a system of length $L=10$.
At first, the density develops a quasi-plateau that is dissipated relatively
fast, in contrast with the ballistic results, due to the nonlocal
interactions. At low reduced density $\eta$, the LB model reproduces the PM
results quantitatively for all macroscopic quantities, while at large $\eta$
the features of the shock are recovered only qualitatively, with discrepancies
observed for all macroscopic quantities, especially the temperature. This
further indicates that the approximation used for the Enskog collision
integral gives good accuracy up to moderate values of the reduced density
$\eta$.
##### Inviscid regime
Figure 7: Shock wave propagation. (a) Reduced density $\eta/\eta_{0}$, (b)
velocity $u$ and (c) temperature $T$ profiles for molecular diameter
$\sigma=10^{-6}$ at reduced density $\eta_{i}=0.05$ ($E_{l}=0.4825$) obtained
using the LB model (solid line) and compared with the inviscid solution
(dashed line). In panel (a) the thin dashed line also shows the initial
condition. Perfect overlap can be observed for all macroscopic quantities.
The results for the near inviscid regime were obtained using the proposed LB
model in a system of length $L=80$. The relaxation scaling factor was set to
$\widetilde{\tau}=10^{-6}$ and the reduced density to $\eta_{i}=0.05$
($E_{l}=0.4825$), equivalent to a molecular diameter of $\sigma=10^{-6}$, and
$\chi_{\text{\tiny SET}}$ was used. For better resolution at the sharp
interfaces, $N_{x}=8\times 10^{4}$ nodes were used ($\Delta x=10^{-3}$), with
a time step of $\Delta t=10^{-6}$. The results are plotted in Fig. 7, where
one can observe a perfect overlap between the analytic solution and the LB
results.
##### Near ballistic regime
Figure 8: Shock wave propagation. (a) Reduced density $\eta/\eta_{0}$, (b)
velocity $u$ and (c) temperature $T$ profiles for molecular diameter
$\sigma=1$ at reduced density $\eta_{i}=0.05$ ($E_{l}=0.4825$) obtained using
the LB model (solid lines) with quadrature $Q_{x}=\\{8,40,200\\}$. In the near
ballistic regime, one needs to employ a large velocity set in order to smooth
out the profiles.
As the relaxation time grows, a larger momentum space is needed in order to
capture the collisionless behaviour. A small number of velocities would render
a staircase solution, since collisions play a very small role in the particle
evolution; a large number of momentum points must therefore be employed in
order to obtain smooth profiles of the macroscopic quantities. In all
simulations, the molecular diameter is set to $\sigma=1$ at reduced density
$\eta_{i}=0.05$ ($E_{l}=0.4825$), the number of nodes is $N_{x}=160$ and the
time step is $\Delta t=10^{-3}$. In Fig. 8 we present the profiles of reduced
density, velocity and temperature at three values of the quadrature order
$Q_{x}\in\\{8,40,200\\}$, chosen with a factor of 5 between them in order to
track the improvement of the profiles. One can observe that at $Q_{x}=200$ the
profiles are smooth enough and agree very well with the PM results presented
in Fig. 4.
## V Conclusions
In this work, the propagation of longitudinal waves, as well as of shock
waves, in dense gases is simulated in order to validate the proposed finite-
difference lattice Boltzmann model employing the simplified Enskog collision
integral. In this model, the Enskog collision integral is approximated using a
Taylor expansion, retaining the first-order gradients. The simulation results
for the longitudinal waves were compared to the analytic solution for various
values of the reduced density $\eta$. The simulation results for shock waves
were compared to the results obtained using a particle method for the solution
of the Enskog equation.
The sound wave propagation was used to check the applicability domain of the
simplified Enskog collision operator with respect to the reduced density
$\eta$. The sound speed values are accurately recovered in the LB simulations,
while the damping coefficients show deviations from the analytic prediction as
the reduced density $\eta$ is increased. We observed that the discrepancies
appear around $\eta=0.1$ ($E_{l}=1.10576$) and become significant at
$\eta=0.3$ ($E_{l}=6.3083$). Beyond these values, one can still obtain results
with reasonable accuracy using the simplified Enskog collision operator.
Higher-order terms might be needed in order to extend the applicability of the
present model.
Shock wave propagation is employed to test the capabilities of the numerical
schemes when sharp variations in macroscopic quantities are present. The
results are compared with a particle method that solves the Enskog collision
integral using a Monte-Carlo method. The simulations were conducted for
various values of the relaxation time $\tau$, as well as various values of the
reduced density. For large systems and small values of the relaxation time
$\tau$ (i.e. small molecular diameters with respect to the system extension)
the LB results overlap very well with the PM results over a good range of
reduced density values $\eta\in\\{0.05,0.15,0.25\\}$. When looking at the
initial stages of the shock wave propagation, at scales comparable to the
molecular diameter, one can observe some features that are not present in the
dilute gas regime. These features are well captured by the LB model at small
values of the reduced density ($\eta=0.05$), while at large values
($\eta=0.25$) the discrepancies are significant. We also showed that the
scheme recovers the inviscid regime, overlapping perfectly with the analytic
solution. The agreement between the two methods is remarkably good, given the
huge difference in computational time between them ($2$ to $4$ orders of
magnitude).
We conclude that this model is able to deal with moderately dense gases.
Moreover, we determined the applicability range of the simplified Enskog
collision operator and challenged the proposed model in tackling flows with
sharp gradients in the macroscopic quantities. In the future, we plan to also
consider gas-surface interactions, as well as to introduce attractive forces
between molecules, in order to tackle bounded flows and multiphase flows,
respectively.
###### Acknowledgements.
The authors thank V.E. Ambrus and V. Sofonea for useful discussions regarding
the present manuscript. This work was supported through a grant from the
Ministry of Research, Innovation and Digitization, CNCS - UEFISCDI, project
number PN-III-P1-1.1-PD-2021-0216, within PNCDI III.
## Data Availability Statement
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## Author declarations
### V.1 Conflict of Interest
The authors have no conflicts to disclose.
## Appendix A Numerical schemes for the LB implementation
### A.1 Third-order TVD Runge-Kutta method
In order to implement the time-stepping algorithm, it is convenient to cast
the Boltzmann equation (28) in the following form:
$\partial_{t}f_{k}=L[f_{k}],\qquad L[f_{k}]=-\frac{p_{k}}{m}\cdot\nabla
f_{k}-\frac{1}{\tau}[f_{k}-f^{S}_{k}]+J^{1}_{k}.$ (69)
The third-order total variation diminishing (TVD) Runge-Kutta integrator gives
the following three-step algorithm for computing the values of $f_{k}$ at time
$t+\delta t$Shu and Osher (1988, 1998); Rezzolla and Zanotti (2013):
$\displaystyle f_{k}^{(1)}(t)=$ $\displaystyle f_{k}(t)+\delta
t\,L[f_{k}(t)],$ $\displaystyle f_{k}^{(2)}(t)=$
$\displaystyle\frac{3}{4}f_{k}(t)+\frac{1}{4}f_{k}^{(1)}(t)+\frac{1}{4}\delta
t\,L[f_{k}^{(1)}(t)],$ $\displaystyle f_{k}(t+\delta t)=$
$\displaystyle\frac{1}{3}f_{k}(t)+\frac{2}{3}f_{k}^{(2)}(t)+\frac{2}{3}\delta
t\,L[f_{k}^{(2)}(t)].$ (70)
The Butcher tableau Butcher (2008) corresponding to this scheme is given in
Table 2.
Table 2: Butcher tableau associated with the third-order Runge-Kutta time-stepping procedure described in Eq. (70).
$0$ | | |
---|---|---|---
$1$ | $1$ | |
$1/2$ | $1/4$ | $1/4$ |
 | $1/6$ | $1/6$ | $2/3$
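As an illustration, the three-stage update of Eq. (70) can be sketched in a few lines of Python (a minimal sketch, not the code used for the simulations; the callable `L` is a generic stand-in for the right-hand side of Eq. (69)):

```python
import numpy as np

def tvd_rk3_step(f, L, dt):
    """Advance f by one step of the third-order TVD Runge-Kutta
    scheme of Eq. (70); L is a callable evaluating Eq. (69)."""
    f1 = f + dt * L(f)                               # first stage
    f2 = 0.75 * f + 0.25 * (f1 + dt * L(f1))         # second stage
    return f / 3.0 + 2.0 / 3.0 * (f2 + dt * L(f2))   # final combination

# Sanity check on the linear relaxation problem df/dt = -f,
# whose exact solution at t = 1 is exp(-1) ~ 0.3679:
f, dt = np.array([1.0]), 0.1
for _ in range(10):
    f = tvd_rk3_step(f, lambda g: -g, dt)
# f[0] approximates exp(-1) with O(dt^3) accuracy per step
```

Each stage is a convex combination of forward-Euler substeps, which is what gives the scheme its total-variation-diminishing property.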
### A.2 WENO-5 advection scheme
The advection term appearing in Eq. (69) above, namely $p_{k}\cdot\nabla
f_{k}/m$, is computed using the Weighted Essentially Non-Oscillatory scheme of
order $5$ (WENO-5) along each coordinate Gan _et al._ (2011); Jiang and Shu
(1996). In the following we describe the one-dimensional case. Assuming that
the flow domain is discretized using $1\leq i\leq N$ nodes on the $x$ axis,
the advection term becomes:
$\left(\frac{p_{k}}{m}\cdot\partial_{x}f_{k}\right)_{k;i}=\frac{\mathcal{F}_{k;i+1/2}-\mathcal{F}_{k;i-1/2}}{\delta
s}$ (71)
where $\mathcal{F}_{k;i+1/2}$ represents the flux of $f$ advected with
velocity $p_{k}/m$ through the interface between the cells centered on
$\bm{x}_{i}$ and $\bm{x}_{i+1}$. The construction of these fluxes is
summarized below, under the assumption of a positive advection velocity
$p_{k}/m>0$. In this case, the flux $\mathcal{F}_{k;i+1/2}$ can be computed
using the following expression Gan _et al._ (2011):
$\mathcal{F}_{i+1/2}=\overline{\omega}_{1}\mathcal{F}^{1}_{i+1/2}+\overline{\omega}_{2}\mathcal{F}^{2}_{i+1/2}+\overline{\omega}_{3}\mathcal{F}^{3}_{i+1/2},$
(72)
where, for brevity, the momentum index $k$ was omitted.
The interpolating functions $\mathcal{F}^{q}_{i+1/2}$ ($q=1,2,3$) are given
by:
$\displaystyle\mathcal{F}^{1}_{i+1/2}=$
$\displaystyle\frac{p_{k}}{m}\left(\frac{1}{3}f_{i-2}-\frac{7}{6}f_{i-1}+\frac{11}{6}f_{i}\right),$
$\displaystyle\mathcal{F}^{2}_{i+1/2}=$
$\displaystyle\frac{p_{k}}{m}\left(-\frac{1}{6}f_{i-1}+\frac{5}{6}f_{i}+\frac{1}{3}f_{i+1}\right),$
$\displaystyle\mathcal{F}^{3}_{i+1/2}=$
$\displaystyle\frac{p_{k}}{m}\left(\frac{1}{3}f_{i}+\frac{5}{6}f_{i+1}-\frac{1}{6}f_{i+2}\right).$
(73)
The weighting factors $\overline{\omega}_{q}$ appearing in Eq. (72) are given
by:
$\overline{\omega}_{q}=\frac{\widetilde{\omega}_{q}}{\widetilde{\omega}_{1}+\widetilde{\omega}_{2}+\widetilde{\omega}_{3}},\qquad\widetilde{\omega}_{q}=\frac{\delta_{q}}{\varphi^{2}_{q}}.$
(74)
The ideal weights $\delta_{q}$ are:
$\delta_{1}=\frac{1}{10},\qquad\delta_{2}=\frac{6}{10},\qquad\delta_{3}=\frac{3}{10},$
(75)
while the indicators of smoothness $\varphi_{q}$ can be computed as follows:
$\displaystyle\varphi_{1}=$
$\displaystyle\frac{13}{12}\left(f_{i-2}-2f_{i-1}+f_{i}\right)^{2}+\frac{1}{4}\left(f_{i-2}-4f_{i-1}+3f_{i}\right)^{2},$
$\displaystyle\varphi_{2}=$
$\displaystyle\frac{13}{12}\left(f_{i-1}-2f_{i}+f_{i+1}\right)^{2}+\frac{1}{4}\left(f_{i-1}-f_{i+1}\right)^{2},$
$\displaystyle\varphi_{3}=$
$\displaystyle\frac{13}{12}\left(f_{i}-2f_{i+1}+f_{i+2}\right)^{2}+\frac{1}{4}\left(3f_{i}-4f_{i+1}+f_{i+2}\right)^{2}.$
(76)
 | $\overline{\omega}_{1}$ | $\overline{\omega}_{2}$ | $\overline{\omega}_{3}$
---|---|---|---
$\varphi_{1}=\varphi_{2}=\varphi_{3}=0$ | $0.1$ | $0.6$ | $0.3$
$\varphi_{2}=\varphi_{3}=0$ | $0$ | $2/3$ | $1/3$
$\varphi_{3}=\varphi_{1}=0$ | $1/4$ | $0$ | $3/4$
$\varphi_{1}=\varphi_{2}=0$ | $1/7$ | $6/7$ | $0$
$\varphi_{1}=0$ | $1$ | $0$ | $0$
$\varphi_{2}=0$ | $0$ | $1$ | $0$
$\varphi_{3}=0$ | $0$ | $0$ | $1$
Table 3: The values of the weighting factors $\overline{\omega}_{q}$ (74) when
one, two or all three of the indicators of smoothness $\varphi_{q}$
($q=1,2,3$) have vanishing values.
The computation of the weighting factors $\overline{\omega}_{q}$ (74) involves
dividing the ideal weights $\delta_{q}$ (75) by the squared indicators of
smoothness $\varphi_{q}$ (76). To avoid division by $0$ when one, two or all
three of the indicators of smoothness vanish, we follow Refs. Ambruş and Blaga
(2018); Busuioc and Ambruş (2019) and compute the weighting factors
$\overline{\omega}_{q}$ directly using Table 3 in these limiting cases.
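For concreteness, the positive-velocity WENO-5 flux of Eqs. (72)–(76), including the limiting cases of Table 3, can be sketched as follows (an illustrative Python sketch assuming unit advection velocity $p_{k}/m=1$; not the implementation used in the paper):

```python
import numpy as np

def weno5_flux(fm2, fm1, f0, fp1, fp2):
    """WENO-5 interface flux F_{i+1/2} for unit positive advection
    velocity, following Eqs. (72)-(76); stencil f_{i-2}..f_{i+2}."""
    # candidate fluxes, Eq. (73)
    F1 = fm2 / 3.0 - 7.0 * fm1 / 6.0 + 11.0 * f0 / 6.0
    F2 = -fm1 / 6.0 + 5.0 * f0 / 6.0 + fp1 / 3.0
    F3 = f0 / 3.0 + 5.0 * fp1 / 6.0 - fp2 / 6.0
    # indicators of smoothness, Eq. (76)
    p1 = 13.0/12.0*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    p2 = 13.0/12.0*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    p3 = 13.0/12.0*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    delta = np.array([0.1, 0.6, 0.3])    # ideal weights, Eq. (75)
    phi = np.array([p1, p2, p3])
    if np.all(phi == 0.0):               # all indicators vanish: Table 3, row 1
        w = delta
    elif np.any(phi == 0.0):
        # remaining limiting cases of Table 3: vanishing indicators dominate
        w = np.where(phi == 0.0, delta, 0.0)
        w = w / w.sum()
    else:
        w = delta / phi**2               # Eqs. (74)
        w = w / w.sum()
    return w @ np.array([F1, F2, F3])

# On linear data all three candidate stencils agree, giving the
# 5th-order upwind value at the interface:
flux = weno5_flux(1.0, 2.0, 3.0, 4.0, 5.0)   # ~3.5
```

Note that the `np.where` branch reproduces exactly the rows of Table 3: e.g. for $\varphi_{2}=\varphi_{3}=0$ the surviving ideal weights $(0.6,0.3)$ renormalize to $(2/3,1/3)$.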
### A.3 Gradient central difference
For evaluating the gradients appearing in Eq. (32) we employ the $6$th order
central difference scheme Fornberg (1988):
$\partial_{x}Q(x)=\frac{1}{\Delta x}\left[-\frac{1}{60}Q(x-3\Delta
x)+\frac{3}{20}Q(x-2\Delta x)-\frac{3}{4}Q(x-\Delta
x)+\frac{3}{4}Q(x+\Delta x)-\frac{3}{20}Q(x+2\Delta
x)+\frac{1}{60}Q(x+3\Delta x)\right]$ (77)
where $Q\in\\{\ln\rho,u,\ln T\\}$.
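As a quick illustration (a Python sketch assuming a periodic domain; not the production code), the stencil of Eq. (77) can be checked against a function with a known derivative:

```python
import numpy as np

def d_dx_6th(Q, dx):
    """6th-order central difference of Eq. (77) on a periodic grid."""
    # coefficients for offsets k = -3..3
    c = np.array([-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60])
    dQ = np.zeros_like(Q)
    for k, ck in zip(range(-3, 4), c):
        dQ += ck * np.roll(Q, -k)    # np.roll(Q, -k)[i] == Q[i + k]
    return dQ / dx

# Check on Q(x) = sin(x), whose derivative is cos(x):
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(d_dx_6th(np.sin(x), dx) - np.cos(x)))
# err scales as O(dx^6), i.e. roughly 1e-8 at this resolution
```

Halving $\Delta x$ should reduce `err` by a factor of about $2^{6}=64$, confirming the order of the scheme.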
## References
* Ferziger and Kaper (1972) J. Ferziger and H. Kaper, _Mathematical Theory of Transport Processes in Gases._ (North-Holland Publishing Company, Amsterdam, London, 1972).
* Petersen and Hanson (2001) E. L. Petersen and R. K. Hanson, “Nonideal effects behind reflected shock waves in a high-pressure shock tube,” Shock Waves 10, 405–420 (2001).
* Holt _et al._ (2006) J. K. Holt, H. G. Park, Y. Wang, M. Stadermann, A. B. Artyukhin, C. P. Grigoropoulos, A. Noy, and O. Bakajin, “Fast mass transport through sub-2-nanometer carbon nanotubes,” Science 312, 1034–1037 (2006).
* Brenner, Hilgenfeldt, and Lohse (2002) M. P. Brenner, S. Hilgenfeldt, and D. Lohse, “Single-bubble sonoluminescence,” Rev. Mod. Phys. 74, 425–484 (2002).
* Wu _et al._ (2016) L. Wu, H. Liu, J. M. Reese, and Y. Zhang, “Non-equilibrium dynamics of dense gas under tight confinement,” Journal of Fluid Mechanics 794, 252–266 (2016).
* Sander, Pan, and Connell (2017) R. Sander, Z. Pan, and L. D. Connell, “Laboratory measurement of low permeability unconventional gas reservoir rocks: A review of experimental methods,” Journal of Natural Gas Science and Engineering 37, 248–279 (2017).
* Dahms and Oefelein (2015) R. N. Dahms and J. C. Oefelein, “Non-equilibrium gas–liquid interface dynamics in high-pressure liquid injection systems,” Proceedings of the Combustion Institute 35, 1587–1594 (2015).
* Chapman and Cowling (1970) S. Chapman and T. G. Cowling, _The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases._ (Cambridge University Press, 1970).
* Kremer (2010) G. M. Kremer, _An introduction to the Boltzmann equation and transport processes in gases_ (Springer-Verlag, Berlin Heidelberg, 2010).
* Frezzotti and Sgarra (1993) A. Frezzotti and C. Sgarra, “Numerical analysis of a shock-wave solution of the Enskog equation obtained via a Monte Carlo method,” J. Stat. Phys. 73, 193–207 (1993).
* Wu, Zhang, and Reese (2015) L. Wu, Y. Zhang, and J. M. Reese, “Fast spectral solution of the generalized Enskog equation for dense gases,” Journal of Computational Physics 303, 66–79 (2015).
* Sadr and Gorji (2017) M. Sadr and M. H. Gorji, “A continuous stochastic model for non-equilibrium dense gases,” Physics of Fluids 29, 122007 (2017).
* Sadr and Gorji (2019) M. Sadr and M. Gorji, “Treatment of long-range interactions arising in the Enskog–Vlasov description of dense fluids,” J. Comput. Phys. 378, 129–142 (2019).
* Bird (1976) G. A. Bird, _Molecular Gas Dynamics_ (Oxford Univ. Press, Oxford, England, UK, 1976).
* Alexander, Garcia, and Alder (1995) F. J. Alexander, A. L. Garcia, and B. J. Alder, “A consistent Boltzmann algorithm,” Phys. Rev. Lett. 74, 5212–5215 (1995).
* Montanero and Santos (1996) J. M. Montanero and A. Santos, “Monte Carlo simulation method for the Enskog equation,” Phys. Rev. E 54, 438–444 (1996).
* Frezzotti (1997) A. Frezzotti, “A particle scheme for the numerical solution of the Enskog equation,” Phys. Fluids 9, 1329–1335 (1997).
* Davis (1987) H. T. Davis, “Kinetic theory of inhomogeneous fluid: Tracer diffusion,” J. Chem. Phys. 86, 1474–1477 (1987).
* Din and Michaelides (1997) X.-D. Din and E. E. Michaelides, “Kinetic theory and molecular dynamics simulations of microscopic flows,” Physics of Fluids 9, 3915–3925 (1997).
* Nedea _et al._ (2006) S. Nedea, A. Frijns, A. van Steenhoven, A. Jansen, A. Markvoort, and P. Hilbers, “Density distribution for a dense hard-sphere gas in micro/nano-channels: Analytical and simulation results,” Journal of Computational Physics 219, 532–552 (2006).
* Frezzotti, Gibelli, and Lorenzani (2005) A. Frezzotti, L. Gibelli, and S. Lorenzani, “Mean field kinetic theory description of evaporation of a fluid into vacuum,” Phys. Fluids 17, 012102 (2005).
* Kon, Kobayashi, and Watanabe (2014) M. Kon, K. Kobayashi, and M. Watanabe, “Method of determining kinetic boundary conditions in net evaporation/condensation,” Phys. Fluids 26, 072003 (2014).
* Frezzotti, Barbante, and Gibelli (2019) A. Frezzotti, P. Barbante, and L. Gibelli, “Direct simulation Monte Carlo applications to liquid-vapor flows,” Phys. Fluids 31, 062103 (2019).
* Busuioc _et al._ (2020a) S. Busuioc, L. Gibelli, D. A. Lockerby, and J. E. Sprittles, “Velocity distribution function of spontaneously evaporating atoms,” Phys. Rev. Fluids 5, 103401 (2020a).
* Bruno and Frezzotti (2019) D. Bruno and A. Frezzotti, “Dense gas effects in the Rayleigh-Brillouin scattering spectra of SF6,” Chem. Phys. Lett. 731, 136595 (2019).
* Busuioc and Gibelli (2020) S. Busuioc and L. Gibelli, “Mean-field kinetic theory approach to Langmuir evaporation of polyatomic liquids,” Physics of Fluids 32, 093314 (2020).
* Kobayashi _et al._ (2017) K. Kobayashi, K. Sasaki, M. Kon, H. Fujii, and M. Watanabe, “Kinetic boundary conditions for vapor–gas binary mixture,” Microfluid. Nanofluid. 21, 53 (2017).
* Barbante, Frezzotti, and Gibelli (2015) P. Barbante, A. Frezzotti, and L. Gibelli, “A kinetic theory description of liquid menisci at the microscale,” Kinet. Relat. Mod. 8, 235–254 (2015).
* Luo (1998) L.-S. Luo, “Unified theory of lattice Boltzmann models for nonideal gases,” Phys. Rev. Lett. 81, 1618–1621 (1998).
* Luo (2000) L.-S. Luo, “Theory of the lattice Boltzmann method: Lattice Boltzmann models for nonideal gases,” Phys. Rev. E 62, 4982–4996 (2000).
* Melchionna and Marconi (2007) S. Melchionna and U. M. B. Marconi, “Lattice Boltzmann method for inhomogeneous fluids,” Europhysics Letters 81, 34001 (2007).
* He and Doolen (2002) X. He and G. Doolen, “Thermodynamic foundations of kinetic theory and lattice Boltzmann models for multiphase flows.” J. Stat. Phys. 107, 309–328 (2002).
* Wang _et al._ (2020) P. Wang, L. Wu, M. T. Ho, J. Li, Z.-H. Li, and Y. Zhang, “The kinetic Shakhov–Enskog model for non-equilibrium flow of dense gases,” Journal of Fluid Mechanics 883, A48 (2020).
* Chen _et al._ (2022) T. Chen, L. Wu, L. Wang, and S. Chen, “Rarefaction effects in head-on collision of two identical droplets,” ArXiv.2205.03604 (2022).
* Huang, Wu, and Adams (2021) R. Huang, H. Wu, and N. A. Adams, “Mesoscopic lattice Boltzmann modeling of the liquid-vapor phase transition,” Phys. Rev. Lett. 126, 244501 (2021).
* Zhang _et al._ (2020) Y.-D. Zhang, A.-G. Xu, J.-J. Qiu, H.-T. Wei, and Z.-H. Wei, “Kinetic modeling of multiphase flow based on simplified Enskog equation.” Front. Phys. 15, 62503 (2020).
* Gan _et al._ (2022) Y. Gan, A. Xu, H. Lai, W. Li, G. Sun, and S. Succi, “Discrete Boltzmann multi-scale modelling of non-equilibrium multiphase flows,” Journal of Fluid Mechanics 951, A8 (2022).
* Shan, Yuan, and Chen (2006) X. Shan, X.-F. Yuan, and H. Chen, “Kinetic theory representation of hydrodynamics: a way beyond the navier–stokes equation,” Journal of Fluid Mechanics 550, 413–441 (2006).
* Piaud _et al._ (2014) B. Piaud, S. Blanco, R. Fournier, V. E. Ambruş, and V. Sofonea, “Gauss quadratures – the keystone of lattice Boltzmann models,” International Journal of Modern Physics C 25, 1340016 (2014).
* Ambruş and Sofonea (2016a) V. Ambruş and V. Sofonea, “Lattice Boltzmann models based on half-range Gauss-Hermite quadratures,” J. Comput. Phys. 316, 760–788 (2016a).
* Ambruş and Sofonea (2016b) V. Ambruş and V. Sofonea, “Application of mixed quadrature lattice Boltzmann models for the simulation of Poiseuille flow at non-negligible values of the Knudsen number,” J. Comput. Science 17, 403–417 (2016b).
* Sofonea _et al._ (2018) V. Sofonea, T. Biciuşcă, S. Busuioc, V. E. Ambruş, G. Gonnella, and A. Lamura, “Corner-transport-upwind lattice Boltzmann model for bubble cavitation,” Phys. Rev. E 97, 023309 (2018).
* Ambruş, Sharipov, and Sofonea (2020) V. E. Ambruş, F. Sharipov, and V. Sofonea, “Comparison of the Shakhov and ellipsoidal models for the Boltzmann equation and DSMC for ab initio-based particle interactions,” Computers & Fluids 211, 104637 (2020).
* Busuioc _et al._ (2020b) S. Busuioc, V. E. Ambruş, T. Biciuşcă, and V. Sofonea, “Two-dimensional off-lattice Boltzmann model for van der Waals fluids with variable temperature,” Computers & Mathematics with Applications 79, 111–140 (2020b), mesoscopic Methods in Engineering and Science.
* He (1997) X. He, “Error analysis for the interpolation-supplemented lattice-Boltzmann equation scheme,” International Journal of Modern Physics C 08, 737–745 (1997).
* Chen (1998) H. Chen, “Volumetric formulation of the lattice Boltzmann method for fluid dynamics: Basic concept,” Phys. Rev. E 58, 3955–3963 (1998).
* Succi (2018) S. Succi, _The Lattice Boltzmann Equation: For Complex States of Flowing Matter_ (Oxford University Press, 2018).
* Enskog (1922) D. Enskog, “Kinetische theorie der wärmeleitung: Reibung und selbst-diffusion in gewissen verdichteten gasen und flüssigkeiten.” (1922).
* Carnahan and Starling (1969) N. F. Carnahan and K. E. Starling, “Equation of state for nonattracting rigid spheres,” J. Chem. Phys. 51, 635–636 (1969).
* Van Beijeren and Ernst (1973) H. Van Beijeren and M. Ernst, “The modified enskog equation,” Physica 68, 437–456 (1973).
* Fischer and Methfessel (1980) J. Fischer and M. Methfessel, “Born-Green-Yvon approach to the local densities of a fluid at interfaces,” Phys. Rev. A 22, 2836 (1980).
* Ambruş and Sofonea (2018) V. E. Ambruş and V. Sofonea, “Half-range lattice Boltzmann models for the simulation of Couette flow using the Shakhov collision term,” Phys. Rev. E 98, 063311 (2018).
* Shakhov (1968a) E. Shakhov, “Approximate kinetic equations in rarefied gas theory,” Fluid Dynamics 3, 95 – 96 (1968a).
* Shakhov (1968b) E. Shakhov, “Approximate kinetic equations in rarefied gas theory,” Fluid Dynamics 3, 112 – 115 (1968b).
* Graur and Polikarpov (2009) I. Graur and A. Polikarpov, “Comparison of different kinetic models for the heat transfer problem,” Heat Mass Transfer 46, 237–244 (2009).
* Li and Zhang (2004) Z.-H. Li and H.-X. Zhang, “Study on gas kinetic unified algorithm for flows from rarefied transition to continuum,” J. of Comput. Phys. 193, 708–738 (2004).
* Meng _et al._ (2013) J. Meng, L. Wu, J. M. Reese, and Y. Zhang, “Assessment of the ellipsoidal-statistical Bhatnagar–Gross–Krook model for force-driven Poiseuille flows,” Journal of Computational Physics 251, 383–395 (2013).
* Ambrus and Sofonea (2019) V. E. Ambrus and V. Sofonea, “Quadrature-based lattice Boltzmann models for rarefied gas flow,” in _Flowing Matter_ , edited by F. Toschi and M. Sega (Springer International Publishing, Cham, 2019) pp. 271–299.
* Busuioc and Ambruş (2019) S. Busuioc and V. E. Ambruş, “Lattice Boltzmann models based on the vielbein formalism for the simulation of flows in curvilinear geometries,” Phys. Rev. E 99, 033304 (2019).
* Fede _et al._ (2015) P. Fede, V. Sofonea, R. Fournier, S. Blanco, O. Simonin, G. Lepoutère, and V. Ambruş, “Lattice Boltzmann model for predicting the deposition of inertial particles transported by a turbulent flow,” International Journal of Multiphase Flow 76, 187–197 (2015).
* Hildebrand (1987) F. B. Hildebrand, _Introduction to Numerical Analysis, 2nd edition_ (Dover Publications, 1987).
* Olver _et al._ (2010) F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, _NIST Handbook of Mathematical Functions_ (Cambridge University Press, New York, 2010).
* Faber (1995) T. E. Faber, _Fluid dynamics for physicists_ (Cambridge university press, 1995).
* Sharipov and Kalempa (2008) F. Sharipov and D. Kalempa, “Numerical modeling of the sound propagation through a rarefied gas in a semi-infinite space on the basis of linearized kinetic equation,” The Journal of the Acoustical Society of America 124, 1993–2001 (2008).
* Wang and Xu (2012) R. Wang and K. Xu, “The study of sound wave propagation in rarefied gases using unified gas-kinetic scheme,” Acta Mech Sin 28, 1022–1029 (2012).
* Sharipov (2015) F. Sharipov, _Rarefied gas dynamics: fundamentals for research and practice_ (John Wiley & Sons, 2015).
* Ambruş (2018) V. E. Ambruş, “Transport coefficients in ultrarelativistic kinetic theory,” Phys. Rev. C 97, 024914 (2018).
* Shan (2019) X. Shan, “Central-moment-based Galilean-invariant multiple-relaxation-time collision model,” Phys. Rev. E 100, 043308 (2019).
* Kundu, Cohen, and Dowling (2015) P. K. Kundu, I. M. Cohen, and D. R. Dowling, _Fluid mechanics_ (Academic press, 2015).
* White (2016) F. M. White, _Fluid Mechanics 8e In SI Units_ (McGraw-Hill Education (UK), 2016).
* Sod (1978) G. A. Sod, “A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws,” Journal of Computational Physics 27, 1–31 (1978).
* Wolfram Research, Inc. (2022) Wolfram Research, Inc., “Mathematica, Version 13.1,” Champaign, IL, 2022.
* Shu and Osher (1988) C.-W. Shu and S. Osher, “Efficient implementation of essentially non-oscillatory shock-capturing schemes,” Journal of Computational Physics 77, 439–471 (1988).
* Shu and Osher (1998) C.-W. Shu and S. Osher, “Total variation diminishing Runge-Kutta schemes,” Math. Comp. 67, 73–85 (1998).
* Rezzolla and Zanotti (2013) L. Rezzolla and O. Zanotti, _Relativistic Hydrodynamics_ (Oxford University Press, 2013).
* Butcher (2008) J. C. Butcher, _Numerical Methods for Ordinary Differential Equations, 2nd edition_ , Vol. 51 (John Wiley & Sons, Chichester, West Sussex, England., 2008).
* Gan _et al._ (2011) Y. Gan, A. Xu, G. Zhang, and Y. Li, “Lattice Boltzmann study on Kelvin-Helmholtz instability: Roles of velocity and density gradients,” Phys. Rev. E 83, 056704 (2011).
* Jiang and Shu (1996) G.-S. Jiang and C.-W. Shu, “Efficient implementation of weighted ENO schemes,” Journal of Computational Physics 126, 202–228 (1996).
* Ambruş and Blaga (2018) V. E. Ambruş and R. Blaga, “High-order quadrature-based lattice Boltzmann models for the flow of ultrarelativistic rarefied gases,” Phys. Rev. C 98, 035201 (2018).
* Fornberg (1988) B. Fornberg, “Generation of finite difference formulas on arbitrarily spaced grids,” Mathematics of computation 51, 699–706 (1988).
1 Hvar Observatory, Faculty of Geodesy, University of Zagreb, Kačićeva 26,
HR-10000 Zagreb, Croatia (e-mail: <EMAIL_ADDRESS>)
2 University of Applied Sciences and Arts Northwestern Switzerland,
Bahnhofstrasse 6, CH-5210 Windisch, Switzerland
3 Institute for Particle Physics and Astrophysics, ETH Zürich, CH-8093 Zürich,
Switzerland
4 Astronomical Institute of the Czech Academy of Sciences, Fričova 298,
CZ-25165 Ondřejov, Czech Republic
# Flares detected in ALMA single-dish images of the Sun
I. Skokić$^{1}$, A. O. Benz$^{2,3}$, R. Brajša$^{1}$, D. Sudar$^{1}$, F.
Matković$^{1}$, and M. Bárta$^{4}$
(Received -; accepted -)
###### Abstract
Context. The millimeter and submillimeter radiation of solar flares is poorly
understood. Without spatial resolution, it cannot be compared easily to flare
emissions in other wavelengths. The Atacama Large Millimeter-submillimeter
Array (ALMA) offers sufficient resolution for the first time. However, used as
an interferometer, its field of view is smaller than an active region and ALMA
cannot observe on demand when a flare occurs.
Aims. We use readily available large-scale single-dish ALMA observations of
solar millimeter flares and compare them to well-known features observed in
other wavelengths. The properties of these other flare emissions, correlating
in space and time, may then be used to interpret the millimeter brightenings
and vice versa. The aim is to obtain reliable associations, limited by the
time and space resolution of single-dish observations.
Methods. Ordinary interferometric ALMA observations require single-dish images
of the full Sun for calibration. We collected such observations at 3 mm and 1
mm and searched for millimeter brightenings during times given in a flare
catalog.
Results. All of the flares left a signature in millimeter waves. We found five
events with 9 or more images that can be used for comparison in time and
space. The millimeter brightenings are associated with a variety of flare
features in cool (H$\alpha$, 304 Å), intermediate (171 Å), and hot (94 Å)
lines. In several cases, the millimeter brightening peaked at the footpoint of
a hot flare loop. In other cases the peak of the millimeter brightening
coincided with the top or footpoint of an active H$\alpha$ filament. We found
correlations also with post-flare loops and tops of a hot loop. In some images
the millimeter radiation peaked at locations, where no feature in the selected
lines was found.
Conclusions. The wide field of view provided by the single-dish observations
allowed, for the first time, a complete overview of the flare activity in
millimeter waves. The associated phenomena often changed in type and location
during the flare. The variety of phenomena detected in these millimeter
observations may explain the sometimes bewildering behavior of millimeter
flare emissions observed previously without spatial resolution.
###### Key Words.:
Sun: chromosphere – Sun: flares – Sun: filaments – Sun: radio radiation
## 1 Introduction
Brightenings during solar flares have been reported over the full
electromagnetic spectrum from gamma-rays to decameter radio waves (Benz,
2017). They are caused by various drivers and mechanisms, but their
interpretation remains fragmentary. Little is known about the range from 100
GHz to 10 THz (mid infrared). Contrary to the flare emission below 100 GHz
(Bastian et al., 1998), the spectrum above sometimes increases with frequency,
raising the possibility of a ”THz-component” (Kaufmann et al., 2004). The
millimeter and sub-millimeter flare observations have been reviewed by Krucker
et al. (2013). The >100 GHz emission correlates best, but not completely, with
HXR and gamma-rays, and sometimes with chromospheric lines. The correlation is
often different in the pre-flare, impulsive, and even post-impulsive phases of
a flare and in different flares. In one case, where this was reliably
measured, the position of the 210 GHz emission was on one of the flare ribbons
(Lüthi et al., 2004). Krucker et al. (2013) list ten mechanisms that have been
proposed to interpret the THz-component. They remark that there may indeed be
more than one explanation.
Interferometric observations using the Berkeley-Illinois-Maryland Array (BIMA)
at 86 GHz found that the millimeter emission of a solar flare exhibits
impulsive and gradual phases, both often observed in the same flare (Kundu,
1996). High spatial resolution images of solar flares at millimeter
wavelengths obtained by BIMA showed that most of the gradual phase millimeter
flux came from the top of a flaring loop, with some contribution from the
footpoints (Silva et al., 1996a, b, 1997, 1998). The millimeter emission from
the gradual phase was likely due to thermal bremsstrahlung from the soft
X-ray-emitting hot plasma, while gyrosynchrotron radiation was suggested as
the main mechanism for the impulsive phase. Raulin et al. (1999) found
indications of two different electron populations in the impulsive phase, one
responsible for HXR/microwave emission, the other for millimeter emission.
Kundu et al. (2000) analyzed two solar flares simultaneously observed at 17,
34, and 86 GHz and reported evidence of bipolar (looplike) structures.
The major limit of the previous >100 GHz observations was spatial resolution,
impeding the combination with observations at other wavelengths. The Atacama
Large Millimeter-submillimeter Array (ALMA) has a great potential to mitigate
this instrumental gap (Wedemeyer et al., 2016; Bastian et al., 2018). Since
ALMA is not a solar dedicated instrument, however, sporadic flares are
difficult to observe.
For this reason, solar ALMA observations in the past have focused on
stationary phenomena such as radius, center-to-limb, and center-to-pole brightening measurements (Alissandrakis et al., 2017; Selhorst et al., 2019;
Sudar et al., 2019a, b; Menezes et al., 2021, 2022), identification and
analysis of various stationary structures in full disk (Brajša et al., 2018;
Skokić et al., 2020) and interferometric ALMA images (Iwai et al., 2017;
Bastian et al., 2017; Nindos et al., 2018; Jafarzadeh et al., 2019; Rodger et
al., 2019; Loukitcheva et al., 2019; Molnar et al., 2019; Shimojo et al.,
2020; Wedemeyer et al., 2020; Brajša et al., 2020, 2021).
Recent research extends ALMA results to stationary prominences/filaments
(Heinzel et al., 2022; da Silva Santos et al., 2022; Labrosse et al., 2022),
dynamical phenomena such as oscillations in the quiet Sun (Patsourakos et al.,
2020; Jafarzadeh et al., 2021; Chai et al., 2022) and transients (Nindos et
al., 2020; da Silva Santos et al., 2020; Eklund et al., 2020), and even to
submillimeter structures (Alissandrakis et al., 2022).
The only known flare-related ALMA observation to date is by Shimizu et al.
(2021), who observed an active region with ALMA in interferometric mode at 100
GHz. The field of view was 60″ and the resolution 5.″0 × 3.″9. They report a
microflare, which correlated in time with Si IV line emission ($\sim 80\,000$
K) and in space with the footpoint of an SXR loop in the outskirts of the
active region.
Here we report on the first solar flares observed by ALMA with complete
spatial coverage. The aim of this paper is to analyze the temporal and spatial
evolution of five flares observed by ALMA in single-dish mode. We compare
images and time profiles with data from other wavelengths to identify
counterparts and to provide clues for the mechanisms responsible for flare
emission at millimeter wavelengths.
## 2 Data and methodology
Table 1: ALMA observation details. In the period column, start times of the first and last ALMA observation of that period are given. The number of observations in the period is given in the column denoted by N.

Date | Period UT | Project code | Freq. (GHz) | N
---|---|---|---|---
2017-04-23 | 14:06 – 16:30 | 2016.1.01129.S | 230 | 9
2017-04-26 | 14:17 – 16:19 | 2016.1.00070.S | 107 | 11
2018-04-03 | 13:47 – 17:39 | 2017.1.00072.S | 95 | 23
2018-04-19 | 15:26 – 17:47 | 2017.1.01138.S | 95 | 14
2018-12-15 | 13:08 – 15:12 | 2018.1.01879.S | 95 | 10
Observations of the Sun with the ALMA interferometer require total power full-
disk images for absolute calibration (White et al., 2017; Shimojo et al.,
2017). These full-disk solar images are publicly available from the ALMA
Science Archive (https://almascience.eso.org). We limited the data set to ALMA
bands 3 (100 GHz, $\lambda$=3 mm) and 6 (240 GHz, $\lambda$=1.2 mm) because
other bands were either not public at the time of retrieval or only a small
number of images are available. The final data set consisted of 372 images
taken between 2016 Dec 21 and 2019 Apr 13. They amount to a total on-source
observing time of about 40 hours. The images were then converted to a
helioprojective coordinate system and rotated to the Sun’s north pole pointing
upward (Skokić & Brajša, 2019). We improved the alignment in all images by
limb fitting and double-checked on compact structures. Next, the images were
scaled in intensity so that the brightness temperature of the quiet Sun region
nearest to the solar center is equal to 7300 K (5900 K) in band 3 (6), as
suggested by White et al. (2017). Alissandrakis et al. (2020) reported that in
band 6 the value was most probably underestimated by $\sim$440 K, but we did
not include this correction in the present analysis. Finally, the center-to-
limb brightness variation was removed by the procedure described in Sudar et
al. (2019a).
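The quiet-Sun intensity scaling described above can be sketched as follows. This is a minimal illustration, not the actual pipeline code; the function name, the mask construction, and the use of the median as the quiet-Sun estimator are assumptions.

```python
import numpy as np

# Reference quiet-Sun brightness temperatures from White et al. (2017), per band.
QUIET_SUN_TB = {3: 7300.0, 6: 5900.0}  # K

def scale_to_quiet_sun(image, quiet_region, band):
    """Rescale a full-disk map so a quiet-Sun patch matches the reference T_b.

    image        -- 2D array of uncalibrated brightness values
    quiet_region -- boolean mask selecting the quiet-Sun area nearest disk center
    band         -- ALMA band number (3 or 6)
    """
    measured = np.nanmedian(image[quiet_region])
    return image * (QUIET_SUN_TB[band] / measured)

# Toy example: a flat 100x100 "disk" in arbitrary units
img = np.full((100, 100), 42.0)
mask = np.zeros_like(img, dtype=bool)
mask[45:55, 45:55] = True  # patch near the center
calibrated = scale_to_quiet_sun(img, mask, band=3)
print(calibrated[50, 50])  # 7300.0
```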
Depending on the band, it takes ALMA between 10 and 16 minutes to obtain one
full-disk image of the Sun. This is the time the antenna needs to scan the Sun
and perform additional calibration scans, which limits the maximum cadence. The
actual scanning of the Sun without calibration takes 301.4 s in band 3 and
581.4 s in band 6, thus defining the temporal uncertainty of the images. We
shifted the DATE-OBS time in the FITS headers of the ALMA images, which refers to the beginning of the scan, to the middle of the scan.
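Shifting the timestamp to mid-scan is a simple correction by half the scan duration quoted above; a minimal sketch (the function name and the DATE-OBS string format are assumptions):

```python
from datetime import datetime, timedelta

# Scan durations from the text: 301.4 s in band 3, 581.4 s in band 6.
SCAN_SECONDS = {3: 301.4, 6: 581.4}

def mid_scan_time(date_obs, band):
    """Shift a DATE-OBS timestamp (start of scan) to the middle of the scan."""
    start = datetime.strptime(date_obs, "%Y-%m-%dT%H:%M:%S.%f")
    return start + timedelta(seconds=SCAN_SECONDS[band] / 2.0)

t = mid_scan_time("2018-04-03T14:13:00.000", band=3)
print(t.isoformat())  # 2018-04-03T14:15:30.700000
```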
Table 2: Parameters of the observed flare events retrieved from the HEK database. The GOES class column denotes the GOES flare class, and the HPC column lists helioprojective coordinates of the flare in arcseconds at peak time.

Date | Start | Peak | End | GOES class | HPC x, y
---|---|---|---|---|---
2017-04-23 | 15:47 | 15:57 | 16:12 | B1.8 | 15, 470
2017-04-26 | 15:28 | 15:38 | 15:50 | B3.4 | 350, 300
2018-04-03 | 14:13 | 14:17 | 14:32 | B1.2 | -275, -50
2018-04-19 | 15:57 | 16:29 | 17:25 | - | 750, -50
2018-12-15 | 13:40 | 13:53 | 14:15 | B1.0 | -640, 215
Figure 1: Intensity profiles for the SOL2017-04-23 flare. Gray shaded areas
represent time intervals during which an ALMA image was obtained. Black curves
show intensities spatially averaged over the flare region, and red curves show
peak intensities within the flare region. The scale of the red curves is given
on the right axis, in the same units as the black curves given on the left axis. Vertical blue lines denote instants that are shown in cutout images and
further analyzed. The same representation is used in all subsequent intensity
profiles.
Figure 2: SOL2017-04-23 flare. The ALMA images are difference images between
the designated time frame and the base pre-flare frame. The other images are
regular full-scale. ALMA contours outline five levels equidistantly within the
range of 100 - 500 K. The peak ALMA brightness in the flaring region is marked with a white "x". The intensity was clipped in each image to better
show structures of interest.
The observation times of the ALMA images were used to search for coinciding
flare events in the Heliophysics Events Knowledgebase
(HEK, https://www.lmsal.com/hek/). The search returned a list of nearly one hundred entries. However, many of them were related to the same flare event
reported by different observatories or detection methods. These multiple
entries were filtered out, along with short duration events that occurred
between two consecutive ALMA observations and events in which ALMA images were
of poor quality due to the presence of scan artifacts or other effects. A
total of four flare events were singled out. Additionally, one event was
manually found that was visually detected as a small brightening in the ALMA
data, but was not present in the HEK flare catalog. Details of ALMA
observations for all five selected events are listed in Table 1. Flare
properties obtained from the HEK catalog are listed in Table 2.
ALMA images were compared with filtergrams in extreme ultraviolet (EUV) from
the Atmospheric Imaging Assembly (AIA, Lemen et al. 2012) and magnetograms
from the Helioseismic and Magnetic Imager (HMI, Scherrer et al. 2012) on the
Solar Dynamics Observatory (SDO, Pesnell et al. 2012). These complementary
data were obtained from the Joint Science Operations Center
(JSOC, http://jsoc.stanford.edu). We limited the analysis to AIA 94 Å
(flaring regions, characteristic temperature of $T=6\times 10^{6}$ K), 171 Å
(quiet corona and upper transition region, $T=6\times 10^{5}$ K) and 304 Å
(chromosphere and transition region, $T=50\,000$ K). These channels were
selected for their characteristic temperature to complement the ALMA data. HMI
data provided photospheric line-of-sight magnetic field measurements. We used
a cadence of 1 minute for AIA images and 45 seconds for HMI data. Both AIA and
HMI data were preprocessed with routines from the Python aiapy software
package, which is analogous to the usual IDL aia_prep procedure.
For time profiles, we selected a circular region of a size large enough to
cover the entire flaring region and at least the ALMA beam size ($\sim$60″ in
band 3 and $\sim$28″ in band 6). A radius of 100″ was used for the first
four flares and 50″ for the SOL2018-12-15 flare. Both the average intensity
over this flaring region and the intensity of the brightest pixel in the
region were measured in all ALMA and SDO/AIA images at all times. The position
of peak intensity changes over time and in different wavelengths. While the
average intensity is useful for the overall comparison of the ALMA and SDO/AIA
intensity profiles and estimation of the total energy released, the peak
intensity refers to spatially separated individual brightenings and better
assesses the maximum increase of the brightness temperature.
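The two time profiles described above (region average and brightest pixel) reduce to masked statistics over a circular region; a minimal sketch in which `region_profiles` is a hypothetical helper, not the code used in the analysis:

```python
import numpy as np

def region_profiles(image, cx, cy, radius_px):
    """Average and peak intensity within a circular flare region (in pixels)."""
    yy, xx = np.indices(image.shape)
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius_px ** 2
    return image[mask].mean(), image[mask].max()

# Toy map: quiet Sun at 7300 K with one compact flare brightening
img = np.full((60, 60), 7300.0)
img[30, 30] = 7900.0
avg, peak = region_profiles(img, cx=30, cy=30, radius_px=16)
print(peak - 7300.0)  # 600.0
```

Evaluating both statistics on every frame yields the black (average) and red (peak) curves of the intensity profile figures.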
The soft X-ray fluxes for all events were observed by the Geostationary
Operational Environmental Satellites (GOES), specifically by the GOES-15 X-Ray
Sensor (XRS), and were obtained from the National Oceanic and Atmospheric
Administration (NOAA).
GONG H$\alpha$ images are from the Cerro Tololo Interamerican Observatory in
Chile taken at the core of the H$\alpha$ line at 6562.8 Å with a spatial
resolution of 1″ per pixel and 1 minute cadence.
We searched for bursts related to the analyzed flares in dynamic spectra
recorded by the solar radio spectrometers e-CALLISTO in the 20 - 300 MHz range
and Radio Solar Telescope Network (RSTN) GHz data, but found no events.
In the following intensity profile figures, gray shaded areas indicate the
duration of each ALMA scan of the Sun, thus representing time uncertainty of
the ALMA measurements. Average intensity of the circular region containing the
flare is represented by a black curve while maximum intensity is depicted as a
red curve. The mean ALMA intensity error was obtained from the variation of an
intensity profile of a quiet Sun region.
The ALMA images relevant to the flare were further analyzed at the times
indicated by vertical blue lines in the intensity profiles. To better isolate
the flaring region, a difference image was made by subtracting a reference
image. The reference image was usually selected as the ALMA image obtained
just before the start of the flare. The effect of solar differential rotation
was taken into account. Then the contours from the ALMA difference image were
overlaid on the HMI magnetogram, the H$\alpha$ image from the Global
Oscillations Network Group (GONG), and the AIA 304, 171, and 94 Å filtergrams.
This was done in order to find counterparts of the ALMA flare emission.
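The base-difference imaging and the equidistant contour levels used in the figures can be illustrated as follows. This is a sketch only: the de-rotation of the reference image that compensates solar differential rotation is omitted, and all names are assumptions.

```python
import numpy as np

def difference_image(frame, reference):
    """Base-difference image: flare frame minus a pre-flare reference frame.
    (The actual analysis first de-rotates the reference to the frame time.)"""
    return frame - reference

def contour_levels(lo, hi, n=5):
    """n contour levels spaced equidistantly in [lo, hi], as in the figures."""
    return np.linspace(lo, hi, n)

pre = np.full((50, 50), 7300.0)
post = pre.copy()
post[20:25, 20:25] += 450.0           # flare enhancement
diff = difference_image(post, pre)
print(diff.max())                      # 450.0
print(contour_levels(100, 500))        # [100. 200. 300. 400. 500.]
```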
## 3 Results
### 3.1 SOL2017-04-23 flare
The SOL2017-04-23 flare is the only ALMA band 6 event in the analyzed set. The
advantage of band 6 over band 3 is twice the spatial resolution of the images, but at the expense of lower temporal resolution due to the longer time
required to scan the entire Sun. The beam size is 28.3″, and the pixel size is
3″. ALMA images were produced from PM01 antenna data at 230 GHz (1.3 mm).
The observed ALMA and SDO/AIA intensity profiles are shown in Fig. 1 and
compared with GOES XRS flux. The time of maximum millimeter brightness
temperature coincides with the maxima in the other wavelengths. The average
ALMA profile and the peak ALMA profile are similar except for the last
measurement, where the average rises but the peak values stay constant. The
reason for this is an increase in total radiation distributed over a larger
area (compare the contours at 16:14 UT and 16:34 UT in Fig. 2). The average
EUV profiles are similar to each other and to the GOES X-ray curve as well.
The peak intensity profiles of the various wavelengths differ more from each
other, especially in the 171 Å channel, where multiple post-maximum peaks can
be seen. They indicate strong brightenings originating from individual small
areas (high loops).
ALMA images of the flaring region are shown in Fig. 2 and compared with HMI
LoS magnetograms, GONG H$\alpha$ images, and AIA filtergrams with overlaid
ALMA 100 - 500 K contours. H$\alpha$ and 304 Å images suggest a two-ribbon
flare scenario. The first image shown (Fig. 2, 15:57) corresponds to the
peak of the flare. The ALMA contours encompass the extent of the flare in
H$\alpha$, 304 and 94 Å images, and coincide with the polarity inversion line
in the HMI magnetogram. The peak ALMA brightness in the flaring region, marked with a white "x", is located near the southern footpoint of a bright 94 Å
loop. The ALMA peak is also located near the top of a small loop visible in
H$\alpha$, 304, and 171 Å.
A second ALMA component, peaking to the north, coincides with a small filament
visible in H$\alpha$ absorption. The footpoint of a thin loop is visible at
the same location in 304 and 171 Å. A third ALMA source with significant
brightness is located northwest (upper right) of the main peak. The movies in
H$\alpha$ and 304 Å suggest that the third source is at the location of plasma
downflow originating from a region near the second peak.
Figure 3: Intensity profiles for the SOL2017-04-26 flare. The colors and
markings are the same as in Fig. 1.
The next time shown (Fig. 2, 16:14) is around the secondary flare peak,
visible in 304 and 171 Å average intensity profiles (Fig. 1). The ALMA peak
brightness is shifted to the north, next to a small H$\alpha$ filament that
became more pronounced in H$\alpha$ absorption. The ALMA peak location is
consistent with the top of the H$\alpha$ filament but not with any feature at
any other wavelength. However, an intensifying plasma flow can be observed in
304 and 171 Å movies from the main ALMA location, with some of the material
falling back to the surface near the location of the second ALMA bright region
(on the right).
Finally, in the last ALMA image (Fig. 2, 16:34), the H$\alpha$ filament has
intensified (darkened), and the ALMA contours now correspond well to its
shape. The maximum ALMA brightness is situated at the southern footpoint of
the filament. The area at the location of plasma inflow (to the right of the
filament) has slightly brightened. The original ALMA peak located above the
flare ribbons has also brightened and matches the shape of the hot flare loop
visible in 94 Å.
Figure 4: SOL2017-04-26 flare. ALMA contours outline five levels equidistantly
in the 50 - 200 K range.
We note in conclusion that the millimeter emission of SOL2017-04-23 is
associated with a bewildering number of features:
- footpoint of a hot (94 Å) loop (main ALMA peak)
- top of cool loops in H$\alpha$, 304, and 171 Å
- downflow region in H$\alpha$, 304, and 171 Å
- top and footpoint of a dark H$\alpha$ filament.
### 3.2 SOL2017-04-26 flare
The flare that occurred on 2017 Apr 26 was recorded by ALMA in band 3 in the
spectral window centered at 107 GHz (2.8 mm). The beam size is 60″ and pixel
size is 6″. Artifacts from the antenna scanning pattern can be seen in ALMA
images, especially in the difference images, somewhat affecting the observed
shape of the flaring region.
The region-averaged ALMA time profile looks similar to 304 Å, 94 Å, and GOES,
but the 171 Å emission peaks significantly later (Fig. 3). The pre-flare 171 Å
average intensity level was even higher than during the flare. This brightness
originates from activity within large loops arching over the region. The ALMA
peak intensity behaves similarly to the average ALMA intensity. The only difference is that the peak curve maximum occurs before the average maximum, in line with the 304 Å channel behavior.
The ALMA peak in the first image at the time of the flare maximum coincides
with the bright spot in 94 Å (Fig. 4, 15:32). The region is also bright in
H$\alpha$ and 304 Å images, but not in the 171 Å image. Most 171 Å emission
originates from high loops having many small brightenings that do not show in
the ALMA image. These brightenings are visible in H$\alpha$ and 304 Å images
and contribute to the high temporal variability at these wavelengths. Whereas
the ALMA maximum brightness falls exactly on the 94 Å peak in the flare rise
phase, the ALMA peak shifts south along the neutral line in the following time
steps. The ALMA contours outline the whole flaring region in the next time
step (Fig. 4, 15:44). The peak brightness is at the location of the peak in
304 Å. A new thin loop can be seen nearby in 171 Å. The ALMA peak remains in
the same place in the next two frames (Figs. 4, 15:56 - 16:08) where small
filamentary structures are forming in 304 Å, while nothing can be seen in 94
Å. However, a secondary ALMA peak at 16:20 UT (Fig. 3) is located on two
decaying 94 Å flare loops in the northwest.
Millimeter emission of the SOL2017-04-26 flare is associated with:
- footpoint of a hot flare loop (94 Å)
- small 171 Å brightenings
- decaying hot loops in 94 Å.
### 3.3 SOL2018-04-03 flare
The SOL2018-04-03 flare was a short-lived two-ribbon event. The flare lasted
less than 20 minutes, but evolution and post-flare aftermath were well
recorded in 23 ALMA images at 95 GHz (3.2 mm). The beam size is 66″ and pixel
size is 6″.
Figure 5: Intensity profiles for the SOL2018-04-03 flare. The colors and
markings are the same as in Fig. 1
The main ALMA peak seems to be delayed in time compared to all other channels.
However, this may be the result of undersampling of the ALMA time profile,
missing the peak in millimeter waves.
The preflare seen in the ALMA time profile (Fig. 5) coincides in space with a
dark H$\alpha$ filament (Fig. 6, 14:00). Yet, the H$\alpha$ preflare
brightening is related to emissions from the two ribbons located some 28″ to
the southeast. Interestingly, there is initially no visible ALMA enhancement
around the prominent filament located southeast of the small one.
At the peak of the ALMA flare (Fig. 6, 14:22), however, the ALMA maximum moves
to the northern footpoint of the larger filament, and now also encompasses the
flare ribbons and loops visible in 304 and 94 Å. The smaller filament remains
enhanced in the ALMA image. The ALMA emission follows in general the neutral
line (Fig. 6, 14:22). At the same time, the H$\alpha$ filament shrank in diameter and length.
Figure 6: SOL2018-04-03 flare. ALMA contours outline five levels equidistantly
in the 50 - 200 K range.
Further in time, the ALMA peak remains on the larger filament which is now
more pronounced. The prominent peaks in 171 Å at 15:04 and 17:10 UT are caused
by large loops leaving no traces in millimeter waves. The ALMA brightness
slightly increases at 16:07, when plasma erupts from the active region and
moves south along the major filament, as can be seen in 304 Å movies. Finally,
at 17:41 UT, the ALMA intensity peaks near the southern H$\alpha$ ribbon at a
footpoint of a hot loop visible in 94 Å emission.
The SOL2018-04-03 flare is an example where the ALMA peak emission is best
associated with H$\alpha$ filaments. They are not steadily bright in
millimeter emission, but become visible due to some activity related to the
flare. Millimeter and 94 Å emission are spatially not related except for a
microflare in the late post-flare phase.
### 3.4 SOL2018-04-19 flare
The SOL2018-04-19 flare was spotted while manually browsing ALMA images.
GOES-15 did not detect it in soft X-rays (Fig. 7). Thus there is no entry in
the HEK flare catalog. The flare is short and compact, and located near the
western solar limb where the ALMA antenna scanning pattern smears out sources
diagonally. This is clearly visible in the background of Fig. 8.
Figure 7: Intensity profiles for the SOL2018-04-19 flare. The colors and
markings are the same as in Fig. 1
The main peak in the EUV lines is sharp and precedes the ALMA peak by around
11 minutes, although the rise of ALMA intensity is visible even earlier. The
ALMA peak coincides in time with a small peak in 304 Å and 171 Å about 11
minutes after their main peak. The subpeak is not visible in 94 Å, indicating
the absence of hot plasma.
Figure 8: SOL2018-04-19 flare. ALMA contours outline five levels equidistantly
in the 50 - 120 K range.
The main ALMA peak is located 25″ southeast of the maxima in the line
emissions (Fig. 8, 16:32). The ALMA images show a secondary peak northwest of
the main peak. There is no corresponding brightening in the line emissions.
This is the case even in the last image, when the secondary ALMA peak exceeded
the primary peak (Fig. 8, 17:16).
The SOL2018-04-19 flare is a case where ALMA millimeter waves correlate with
cool (H$\alpha$ and 304 Å) lines, with the intermediate 171 Å line, the hot 94
Å, and the extremely hot bremsstrahlung emission (GOES) in time, but are
displaced in space.
### 3.5 SOL2018-12-15 flare
Figure 9: Intensity profiles for the SOL2018-12-15 flare. The colors and
markings are the same as in Fig. 1
The weak SOL2018-12-15 flare occurred close to the limb. ALMA was observing at
95 GHz (3.2 mm). The observed intensity profiles are shown in Fig. 9. The ALMA
intensity profile correlates best with 304 Å average intensities, but not with
the GOES X-ray and the 94 Å profile, which are similar to each other. The
flare reached its peak somewhat earlier in 304 Å relative to the millimeter
waves, but much later in 171 Å. The 171 Å emission is dominated by large loops
arching over the active region (Fig. 10, 13:58). The averaged 171 Å profile has a minimum intensity at the time of the ALMA flare peak (13:58 UT) due to dimming of some of the loops.
Figure 10: SOL2018-12-15 flare. ALMA contours outline five levels
equidistantly in the 50 - 150 K range.
The brightening in millimeter waves encompasses most of the active region as
seen in the magnetogram, but peaks 25″ farther south where little emission in
H$\alpha$, 304 Å, 171 Å, and 94 Å originates (Fig. 10, 13:46 and 14:25). Only at the peak time (13:58 UT) of the X-ray and 94 Å intensities does the peak position of all line emissions coincide with the peak in millimeter waves.
The SOL2018-12-15 flare is another example of weak millimeter emission with
little relation to the emissions in H$\alpha$, 304 Å, 171 Å, 94 Å, and X-rays.
## 4 Discussion
Table 3: List of features in other wavelengths coincident with emission in millimeter ALMA images for each flare.

Date | Time UT | Peak source in mm image | Coincides with
---|---|---|---
2017-04-23 | 15:57 | primary | along neutral line in HMI, near the top of a small loop in H$\alpha$, 304 and 171 Å, footpoint of a 94 Å loop
 | | secondary | top of H$\alpha$ filament, thin loop footpoint at 304 and 171 Å
 | | tertiary | plasma downflow in H$\alpha$, 304 Å
 | 16:14 | primary | near the top of H$\alpha$ filament, plasma outflow in 304 and 171 Å
 | 16:34 | primary | footpoint of a dark H$\alpha$ filament
2017-04-26 | 15:32 | primary | footpoint of a flare loop in 94 Å, bright in H$\alpha$ and 304 Å, nothing in 171 Å
 | 15:44-15:56 | primary | along neutral line in HMI, newly forming filament in H$\alpha$, peak in 304 Å, top of loop in 171 Å, nothing in 94 Å
 | 16:20 | secondary | decaying flare loops in 94 Å
2018-04-03 | 14:00 | primary | H$\alpha$ filament
 | 14:22 | primary | along neutral line in HMI, H$\alpha$ filament footpoint, thin loops in 304 and 171 Å, near the hot loop top in 94 Å
 | 14:22-16:07 | primary | H$\alpha$ filament footpoint
 | 17:41 | primary | H$\alpha$ ribbon, footpoint of a hot loop in 94 Å
2018-04-19 | 16:21-16:32 | primary | nothing (displaced from the flaring region)
2018-12-15 | 13:46 | primary | loops in 171 Å, nothing elsewhere
 | 13:58 | primary | top of the loop in H$\alpha$ and 304 Å, footpoint of the flaring loop in 94 Å
Table 4: NOAA number of the flare active region, peak pixel brightness temperature $T_{b}$ at wavelength $\lambda$, and peak pixel brightness temperature difference $\Delta T_{b}$ from the background level.

Flare | NOAA region | $\lambda$ [mm] | Peak $T_{b}$ [K] | Peak $\Delta T_{b}$ [K]
---|---|---|---|---
SOL2017-04-23 | N/A | 1.3 | 7163 | 563
SOL2017-04-26 | 12652 | 2.8 | 8199 | 263
SOL2018-04-03 | 12703 | 3.2 | 8224 | 321
SOL2018-04-19 | N/A | 3.2 | 7757 | 142
SOL2018-12-15 | 12731 | 3.2 | 8086 | 194
Significant millimeter radiation was found in all four flares selected from the HEK catalogue. In the event identified directly in the ALMA data, associated brightenings in line emission in H$\alpha$, 304 Å, 171 Å, and 94 Å are also present. This strongly
confirms a general correlation in time between the different radiations
originating from very hot, hot, intermediate and cool plasma. The comparison
of the images, on the other hand, shows a surprising variety of associations
with spatial features. A summary of features cospatial with the millimeter
emission for each flare is given in Table 3.
In both ALMA bands 3 and 6, the ALMA flare emission occurs mainly above the
neutral line observed in HMI magnetograms. Good examples are Fig. 2 at 15:57,
Fig. 4 at 15:44 and Fig. 6 at 14:22. However, this is not always the case and
millimeter emission may correspond to different features such as:
- H$\alpha$ active filaments (Fig. 2 at 16:14 and 16:34, Fig. 6 at 14:00, 14:22, and 16:07)
- 94 Å (hot) loop tops (Fig. 4 at 15:56) or footpoints (Fig. 2 at 15:57)
- post-flare loops (Fig. 4 at 16:08)
- occasionally no features in other wavelengths at all (Fig. 8 at 17:16, secondary peak in Fig. 6 at 17:41).
Millimeter emission originating from flaring loop tops and footpoints was
found previously at 86 GHz with the BIMA interferometer (Silva et al., 1996b,
1998). They estimated that the major part (around 80%) of the emission in the
flare gradual phase came from the top of a loop as free-free radiation emitted
by hot plasma. Kundu et al. (2000) reported a clear detection of both
footpoints of a loop at $\lambda$=3 mm.
It is interesting to note that millimeter flare emission sometimes corresponds
very well with filaments, particularly to the activity within them (e.g.
SOL2018-04-03). Rutten (2017) predicted that the appearance of the Sun at ALMA
wavelengths will be similar to the one in H$\alpha$ with good dark-dark
correspondence. Brajša et al. (2021) did find dark filaments at 3 millimeters.
On the other hand, da Silva Santos et al. (2022) found some of the dark
threads visible in the AIA 304 Å and Mg II resonance lines to have dark
counterparts at 1.25 mm, but their visibility varied significantly across the
filament both spatially and temporally. Here we report occasional dark-bright
correspondence.
Two kinds of time profiles were extracted from the images. The average over
the active region, shown with a black solid curve, corresponds to what a low
resolution telescope would measure. In a second time profile, represented by a
red dashed curve, the peak of the brightest pixel in the area is shown. The
second curve indicates small, individual peaks of emission.
In all time profiles, the peak pixel curves have more numerous and shorter maxima than the spatially averaged curves. This indicates that the millimeter flare emission originates from different peaks within the active region. The
effect of a multitude of individual peaks is more pronounced in the SDO/AIA
lines, especially at 171 Å.
The peak pixel intensity measured from the intensity profiles and from the
difference ALMA images is given in Table 4. The largest increase is found in
the band 6 flare with a value of $\Delta T_{b}=563$ K above the pre-flare
level, while in band 3 the values vary between 140 and 320 K. This corresponds
to 500 – 1200 K above the quiet Sun level. These values are lower limits since
the maximum intensity is spread by the antenna beam for compact, unresolved
sources.
To investigate statistically the observed similarity of the ALMA and SDO/AIA
EUV intensity curves, we calculated the Pearson correlation coefficient
between ALMA and EUV, average and maximum, time profiles. The results are
listed in Table 5.
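The Pearson coefficient used here is the standard normalized covariance between two sampled light curves; a minimal sketch on toy profiles (the variable names are illustrative only):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two intensity time profiles."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.corrcoef(x, y)[0, 1]

# Toy profiles: a rise-and-decay light curve and two companions
alma = np.array([0.0, 1.0, 3.0, 5.0, 4.0, 2.0, 1.0])
aia_304 = 2.0 * alma + 10.0   # linearly related -> perfectly correlated
aia_171 = 10.0 - alma         # inverted -> anti-correlated
print(round(pearson(alma, aia_304), 3))  # 1.0
print(round(pearson(alma, aia_171), 3))  # -1.0
```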
The correlation between average ALMA profile and AIA 94 and 304 Å channels is
quite good. In three cases, the 304 Å channel, indicating cool plasma, has the
highest correlation coefficient. This corresponds to earlier reports about
spatial best correlation with the 304 Å line among all EUV AIA lines (White et
al. 2017; Brajša et al. 2018).
The 94 Å line, originating from hot plasma, has a slightly higher correlation in the remaining two cases (Table 5). However, the 171 Å line, emitted by plasma of intermediate temperature in coronal loops, has a much lower
correlation coefficient, or even an anti-correlation. A similar trend with low
171 Å and high 94 and 304 Å correlations is present in the peak intensity profiles. However, there are two exceptions where the 171 Å peak emission has a
similarly high correlation as the 94 and 304 Å channels. The first one is the SOL2017-04-23 flare, where the peak pixel 171 Å curve follows the shape of the ALMA peak and average profiles. The second one is the SOL2018-04-19 flare, where the high 171 Å to ALMA correlation is due to the secondary peak(s).
The observed similarity between 304 Å, 94 Å and ALMA time profiles suggests
that the millimeter brightness in flares originates mainly from chromospheric
phenomena, but occasionally also from hot and dense flare components. Geometry
could have a role in the observed correlations as well since the last two
flares from Table 5, with generally the lowest correlation from the set, are
also closest to the solar limb.
The accuracy of the contours and the location of maximum intensity is better
than the ALMA beam width. The confidence interval depends on statistical
significance of the peak above background and the number of samplings of the
single-dish beam by the double-circle scanning pattern. This pattern consists
of minor circles with a radius of 600″, whose centers move steadily in a major
circle around the center of the Sun at a distance of 600″. Each new minor
circle is shifted along the major circle by a defined value called the
sampling length (White et al., 2017). In the worst case scenario when minor
circles fall on top of each other, the largest distance of the sampled points
is less than or equal to the sampling length. The beam is thus sampled at least three times per beam width, and in many places, such as near the disk center, even more often, giving a minimum location accuracy of 10″ in band 6 and 20″ in band 3. The
location of the peak can be determined even more precisely by centroiding.
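Centroiding locates the peak beyond the pixel grid by intensity-weighting pixels above a background level; a minimal illustration (the text does not specify the exact method, so the threshold-and-weight scheme below is an assumption):

```python
import numpy as np

def centroid(image, threshold):
    """Intensity-weighted centroid of pixels above a background threshold,
    giving a sub-pixel (and hence sub-beam) estimate of the peak position."""
    yy, xx = np.indices(image.shape)
    w = np.where(image > threshold, image - threshold, 0.0)
    total = w.sum()
    return (xx * w).sum() / total, (yy * w).sum() / total

# A compact source centered between pixel rows/columns
img = np.zeros((40, 40))
img[19:21, 24:26] = 300.0   # 2x2 block of excess brightness
cx, cy = centroid(img, threshold=0.0)
print(cx, cy)  # 24.5 19.5
```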
The flares of this work were not detected in hard X-rays and microwaves. Using the time derivative of the soft X-ray flux as a proxy for the hard X-ray flux, we estimate the impulsive phase of the flare as the rise phase of the soft X-rays from beginning to peak. In the four flares with significant soft X-rays, the ALMA
millimeter intensity is consistent with peaking after the impulsive phase.
Flare SOL2017-04-23 (Fig. 1), however, is also consistent with a maximum
during the impulsive phase, and the peak intensity curve (red) of flare
SOL2017-04-26 (Fig. 3) has its maximum clearly during the impulsive phase.
Thus, >100 GHz flares appear to be mostly gradual phase phenomena with occasional impulsive phase emission.
Table 5: Pearson correlation coefficients between intensity measurements from ALMA and different AIA channels. Top: flare region averaged; bottom: peak pixel of the flare region.

Intensity profile | Date | N | 94 Å | 171 Å | 304 Å
---|---|---|---|---|---
Average | 2017-04-23 | 9 | 0.797 | 0.598 | 0.823
 | 2017-04-26 | 11 | 0.777 | 0.582 | 0.746
 | 2018-04-03 | 23 | 0.799 | 0.045 | 0.882
 | 2018-04-19 | 14 | 0.721 | 0.391 | 0.613
 | 2018-12-15 | 10 | 0.330 | -0.175 | 0.757
Peak pixel | 2017-04-23 | 9 | 0.939 | 0.924 | 0.938
 | 2017-04-26 | 11 | 0.927 | 0.665 | 0.776
 | 2018-04-03 | 23 | 0.771 | 0.064 | 0.713
 | 2018-04-19 | 14 | 0.495 | 0.599 | 0.421
 | 2018-12-15 | 10 | 0.373 | 0.367 | 0.168
## 5 Conclusions
Full-disk ALMA imaging has yielded the first spatially complete solar flare
observations in millimeter wavelengths. The field of view includes the whole
disk. We found a field of view of 250″$\times$250″ necessary for a complete overview of the activity in millimeter waves taking place in an active region during
a solar flare. It significantly exceeds the field of view of ALMA in
interferometric mode. Single-dish observations thus complement interferometric
observations in an important way.
Most surprising is the fact that several phenomena in both hot and cold plasma
lead to enhanced brightness in millimeter waves. Although there is no evidence
contradicting the assumption that the emission process is thermal free-free
emission, it seems that the previous scenario of a chromospheric hot spot
heated by a precipitating electron beam is too simplistic.
In addition to footpoints of hot loops, millimeter flare emission was found to
be associated with activated H$\alpha$ filaments, impact points of plasma
motions, post-flare loops, and hot loop tops. Surprisingly, we found cases
where no feature in H$\alpha$, 304 Å, 171 Å, or 94 Å was visible at the
position of a millimeter-wave emission peak.
In this work, we focused on comparing flare intensity profiles and coincident
spatial sources at different wavelengths. More information about the flares,
such as the temperature and density of the emitting plasma, could be inferred
from comparison with simultaneous soft X-ray observations. Such an analysis
would exceed the scope of this paper and is planned for future work. The
detected flares were too small to produce significant hard X-ray and microwave
emission.
Higher temporal and spatial resolution are needed to gain better insight into
the properties of flares at millimeter wavelengths. Both problems could be
solved by observing flares in ALMA interferometric mode; however, this is
difficult because ALMA scheduling constraints do not allow waiting for a
flare. The field of view in interferometric mode is small, so considerable
luck is needed to point at the right location at the right moment. A possible
way to achieve better temporal resolution is to use the recently implemented
total-power regional mapping in single-dish mode. A third possibility is to
develop more advanced data-analysis methods that go beyond the ALMA full-disk
single-dish images presented here: the same data may be exploited much better
using specific characteristics and redundancies present in the data.
###### Acknowledgements.
This work has been supported by the Croatian Science Foundation under the
project 7549 "Millimeter and sub-millimeter observations of the solar
chromosphere with ALMA". It has also received funding from the Horizon 2020
project SOLARNET (824135, 2019–2022). This paper makes use of the following
ALMA data: ADS/JAO.ALMA#2016.1.01129.S, ADS/JAO.ALMA#2016.1.00070.S,
ADS/JAO.ALMA#2017.1.00072.S, ADS/JAO.ALMA#2017.1.01138.S,
ADS/JAO.ALMA#2018.1.01879.S. ALMA is a partnership of ESO (representing its
member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST
and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the
Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and
NAOJ. SDO/AIA data courtesy of NASA/SDO and the AIA, EVE, and HMI science
teams. This work utilizes GONG data obtained by the NSO Integrated Synoptic
Program, managed by the National Solar Observatory, which is operated by the
Association of Universities for Research in Astronomy (AURA), Inc. under a
cooperative agreement with the National Science Foundation and with
contribution from the National Oceanic and Atmospheric Administration. The
GONG network of instruments is hosted by the Big Bear Solar Observatory, High
Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory,
Instituto de Astrofísica de Canarias, and Cerro Tololo Interamerican
Observatory. We are grateful to the GOES team for making the data publicly
available. We acknowledge the use of the ALMA Solar Ephemeris Generator
(Skokić & Brajša, 2019). This research used version 3.1.4 (Mumford et al.,
2022) of the SunPy open source software package (The SunPy Community et al.,
2020). This research used version 0.6.4 of the aiapy open source software
package (Barnes et al., 2020). This research made use of Regions, an Astropy
package for region handling (Bradley et al., 2022).
## References
* Alissandrakis et al. (2022) Alissandrakis, C. E., Bastian, T. S., & Nindos, A. 2022, A&A, 661, L4
* Alissandrakis et al. (2020) Alissandrakis, C. E., Nindos, A., Bastian, T. S., & Patsourakos, S. 2020, A&A, 640, A57
* Alissandrakis et al. (2017) Alissandrakis, C. E., Patsourakos, S., Nindos, A., & Bastian, T. S. 2017, A&A, 605, A78
* Barnes et al. (2020) Barnes, W. T., Cheung, M. C. M., Bobra, M. G., et al. 2020, Journal of Open Source Software, 5, 2801
* Bastian et al. (2018) Bastian, T. S., Bárta, M., Brajša, R., et al. 2018, The Messenger, 171, 25
* Bastian et al. (1998) Bastian, T. S., Benz, A. O., & Gary, D. E. 1998, ARA&A, 36, 131
* Bastian et al. (2017) Bastian, T. S., Chintzoglou, G., De Pontieu, B., et al. 2017, ApJ, 845, L19
* Benz (2017) Benz, A. O. 2017, Living Reviews in Solar Physics, 14, 2
* Bradley et al. (2022) Bradley, L., Deil, C., Patra, S., et al. 2022, astropy/regions: v0.5
* Brajša et al. (2020) Brajša, R., Skokić, I., & Sudar, D. 2020, Central European Astrophysical Bulletin, 44, 1
* Brajša et al. (2021) Brajša, R., Skokić, I., Sudar, D., et al. 2021, A&A, 651, A6
* Brajša et al. (2018) Brajša, R., Sudar, D., Benz, A. O., et al. 2018, A&A, 613, A17
* Chai et al. (2022) Chai, Y., Gary, D. E., Reardon, K. P., & Yurchyshyn, V. 2022, ApJ, 924, 100
* da Silva Santos et al. (2020) da Silva Santos, J. M., de la Cruz Rodríguez, J., Leenaarts, J., et al. 2020, A&A, 634, A56
* da Silva Santos et al. (2022) da Silva Santos, J. M., White, S. M., Reardon, K., et al. 2022, Frontiers in Astronomy and Space Sciences, 9, 898115
* Eklund et al. (2020) Eklund, H., Wedemeyer, S., Szydlarski, M., Jafarzadeh, S., & Guevara Gómez, J. C. 2020, A&A, 644, A152
* Heinzel et al. (2022) Heinzel, P., Berlicki, A., Bárta, M., et al. 2022, ApJ, 927, L29
* Iwai et al. (2017) Iwai, K., Loukitcheva, M., Shimojo, M., Solanki, S. K., & White, S. M. 2017, ApJ, 841, L20
* Jafarzadeh et al. (2021) Jafarzadeh, S., Wedemeyer, S., Fleck, B., et al. 2021, Philosophical Transactions of the Royal Society of London Series A, 379, 20200174
* Jafarzadeh et al. (2019) Jafarzadeh, S., Wedemeyer, S., Szydlarski, M., et al. 2019, A&A, 622, A150
* Kaufmann et al. (2004) Kaufmann, P., Raulin, J.-P., de Castro, C. G. G., et al. 2004, The Astrophysical Journal, 603, L121
* Krucker et al. (2013) Krucker, S., Giménez de Castro, C. G., Hudson, H. S., et al. 2013, A&A Rev., 21, 58
* Kundu (1996) Kundu, M. R. 1996, Sol. Phys., 169, 389
* Kundu et al. (2000) Kundu, M. R., White, S. M., Shibasaki, K., & Sakurai, T. 2000, ApJ, 545, 1084
* Labrosse et al. (2022) Labrosse, N., Rodger, A. S., Radziszewski, K., et al. 2022, MNRAS, 513, L30
* Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17
* Loukitcheva et al. (2019) Loukitcheva, M. A., White, S. M., & Solanki, S. K. 2019, ApJ, 877, L26
* Lüthi et al. (2004) Lüthi, T., Lüdi, A., & Magun, A. 2004, A&A, 420, 361
* Menezes et al. (2021) Menezes, F., Selhorst, C. L., Giménez de Castro, C. G., & Valio, A. 2021, ApJ, 910, 77
* Menezes et al. (2022) Menezes, F., Selhorst, C. L., Giménez de Castro, C. G., & Valio, A. 2022, MNRAS, 511, 877
* Molnar et al. (2019) Molnar, M. E., Reardon, K. P., Chai, Y., et al. 2019, ApJ, 881, 99
* Mumford et al. (2022) Mumford, S. J., Freij, N., Christe, S., et al. 2022, SunPy
* Nindos et al. (2018) Nindos, A., Alissandrakis, C. E., Bastian, T. S., et al. 2018, A&A, 619, L6
* Nindos et al. (2020) Nindos, A., Alissandrakis, C. E., Patsourakos, S., & Bastian, T. S. 2020, A&A, 638, A62
* Patsourakos et al. (2020) Patsourakos, S., Alissandrakis, C. E., Nindos, A., & Bastian, T. S. 2020, A&A, 634, A86
* Pesnell et al. (2012) Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, Sol. Phys., 275, 3
* Raulin et al. (1999) Raulin, J. P., White, S. M., Kundu, M. R., Silva, A. V. R., & Shibasaki, K. 1999, ApJ, 522, 547
* Rodger et al. (2019) Rodger, A. S., Labrosse, N., Wedemeyer, S., et al. 2019, ApJ, 875, 163
* Rutten (2017) Rutten, R. J. 2017, A&A, 598, A89
* Scherrer et al. (2012) Scherrer, P. H., Schou, J., Bush, R. I., et al. 2012, Sol. Phys., 275, 207
* Selhorst et al. (2019) Selhorst, C. L., Simões, P. J. A., Brajša, R., et al. 2019, ApJ, 871, 45
* Shimizu et al. (2021) Shimizu, T., Shimojo, M., & Abe, M. 2021, ApJ, 922, 113
* Shimojo et al. (2017) Shimojo, M., Bastian, T. S., Hales, A. S., et al. 2017, Sol. Phys., 292, 87
* Shimojo et al. (2020) Shimojo, M., Kawate, T., Okamoto, T. J., et al. 2020, ApJ, 888, L28
* Silva et al. (1997) Silva, A. V. R., Gary, D. E., White, S. M., Lin, R. P., & de Pater, I. 1997, Sol. Phys., 175, 157
* Silva et al. (1998) Silva, A. V. R., Lin, R. P., de Pater, I., et al. 1998, Sol. Phys., 183, 389
* Silva et al. (1996a) Silva, A. V. R., White, S. M., Lin, R. P., et al. 1996a, ApJS, 106, 621
* Silva et al. (1996b) Silva, A. V. R., White, S. M., Lin, R. P., et al. 1996b, ApJ, 458, L49
* Skokić & Brajša (2019) Skokić, I. & Brajša, R. 2019, Rudarsko-geološko-naftni zbornik, 34, 59
* Skokić et al. (2020) Skokić, I., Sudar, D., & Brajša, R. 2020, Central European Astrophysical Bulletin, 44, 2
* Sudar et al. (2019a) Sudar, D., Brajša, R., Skokić, I., & Benz, A. O. 2019a, Sol. Phys., 294, 163
* Sudar et al. (2019b) Sudar, D., Skokić, I., & Brajša, R. 2019b, Central European Astrophysical Bulletin, 43, 1
* The SunPy Community et al. (2020) The SunPy Community, Barnes, W. T., Bobra, M. G., et al. 2020, The Astrophysical Journal, 890, 68
* Wedemeyer et al. (2016) Wedemeyer, S., Bastian, T., Brajša, R., et al. 2016, Space Sci. Rev., 200, 1
* Wedemeyer et al. (2020) Wedemeyer, S., Szydlarski, M., Jafarzadeh, S., et al. 2020, A&A, 635, A71
* White et al. (2017) White, S. M., Iwai, K., Phillips, N. M., et al. 2017, Sol. Phys., 292, 88
# Stability conditions on cyclic categories I: Basic definitions and examples
Yucheng Liu, Beijing International Center for Mathematical Research,
Peking University, No.5 Yiheyuan Road, Haidian District, Beijing, 100871,
P.R. China<EMAIL_ADDRESS>
###### Abstract.
A triangulated category $\mathcal{C}$ with a canonical Bott’s isomorphism
$[2]\xrightarrow{\sim}id$ is called a cyclic category in this paper. We give a
new notion of stability conditions on a $k$-linear Krull-Schmidt cyclic
category. Given such a stability condition $\sigma$, we can assign a Maslov
index to each basic loop in such a category. If all Maslov indices vanish, we
get $\widetilde{\mathcal{C}},\widetilde{\sigma}$ as the $\mathbb{Z}$-lifts of
$\mathcal{C},\sigma$ respectively, such that $\widetilde{\mathcal{C}}$ is a
$\mathbb{Z}$-graded triangulated category and $\widetilde{\sigma}$ is a
Bridgeland stability condition on $\widetilde{\mathcal{C}}$. Moreover, we
show that there is an isomorphism
$Stab_{cyc}^{0,e}(\mathcal{C})\xrightarrow{\simeq}Stab(\widetilde{\mathcal{C}})/2\mathbb{Z},$
where $Stab_{cyc}^{0,e}(\mathcal{C})$ denotes the equivalence classes of
stability conditions which are deformation equivalent to $\sigma$, and
$Stab(\widetilde{\mathcal{C}})$ denotes the space of Bridgeland stability
conditions on $\widetilde{\mathcal{C}}$.
We provide some examples of stability conditions on a cyclic category. We also
discuss some interesting phenomena in these examples, such as the chirality
symmetry breaking phenomenon and nontrivial monodromy. The chirality symmetry
breaking phenomenon involves stability conditions which cannot be lifted to
Bridgeland stability conditions.
###### Key words and phrases:
Bridgeland stability conditions, Matrix factorizations, Maslov index,
Chirality symmetry breaking
###### 2020 Mathematics Subject Classification:
14F08, 14B05
## 1\. Introduction
The phenomenon $[2]=[0]$ has a relatively long history in mathematics and can
be found in many branches of mathematics and physics: the celebrated Tate
cohomology of cyclic groups and Bott periodicity; Eisenbud's discovery in the
early 1980s that every finitely generated $S$-module admits a free resolution
that eventually becomes 2-periodic, where $S$ is the algebra of functions on a
hypersurface with an isolated singularity (see [Eis80]); the theory of cyclic
homology developed at about the same time (e.g. [Con85], [Goo85], [CQ97],
[NS18]), in which periodic cyclic homology plays a role; and the work of
Kapustin and Li, who used the category of matrix factorizations to describe
D-branes in Landau-Ginzburg models following a proposal of Kontsevich (see
e.g. [KL03a], [KL03b]), which was further studied by mathematicians (e.g.
[Dyc11], [DM12], [PV12], [Mur13]). The phenomenon $[2]=[0]$ also appears in
Treumann's paper [Tre19].
On the other hand, the story of stability conditions on triangulated
categories is relatively new. Motivated by Douglas's work on D-branes and
$\Pi$-stability (see e.g. [Dou02]), Bridgeland introduced a general theory of
stability conditions on triangulated categories in [Bri07]; the theory was
further studied by Kontsevich and Soibelman in [KS08].
Let $\mathcal{C}$ be a $k$-linear triangulated category. If there exists a
canonical isomorphism $\beta:[2]\simeq id$ between these two functors, it is
easy to see that there are no $t$-structures on $\mathcal{C}$, and hence no
Bridgeland stability conditions exist on $\mathcal{C}$ either. However, since
a Bridgeland stability condition can be viewed as an $\mathbb{R}$-grading
refinement of a $\mathbb{Z}$-grading (a $t$-structure) on a triangulated
category, we expect that there exists a notion of $S^{1}$-grading which
refines the $\mathbb{Z}/2\mathbb{Z}$-grading of a cyclic category.
Unlike on the real line, there are many homotopy-inequivalent paths connecting
two points on $S^{1}$, so we need to introduce additional data to distinguish
these paths. This additional data is the degree function of a real
decomposition on $\mathcal{C}$ (see Definition 3.5 for the precise
definition).
Roughly speaking, given a real decomposition on $\mathcal{C}$ and any two
indecomposable objects $E,F\in\mathcal{C}$, we have the following decomposition
$Hom_{\mathcal{C}}(E,F)=\bigoplus_{a\in\mathbb{R}}Hom_{a}(E,F),$
satisfying some natural conditions. A morphism $f\in Hom_{a}(E,F)$ is
called a homogeneous morphism of degree $a$, and we obtain a degree function
$q:\\{Nontrivial\ homogeneous\ morphisms\\}\rightarrow\mathbb{R}.$
This enables us to define the notions of connecting path (see Definition 3.10)
and liftable commutative diagram (see Definition 3.12), which are basic
notions in our definitions and results. We use these notions to define
stability conditions on a cyclic category in Section 3.
In Section 4, we define basic loops and their Maslov indices. This notion of
Maslov index plays a role similar to that of its namesake in Fukaya
categories. Indeed, we have the following lifting theorem.
###### Theorem 1.1.
Given a stability condition $\sigma=(\mathcal{Q},Z,\phi,q)$ on a $k$-linear
Krull-Schmidt cyclic category $\mathcal{C}$, suppose that the Maslov indices
of all basic loops are zero. Then there are $\mathbb{Z}$-lifts of $\sigma$ and
$\mathcal{C}$, which we denote by $\widetilde{\sigma}$ and
$\widetilde{\mathcal{C}}$ respectively, such that $\widetilde{\sigma}$ is a
Bridgeland stability condition on $\widetilde{\mathcal{C}}$.
Here the data $(Z,\phi,q)$ in a stability condition $\sigma$ is called a
charge triple; in fact, the $\mathbb{Z}$-lift $\widetilde{\mathcal{C}}$
depends only on the charge triple. We say that $\mathcal{C}$ is liftable with
respect to a charge triple $R\coloneqq(Z,\phi,q)$ if the condition in Theorem
1.1 is satisfied. Furthermore, one can define when two charge triples are
deformation equivalent (see Definition 4.10). We prove the following
result in Section 4.
###### Proposition 1.2.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt connective cyclic category,
$R_{1}=(Z_{1},\phi_{1},q_{1})$ and $R_{2}=(Z_{2},\phi_{2},q_{2})$ be two
charge triples on $\mathcal{C}$. Suppose that $R_{1},R_{2}$ are deformation
equivalent and $\mathcal{C}$ is liftable with respect to $R_{1}$. Then
$\mathcal{C}$ is also liftable with respect to $R_{2}$.
Moreover, if we denote the $\mathbb{Z}$-lifts of $\mathcal{C}$ with respect to
$R_{1},R_{2}$ by $\widetilde{\mathcal{C}}_{1}$ and
$\widetilde{\mathcal{C}}_{2}$ respectively, there exists an equivalence (not
canonical) $H:\widetilde{\mathcal{C}}_{1}\simeq\widetilde{\mathcal{C}}_{2}$
such that the following diagram is commutative.
${\widetilde{\mathcal{C}}_{1}}$${\widetilde{\mathcal{C}}_{2}}$${\mathcal{C}}$$\scriptstyle{\pi_{1}}$$\scriptstyle{H}$$\scriptstyle{\pi_{2}}$
Given a charge triple $R$ such that $\mathcal{C}$ is liftable with respect to
it, a consistent choice of these equivalences in Proposition 1.2 forms a
connection of $\mathbb{Z}$-lifts of $\mathcal{C}$ fibered over the set of
charge triples which are deformation equivalent to $R$. Hence Theorem 1.1 and
Proposition 1.2 provide us a map
$Stab_{cyc}^{0}(\mathcal{C})\rightarrow Stab(\widetilde{\mathcal{C}}),$
where $Stab_{cyc}^{0}(\mathcal{C})$ consists of stability conditions whose
charge triples are deformation equivalent to $R$, and
$\widetilde{\mathcal{C}}$ is the associated $\mathbb{Z}$-lift of $\mathcal{C}$
with respect to $R$. We use $Stab(\widetilde{\mathcal{C}})$ to denote the
space of Bridgeland stability conditions on $\widetilde{\mathcal{C}}$.
A closer look at this map yields the following comparison theorem, which is
proved in Section 5.
###### Theorem 1.3.
There is an isomorphism
$Stab_{cyc}^{0}(\mathcal{C})/\simeq\xrightarrow{}Stab(\widetilde{\mathcal{C}})/2\mathbb{Z},$
where the equivalence relation is defined in Definition 5.6 and the
$2\mathbb{Z}$ action is given by $k\mapsto[k]$ for any $k\in 2\mathbb{Z}$.
However, there are many stability conditions which cannot be lifted to
Bridgeland stability conditions. In Section 6, we provide examples of
stability conditions on the category of $\mathbb{Z}/3\mathbb{Z}$-equivariant
matrix factorizations of $w=x^{3}$. We also discuss some interesting phenomena
arising in these examples, for instance the chirality symmetry breaking
phenomenon and the nontrivial monodromy of the map from strong stability
conditions to their central charges. The chirality symmetry breaking
phenomenon involves stability conditions which cannot be lifted to Bridgeland
stability conditions.
### 1.1. Outline of this paper
In Section 2, we briefly review some basic definitions and results on matrix
factorizations. In Section 3, we first give some preliminary definitions, such
as real decompositions, connecting paths, and liftable commutative diagrams,
and then give our definition of stability conditions on cyclic categories at
the end of the section. In Section 4, we introduce our notions of basic loop
and Maslov index, and prove the lifting theorem. In Section 5, we prove the
uniqueness of Harder-Narasimhan filtrations under the assumption that all
Maslov indices are non-negative, and then prove the comparison theorem. In
Section 6, we give examples of stability conditions on the category of
$\mathbb{Z}/3\mathbb{Z}$-equivariant matrix factorizations of $w=x^{3}$, and
discuss the chirality symmetry breaking phenomenon and nontrivial monodromy in
these examples.
### 1.2. Acknowledgement
I would like to thank Emanuel Scheidegger for teaching me the background in
physics, many helpful discussions and his guidance to the literature. I thank
Alex Martsinkovsky and Amnon Neeman for answering my questions via email
correspondences. I also thank Tom Bridgeland and Yu Qiu for helpful comments.
I thank Qingyuan Bai, Hanfei Guo, Gang Han, Chunyi Li, Zhiyu Liu, Yongbin
Ruan, Zhiyu Tian, Shizhuo Zhang and Xiaolei Zhao for helpful discussions.
Theorem 5.9 is motivated by Zhiyu Tian’s question. The final write-up of this
paper was done while the author was visiting Zhejiang University, whose
hospitality is gratefully acknowledged.
### 1.3. Notation and convention
A cyclic category, usually denoted by $\mathcal{C}$ in this paper, is always
assumed to be $k$-linear, Krull-Schmidt and essentially small.
## 2\. Matrix factorizations
### 2.1. Matrix Factorizations
Matrix factorizations were first introduced by Eisenbud in [Eis80]. They
attracted the attention of physicists because of a proposal of Kontsevich,
which suggests using them to describe D-branes in Landau-Ginzburg models (see
e.g. [KL03a] and [KL03b]). Mathematicians studied them further (see e.g.
[Dyc11], [PV12], [Mur13], [Orl04], and [Orl09]). All the material in this
section is taken directly from these references.
Let us briefly recall the story of matrix factorizations. Suppose that $(R,m)$
is a regular local ring and $M$ is a finitely generated $R$-module. The famous
Auslander-Buchsbaum formula is
$pd(M)=dim(R)-depth(M).$
In particular, if the depth of $M$ is equal to the Krull dimension of $R$,
then $M$ is free.
However, consider instead the hypersurface ring $S=R/w$, where $w$ is singular
at $m$. The Auslander-Buchsbaum formula then fails, and the condition
$depth(M)=dim(S)$
no longer implies that $M$ is free. A module satisfying this condition is
called a maximal Cohen-Macaulay module.
Given a maximal Cohen-Macaulay $S$-module $M$, we can consider $M$ as an
$R$-module. By the Auslander-Buchsbaum formula, $M$ then admits an $R$-free
resolution of length 1. Suppose the short exact sequence
$0\rightarrow F^{1}\xrightarrow{f}F^{0}\rightarrow M\rightarrow 0$
is this $R$-free resolution of $M$. Since multiplication by $w$ on $M$ is
trivial, there exists a homotopy $g$ such that the diagram
${F^{1}}$${F^{0}}$${F^{1}}$${F^{0}}$$\scriptstyle{f}$$\scriptstyle{w}$$\scriptstyle{g}$$\scriptstyle{w}$$\scriptstyle{f}$
commutes. The pair $(f,g)$ defines a matrix factorization of $w$. Note that
the original maximal Cohen-Macaulay module $M$ is isomorphic to $coker(f)$.
It is therefore natural to make the following definition, where we assume
that $R$ is a commutative ring over a field $k$ and $w\in R$ is an element.
###### Definition 2.1.
A matrix factorization of the potential $w$ over $R$ is a pair
$(E,\delta_{E})=(E^{0}\xrightarrow{\delta_{0}}E^{1}\xrightarrow{\delta_{1}}E^{0}),$
where
* •
$E=E^{0}\oplus E^{1}$ is a $\mathbb{Z}/2$-graded finitely generated projective
$R$-module, and
* •
$\delta_{E}=\begin{pmatrix}0&\delta_{1}\\\ \delta_{0}&0\end{pmatrix}\in
End_{R}^{1}(E)$ is an odd (i.e. of degree $1\in\mathbb{Z}/2$) endomorphism of
$E$, such that $\delta_{E}^{2}=w\cdot id_{E}$.
###### Remark 2.2.
The map $\delta_{E}$ is usually called a twisted differential in the
literature. We will adopt this terminology in this paper. Note that
$\delta_{E}^{2}=w\cdot id_{E}$ is also equivalent to
$\delta_{0}\delta_{1}=\delta_{1}\delta_{0}=w\cdot I$.
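For instance, with $R=k[[x]]$ and the potential $w=x^{3}$ (the potential used in Section 6), a minimal matrix factorization is given by $E^{0}=E^{1}=R$ with $\delta_{0}=x$ and $\delta_{1}=x^{2}$:

```latex
(E,\delta_{E}) = (R \xrightarrow{\;x\;} R \xrightarrow{\;x^{2}\;} R),
\qquad
\delta_{0}\delta_{1} = \delta_{1}\delta_{0} = x^{3} = w\cdot id .
```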
In the case when $R=k[[x_{1},x_{2},\cdots,x_{n}]]$ is the ring of formal power
series in $n$ variables over a field $k$, $E^{0}$ and $E^{1}$ are free
$R$-modules. Moreover, $E^{0}$ and $E^{1}$ have the same rank over $R$ by the
requirement $\delta_{0}\delta_{1}=\delta_{1}\delta_{0}=w\cdot I$, hence all
the maps $\delta_{0}$, $\delta_{1}$, and $\delta_{E}$ can be represented as
matrices of elements of $R$. For all explicit examples in this paper, we will
take $R$ to be a ring of formal power series.
To a potential $w\in R$ we can associate a $\mathbb{Z}/2$ dg-category
$MF(w)\coloneqq MF(R,w)$ whose objects are matrix factorizations of $w$ over
$R$. The morphisms from $\bar{E}=(E,\delta_{E})$ to $\bar{F}=(F,\delta_{F})$
are elements of the $\mathbb{Z}/2$-graded module of $R$-linear homomorphisms
$\mathcal{H}om_{w}(\bar{E},\bar{F})\coloneqq
Hom_{Mod_{R}}(E,F)=Hom_{\mathbb{Z}/2-Mod_{R}}(E,F)\oplus
Hom_{\mathbb{Z}/2-Mod_{R}}(E,F[1]).$
The $\mathbb{Z}/2$-graded dg-structure is given by the following differential
on $f\in\mathcal{H}om_{w}(\bar{E},\bar{F})$
$df=\delta_{F}\circ f-(-1)^{|f|}f\circ\delta_{E}.$
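A quick check, following directly from the definitions above, that $d$ squares to zero (so that the morphism complexes are genuine $\mathbb{Z}/2$-graded complexes): since $\delta_{E}^{2}=\delta_{F}^{2}=w\cdot id$ and $w$ is central, the cross terms cancel and

```latex
d^{2}f
= \delta_{F}^{2}\circ f
  - (-1)^{|f|}\,\delta_{F}\circ f\circ\delta_{E}
  + (-1)^{|f|}\,\delta_{F}\circ f\circ\delta_{E}
  - f\circ\delta_{E}^{2}
= w\cdot f - f\cdot w = 0 .
```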
We use
$HMF(R,w)=H^{0}MF(R,w)$
to denote the homotopy category associated to $MF(R,w)$; that is, the
morphisms in this category are chain maps up to homotopy. The homotopy
category $HMF(R,w)$ is naturally triangulated (see e.g. [Orl04]), with the
shift functor
$T:(E^{0}\xrightarrow{\delta_{0}}E^{1}\xrightarrow{\delta_{1}}E^{0})\mapsto(E^{1}\xrightarrow{-\delta_{1}}E^{0}\xrightarrow{-\delta_{0}}E^{1}).$
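Applying $T$ twice negates each differential twice, so one recovers the original object on the nose; this is the source of the Bott isomorphism $[2]\simeq id$ which makes $HMF(R,w)$ a cyclic category in the sense of this paper:

```latex
T^{2}:(E^{0}\xrightarrow{\delta_{0}}E^{1}\xrightarrow{\delta_{1}}E^{0})
\mapsto(E^{0}\xrightarrow{-(-\delta_{0})}E^{1}\xrightarrow{-(-\delta_{1})}E^{0})
=(E^{0}\xrightarrow{\delta_{0}}E^{1}\xrightarrow{\delta_{1}}E^{0}).
```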
###### Remark 2.3.
Eisenbud proved that (see [Eis80]) in the case when $(R,m)$ is a regular local
ring and $S=R/w$, where $w$ is singular at the closed point $m$, the functor
$coker$ induces an equivalence
$coker:HMF(R,w)\xrightarrow{\sim}\underline{MCM}(S).$
Here $\underline{MCM}(S)$ is the stable category of maximal Cohen-Macaulay
$S$-modules. The objects are maximal Cohen-Macaulay modules, the morphisms are
defined by
$\underline{Hom}_{S}(M,M^{\prime})=Hom_{S}(M,M^{\prime})/P,$
where $P$ denotes the set of $S$-linear homomorphisms factoring through some
free $S$-module.
In such a case, Buchweitz proved that there is another equivalence
$\underline{MCM}(S)\rightarrow D_{sing}^{b}(S),$
where $D_{sing}^{b}(S)$ is the Verdier quotient
$D_{sing}^{b}(S)\coloneqq D^{b}(S)/D^{b}_{perf}(S).$
Here $D^{b}(S)$ is the derived category of all complexes of $S$-modules with
finitely generated total cohomology. Such a complex is called perfect if it is
isomorphic in $D^{b}(S)$ to a bounded complex of free $S$-modules. The full
triangulated subcategory formed by the perfect complexes is denoted by
$D^{b}_{perf}(S)$. See also the paper [Orl04] for the proofs of such
equivalences.
Moreover, in this case there is a non-degenerate pairing on the morphism
spaces in $HMF(R,w)$. This beautiful formula was first introduced by Kapustin
and Li using the path-integral method for $k=\mathbb{C}$ (see e.g. [KL03b]),
and was proved mathematically in [Mur13] and [DM12].
For simplicity, we can assume $R=k[[x_{1},\cdots,x_{n}]]$. For the explicit
meaning of the notation in the formula, please consult [DM12, Sections 1-3].
###### Theorem 2.4.
[DM12, Theorem 3.4] The pairing
$Hom(X,Y)\otimes_{R}Hom(Y,X[n])\rightarrow k,$
$(F,G)\mapsto(-1)^{\binom{n+1}{2}}\frac{1}{n!}Res[\begin{subarray}{c}tr(FG(dQ)^{n})\\\
\partial_{1}w,\partial_{2}w,\cdots,\partial_{n}w\end{subarray}]=(-1)^{\binom{n+1}{2}}\frac{1}{n!}Res[\begin{subarray}{c}tr(GF(dP)^{n})\\\
\partial_{1}w,\partial_{2}w,\cdots,\partial_{n}w\end{subarray}]$
provides a homologically non-degenerate pairing on the morphism complexes of
the category $MF(R,w)$ associated to the germ of an isolated hypersurface
singularity. Here $P,Q$ are the twisted differentials of $X$ and $Y$
respectively.
###### Proof.
The formula involving the twisted differential $Q$ of $Y$ is proved in [DM12],
and the formula involving the twisted differential $P$ of $X$ can be proved by
the same method. ∎
We conclude this section by briefly recalling the $G$-equivariant version of
matrix factorizations (see e.g. [PV12, Section 2.1]). If $G$ is a finite group
of automorphisms of $R$ which fixes the potential $w$, one defines the
$G$-equivariant $\mathbb{Z}/2$-graded dg-category of matrix factorizations
$MF_{G}(w)$ (and the corresponding homotopy category $HMF_{G}(w)$) by
requiring that all modules and morphisms be $G$-equivariant. In other words,
the module $E$ in Definition 2.1 should be a
$\mathbb{Z}/2$-graded finitely generated projective $R$-module equipped with a
compatible $G$-action, and $\delta_{E}$ has to be $G$-equivariant. Morphisms
between $G$-equivariant factorizations $\bar{E}$ and $\bar{F}$ should also be
compatible with the action of $G$, i.e.
$Hom_{MF_{G}(w)}(\bar{E},\bar{F})=Hom_{MF(w)}(\bar{E},\bar{F})^{G}.$
## 3\. Stability conditions on cyclic categories
### 3.1. Bridgeland stability conditions
The theory of Bridgeland stability conditions was introduced by Bridgeland in
[Bri07], motivated by Douglas’s work on D-branes and $\Pi$-stability [Dou02].
This theory was further studied by Kontsevich and Soibelman in [KS08]. In this
section, we will review some basic notions in the theory of stability
conditions (see [BBD82], [Bri07], [KS08] and [BLMS17]).
The first notion is $t$-structures on triangulated categories, which was
firstly introduced in [BBD82].
###### Definition 3.1.
Let $\mathcal{D}$ be a triangulated category. A $t$-structure on
$\mathcal{D}$ is a pair of full subcategories $(\mathcal{D}^{\leq
0},\mathcal{D}^{\geq 0})$ satisfying conditions (i)-(iii) below. We write
$\mathcal{D}^{\leq n}=\mathcal{D}^{\leq 0}[-n]$ and $\mathcal{D}^{\geq
n}=\mathcal{D}^{\geq 0}[-n]$ for every $n\in\mathbb{Z}$. The conditions
are:
(i) $Hom(E,F)=0$ for every $E\in\mathcal{D}^{\leq 0}$ and
$F\in\mathcal{D}^{\geq 1}$;
(ii) $\mathcal{D}^{\leq-1}\subset\mathcal{D}^{\leq 0}$ and $\mathcal{D}^{\geq
1}\subset\mathcal{D}^{\geq 0}$.
(iii) every object $E\in\mathcal{D}$ fits into an exact triangle
$\tau^{\leq 0}E\rightarrow E\rightarrow\tau^{\geq 1}E\rightarrow\tau^{\leq 0}E[1]$
with $\tau^{\leq 0}E\in\mathcal{D}^{\leq 0}$, $\tau^{\geq
1}E\in\mathcal{D}^{\geq 1}$.
The heart of the $t$-structure is $\mathcal{A}=\mathcal{D}^{\leq
0}\cap\mathcal{D}^{\geq 0}$. It is an abelian category (see [HT07, Theorem
8.1.9]). The associated cohomology functors are defined by
$H^{0}(E)=\tau^{\leq 0}\tau^{\geq 0}E$, $H^{i}(E)=H^{0}(E[i])$.
Combining this definition with Harder-Narasimhan filtrations, Bridgeland
defined the notion of stability conditions on a triangulated category in
[Bri07].
###### Definition 3.2.
A Bridgeland stability condition $(\mathcal{P},Z)$ on a triangulated category
$\mathcal{D}$ consists of a group homomorphism
$Z:K_{0}(\mathcal{D})\rightarrow\mathbb{C}$, called the central charge, which
factors through a fixed group homomorphism
$K_{0}(\mathcal{D})\rightarrow\Lambda$, where $\Lambda$ is a lattice of
finite rank, together with full subcategories
$\mathcal{P}(\phi)\subset\mathcal{D}$ for each $\phi\in\mathbb{R}$, satisfying
the following axioms:
(a) if $E\in\mathcal{P}(\phi)$ is a nonzero object, then
$Z(E)=m(E)exp(i\pi\phi)$ for some $m(E)\in\mathbb{R}_{>0}$,
(b) for all $\phi\in\mathbb{R}$, $\mathcal{P}(\phi+1)=\mathcal{P}(\phi)[1]$,
(c) if $\phi_{1}>\phi_{2}$ and $A_{j}\in\mathcal{P}(\phi_{j})$ then
$Hom_{\mathcal{D}}(A_{1},A_{2})=0$,
(d) for every $0\neq E\in\mathcal{D}$ there exist a finite sequence of real
numbers
$\phi_{1}>\phi_{2}>\cdots>\phi_{m}$
and a sequence of morphisms
$0=E_{0}\xrightarrow{f_{1}}E_{1}\xrightarrow{f_{2}}\cdots\xrightarrow{f_{m}}E_{m}=E$
such that the cone of $f_{j}$ is in $\mathcal{P}(\phi_{j})$ for all $j$.
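A standard example (classical, not specific to this paper): for the bounded derived category of coherent sheaves on a smooth projective curve $X$, classical slope stability gives a Bridgeland stability condition with heart $\mathcal{P}(0,1]=Coh(X)$ and central charge

```latex
Z(E) = -\deg(E) + i\,\mathrm{rank}(E),
\qquad
\phi(E) = \tfrac{1}{\pi}\arg Z(E) \in (0,1]
\ \text{for}\ 0 \neq E \in Coh(X),
```

so that $\mathcal{P}(\phi)$ consists of the slope-semistable sheaves of phase $\phi$, together with their shifts via axiom (b).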
###### Remark 3.3.
(i) If in (a) we allow $m(E)$ to be $0$ for $\phi\in\mathbb{Z}$, then the pair
$(\mathcal{P},Z)$ is called a weak stability condition. In [KS08], the authors
require the pair $(\mathcal{P},Z)$ to satisfy one extra condition (the support
property) in order to be a stability condition. We do not include this
condition because it is not needed in this paper.
(ii) This notion of Bridgeland stability conditions is categorical. Indeed,
suppose that $H:\mathcal{D}_{1}\xrightarrow{\sim}\mathcal{D}_{2}$ is an exact
equivalence between two triangulated categories, and $\sigma=(\mathcal{P},Z)$
is a Bridgeland stability condition on $\mathcal{D}_{2}$. Then we have a
Bridgeland stability condition $H^{*}\sigma=(H^{*}\mathcal{P},H^{*}Z)$ on
$\mathcal{D}_{1}$, which is defined in the following way:
* •
$H^{*}\mathcal{P}(\phi)=\\{E|H(E)\in\mathcal{P}(\phi)\\}$, for any
$\phi\in\mathbb{R}$.
* •
$H^{*}Z(E)=Z(H(E))$ for any $E$ in $\mathcal{D}_{1}$.
It is easy to check this is a Bridgeland stability condition on
$\mathcal{D}_{1}$.
The data $\mathcal{P}$ of full subcategories $\mathcal{P}(\phi)$ is called a
slicing on $\mathcal{D}$; a slicing can be viewed as a refinement of a
$t$-structure on a triangulated category. Indeed, one can easily check that a
slicing on $\mathcal{D}$ gives rise to many $t$-structures on $\mathcal{D}$:
for any $\phi\in\mathbb{R}$, we have a $t$-structure
$(\mathcal{P}(>\phi-1),\mathcal{P}(\leq\phi))$ on $\mathcal{D}$.
In particular, we get a heart
$\mathcal{P}(0,1]=\mathcal{P}(>0)\cap\mathcal{P}(\leq 1)$. Hence, a stability
condition $(\mathcal{P},Z)$ gives us a pair $(\mathcal{A},Z)$, where
$\mathcal{A}$ is an abelian category. This construction results in an
equivalent definition of stability conditions (see [Bri07, Proposition 5.3]).
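For orientation, one can keep in mind the standard example of this equivalent formulation; the following is a sketch recalled from the general theory of Bridgeland stability conditions, not a claim of the present text.

```latex
% Sketch: slope stability on a smooth projective curve C, packaged as a
% Bridgeland stability condition on D = D^b(Coh C).
%   heart:          A = Coh(C) = P(0,1],
%   central charge: Z(E) = -deg(E) + i * rank(E).
% An indecomposable sheaf E has phase phi(E) = (1/pi) arg Z(E) in (0,1],
% and P(phi) consists of shifts of slope-semistable sheaves of that phase.
\[
  Z(E) = -\deg(E) + i\,\operatorname{rank}(E),
  \qquad
  \phi(E) = \tfrac{1}{\pi}\arg Z(E)\in(0,1].
\]
```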
In this paper, we are interested in the triangulated categories with the
special property $[2]\simeq[0]$. Hence we have the following definition.
###### Definition 3.4.
A triangulated category $\mathcal{C}$ is called a cyclic category if there is
a canonical isomorphism of functors
$\beta:[2]\simeq id.$
We usually call $\beta$ a Bott isomorphism.
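For intuition, a standard source of such categories is given by 2-periodic structures such as matrix factorizations; the following sketch is an illustration from the general theory and is not asserted in this section.

```latex
% Sketch: for a hypersurface singularity W in R = k[x_1,...,x_n], a matrix
% factorization is a pair of maps between free R-modules
%   d_0 : V_0 -> V_1,   d_1 : V_1 -> V_0,   satisfying
\[
  d_1 \circ d_0 = W\cdot id_{V_0},
  \qquad
  d_0 \circ d_1 = W\cdot id_{V_1}.
\]
% The homotopy category MF(W) is triangulated and 2-periodic: shifting
% twice returns (V_0, V_1) to itself, so [2] is canonically isomorphic to
% the identity, playing the role of the Bott isomorphism beta.
```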
It is easy to see that no nontrivial cyclic category admits a $t$-structure.
Indeed, assume that there is a $t$-structure on $\mathcal{C}$.
Then $[1]\simeq[-1]$ implies that $id_{A[1]}\in Hom(A[1],A[1])\simeq
Hom(A[1],A[-1])=0$ for any object $A$ in the heart, so the heart is trivial.
As a slicing is a refinement of a $t$-structure, no nontrivial cyclic
category admits a Bridgeland stability condition.
However, if one intuitively thinks of a slicing as a helix parametrization of
the structure of a triangulated category, one would expect a notion of
$S^{1}$-grading on cyclic categories, namely a circle parametrization of
the structure of a cyclic category (see Figure 1 for this intuition).
### 3.2. Real decompositions and liftable commutative diagrams
To make this intuition precise, we need to do some preliminary work.
From here on, we will assume that the $k$-linear cyclic category $\mathcal{C}$
is Krull-Schmidt, i.e. every object decomposes into a direct sum of
indecomposable objects in a unique way up to isomorphism, and the endomorphism
ring of every indecomposable object is local.
###### Definition 3.5.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt cyclic category. We say that
$\mathcal{C}$ admits a real decomposition if every space
$Hom_{\mathcal{C}}(E,F)$ admits a grading by $\mathbb{R}$, i.e.
$Hom_{\mathcal{C}}(E,F)=\bigoplus_{a\in\mathbb{R}}Hom_{a}(E,F)$
for any two indecomposable objects $E,F\in\mathcal{C}$. These gradings must
satisfy the following conditions.
1. (1)
For any morphism $f\in Hom_{a}(E,F)$, we also have $f[1]\in
Hom_{a}(E[1],F[1])$.
2. (2)
For any two morphisms $f,g$ with
$f\in Hom_{a}(E,F),\ g\in Hom_{b}(F,G),$
we have
$g\circ f\in Hom_{(a+b)}(E,G).$
3. (3)
For any indecomposable object $E\in\mathcal{C}$, we have $id_{E}\in
Hom_{0}(E,E)$ and $\beta_{E}\in Hom_{0}(E[2],E)$.
4. (4)
A morphism $f\in Hom_{a}(E,F)$ is called a homogeneous morphism of degree $a$.
Every homogeneous isomorphism has degree $0$.
This decomposition also defines a function
$q:\\{\text{Nontrivial homogeneous morphisms}\\}\rightarrow\mathbb{R},$
which sends a nontrivial homogeneous morphism to its degree. The function $q$
is called the degree function associated with the real decomposition of
$\mathcal{C}$.
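As a toy computation with conditions (1)–(3), let $f:E\to E[1]$ be a homogeneous morphism on an indecomposable object $E$; the degree $a$ below is hypothetical, chosen only for illustration.

```latex
% Bookkeeping with conditions (1)-(3) for a homogeneous f : E -> E[1]
% of (hypothetical) degree a:
%   (1)  f[1] : E[1] -> E[2]   is again homogeneous of degree a;
%   (2)  f[1] o f : E -> E[2]  is homogeneous of degree a + a = 2a;
%   (3)  beta_E : E[2] -> E    is homogeneous of degree 0.
% Composing once more, the resulting endomorphism of E has degree
\[
  q\bigl(\beta_E\circ f[1]\circ f\bigr) \;=\; 0 + a + a \;=\; 2a .
\]
```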
###### Remarks 3.6.
(i) The degree function $q$ is also called the $R$-charge of a homogeneous
morphism in the physics literature (see e.g. [Wal05]). The real decomposition
endows a
graded local algebra structure on $Hom_{\mathcal{C}}(E,E)$ for any
indecomposable object $E\in\mathcal{C}$, and a graded
$Hom_{\mathcal{C}}(E,E)-Hom_{\mathcal{C}}(F,F)$ bi-module structure on
$Hom_{\mathcal{C}}(E,F)$ for any two indecomposable objects $E,F$.
(ii) If $\mathcal{C}$ is just a $k$-linear Krull-Schmidt category, we can
define a similar notion of $G$-decomposition, where $G$ is an arbitrary group
instead of $\mathbb{R}$. In this case, we only need the decomposition to
satisfy conditions (2) and (4) in Definition 3.5, and all the results in this
subsection hold in this generality.
We have the following lemma from the definition of real decomposition.
###### Lemma 3.7.
If $\mathcal{C}$ admits a real decomposition and $E,F$ are two isomorphic
indecomposable objects, then there exists an isomorphism $f\in Hom_{0}(E,F)$.
Moreover, $f$ induces an isomorphism of $\mathbb{R}$-graded vector spaces
$Hom_{\mathcal{C}}(F,G)\xrightarrow{\circ f}Hom_{\mathcal{C}}(E,G).$
###### Proof.
Since $\mathcal{C}$ is Krull-Schmidt, the sum of two non-invertible morphisms
in $Hom(E,F)$ is still not invertible. Hence any isomorphism must have a
homogeneous summand $f$ which is also an isomorphism. By condition (4) in
Definition 3.5, $f$ is of degree $0$. Condition (2) in Definition 3.5 implies
the lemma. ∎
###### Remark 3.8.
Taking $G$ to be $E$ in the lemma, we see that $f^{-1}$ is also a homogeneous
morphism of degree $0$.
Given a real decomposition of $\mathcal{C}$, we can define quasi-homogeneous
morphisms and liftable quasi-homogeneous morphisms.
###### Definition 3.9.
Suppose a morphism $f:A\rightarrow B$ can be written in the following way
$f:\bigoplus_{i=1}^{l}A_{i}\rightarrow\bigoplus_{j=1}^{m}B_{j},$
where $A_{i},B_{j}$ are nontrivial indecomposable objects for all $i,j$. Then
$f$ is called quasi-homogeneous if all of its direct summands
$f_{ji}:A_{i}\rightarrow B_{j}$ are homogeneous morphisms.
We also need the following terminology to define liftable quasi-homogeneous
morphisms.
###### Definition 3.10.
(1) Given a real decomposition on $\mathcal{C}$, we call the following diagram
$E_{0}\xleftrightarrow{f_{0}}E_{1}\xleftrightarrow{f_{1}}\cdots\xleftrightarrow{f_{n-1}}E_{n}$
a connecting path from $E_{0}$ to $E_{n}$ if $E_{i}$ is an indecomposable
object for any $0\leq i\leq n$ and $f_{j}$ is a nontrivial homogeneous
morphism in either $Hom_{\mathcal{C}}(E_{j},E_{j+1})$ or
$Hom_{\mathcal{C}}(E_{j+1},E_{j})$ for any $0\leq j\leq n-1$. We usually use
$l$ to denote a connecting path, and let
$q(l)\coloneqq\Sigma_{i=0}^{n-1}Sign(f_{i})q(f_{i}),$
where
$Sign(f_{i})=\begin{cases}1,&\text{if }f_{i}\in
Hom_{\mathcal{C}}(E_{i},E_{i+1});\\\ -1,&\text{if }f_{i}\in
Hom_{\mathcal{C}}(E_{i+1},E_{i}).\end{cases}$
In the special case when $E_{0}=E_{n}$, we say that $l$ is a connecting loop.
A connecting path $l$ is called simple if it contains no connecting loops.
(2) The cyclic category $\mathcal{C}$ is called connective if for any two
indecomposable objects $E,F$, there exists a connecting path between them.
(3) Let D be a commutative diagram in $\mathcal{C}$ all of whose morphisms
are quasi-homogeneous. We say that a connecting path
$l:E_{0}\xleftrightarrow{f_{0}}E_{1}\xleftrightarrow{f_{1}}\cdots\xleftrightarrow{f_{n-1}}E_{n}$
is in D if every homogeneous morphism $f_{i}$ is a homogeneous direct summand
of a quasi-homogeneous morphism in that commutative diagram.
###### Remarks 3.11.
(1) The degree of a connecting path depends on its direction. Indeed, a
connecting path $l$ from $E$ to $F$ can also be viewed as a connecting path
from $F$ to $E$, which we denote by $\bar{l}$. We have
$q(l)=-q(\bar{l}).$
(2) The property of being connective is independent of the real decomposition;
it is a property of the category $\mathcal{C}$ itself.
(3) For any commutative diagram D in $\mathcal{C}$, there is an associated
directed graph $G(\textbf{D})$ whose vertices are the objects in D and whose
edges are the morphisms in D. We are only interested in commutative diagrams whose
associated graph is simple, i.e. with neither self loops nor multiple edges.
So every commutative diagram in this paper is assumed to be of such kind.
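A small worked example of the sign convention in Definition 3.10; the degrees $a,b,c$ below are hypothetical, chosen only for illustration.

```latex
% Zig-zag path with one backward arrow:
%   l :  E_0 --f_0--> E_1 <--f_1-- E_2 --f_2--> E_3 ,
% with f_0 in Hom_a(E_0,E_1), f_1 in Hom_b(E_2,E_1), f_2 in Hom_c(E_2,E_3).
% The arrow f_1 points backwards, so Sign(f_1) = -1 and
\[
  q(l) \;=\; (+1)\,a + (-1)\,b + (+1)\,c \;=\; a - b + c .
\]
% Traversed in the opposite direction, every sign flips, recovering
% q(\bar{l}) = -q(l) as in Remark 3.11(1).
```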
All diagrams below are assumed to be in a $k$-linear Krull-Schmidt cyclic
category $\mathcal{C}$ which admits a real decomposition. We introduce two
special kinds of such commutative diagrams.
###### Definition 3.12.
Let D be a commutative diagram in $\mathcal{C}$. We say that D is liftable if
the following conditions are satisfied:
1. (1)
All the morphisms in D are quasi-homogeneous.
2. (2)
For any connecting loop $l$ in D, we have $q(l)=0$.
A commutative diagram D is called locally liftable, if for every simply
connected sub-diagram $\textbf{D}^{\prime}\subset\textbf{D}$, we have that
$\textbf{D}^{\prime}$ is liftable.
A liftable commutative diagram D is called connective, if for any two
nontrivial indecomposable summands $A,B$ of the objects in D, there is a
connecting path $l:A\dashrightarrow B$ in D.
###### Remarks 3.13.
(1) A sub-diagram in D is a commutative diagram $\textbf{D}^{\prime}$ whose
associated graph $G(\textbf{D}^{\prime})$ is a sub-graph of $G(\textbf{D})$,
and $\textbf{D}^{\prime},\textbf{D}$ share the same object and morphism over
the same vertex and edge respectively. The sub-diagram is called simply
connected if the associated graph $G(\textbf{D}^{\prime})$ is simply
connected.
(2) Let D be a liftable commutative diagram and
$f:\bigoplus_{i=1}^{l}A_{i}\rightarrow\bigoplus_{j=1}^{m}B_{j}$ be a quasi-
homogeneous morphism in D. We can define a degree matrix $q(f)$ of $f$ in D.
Indeed, if there exists a connecting path $l$ from $A_{i}$ to $B_{j}$ in D, we
define the $(j,i)$-th entry of $q(f)$ to be $q(l)$. Otherwise, this entry is
not defined.
Note that the degree matrix $q(f)$ depends on the liftable commutative diagram
D. We suppress this dependence in the notation.
###### Example 3.14.
Let
$f=\begin{pmatrix}f_{11}&\cdots&f_{1l}\\\ \cdots&\cdots&\cdots\\\
f_{m1}&\cdots&f_{ml}\end{pmatrix}$
be a quasi-homogeneous morphism in a liftable commutative diagram. If moreover
$f$ is connective, i.e. for arbitrary $1\leq i\leq l$ and $1\leq j\leq m$
there exists a connecting path from $A_{i}$ to $B_{j}$, then every entry of
the degree matrix $q(f)$ is well-defined, and we may set
$R(f)\coloneqq\begin{pmatrix}exp(2\pi i\cdot q(f)_{11})&\cdots&exp(2\pi i\cdot
q(f)_{1l})\\\ \cdots&\cdots&\cdots\\\ exp(2\pi i\cdot
q(f)_{m1})&\cdots&exp(2\pi i\cdot q(f)_{ml})\end{pmatrix}.$
It is easy to show that the matrix $R(f)$ has rank 1. We call $R(f)$
the $R$-matrix of $f$ when $f$ is a connective liftable quasi-homogeneous
morphism.
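The rank-one claim can be verified directly; the following sketch introduces auxiliary reference paths $l_i$, $m_j$ that are not part of the text. Liftability forces the degree matrix to split into a row contribution and a column contribution.

```latex
% Why rank R(f) = 1. Fix connecting paths l_i : A_1 -> A_i and
% m_j : B_1 -> B_j inside the ambient liftable diagram D. Any two
% connecting paths between fixed endpoints differ by a loop of degree 0,
% so q(f)_{ji} equals the degree of the path A_i -> A_1 -> B_1 -> B_j:
\[
  q(f)_{ji} \;=\; -\,q(l_i) \;+\; q(f)_{11} \;+\; q(m_j).
\]
% Setting u_j = exp(2\pi i (q(f)_{11} + q(m_j))) and
% v_i = exp(-2\pi i\, q(l_i)) gives R(f)_{ji} = u_j v_i, an outer product
\[
  R(f) \;=\; u\,v^{\mathsf{T}},\qquad u\in\mathbb{C}^{m},\ v\in\mathbb{C}^{l},
\]
% which has rank 1.
```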
We have the following simple lemma.
###### Lemma 3.15.
Let D be a chain of quasi-homogeneous morphisms
$A_{0}\xrightarrow{f_{0}}A_{1}\xrightarrow{f_{1}}\cdots\xrightarrow{f_{n-1}}A_{n}.$
If the chain D is liftable, then the following commutative diagram
${A_{1}}$${A_{2}}$${\cdots}$${A_{n-1}}$${A_{0}}$${A_{n}}$$\scriptstyle{f_{1}}$$\scriptstyle{f_{2}}$$\scriptstyle{f_{n-2}}$$\scriptstyle{f_{n-1}}$$\scriptstyle{f_{0}}$$\scriptstyle{f_{1}\circ
f_{0}}$$\scriptstyle{f_{n-2}\circ\cdots\circ
f_{0}}$$\scriptstyle{f_{n-1}\circ\cdots\circ f_{0}}$
is also liftable.
###### Proof.
For any $1\leq i\leq n$, we can write
$A_{0}=\bigoplus_{j=1}^{m}A_{0,j},\ \ A_{i}=\bigoplus_{k=1}^{l}A_{i,k}.$
Let $f_{kj}:A_{0,j}\rightarrow A_{i,k}$ denote the corresponding summand in
$f_{i-1}\circ\cdots\circ f_{0}$. The morphism $f_{kj}$ can be written as the
sum of homogeneous morphisms from $A_{0,j}$ to $A_{i,k}$ in D. As D is
liftable, these homogeneous morphisms are of the same degree. Hence any
nontrivial morphism $f_{kj}$ is a homogeneous morphism of degree equal to
$q(l)$, where $l:A_{0,j}\dashrightarrow A_{i,k}$ is a connecting path in D.
One can easily show that the commutative diagram is liftable by this
observation. ∎
###### Remark 3.16.
In fact, the same argument shows that the associated $(n+1)$-complete
commutative diagram is also liftable.
###### Lemma 3.17.
Consider the following diagram
${\bigoplus_{i=1}^{l}A_{i}}$${\bigoplus_{j=1}^{m}B_{j}}$${\bigoplus_{k=1}^{n}C_{k}}$$\scriptstyle{f}$$\scriptstyle{g}$
where
$f=\begin{pmatrix}f_{11}&\cdots&f_{1l}\\\ \cdots&\cdots&\cdots\\\
f_{m1}&\cdots&f_{ml}\end{pmatrix}\ \ \ \ \ \
g=\begin{pmatrix}g_{11}&\cdots&g_{1l}\\\ \cdots&\cdots&\cdots\\\
g_{n1}&\cdots&g_{nl}\end{pmatrix}$
are quasi-homogeneous morphisms. Suppose that this diagram is a sub-diagram of
a liftable commutative diagram D, and that it can be completed into a
commutative diagram by a morphism $h$.
${\bigoplus_{i=1}^{l}A_{i}}$${\bigoplus_{j=1}^{m}B_{j}}$${\bigoplus_{k=1}^{n}C_{k}}$$\scriptstyle{f}$$\scriptstyle{g}$$\scriptstyle{h}$
Then there is a quasi-homogeneous morphism $\bar{h}$ such that, if we add
$\bar{h}$ to the diagram D, we get a liftable diagram $\bar{\textbf{D}}$ and
the following diagram
${\bigoplus_{i=1}^{l}A_{i}}$${\bigoplus_{j=1}^{m}B_{j}}$${\bigoplus_{k=1}^{n}C_{k}}$$\scriptstyle{f}$$\scriptstyle{g}$$\scriptstyle{\bar{h}}$
is a liftable commutative diagram.
###### Proof.
If we write
$h=\begin{pmatrix}h_{11}&\cdots&h_{1m}\\\ \cdots&\cdots&\cdots\\\
h_{n1}&\cdots&h_{nm}\end{pmatrix}$
we claim that the morphism
$\bar{h}=\begin{pmatrix}\bar{h}_{11}&\cdots&\bar{h}_{1m}\\\
\cdots&\cdots&\cdots\\\ \bar{h}_{n1}&\cdots&\bar{h}_{nm}\end{pmatrix},$
where
$\bar{h}_{kj}=\begin{cases}\text{summand of}\ h_{kj}\ \text{in degree}\
q(l),&\text{if there is a connecting path}\ l:B_{j}\dashrightarrow
C_{k}\in\textbf{D};\\\ 0,&\text{otherwise},\end{cases}$
satisfies the statement of the lemma.
First of all, the new diagram is liftable by construction. Indeed, for any
connecting loop $l$ in $\bar{\textbf{D}}$, we can replace the segments
$\bar{h}_{kj}$ in $l$ by connecting paths $l_{kj}:B_{j}\dashrightarrow
C_{k}\in\textbf{D}$. This does not change the degree of the loop, by the
definition of $\bar{h}_{kj}$. Hence we get a new loop
$l^{\prime}\in\textbf{D}$ with $q(l)=q(l^{\prime})=0$.
The next thing is to check that $g=\bar{h}\circ f$. It suffices to prove that
$\begin{pmatrix}\bar{h}_{k1}&\cdots&\bar{h}_{km}\end{pmatrix}\begin{pmatrix}f_{11}&\cdots&f_{1l}\\\
\cdots&\cdots&\cdots\\\
f_{m1}&\cdots&f_{ml}\end{pmatrix}=\begin{pmatrix}g_{k1}&\cdots&g_{kl}\end{pmatrix}$
for any $1\leq k\leq n$.
For any given $1\leq i_{1}\leq l$, we let
$J\coloneqq\\{j_{1},j_{2},\cdots,j_{s}\\}=\\{1\leq j\leq m|f_{ji_{1}}\neq
0\\}.$
The case $J=\emptyset$ is trivial, so we assume that $J\neq\emptyset$. If
there are no connecting paths from $B_{j_{r}}$ to $C_{k}$ in D for any $1\leq
r\leq s$, then the morphism $g_{ki_{1}}$ must be $0$. Hence
$\Sigma_{j=1}^{m}\bar{h}_{kj}f_{ji_{1}}=0=g_{ki_{1}}.$
On the other hand, suppose there is an integer $1\leq r_{1}\leq s$ such that
there is a connecting path from $B_{j_{r_{1}}}$ to $C_{k}$ in D. Then there is
a connecting path
$B_{j_{r}}\xleftarrow{f_{j_{r}i_{1}}}A_{i_{1}}\xrightarrow{f_{j_{r_{1}}i_{1}}}B_{j_{r_{1}}}\dashrightarrow
C_{k}$
for any $1\leq r\leq s$. Hence the morphisms $\bar{h}_{kj_{r}}f_{j_{r}i_{1}}$
are homogeneous of the same degree for any $1\leq r\leq s$, and this degree
equals $q(g_{ki_{1}})$ if $g_{ki_{1}}\neq 0$. This implies that
$\Sigma_{j=1}^{m}\bar{h}_{kj}f_{ji_{1}}=g_{ki_{1}}$ if $g_{ki_{1}}\neq 0$. If
$g_{ki_{1}}=0$, we know that $\Sigma_{j=1}^{m}\bar{h}_{kj}f_{ji_{1}}$ is a
homogeneous summand of $\Sigma_{j=1}^{m}h_{kj}f_{ji_{1}}=g_{ki_{1}}=0$, and
hence is also $0$. This completes the proof.
∎
###### Remark 3.18.
One thing worth noticing is that, as we can see in the proof, the morphism
$\bar{h}$ may depend on the ambient liftable commutative diagram D.
###### Proposition 3.19.
Suppose the following diagram
${A_{1}}$${\cdots}$${A_{i}}$${A_{i+1}}$${\cdots}$${A_{n-1}}$${A_{0}}$${A_{n}}$$\scriptstyle{f_{1}}$$\scriptstyle{f_{i-1}}$$\scriptstyle{f_{i+1}}$$\scriptstyle{f_{n-2}}$$\scriptstyle{f_{n-1}}$$\scriptstyle{f_{0}}$$\scriptstyle{g}$
is a sub-diagram of a liftable commutative diagram D, and it can be completed
into a commutative diagram by a morphism $h$.
${A_{1}}$${\cdots}$${A_{i}}$${A_{i+1}}$${\cdots}$${A_{n-1}}$${A_{0}}$${A_{n}}$$\scriptstyle{f_{1}}$$\scriptstyle{f_{i-1}}$$\scriptstyle{h}$$\scriptstyle{f_{i+1}}$$\scriptstyle{f_{n-2}}$$\scriptstyle{f_{n-1}}$$\scriptstyle{f_{0}}$$\scriptstyle{g}$
Then there is a quasi-homogeneous morphism $\bar{h}$ such that the following
diagram
${A_{1}}$${\cdots}$${A_{i}}$${A_{i+1}}$${\cdots}$${A_{n-1}}$${A_{0}}$${A_{n}}$$\scriptstyle{f_{1}}$$\scriptstyle{f_{i-1}}$$\scriptstyle{\bar{h}}$$\scriptstyle{f_{i+1}}$$\scriptstyle{f_{n-2}}$$\scriptstyle{f_{n-1}}$$\scriptstyle{f_{0}}$$\scriptstyle{g}$
is a liftable commutative diagram.
###### Proof.
The proof is similar to that of Lemma 3.17; we sketch it for the reader's
convenience.
We construct $\bar{h}$ and $\overline{f_{n-1}\circ\cdots\circ f_{i+1}\circ h}$
as in the proof of Lemma 3.17. By Lemma 3.15 and Lemma 3.17, we know that the
following diagram
${A_{1}}$${\cdots}$${A_{i}}$${A_{i+1}}$${\cdots}$${A_{n-1}}$${A_{0}}$${A_{n}}$$\scriptstyle{f_{1}}$$\scriptstyle{f_{i-1}}$$\scriptstyle{\overline{f_{n-1}\circ\cdots\circ
f_{i+1}\circ
h}}$$\scriptstyle{f_{i+1}}$$\scriptstyle{f_{n-2}}$$\scriptstyle{f_{n-1}}$$\scriptstyle{f_{0}}$$\scriptstyle{g}$
is liftable and commutative. It suffices to prove that
$\overline{f_{n-1}\circ\cdots\circ f_{i+1}\circ h}=f_{n-1}\circ\cdots\circ
f_{i+1}\circ\bar{h}$. We let $f\coloneqq f_{n-1}\circ\cdots\circ f_{i+1}$.
As in the proof of Lemma 3.17, we can assume that $A_{i},A_{n}$ are
indecomposable objects. Hence we can write
$A_{i+1}=\bigoplus_{j=1}^{m}D_{j},\ \ h=\begin{pmatrix}h_{1}\\\ \cdots\\\
h_{m}\end{pmatrix},\ \ f=\begin{pmatrix}f_{1}&\cdots&f_{m}\end{pmatrix}.$
Let
$J\coloneqq\\{j_{1},j_{2},\cdots,j_{s}\\}=\\{1\leq j\leq m|f_{j}\neq 0\\}.$
This implies that there is a connecting path from $D_{j_{r}}$ to $A_{n}$ in D
for any $1\leq r\leq s$. Thus the argument in the proof of Lemma 3.17 shows
that $\Sigma_{j=1}^{m}f_{j}\bar{h_{j}}$ is the homogeneous summand of
$\Sigma_{j=1}^{m}f_{j}h_{j}$ in the right degree. Hence
$f\circ\bar{h}=\overline{f\circ h}$. ∎
###### Corollary 3.20.
Let D be any liftable commutative diagram, and suppose that we can add a
morphism $h$ to D such that the new diagram $\textbf{D}^{\prime}$ is still
commutative. Then there exists a quasi-homogeneous morphism $\bar{h}$ such
that, if we add $\bar{h}$ to D in the same position, the new diagram
$\bar{\textbf{D}}$ is a liftable commutative diagram.
###### Proof.
We can construct $\bar{h}$ as in the proof of Lemma 3.17; the new diagram is
obviously liftable, and the commutativity follows from Lemma 3.15 and
Proposition 3.19. ∎
Sometimes we need to glue two liftable commutative diagrams along a common
sub-diagram. In general the glued diagram will not be liftable; however, we
have the following easy lemma.
###### Lemma 3.21.
Let $\textbf{D}_{1}$ and $\textbf{D}_{2}$ be two liftable diagrams,
$\textbf{D}_{3}$ be a common sub-diagram of $\textbf{D}_{1}$ and
$\textbf{D}_{2}$, and D be the glued diagram of $\textbf{D}_{1}$ and
$\textbf{D}_{2}$ along $\textbf{D}_{3}$. If $\textbf{D}_{3}$ is connective,
then D is liftable.
###### Proof.
We can focus on the loops which are not contained in $\textbf{D}_{1}$ or
$\textbf{D}_{2}$. Any such loop $l:E\dashrightarrow E$ in D can be
written as $l_{1}\circ l_{2}\circ\cdots\circ l_{n}$, where
$l_{i}:E_{i}\dashrightarrow E_{i+1}$ is a connecting path in $\textbf{D}_{1}$
or $\textbf{D}_{2}$ depending on the parity of $i$, and the indecomposable
objects $E_{i}$ lie in the diagram $\textbf{D}_{3}$ with $E_{1}=E_{n+1}=E$.
As $\textbf{D}_{3}$ is connective, we have connecting paths
$l_{i}^{\prime}:E_{i}\dashrightarrow E_{i+1}$ in $\textbf{D}_{3}$. Therefore,
one can get
$q(l)=\Sigma_{i=1}^{n}q(l_{i})=\Sigma_{i=1}^{n}q(l_{i}^{\prime})=q(l_{1}^{\prime}\circ
l_{2}^{\prime}\circ\cdots\circ l_{n}^{\prime})=0.$
The second equality follows from the assumption that $\textbf{D}_{1}$ and
$\textbf{D}_{2}$ are liftable. Hence the lemma is proved. ∎
### 3.3. Definition of stability conditions
The following definition is about the compatibility between the triangulated
structure of $\mathcal{C}$ and the real decomposition on $\mathcal{C}$.
###### Definition 3.22.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt cyclic category which admits a
real decomposition. We say that $\mathcal{C}$ is pre-liftable with respect to
the real decomposition if the following conditions hold:
1. (1)
for any liftable quasi-homogeneous morphism $f:A\rightarrow B$, we can
complete it to a distinguished triangle
$A\xrightarrow{f}B\xrightarrow{g}C\xrightarrow{h}A[1]$
such that the following diagram
$\cdots\xrightarrow{g[-1]}A[-1]\xrightarrow{h[-1]}A\xrightarrow{f}B\xrightarrow{g}C\xrightarrow{h}A[1]\xrightarrow{f[1]}B[1]\xrightarrow{g[1]}\cdots$
is liftable,
2. (2)
if we have the following liftable commutative diagram
${X}$${X}$${Y}$${Z}$${X^{\prime}}$${Y[1]}$${Z^{\prime}}$${Y^{\prime}}$${X[1]}$${X[1]}$$\scriptstyle{u}$$\scriptstyle{id}$$\scriptstyle{v\circ
u}$$\scriptstyle{v}$$\scriptstyle{j}$$\scriptstyle{l}$$\scriptstyle{m}$$\scriptstyle{i}$$\scriptstyle{k}$$\scriptstyle{n}$$\scriptstyle{id}$
where all rows and columns are distinguished triangles, then it can be
completed into the following liftable commutative diagram
${X}$${X}$${Y}$${Z}$${X^{\prime}}$${Y[1]}$${Z^{\prime}}$${Y^{\prime}}$${X^{\prime}}$${Z[1]}$${X[1]}$${X[1]}$$\scriptstyle{u}$$\scriptstyle{id}$$\scriptstyle{v\circ
u}$$\scriptstyle{v}$$\scriptstyle{j}$$\scriptstyle{l}$$\scriptstyle{m}$$\scriptstyle{i}$$\scriptstyle{id}$$\scriptstyle{j[1]}$$\scriptstyle{f}$$\scriptstyle{k}$$\scriptstyle{g}$$\scriptstyle{n}$$\scriptstyle{h}$$\scriptstyle{id}$
where all the columns and rows are distinguished triangles.
Now, we are ready to define stability conditions on a $k$-linear Krull-Schmidt
cyclic category.
###### Definition 3.23.
A stability condition on a $k$-linear Krull-Schmidt cyclic category
$\mathcal{C}$ consists of four pieces of data $(\mathcal{Q},Z,\phi,q)$: a
circle slicing $\mathcal{Q}$, i.e. full subcategories $\mathcal{Q}(\psi)$
for any $\psi\in(0,2]$; a central charge
$Z:K_{0}(\mathcal{C})\xrightarrow{v}\Lambda\rightarrow\mathbb{C}$; a map
$\phi$ sending every indecomposable object $E$ to its phase $\phi(E)\in(0,2]$;
and a degree function $q$ of a real decomposition of $\mathcal{C}$. Here
$\mathcal{C}$ is required to be pre-liftable with respect to the real
decomposition, and these data must satisfy the following compatibility
conditions.
1. (1)
We have that $\phi(E[1])\equiv\phi(E)+1\ (mod\ 2\mathbb{Z})$.
2. (2)
For any indecomposable object $E$, the central charge can be written as
$Z(E)=m(E)e^{i\pi\phi(E)},$
where $m(E)\in\mathbb{R}_{\geq 0}$ and $\phi(E)\in(0,2]$.
3. (3)
For any homogeneous morphism $f:E_{1}\rightarrow E_{2}$ between two
indecomposable objects, we have
$q(f)\equiv\phi(E_{2})-\phi(E_{1})\ (mod\ 2\mathbb{Z}).$
4. (4)
$\mathcal{Q}(\psi)[1]=\mathcal{Q}(\psi^{\prime})$, where
$\psi^{\prime}\equiv\psi+1\ (mod\ 2\mathbb{Z})$ and
$\psi,\psi^{\prime}\in(0,2]$.
5. (5)
For any object $E\in\mathcal{Q}(\phi)$, we have $m(E)>0$ and $\phi(E)=\phi$.
6. (6)
For any nontrivial homogeneous morphism $f:E_{1}\rightarrow E_{2}$, if
$E_{k}\in\mathcal{Q}(\phi_{k})$ for $k=1,2$, we have $q(f)\geq 0$.
7. (7)
For any indecomposable object $E\in\mathcal{C}$, we have the following
filtration:
${E_{0}}$${E_{1}}$${\cdots}$${E_{n-2}}$${E_{n-1}}$${0=E_{0}}$${E_{1}}$${E_{2}}$${\cdots}$${E_{n-1}}$${E_{n}=E}$${Q_{1}}$${Q_{2}}$${\cdots}$${Q_{n-1}}$${Q_{n}}$$\scriptstyle{f_{1}}$$\scriptstyle{f_{2}}$$\scriptstyle{f_{n-1}}$$\scriptstyle{f_{n}}$$\scriptstyle{id}$$\scriptstyle{p_{1}}$$\scriptstyle{id}$$\scriptstyle{p_{2}}$$\scriptstyle{id}$$\scriptstyle{id}$$\scriptstyle{id}$$\scriptstyle{p_{n-1}}$$\scriptstyle{p_{n}}$
such that it satisfies the following conditions:
* •
the whole diagram is connective and liftable,
* •
for any $1\leq i\leq n$, we have $Q_{i}\in\mathcal{Q}(\phi_{i})$ being
nontrivial semi-stable objects,
* •
the diagram can be completed into the following liftable diagram,
${\cdots}$${\cdots}$${\cdots}$${\cdots}$${\cdots}$${E_{0}}$${E_{1}}$${\cdots}$${E_{n-2}}$${E_{n-1}}$${0=E_{0}}$${E_{1}}$${E_{2}}$${\cdots}$${E_{n-1}}$${E_{n}}$${Q_{1}}$${Q_{2}}$${\cdots}$${Q_{n-1}}$${Q_{n}}$${E_{0}[1]}$${E_{1}[1]}$${\cdots}$${E_{n-2}[1]}$${E_{n-1}[1]}$${E_{1}[1]}$${E_{2}[1]}$${\cdots}$${E_{n-1}[1]}$${E_{n}[1]}$${\cdots}$${\cdots}$${\cdots}$${\cdots}$${\cdots}$$\scriptstyle{t_{1}[-1]}$$\scriptstyle{t_{2}[-1]}$$\scriptstyle{t_{n-1}[-1]}$$\scriptstyle{t_{n}[-1]}$$\scriptstyle{f_{1}}$$\scriptstyle{f_{2}}$$\scriptstyle{f_{n-1}}$$\scriptstyle{f_{n}}$$\scriptstyle{id}$$\scriptstyle{p_{1}}$$\scriptstyle{id}$$\scriptstyle{p_{2}}$$\scriptstyle{id}$$\scriptstyle{id}$$\scriptstyle{id}$$\scriptstyle{p_{n-1}}$$\scriptstyle{p_{n}}$$\scriptstyle{t_{1}}$$\scriptstyle{t_{2}}$$\scriptstyle{t_{n-1}}$$\scriptstyle{t_{n}}$$\scriptstyle{f_{1}[1]}$$\scriptstyle{f_{2}[1]}$$\scriptstyle{f_{n-1}[1]}$$\scriptstyle{f_{n}[1]}$$\scriptstyle{p_{1}[1]}$$\scriptstyle{id}$$\scriptstyle{id}$$\scriptstyle{p_{2}[1]}$$\scriptstyle{id}$$\scriptstyle{id}$$\scriptstyle{p_{n-1}[1]}$$\scriptstyle{p_{n}[1]}$
where the sequences
$E_{i}\xrightarrow{f_{i+1}}E_{i+1}\xrightarrow{p_{i+1}}Q_{i+1}\xrightarrow{t_{i+1}}E_{i}[1]$
are distinguished triangles for all $0\leq i\leq n-1$. We call this diagram
the Harder-Narasimhan diagram of $E$,
* •
for any $2\leq i\leq n$, there exists a real number $c_{i}<0$ such that for
any two indecomposable summands $Q_{i-1,1}$ and $Q_{i,1}$ of $Q_{i-1}$ and
$Q_{i}$ respectively, there exists a simple connecting path from $Q_{i-1,1}$
to $Q_{i,1}$ in the diagram, and we have
$q(l)=c_{i}<0$
for any such simple connecting path $l$.
###### Remarks 3.24.
(i) Note that in condition (2), instead of simply requiring that $m(E)>0$ for
all indecomposable objects, we define $\phi(E)$ even when $Z(E)=0$. This is
because we want condition (3) to hold even when one of the central charges of
$E_{1},E_{2}$ is zero.
(ii) A filtration of $E$ as in condition (7) is also called a Harder-
Narasimhan filtration of $E$, and $Q_{i}$ is called the $i$-th Harder-
Narasimhan factor of such a filtration. Its uniqueness will be discussed in
Section 5. Also note that we do not require $E_{i},Q_{i}$ to be
indecomposable.
(iii) Sometimes such a stability condition may contain more information than
we need. Though this extra information could be encoded in a geometric
picture, we still want an equivalence relation among such stability
conditions. This equivalence relation will also be introduced in Section 5.
(iv) From the definition, one can easily show that for any integer $i$ and any
connecting path $l$ between two indecomposable summands of $Q_{i}$ in
condition (7), we have $q(l)=0$.
(v) To distinguish the two different notions of stability conditions, we will
call the notion in Definition 3.23 stability conditions, and the notion in
Definition 3.2 Bridgeland stability conditions. Usually the distinction will
be clear from context, depending on whether we assume the triangulated
category to be cyclic or not. In the next section, we will see that these two
notions of stability conditions, though different in general, are closely
related to each other.
## 4\. The $\mathbb{Z}$-lifts of cyclic categories and stability conditions
We write $Stab_{cyc}(\mathcal{C})$ for the set of stability conditions on a
$k$-linear Krull-Schmidt cyclic category. In this section, we will investigate
some basic properties of $Stab_{cyc}(\mathcal{C})$ and other closely related
topological spaces.
We start with the definition of charge triples and charge pairs.
###### Definition 4.1.
Assume that $\mathcal{C}$ is a $k$-linear Krull-Schmidt cyclic category. Then
a charge triple $R=(Z,\phi,q)$ consists of a central charge
$Z:K_{0}(\mathcal{C})\rightarrow\Lambda\rightarrow\mathbb{C}$, a map $\phi$
sending every indecomposable object $E$ to its phase $\phi(E)\in(0,2]$, and a
degree function $q$ of a real decomposition on $\mathcal{C}$, which satisfy
conditions (1), (2) and (3) in Definition 3.23.
A pair $(Z,q)$ is called a charge pair if for any two indecomposable objects
$E_{1},E_{2}$ with $Z(E_{i})=m(E_{i})e^{i\theta_{i}}\neq 0$ for $i=1,2$, we
have that
$q(l)\equiv\theta_{2}-\theta_{1}\ (mod\ 2\mathbb{Z})$
for any connecting path $l:E_{1}\dashrightarrow E_{2}$.
###### Remark 4.2.
As we will see in Lemma 4.3, in most cases a charge triple $(Z,\phi,q)$ is
determined by its charge pair $(Z,q)$.
Let us first fix the set of homogeneous morphisms, and denote the sets of
charge triples and charge pairs with this fixed set of homogeneous morphisms
on $\mathcal{C}$ by $\textbf{T}(\mathcal{C})$ and $\textbf{P}(\mathcal{C})$
respectively. There are natural topologies on these two sets, namely the
coarsest topologies such that the forgetful maps
$\textbf{T}(\mathcal{C})\rightarrow
Hom(\Lambda,\mathbb{C})\simeq\mathbb{C}^{rank(\Lambda)},\ \
\textbf{P}(\mathcal{C})\rightarrow
Hom(\Lambda,\mathbb{C})\simeq\mathbb{C}^{rank(\Lambda)},$ where
$(Z,\phi,q)\mapsto Z$ and $(Z,q)\mapsto Z$;
$\textbf{T}(\mathcal{C})\rightarrow\mathbb{R},\ \
\textbf{P}(\mathcal{C})\rightarrow\mathbb{R},$ where $(Z,\phi,q)\mapsto
q(f)$ and $(Z,q)\mapsto q(f)$;
and
$\textbf{T}(\mathcal{C})\rightarrow(0,2]\xrightarrow{e^{i\pi x}}S^{1},$
where $(Z,\phi,q)\mapsto\phi(E)$,
are continuous for any indecomposable object $E$ and any nonzero homogeneous
morphism $f$. Here $\mathbb{R}$, $S^{1}$ and $\mathbb{C}^{rank(\Lambda)}$ are
endowed with the standard Euclidean topology. There is also a continuous
forgetful map
$\textbf{T}(\mathcal{C})\rightarrow\textbf{P}(\mathcal{C}).$
###### Lemma 4.3.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt connective cyclic category.
Then the continuous map
$h:\textbf{T}(\mathcal{C})\rightarrow\textbf{P}(\mathcal{C})$
is an isomorphism away from the locus where the central charge $Z$ is trivial.
###### Proof.
Suppose that there exists an indecomposable object $E$ with $Z(E)\neq 0$; then
$\phi(E)$ is determined by $Z(E)$. For any other indecomposable object $F$, by
the assumption that $\mathcal{C}$ is connective, there exists a connecting
path $l$ from $E$ to $F$. Hence $\phi(E)$ and $q(l)$ determine $\phi(F)$. ∎
###### Remark 4.4.
One can easily show that over the locus where the central charge is trivial,
the map $h$ is an $S^{1}$-bundle map. By condition (5) in Definition 3.23,
there are no stability conditions above this locus.
In the next subsection, we will show that on distinguished components of
$\textbf{T}(\mathcal{C})$ with vanishing Maslov indices, stability conditions
can be lifted to Bridgeland stability conditions.
### 4.1. Maslov index and $\mathbb{Z}$ lifting
Suppose we are given a real decomposition on $\mathcal{C}$, and assume that
$\mathcal{C}$ is pre-liftable with respect to it. We can then define basic
loops in such a real decomposition.
###### Definition 4.5.
Assume that $\mathcal{C}$ admits a real decomposition and let $q$ be the
associated degree function. A fundamental distinguished hexagon in
$\mathcal{C}$ is the following locally liftable diagram
${\bigoplus_{j=1}^{m}B_{j}}$${\bigoplus_{k=1}^{n}C_{k}}$${\bigoplus_{i=1}^{l}A_{i}}$${\bigoplus_{i=1}^{l}A_{i}[1]}$${\bigoplus_{k=1}^{n}C_{k}[1]}$${\bigoplus_{j=1}^{m}B_{j}[1]}$$\scriptstyle{g}$$\scriptstyle{h}$$\scriptstyle{f}$$\scriptstyle{f[1]}$$\scriptstyle{\beta_{A}\circ
h[1]}$$\scriptstyle{g[1]}$
where
$\bigoplus_{i=1}^{l}A_{i}\xrightarrow{f}\bigoplus_{j=1}^{m}B_{j}\xrightarrow{g}\bigoplus_{k=1}^{n}C_{k}\xrightarrow{h}\bigoplus_{i=1}^{l}A_{i}[1]$
is a distinguished triangle in $\mathcal{C}$.
A basic loop $l$ in $\mathcal{C}$ is $\beta_{A_{i}}\circ f[1]\circ f$, where
$f:A_{i}\dashrightarrow A_{i}[1]$ is a connecting path in the diagram of the
distinguished triangle.
###### Remark 4.6.
The same loop can also be defined based at $B_{j}$ or $C_{k}$, as $\beta$
is a natural transformation.
We can now define the Maslov index of a basic loop.
###### Definition 4.7.
Given a basic loop in $\mathcal{C}$ as in Definition 4.5, its Maslov index is
$M(l)\coloneqq\frac{q(f)-1}{2}.$
As for its namesake in Fukaya categories, we will show that if all the Maslov
indices vanish, there is a suitable $\mathbb{Z}$-lift of $\mathcal{C}$.
###### Theorem 4.8.
Given a stability condition $\sigma=(\mathcal{Q},Z,\phi,q)$ on a $k$-linear
Krull-Schmidt cyclic category $\mathcal{C}$, assume that the Maslov indices
of all basic loops are zero. Then there are $\mathbb{Z}$-lifts of $\sigma$ and
$\mathcal{C}$, which we denote by $\widetilde{\sigma}$ and
$\widetilde{\mathcal{C}}$ respectively, such that $\widetilde{\sigma}$ is a
Bridgeland stability condition on $\widetilde{\mathcal{C}}$.
###### Proof.
The natural $\mathbb{Z}$-lift of $\mathcal{C}$ is defined in the following
way: the indecomposable objects are pairs $(E,\phi)$, where $E$ is an
indecomposable object in $\mathcal{C}$ and $\phi\in\mathbb{R}$ satisfies
$\phi(E)\equiv\phi\ (mod\ 2\mathbb{Z}).$
The morphism space between two such indecomposable objects is defined by the
following formula
$Hom_{\widetilde{\mathcal{C}}}((E_{1},\phi_{1}),(E_{2},\phi_{2}))=\\{f\in
Hom_{\mathcal{C}}(E_{1},E_{2})\mid f\text{ is homogeneous and }q(f)=\phi_{2}-\phi_{1}\\}.$
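As a consistency check (a sketch, assuming additivity of $q$ on composable homogeneous morphisms, which is part of what condition (2) of real decomposition guarantees): for $f:(E_{1},\phi_{1})\rightarrow(E_{2},\phi_{2})$ and $g:(E_{2},\phi_{2})\rightarrow(E_{3},\phi_{3})$ we have

$q(g\circ f)=q(g)+q(f)=(\phi_{3}-\phi_{2})+(\phi_{2}-\phi_{1})=\phi_{3}-\phi_{1},$

so $g\circ f$ again lies in $Hom_{\widetilde{\mathcal{C}}}((E_{1},\phi_{1}),(E_{3},\phi_{3}))$.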
Moreover, the composition of morphisms is well defined because of condition (2)
of real decomposition. Note that we can define a $k$-linear functor
$\pi:\widetilde{\mathcal{C}}\rightarrow\mathcal{C}$ in a natural way: it sends
objects $(E,\phi)$ to $E$ and is the natural inclusion on the spaces of
morphisms.
The triangulated structure on $\widetilde{\mathcal{C}}$ is defined in the
following way. The shift functor $[1]$ sends $(E,\phi)$ to $(E[1],\phi+1)$,
and is the natural isomorphism on the spaces of morphisms due to condition (1)
of real decomposition.
$Hom_{\widetilde{\mathcal{C}}}((E_{1},\phi_{1}),(E_{2},\phi_{2}))\xrightarrow{\sim}Hom_{\widetilde{\mathcal{C}}}((E_{1}[1],\phi_{1}+1),(E_{2}[1],\phi_{2}+1)).$
Obviously, one has $\pi\circ[1]=[1]\circ\pi$. We define a triangle in
$\widetilde{\mathcal{C}}$ to be distinguished if and only if its
image under $\pi$ is a distinguished triangle.
We need to check that this indeed defines a triangulated category
$\widetilde{\mathcal{C}}$ (for the definition of a triangulated category, we
refer to [Nee01, Chapter 1]).
First of all, assume that we have a morphism
$f:\bigoplus_{i=1}^{l}(A_{i},\phi_{i})\rightarrow\bigoplus_{j=1}^{m}(B_{j},\psi_{j})$
in $\widetilde{\mathcal{C}}$. The image of $f$ under $\pi$ is a liftable
quasi-homogeneous morphism, hence by the assumption that $\mathcal{C}$ is pre-
liftable, there exists a distinguished triangle
$\bigoplus_{i=1}^{l}A_{i}\xrightarrow{\pi(f)}\bigoplus_{j=1}^{m}B_{j}\xrightarrow{g}\bigoplus_{k=1}^{n}C_{k}\xrightarrow{h}\bigoplus_{i=1}^{l}A_{i}[1]$
in $\mathcal{C}$, and the diagram is liftable. Therefore, there exists a pre-image of this distinguished triangle in $\widetilde{\mathcal{C}}$:
$\bigoplus_{i=1}^{l}(A_{i},\phi_{i})\xrightarrow{f}\bigoplus_{j=1}^{m}(B_{j},\psi_{j})\xrightarrow{g}\bigoplus_{1\leq
k\leq
n,d\in\mathbb{Z}}(C_{k},\phi(C_{k})+2d)\xrightarrow{h}\bigoplus_{i=1}^{l}(A_{i}[1],\phi_{i}+1).$
The last term can be written as $\bigoplus_{i=1}^{l}(A_{i}[1],\phi_{i}+1)$ since
all Maslov indices of basic loops vanish.
Hence axiom TR1 is satisfied by our assumption that $\mathcal{C}$ is pre-liftable and all Maslov indices vanish. By Corollary 3.20, we know that TR3 is
also satisfied. The axiom TR2 holds in $\widetilde{\mathcal{C}}$ since it
already holds in $\mathcal{C}$. The octahedral axiom TR4 follows from condition
(2) in Definition 3.22.
For $\widetilde{\sigma}=(\mathcal{P},\widetilde{Z})$ on
$\widetilde{\mathcal{C}}$, we define the central charge $\widetilde{Z}$ to be
the composition
$K_{0}(\widetilde{\mathcal{C}})\xrightarrow{\pi_{0}}K_{0}(\mathcal{C})\xrightarrow{Z}\mathbb{C}$.
We define $\mathcal{P}(\phi)$ to be the full subcategory consisting of objects
$(E,\phi)$ such that $E\in\mathcal{Q}(\phi^{\prime})$, where
$\phi^{\prime}\equiv\phi\ (mod\ 2\mathbb{Z})$. One can easily check that
$\widetilde{\sigma}$ is a Bridgeland stability condition on
$\widetilde{\mathcal{C}}$ by unwinding the definitions. ∎
###### Remarks 4.9.
(1) The Octahedral Axiom (TR 4) of $\widetilde{\mathcal{C}}$ holds by the
second condition in Definition 3.22. However, in practice, this condition
could be difficult to check. The reader may ignore that condition and take
$\widetilde{\mathcal{C}}$ to be just a pre-triangulated category. All the
results in this paper hold in that situation, as we do not use the Octahedral
Axiom in our proofs.
(2) We call the functor $\pi:\widetilde{\mathcal{C}}\rightarrow\mathcal{C}$ a
$\mathbb{Z}$-covering since it is similar to the $\mathbb{Z}$-covering of
topological spaces. We will use $\pi^{*}(\sigma)$ to denote
$\widetilde{\sigma}$. Also note that the $\mathbb{Z}$-lift
$\widetilde{\mathcal{C}}$ only depends on the charge triple. By its
construction we know that the functor $\pi$ is exact and faithful.
(3) The functor $\pi$ induces a surjective homomorphism on Grothendieck groups
$\pi_{0}:K_{0}(\widetilde{\mathcal{C}})\twoheadrightarrow K_{0}(\mathcal{C})$.
Hence the central charge
$Z:K_{0}(\widetilde{\mathcal{C}})\xrightarrow{\pi_{0}}K_{0}(\mathcal{C})\xrightarrow{v}\Lambda\rightarrow\mathbb{C}$
also factors through the given lattice $\Lambda$. In the following, we usually
fix the factorization
$K_{0}(\widetilde{\mathcal{C}})\xrightarrow{\pi_{0}}K_{0}(\mathcal{C})\xrightarrow{v}\Lambda$.
According to Theorem 4.8, the following definition is natural.
###### Definition 4.10.
(1) We say that $\mathcal{C}$ is liftable with respect to $R=(Z,\phi,q)$ if
$\mathcal{C}$ is pre-liftable with respect to the real decomposition in $R$
and all the Maslov indices vanish.
(2) Let $R_{1}=(Z_{1},\phi_{1},q_{1})$ and $R_{2}=(Z_{2},\phi_{2},q_{2})$ be
two charge triples on $\mathcal{C}$. Suppose that for any two connecting paths
$l_{1},l_{2}:E\dashrightarrow F$, the paths $l_{1},l_{2}$ are homogeneous
with respect to $R_{1}$ if and only if they are homogeneous with respect to
$R_{2}$, and moreover
$q_{1}(l_{1})-q_{1}(l_{2})=q_{2}(l_{1})-q_{2}(l_{2}).$
Then we say that $R_{1},R_{2}$ are deformation equivalent.
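A model case to keep in mind (a hypothetical sketch, assuming that re-phasing each indecomposable $E$ by a constant $c_{E}$ changes the degree of any connecting path $l:E\dashrightarrow F$ to $q_{2}(l)=q_{1}(l)+c_{F}-c_{E}$): for two parallel connecting paths $l_{1},l_{2}:E\dashrightarrow F$ the shifts cancel,

$q_{2}(l_{1})-q_{2}(l_{2})=(q_{1}(l_{1})+c_{F}-c_{E})-(q_{1}(l_{2})+c_{F}-c_{E})=q_{1}(l_{1})-q_{1}(l_{2}),$

so such a re-phasing produces a deformation equivalent charge triple, provided it also preserves homogeneity of paths.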
###### Proposition 4.11.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt connective cyclic category,
and $R_{1}=(Z_{1},\phi_{1},q_{1})$ and $R_{2}=(Z_{2},\phi_{2},q_{2})$ be two
charge triples on $\mathcal{C}$. Suppose that $R_{1},R_{2}$ are deformation
equivalent and $\mathcal{C}$ is liftable with respect to $R_{1}$. Then
$\mathcal{C}$ is also liftable with respect to $R_{2}$.
Moreover, if we denote the $\mathbb{Z}$-lifts by $\widetilde{\mathcal{C}}_{1}$
and $\widetilde{\mathcal{C}}_{2}$ respectively, there exists an equivalence
(not canonical)
$H:\widetilde{\mathcal{C}}_{1}\simeq\widetilde{\mathcal{C}}_{2}$.
###### Proof.
By definition, it is easy to see that $\mathcal{C}$ is pre-liftable with
respect to $R_{2}$. Indeed, for any basic loop $l_{1}:A\dashrightarrow A$ in
$\mathcal{C}$, taking $l_{2}=id_{A}$ we get $0=q_{1}(l_{1})=q_{2}(l_{1})$.
Hence $\mathcal{C}$ is also liftable with respect to $R_{2}$.
To construct an equivalence $H$, let $E$ be a nontrivial indecomposable object
in $\mathcal{C}$. We define
$H:\widetilde{\mathcal{C}}_{1}\rightarrow\widetilde{\mathcal{C}}_{2}$ in the
following way. First of all,
$H((E,\phi_{1}(E)+2k))=(E,\phi_{2}(E)+2k),$
for any $k\in\mathbb{Z}$.
Take another indecomposable object $F$; there exists a connecting path $l$
starting from $E$ and ending at $F$ in $\widetilde{\mathcal{C}}_{1}$. We set
$H((F,\phi_{1}(E)+q_{1}(l)+2k))\coloneqq(F,\phi_{2}(E)+q_{2}(l)+2k)$
for any $k\in\mathbb{Z}$. Our assumption
$q_{1}(l_{1})-q_{1}(l_{2})=q_{2}(l_{1})-q_{2}(l_{2})$
ensures that $H$ is independent of the choice of connecting path. As
$\mathcal{C}$ is connective, there exists a connecting path
$l^{\prime}:F\dashrightarrow F[1]$, and
$l^{\prime\prime}\coloneqq\overline{\beta\circ l^{\prime}[1]}:F\dashrightarrow
F[1]$ is another connecting path from $F$ to $F[1]$. By assumption we know that
$2q_{1}(l^{\prime})=q_{1}(l^{\prime})-q_{1}(l^{\prime\prime})=q_{2}(l^{\prime})-q_{2}(l^{\prime\prime})=2q_{2}(l^{\prime}),$
which implies that $q_{1}(l^{\prime})=q_{2}(l^{\prime})$. Hence we get
$\begin{split}H((F,\phi_{1}(E)+q_{1}(l)+q_{1}(l^{\prime})+2k-1)[1])&=H((F[1],\phi_{1}(E)+q_{1}(l^{\prime}\circ
l)+2k))\\\ &=(F[1],\phi_{2}(E)+q_{2}(l^{\prime})+q_{2}(l)+2k)\\\
&=(F,\phi_{2}(E)+q_{2}(l^{\prime})+q_{2}(l)+2k-1)[1]\end{split}$
This implies that $H\circ[1]=[1]\circ H$ on objects.
For the morphisms, assume that
$f\in Hom_{\mathcal{C}}(G,G^{\prime})$
is a homogeneous morphism between two indecomposable objects
$G,G^{\prime}\in\mathcal{C}$, and
$l:E\dashrightarrow G$
is a connecting path from $E$ to $G$. The morphism $f$ can be lifted to
$f^{\prime}\in
Hom_{\widetilde{\mathcal{C}}_{1}}((G,\phi_{1}(E)+q_{1}(l)+2k),(G^{\prime},\phi_{1}(E)+q_{1}(l)+q_{1}(f)+2k)),$
and we define
$H(f^{\prime})\in
Hom_{\widetilde{\mathcal{C}}_{2}}((G,\phi_{2}(E)+q_{2}(l)+2k),(G^{\prime},\phi_{2}(E)+q_{2}(l)+q_{2}(f)+2k)).$
From the definition of distinguished triangles in
$\widetilde{\mathcal{C}}_{1}$ and $\widetilde{\mathcal{C}}_{2}$, it is easy to
see that $H:\widetilde{\mathcal{C}}_{1}\rightarrow\widetilde{\mathcal{C}}_{2}$
sends distinguished triangles to distinguished triangles. Therefore, $H$ is an
exact functor between triangulated categories.
By the symmetry of $R_{1}$ and $R_{2}$, we can define a functor
$H^{\prime}:\widetilde{\mathcal{C}}_{2}\rightarrow\widetilde{\mathcal{C}}_{1}$
starting from the same indecomposable object $E\in\mathcal{C}$. It is easy to
check that $H$ and $H^{\prime}$ are inverse to each other as exact functors
between triangulated categories. ∎
###### Remarks 4.12.
(i) As we can see in the proof, different choices of $E$ may result in
different equivalences, hence the equivalence is not canonical. By the
assumption that $\mathcal{C}$ is connective, one can show that there are
$\mathbb{Z}$-many such equivalences in total.
(ii) This equivalence induces a commutative diagram between coverings.
[Diagram: $\widetilde{\mathcal{C}}_{1}\xrightarrow{H}\widetilde{\mathcal{C}}_{2}$, with the coverings $\pi_{1},\pi_{2}$ to $\mathcal{C}$ satisfying $\pi_{1}=\pi_{2}\circ H$]
(iii) Consistent choices of such equivalences can be viewed as a connection on
the $\mathbb{Z}$-lifts fibered over a connected component of
$\textbf{T}(\mathcal{C})$ where $\mathcal{C}$ is liftable with respect to the
charge triples in that component.
(iv) In theory, there could be multiple components of charge triples on which
$\mathcal{C}$ is liftable, and the lifted categories are not equivalent to
each other.
## 5\. Uniqueness of Harder-Narasimhan filtration
In this section, we will discuss the uniqueness of Harder-Narasimhan
filtrations. We start with the following lemma.
###### Lemma 5.1.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt connective cyclic category,
$\sigma=(\mathcal{Q},Z,\phi,q)$ be a stability condition on $\mathcal{C}$.
Assume that all the Maslov indices of basic loops are nonnegative and let
[Harder-Narasimhan diagram: the filtration $0=E_{0}\xrightarrow{f_{1}}E_{1}\xrightarrow{f_{2}}E_{2}\rightarrow\cdots\rightarrow E_{n-1}\xrightarrow{f_{n}}E_{n}=E$ together with the projections $p_{i}:E_{i}\rightarrow Q_{i}$]
be a Harder-Narasimhan filtration of an indecomposable object $E$ as in
condition (7) of Definition 3.23 with $n\geq 2$. Further assume that we have
the following liftable commutative diagram between distinguished triangles,
[Diagram: two copies of the distinguished triangle $E_{i-1}\xrightarrow{f_{i}}E_{i}\xrightarrow{p_{i}}Q_{i}\xrightarrow{t_{i}}E_{i-1}[1]$, connected by the vertical maps $\alpha$, $id$, $\delta$ and $\alpha[1]$]
and assume that, gluing two copies of the Harder-Narasimhan diagram along the
respective rows of this diagram, we get a liftable commutative diagram. Then
$\delta=id$ and $\alpha=id$.
###### Proof.
Firstly, we claim that there is a connecting path $l:E_{n-1,1}\dashrightarrow
E_{n-1,1}[1]$ in the following distinguished triangle
$E_{n-1}\xrightarrow{f_{n}}E_{n}\xrightarrow{p_{n}}Q_{n}\xrightarrow{t_{n}}E_{n-1}[1],$
where $E_{n-1,1}$ is an indecomposable summand of $E_{n-1}$.
Let $E_{n-1,1}$ be an arbitrary indecomposable summand of $E_{n-1}$, and
assume that $p[1]\circ t_{n}=0$, where $p:E_{n-1}\rightarrow E_{n-1,1}$ is the
natural projection map. We have the following diagram
[Diagram: a morphism from the triangle $E_{n-1}\xrightarrow{f_{n}}E_{n}\xrightarrow{p_{n}}Q_{n}\xrightarrow{t_{n}}E_{n-1}[1]$ to the triangle $E_{n-1,1}\rightarrow Q_{n}\oplus E_{n-1,1}\rightarrow Q_{n}\xrightarrow{0}E_{n-1,1}[1]$, with vertical maps $p$, $f$, $id$ and $p[1]$]
Precomposing with the natural inclusion $i:E_{n-1,1}\rightarrow E_{n-1}$ and
postcomposing with the natural projection $q:Q_{n}\oplus E_{n-1,1}\rightarrow
E_{n-1,1}$, we see that $E_{n-1,1}$ is a direct summand
of $E_{n}$. As $E_{n}$ is indecomposable, we get $E_{n-1,1}\simeq E_{n}$. Then
one can easily show that there is a morphism $i^{\prime}:E_{n}\rightarrow
E_{n-1}$ such that $f_{n}\circ i^{\prime}=id$; this implies that $p_{n}=0$,
which contradicts the first condition in Definition 3.23.(7). Hence we have shown
that $p[1]\circ t_{n}\neq 0$ for any indecomposable direct summand $E_{n-1,1}$ of
$E_{n-1}$.
This easily implies the claim. Indeed, by the connectivity condition in
Definition 3.23.(7), there is an indecomposable summand $E_{n-1,1}$ with
$f_{n}|_{E_{n-1,1}}\neq 0$, and any summand $p^{\prime}$ of $p_{n}$ is
nonzero. Hence we get a connecting path $l:E_{n-1,1}\dashrightarrow
E_{n-1,1}[1]$. By the assumption that all Maslov indices are nonnegative, we
have $q(l)\geq 1$.
We use $D$ to denote the diagram obtained by gluing the two Harder-Narasimhan
diagrams. As in Remark 3.24.(iv), it is easy to see that every summand of
$\delta$ is a homogeneous morphism of degree $0$. Hence, if we replace $\delta$
by $\delta-id$ in $D$, the new diagram is still liftable.
By assumption, we get that $(id-\delta)\circ p_{i}=0$. Hence $id-\delta$ is in
the image of the map
$Hom_{\mathcal{C}}(E_{i-1}[1],Q_{i})\xrightarrow{\circ
t_{i}}Hom_{\mathcal{C}}(Q_{i},Q_{i}),$
i.e. we can write $id-\delta=\delta_{i}\circ t_{i}$, where $\delta_{i}$ can be
chosen to be a quasi-homogeneous morphism such that the following diagram is
commutative and liftable by Corollary 3.20.
[Diagram: $Q_{i}\xrightarrow{t_{i}}E_{i-1}[1]$ with $id-\delta:Q_{i}\rightarrow Q_{i}$ factored as $\delta_{i}\circ t_{i}$ through $\delta_{i}:E_{i-1}[1]\rightarrow Q_{i}$]
where the first row is the Harder-Narasimhan diagram of $E$.
Assume that $id-\delta\neq 0$; then $t_{i}\neq 0$ and $\delta_{i}\neq 0$. We
have the following liftable commutative diagram
[Diagram: $E_{i-2}[1]\xrightarrow{f_{i-1}[1]}E_{i-1}[1]\xrightarrow{p_{i-1}[1]}Q_{i-1}[1]$, together with $\delta_{i}:E_{i-1}[1]\rightarrow Q_{i}$]
Our second claim is that $\delta_{i}$ is not in the image of
$Hom_{\mathcal{C}}(Q_{i-1}[1],Q_{i})\xrightarrow{\circ
p_{i-1}[1]}Hom_{\mathcal{C}}(E_{i-1}[1],Q_{i}).$
Suppose the contrary, i.e. there exists a nonzero morphism $\epsilon\in
Hom_{\mathcal{C}}(Q_{i-1}[1],Q_{i})$ with $\delta_{i}=\epsilon\circ
p_{i-1}[1]$. By Corollary 3.20, we can choose $\epsilon$ such that the
following diagram is commutative and liftable.
[Diagram: $E_{i-2}[1]\xrightarrow{f_{i-1}[1]}E_{i-1}[1]\xrightarrow{p_{i-1}[1]}Q_{i-1}[1]$, with $\delta_{i}:E_{i-1}[1]\rightarrow Q_{i}$ factored as $\epsilon\circ p_{i-1}[1]$ through $\epsilon:Q_{i-1}[1]\rightarrow Q_{i}$]
Let $Q_{i-1,1}[1],Q_{i,1}$ be indecomposable summands of $Q_{i-1}[1],Q_{i}$
respectively such that the homogeneous summand
$\epsilon_{11}:Q_{i-1,1}[1]\rightarrow Q_{i,1}$ is nonzero. By the proof of
Lemma 3.17, this implies that there is an indecomposable summand
$E_{i-1,1}[1]$ with a connecting path
$l_{1}:Q_{i-1,1}[1]\dashrightarrow E_{i-1,1}[1]$
in the liftable morphism $E_{i-1}[1]\xrightarrow{p_{i-1}[1]}Q_{i-1}[1]$, and a
connecting path
$l_{2}:E_{i-1,1}[1]\dashrightarrow Q_{i,1}$
in the Harder-Narasimhan diagram of $E$.
By the connectivity condition in Definition 3.23.(7), one can find a
connecting path
$l_{3}:E_{i-1,1}\dashrightarrow Q_{i,1}$
in the Harder-Narasimhan diagram of $E$.
By the last condition of Definition 3.23.(7), we know that
(1) $0>c_{i}=q(l_{3}\circ l_{1}[-1])=q(l_{3})+q(l_{1}).$
Once again by the connectivity assumption, one can find a connecting path
$l_{4}:E_{i-1,1}\dashrightarrow E_{n,1}$
in the Harder-Narasimhan diagram of $E$. Then we have the following connecting
loop.
[Diagram: the loop formed by $\bar{l}_{2}\circ l_{3}:E_{i-1,1}\dashrightarrow E_{i-1,1}[1]$, $l_{4}:E_{i-1,1}\dashrightarrow E_{n,1}$, $l_{4}[1]:E_{i-1,1}[1]\dashrightarrow E_{n,1}[1]$ and $l:E_{n,1}\dashrightarrow E_{n,1}[1]$]
As the Harder-Narasimhan diagram of $E$ is liftable, we know that
(2) $-q(l_{2})+q(l_{3})=q(\bar{l}_{2}\circ l_{3})=q(l)\geq 1>0.$
The inequalities (1) and (2) imply that
$q(\epsilon_{11})=q(l_{1})+q(l_{2})<q(l_{1})+q(l_{3})<0,$
and hence $\epsilon_{11}=0$ by Definition 3.23, which contradicts the
assumption $\epsilon_{11}\neq 0$. Hence our second claim is proved. Therefore,
we get that
$\delta_{i}\circ f_{i-1}[1]\neq 0.$
Applying the same argument as in the proof of our second claim inductively, we
eventually get that
$\delta_{i}\circ f_{i-1}[1]\circ\cdots\circ f_{2}[1]\circ f_{1}[1]\neq 0,$
which is impossible since $f_{1}=0$. Hence we proved that $\delta-id=0$.
The proof for $\alpha=id$ is similar. ∎
###### Proposition 5.2.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt connective cyclic category,
$\sigma=(\mathcal{Q},Z,\phi,q)$ be a stability condition on $\mathcal{C}$.
Assume that all the Maslov indices of basic loops are nonnegative and let
[Harder-Narasimhan diagram: the filtration $0=E_{0}\xrightarrow{f_{1}}E_{1}\xrightarrow{f_{2}}E_{2}\rightarrow\cdots\rightarrow E_{n-1}\xrightarrow{f_{n}}E_{n}=E$ together with the projections $p_{i}:E_{i}\rightarrow Q_{i}$]
be a Harder-Narasimhan filtration of an indecomposable object $E$ as in
condition (7) of Definition 3.23.
Then the composition $f_{j}\circ f_{j-1}\circ\cdots\circ f_{i}$ is nonzero for
any $2\leq i<j\leq n$.
###### Proof.
Without loss of generality, we can let $j=n$. By induction, we can assume that
$f_{n}\circ f_{n-1}\circ\cdots\circ f_{i+1}\neq 0$, and it suffices to prove
that $f_{n}\circ f_{n-1}\circ\cdots\circ f_{i}\neq 0$. Assume the contrary,
i.e. the composition $f_{n}\circ f_{n-1}\circ\cdots\circ f_{i}$ is trivial.
Since $f_{n}\circ f_{n-1}\circ\cdots\circ f_{i}=0$, we have the following
liftable commutative diagram.
[Diagram: the Harder-Narasimhan rows $E_{i-1}\xrightarrow{f_{i}}E_{i}\rightarrow\cdots\xrightarrow{f_{n}}E_{n}=E$ with the quotients $Q_{i},\ldots,Q_{n}$, together with a map $h_{n}:Q_{i}\rightarrow E_{n}$]
One can easily see that $p_{n}\circ h_{n}=0$ by Definition 3.23. Hence $h_{n}$
factors through $E_{n-1}$, and we have the following liftable diagram
[Diagram: the Harder-Narasimhan rows $E_{i-1}\xrightarrow{f_{i}}E_{i}\rightarrow\cdots\xrightarrow{f_{n}}E_{n}=E$ with the quotients $Q_{i},\ldots,Q_{n}$, together with $h_{n}:Q_{i}\rightarrow E_{n}$ and $h_{n-1}:Q_{i}\rightarrow E_{n-1}$]
We claim that this diagram is commutative. It suffices to prove that
$h_{n-1}\circ p_{i}=f_{n-1}\circ\cdots\circ f_{i+1}.$
Denoting the difference $h_{n-1}\circ p_{i}-f_{n-1}\circ\cdots\circ f_{i+1}$ by
$\delta$, we have $f_{n}\circ\delta=0$. Hence we have the following liftable
commutative diagram
[Diagram: the rows $E_{i-1}\xrightarrow{f_{i}}E_{i}\xrightarrow{p_{i}}Q_{i}$ and $Q_{n}[-1]\xrightarrow{t}E_{n-1}\xrightarrow{f_{n}}E_{n}$ of the Harder-Narasimhan diagram, connected by $\delta$ and $\epsilon$]
where both rows are in the Harder-Narasimhan diagram of $E$. By the same
argument as in the proof of Lemma 5.1, we can show that $\delta=0$. Hence, the
claim is proved.
Continuing the same argument, we get two liftable commutative diagrams
[Diagram: the Harder-Narasimhan rows from $E_{i-1}$ to $E_{n}=E$ with the quotients $Q_{i},\ldots,Q_{n}$, together with the maps $h_{i+1}$ and $h_{n}$]
and
[Diagram: the Harder-Narasimhan rows from $E_{i-1}$ to $E_{n}=E$ with the quotients $Q_{i},\ldots,Q_{n}$, together with the maps $h_{i}$, $h_{i+1}$ and $h_{n}$]
Hence we get $f_{i+1}\circ(h_{i}\circ p_{i}-id)=0$. By the same argument as in
the proof of Lemma 5.1, one can show that $h_{i}\circ p_{i}=id$. Hence $E_{i}$
is a direct summand of $Q_{i}$, which implies that $f_{i}=0$. This contradicts
the connectivity condition. Hence, the proposition is proved. ∎
###### Theorem 5.3.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt connective cyclic category,
$\sigma=(\mathcal{Q},Z,\phi,q)$ be a stability condition on $\mathcal{C}$.
Assume that all the Maslov indices of basic loops are nonnegative.
Then the Harder-Narasimhan filtration of any indecomposable object $E$ is
unique up to isomorphism.
###### Proof.
Let the following liftable diagram
[Harder-Narasimhan diagram: the filtration $0=E_{0}\xrightarrow{f_{1}}E_{1}\xrightarrow{f_{2}}E_{2}\rightarrow\cdots\rightarrow E_{n-1}\xrightarrow{f_{n}}E_{n}=E$ together with the projections $p_{i}:E_{i}\rightarrow Q_{i}$]
be a Harder-Narasimhan diagram of an indecomposable object $E$. By Definition
3.23, we know that for any indecomposable summand $Q_{i,1}$ in $Q_{i}$, there
is a connecting path $l:Q_{i,1}\dashrightarrow E$ in the diagram. By
Definition 3.23.(7), $q(l)$ is independent of the path and depends only on the
integer $i$. Hence, we can denote this degree by $\phi_{i}$. Moreover, we also
have
$\phi_{1}<\phi_{2}<\cdots<\phi_{n}$
by definition.
Suppose we have another liftable diagram
[Harder-Narasimhan diagram: the filtration $0=E_{0}\xrightarrow{f_{1}^{\prime}}E_{1}^{\prime}\xrightarrow{f_{2}^{\prime}}E_{2}^{\prime}\rightarrow\cdots\rightarrow E_{m-1}^{\prime}\xrightarrow{f_{m}^{\prime}}E_{m}=E$ together with the projections $p_{i}^{\prime}:E_{i}^{\prime}\rightarrow Q_{i}^{\prime}$]
which is another Harder-Narasimhan diagram of an indecomposable object $E$.
Similarly, we have an increasing sequence of real numbers
$\phi_{1}^{\prime}<\phi_{2}^{\prime}<\cdots<\phi_{m}^{\prime},$
where $\phi_{i}^{\prime}$ denotes the degree of a connecting path from an
indecomposable summand $Q_{i,1}^{\prime}$ in $Q_{i}^{\prime}$ to $E$ in this
diagram.
As $E$ is an indecomposable object, we have the following liftable diagram.
[Diagram: the rows $E_{n-1}\xrightarrow{f_{n}}E\xrightarrow{p_{n}}Q_{n}$ and $E_{m-1}^{\prime}\xrightarrow{f_{m}}E\xrightarrow{p_{m}^{\prime}}Q_{m}^{\prime}$, connected by $id:E\rightarrow E$]
where the first row is in the first Harder-Narasimhan diagram, and the second
row is in the second Harder-Narasimhan diagram.
We claim that $\phi_{n}=\phi_{m}^{\prime}$. To prove the claim, first assume
that $\phi_{n}<\phi_{m}^{\prime}$. Under this assumption, one can show
that $p_{m}^{\prime}\circ f_{n}=0$. Indeed, if $p_{m}^{\prime}\circ f_{n}\neq
0$, then by the proof of Proposition 5.2 and the assumption that
$\phi_{n}<\phi_{m}^{\prime}$, we conclude that
$p_{m}^{\prime}\circ f_{n}\circ f_{n-1}\circ\cdots\circ f_{1}\neq 0,$
which is a contradiction. Hence we get $p_{m}^{\prime}\circ f_{n}=0$.
Therefore, we have a liftable commutative diagram
[Diagram: the rows $E_{n-1}\xrightarrow{f_{n}}E\xrightarrow{p_{n}}Q_{n}$ and $E_{m-1}^{\prime}\xrightarrow{f_{m}}E\xrightarrow{p_{m}^{\prime}}Q_{m}^{\prime}$, connected by $id:E\rightarrow E$ and $t:Q_{n}\rightarrow Q_{m}^{\prime}$]
However, by the assumption $\phi_{n}<\phi_{m}^{\prime}$, we know that any
homogeneous summand of $t$ is of negative degree, hence $t=0$, which implies
$p_{m}^{\prime}=0$. This contradicts the connectivity condition. Therefore, we
have proved that $\phi_{n}\geq\phi_{m}^{\prime}$. By a symmetric argument, we
get $\phi_{n}=\phi_{m}^{\prime}$, which proves the claim.
In the case $\phi_{n}=\phi_{m}^{\prime}$, the argument in the proof of Lemma
5.1 shows that $p_{n}\circ f_{m}=0$ and $p_{m}^{\prime}\circ f_{n}=0$. Hence we
have the following liftable and commutative diagram.
[Diagram: four rows alternating between $E_{n-1}\xrightarrow{f_{n}}E\xrightarrow{p_{n}}Q_{n}$ and $E_{m-1}^{\prime}\xrightarrow{f_{m}}E\xrightarrow{p_{m}^{\prime}}Q_{m}^{\prime}$, connected by the vertical maps $\alpha$, $id$, $t$, and $\alpha^{\prime}$, $id$, $t^{\prime}$]
By Lemma 5.1, we get that $Q_{n}\simeq Q_{m}^{\prime}$ and $E_{n-1}\simeq
E_{m-1}^{\prime}$.
Using similar arguments inductively, one can conclude that $n=m$,
$\phi_{i}=\phi_{i}^{\prime}$, $E_{i}\simeq E_{i}^{\prime}$, $Q_{i}\simeq
Q_{i}^{\prime}$, and that the last two classes of isomorphisms are compatible.
Hence these two Harder-Narasimhan filtrations are isomorphic.
∎
This theorem leads to the following natural definition of stability
conditions.
###### Definition 5.4.
A stability condition $\sigma$ is called a good stability condition if the
Harder-Narasimhan filtration of any indecomposable object is unique. We denote
the set of good stability conditions by $Stab_{cyc}^{u}(\mathcal{C})$.
###### Remark 5.5.
By Theorem 5.3, we know that $Stab_{cyc}^{u}(\mathcal{C})$ differs from
$Stab_{cyc}(\mathcal{C})$ only on the locus where there are some basic loops
with negative Maslov indices.
Also note that, although the non-negativity of Maslov indices is a sufficient
condition for the uniqueness of Harder-Narasimhan filtrations, it is not a
necessary condition.
As mentioned in Remark 3.24.(iii), the space of good stability conditions as in
Definition 3.23 may contain more information than we need. Therefore, we
introduce the following equivalence relation between good stability conditions.
###### Definition 5.6.
Let $\sigma_{1}=(\mathcal{Q}_{1},Z_{1},\phi_{1},q_{1})$ and
$\sigma_{2}=(\mathcal{Q}_{2},Z_{2},\phi_{2},q_{2})$ be two good stability
conditions on $\mathcal{C}$. We say that $\sigma_{1}$ is equivalent to
$\sigma_{2}$ if the following conditions are satisfied:
(1) The circle slicings and central charges are the same, i.e.
$\mathcal{Q}_{1}=\mathcal{Q}_{2}$ and $Z_{1}=Z_{2}$.
(2) A morphism $f$ is homogeneous with respect to $\sigma_{1}$ if and only if it
is homogeneous with respect to $\sigma_{2}$.
(3) For any indecomposable object $E$, the HN filtration of $E$ with respect to
$\sigma_{1}$ is isomorphic to the HN filtration of $E$ with respect to
$\sigma_{2}$.
(4) For any connecting path $l$ from a semi-stable object to another semi-stable
object, we have $q_{1}(l)=q_{2}(l)$.
###### Remark 5.7.
This is obviously an equivalence relation. We use $\sigma_{1}\simeq\sigma_{2}$
to denote such a relation.
We denote the set of equivalence classes of good stability conditions by
$Stab_{cyc}^{u,e}(\mathcal{C})$. We have the following diagram.
$Stab_{cyc}^{u,e}(\mathcal{C})\twoheadleftarrow
Stab_{cyc}^{u}(\mathcal{C})\hookrightarrow Stab_{cyc}(\mathcal{C}).$
The following two results are the main reasons for introducing this equivalence
relation. The notation is the same as in Proposition 4.11 and Remarks 4.12.
###### Proposition 5.8.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt connective cyclic category,
and $\sigma_{1},\sigma_{2}$ be two equivalent good stability conditions on
$\mathcal{C}$. Then the charge triples $R_{1},R_{2}$ of $\sigma_{1}$ and
$\sigma_{2}$ are deformation equivalent. In particular, the Maslov indices of
a basic loop with respect to $\sigma_{1},\sigma_{2}$ are the same.
Moreover, suppose that $\mathcal{C}$ is liftable with respect to $\sigma_{1}$
(hence also liftable with respect to $\sigma_{2}$). Then there exists a
canonical equivalence
$H:\widetilde{\mathcal{C}}_{1}\rightarrow\widetilde{\mathcal{C}}_{2}$ between
two $\mathbb{Z}$-coverings of $\mathcal{C}$, such that
$H^{*}\pi_{2}^{*}\sigma_{2}=\pi_{1}^{*}\sigma_{1},$
where $\pi_{1},\pi_{2}$ are projections in the following commutative diagram.
[Diagram: $\widetilde{\mathcal{C}}_{1}\xrightarrow{H}\widetilde{\mathcal{C}}_{2}$, with the coverings $\pi_{1},\pi_{2}$ to $\mathcal{C}$ satisfying $\pi_{1}=\pi_{2}\circ H$]
###### Proof.
To prove that $R_{1},R_{2}$ are deformation equivalent, let $l$ be a
connecting path from $E$ to $F$, and consider the Harder-Narasimhan
filtrations of $E$ and $F$ respectively. There are connecting paths
$l_{1}:Q_{1,1}\dashrightarrow E$ and $l_{2}:F\dashrightarrow
Q_{m,1}^{\prime}$, where $Q_{1,1}$ is an indecomposable summand in the first
Harder-Narasimhan factor of $E$ and $Q_{m,1}^{\prime}$ is an indecomposable
summand in the last Harder-Narasimhan factor of $F$. Therefore, we get a
connecting path
$l_{2}\circ l\circ l_{1}:Q_{1,1}\dashrightarrow Q_{m,1}^{\prime}.$
By Definition 5.6.(4), we get
$q_{1}(l_{2}\circ l\circ l_{1})=q_{2}(l_{2}\circ l\circ l_{1}),$
which implies that $q_{1}(l)-q_{2}(l)$ is a constant real number for any
connecting path $l:E\dashrightarrow F$. This proves that $R_{1},R_{2}$ are
deformation equivalent.
For the second half of the proposition, we assume that $\mathcal{C}$ is
liftable with respect to $\sigma_{1}$. Our equivalence $H$ is constructed in
the following way.
We start with a semi-stable object $E\in\mathcal{Q}(\phi)$, then proceed as in
the proof of Proposition 4.11. One can easily show that
$H^{*}\pi_{2}^{*}\sigma_{2}=\pi_{1}^{*}\sigma_{1}$
just by unwinding the definitions. This equivalence is independent of the
choice of the semi-stable object $E$ because of Definition 5.6.(4). ∎
We have the following theorem relating the stability conditions on
$\mathcal{C}$ to Bridgeland stability conditions on its $\mathbb{Z}$-lift. In
this theorem, we fix the factorization
$K_{0}(\widetilde{\mathcal{C}})\xrightarrow{\pi_{0}}K_{0}(\mathcal{C})\xrightarrow{v}\Lambda$
as in Remarks 4.9.(3).
###### Theorem 5.9.
Let $\sigma$ be a stability condition on a connective cyclic category
$\mathcal{C}$ such that $\mathcal{C}$ is liftable with respect to $\sigma$. By
Theorem 4.8, we have a $\mathbb{Z}$-lift $\widetilde{\mathcal{C}}$ of the
cyclic category $\mathcal{C}$. We use $Stab_{cyc}^{0}(\mathcal{C})$ to denote
the set of stability conditions whose charge triples are deformation
equivalent to the charge triple of $\sigma$, and
$Stab_{cyc}^{0,e}(\mathcal{C})$ to denote the set of equivalence classes
$Stab_{cyc}^{0}(\mathcal{C})/\simeq$.
Then there is an isomorphism
$Stab(\widetilde{\mathcal{C}})/2\mathbb{Z}\xrightarrow{\sim}Stab_{cyc}^{0,e}(\mathcal{C}),$
where $Stab(\widetilde{\mathcal{C}})$ denotes the space of Bridgeland
stability conditions on $\widetilde{\mathcal{C}}$, and the $2\mathbb{Z}$
action is given by $2k\mapsto[2k]$ for any $k\in\mathbb{Z}$.
###### Proof.
Let us first give a map
$\mathbb{L}:Stab^{0,e}_{cyc}(\mathcal{C})\rightarrow
Stab(\widetilde{\mathcal{C}})/2\mathbb{Z}.$
Fix an indecomposable object $G\in\mathcal{C}$. For any stability condition
$\sigma_{1}\in Stab_{cyc}^{0}(\mathcal{C})$, we have an equivalence
$H:\widetilde{\mathcal{C}}\rightarrow\widetilde{\mathcal{C}}_{1}$
by Proposition 4.11, where $\widetilde{\mathcal{C}}_{1}$ is the
$\mathbb{Z}$-lift of $\mathcal{C}$ with respect to $\sigma_{1}$. We define
$\mathbb{L}(\sigma_{1})=H^{*}\pi_{1}^{*}\sigma_{1},$
where $\pi_{1}:\widetilde{\mathcal{C}}_{1}\rightarrow\mathcal{C}$ is the
natural covering functor. This map $\mathbb{L}$ is well defined on equivalence
classes by Proposition 5.8 and the fact that these equivalences form a
connection (as we fix an indecomposable object $G$, see Remark 4.12).
On the other hand, we need to construct a map
$\mathbb{P}:Stab(\widetilde{\mathcal{C}})/2\mathbb{Z}\xrightarrow{}Stab_{cyc}^{0,e}(\mathcal{C}).$
Given a Bridgeland stability condition
$\widetilde{\sigma}_{1}=(\mathcal{P},\widetilde{Z})$ on
$\widetilde{\mathcal{C}}$, we can define a stability condition
$\mathbb{P}(\widetilde{\sigma}_{1})\coloneqq\sigma_{1}=(\mathcal{Q},Z,\phi,q)$
on $\mathcal{C}$ in the following way.
For the circle slicing $\mathcal{Q}$, let $\mathcal{Q}(\phi)$ be the full
subcategory consisting of objects $\pi(\mathcal{P}(\phi))$ for any
$\phi\in(0,2]$.
For the central charge, we know that $\widetilde{Z}$ factors as
$\widetilde{Z}:K_{0}(\widetilde{\mathcal{C}})\xrightarrow{\pi_{0}}K_{0}(\mathcal{C})\xrightarrow{v}\Lambda\xrightarrow{g}\mathbb{C};$
we let $Z:K_{0}(\mathcal{C})\xrightarrow{v}\Lambda\xrightarrow{g}\mathbb{C}$
be the central charge of $\sigma_{1}$.
For the phase function $\phi$, let $E$ be an indecomposable object in
$\mathcal{C}$. If
$Z(E)=m(E)e^{i\pi\phi}\neq 0$
with $\phi\in(0,2]$, we let $\phi(E)=\phi$. Otherwise, if $Z(E)=0$, we let
$\phi(E)$ be any real number in $(0,2]$ with $\phi(E[1])\equiv\phi(E)+1\
(mod\ 2\mathbb{Z})$. At the end of the proof we will show that this is well
defined, i.e. all the possible stability conditions defined in this way are
equivalent to each other.
For the real decomposition, for any two indecomposable
objects $E^{\prime},F^{\prime}\in\widetilde{\mathcal{C}}$ we have
$Hom_{\mathcal{C}}(\pi(E^{\prime}),\pi(F^{\prime}))=\bigoplus_{k\in\mathbb{Z}}Hom_{\widetilde{\mathcal{C}}}(E^{\prime},F^{\prime}[2k]).$
This defines the same set of homogeneous morphisms as in $\sigma$; we need to
assign an appropriate degree to each homogeneous morphism.
Let $f:E^{\prime}\rightarrow F^{\prime}$ be a nonzero morphism between two
indecomposable objects in $\widetilde{\mathcal{C}}$. In the case when
$E^{\prime}\in\mathcal{P}(\phi_{1}),F^{\prime}\in\mathcal{P}(\phi_{2})$, we
define $q(f)=\phi_{2}-\phi_{1}$, which is a positive real number by
definition. In the other cases, let us take $S$ to be a set consisting of
unstable indecomposable objects in $\widetilde{\mathcal{C}}$ such that any
unstable indecomposable object in $\widetilde{\mathcal{C}}$ is isomorphic to
$A^{\prime}[k]$, where $A^{\prime}\in S$ and $k\in\mathbb{Z}$, and for any two
different objects $A^{\prime},B^{\prime}\in S$, we have that $A^{\prime}$ is
not isomorphic to $B^{\prime}[k]$ for any $k\in\mathbb{Z}$.
Consider first the case when $E^{\prime}\in\mathcal{P}(\phi_{1})$ and $F^{\prime}\simeq
A^{\prime}[k]$ where $A^{\prime}\in S$. Let
$A^{\prime}\xrightarrow{p_{n}}Q_{n}^{\prime}$ be the natural morphism from
$A^{\prime}$ to its last Harder-Narasimhan factor $Q_{n}^{\prime}$. Then we
have the following diagram
${E^{\prime}\in\mathcal{P}(\phi_{1})}$${F^{\prime}\simeq
A^{\prime}[k]}$${Q_{n}^{\prime}[k]\in\mathcal{P}(k+\phi_{2})}$$\scriptstyle{f}$$\scriptstyle{p_{n}[k]}$
We define
$q(\pi(p_{n,i}))=\phi_{2}-\phi(A^{\prime}),\ \
q(\pi(f))=k+\phi(A^{\prime})-\phi_{1},$
where $p_{n,i}$ is a summand of the morphism $p_{n}$.
The case when $F^{\prime}\in\mathcal{P}(\phi_{2})$ and $E^{\prime}\simeq
A^{\prime}[k]$ where $A^{\prime}\in S$ is similar. Indeed, we have the
following diagram in $\widetilde{\mathcal{C}}$
${E^{\prime}}$${F^{\prime}\in\mathcal{P}(\phi_{2})}$${Q_{n}^{\prime}[k]\in\mathcal{P}(\phi_{1}+k)}$$\scriptstyle{f}$$\scriptstyle{p_{n}[k]}$
where $p_{n}:A^{\prime}\rightarrow Q_{n}^{\prime}$ is the natural morphism
from $A^{\prime}$ to its last Harder-Narasimhan factor
$Q_{n}^{\prime}\in\mathcal{P}(\phi_{1})$. We define
$q(\pi(p_{n,i}))=\phi_{1}-\phi(A^{\prime}),\ \
q(\pi(f))=\phi_{2}-\phi(A^{\prime})-k.$
In the last case, when $E^{\prime}\simeq A^{\prime}[k_{1}]$ and
$F^{\prime}\simeq B^{\prime}[k_{2}]$, we have the following diagram
${E^{\prime}}$${F^{\prime}}$${Q_{n}^{\prime}[k_{1}]\in\mathcal{P}(\phi_{1}+k_{1})}$${Q_{m}^{\prime}[k_{2}]\in\mathcal{P}(\phi_{2}+k_{2})}$$\scriptstyle{f}$$\scriptstyle{p_{n}[k_{1}]}$$\scriptstyle{p_{m}[k_{2}]}$
where $p_{n}:A^{\prime}\rightarrow Q_{n}^{\prime},\
p_{m}:B^{\prime}\rightarrow Q_{m}^{\prime}$ are the natural morphisms from
$A^{\prime},B^{\prime}$ to their last Harder-Narasimhan factors
$Q_{n}^{\prime}\in\mathcal{P}(\phi_{1}),Q_{m}^{\prime}\in\mathcal{P}(\phi_{2})$
respectively. We define
$q(\pi(f))=k_{2}-k_{1}+\phi(B^{\prime})-\phi(A^{\prime}).$
This gives the degree function of the real decomposition on $\mathcal{C}$.
Thus, we obtain the data $\sigma_{1}=(\mathcal{Q},Z,\phi,q)$ on $\mathcal{C}$.
Before proving that $\sigma_{1}$ is indeed a stability condition on $\mathcal{C}$
and that our map $\mathbb{P}$ is well defined, let us state an easy consequence of
the construction of degrees.
For any connecting path
$l:E^{\prime}\in\mathcal{P}(\phi_{1})\dashrightarrow
F^{\prime}\in\mathcal{P}(\phi_{2})$
in $\widetilde{\mathcal{C}}$ (with respect to $\sigma_{1}$), we have
$q(\pi(l))=\phi_{2}-\phi_{1}.$
This follows directly from the construction of the degree function $q$; we leave
the direct check to the reader. We call this property the invariance of
semi-stable paths.
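As a sanity check (not part of the original argument), one can verify this on a two-step path through a single unstable object: for $f_{1}:E^{\prime}\rightarrow A^{\prime}[k]$ with $E^{\prime}\in\mathcal{P}(\phi_{1})$, $A^{\prime}\in S$, and $f_{2}:A^{\prime}[k]\rightarrow F^{\prime}$ with $F^{\prime}\in\mathcal{P}(\phi_{2})$, the second and third case formulas above give degrees that telescope:
$q(\pi(f_{1}))+q(\pi(f_{2}))=\bigl(k+\phi(A^{\prime})-\phi_{1}\bigr)+\bigl(\phi_{2}-\phi(A^{\prime})-k\bigr)=\phi_{2}-\phi_{1},$
in agreement with the stated invariance.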
There are several things to check for the construction of $\sigma_{1}$; we
list them below.
1. (1)
The data $\sigma_{1}=(\mathcal{Q},Z,\phi,q)$ is a stability condition on
$\mathcal{C}$,
2. (2)
the charge triple $(Z,\phi,q)$ is deformation equivalent to the charge triple
of $\sigma$,
3. (3)
and the equivalence class of $\sigma_{1}$ is independent of the choices of
$\phi(E)$ for $Z(E)=0$ and of the set $S$.
For (1), most conditions in Definition 3.23 are direct consequences of the
construction. We only need to check that the Harder-Narasimhan diagram is
connective and liftable. We prove the connectivity by induction on the
number of Harder-Narasimhan factors.
Firstly, if $E^{\prime}$ is semi-stable in $\widetilde{\mathcal{C}}$, the
connectivity is obvious. Assume that the connectivity condition is true for
any indecomposable object $E^{\prime}$ with $n-1$ Harder-Narasimhan factors.
We consider an indecomposable object $E^{\prime}$ with $n$ Harder-Narasimhan
factors. Let the following distinguished triangle
$E^{\prime}_{n-1}\xrightarrow{f_{n}}E^{\prime}_{n}=E^{\prime}\xrightarrow{p_{n}}Q_{n}^{\prime}\xrightarrow{}E^{\prime}_{n-1}[1]$
be the last triangle in the Harder-Narasimhan filtration of $E^{\prime}$ with
respect to $\widetilde{\sigma}_{1}$. We claim that every summand of $f_{n}$ or
$p_{n}$ is non-trivial. The proof is essentially included in the proof of
Lemma 5.1; we prove the case of $f_{n}$ for the readers’ convenience. Suppose
there is an indecomposable summand $E_{n-1,1}^{\prime}$ of $E_{n-1}^{\prime}$ with
$f_{n}|_{E_{n-1,1}^{\prime}}=0$. Then we have the following commutative
diagram
${E_{n-1,1}^{\prime}}$${E_{n}^{\prime}}$${E_{n-1,1}^{\prime}[1]\oplus
E_{n}^{\prime}}$${E_{n-1,1}^{\prime}[1]}$${E_{n-1}^{\prime}}$${E_{n}^{\prime}}$${Q_{n}^{\prime}}$${E_{n-1}^{\prime}[1]}$$\scriptstyle{0}$$\scriptstyle{i}$$\scriptstyle{id}$$\scriptstyle{i[1]}$$\scriptstyle{f_{n}}$$\scriptstyle{p_{n}}$
This implies that $E_{n-1,1}^{\prime}[1]$ is a direct summand of
$Q_{n}^{\prime}$, but this is impossible unless $E_{n-1,1}^{\prime}=0$, as every
Harder-Narasimhan factor of $E_{n-1}^{\prime}$ has bigger phase than the phase of
$Q_{n}^{\prime}$. Thus the claim is proved. The connectivity is also proved,
as the Harder-Narasimhan diagram of any indecomposable summand of
$E_{n-1}^{\prime}$ is connective by induction, and every such summand is
connected to $E^{\prime}$ and the summands of $Q_{n}^{\prime}$.
The liftability of the Harder-Narasimhan diagram follows from the connectivity and
the invariance of semi-stable paths. Indeed, let
$l_{1},l_{2}:A^{\prime}\dashrightarrow B^{\prime}$ be two connecting paths in
the Harder-Narasimhan diagram of $E^{\prime}$. There are connecting paths
$l_{3}:Q_{1}\dashrightarrow A^{\prime}$ and $l_{4}:B^{\prime}\dashrightarrow
Q_{n}$ by the connectivity condition. By the invariance of semi-stable paths, we
get
$q(l_{4}\circ l_{1}\circ l_{3})=q(l_{4}\circ l_{2}\circ l_{3}),$
hence $q(l_{1})=q(l_{2})$ and (1) is proved.
For (2), let $l_{1},l_{2}:E\dashrightarrow F$ be two connecting paths in
$\mathcal{C}$. They can be lifted to connecting paths
$l_{1}^{\prime}:E^{\prime}\dashrightarrow F^{\prime}$ and
$l_{2}^{\prime}:E^{\prime}\dashrightarrow F^{\prime}[2k]$ in
$\widetilde{\mathcal{C}}$ where $2k=q_{0}(l_{2})-q_{0}(l_{1})$ ($q_{0}$ is the
degree function in $\sigma$). From the definition of $q$, one can easily show
that $q(l_{2})-q(l_{1})$ also equals $2k$. Hence (2) is proved.
Finally, (3) follows directly from the invariance of semi-stable paths. Hence
$\mathbb{P}$ is well defined.
By unwinding the definitions, it is easy to see that
$\mathbb{L}\circ\mathbb{P}=id$ and $\mathbb{P}\circ\mathbb{L}=id$. Hence, the
theorem is proved. ∎
###### Remarks 5.10.
(i) The equivalence relation in Definition 5.6 is the reason why the phases of
unstable objects in Bridgeland stability conditions are not well defined in
general.
(ii) Note that in this paper, we always fix the set of homogeneous morphisms
in the real decomposition. Theorem 5.9 roughly says that the deformation of
stability conditions with fixed set of homogeneous morphisms corresponds to
the deformation of Bridgeland stability conditions on
$\widetilde{\mathcal{C}}$. We will investigate the other deformation direction
(the deformation of the set of homogeneous morphisms) in future research; we
expect that it corresponds to a deformation of the category
$\widetilde{\mathcal{C}}$ itself.
(iii) It could be interesting to compare this theorem with [QZ22, Theorem 1].
We also have the isomorphism
$Stab^{0,e}_{cyc}(\mathcal{T}/2\mathbb{Z})\xrightarrow{\sim}Stab(\mathcal{T})/2\mathbb{Z},$
when the orbit category $\mathcal{T}/2\mathbb{Z}$ admits a triangulated
structure. See [Kel05] for the related notion and results.
We end this section by briefly touching on another aspect of the space
$Stab_{cyc}(\mathcal{C})$: the chirality of objects and morphisms.
### 5.1. Chirality of morphisms and objects
There is a natural chirality of morphisms and objects from the definition of
charge triples.
###### Definition 5.11.
Given a real decomposition on a cyclic category $\mathcal{C}$, let $f$ be a
homogeneous morphism. We say that $f$ is left chiral if $q(f)>0$, and that $f$ is
right chiral if $q(f)<0$.
Let $E$ be an indecomposable object in $\mathcal{C}$. We say that $E$ is left
chiral if the Maslov indices of all basic loops involving $E$ are
non-negative. Similarly, $E$ is right chiral if the Maslov indices of all basic
loops involving $E$ are negative. In such cases, we say that $E$ has a
well-defined chirality.
###### Lemma 5.12.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt connective cyclic category,
and $q$ be a real decomposition on $\mathcal{C}$ such that $\mathcal{C}$ is
liftable with respect to $q$. If, moreover, every indecomposable object has a
well-defined chirality, then all indecomposable objects have the same
chirality.
###### Proof.
Suppose that $E$ is a left chiral indecomposable object. For any other
indecomposable object $F$, there is a connecting path between $E$ and $F$.
Any nontrivial homogeneous morphism $f:M\rightarrow N$ between two
indecomposable objects can be completed to a liftable diagram
$\cdots\rightarrow
M\xrightarrow{f}N\xrightarrow{g}Cone(f)\xrightarrow{h}M[1]\rightarrow\cdots,$
and every summand of $g$ and $h$ is nontrivial; as the proof has already
appeared twice, in the proofs of Lemma 5.1 and Theorem 5.9, we leave it to the
reader. This non-triviality implies that the chirality of $M$ is the same as
the chirality of $N$.
Hence, by chasing the connecting path, we can conclude that $F$ is also left
chiral. ∎
There is a natural involution map $\tau$ on the space
$\textbf{T}(\mathcal{C})$, which is defined in the following way:
$\tau(Z,\phi,q)=(-\bar{Z},\phi^{\prime},-q),$
where $\bar{Z}$ is the complex conjugate of the central charge $Z$ and
$\phi^{\prime}(E)+\phi(E)\equiv 1\ (mod\ 2\mathbb{Z})$ for any indecomposable object
$E$.
morphism to its opposite.
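One checks directly that $\tau$ is indeed an involution:
$\tau^{2}(Z,\phi,q)=\tau(-\bar{Z},\phi^{\prime},-q)=(-\overline{(-\bar{Z})},\phi^{\prime\prime},q)=(Z,\phi,q),$
since $-\overline{(-\bar{Z})}=Z$ and $\phi^{\prime\prime}(E)\equiv 1-\phi^{\prime}(E)\equiv\phi(E)\ (mod\ 2\mathbb{Z})$ for any indecomposable object $E$.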
Notice that the stability condition breaks the chirality symmetry of real
decompositions. In fact, for the forgetful map
$Stab(\mathcal{C})\xrightarrow{\alpha}\textbf{T}(\mathcal{C}),$
there are charge triples $R$ with nonempty preimage $\alpha^{-1}(R)$ but
$\alpha^{-1}(\tau(R))=\emptyset$. This will be illustrated in the next
section.
## 6. Stability conditions on equivariant matrix factorizations of $A_{2}$
type
In this section, we will present some examples of stability conditions on a
cyclic category. We start with a Schur-type lemma in any $k$-linear category.
###### Lemma 6.1.
Let $\mathcal{T}$ be any $k$-linear category, and let $E,F$ be two simple objects
in $\mathcal{T}$, i.e. $Hom_{\mathcal{T}}(E,E)=Hom_{\mathcal{T}}(F,F)=k$. Then
the composition
$Hom_{\mathcal{T}}(E,F)\times Hom_{\mathcal{T}}(F,E)\rightarrow
Hom_{\mathcal{T}}(E,E)\xrightarrow{\sim}k$
is trivial unless $E$ is isomorphic to $F$.
###### Proof.
Suppose that the composition is not trivial, i.e. there exist morphisms $a\in
Hom_{\mathcal{T}}(E,F)$ and $b\in Hom_{\mathcal{T}}(F,E)$ such that their
composition $b\circ a=x\cdot id_{E}$, where $x\in k$ is nonzero.
We claim that $a\circ b=x\cdot id_{F}$. Indeed, denote $a\circ b\coloneqq
y\cdot id_{F}$. Then
$x^{2}\cdot id_{E}=(x\cdot id_{E})\circ(x\cdot id_{E})=b\circ a\circ b\circ
a=b\circ(y\cdot id_{F})\circ a=xy\cdot id_{E}.$
Hence, we get $x^{2}=xy$, which implies $x=y$ since $x$ is nonzero. Therefore,
$a$ and $x^{-1}b$ are inverse to each other. ∎
###### Corollary 6.2.
Let $\mathcal{C}$ be a $k$-linear Krull-Schmidt category, and $E$ be a simple
object in $\mathcal{C}$. Then either the composition
$Hom(E,E[1])\times Hom(E,E[1])\rightarrow Hom(E,E[2])\xrightarrow{\sim}k$
is trivial or $E\xrightarrow{\sim}E[1]$.
###### Remark 6.3.
As one can see from the definition of stability conditions, the existence of
non-trivial indecomposable objects $E\simeq E[1]$ forms an obstruction to the
existence of stability conditions on $\mathcal{C}$. Unfortunately, this
obstruction is very common.
However, some finite group actions may resolve such obstructions. Let us begin
with weighted homogeneous polynomials $w\in\mathbb{C}[[x_{1},\cdots,x_{N}]]$
and their Abelian automorphisms (the following definitions are taken from
[FJR13, Section 2]).
###### Definition 6.4.
A weighted homogeneous polynomial $w\in\mathbb{C}[[x_{1},\cdots,x_{N}]]$ is a
polynomial for which there exist positive rational numbers
$q_{1},\cdots,q_{N}\in\mathbb{Q}_{>0}$, such that for any
$\lambda\in\mathbb{C}^{*}$
$w(\lambda^{q_{1}}x_{1},\cdots,\lambda^{q_{N}}x_{N})=\lambda
w(x_{1},\cdots,x_{N}).$
We will call $q_{j}$ the weight of $x_{j}$. We define $d$ and $n_{i}$ to be
the unique positive integers such that
$(q_{1},\cdots,q_{N})=(n_{1}/d,\cdots,n_{N}/d)$ with
$gcd(d,n_{1},\cdots,n_{N})=1$.
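As an illustration (not from the original text), for the $E_{6}$ polynomial $w=x^{3}+y^{4}$ of Example 6.7 below, we have
$w(\lambda^{1/3}x,\lambda^{1/4}y)=\lambda x^{3}+\lambda y^{4}=\lambda w(x,y),$
so the weights are $(q_{1},q_{2})=(1/3,1/4)=(4/12,3/12)$, giving $d=12$ and $(n_{1},n_{2})=(4,3)$ with $gcd(12,4,3)=1$.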
###### Definition 6.5.
We call $w$ nondegenerate if
1. (1)
the polynomial $w$ contains no monomial of the form $x_{i}x_{j}$ for $i\neq
j$, and
2. (2)
the hypersurface defined by $w=0$ in weighted projective space is
non-singular, or equivalently, the affine hypersurface defined by $w=0$ has an
isolated singularity at the origin.
###### Definition 6.6.
There are special groups associated with the polynomial $w$. The first one
$G_{w}$ is defined in the following way
$G_{w}\coloneqq\\{(\alpha_{1},\cdots,\alpha_{N})\in(\mathbb{C}^{*})^{N}|w(\alpha_{1}x_{1},\cdots,\alpha_{N}x_{N})=w(x_{1},\cdots,x_{N})\\}.$
There is a special element $J\in G_{w}$ which is defined to be
$J\coloneqq(exp(2\pi iq_{1}),\cdots,exp(2\pi iq_{N})),$
where the $q_{i}$ are the weights defined in Definition 6.4. For any group $G$
with $\langle J\rangle\leq G\leq G_{w}$, we call $G$ an admissible subgroup of
$G_{w}$.
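As a simple illustration (not from the original text), for the $A_{2}$ polynomial $w=x^{3}$ we have $q_{1}=1/3$, so
$J=exp(2\pi i/3),\qquad G_{w}=\\{\alpha\in\mathbb{C}^{*}\ |\ \alpha^{3}=1\\}\simeq\mathbb{Z}/3\mathbb{Z},$
hence $\langle J\rangle=G_{w}$ and the only admissible subgroup is $G=\mathbb{Z}/3\mathbb{Z}$ itself; this is the group acting in the equivariant matrix factorizations of $x^{3}$ considered below.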
Recall the following famous $ADE$ examples (see e.g. [KST07]).
###### Example 6.7.
The simple singularities can be classified in the following way.
* •
$A_{n}:w=x^{n+1},n\geq 1;$
* •
$D_{n}:w=x^{n-1}+xy^{2},n\geq 4$;
* •
$E_{6}:w=x^{3}+y^{4}$;
* •
$E_{7}:w=x^{3}+xy^{3}$;
* •
$E_{8}:w=x^{3}+y^{5}$.
Figure 2.
By [Kn87] and [BGS87], we know that the category $HMF(R,w)$ has only finitely
many indecomposable objects if and only if $w$ is a simple singularity.
In the following, we present some examples of stability conditions on the
homotopy category of $\mathbb{Z}/3\mathbb{Z}$-equivariant matrix
factorizations of $x^{3}$. We use results from [Wal05, Sections
4.6 and 7.1] as our starting point; readers should consult these two
sections for the details.
###### Example 6.8.
We draw pictures of deforming stability conditions on the homotopy category of
$\mathbb{Z}/3\mathbb{Z}$-equivariant matrix factorizations in the $A_{2}$ case. In
the top left corner of Figure 2, we start with Walcher’s point as described in
[Wal05]. In the deformation procedure, we keep the central charges
$Z(M_{1}^{1})$ and $Z(M_{2}^{1})$ unchanged, move $Z(M_{1}^{2})$ along the
dotted path to $Z(M_{1}^{3})$ (central charge of other indecomposable objects
change correspondingly), and imagine that there is a pillar standing at the
origin such that the homogeneous morphisms could go around the pillar but
could not pass through the pillar along the deformation.
Figure 3.
In Figure 2, we observe the phenomenon of chirality symmetry breaking. In
fact, the charge triples on the left hand side and right hand side are mirror
symmetric to each other. But the stability conditions are not symmetric: as we
can see in Figure 1, our definition of stability conditions has an implicit
orientation, which breaks the symmetry of charge triples.
Also note that the charge triples on the left hand side are liftable, while
the charge triples on the right hand side are not.
Given a stability condition $\sigma$ on $\mathcal{C}$, if the central charge
of every nontrivial indecomposable object is nonzero, we call $\sigma$ a
strong stability condition, and use
$Stab_{cyc}^{s}(\mathcal{C}),Hom^{s}(\Lambda,\mathbb{C})$ to denote the sets
of strong stability conditions on $\mathcal{C}$ and their associated central
charges. Figure 3 illustrates the nontrivial monodromy of the map
$Stab_{cyc}^{s}(\mathcal{C})\rightarrow Hom^{s}(\Lambda,\mathbb{C})\
(\subset\mathbb{C}^{rank(\Lambda)}).$
Indeed, if we compare Figure 3 with the left hand side of Figure 2, we see
that different paths result in different stability conditions. It is easy to
see that
$H_{1}H_{2}H_{3}=id$
in $Stab^{s}_{cyc}(\mathcal{C})$, where $H_{1},H_{2},H_{3}$ are the
monodromies corresponding to 3 generators of
$\pi_{1}((\mathbb{C}^{*})^{2}\backslash\\{z_{1}+z_{2}=0\\}).$
Moreover, if we go to the lifted space, we get $H_{1}H_{2}H_{3}=[2]$.
## References
* [BBD82] A. A. Beilinson, J. Bernstein, and P. Deligne. Faisceaux pervers. In Analyse et topologie sur les espaces singuliers, I (CIRM, 6–10 juillet 1981). Astérisque, 100, 1982.
* [BGS87] R.-O. Buchweitz, G.-M. Greuel, and F.-O. Schreyer. Cohen-Macaulay modules on hypersurface singularities. II. Invent. Math., 88(1):165–182, 1987.
* [BLMS17] Arend Bayer, Martí Lahoz, Emanuele Macrì, and Paolo Stellari. Stability conditions on Kuznetsov components. arXiv preprint arXiv:1703.10839, 2017.
* [Bri07] Tom Bridgeland. Stability conditions on triangulated categories. Ann. of Math. (2), 166(2):317–345, 2007.
* [Con85] Alain Connes. Noncommutative differential geometry. Inst. Hautes Études Sci. Publ. Math., (62):257–360, 1985.
* [CQ97] Joachim Cuntz and Daniel Quillen. Excision in bivariant periodic cyclic cohomology. Invent. Math., 127(1):67–98, 1997.
* [DM12] Tobias Dyckerhoff and Daniel Murfet. The Kapustin-Li formula revisited. Adv. Math., 231(3-4):1858–1885, 2012.
* [Dou02] Michael R. Douglas. Dirichlet branes, homological mirror symmetry, and stability. In Proceedings of the International Congress of Mathematicians, Vol. III (Beijing, 2002), pages 395–408. Higher Ed. Press, Beijing, 2002.
* [Dyc11] Tobias Dyckerhoff. Compact generators in categories of matrix factorizations. Duke Math. J., 159(2):223–274, 2011.
* [Eis80] David Eisenbud. Homological algebra on a complete intersection, with an application to group representations. Trans. Amer. Math. Soc., 260(1):35–64, 1980.
* [FJR13] Huijun Fan, Tyler Jarvis, and Yongbin Ruan. The Witten equation, mirror symmetry, and quantum singularity theory. Ann. of Math. (2), 178(1):1–106, 2013.
* [Goo85] Thomas G. Goodwillie. Cyclic homology, derivations, and the free loopspace. Topology, 24(2):187–215, 1985.
* [HT07] Ryoshi Hotta and Toshiyuki Tanisaki. D-modules, perverse sheaves, and representation theory, volume 236. Springer Science & Business Media, 2007.
* [Kel05] Bernhard Keller. On triangulated orbit categories. Doc. Math., 10:551–581, 2005.
* [KL03a] Anton Kapustin and Yi Li. D-branes in Landau-Ginzburg models and algebraic geometry. J. High Energy Phys., (12):005, 44, 2003.
* [KL03b] Anton Kapustin and Yi Li. Topological correlators in Landau-Ginzburg models with boundaries. Adv. Theor. Math. Phys., 7(4):727–749, 2003.
* [Kn87] Horst Knörrer. Cohen-Macaulay modules on hypersurface singularities. I. Invent. Math., 88(1):153–164, 1987.
* [KS08] Maxim Kontsevich and Yan Soibelman. Stability structures, motivic Donaldson-Thomas invariants and cluster transformations. arXiv preprint arXiv:0811.2435, 2008.
* [KST07] Hiroshige Kajiura, Kyoji Saito, and Atsushi Takahashi. Matrix factorizations and representations of quivers. II: Type $ADE$ case. Adv. Math., 211(1):327–362, 2007.
* [Mur13] Daniel Murfet. Residues and duality for singularity categories of isolated Gorenstein singularities. Compos. Math., 149(12):2071–2100, 2013.
* [Nee01] Amnon Neeman. Triangulated categories, volume 148 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 2001.
* [NS18] Thomas Nikolaus and Peter Scholze. On topological cyclic homology. Acta Math., 221(2):203–409, 2018.
* [Orl04] D. O. Orlov. Triangulated categories of singularities and D-branes in Landau-Ginzburg models. Tr. Mat. Inst. Steklova, 246(Algebr. Geom. Metody, Svyazi i Prilozh.):240–262, 2004.
* [Orl09] Dmitri Orlov. Derived categories of coherent sheaves and triangulated categories of singularities. In Algebra, arithmetic, and geometry: in honor of Yu. I. Manin. Vol. II, volume 270 of Progr. Math., pages 503–531. Birkhäuser Boston, Boston, MA, 2009.
* [PV12] Alexander Polishchuk and Arkady Vaintrob. Chern characters and Hirzebruch-Riemann-Roch formula for matrix factorizations. Duke Math. J., 161(10):1863–1926, 2012.
* [QZ22] Yu Qiu and Xiaoting Zhang. Geometric classification of total stability spaces. arXiv preprint arXiv:2202.00092, 2022.
* [Tre19] David Treumann. Smith theory and geometric Hecke algebras. Math. Ann., 375(1-2):595–628, 2019.
* [Wal05] Johannes Walcher. Stability of Landau-Ginzburg branes. J. Math. Phys., 46(8):082305, 29, 2005.
# Magnetic ground state of the Kitaev Na_2Co_2TeO_6 spin liquid candidate
Weiliang Yao<EMAIL_ADDRESS>Present address: Department of Physics, University
of Tennessee, Knoxville, Tennessee 37996, USA International Center for
Quantum Materials, School of Physics, Peking University, Beijing 100871, China
Yang Zhao NIST Center for Neutron Research, National Institute of Standards
and Technology, Gaithersburg, Maryland 20899, USA Department of Materials
Science and Engineering, University of Maryland, College Park, Maryland 20742,
USA Yiming Qiu NIST Center for Neutron Research, National Institute of
Standards and Technology, Gaithersburg, Maryland 20899, USA Christian Balz
ISIS Neutron and Muon Source, STFC Rutherford Appleton Laboratory, Didcot OX11
0QX, United Kingdom J. Ross Stewart ISIS Neutron and Muon Source, STFC
Rutherford Appleton Laboratory, Didcot OX11 0QX, United Kingdom Jeffrey W.
Lynn NIST Center for Neutron Research, National Institute of Standards and
Technology, Gaithersburg, Maryland 20899, USA Yuan Li<EMAIL_ADDRESS>International Center for Quantum Materials, School of Physics, Peking
University, Beijing 100871, China Collaborative Innovation Center of Quantum
Matter, Beijing 100871, China
###### Abstract
As a candidate Kitaev material, Na_2Co_2TeO_6 exhibits intriguing magnetism on
a honeycomb lattice that is believed to be $C_{3}$-symmetric. Here we report a
neutron diffraction study of high quality single crystals under $a$-axis
magnetic fields. Our data support the less common notion of a magnetic ground
state that corresponds to a triple-$\mathbf{q}$ magnetic structure with
$C_{3}$ symmetry, rather than the multi-domain zigzag structure typically
assumed in prototype Kitaev spin liquid candidates. In particular, we find
that the field is unable to repopulate the supposed zigzag domains; the only
alternative explanation is that the domains are strongly pinned for
hitherto unidentified structural reasons. If the triple-$\mathbf{q}$ structure
is correct, then this requires reevaluation of many candidate Kitaev materials.
We also find that fields beyond about 10 Tesla suppress the long range
antiferromagnetic order, allowing new magnetic behavior, different from that
expected for a spin liquid, to emerge.
The exactly solvable Kitaev model [1] represents a distinct route to quantum
many-body entanglement of spins [2] and has important potential for
topological quantum computing [1, 3]. Pursuit of Kitaev spin liquids (KSLs)
[1] in crystalline materials has fueled intense research [4, 5]. Among
materialization ideas [6, 7, 8, 4, 5], several recently proposed cobalt oxides
[9, 10, 11, 12] are promising, since their 3$d^{7}$ magnetic electrons are
desirable for weakening non-Kitaev interactions [9, 10]. Moreover, unlike
$\alpha$-RuCl_3 [13] and H_3LiIr_2O_6 [14] which are van der Waals materials,
the cobaltates can be grown into large single crystals with relatively few
imperfections [15, 16, 17, 18, 19, 20].
An important common characteristic of the cobaltates and the 4$d$-electron
counterpart $\alpha$-RuCl_3 is their tunability by magnetic fields. Such
external tuning [21, 22, 23, 24, 25] is widely considered necessary for
finding (field-driven) spin liquids, because most KSL candidate materials do
have magnetic order at low temperature [8, 4, 5]. In $\alpha$-RuCl_3, a
hallmark of the tunability is field suppression of thermodynamic signatures of
magnetic order [26, 27], which has led to a flurry of studies of excitations
in the intermediate and high-field states [28, 29, 30, 31, 32, 33, 34, 35, 36,
37, 38]. Indeed, similar field suppression of order and unconventional
transport behaviors have been found in the cobaltates [16, 39, 40, 41, 42, 43,
17, 19, 44, 20], which imply not only chances for finding spin liquids but
also an experimental opportunity – brought by the high crystal quality – for
elucidating the microscopic mechanisms. The latter aspect is significant
because microscopic models of essentially all KSL candidate materials are
currently under debate [45, 46, 47, 48, 49, 39, 50, 51, 52, 53, 54, 55, 56,
57]. From an optimistic perspective, establishing a concrete case for at least
one of them, despite the difficulty of the problem itself, may already provide
valuable insight into many of the candidate materials.
Among the cobaltates, Na_2Co_2TeO_6 has been studied the most by spectroscopic
methods [48, 49, 39, 50, 52, 53, 58, 59, 60, 57]. Its crystal structure (space
group $P6_{3}22$) furthermore stands out among KSL candidate materials for
having, at least nominally, three-fold rotational ($C_{3}$) symmetry about the
$c$-axis [61, 62, 63], whereas many other materials have monoclinic stacking
which removes the $C_{3}$ symmetry. Notably, $C_{3}$ is a symmetry that
becomes broken in the presence of “zigzag” magnetic order [Fig. 1(a)], which
is the most commonly considered form of order in KSL candidate materials [4,
5]. The magnetic ground state of Na_2Co_2TeO_6 was initially reported to be
zigzag based on neutron diffraction [62, 63], which has also been used to
identify zigzag order in other KSL candidate materials [64, 65, 66, 19].
Recently, an alternative novel “triple-$\mathbf{q}$” magnetic state [Fig.
1(b)] was suggested based on a distinct signature in the spin waves [58],
which subsequently received indirect support from magnetic resonance [59, 60].
The $C_{3}$-symmetric triple-$\mathbf{q}$ state can be constructed by adding
zigzag components of three different orientations. For this reason, the
triple-$\mathbf{q}$ and zigzag orders cannot be distinguished by diffraction
[58], unless the $C_{3}$-symmetry breaking is revealed by observing uneven
populations of its orientational domains. The $C_{3}$ structure of
Na_2Co_2TeO_6 is desirable for this purpose, because a weak external
perturbation (e.g., in-plane magnetic field, strain, etc.) can be expected to
selectively populate the domains if the zigzag ground state is realized. Given
the prominence of the zigzag order in KSL research, and since it has not been
ruled out in Na_2Co_2TeO_6 [48, 49, 39, 50, 52, 53], such an explicit test is
much needed.
Figure 1: (a) The zigzag magnetic structure and its orientational domains. (b)
The triple-$\mathbf{q}$ magnetic structure. The moments can be thought of as a
vector sum of the three patterns in (a) extended over the whole lattice. When
one or three of the ZZ$n$ components are reversed, the chirality is reversed
(inset). (c) Temperature dependence of the M1(0.5, 0, 1) reflection in
selected fields, where the long range magnetic order is robust (see Fig. S7 in
[67]). Solid curves are power-law fits to the data (see text). Inset shows the
reciprocal lattice in our setting, where hexagons are boundaries of 2D
Brillouin zones, and the shaded ($H$, 0, $L$) plane is perpendicular to the
field (purple arrows). Empty circles are structural Brillouin zone centers.
Filled circles are magnetic Bragg peaks at the M-points, color-coded with the
zigzag domains in (a). Figure 2: (a) Field evolution of magnetic Bragg peak
M1(0.5, 0, 1) at 2 K, with the sample undergoing a series of field scans after
zero-field cooling (ZFC, see text), as well as after field-cooling (FC) in 10
T. (b) Field evolution of Bragg peak (1, 0, 1) at 2 K. In the virgin zero
field state, the observed intensity is due to nuclear Bragg scattering,
whereas the field-enhanced intensity is a measure of uniform magnetization.
(c) Field evolution of magnetic Bragg peak M2(0, 0.5, 1) at 2 K. Measurements
displayed in the main panels are performed at the maximum of the peak profiles
displayed in the insets. Horizontal dashed lines in (a) and (c) indicate
background level. Error bars indicate statistical uncertainty (1 s.d.).
Here we report our magnetic neutron diffraction study of Na_2Co_2TeO_6 single
crystals in order to test whether in-plane fields along the $a$-axis can
selectively populate magnetic domains. We also examine whether magnetic fields
(up to 10 Tesla) can drive the system into a spin-disordered state, as has
been previously suggested [16, 39, 40]. Our conclusion is that the fields can
do neither of these. While a spin liquid might still be reachable with fields
in other directions [57] and/or greater than 10 T, our results set a
definitive constraint on the zero-field magnetic ground state. Namely, unless
a lower structural symmetry without $C_{3}$ has previously been missed, the
system prefers a $C_{3}$-symmetric triple-$\mathbf{q}$ state over the widely
considered zigzag order.
Our experimental geometry is shown in the inset of Fig. 1(c). Magnetic Bragg
peaks are expected at the M-points of the two-dimensional (2D) Brillouin zone.
They originate either separately from different zigzag domains [ZZ1-ZZ3 in
Fig. 1(a), peaks at M1-M3, respectively], or together from the
triple-$\mathbf{q}$ order. Figure 1(c) displays the temperature ($T$)
dependence of the magnetic peak at M1(0.5, 0, 1). In the zigzag scenario, this
peak arises from the ZZ1 domain, where the in-plane magnetic moments are
collinear with the applied field [Fig. 1(a)]. The transition temperature
($T_{N}\sim 26.5$ K at 0 T) is gradually suppressed by the field, and the data
can be fit with a power-law function: $I=A\,(T_{N}-T)^{2\beta}+B$, where $A$
and $B$ are scale and background constants, respectively, and $\beta$ is the
critical exponent of the order parameter, which changes very little from
0.209(7) to 0.227(13) between 0 T and 6 T. This indicates that the nature of
the magnetic transition barely changes with field, and that it is different
from the 2D Ising case found in $\alpha$-RuCl_3 [30]. The deviation might be
attributable to the fact that $T_{N}$ marks three-dimensional ordering, which
is preceded by a minor 2D transition at a slightly higher temperature [58].
The 2D transition cannot be observed in these data because of the small sample
volume [67].
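For readers reproducing such fits, the power-law form above can be fitted with standard tools. The sketch below uses synthetic data: all numbers are illustrative, with only the functional form and the approximate 0 T parameters ($T_{N}\sim 26.5$ K, $\beta\approx 0.21$) taken from the text.

```python
# Minimal sketch of the order-parameter fit I = A*(T_N - T)^(2*beta) + B.
# Synthetic data only; not the actual BT-7 data set.
import numpy as np
from scipy.optimize import curve_fit

def order_parameter(T, A, B, T_N, beta):
    """Power-law Bragg intensity below T_N; flat background above."""
    dT = np.clip(T_N - T, 0.0, None)        # zero above the transition
    return A * dT**(2.0 * beta) + B

# Generate illustrative data with T_N = 26.5 K and beta = 0.21.
rng = np.random.default_rng(0)
T = np.linspace(2.0, 30.0, 60)
I = order_parameter(T, 100.0, 5.0, 26.5, 0.21) + rng.normal(0.0, 1.0, T.size)

popt, pcov = curve_fit(order_parameter, T, I, p0=[90.0, 4.0, 26.0, 0.25])
A, B, T_N, beta = popt
print(f"T_N = {T_N:.2f} K, beta = {beta:.3f}")
```

The fitted $T_{N}$ and $\beta$ recover the input values within the statistical noise, mirroring how the critical exponent is extracted from the data in Fig. 1(c).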
Figure 2(a) displays the system’s field evolution as seen from the M1(0.5, 0,
1) magnetic peak at 2 K. Starting from an initial state prepared by zero-field
cooling (ZFC), the intensity monotonically decreases with increasing field.
Besides a subtle anomaly at about 1.5 T, the main decrease occurs between
about 6 T and 8.2 T, and a small but finite intensity remains at the highest
field of 10 T, which we will revisit later. At first sight, the intensity
decrease could be attributed to two reasons: (1) the antiferromagnetic order
is suppressed by the field; (2) the zigzag domain ZZ1 responsible for the
measured peak is unfavored by the field and gets transformed into ZZ2 and ZZ3.
To test the relevance of (2), we continued our measurement upon removing and
then reapplying the field. Intriguingly, the intensity recovers to about 2/3
of the original after the field is removed, and the sample appears to have
entered a stable field-trained state – reapplying the field results in a
field-dependent behavior different from the initial field application up to
8.2 T. The data further reveal a hysteretic behavior between 6 T and 8.2 T. A
cleaner procedure to prepare the field-trained state involves field-cooling
(FC) the sample in a 10 T field and then removing the field.
A central issue here is whether or not the partial intensity loss in the
field-trained state is due to domain repopulation. We first note that, if
the structural $C_{3}$ symmetry holds, a sample prepared by FC should contain no
ZZ1 domain whatsoever; this expectation is contradicted by the 2/3-recovered intensity. In
Fig. 2(b), we present data measured on an integer-indexed peak (1, 0, 1),
which show that the field-trained state is not different from the ZFC state as
far as uniform magnetization and susceptibility are concerned – the intensity
and its field derivative at 0 T both recover to the original values. Since the
zigzag domains have different susceptibility in a given field direction, this
result implies that there is no zigzag domain repopulation after the field is
removed. As a further test, Fig. 2(c) displays the field evolution of the
M2(0, 0.5, 1) peak, which is associated with domain ZZ2 in the zigzag scenario
(see Fig. S8 in [67] for a similar result for the M3 peak). While the behavior
is qualitatively different from the M1 peak below 6 T, there is no intensity
gain on M2 in the field-trained state. We thus conclude that the loss of the
M1 peak intensity is unrelated to domain repopulation. In this context, we
note that some previous related results in $\alpha$-RuCl_3 [26, 33] have been
attributed to zigzag-domain repopulation by small in-plane fields. Those
results are qualitatively similar to our data obtained upon the initial field
application in Figs. 2(a) and (c), and the interpretation was made even in the
absence of $C_{3}$ structural symmetry in $\alpha$-RuCl_3. As the reduced structural
symmetry is expected to make the zigzag domains energetically inequivalent, it
follows that the magnetization energy must be able to overcome the difference.
In this sense, our results in Na_2Co_2TeO_6 are particularly difficult to
comprehend under the zigzag scenario, because the structural $C_{3}$ symmetry
should make the magnetic domains even easier to repopulate than in
$\alpha$-RuCl_3.
Figure 3: (a)-(c) Elastic scattering in the ($H$, 0, $L$) plane measured at
0.1 K under the specified field conditions. (d) Line-cuts through the data in
(a-c) along $\mathbf{c^{*}}$ at $H=0.5$. (e) $T$ dependence of the signal at
M1(0.5, 0, 1) measured in zero field upon warming the sample, before and after
field training. Black dotted curve illustrates expected $T$ dependence of the
M2(0, 0.5, 1) and M3(-0.5, 0.5, 1) signals after field training, if the
triple-$\mathbf{q}$ scenario is correct (see text).
To reveal where the lost intensity of M1(0.5, 0, 1) has gone in the field-
trained state, Fig. 3(a-c) presents our measurement in an extensive region of
the ($H$, 0, $L$) reciprocal plane. After ZFC, a rod of magnetic scattering
running along $\mathbf{c^{*}}$ is observed at $H=0.5$, in addition to the
sharp peaks at integer $L$. It signifies quasi-2D magnetic correlations [58],
and the signal becomes noticeably enhanced in the field-trained state [Fig.
3(c-d)]. The enhancement occurs upon decreasing the field between 8.2 T and 6
T (Fig. S9 in [67]) and can approximately account for the intensity loss at
integer $L$ (Fig. S13 in [67]). Therefore, field training suppresses $c$-axis
correlations, but it leaves the $L$-integrated 2D diffraction signal at
M1(0.5, 0) unaffected. This reinforces our conclusion of no zigzag domain
repopulation. The field training leaves no significant change in the 2D
correlation length [Fig. 2(a) inset] or in the $c$-axis correlations
at M2 and M3 [Fig. 2(c)]. Moreover, the lost $c$-axis correlations at
M1(0.5, 0, 1) can be partially recovered [Fig. 3(e)] by warming up the field-
trained sample. The implication of these observations will be discussed later.
We note that the system behaves somewhat differently from $\alpha$-RuCl_3,
where a distinct form of magnetic order perpendicular to the honeycomb plane
can be stabilized by intermediate in-plane fields [68], presumably due to a
more significant role of the system’s inter-plane coupling [35].
Comparing the data in Fig. 3(a-b), we notice enhanced scattering at integer
$H$ and $L$ at 10 T, where no magnetic scattering exists at 0 T (Fig. S11 in
[67]). This additional signal is therefore purely due to field-induced uniform
magnetization. An induced moment of about 2.05(3) $\mu_{\mathrm{B}}$/Co can be
estimated from the data [67], consistent with previous reports [39, 43]. While
this means that the field suppresses antiferromagnetic order by causing
significant spin polarization, the peaks at $H=0.5$ are not fully suppressed
[Fig. 3(b & d)], and their magnetic nature has been confirmed by comparing to
measurements at high temperature (Fig. S12 in [67]). The system is always in a
magnetically ordered state under $a$-axis fields up to 10 T, and is therefore
not yet a spin liquid. Nevertheless, recent studies of Na_2Co_2TeO_6 have
revealed unusual thermal transport properties under in-plane fields, implying
that the near-polarized state is distinct from a conventional
paramagnet [40, 44]. Similar behaviors are also observed in BaCo_2(AsO_4)_2,
where an intriguing state related to Kitaev interactions has been inferred
near full polarization [17, 69]. These studies motivate further searches for
exotic magnetism in the cobalt-based Kitaev candidate materials.
We now discuss possible scenarios under which the field training causes no
diffraction intensity transfer between the M-points. In the first scenario,
the M-points are associated with spatially separated zigzag domains, as we
illustrate in the upper half of Fig. 4(a). In order for the field training not
to repopulate the domains, they must be completely pinned by the local crystal
lattice regarding their zigzag-chain orientations. Given the high quality of
our crystals, we believe that the pinning is not due to defects, and can only
be explained by a hitherto unidentified departure from the nominal
$C_{3}$-symmetric structure: On the one hand, a tiny orthorhombic distortion
may already produce a strong pinning effect, since the sister compound
Na_3Co_2SbO_6 has demonstrated a large magnetic anisotropy arising from a
small lattice distortion [20]. On the other hand, structural orthorhombicity
might arise from long-period stacking [70] that can be easily missed in
experiments due to the presence of stacking faults. Moreover, the crystal
structure of Na_2Co_2TeO_6 still has some loose ends, including additional
weak Bragg peaks previously seen with both neutron and X-ray diffraction [63,
58]. The diffraction peaks share the same wave vectors as the magnetic ones
seen at low temperature, and may signify a superstructure in the sodium layer
[58]. If the superstructure breaks the $C_{3}$ symmetry, which remains to be
clarified, e.g., by high-resolution single-crystal diffraction, it may pin
the magnetic zigzag domains. In the second scenario, as illustrated in the lower
half of Fig. 4(a), the M-points all belong to the same triple-$\mathbf{q}$
order parameter, which naturally explains the lack of opportunity for
orientational domain repopulation.
Figure 4: (a) Schematic field-training processes under the zigzag (upper half)
and triple-$\mathbf{q}$ (lower half) scenarios. The 10 T state is approximated
as fully spin-polarized. Polygons are color-coded with Fig. 1(a-b). Dashed
lines are boundaries between “hidden” low-symmetry structural domains. Hatches
indicate suppressed $c$-axis correlations. (b) & (c) Schematic stacking in the
(supposed) ZZ1 domain before and after field training. Yellow arrows indicate
randomly reversed layers. (d) & (e) Schematic stacking in the
triple-$\mathbf{q}$ structure before and after field training, color-coded
after Fig. 1(b). Grey rhombuses in (b-e) indicate the structural primitive
cell.
At this point, the field-training effect on the $c$-axis correlations deserves
some thought. Given that the effect is only observed at M1 [Fig. 2(a & c)], we
illustrate plausible changes caused by the training in Figs. 4(b-c) and (d-e),
respectively, for the zigzag and the triple-$\mathbf{q}$ cases. In the former
case, the inter-layer arrangement inside the ZZ1 domain is disturbed by the
training, probably because the field causes a spin-flop-like transition
between 6 T and 8.2 T, as the hysteretic behavior [Fig. 2(a)] suggests.
Indeed, the first-order nature of the transition is expected to strongly
disturb ZZ1, but it would naturally leave ZZ2 and ZZ3 intact. In the latter
case, we note that inside a given honeycomb layer, reversing one zigzag
component in the triple-$\mathbf{q}$ structure would reverse the layer’s spin
chirality [Fig. 1(b)]. Hence, the suppressed $c$-axis correlations at M1, but
not at M2 or M3, imply a scrambled arrangement of the chirality across the
layers [Figs. 4(d-e)]. Importantly, the fact that warming up a field-trained
sample recovers part of the $c$-axis correlations seen at M1, as shown in Fig.
3(e), also has very different explanations in the two cases. In the zigzag
case, the recovery pertains to only the ZZ1 domain, which means that the
diffraction signals at M2 and M3 will not be affected. Since the latter
signals are the same in the ZFC and field-trained states [Fig. 2(c)], upon
warming the sample from 2 K, they are expected to simply follow the ZFC curve
in Fig. 3(e). In contrast, in the triple-$\mathbf{q}$ case, the scrambled
chirality between the layers is not expected to recover easily by thermal
fluctuations. Instead, the pattern in each layer might be able to translate,
which corresponds to reversing two zigzag components simultaneously (Fig. S14
in [67]). Mathematically, such a process would partially recover the $c$-axis
correlations seen at M1, but at the cost of the correlations at M2 and M3. It
means that if one can monitor, e.g., the M2(0, 0.5, 1) peak upon warming the
sample from a field-trained state, the measured intensity would follow the
dotted lines in Fig. 3(e). Such behavior, distinct from the zigzag case, if
confirmed in future studies, will firmly establish the triple-$\mathbf{q}$
scenario. In fact, we believe that such crosstalk between signals at different
wave vectors can be utilized, on very general grounds indeed, for experimental
differentiation between single- and multi-$\mathbf{q}$ magnetic orders
regardless of the crystal structure. Such an experiment is demanding, however,
as it requires both a magnet and sufficient detector coverage to observe
diffraction peaks out of the horizontal plane.
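The translation process invoked above can be checked with a few lines of arithmetic. In this illustrative sketch, each zigzag component is a modulation at one in-plane M-point wave vector, written in reciprocal-lattice units as in the text [M1(0.5, 0), M2(0, 0.5), M3(-0.5, 0.5)], and the layer pattern is translated by one structural lattice vector:

```python
import numpy as np

# In-plane M-point wave vectors (H, K) of the honeycomb Brillouin zone,
# in reciprocal-lattice units: M1, M2, M3.
M = np.array([[0.5, 0.0], [0.0, 0.5], [-0.5, 0.5]])

# Translating the spin pattern by one primitive lattice vector, here
# t = a2 = (0, 1) in direct-lattice units, multiplies each modulation
# cos(2*pi*q.r) by the phase factor exp(2*pi*i*q.t).
t = np.array([0.0, 1.0])
phases = np.exp(2j * np.pi * (M @ t))
print(np.real_if_close(phases))   # -> [ 1. -1. -1.]
```

The factors (+1, -1, -1) show that the translation leaves the M1 component intact while reversing the M2 and M3 components simultaneously, which is exactly the two-component reversal described in the text.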
To conclude, we have investigated the $a$-axis field dependence of magnetic
order in Na_2Co_2TeO_6 with neutron diffraction. In spite of the nominal
$C_{3}$ crystal symmetry, we find that an $a$-axis applied field is unable to
repopulate $C_{3}$-breaking magnetic domains – either such domains exist but
are completely pinned by an as yet unknown low-symmetry structure, or the
magnetic ground state features the $C_{3}$-symmetric triple-$\mathbf{q}$
structure. Our study brings unprecedented insight into the crystal and
magnetic structures of not only Na_2Co_2TeO_6, but also related systems with
presumed zigzag order that may actually be triple-$\mathbf{q}$. Finally, we
show that Na_2Co_2TeO_6 is not yet a spin liquid up to 10 T, but its magnetism
remains highly intriguing and awaits further elucidation.
Note added. A parallel work reports theoretical analyses of
triple-$\mathbf{q}$ order in Na_2Co_2TeO_6, which are consistent with our
results [71].
###### Acknowledgements.
We are grateful for discussions with L. Chen, W. Chen, C. Hess, X. Hong, L.
Janssen, X. Jin, D. Khalyavin, C. Kim, V. Kocsis, W. G. F. Krüger, J.-G. Park,
L. Taillefer, and A. U. B. Wolter. The work at Peking University was supported
by the National Basic Research Program of China (Grant No. 2021YFA1401900) and
the NSF of China (Grant Nos. 12061131004, and 11888101). Access to MACS was
provided by the Center for High Resolution Neutron Scattering, a partnership
between the National Institute of Standards and Technology and the National
Science Foundation under Agreement No. DMR-1508249. We acknowledge ISIS for
beamtime under proposal RB2010025 [72].
## References
* Kitaev [2006] A. Kitaev, Annals of Physics 321, 2 (2006).
* Anderson [1973] P. W. Anderson, Materials Research Bulletin 8, 153 (1973).
* Nayak _et al._ [2008] C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. 80, 1083 (2008).
* Takagi _et al._ [2019] H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. E. Nagler, Nature Reviews Physics 1, 264 (2019).
* Trebst and Hickey [2022] S. Trebst and C. Hickey, Physics Reports 950, 1 (2022).
* Jackeli and Khaliullin [2009] G. Jackeli and G. Khaliullin, Phys. Rev. Lett. 102, 017205 (2009).
* Chaloupka _et al._ [2010] J. c. v. Chaloupka, G. Jackeli, and G. Khaliullin, Phys. Rev. Lett. 105, 027204 (2010).
* Winter _et al._ [2017] S. M. Winter, A. A. Tsirlin, M. Daghofer, J. van den Brink, Y. Singh, P. Gegenwart, and R. Valentí, J. Phys.: Condens. Matter 29, 493002 (2017).
* Liu and Khaliullin [2018] H. Liu and G. Khaliullin, Phys. Rev. B 97, 014407 (2018).
* Sano _et al._ [2018] R. Sano, Y. Kato, and Y. Motome, Phys. Rev. B 97, 014408 (2018).
* Motome _et al._ [2020] Y. Motome, R. Sano, S. Jang, Y. Sugita, and Y. Kato, Journal of Physics: Condensed Matter 32, 404001 (2020).
* Kim _et al._ [2021] C. Kim, H.-S. Kim, and J.-G. Park, Journal of Physics: Condensed Matter 34, 023001 (2021).
* Plumb _et al._ [2014] K. W. Plumb, J. P. Clancy, L. J. Sandilands, V. V. Shankar, Y. F. Hu, K. S. Burch, H.-Y. Kee, and Y.-J. Kim, Phys. Rev. B 90, 041112 (2014).
* Kitagawa _et al._ [2018] K. Kitagawa, T. Takayama, Y. Matsumoto, A. Kato, R. Takano, Y. Kishimoto, S. Bette, R. Dinnebier, G. Jackeli, and H. Takagi, Nature 554, 341 (2018).
* Xiao _et al._ [2019] G. Xiao, Z. Xia, W. Zhang, X. Yue, S. Huang, X. Zhang, F. Yang, Y. Song, M. Wei, H. Deng, and D. Jiang, Cryst. Growth Des. 19, 2658 (2019).
* Yao and Li [2020] W. Yao and Y. Li, Phys. Rev. B 101, 085120 (2020).
* Zhong _et al._ [2020] R. Zhong, T. Gao, N. P. Ong, and R. J. Cava, Science Advances 6, eaay6953 (2020).
* Halloran _et al._ [2022] T. Halloran, F. Desrochers, E. Z. Zhang, T. Chen, L. E. Chern, Z. Xu, B. Winn, M. Graves-Brook, M. Stone, A. I. Kolesnikov, _et al._ , arXiv preprint arXiv:2205.15262 (2022).
* Yan _et al._ [2019] J.-Q. Yan, S. Okamoto, Y. Wu, Q. Zheng, H. D. Zhou, H. B. Cao, and M. A. McGuire, Phys. Rev. Materials 3, 074405 (2019).
* Li _et al._ [2022a] X. Li, Y. Gu, Y. Chen, V. O. Garlea, K. Iida, K. Kamazawa, Y. Li, G. Deng, Q. Xiao, X. Zheng, Z. Ye, Y. Peng, I. A. Zaliznyak, J. M. Tranquada, and Y. Li, arXiv preprint arXiv:2204.04593 (2022a).
* Janssen _et al._ [2016] L. Janssen, E. C. Andrade, and M. Vojta, Phys. Rev. Lett. 117, 277202 (2016).
* Janssen and Vojta [2019] L. Janssen and M. Vojta, Journal of Physics: Condensed Matter 31, 423002 (2019).
* Gordon _et al._ [2019] J. S. Gordon, A. Catuneanu, E. S. S$\mathrm{\o}$rensen, and H.-Y. Kee, Nat. Commun. 10, 2470 (2019).
* Hickey and Trebst [2019] C. Hickey and S. Trebst, Nat. Commun. 10, 530 (2019).
* Li _et al._ [2021] H. Li, H.-K. Zhang, J. Wang, H.-Q. Wu, Y. Gao, D.-W. Qu, Z.-X. Liu, S.-S. Gong, and W. Li, Nature Communications 12, 4007 (2021).
* Sears _et al._ [2017] J. A. Sears, Y. Zhao, Z. Xu, J. W. Lynn, and Y.-J. Kim, Phys. Rev. B 95, 180411 (2017).
* Wolter _et al._ [2017] A. U. B. Wolter, L. T. Corredor, L. Janssen, K. Nenkov, S. Schönecker, S.-H. Do, K.-Y. Choi, R. Albrecht, J. Hunger, T. Doert, M. Vojta, and B. Büchner, Phys. Rev. B 96, 041405 (2017).
* Baek _et al._ [2017] S.-H. Baek, S.-H. Do, K.-Y. Choi, Y. S. Kwon, A. U. B. Wolter, S. Nishimoto, J. van den Brink, and B. Büchner, Phys. Rev. Lett. 119, 037201 (2017).
* Zheng _et al._ [2017] J. Zheng, K. Ran, T. Li, J. Wang, P. Wang, B. Liu, Z.-X. Liu, B. Normand, J. Wen, and W. Yu, Phys. Rev. Lett. 119, 227208 (2017).
* Banerjee _et al._ [2017] A. Banerjee, J. Yan, J. Knolle, C. A. Bridges, M. B. Stone, M. D. Lumsden, D. G. Mandrus, D. A. Tennant, R. Moessner, and S. E. Nagler, Science 356, 1055 (2017).
* Do _et al._ [2017] S.-H. Do, S.-Y. Park, J. Yoshitake, J. Nasu, Y. Motome, Y. S. Kwon, D. Adroja, D. Voneshen, K. Kim, T.-H. Jang, J.-H. Park, K.-Y. Choi, and S. Ji, Nature Physics 13, 1079 (2017).
* Kasahara _et al._ [2018] Y. Kasahara, T. Ohnishi, Y. Mizukami, O. Tanaka, S. Ma, K. Sugii, N. Kurita, H. Tanaka, J. Nasu, Y. Motome, T. Shibauchi, and Y. Matsuda, Nature 559, 227 (2018).
* Banerjee _et al._ [2018] A. Banerjee, P. Lampen-Kelley, J. Knolle, C. Balz, A. A. Aczel, B. Winn, Y. Liu, D. Pajerowski, J. Yan, C. A. Bridges, A. T. Savici, B. C. Chakoumakos, M. D. Lumsden, D. A. Tennant, R. Moessner, D. G. Mandrus, and S. E. Nagler, npj Quantum Materials 3, 8 (2018).
* Hentrich _et al._ [2018] R. Hentrich, A. U. B. Wolter, X. Zotos, W. Brenig, D. Nowak, A. Isaeva, T. Doert, A. Banerjee, P. Lampen-Kelley, D. G. Mandrus, S. E. Nagler, J. Sears, Y.-J. Kim, B. Büchner, and C. Hess, Phys. Rev. Lett. 120, 117204 (2018).
* Balz _et al._ [2019] C. Balz, P. Lampen-Kelley, A. Banerjee, J. Yan, Z. Lu, X. Hu, S. M. Yadav, Y. Takano, Y. Liu, D. A. Tennant, M. D. Lumsden, D. Mandrus, and S. E. Nagler, Phys. Rev. B 100, 060405 (2019).
* Yokoi _et al._ [2021] T. Yokoi, S. Ma, Y. Kasahara, S. Kasahara, T. Shibauchi, N. Kurita, H. Tanaka, J. Nasu, Y. Motome, C. Hickey, S. Trebst, and Y. Matsuda, Science 373, 568 (2021).
* Bruin _et al._ [2022] J. A. N. Bruin, R. R. Claus, Y. Matsumoto, N. Kurita, H. Tanaka, and H. Takagi, Nat. Phys. 18, 410 (2022).
* Lefrançois _et al._ [2022] E. Lefrançois, G. Grissonnanche, J. Baglo, P. Lampen-Kelley, J.-Q. Yan, C. Balz, D. Mandrus, S. E. Nagler, S. Kim, Y.-J. Kim, N. Doiron-Leyraud, and L. Taillefer, Phys. Rev. X 12, 021025 (2022).
* Lin _et al._ [2021] G. Lin, J. Jeong, C. Kim, Y. Wang, Q. Huang, T. Masuda, S. Asai, S. Itoh, G. Günther, M. Russina, Z. Lu, J. Sheng, L. Wang, J. Wang, G. Wang, Q. Ren, C. Xi, W. Tong, L. Ling, Z. Liu, L. Wu, J. Mei, Z. Qu, H. Zhou, J.-G. Park, and J. Ma, Nature Communications 12, 5559 (2021).
* Hong _et al._ [2021] X. Hong, M. Gillig, R. Hentrich, W. Yao, V. Kocsis, A. R. Witte, T. Schreiner, D. Baumann, N. Pérez, A. U. B. Wolter, Y. Li, B. Büchner, and C. Hess, Phys. Rev. B 104, 144426 (2021).
* Li _et al._ [2022b] N. Li, S. Guang, W. Chu, Q. Huang, J. Liu, K. Xia, X. Zhou, X. Yue, Y. Sun, Y. Wang, Q. Li, G. Lin, J. Ma, X. Zhao, H. Zhou, and X. Sun, arXiv preprint arXiv:2201.11396 10.48550/arXiv.2201.11396 (2022b).
* Yang _et al._ [2022] H. Yang, C. Kim, Y. Choi, J. H. Lee, G. Lin, J. Ma, M. Kratochvílová, P. Proschek, E.-G. Moon, K. H. Lee, Y. S. Oh, and J.-G. Park, Phys. Rev. B 106, L081116 (2022).
* Xiao _et al._ [2021] G. Xiao, Z. Xia, Y. Song, and L. Xiao, Journal of Physics: Condensed Matter 34, 075801 (2021).
* Takeda _et al._ [2022] H. Takeda, J. Mai, M. Akazawa, K. Tamura, J. Yan, K. Moovendaran, K. Raju, R. Sankar, K.-Y. Choi, and M. Yamashita, Phys. Rev. Research 4, L042035 (2022).
* Rusnačko _et al._ [2019] J. Rusnačko, D. Gotfryd, and J. Chaloupka, Phys. Rev. B 99, 064425 (2019).
* Maksimov and Chernyshev [2020] P. A. Maksimov and A. L. Chernyshev, Phys. Rev. Research 2, 033011 (2020).
* Laurell and Okamoto [2020] P. Laurell and S. Okamoto, npj Quantum Materials 5, 1 (2020).
* Songvilay _et al._ [2020] M. Songvilay, J. Robert, S. Petit, J. A. Rodriguez-Rivera, W. D. Ratcliff, F. Damay, V. Balédent, M. Jiménez-Ruiz, P. Lejay, E. Pachoud, A. Hadj-Azzem, V. Simonet, and C. Stock, Phys. Rev. B 102, 224429 (2020).
* Samarakoon _et al._ [2021] A. M. Samarakoon, Q. Chen, H. Zhou, and V. O. Garlea, Phys. Rev. B 104, 184415 (2021).
* Kim _et al._ [2022] C. Kim, J. Jeong, G. Lin, P. Park, T. Masuda, S. Asai, S. Itoh, H.-S. Kim, H. Zhou, J. Ma, and J.-G. Park, Journal of Physics: Condensed Matter 34, 045802 (2022).
* Das _et al._ [2021] S. Das, S. Voleti, T. Saha-Dasgupta, and A. Paramekanti, Phys. Rev. B 104, 134425 (2021).
* Sanders _et al._ [2022] A. L. Sanders, R. A. Mole, J. Liu, A. J. Brown, D. Yu, C. D. Ling, and S. Rachel, Phys. Rev. B 106, 014413 (2022).
* Yao _et al._ [2022] W. Yao, K. Iida, K. Kamazawa, and Y. Li, Phys. Rev. Lett. 129, 147202 (2022).
* Winter [2022] S. M. Winter, Journal of Physics: Materials 5, 045003 (2022).
* Maksimov _et al._ [2022] P. A. Maksimov, A. V. Ushakov, Z. V. Pchelkina, Y. Li, S. M. Winter, and S. V. Streltsov, Phys. Rev. B 106, 165131 (2022).
* Pandey and Feng [2022] S. K. Pandey and J. Feng, Phys. Rev. B 106, 174411 (2022).
* Lin _et al._ [2022] G. Lin, Q. Zhao, G. Li, M. Shu, Y. Ma, J. Jiao, Q. Huang, J. Sheng, A. Kolesnikov, L. Li, L. Wu, X. Wang, H. Zhou, Z. Liu, and J. Ma (2022), https://doi.org/10.21203/rs.3.rs-2034295/v1.
* Chen _et al._ [2021] W. Chen, X. Li, Z. Hu, Z. Hu, L. Yue, R. Sutarto, F. He, K. Iida, K. Kamazawa, W. Yu, X. Lin, and Y. Li, Phys. Rev. B 103, L180404 (2021).
* Lee _et al._ [2021] C. H. Lee, S. Lee, Y. S. Choi, Z. H. Jang, R. Kalaivanan, R. Sankar, and K.-Y. Choi, Phys. Rev. B 103, 214447 (2021).
* Kikuchi _et al._ [2022] J. Kikuchi, T. Kamoda, N. Mera, Y. Takahashi, K. Okumura, and Y. Yasui, arXiv preprint arXiv:2206.05409 (2022).
* Viciu _et al._ [2007] L. Viciu, Q. Huang, E. Morosan, H. Zandbergen, N. Greenbaum, T. McQueen, and R. Cava, Journal of Solid State Chemistry 180, 1060 (2007).
* Lefrançois _et al._ [2016] E. Lefrançois, M. Songvilay, J. Robert, G. Nataf, E. Jordan, L. Chaix, C. V. Colin, P. Lejay, A. Hadj-Azzem, R. Ballou, and V. Simonet, Phys. Rev. B 94, 214416 (2016).
* Bera _et al._ [2017] A. K. Bera, S. M. Yusuf, A. Kumar, and C. Ritter, Phys. Rev. B 95, 094424 (2017).
* Ye _et al._ [2012] F. Ye, S. Chi, H. Cao, B. C. Chakoumakos, J. A. Fernandez-Baca, R. Custelcean, T. F. Qi, O. B. Korneta, and G. Cao, Phys. Rev. B 85, 180403(R) (2012).
* Sears _et al._ [2015] J. A. Sears, M. Songvilay, K. W. Plumb, J. P. Clancy, Y. Qiu, Y. Zhao, D. Parshall, and Y.-J. Kim, Phys. Rev. B 91, 144420 (2015).
* Cao _et al._ [2016] H. B. Cao, A. Banerjee, J.-Q. Yan, C. A. Bridges, M. D. Lumsden, D. G. Mandrus, D. A. Tennant, B. C. Chakoumakos, and S. E. Nagler, Phys. Rev. B 93, 134423 (2016).
* [67] See Supplemental Material at xxx for methods and additional data and analyses.
* Balz _et al._ [2021] C. Balz, L. Janssen, P. Lampen-Kelley, A. Banerjee, Y. H. Liu, J.-Q. Yan, D. G. Mandrus, M. Vojta, and S. E. Nagler, Phys. Rev. B 103, 174417 (2021).
* Zhang _et al._ [2022] X. Zhang, Y. Xu, T. Halloran, R. Zhong, C. Broholm, R. Cava, N. Drichko, and N. Armitage, Nat. Mater. (2022).
* Spitz _et al._ [2022] L. Spitz, T. Nomoto, S. Kitou, H. Nakao, A. Kikkawa, S. Francoual, Y. Taguchi, R. Arita, Y. Tokura, T.-h. Arima, and M. Hirschberger, J. Am. Chem. Soc. 144, 16866 (2022).
* [71] W. G. F. Krüger, W. Chen, X. Jin, Y. Li, and L. Janssen, to appear.
* Jin _et al._ [2021] X. Jin, G. He, C. Balz, W. Yao, Y. Li, and X. Li (2021), Investigation of spin excitations in Na2Co2TeO6 single crystals, STFC ISIS Neutron and Muon Source, https://doi.org/10.5286/ISIS.E.RB2010025.
* Lynn _et al._ [2012] J. Lynn, Y. Chen, S. Chang, Y. Zhao, S. Chi, W. Ratcliff, B. G. Ueland, and R. W. Erwin, Journal of research of the National Institute of Standards and Technology 117, 61 (2012).
* Rodriguez _et al._ [2008] J. A. Rodriguez, D. M. Adler, P. C. Brand, C. Broholm, J. C. Cook, C. Brocker, R. Hammond, Z. Huang, P. Hundertmark, J. W. Lynn, N. C. Maliszewskyj, J. Moyer, J. Orndorff, D. Pierce, T. D. Pike, G. Scharfstein, S. A. Smee, and R. Vilaseca, Measurement Science and Technology 19, 034023 (2008).
* Bewley _et al._ [2011] R. Bewley, J. Taylor, and S. Bennington, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 637, 128 (2011).
* Azuah _et al._ [2009] R. T. Azuah, L. R. Kneller, Y. Qiu, P. L. Tregenna-Piggott, C. M. Brown, J. R. Copley, and R. M. Dimeo, Journal of Research of the National Institute of Standards and Technology 114, 341 (2009).
* Ewings _et al._ [2016] R. Ewings, A. Buts, M. Le, J. Van Duijn, I. Bustinduy, and T. Perring, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 834, 132 (2016).
* Shirane _et al._ [2002] G. Shirane, S. M. Shapiro, and J. M. Tranquada, _Neutron scattering with a triple-axis spectrometer: basic techniques_ (Cambridge University Press, 2002).
Supplemental Material for “Magnetic ground state of the Kitaev Na_2Co_2TeO_6
spin liquid candidate”
## I Single crystals and neutron diffraction experiments
The samples used in the three neutron diffraction experiments, prepared with the
same method as in [16] and [53], are shown in Fig. S5. Previous thermodynamic
and diffraction studies on a large number of crystals confirmed that
Na_2Co_2TeO_6 has only one major magnetic ordering transition, at $\sim 26.5$ K
[16, 53]. This suggests that stacking faults are not severe in samples prepared
with our method, in contrast to $\alpha$-RuCl_3 [66].
Neutron diffraction measurements were performed on the BT-7 triple-axis
spectrometer [73] with an incident neutron energy of $E_{i}$ = 14.7 meV and on
the multi-axis crystal spectrometer (MACS) [74] with $E_{i}$ = 5.0 meV, both at
the NIST Center for Neutron Research (NCNR), USA. Additional neutron diffraction
data come from an experiment performed on the time-of-flight spectrometer LET
[75] with $E_{i}$ = 12.0 meV at the ISIS Spallation Neutron Source, Rutherford
Appleton Laboratory, UK. A single crystal with a mass of about 30
mg was used in the BT-7 experiment. Coaligned single-crystal arrays of about
800 mg and 750 mg were used in the MACS and LET experiments, respectively. The
space group $P6_{3}22$ is used with $a=b=5.28$ $\rm\AA$, $c=11.22$ $\rm\AA$
[63]. The wave vector is defined as
$\textbf{Q}=H\textbf{a*}+K\textbf{b*}+L\textbf{c*}$, with
$a^{*}=b^{*}=\frac{4\pi}{\sqrt{3}a}$, $c^{*}=\frac{2\pi}{c}$. In all these
experiments, the single crystals were aligned with the ($H$, 0, $L$) plane
horizontal (Fig. 1 inset and Fig. S5). The magnetic field (up to 10 T in BT-7 and
MACS, and up to 8.8 T in LET) was applied vertically, i.e., parallel to the
two-dimensional honeycomb planes (Fig. 1 inset). Data reduction was
performed with DAVE [76] for the BT-7 and MACS data, and with Horace [77] for the LET data.
In the main text, the data of Fig. 1(c), Fig. 2(a) and (b), and Fig. 3(e) are
from BT-7; the data of Fig. 3(a)-(d) are from MACS; the data of Fig. 2(c) are
from LET.
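With these definitions, the conversion from Miller indices to momentum transfer can be sketched as follows. This is a minimal illustrative helper: the 60° angle between $\textbf{a*}$ and $\textbf{b*}$, implicit in the hexagonal metric, produces the $HK$ cross term.

```python
import numpy as np

# Hexagonal lattice parameters quoted above (angstrom).
a, c = 5.28, 11.22
a_star = 4.0 * np.pi / (np.sqrt(3.0) * a)   # a* = b*
c_star = 2.0 * np.pi / c

def q_magnitude(H, K, L):
    """|Q| in inverse angstrom for the hexagonal metric, where the
    in-plane reciprocal vectors a* and b* subtend 60 degrees."""
    return np.sqrt(a_star**2 * (H**2 + K**2 + H * K) + c_star**2 * L**2)

# Example: the magnetic Bragg peak M1(0.5, 0, 1).
print(round(q_magnitude(0.5, 0.0, 1.0), 3))   # ~0.886 inverse angstrom
```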
Figure S5: (a)-(c) Single crystal samples used in BT-7, MACS, and LET
experiments, respectively.
## II Additional Field Dependence Data
Fig. S6 presents the field dependence of the (0.5, 0, 1) intensity at higher
temperatures, measured with BT-7. The bifurcation between the field-increasing
and field-decreasing data persists at 12 K and 15 K, with the position of the
anomaly in field essentially unchanged. Upon further increasing the temperature,
the two data sets overlap completely and show a clear transition to a
paramagnetic state, at 8 T for 18.5 K and at 7 T for 21 K.
Figure S6: (a) - (d) Field dependence of the intensity at (0.5, 0, 1) measured
with BT-7 at 12 K, 15 K, 18.5 K, and 21 K, respectively. The shaded regions in
(a) and (b) indicate the bifurcation of the field increasing and decreasing
processes. The red arrows in (c) and (d) indicate the transition to a
paramagnetic state.
Characteristic temperatures and fields for $H\parallel a$ obtained from this
study [Fig. S6 and Fig. 1(b) in the main text], as well as those from previous
reports [16, 40], are summarized in the phase diagram in Fig. S7. Below $\sim$8
T, the phase boundary determined from neutron diffraction is consistent with
magnetic susceptibility and magnetization measurements. A new phase boundary
close to 8 T is identified in this study and is related to a first-order phase
transition.
Figure S7: Phase diagram as a function of in-plane magnetic field ($H\parallel a$)
and temperature. The phase boundaries determined by magnetic susceptibility and
magnetization are adapted from [16] and [40], respectively.
Fig. S8(a) shows the reciprocal-space coverage of the field-dependence
measurements with LET. In these measurements, we fixed the sample at a specific
rotation angle so that the out-of-plane magnetic Bragg peak (0, 0.5, 1) [red
circle in Fig. S8(a)] could be covered, and then varied the magnetic field. The
measurable trajectory is an arc in the (-0.25+$H$, 0.5, $L$) plane. The field
dependence of (0, 0.5, 1) is presented in Fig. 2(c) in the main text. There is
no field dependence for the background [grey circle in Fig. S8(a) and Fig.
S8(b)], confirming that the measurements are reliable. Owing to the nearly
symmetric vertical detector coverage, we could simultaneously measure another
equivalent magnetic Bragg peak, (0.5, -0.5, 1), in the (0.25+$H$, -0.5, $L$)
plane [green circle in Fig. S8(c)]. Fig. S8(d) shows its field dependence, which
is similar to that of (0, 0.5, 1). Note that between the field-increase and
field-decrease measurements, we performed other measurements at 8.8 T and 2 K
that involved sample rotation. We therefore corrected for the sample movement in
the field-decrease measurement of (0.5, -0.5, 1) by multiplying its intensity by
a factor of 1.09.
Figure S8: (a) and (c) The reciprocal space coverage for the field dependence
measurements performed with LET at 2 K. The presented data are the constant
energy cuts (E = 0 meV, elastic) at 0 T in (-0.25 + $H$, 0.5, $L$) and (0.25 +
$H$, -0.5, $L$) planes for (a) and (c), respectively. The inset of (a) shows
the positions of two measured magnetic peaks in the Brillouin zone boundary
(with $L=1$). (b) and (d) Field dependence of the background [at (0.18, 0.5,
1.5), grey circle in (a)] and a simultaneously measurable magnetic peak [(0.5,
-0.5, 1), green circle in (c)]. The field dependence of (0, 0.5, 1) [red
circle in (a)] is presented in the main text [Fig. 2(c)].
Fig. S9 shows the field dependence of the intensity at (0.5, 0, 1.5), which
shows opposite behavior to the intensity at (0.5, 0, 1). The intensity at
(0.5, 0, 1.5) gets enhanced after field training, proving that the lost
intensity at (0.5, 0, 1) goes into non-integer-$L$ positions.
## III Additional Momentum-Scan Data
Fig. S10 shows $H$-scans at selected structural Bragg peaks, based on which
the field-induced moment is estimated (Section IV). The non-magnetic nature of
these peaks at 0 T can be confirmed by the measurement above $T_{\rm N}$, as shown in
Fig. S11. Fig. S12 compares $H$-scans at 10 T, 2 K and 0 T, 35 K. It
demonstrates that the remaining intensity at 10 T, 2 K is from magnetic
scattering. Fig. S13 shows long $L$-scans from BT-7 measurements at 0 T before
and after field training. The intensity at (0.5, 0, 1) goes into non-
integer-$L$ positions, which is consistent with Fig. 3(d) in the main text
(from MACS measurements).
## IV Evaluation of Field-induced Moment
For a structural Bragg peak, the integrated intensity is proportional to the
modulus square of the structure factor $F_{N}(\textbf{Q})$:
$I_{N}(\textbf{Q})=A|F_{N}(\textbf{Q})|^{2},$ (S1)
where
$F_{N}(\textbf{Q})=\sum_{j}b_{j}e^{i\textbf{Q}\cdot\textbf{r}_{j}}e^{W_{j}}.$
(S2)
For a magnetic Bragg peak, the integrated intensity is proportional to the
modulus square of the magnetic structure factor
$\textbf{F}_{M}(\textbf{Q})$:
$I_{M}(\textbf{Q})=B|\textbf{F}_{M}(\textbf{Q})|^{2},$ (S3)
where
$\textbf{F}_{M}(\textbf{Q})=\sum_{j}\frac{\gamma
r_{0}}{2}g_{j}f_{j}(Q)\textbf{S}_{\perp
j}e^{i\textbf{Q}\cdot\textbf{r}_{j}}e^{W_{j}}.$ (S4)
$\textbf{S}_{\perp j}$ is the spin size at site $j$ that is detectable by
neutrons:
$\textbf{S}_{\perp
j}=\textbf{S}_{j}-\hat{\textbf{Q}}(\hat{\textbf{Q}}\cdot\textbf{S}_{j}),$ (S5)
where $\hat{\textbf{Q}}$ is the unit vector of Q and $\textbf{S}_{j}$ is the
spin vector at site $j$. The magnetic moment at site $j$ in Bohr magneton is
$g_{j}S_{j}$. The term $\frac{\gamma r_{0}}{2}$ in (4), containing the classical electron radius ($r_{0}$) and the gyromagnetic ratio ($\gamma$), acts as an effective scattering length per Bohr magneton, equal to 2.695 fm.
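As a quick numerical cross-check (using the standard constants, not values from this study), $\frac{\gamma r_{0}}{2}$ indeed evaluates to 2.695 fm:

```python
# Cross-check of the effective magnetic scattering length gamma*r0/2.
gamma = 1.91304   # neutron gyromagnetic ratio (dimensionless)
r0 = 2.81794      # classical electron radius, in fm

b_m = gamma * r0 / 2.0   # effective scattering length per Bohr magneton
print(f"{b_m:.3f} fm")   # -> 2.695 fm
```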
The full integrated intensity in the magnetic fields is the sum of $I_{N}$ and
$I_{M}$:
$I(\textbf{Q})=I_{N}(\textbf{Q})+I_{M}(\textbf{Q}).$ (S6)
Since we measure at low temperature, we approximate the Debye–Waller factor
($e^{W_{j}}$) to be unity in (2) and (4).
The factors A and B in (1) and (3) contain information about the number
density of the structural or magnetic unit cells. A and B also take into account the influences of resolution, geometric, and absorption factors in a real
scattering process. These factors will be the same for structural and magnetic
reflections at a specific position in the reciprocal space.
For an estimate of the field-induced moment, we assume all spins point along the field (within the honeycomb plane), so that they are fully detected by neutrons. Thus the spin vectors defined in (4) are parallel to each other. This assumption is reasonable as the intensity of the AFM peaks is already quite weak at 10 T (see Figs. 2 and 3 in the main text). Next we assume that the moment sizes at the two cobalt sites are the same (they are different but close according to previous neutron diffraction studies [62, 63, 49]). With these assumptions, we can reduce (4) to
$\textbf{F}_{M}(\textbf{Q})=\frac{\gamma
r_{0}}{2}g\textbf{S}f(Q)\sum_{j}e^{i\textbf{Q}\cdot\textbf{r}_{j}},$ (S7)
where $g\textbf{S}$ is the moment size (in $\mu_{B}$) along the field, which is to be determined, and $f(Q)$ is the magnetic form factor of the Co$^{2+}$ ion. Therefore, for a specific position [e.g. (1, 0, 1)], $A=B$ in (1) and (3).
$F_{N}(\textbf{Q})$ can be calculated directly from the reported crystal structure, as can the summation in (7). Finally, we solve (1), (3), and (6) to obtain the moment size in (7).
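The evaluation procedure above can be sketched numerically; all input values in this example are hypothetical placeholders, not the measured intensities:

```python
import numpy as np

# Sketch of the moment evaluation from (1), (3), and (6); all numbers
# below are hypothetical placeholders, not the measured intensities.
b_m = 2.695          # effective scattering length per Bohr magneton (fm)

def field_induced_moment(I_0T, I_10T, F_N, fQ, geom_sum):
    """Solve I = I_N + I_M for the moment gS (in mu_B).

    I_0T     : integrated intensity at 0 T (purely structural, I_N)
    I_10T    : integrated intensity in field (I_N + I_M)
    F_N      : nuclear structure factor |F_N(Q)| (fm)
    fQ       : magnetic form factor f(Q) of Co2+
    geom_sum : |sum_j exp(i Q . r_j)| over the magnetic sites
    """
    A = I_0T / F_N**2              # calibration constant, with A = B
    I_M = I_10T - I_0T             # magnetic part of the intensity, Eq. (6)
    F_M = np.sqrt(I_M / A)         # |F_M(Q)| from Eq. (3)
    return F_M / (b_m * fQ * geom_sum)   # gS from Eq. (7)

# hypothetical example values
gS = field_induced_moment(I_0T=100.0, I_10T=140.0, F_N=10.0, fQ=0.9, geom_sum=2.0)
print(round(gS, 3))
```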
The calculation is based on $H$-scans at structural Bragg peaks, as shown in Fig. S10. The results are listed in Table 1. The average of the five reliable measurements gives a magnetic moment of 2.05(3) $\mu_{B}$/Co$^{2+}$. We followed the procedure of [78] in the above calculation.
Table 1: Field-induced magnetic moments (in $\mu_{\rm{B}}$/Co$^{2+}$) deduced from structural Bragg peaks in two diffraction experiments.

| | (1, 0, 0) | (1, 0, 1) | (1, 0, 2) | (1, 0, 3) |
|---|---|---|---|---|
| BT-7 | - | 2.08(6) | -$^{a}$ | 2.01(7) |
| MACS | 2.06(7) | 1.97(8) | 2.13(8) | - |

$^{a}$This moment could not be determined reliably due to a background problem; see Fig. S10(b).
## V Temperature dependence behavior expected from the “Triple-q” order after
field training
After ZFC, the chirality distribution is shown in Fig. S14(a), where we have assumed a uniform chirality for simplicity. With the application of an in-plane magnetic field, the opposite spin chirality is introduced in certain layers, causing a partial loss of $c$-axis correlation. For the opposite spin chirality, there are four kinds of distributions with respect to the initial one [upper part of Fig. S14(a)], as shown in Fig. S14(b). We note that these four distributions affect the in-plane Bragg peaks differently. Pattern I in Fig. S14(b) leads to peak broadening for M1, while M2 and M3 remain intact. In the same vein, patterns II, III, and IV lead to peak broadening for M2, M3, and all three kinds of peaks, respectively. From the experiment, we know that M2 and M3 do not broaden after field training; therefore pattern I conforms to our case. We expect these four patterns to be degenerate in energy. Therefore, upon warming from the field-trained state, pattern I can be thermally activated into the other three patterns. As discussed above, patterns II and III do not cause peak broadening for M1, but do so for M2 and M3, respectively. Thus the M1 peak narrows along the $c$-axis, which is exactly what we observe. As a consequence, the M2 and M3 peaks are expected to lose intensity faster than when warming directly from a ZFC state [Fig. 3(e) in the main text]. When the temperature is high enough, the four patterns occur with equal probability, leading to the convergence of the temperature-dependence behavior of the three kinds of peaks. This process amounts to a thermal redistribution of the field-induced opposite spin chirality.
Figure S9: Field dependence of the intensity of (0.5, 0, 1.5) at 2 K from BT-7 measurements.

Figure S10: (a)-(c) $H$-scans through (1, 0, 1), (1, 0, 2), and (1, 0, 3) at 0 T (red) and 10 T (blue) from BT-7 measurements. (d)-(f) $H$-scans through (1, 0, 0), (1, 0, 1), and (1, 0, 2) at 0 T (red) and 10 T (blue) from MACS measurements. The solid lines are fits with Gaussian profiles.

Figure S11: $H$-scans through (1, 0, 1) at 0 T, 2 K and 0 T, 35 K from BT-7 measurements.

Figure S12: $H$-scans through (0.5, 0, 1) (a) and (0.5, 0, 2) (b) at 10 T, 2 K and 0 T, 35 K from BT-7 measurements.

Figure S13: (a) $L$-scans performed on BT-7 through (0.5, 0, 1) at 0 T after ZFC and field training. (b) Intensity difference (0 T, trained - 0 T, ZFC) between the two measurements in (a).

Figure S14: (a) $c$-axis stacking of the “triple-q” order after ZFC. The upper part shows the spin chirality distribution in the honeycomb lattice, which follows Fig. 1(b) in the main text. (b) Four kinds of (opposite) spin chirality distributions (I - IV) in the honeycomb lattice. Magnetic Bragg peaks in the Brillouin zone are displayed at the corner of each panel. The colored dots follow the inset of Fig. 1(c) in the main text. The crosses show the Bragg peaks where broadening happens due to the loss of partial $c$-axis correlation.
|
# Federated Learning-Based Cell-Free Massive MIMO System for Privacy-
Preserving
Jiayi Zhang, , Jing Zhang, Derrick Wing Kwan Ng, and Bo Ai J. Zhang and J.
Zhang are with the School of Electronic and Information Engineering, Beijing
Jiaotong University, Beijing, China. (email: jiayizhang@bjtu.edu.cn).D. W. K.
Ng is with School of Electrical Engineering and Telecommunications, University
of New South Wales, Sydney, N.S.W., Australia (email: w.k.ng@unsw.edu.au).B.
Ai is with State Key Laboratory of Rail Traffic Control and Safety, Beijing
Jiaotong University, Beijing 100044, China (email: boai@bjtu.edu.cn).
###### Abstract
Cell-free massive MIMO (CF mMIMO) is a promising next generation wireless
architecture to realize federated learning (FL). However, sensitive
information of user equipments (UEs) may be exposed to the involved access
points or the central processing unit in practice. To guarantee data privacy,
effective privacy-preserving mechanisms are defined in this paper. In
particular, we demonstrate and characterize the possibility of exploiting the
inherent quantization error, caused by low-resolution analog-to-digital
converters (ADCs) and digital-to-analog converters (DACs), for privacy preservation in an FL CF mMIMO system. Furthermore, to reduce the required uplink training time in such a system, a stochastic non-convex design problem that jointly optimizes the transmit power and the data rate is formulated. To
address the problem at hand, we propose a novel power control method by
utilizing the successive convex approximation approach to obtain a suboptimal
solution. Besides, an asynchronous protocol is established for mitigating the
straggler effect to facilitate FL. Numerical results show that compared with
the conventional full power transmission, adopting the proposed power control
method can effectively reduce the uplink training time under various practical
system settings. Also, our results unveil that our proposed asynchronous
approach can reduce the waiting time at the central processing unit for
receiving all user information, as there are no stragglers that require a long time to report their local updates.
###### Index Terms:
Cell-free massive MIMO, federated learning, power control, differential
privacy.
## I Introduction
Massive multiple-input multiple-output (MIMO) is an unprecedented technique to
increase both the spectral efficiency (SE) and energy efficiency (EE) of
communication systems. As a result, massive MIMO has already been utilized in
practical cellular systems [1, 2, 3, 4, 5]. However, some serious concerns
have recently been raised about data privacy. Indeed, mobile devices nowadays
are often equipped with high computing capabilities enabling them to collect
and process large amounts of data [6, 7]. Specifically, numerous applications
perform data preprocessing and classification for predicting possible future
events via using various machine learning technologies [8, 9]. The vast amount
of data of devices is generally collected for numerous private applications
carrying sensation information and thus naturally causes privacy concerns. On
the other hand, it is generally challenging to transmit all the data to a
central processing unit (CPU) for training a deep learning model. Besides, due
to the limited resources of wireless systems, sending a large amount of data
through wireless link is not always possible as it would introduce expensive
communication costs and exceedingly long communication delays.
### I-A Related Works
In order to address the above challenges, it is necessary to design a novel
machine learning (ML) technology such that each user equipment (UE) can be
trained locally based on the data it collects and collaboratively establish a
shared global learning model. One of the most promising decentralized learning
methods to achieve this goal is FL [10, 11]. In particular, multiple UEs are
allowed to jointly train a global ML model without having to exchange raw data
among them or transfer their data to the CPU [12]. Specifically, the CPU first
broadcasts the latest global model to all the participating UEs. Next, the UEs
calculate the corresponding local update based on the available data and then
send their local models back to the CPU. These steps are repeated until a certain level of global model accuracy is reached. In this way, only local model
parameters are exchanged, thereby reducing the required communication
signaling overhead.
Although sharing raw data over the wireless channel can be avoided via FL, UEs’ sensitive information may still be revealed through any form of leaked information. For example, a malicious CPU can perform a model inversion
attack [13] to infer the presence of individual data samples. Moreover, other
adversaries can apply differential attacks to the wireless communication phase, in which data are exchanged between the CPU and the distributed access points
(APs) [10]. In the literature, there are three popular techniques for
maintaining privacy, including anonymization, data encryption and differential
privacy (DP) [14, 15, 16], each with different drawbacks. For instance, anonymization strategies do not guarantee a complete level of protection from adversaries, and cryptographic techniques are computationally expensive. In
contrast, differential privacy is easy to implement and provides a provable privacy guarantee. Specifically, DP prevents the sensitive information of UEs
from being easily exposed even if the CPU/adversaries can access the model
parameters and acquire the knowledge of the adopted training mechanism [17].
In fact, one appealing approach to realize DP is via dedicated noise injection
[18]. The main idea of this approach is to deliberately introduce noise into the uploaded local model updates such that the CPU/adversaries cannot infer
any sensitive information from exploiting the actual data. Recently,
remarkable efforts have been made to investigate DP mechanisms for wireless FL
through artificial noise injection. For instance, in [19], Gaussian noise was
added to the local updated data and the power control was applied to realize
different levels of DP protection. Besides, the results in [18] and [20]
showed that the inherent channel noise can be exploited for guaranteeing DP
FL. Indeed, by deploying a proper power control, one can harness the channel
noise to achieve privacy for free. Also, in [21], the inherent hardware-
induced distortion was exploited to facilitate local model updates and a power
allocation strategy was proposed to provide guaranteed DP. On the other hand,
the existence of straggler effect also creates a bottleneck in realizing
effective FL in wireless networks [22]. By definition, the CPU needs to wait
until it receives the training updates from all UEs before proceeding to the next step. Therefore, straggler UEs with unfavorable links may greatly slow
down the entire FL process and reduce its practicality [22, 23].
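As a minimal illustration of DP via noise injection (a generic Gaussian-mechanism sketch, not the specific schemes of [18, 19, 20, 21]; the clipping threshold and $(\epsilon,\delta)$ values below are assumptions):

```python
import numpy as np

# Minimal sketch of the Gaussian mechanism for DP noise injection;
# a generic illustration, not the specific cited schemes.
def gaussian_mechanism(grad, clip_norm, epsilon, delta, rng):
    """Clip a local gradient and add Gaussian noise for (eps, delta)-DP."""
    # bound the L2 sensitivity by clipping
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # standard Gaussian-mechanism noise scale for a single release
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

rng = np.random.default_rng(0)
noisy = gaussian_mechanism(np.ones(4), clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=rng)
print(noisy.shape)  # (4,)
```

In FL, each UE would apply such a mechanism to its local update before transmission; the paper's point is that quantization noise can play the role of the injected noise.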
### I-B Contributions
All the aforementioned works, e.g. [18, 19, 20], only assume simple wireless
environments, e.g. Gaussian channels with additive white noise, which does not
consider the fluctuation of practical wireless channels. Also, in practice,
when the number of UEs increases, the required training time could be
significantly prolonged. In such scenarios, to serve a large number of UEs via
the same time/frequency resources, cell-free massive multiple-input multiple-
output (CF mMIMO) systems have been applied for supporting FL [24]. Herein,
multiple distributed APs are connected to the CPU through capacity-unlimited
fronthaul links to serve UEs coherently. Since the large number of APs can
provide a rich macro-diversity gain for ensuring uniform received power
strength, the performance of CF mMIMO supported FL is less prone to UEs with
weak communication links. However, the literature on cell-free massive MIMO systems implemented with federated learning is still limited. A scheme for CF mMIMO networks to support any FL framework was first proposed in [25]. An optimization problem was also formulated to jointly optimize the local accuracy, transmit power, data rate, and users’ processing frequency, and was solved by employing the online successive convex
approximation approach. Also, a UE selection approach was proposed in [26] to
mitigate the straggler effect with UE sampling for FL in CF mMIMO networks. It
selects only a small subset of UEs for participating in one FL process. In
[27], a novel scheme that asynchronously executes the iterations of FL
processes was designed for multicasting downlink and conventional uplink
transmission protocols. However, privacy preservation is not considered in [25, 26, 27]. In [28], the authors developed and analyzed a privacy-preserving channel estimation scheme in CF mMIMO systems. Yet, the design in [28] only provides data privacy protection during the channel estimation phase; the information transmitted by the UEs in the payload data transmission phase therefore remains at high risk of leakage. Besides, practical digital CF mMIMO communication systems adopt low-resolution analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) to reduce the associated power consumption and hardware cost [29]. In fact, this inherent quantization noise can be exploited to enhance DP, a possibility that has been somewhat overlooked in the literature.
Motivated by the above discussion, we consider a practical CF mMIMO supported
FL framework and demonstrate that the inherent quantization noise caused by
low-resolution ADCs and DACs in CF mMIMO can be exploited as a useful privacy-
preserving mechanism. Our contributions are listed as follows:
* •
First, we capitalize on the quantization noise introduced by the low-resolution
ADCs and DACs to prevent the CPU/adversaries from exploiting the actual local
gradient updates to infer sensitive information and hence, realize privacy-
preserving. Within this proposed framework, we derive an upper-bound to
characterize the privacy violation probability and adopt it for formulating a
privacy preservation condition.
* •
Then, we provide a closed-form convergence analysis of the DP mechanism,
taking into account the quantization noise, the length of the local updates,
and the total data size. It can be observed from the upper bound of the
average optimality gap that the noise added in the initial iterations is less
damaging to the final optimality gap than that added in later iterations.
Besides, the initial optimality gap decays geometrically as the number of
iterations increases. Note that since the quantization noise is exploited for privacy protection, different quantization accuracies realize different DP protection levels.
* •
Third, in order to minimize the uplink training time of CF mMIMO-supported-FL,
we formulate a stochastic nonconvex optimization problem that jointly
optimizes the transmit power and the data rate, subject to the practical
constraints on energy consumption of the UEs with different quantization
accuracies of the ADCs and DACs. Our numerical results reveal that the
proposed power control method can effectively reduce the uplink training time
under different privacy protection levels. Besides, our power control method
still performs well over various baseline schemes for different number of APs
and UEs.
* •
Finally, we propose an asynchronous FL protocol to alleviate the straggler
effect, governed by two simple parameters, the lag tolerance and the lag percent. We also compare the performance of synchronous and asynchronous FL and empirically analyze the impacts of the lag tolerance and lag percent.
_Notations:_ Throughout the paper, $\mathbb{R}$ and $\mathbb{C}$ represent
the sets of all real and complex values, respectively. We denote a complex
zero-mean normal distribution with variance $\sigma^{2}$ by ${{\cal
N}_{\mathbb{C}}}\left({0,{\sigma^{2}}}\right)$. The cardinality of a set
$\mathcal{A}$ is denoted by $\left|{\cal A}\right|$. Furthermore, the
difference between two sets is defined as ${\cal A}^{\prime}-{\cal
A}^{\prime\prime}=\left\\{{\left.x\right|x\in{\cal A}^{\prime},x\notin{\cal
A}^{\prime\prime}}\right\\}$. The union of sets ${{\cal A}_{1}},\cdots,{{\cal
A}_{K}}$ is represented by ${\cal A}=\bigcup\nolimits_{k=1}^{K}{{{\cal
A}_{k}}}$. Boldfaced lower-case letters, e.g., $\bf{a}$, represent vectors; ${\bf{a}}^{H}$, ${\bf{a}}^{*}$, and $\left\|{\bf{a}}\right\|$ denote the Hermitian transpose, conjugate, and Euclidean norm of $\bf{a}$, respectively.
## II System Model
Figure 1: Illustrations of a CF mMIMO system supported FL with low-resolutions
ADCs equipped at the APs.
As shown in Fig. 1, we consider a CF mMIMO supported FL system consisting of
$K$ single-antenna UEs and $L$ single-antenna APs [30]. All APs and UEs are
randomly located in a $D\times D$ area. Each UE is served by all the APs over
the same time/frequency resources. A CPU is connected to all APs via ideal
wireless fronthaul. UE $k$, $k\in\left\\{{1,\cdots,K}\right\\}$, is equipped
with its own local dataset ${{\cal
B}_{k}}\buildrel\Delta\over{=}\left\\{{\left({{{\bf{x}}_{kn}},{y_{kn}}}\right)}\right\\}_{n=1}^{{B_{k}}}$,
where $B_{k}$ is the data size and ${\left({{{\bf{x}}_{kn}},{y_{kn}}}\right)}$
is the corresponding $n$th data sample. The objective of FL is to find a
$d\times 1$ optimal model vector ${\bf{w}}$ that minimizes the global loss
function [21]:
$\mathop{\text{minimize}}\limits_{\bf{w}}{\rm{}}F\left({\bf{w}}\right)=\frac{1}{K}\sum\limits_{i=1}^{K}{{F_{i}}}\left({\bf{w}}\right),$
(1)
where ${B_{{\rm{tot}}}}=\sum\limits_{k=1}^{K}{{B_{k}}}$ is the total data size and
${{F_{k}}\left({\bf{w}}\right)}$ is the local loss function which is given by
${F_{k}}\left({\mathbf{w}}\right)=\frac{1}{{{B_{k}}}}\sum\nolimits_{\left({{{\mathbf{x}}_{k}},{y_{k}}}\right)\in{\mathcal{B}_{k}}}{f\left({{\mathbf{w}},{{\mathbf{x}}_{k}},{y_{k}}}\right)},$
(2)
where ${f\left({{\bf{w}},{{\bf{x}}_{k}},{y_{k}}}\right)}$ is the sample-wise loss function that quantifies the prediction error of the model ${\bf{w}}$ on the training sample ${{\bf{x}}_{k}}$ with respect to the label $y_{k}$.
### II-A Learning Protocol
In order to address problem (1), we apply the distributed stochastic gradient
descent (SGD) [31, 32] at the CPU and the UEs. Note that the CPU and the UEs,
respectively, act as the central server and the clients in the general FL
framework. The APs with CF mMIMO are only used to relay the training updates
between the CPU and the UEs. The specific procedure is summarized as follows.
* •
Step 1: Downlink communication for model download. The CPU broadcasts the
current model, i.e., ${\bf{w}}^{t}$, to all the UEs, where $t$ represents the
communication round, $t=1,\cdots,T$.
* •
Step 2: Local computation. Each UE computes the gradient of the local loss
function in (2) via
$\nabla{F_{k}}\left({{{\bf{w}}^{t}}}\right)=\frac{1}{{{B_{k}}}}\sum\limits_{\left({{{\bf{x}}_{k}},{y_{k}}}\right)\in{{\cal
B}_{k}}}\nabla{f\left({{{\bf{w}}^{t}},{{\bf{x}}_{k}},{y_{k}}}\right)},\forall
t,$ (3)
where $\nabla{F_{k}}\left({{{\bf{w}}^{t}}}\right)$ is the gradient of
${F_{k}}\left({{{\bf{w}}^{t}}}\right)$.
* •
Step 3: Uplink communication for model upload. The UEs send the gradient of
the local loss function to the CPU utilizing the same time and frequency
resources.
* •
Step 4: Global computation. Based on the received signal, the CPU obtains an
estimated $\widehat{\nabla F}\left({{{\bf{w}}^{t}}}\right)$ of the global
gradient by computing
$\nabla
F\left({{{\bf{w}}^{t}}}\right)=\frac{1}{{{B_{{\rm{tot}}}}}}\sum\limits_{i=1}^{K}{\nabla{F_{i}}\left({{{\bf{w}}^{t}}}\right)}.$
(4)
Then, the CPU updates the current global model as
${{\bf{w}}^{t+1}}={{\bf{w}}^{t}}-\eta\widehat{\nabla
F}\left({{{\bf{w}}^{t}}}\right),$ (5)
where ${\eta}$ denotes the learning rate.
Note that Steps 2 to 4 are repeated until a convergence criterion is met.
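The four steps above can be sketched as a plain distributed-SGD loop (a schematic illustration only: gradients are exchanged noiselessly here, whereas in the considered system they traverse the quantized CF mMIMO uplink; equal data sizes $B_k$ are an assumption):

```python
import numpy as np

# Schematic federated SGD loop for Steps 1-4; gradients are exchanged
# noiselessly here, unlike the CF mMIMO uplink considered in the paper.
def federated_sgd(local_grads, w0, eta, T):
    """local_grads: list of callables, one per UE, returning grad F_k(w)."""
    w = w0.copy()
    B = [1.0] * len(local_grads)          # equal data sizes B_k (assumption)
    B_tot = sum(B)
    for _ in range(T):
        # Step 1: broadcast w; Step 2: local gradients; Step 4: aggregate, Eq. (4)
        g = sum(B[k] * f(w) for k, f in enumerate(local_grads)) / B_tot
        w = w - eta * g                    # global update, Eq. (5)
    return w

# toy example: two UEs with quadratic losses F_k(w) = 0.5*(w - c_k)^2
grads = [lambda w: w - 1.0, lambda w: w - 3.0]
w_star = federated_sgd(grads, np.array([0.0]), eta=0.5, T=50)
print(w_star)  # converges toward the average minimizer 2.0
```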
### II-B Communication Model
The channel coefficient between AP $l$ and UE $k$ is denoted as
$h_{kl}\in\mathbb{C}$, which is modeled as
${h_{kl}}=\sqrt{{\beta_{kl}}}{g_{kl}}$, where $\beta_{kl}\in\mathbb{R}$ represents
the large-scale fading and $g_{kl}\in\mathbb{C}$ represents the small-scale
fading coefficient, respectively [33]. We adopt the block fading model where
$h_{kl}$ is a constant in each time-frequency block. Without loss of
generality, each block contains ${\tau_{c}}$ channel uses, which consists of
$\tau_{p}$ channel uses dedicated for acquiring the channel state information
(CSI) and $\tau_{c}-\tau_{p}$ channel uses for the uncoded transmission of the
$d$-dimensional gradient vector [18]. Besides, we assume that perfect CSI is
available at the APs [18]. In the following, we derive the uplink training
expressions when considering the inherent noise induced by low-resolution ADCs at the APs or low-resolution DACs at the UEs.
#### II-B1 Low-resolution ADCs equipped at the APs
At communication round $t$ of Step 3, the received signal
${\mathbf{y}}_{l}^{t}$ at AP $l$ is given as
${\mathbf{y}}_{l}^{t}=\sum\limits_{i=1}^{K}{\sqrt{\frac{{p_{i}^{t}}}{{{{\left\|{{\mathbf{s}}_{i}^{t}}\right\|}^{2}}}}}{h_{il}}{\mathbf{s}}_{i}^{t}}+{{\mathbf{n}}_{l}},$
(6)
where
${\mathbf{s}}_{i}^{t}=\left|{{{{\mathcal{B}}}_{i}}}\right|\nabla{F_{i}}\left({{{\mathbf{w}}^{t}}}\right),\;\;\;\forall
i\in\left\\{{1,\cdots,K}\right\\},$ (7)
where ${p_{i}^{t}}$ denotes the transmit power of UE $i$,
${{\mathbf{n}}_{l}}$ is the additive noise with independent
${\mathcal{N}_{\mathbb{C}}}\left({0,{\sigma^{2}}{{\mathbf{I}}_{d}}}\right)$,
and ${\sigma^{2}}$ is the noise power per antenna. We adopt the linear
additive quantization noise model [34] to capture the quantization loss and
the noise caused by low-resolution ADCs, which yields
$\mathcal{Q}\left({{\mathbf{y}}_{l}^{t}}\right)=\alpha\sum\limits_{i=1}^{K}{\sqrt{\frac{{p_{i}^{t}}}{{{{\left\|{{\mathbf{s}}_{i}^{t}}\right\|}^{2}}}}}{h_{il}}{\mathbf{s}}_{i}^{t}}+\alpha{{\mathbf{n}}_{l}}+{\mathbf{n}}_{l}^{{\text{uq}}},$
(8)
where $\alpha=1-\frac{{\pi\sqrt{3}}}{2}{2^{-2b}}$ is a linear gain depending on the number of quantization bits adopted in the ADCs, $b$, and
${\mathbf{n}}_{l}^{{\text{uq}}}$ represents the additive Gaussian noise with
covariance matrix111The linear gain $\alpha$ for different quantization bits
can be approximated according to [29]. In general, one can adopt $b=10$ to
mimic the perfect ADCs case. [35]
${{\mathbf{R}}_{{\mathbf{n}}_{l}^{{\text{uq}}}}^{t}}=\alpha\left({1-\alpha}\right)\left({\sum\limits_{i=1}^{K}{{p_{i}^{t}}{\beta_{il}}}+{\sigma^{2}}}\right){{\mathbf{I}}_{d}}.$
(9)
We consider a fully distributed CF mMIMO system, in which the data detection
is performed at the APs [36, 37]. When applying the maximum-ratio combining
(MRC) for low computational complexity, the local processed signal for UE $k$
at AP $l$ at communication round $t$ is given as
${\mathbf{\hat{s}}}_{kl}^{t}=\alpha\sum\limits_{i=1}^{K}{\sqrt{\frac{{p_{i}^{t}}}{{{{\left\|{{\mathbf{s}}_{i}^{t}}\right\|}^{2}}}}}h_{kl}^{*}{h_{il}}{\mathbf{s}}_{i}^{t}}+\alpha
h_{kl}^{*}{{\mathbf{n}}_{l}}+h_{kl}^{*}{\mathbf{n}}_{l}^{{\text{uq}}}.$ (10)
Then, the APs convey the local processed signal to the CPU. The received
signal from all the APs at the CPU is given as
${\mathbf{r}}_{k}^{t}=\sum\limits_{l=1}^{L}{{\mathbf{\hat{s}}}_{kl}^{t}}=\alpha\sum\limits_{l=1}^{L}{\sum\limits_{i=1}^{K}{\sqrt{\frac{{p_{i}^{t}}}{{{{\left\|{{\mathbf{s}}_{i}^{t}}\right\|}^{2}}}}}h_{kl}^{*}{h_{il}}{\mathbf{s}}_{i}^{t}}}+{\bm{\omega}}_{k}^{t},$
(11)
where
${\bm{\omega}}_{k}^{t}=\alpha\sum\limits_{l=1}^{L}{h_{kl}^{*}{{\mathbf{n}}_{l}}}+\sum\limits_{l=1}^{L}{h_{kl}^{*}{\mathbf{n}}_{l}^{{\text{uq}}}}$
is the effective noise distributed according to
${\mathcal{N}_{\mathbb{C}}}\left({0,{{\left({m_{k}^{t}}\right)}^{2}}{{\mathbf{I}}_{d}}}\right)$,
with
${{\left({m_{k}^{t}}\right)}^{2}}={{\alpha^{2}}\sum\limits_{l=1}^{L}{{\beta_{kl}}{\sigma^{2}}}+\alpha\left({1-\alpha}\right)\sum\limits_{l=1}^{L}{{\beta_{kl}}}\left({\sum\limits_{i=1}^{K}{{p_{i}^{t}}{\beta_{il}}}+{\sigma^{2}}}\right)}$.
Now, we rigorously analyze the performance by using the achievable rates.
According to [30, 38], the achievable rate of UE $k$, ${R_{k}}$, is
$\displaystyle{R_{k}}\leqslant{r_{k}},$ (12)
$\displaystyle{r_{k}}=\left({1-\frac{{{\tau_{p}}}}{{{\tau_{c}}}}}\right)B{\log_{2}}\left({1+{\text{SIN}}{{\text{R}}_{k}}}\right),$
(13)
where $B$ is the bandwidth and
${\text{SIN}}{{\text{R}}_{k}}=\frac{{{p_{k}}{A_{k}}}}{{\sum\nolimits_{i\neq
k}^{K}{{p_{i}}}{B_{ki}}+{C_{k}}+{D_{k}}+\sum\nolimits_{i=1}^{K}{{p_{i}}}{E_{ki}}}},$
(14)
where
$\displaystyle{A_{k}}={\alpha^{2}}{\left|{\sum\limits_{l=1}^{L}{{\beta_{kl}}}}\right|^{2}},\;\;\;{B_{ki}}={\alpha^{2}}\sum\limits_{l=1}^{L}{{\beta_{kl}}{\beta_{il}}},$
$\displaystyle{C_{k}}={\alpha^{2}}{\sigma^{2}}\sum\limits_{l=1}^{L}{{\beta_{kl}}},\;\;\;{D_{k}}=d\alpha\left({1-\alpha}\right)\sum\limits_{l=1}^{L}{{\beta_{kl}}}{\sigma^{2}},$
$\displaystyle{E_{ki}}=d\alpha\left({1-\alpha}\right)\sum\limits_{l=1}^{L}{{\beta_{il}}{\beta_{kl}}}.$
(15)
According to [25], the uplink latency of UE $k$ in each iteration involves the
transmission delay of sending the local training update from the UE to the APs and from the APs to the CPU, i.e.,
${t_{k}}=\frac{S}{{{R_{k}}}},\;\;\;{t_{l}}=\frac{{KS}}{{\sum\limits_{k=1}^{K}{{R_{k}}}}},$
(16)
respectively, where $S$ is the data size. Therefore, the total uplink training
time is
$T_{\text{time}}=\mathop{\max}\limits_{k}\left\\{{\frac{ST}{{{R_{k}}}}}\right\\}+\frac{{KST}}{{\sum\limits_{k=1}^{K}{{R_{k}}}}}.$
(17)
#### II-B2 Low-resolution DACs at the UEs
With low-resolution DACs equipped at the UEs, the signal sent by UE $i$,
${{{\bf{s}}_{i}}}$, to the APs is given as
${\cal
Q}\left({{{\bf{s}}_{i}}}\right)={\zeta}{{{\bf{s}}_{i}}}+{\bf{n}}_{i}^{\rm{q}},\forall
i\in\left\\{{1,\cdots,K}\right\\},$ (18)
where $\zeta=1-\frac{{\pi\sqrt{3}}}{2}{2^{-2b}}$ is a linear gain depending on
the number of quantization bits adopted in DACs, $b$, and the elements of
${{{\bf{n}}_{i}}}^{\rm{q}}$ are i.i.d. ${\cal C}{\cal
N}\left({0,{\zeta}\left({1-{\zeta}}\right){p_{i}}}\right)$ random variables
[39, 40]. Besides, according to (17), in synchronous FL [41, 42], in which the CPU needs to wait to receive the training updates from all the UEs, straggler UEs with unfavorable links may greatly slow down the entire FL process and reduce its practicality.
On the other hand, in asynchronous FL, the aggregated model can be computed even if not all local model updates have been received. Therefore, in order to mitigate
the straggler effect, the asynchronous communication mode is considered. Then,
the connection relationship between the APs and the UEs can be expressed as
${d_{il}^{t}}=\left\\{{\begin{array}[]{*{20}{c}}{1,l\in{{\cal M}_{i}^{t}}},\\\
{0,l\notin{{\cal M}_{i}^{t}}},\end{array}}\right.\;\;\;t=1,\cdots,T,$ (19)
where
$\displaystyle{{\cal
M}_{i}^{t}}=\left\\{{l:{d_{il}^{t}}=1,l\in\left\\{{1,\cdots
L}\right\\}}\right\\}.$ (20)
Note that ${d_{il}^{t}=1}$ if the $l$th AP is allowed to serve UE $i$ at
communication round $t$ and 0 otherwise. Therefore, ${{\cal M}_{i}^{t}}$
denotes the subset of APs that serve UE $i$ at communication round $t$.
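The connection indicator (19)-(20) can be sketched as follows; the threshold rule used to populate the serving sets here is a hypothetical placeholder, whereas the paper's protocol is governed by the lag tolerance and lag percent:

```python
import numpy as np

# Sketch of the AP-UE connection indicator d_{il}^t and serving sets M_i^t;
# the threshold rule below is a hypothetical placeholder for illustration.
def serving_sets(beta, thresh):
    """beta: K x L large-scale fading; serve UE i by AP l if beta[i, l] >= thresh."""
    d = (beta >= thresh).astype(int)                     # d_{il} in {0, 1}
    M = [set(map(int, np.flatnonzero(d[i]))) for i in range(beta.shape[0])]
    return d, M

beta = np.array([[0.9, 0.2, 0.7],
                 [0.1, 0.8, 0.3]])
d, M = serving_sets(beta, thresh=0.5)
print(M)  # [{0, 2}, {1}]
```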
Therefore, at communication round $t$, the transmitted signal from UE $k$
processed by the low-precision DAC is given as
$\displaystyle{\bf{\mathord{\buildrel{\lower
3.0pt\hbox{$\scriptscriptstyle\smile$}}\over{s}}}}_{k}^{t}\\!\\!=\\!\\!{\cal
Q}\left({{\bf{s}}_{k}^{t}}\right)\\!\\!=\\!\\!{\zeta}{\bf{s}}_{k}^{t}+{\left({{\bf{n}}_{k}^{\rm{q}}}\right)^{t}}.$
(21)
The received signal ${\mathbf{y}}_{l}^{t}$ at AP $l$ is
$\displaystyle{\bf{y}}_{l}^{t}$
$\displaystyle=\sum\limits_{i=1}^{K}{d_{il}^{t}}{h_{il}^{t}{\bf{\mathord{\buildrel{\lower
3.0pt\hbox{$\scriptscriptstyle\smile$}}\over{s}}}}_{i}^{t}}+{\bf{n}}_{l}^{t}=\sum\limits_{i=1}^{K}{{\zeta}{d_{il}^{t}}h_{il}^{t}{\bf{s}}_{i}^{t}}+\sum\limits_{i=1}^{K}{d_{il}^{t}}{h_{il}^{t}{{\left({{\bf{n}}_{i}^{\rm{q}}}\right)}^{t}}}+{\bf{n}}_{l}^{t}=\sum\limits_{i=1}^{K}{{\zeta}{d_{il}^{t}}h_{il}^{t}{\bf{s}}_{i}^{t}}+{\bm{\omega}}_{l}^{t},$
(22)
where
${\bm{\omega}}_{l}^{t}=\sum\limits_{i=1}^{K}{{d_{il}^{t}}h_{il}^{t}{{\left({{\bf{n}}_{i}^{\rm{q}}}\right)}^{t}}}+{\bf{n}}_{l}^{t}$
is the effective noise distributed according to
${\mathcal{N}_{\mathbb{C}}}\left({0,\sigma_{{\rm{eff}}}^{2}{{\bf{I}}_{d}}}\right)$
with
$\sigma_{{\rm{eff}}}^{2}=\sum\limits_{i=1}^{K}{{d_{il}^{t}}\beta_{il}^{t}{\zeta_{i}}\left({1-{\zeta_{i}}}\right)}+{\sigma^{2}}$.
When the MRC scheme is employed, we set ${v_{kl}}=d_{kl}^{t}{h_{kl}}$, and the
locally processed signal for UE $k$ at AP $l$ is given as
$\displaystyle{\bf{\hat{s}}}_{kl}^{t}=v_{kl}^{*}{{\bf{y}}_{l}^{t}}=v_{kl}^{*}\sum\limits_{i=1}^{K}{{h_{il}}{\bf{\mathord{\buildrel{\lower
3.0pt\hbox{$\scriptscriptstyle\smile$}}\over{s}}}}_{i}^{t}}+v_{kl}^{*}{{\bf{n}}_{l}}$
$\displaystyle=d_{kl}^{t}h_{kl}^{*}{h_{kl}}{\zeta_{k}}{\bf{s}}_{k}^{t}+d_{kl}^{t}h_{kl}^{*}{h_{kl}}{\left({{\bf{n}}_{k}^{\rm{q}}}\right)^{t}}+d_{kl}^{t}h_{kl}^{*}\sum\limits_{i\neq
k}^{K}{{h_{il}}{\bf{\mathord{\buildrel{\lower
3.0pt\hbox{$\scriptscriptstyle\smile$}}\over{s}}}}_{i}^{t}}+d_{kl}^{t}h_{kl}^{*}{{\bf{n}}_{l}}.$
(23)
Then, the effective SINR for UE $k$ at communication round $t$ can be obtained
in the following closed form
${\rm{SINR}}_{k}^{t}=\frac{{p_{k}^{t}A_{k}^{t}}}{{\sum\limits_{i=1}^{K}{p_{i}^{t}C_{ki}^{t}}+\sum\limits_{i=1}^{K}{p_{i}^{t}E_{ki}^{t}}+F_{k}^{t}}},$
(24)
where
$\displaystyle
A_{k}^{t}={\left({\zeta^{t}}\right)^{2}}{\left|{\sum\limits_{l=1}^{L}{d_{kl}^{t}{\beta_{kl}}}}\right|^{2}},\;\;\;C_{ki}^{t}={\left({\zeta^{t}}\right)^{2}}\sum\limits_{l=1}^{L}{d_{kl}^{t}{\beta_{kl}}}{\beta_{il}},$
$\displaystyle
E_{ki}^{t}=d\,\zeta^{t}\left({1-\zeta^{t}}\right)\sum\limits_{l=1}^{L}{d_{il}^{t}{\beta_{kl}}{\beta_{il}}},\;\;\;F_{k}^{t}=d\sum\limits_{l=1}^{L}{d_{kl}^{t}}{\beta_{kl}}{\sigma^{2}}.$
(25)
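The closed-form SINR (24) with the terms (25) is straightforward to evaluate numerically. The sketch below transcribes the formulas directly; the fading coefficients, powers, and constants are randomly drawn or assumed, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, d = 10, 3, 100                  # APs, UEs, model dimension (assumed)
zeta, sigma2 = 0.96, 0.1              # AQNM gain and noise power (assumed)
beta = rng.uniform(0.1, 1.0, (K, L))  # large-scale fading beta_{kl}
d_sel = np.ones((K, L))               # d_{kl}^t: every AP serves every UE
p = np.full(K, 0.2)                   # transmit powers p_k^t

def sinr(k):
    """SINR of UE k per (24), with the A, C, E, F terms of (25)."""
    A = zeta**2 * np.sum(d_sel[k] * beta[k])**2
    C = zeta**2 * np.array([np.sum(d_sel[k] * beta[k] * beta[i]) for i in range(K)])
    E = d * zeta * (1 - zeta) * np.array(
        [np.sum(d_sel[i] * beta[k] * beta[i]) for i in range(K)])
    F = d * sigma2 * np.sum(d_sel[k] * beta[k])
    return p[k] * A / (p @ C + p @ E + F)

rates = [np.log2(1 + sinr(k)) for k in range(K)]  # per-UE spectral efficiency
```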
Therefore, the total uplink training time is
$T_{\text{time}}=\sum\limits_{t=1}^{T}{\frac{S}{{R_{k}^{t}}}}+\sum\limits_{t=1}^{T}{\frac{{\left|{{{\cal
K}^{t}}}\right|S}}{{\sum\limits_{k\in{\cal K}}{R_{k}^{t}}}}},$ (26)
where
$\mathcal{K}^{t}={\mathcal{D}_{1}^{t}}\cup{\mathcal{D}_{2}^{t}}\cdots{\mathcal{D}_{L}^{t}}$
and ${\mathcal{D}_{l}^{t}}=\left\\{{i:{d_{il}^{t}}=1,i\in\left\\{{1,\cdots
K}\right\\}}\right\\}$. Compared with (16), it can be observed that choosing
an appropriate serving UEs cluster ${{\cal D}_{i}^{t}}$ in each iteration can
effectively reduce the total uplink training time.
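The total time (26) can then be tallied once the per-round rates are known. The helper below is a sketch under the assumption that the first term of (26), whose UE index is left implicit, is limited by the slowest scheduled UE in each round; all inputs are illustrative:

```python
def uplink_time(R, S, served):
    """Total uplink training time, in the spirit of (26).

    R[t][k]  : achievable rate of UE k in round t (illustrative input),
    S        : update size in bits,
    served[t]: the scheduled set K^t of UEs in round t."""
    t_tx = sum(S / min(R[t][k] for k in served[t]) for t in range(len(R)))
    t_agg = sum(len(served[t]) * S / sum(R[t][k] for k in served[t])
                for t in range(len(R)))
    return t_tx + t_agg

# One round, two UEs with rates 2 and 4 (arbitrary units), S = 8 bits:
# t_tx = 8/2 = 4, t_agg = 2*8/(2+4) = 8/3, total = 20/3.
total = uplink_time([[2.0, 4.0]], 8.0, [{0, 1}])
```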
## III Differential Privacy Analysis
DP is a privacy mechanism that defends against differential attacks and ensures
that the sensitive information of UEs is not exposed [10]. The standard
definition of DP imposes a point-wise upper bound on the divergence between
the distributions $P\left({\left.{\bf{y}}\right|{\cal B}}\right)$ and
$P\left({\left.{\bf{y}}\right|{\cal B}^{\prime}}\right)$, where $\bf{y}$ is
the received signal and ${\cal B}$ and ${\cal B}^{\prime}$ are two
“neighboring” global data sets which only differ by one sample at one UE.
In this section, we derive the upper-bound of the privacy preservation
condition and provide the convergence analysis in the CF mMIMO-supported FL
with noise injection by using low-resolution ADCs or DACs.
### III-A Low-resolution ADCs Equipped at the APs
#### III-A1 Privacy Preservation Condition
###### Definition 1.
For two adjacent datasets ${{\mathcal{B}^{{}^{\prime}}_{j}}}$ and
${{\mathcal{B}^{{}^{\prime\prime}}_{j}}}$ with
${\left|{{{\mathcal{B}^{{}^{\prime}}_{j}}}-{{\mathcal{B}^{{}^{\prime\prime}}_{j}}}}\right|}=1$
for some UEs $j$ and
${\left|{{{\mathcal{B}^{{}^{\prime}}_{i}}}-{{\mathcal{B}^{{}^{\prime\prime}}_{i}}}}\right|}=0$
for all $i\neq j$, the communication and learning protocol is
$({\mathbb{\epsilon}},\delta)$-differentially private, where
${\mathbb{\epsilon}}>0$, and $\delta\in\left[{0,1}\right)$, when we have the
following inequality for UE $k$ [18]:
$\Pr\left({\left.{{\mathbf{r}}_{k}^{t}}\right|{{\mathcal{B}^{{}^{\prime}}_{k}}}}\right)\leqslant\exp\left({\mathbb{\epsilon}}\right)\Pr\left({\left.{{\mathbf{r}}_{k}^{t}}\right|{{\mathcal{B}^{{}^{\prime\prime}}_{k}}}}\right)+\delta,$
(27)
where $\Pr$ refers to the probability of a certain event. After $T$
iterations, the $({\mathbb{\epsilon}},\delta)$ DP condition in (27) can be
written as
$\Pr\left({\left|{\ln\left({\prod\limits_{t=1}^{T}{\frac{{P\left({\left.{{\mathbf{r}}_{k}^{t}}\right|{\mathbf{r}}_{k}^{t-1}\cdots{\mathbf{r}}_{k}^{1},{\mathcal{B}_{k}^{{}^{\prime}}}}\right)}}{{P\left({\left.{{\mathbf{r}}_{k}^{t}}\right|{\mathbf{r}}_{k}^{t-1}\cdots{\mathbf{r}}_{k}^{1},{\mathcal{B}_{k}^{{}^{\prime\prime}}}}\right)}}}}\right)}\right|\\!\leqslant\\!{\mathbb{\epsilon}}}\right)\\!\geqslant\\!1-\delta.$
(28)
The $({\mathbb{\epsilon}},\delta)$-DP condition ensures that for all possible
adjacent datasets, the absolute value of the left side of (28) can be bounded
by ${\mathbb{\epsilon}}$ with probability at least $1-\delta$. Note that the
values ${\mathbb{\epsilon}}$ and $\delta$ quantify the similarity of the output
distributions of the random mechanism applied to the data sets
${\mathcal{B}_{k}^{{}^{\prime}}}$ and ${\mathcal{B}_{k}^{{}^{\prime\prime}}}$,
and are interpreted as a privacy level [43]: lower ${\mathbb{\epsilon}}$
and $\delta$ indicate a higher level of privacy.
The sensitivity $\Delta_{k}^{t}$ of the noiseless received signal
${\mathbf{r}}_{k}^{t}-{\bm{\omega}}_{k}^{t}$ is defined as
$\displaystyle\Delta_{k}^{t}$
$\displaystyle=\mathop{\max}\limits_{\mathcal{B}_{k}^{{}^{\prime}},\mathcal{B}_{k}^{{}^{\prime\prime}}}\left\|{\alpha\sum\limits_{l=1}^{L}{\sum\limits_{i=1}^{K}{\sqrt{\frac{{p_{i}^{t}}}{{{{\left\|{{\mathbf{s}}_{i}^{t}\left({\mathcal{B}_{k}^{{}^{\prime}}}\right)}\right\|}^{2}}}}}h_{kl}^{*}{h_{il}}{\mathbf{s}}_{i}^{t}\left({\mathcal{B}_{k}^{{}^{\prime}}}\right)}}}\right.$
$\displaystyle\left.{-\alpha\sum\limits_{l=1}^{L}{\sum\limits_{i=1}^{K}{\sqrt{\frac{{p_{i}^{t}}}{{{{\left\|{{\mathbf{s}}_{i}^{t}\left({\mathcal{B}_{k}^{{}^{\prime\prime}}}\right)}\right\|}^{2}}}}}h_{kl}^{*}{h_{il}}{\mathbf{s}}_{i}^{t}\left({\mathcal{B}_{k}^{{}^{\prime\prime}}}\right)}}}\right\|.$
(29)
Equation (III-A1) can be bounded as
$\Delta_{k}^{t}\leqslant\mathop{\max}\limits_{i}2\alpha\sqrt{p_{i}^{t}}\left|{h_{kl}^{*}{h_{il}}}\right|$
(30)
with the help of [19] and the triangle inequality. Then, according to (11),
we can obtain
${\ln\left({\prod\limits_{t=1}^{T}{\frac{{P\left({\left.{{\mathbf{r}}_{k}^{t}}\right|{\mathbf{r}}_{k}^{t-1}\cdots{\mathbf{r}}_{k}^{1},{\mathcal{B}_{k}^{{}^{\prime}}}}\right)}}{{P\left({\left.{{\mathbf{r}}_{k}^{t}}\right|{\mathbf{r}}_{k}^{t-1}\cdots{\mathbf{r}}_{k}^{1},\mathcal{B}_{k}^{{}^{\prime\prime}}}\right)}}}}\right)}={\sum\limits_{t=1}^{T}{\ln}\left({\frac{{\exp\left({-\frac{{{{\left\|{{\bm{\omega}}_{k}^{t}}\right\|}^{2}}}}{{2{{\left({m_{k}^{t}}\right)}^{2}}}}}\right)}}{{\exp\left({-\frac{{{{\left\|{{\bm{\omega}}_{k}^{t}+{\mathbf{v}}_{k}^{t}}\right\|}^{2}}}}{{2{{\left({m_{k}^{t}}\right)}^{2}}}}}\right)}}}\right)},$
(31)
where
$\displaystyle{\mathbf{v}}_{k}^{t}$
$\displaystyle=\alpha\sum\limits_{l=1}^{L}\sum\limits_{i=1}^{K}\left({\sqrt{\frac{{p_{i}^{t}}}{{{{\left\|{{\mathbf{s}}_{i}^{t}\left({\mathcal{B}_{k}^{{}^{\prime\prime}}}\right)}\right\|}^{2}}}}}h_{kl}^{*}{h_{il}}{\mathbf{s}}_{i}^{t}\left({\mathcal{B}_{k}^{{}^{\prime\prime}}}\right)}\right.$
$\displaystyle\left.{-\sqrt{\frac{{p_{i}^{t}}}{{{{\left\|{{\mathbf{s}}_{i}^{t}\left({\mathcal{B}_{k}^{{}^{\prime}}}\right)}\right\|}^{2}}}}}h_{kl}^{*}{h_{il}}{\mathbf{s}}_{i}^{t}\left({\mathcal{B}_{k}^{{}^{\prime}}}\right)}\right),$
(32)
with $\left\|{{\mathbf{v}}_{k}^{t}}\right\|\leqslant\Delta_{k}^{t}$.
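To make the privacy-loss statistic in (31)-(33) concrete, the Monte Carlo sketch below (round count, noise level $m_k^t$, sensitivity, and $\epsilon$ are assumed values) estimates the probability that the accumulated loss $\sum_{t}\big(2({\bm{\omega}}_k^t)^T{\mathbf{v}}_k^t+\|{\mathbf{v}}_k^t\|^2\big)/\big(2(m_k^t)^2\big)$ exceeds $\epsilon$, i.e. an empirical $\delta$:

```python
import numpy as np

rng = np.random.default_rng(2)
T, m, Delta, eps = 5, 4.0, 1.0, 1.5   # rounds, noise std, sensitivity, eps (assumed)
trials = 20_000

# Worst case ||v_k^t|| = Delta; only the projection of omega onto v matters,
# and for Gaussian noise that projection is N(0, m^2).
loss = np.zeros(trials)
for _ in range(T):
    w = rng.normal(0.0, m, trials)
    loss += (2.0 * w * Delta + Delta**2) / (2.0 * m**2)

delta_hat = np.mean(np.abs(loss) > eps)   # empirical privacy-failure probability
```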
Following similar steps as in [43, Appendix A], we can then obtain the upper-
bound on the privacy preservation condition
$\displaystyle\Pr\left({\left|{\sum\limits_{t=1}^{T}{\frac{{2{{\left({{\bm{\omega}}_{k}^{t}}\right)}^{T}}{\mathbf{v}}_{k}^{t}+{{\left\|{{\mathbf{v}}_{k}^{t}}\right\|}^{2}}}}{{2{{\left({m_{k}^{t}}\right)}^{2}}}}}}\right|>{\mathbb{\epsilon}}}\right)$
$\displaystyle\mathop{\leqslant}\limits^{\left(i\right)}\Pr\left({\left|{\sum\limits_{t=1}^{T}{\frac{{{{\left({{\bm{\omega}}_{k}^{t}}\right)}^{T}}{\mathbf{v}}_{k}^{t}}}{{{{\left({m_{k}^{t}}\right)}^{2}}}}}}\right|>{\mathbb{\epsilon}}-\sum\limits_{t=1}^{T}{\frac{{{{\left\|{{\mathbf{v}}_{k}^{t}}\right\|}^{2}}}}{{2{{\left({m_{k}^{t}}\right)}^{2}}}}}}\right)$
$\displaystyle=2\Pr\left({\sum\limits_{t=1}^{T}{\frac{{{{\left({{\bm{\omega}}_{k}^{t}}\right)}^{T}}{\mathbf{v}}_{k}^{t}}}{{{{\left({m_{k}^{t}}\right)}^{2}}}}}>{\mathbb{\epsilon}}-\sum\limits_{t=1}^{T}{\frac{{{{\left\|{{\mathbf{v}}_{k}^{t}}\right\|}^{2}}}}{{2{{\left({m_{k}^{t}}\right)}^{2}}}}}}\right)$
$\displaystyle\mathop{\leqslant}\limits^{\left({ii}\right)}2\frac{1}{{\sqrt{2\pi\Lambda}}}\int_{{\mathbb{\epsilon}}-\Lambda}^{\infty}{\frac{x}{{{\mathbb{\epsilon}}-\Lambda}}\exp}\left({-\frac{{{x^{2}}}}{{2\Lambda}}}\right)dx,$
(33)
where $(i)$ follows the inequality
$\Pr\left({\left|{X+a}\right|>{\mathbb{\epsilon}}}\right)\leqslant\Pr\left({\left|X\right|+a>{\mathbb{\epsilon}}}\right)$
for an arbitrary $a\geqslant 0$, and $(ii)$ is due to
$\displaystyle\Pr\left({\sum\limits_{t=1}^{T}{\frac{{{{\left({{\bm{\omega}}_{k}^{t}}\right)}^{T}}{\mathbf{v}}_{k}^{t}}}{{{{\left({m_{k}^{t}}\right)}^{2}}}}}>{\mathbb{\epsilon}}-{\Lambda}}\right)$
$\displaystyle=\frac{1}{{\sqrt{2\pi\Lambda}}}\int_{{\mathbb{\epsilon}}-\Lambda}^{\infty}{\exp}\left({-\frac{{{x^{2}}}}{{2\Lambda}}}\right)dx$
$\displaystyle\leqslant\frac{1}{{\sqrt{2\pi\Lambda}}}\int_{{\mathbb{\epsilon}}-\Lambda}^{\infty}{\frac{x}{{{\mathbb{\epsilon}}-\Lambda}}\exp}\left({-\frac{{{x^{2}}}}{{2\Lambda}}}\right)dx$
$\displaystyle=\frac{{\sqrt{\Lambda}}}{{\sqrt{2\pi}\left({{\mathbb{\epsilon}}-\Lambda}\right)}}\exp\left({-\frac{{{{\left({{\mathbb{\epsilon}}-\Lambda}\right)}^{2}}}}{{2\Lambda}}}\right),$
(34)
where
$\Lambda\triangleq\sum\limits_{t=1}^{T}{{{\left({\frac{{\Delta_{k}^{t}}}{{m_{k}^{t}}}}\right)}^{2}}}$.
Finally, the closed-form $({\mathbb{\epsilon}},\delta)$-DP condition is given
by
$\frac{{\sqrt{2\Lambda}}}{{\sqrt{\pi}\left({{\mathbb{\epsilon}}-\Lambda}\right)}}\exp\left({-\frac{{{{\left({{\mathbb{\epsilon}}-\Lambda}\right)}^{2}}}}{{2\Lambda}}}\right)<\delta.$
(35)
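The closed-form condition (35) can be checked programmatically. The helper below evaluates its left-hand side for given $\epsilon$ and $\Lambda$ (valid when $\epsilon>\Lambda$); the example values are arbitrary:

```python
import math

def dp_lhs(eps, lam):
    """Left-hand side of the closed-form (eps, delta)-DP condition (35)."""
    if eps <= lam:
        raise ValueError("condition (35) requires eps > Lambda")
    return (math.sqrt(2.0 * lam) / (math.sqrt(math.pi) * (eps - lam))
            * math.exp(-(eps - lam) ** 2 / (2.0 * lam)))

# Example: with Lambda = 0.5 and eps = 3, any delta above this value satisfies (35).
lhs = dp_lhs(3.0, 0.5)
```

Any $\delta$ larger than the returned value satisfies (35) for these parameters; as expected, relaxing $\epsilon$ lowers the required $\delta$.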
#### III-A2 Convergence Analysis
At the $t$-th iteration, the CPU estimates the scaled local gradient as
${\mathbf{r}}_{k}^{t}$, and then the global gradient is estimated as
$\displaystyle\widehat{\nabla
F}\left({{{\mathbf{w}}^{t-1}}}\right)=\frac{1}{{{B_{{\text{tot}}}}}}\sum\limits_{k=1}^{K}{{\mathbf{r}}_{k}^{t-1}}=\alpha\nabla
F\left({{{\mathbf{w}}^{t-1}}}\right)+\frac{\alpha}{{{B_{{\text{tot}}}}}}I^{t-1}+\frac{1}{{{B_{{\text{tot}}}}}}\sum\limits_{k=1}^{K}{{\bm{\omega}}_{k}^{t-1}},$
(36)
where
$I^{t-1}\triangleq\sum\limits_{k=1}^{K}{\sqrt{\frac{{p_{k}^{t-1}}}{{{{\left\|{{\mathbf{s}}_{k}^{t-1}}\right\|}^{2}}}}}{\mathbf{s}}_{k}^{t-1}}\sum\limits_{l=1}^{L}{\sum\limits_{i\neq
k}^{K}{{h_{kl}}h_{il}^{*}}}$. With the help of [18, Assumption 1], we have the
following equality
$\displaystyle F\left({{{\mathbf{w}}^{t}}}\right)$
$\displaystyle\leqslant\\!F\left({{{\mathbf{w}}^{t\\!-\\!1}}}\right)\\!\\!+\\!\\!{\left[{\nabla
F\left({{{\mathbf{w}}^{t\\!-\\!1}}}\right)}\right]^{T}}\left[{{{\mathbf{w}}^{t}}\\!\\!-\\!\\!{{\mathbf{w}}^{t\\!-\\!1}}}\right]\\!\\!+\\!\\!\frac{M}{2}{\left\|{{{\mathbf{w}}^{t}}\\!-\\!{{\mathbf{w}}^{t\\!-\\!1}}}\right\|^{2}}$
$\displaystyle=F\left({{{\mathbf{w}}^{t-1}}}\right)\\!-\\!{\left[{\nabla
F\left({{{\mathbf{w}}^{t-1}}}\right)}\right]^{T}}\left[{\alpha\eta\nabla
F\left({{{\mathbf{w}}^{t-1}}}\right)+\frac{{\alpha\eta}}{{{B_{{\text{tot}}}}}}{I^{t-1}}+\frac{\eta}{{{B_{{\text{tot}}}}}}\sum\limits_{k=1}^{K}{{\bm{\omega}}_{k}^{t-1}}}\right]$
$\displaystyle+\frac{M}{2}\left\|{\alpha\eta\nabla
F\left({{{\mathbf{w}}^{t-1}}}\right)+\frac{{\alpha\eta}}{{{B_{{\text{tot}}}}}}{I^{t-1}}}\right.{\left.{+\frac{\eta}{{{B_{{\text{tot}}}}}}\sum\limits_{k=1}^{K}{{\bm{\omega}}_{k}^{t-1}}}\right\|^{2}},$
(37)
where $M$ is a positive constant. Setting $\eta=\frac{1}{M}$ and taking
expectation over the randomness of additive noise, we have
$\displaystyle\mathbb{E}\left\\{{F\left({{{\mathbf{w}}^{t}}}\right)}\right\\}$
$\displaystyle\leqslant
F\left({{{\mathbf{w}}^{t-1}}}\right)-\frac{\alpha}{M}{\left\|{\nabla
F\left({{{\mathbf{w}}^{t-1}}}\right)}\right\|^{2}}\\!\\!-\\!\\!{\left[{\nabla
F\left({{{\mathbf{w}}^{t-1}}}\right)}\right]^{T}}{\frac{\alpha}{{M{B_{{\text{tot}}}}}}{I^{t-1}}}$
$\displaystyle+\frac{{{\alpha^{2}}}}{{2M}}{\left\|{\nabla
F\left({{{\mathbf{w}}^{t-1}}}\right)}\right\|^{2}}+\left({{{\left[{\nabla
F\left({{{\mathbf{w}}^{t-1}}}\right)}\right]}^{T}}\frac{{{\alpha^{2}}}}{{M{B_{{\text{tot}}}}}}{I^{t-1}}}\right)$
$\displaystyle+\frac{1}{{2M}}{\left\|{\frac{\alpha}{{{B_{{\text{tot}}}}}}{I^{t-1}}}\right\|^{2}}+\frac{1}{{2M}}\frac{d}{{B_{{\text{tot}}}^{2}}}\sum\limits_{k=1}^{K}{{{\left({m_{k}^{t}}\right)}^{2}}}.$
(38)
According to [18, Assumption 2], we obtain
$\displaystyle\mathbb{E}\left\\{{F\left({{{\mathbf{w}}^{t}}}\right)}\right\\}-{F^{*}}$
$\displaystyle\leqslant\frac{{M-\alpha\left({2-\alpha}\right)\mu}}{M}\left({\mathbb{E}\left\\{{F\left({{{\mathbf{w}}^{t-1}}}\right)}\right\\}-{F^{*}}}\right)$
$\displaystyle+\frac{{{\alpha^{2}}}}{{2MB_{{\text{tot}}}^{2}}}{\left\|{\frac{\alpha}{{{B_{{\text{tot}}}}}}{I^{t-1}}}\right\|^{2}}+\frac{1}{{2M}}\frac{d}{{B_{{\text{tot}}}^{2}}}\sum\limits_{k=1}^{K}{{{\left({m_{k}^{t-1}}\right)}^{2}}},$
(39)
where $\mu$ is a positive constant. Therefore, by applying the above
inequality repeatedly through $T$ iterations, the results follow immediately.
Finally, the average optimality gap after $T$ iteration is upper bounded by
$\displaystyle\mathbb{E}\left\\{{F\left({{{\mathbf{w}}^{t}}}\right)}\right\\}-{F^{*}}$
$\displaystyle\leqslant{\left({1-\frac{{\alpha\left({2-\alpha}\right)\mu}}{M}}\right)^{T}}\left({\mathbb{E}\left\\{{F\left({{{\mathbf{w}}^{1}}}\right)}\right\\}-{F^{*}}}\right)$
$\displaystyle+\frac{{{\alpha^{2}}}}{{2MB_{{\text{tot}}}^{2}}}\sum\limits_{t=1}^{T}{{{\left({1-\frac{{\alpha\left({2-\alpha}\right)\mu}}{M}}\right)}^{T-t}}}{\left\|{\frac{\alpha}{{{B_{{\text{tot}}}}}}{I^{t}}}\right\|^{2}}$
$\displaystyle+\frac{1}{{2M}}\frac{d}{{B_{{\text{tot}}}^{2}}}\sum\limits_{t=1}^{T}{{{\left({1-\frac{{\alpha\left({2-\alpha}\right)\mu}}{M}}\right)}^{T-t}}}\sum\limits_{k=1}^{K}{{{\left({m_{k}^{t}}\right)}^{2}}}.$
(40)
The first term in (III-A2) shows that the impact of the initial optimality gap
$\left({\mathbb{E}\left\\{{F\left({{{\mathbf{w}}^{1}}}\right)}\right\\}-{F^{*}}}\right)$
decays geometrically as $T$ increases, the second term captures the
inter-user interference, and the third the effect of the effective additive
noise power. Interestingly, the bound in (III-A2) indicates that quantization
noise added in the initial iterations enlarges the final optimality gap less
than noise added in later iterations. This is because the contribution of the
noise added at iteration $t$ is scaled by the factor
${{{\left({1-\frac{{\alpha\left({2-\alpha}\right)\mu}}{M}}\right)}^{T-t}}}$.
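The weighting factor can be illustrated with a two-line sketch (the constants are assumed values): noise injected in round 1 is scaled by $\rho^{T-1}\ll 1$, whereas round-$T$ noise enters with weight 1.

```python
M, mu, alpha, T = 10.0, 1.0, 0.5, 20           # illustrative constants
rho = 1.0 - alpha * (2.0 - alpha) * mu / M     # contraction factor, < 1

# weights[0] multiplies round-1 noise; weights[-1] multiplies round-T noise.
weights = [rho ** (T - t) for t in range(1, T + 1)]
print(weights[0] < weights[-1], weights[-1] == 1.0)  # True True
```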
### III-B Low-resolution DACs at the UEs
In this case, the privacy preservation condition becomes
$\displaystyle{{\cal L}_{{\cal B},{\cal
B}^{\prime}}}\left({{\bf{y}}_{l}^{t}}\right)=\ln\left({\prod\limits_{t=1}^{T}{\frac{{P\left({\left.{{\bf{y}}_{l}^{t}}\right|{\bf{y}}_{l}^{t-1},\cdots{\bf{y}}_{l}^{1},{\cal
B}}\right)}}{{P\left({\left.{{\bf{y}}_{l}^{t}}\right|{\bf{y}}_{l}^{t-1},\cdots{\bf{y}}_{l}^{1},{\cal
B}^{\prime}}\right)}}}}\right)=\sum\limits_{t=1}^{T}{\ln}\left({\frac{{P\left({\left.{{\bf{y}}_{l}^{t}}\right|{\bf{y}}_{l}^{t-1},\cdots{\bf{y}}_{l}^{1},{\cal
B}}\right)}}{{P\left({\left.{{\bf{y}}_{l}^{t}}\right|{\bf{y}}_{l}^{t-1},\cdots{\bf{y}}_{l}^{1},{\cal
B}^{\prime}}\right)}}}\right),$ (41)
where
$\displaystyle
P\left({\left.{{\bf{y}}_{l}^{t}}\right|{\bf{y}}_{l}^{t-1},\cdots{\bf{y}}_{l}^{1},{\cal
B}}\right)=\frac{1}{{\sigma_{{\rm{eff}}}^{2}\sqrt{2\pi}}}{\rm{exp}}\left({-\frac{{{{\left\|{{\bf{y}}_{l}^{t}-\sum\limits_{i=1}^{K}{{\alpha^{t}}d_{il}^{t}h_{il}^{t}{\bf{s}}_{i}^{t}\left({{{\cal
B}_{i}}}\right)}}\right\|}^{2}}}}{{2{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}\right).$
(42)
Then, (41) can be rewritten as
$\displaystyle{{\cal L}_{{\cal B},{\cal
B}^{\prime}}}\left({{\bf{y}}_{l}^{t}}\right)=\sum\limits_{t=1}^{T}{\ln}\left({{{{\rm{exp}}\left({-\frac{{{{\left\|{{\bf{y}}_{l}^{t}-\sum\limits_{i=1}^{K}{{\alpha^{t}}h_{il}^{t}{\bf{s}}_{i}^{t}\left({{{\cal
B}_{i}}}\right)}}\right\|}^{2}}}}{{2{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}\right)}}/{{{\rm{exp}}\left({-\frac{{{{\left\|{{\bf{y}}_{l}^{t}-\sum\limits_{i=1}^{K}{{\alpha^{t}}h_{il}^{t}{\bf{s}}_{i}^{t}\left({{{{\cal
B}^{\prime}}_{i}}}\right)}}\right\|}^{2}}}}{{2{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}\right)}}}\right)$
$\displaystyle=\sum\limits_{t=1}^{T}{\ln}\left({{{{\rm{exp}}\left({-\frac{{{{\left\|{{\bm{\omega}}_{l}^{t}}\right\|}^{2}}}}{{2{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}\right)}}/{{{\rm{exp}}\left({-\frac{{{{\left\|{{\bm{\omega}}_{l}^{t}+{\bf{v}}_{l}^{t}}\right\|}^{2}}}}{{2{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}\right)}}}\right)=\sum\limits_{t=1}^{T}{\frac{{{{\left\|{{\bf{v}}_{l}^{t}}\right\|}^{2}}+2{{\left({{\bm{\omega}}_{l}^{t}}\right)}^{T}}{\bf{v}}_{l}^{t}}}{{2{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}=\Gamma_{l}^{t}.$
(43)
Since the difference vector ${\bf{v}}_{l}^{t}$ and the sensitivity
$\Delta_{l}^{t}$ are given as
$\displaystyle{\bf{v}}_{l}^{t}=\sum\limits_{i=1}^{K}{{\alpha^{t}}\left|{h_{il}^{t}}\right|}\left({{\bf{s}}_{i}^{t}\left({{{{\cal
B}^{\prime}}_{i}}}\right)-{\bf{s}}_{i}^{t}\left({{{\cal
B}_{i}}}\right)}\right),\;\;\left\|{{\bf{v}}_{l}^{t}}\right\|\leq
2\mathop{\max}\limits_{i}\sqrt{p_{i}^{t}}\left|{h_{il}^{t}}\right|=\Delta_{l}^{t},$
(44)
the upper-bound on the privacy preservation condition can be derived as
$\displaystyle\Pr\left({\left|{\Gamma_{l}^{t}}\right|>{\mathbb{\epsilon}}}\right)$
$\displaystyle\leq\Pr\left({\left|{\sum\limits_{t=1}^{T}{\frac{{{{\left({{\bm{\omega}}_{l}^{t}}\right)}^{T}}{\bf{v}}_{l}^{t}}}{{{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}}\right|>{\mathbb{\epsilon}}-\sum\limits_{t=1}^{T}{\frac{{{{\left\|{{\bf{v}}_{l}^{t}}\right\|}^{2}}}}{{2{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}}\right)$
$\displaystyle=2\Pr\left({\sum\limits_{t=1}^{T}{\frac{{{{\left({{\bm{\omega}}_{l}^{t}}\right)}^{T}}{\bf{v}}_{l}^{t}}}{{{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}>{\mathbb{\epsilon}}-\sum\limits_{t=1}^{T}{\frac{{{{\left\|{{\bf{v}}_{l}^{t}}\right\|}^{2}}}}{{2{{\left({\sigma_{{\rm{eff}}}^{2}}\right)}^{2}}}}}}\right)$
$\displaystyle\leq
2\Pr\left({\Lambda>{\mathbb{\epsilon}}-\sum\limits_{t=1}^{T}{\frac{1}{2}{{\left({\frac{{\Delta_{l}^{t}}}{{\sigma_{{\rm{eff}}}^{2}}}}\right)}^{2}}}}\right)$
$\displaystyle=\frac{2}{{\sqrt{2\pi\sum\limits_{t=1}^{T}{{{\left({\frac{{\Delta_{l}^{t}}}{{\sigma_{{\rm{eff}}}^{2}}}}\right)}^{2}}}}}}\int_{b}^{\infty}{\exp\left({-\frac{{{x^{2}}}}{{2\sum\limits_{t=1}^{T}{{{\left({\frac{{\Delta_{l}^{t}}}{{\sigma_{{\rm{eff}}}^{2}}}}\right)}^{2}}}}}}\right)}dx,$
(45)
where $b={\mathbb{\epsilon}}-\frac{\nu}{2}>0$ with
$\nu=\sum\limits_{t=1}^{T}{{{\left({\frac{{\Delta_{l}^{t}}}{{\sigma_{{\rm{eff}}}^{2}}}}\right)}^{2}}}$,
so that the $({\mathbb{\epsilon}},\delta)$-DP condition becomes
$\frac{2}{{\sqrt{2\pi\nu}}}\int_{b}^{\infty}{\exp\left({-\frac{{{x^{2}}}}{{2\nu}}}\right)}dx<\delta.$
(47)
Besides, the convergence can be proved by following similar steps as in
Section III-A2.
## IV Power Control for Training Time Minimization and an Asynchronous FL
Protocol
In this section, an optimization problem is formulated to jointly optimize the
transmit power and data rate under practical constraints on the UEs' energy
consumption with different quantization accuracies of the ADCs/DACs. Note that
the quantization noise is used for privacy protection; therefore, different
quantization accuracies realize different DP protection levels. By applying
the successive convex approximation (SCA) approach, we design a
computationally efficient algorithm to obtain a suboptimal solution of the
power allocation. Besides, we propose an asynchronous FL protocol to alleviate
staleness, improve efficiency, and better utilize the progress made by
stragglers.
### IV-A Low-resolution ADCs Equipped at the APs
According to (12) and (17), the problem of FL training time minimization with
low-resolution ADCs in the CF mMIMO system can be formulated as
$\displaystyle\mathop{\text{minimize}}\limits_{{\mathbf{p}},{\mathbf{R}}}\;\;\;T_{\text{time}}$
(47a)
$\displaystyle\;\;{\text{s.}}{\text{t.}}\;\;\;0\leqslant{p_{k}}\leqslant{p_{\max}},$
(47b) $\displaystyle\;\;\;\;\;\;\;\;{R_{k}}\leqslant{r_{k}},$ (47c)
where
${\mathbf{p}}={\left[{{p_{1}},\cdots,{p_{K}}}\right]^{T}},{\mathbf{R}}={\left[{{R_{1}},\cdots,{R_{K}}}\right]^{T}}$.
By introducing slack variables ${u_{k}}$, $x$, $x_{1}$, and $x_{2}$, we
reformulate (47) as follows:
$\displaystyle\mathop{\text{minimize}}\limits_{{\mathbf{u}},{\mathbf{R}},x}\;\;\;x$
(48a) $\displaystyle\;{\text{s.}}{\text{t.}}\;\;\;x\geqslant{x_{1}}+{x_{2}},$
(48b)
$\displaystyle\;\;\;\;\;\;\;\;{x_{1}}\geqslant\frac{ST}{{{R_{k}}}},\;\;\;\;\;\;\;\;{x_{2}}\geqslant\frac{{KST}}{{\sum\limits_{k=1}^{K}{{R_{k}}}}},$
(48c)
$\displaystyle\;\;\;\;\;\;\;\;{R_{k}}\leqslant\left({1-\frac{{{\tau_{p}}}}{{{\tau_{c}}}}}\right)B\times{\log_{2}}\left({1+\frac{{u_{k}^{2}{A_{k}}}}{{\sum\limits_{i\neq
k}^{L}{u_{i}^{2}}{B_{ki}}+{C_{k}}+{D_{k}}+\sum\limits_{i=1}^{K}{u_{i}^{2}}{E_{ki}}}}}\right),$
(48d)
$\displaystyle\;\;\;\;\;\;\;\;u_{k}^{2}\leqslant{p_{\max}},\;\;\;\;\;\;\;\;{u_{k}}\geqslant
0.$ (48e)
Note that different quantization accuracies realize different DP protection
levels. Specifically, in order to satisfy the
$({\mathbb{\epsilon}},\delta)$-DP requirement, according to (35), an
appropriate value of the linear gain $\alpha$, which depends on the number of
quantization bits, needs to be selected. At the same time, according to (14),
the rate $R_{k}$ is affected by $\alpha$. However, due to the nonconvex
constraint (48d), problem (48) is still challenging. To address the problem at hand,
we exploit the fact that a function
$f\left({x,y}\right)=\log_{2}\left({1+\frac{{{{\left|x\right|}^{2}}}}{y}}\right)$
has the following lower bound [25]:
$\displaystyle f\left({x,y}\right)$
$\displaystyle\geqslant\log_{2}\left({1+\frac{{{{\left|{{x^{\left(n\right)}}}\right|}^{2}}}}{{{y^{\left(n\right)}}}}}\right)-\frac{{{{\left|{{x^{\left(n\right)}}}\right|}^{2}}}}{{{y^{\left(n\right)}}}}+2\frac{{{x^{\left(n\right)}}x}}{{{y^{\left(n\right)}}}}-\frac{{{{\left|{{x^{\left(n\right)}}}\right|}^{2}}\left({{{\left|x\right|}^{2}}+y}\right)}}{{{y^{\left(n\right)}}\left({{{\left|{{x^{\left(n\right)}}}\right|}^{2}}+{y^{\left(n\right)}}}\right)}},$
(49)
where $n$ denotes the $n$-th iteration of SCA, $x\in\mathbb{R},y>0$, and
${y^{\left(n\right)}}>0$. Therefore, the concave lower bound in (48d) is
$\displaystyle{R_{k}}\leqslant\left({1-\frac{{{\tau_{p}}}}{{{\tau_{c}}}}}\right)B\left({{{\log}_{2}}\left({1+\frac{{{{\left|{\Upsilon_{k}^{\left(n\right)}}\right|}^{2}}}}{{\Pi_{k}^{\left(n\right)}}}}\right)-\frac{{{{\left|{\Upsilon_{k}^{\left(n\right)}}\right|}^{2}}}}{{\Pi_{k}^{\left(n\right)}}}+2\frac{{\Upsilon_{k}^{\left(n\right)}{\Upsilon_{k}}}}{{\Pi_{k}^{\left(n\right)}}}-\frac{{{{\left|{\Upsilon_{k}^{\left(n\right)}}\right|}^{2}}\left({\Upsilon_{k}^{2}+{\Pi_{k}}}\right)}}{{\Pi_{k}^{\left(n\right)}\left({{{\left|{\Upsilon_{k}^{\left(n\right)}}\right|}^{2}}+\Pi_{k}^{\left(n\right)}}\right)}}}\right),$
(50)
where ${\Upsilon_{k}}={u_{k}}\sqrt{{A_{k}}}$, and
${\Pi_{k}}=\sum\limits_{i\neq
k}^{L}{u_{i}^{2}}{B_{ki}}+{C_{k}}+{D_{k}}+\sum\limits_{i=1}^{K}{{p_{i}}}{E_{ki}}$.
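As a numerical spot check of the SCA bound underlying (49), the sketch below verifies the inequality in natural logarithms (written this way for simplicity; the base-2 statement differs only by constant factors) and confirms tightness at the expansion point. The sample ranges are arbitrary:

```python
import math
import random

def f(x, y):
    """ln(1 + x^2 / y), the function being lower-bounded, cf. (49)."""
    return math.log(1.0 + x * x / y)

def f_lb(x, y, xn, yn):
    """Concave SCA lower bound of f around the point (xn, yn)."""
    a = xn * xn / yn
    return (math.log(1.0 + a) - a + 2.0 * xn * x / yn
            - a * (x * x + y) / (xn * xn + yn))

random.seed(3)
xn, yn = 1.0, 1.0
ok = all(f_lb(x, y, xn, yn) <= f(x, y) + 1e-12
         for x, y in ((random.uniform(0.05, 5.0), random.uniform(0.05, 5.0))
                      for _ in range(10_000)))
tight = abs(f_lb(xn, yn, xn, yn) - f(xn, yn)) < 1e-12
print(ok, tight)
```

The bound matches $f$ in value and gradient at $(x^{(n)},y^{(n)})$, which is exactly what the SCA iteration needs.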
Finally, at SCA iteration $n$, problem (48) can be approximated by the
following convex problem for a given point $u_{k}^{\left(n\right)}$:
$\mathop{\min}\limits_{\left\\{{{\mathbf{u}},{\mathbf{R}}}\right\\}\in\mathcal{F}}x,$
(51)
where $\mathcal{F}\triangleq\left\\{{\text{(48b), (48c), (48e),
(50)}}\right\\}$ is a convex feasible set. As a result, (51) can be solved by
standard convex optimization. The bounds in (48) can then be tightened
iteratively, yielding the suboptimal procedure summarized in Algorithm 1. The
proposed algorithm achieves a suboptimal solution of (48), and its convergence
can be proved via an approach similar to that in [25]. Since each instance of
(51) is a simple convex program, the overall complexity of Algorithm 1 is
polynomial.
Algorithm 1 A Suboptimal Algorithm for (48)
1:Input: the maximum transmit power ${p_{\max}}$ for each UE; large-scale
fading coefficients ${\beta_{lk}},\forall l,k$; initial values
$u_{k}^{\left(0\right)},\forall k$; tolerance $\varepsilon\geq 0$. Set $n=1$.
2:Iteration $n$:
* •
Solve (51) to get its optimal solution
$\left\\{{{{\mathbf{u}}^{*}},{{\mathbf{R}}^{*}}}\right\\}$.
* •
Update
$\left\\{{{{\mathbf{u}}^{\left(n\right)}},{{\mathbf{R}}^{\left(n\right)}}}\right\\}=\left\\{{{{\mathbf{u}}^{*}},{{\mathbf{R}}^{*}}}\right\\}$.
3:Stop if $\left|{x-{x^{\left(n\right)}}}\right|\leqslant\varepsilon$ and
output the solutions
$u_{k}^{{\rm{opt}}}=u_{k}^{\left(n\right)},R_{k}^{{\rm{opt}}}=R_{k}^{\left(n\right)},\;\forall
k$. Otherwise, set $n=n+1$ and go to Step 2.
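The iteration of Algorithm 1 is a standard SCA loop. The skeleton below abstracts the convex subproblem (51) behind a `solve_convex` callback (a hypothetical interface, not from the paper) and demonstrates the stopping rule on a toy fixed-point iteration:

```python
def sca_loop(solve_convex, u0, tol=1e-4, max_iter=50):
    """Skeleton of Algorithm 1.

    solve_convex(u_n) is assumed to return (u_star, x_star): the solution and
    objective value of the convex approximation built around the point u_n."""
    u_n, x_prev = u0, float("inf")
    for _ in range(max_iter):
        u_new, x_new = solve_convex(u_n)
        if abs(x_prev - x_new) <= tol:   # stopping rule of Algorithm 1
            return u_new, x_new
        u_n, x_prev = u_new, x_new
    return u_n, x_prev

# Toy subproblem: the Babylonian square-root update, whose fixed point is sqrt(2).
u_opt, _ = sca_loop(lambda u: ((u + 2.0 / u) / 2.0, (u + 2.0 / u) / 2.0), 1.0)
```

With the toy subproblem the loop converges to $\sqrt{2}$; in the paper's setting `solve_convex` would call a convex solver on (51).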
### IV-B Low-resolution DACs at the UEs
The problem of FL training time minimization with low-resolution DACs in the
CF mMIMO system can be formulated as
$\displaystyle\mathop{\min}\limits_{{\bf{R}},{\bf{u}}}\;\;\;w$ (52a)
$\displaystyle\;{\rm{s.}}{\rm{t.}}\;\;\;\;w\geq{\rm{}}{t_{1}}+{t_{2}},$ (52b)
$\displaystyle\;\;\;\;\;\;\;\;\;\;{t_{1}}\geq\sum\limits_{t=1}^{T}{\frac{{{S_{u,k}}}}{{R_{k}^{t}}}},\;\;\;{t_{2}}\geq\sum\limits_{t=1}^{T}{\frac{{\left|{{{\cal
K}^{t}}}\right|{S_{u}}}}{{\sum\limits_{k\in{\cal K}}{R_{k}^{t}}}}},$ (52c)
$\displaystyle\;\;\;\;\;\;\;\;\;\;R_{k}^{t}\leq\left({1-\frac{{{\tau_{p}}}}{{{\tau_{c}}}}}\right)B{\log_{2}}\left({1+{\rm{SINR}}_{k}^{t}}\right),$
(52d)
$\displaystyle\;\;\;\;\;\;\;\;\;{\left({u_{k}^{t}}\right)^{2}}\leq{p_{\max}},$
(52e)
where
${\rm{SINR}}_{k}^{t}=\frac{{p_{k}^{t}A_{k}^{t}}}{{\sum\limits_{i=1}^{K}{p_{i}^{t}B_{ki}^{t}}+\sum\limits_{i=1}^{K}{p_{i}^{t}C_{ki}^{t}}+E_{k}^{t}}},$
(53)
and $A_{k}^{t}$, $B_{ki}^{t}$, $C_{ki}^{t}$, and $E_{k}^{t}$ are defined in
(II-B2). Note that problem (52) can be solved following steps similar to those
for problem (48).
### IV-C An Asynchronous FL Protocol
Synchronous FL requires synchronous communication between the CPU and the UEs
during each training round: the CPU must wait until it receives responses from
sufficiently many UEs. Unfortunately, some UEs may be unresponsive during the
training process due to unreliable wireless links. The CPU then drops such UEs
and proceeds to the next training round. On the contrary, asynchronous FL
allows every UE, including those that would be dropped in synchronous FL, to
send its update directly to the CPU after each local round. In general,
asynchronous FL can achieve faster convergence when wireless links are
unreliable and heterogeneous across the UEs. Therefore, asynchronous FL has
drawn increasing interest in recent papers.
It can be observed from problem (52) that $d_{kl}$ determines the required
training time. Therefore, we propose a simple asynchronous FL protocol based
on the large-scale fading coefficients. Specifically, we provide a
lag-tolerant algorithm that allows some UEs to stay asynchronous with the CPU.
Note that straggler UEs are UEs that are slower and are still training locally
on outdated models, whereas, in general, a UE should start training based on
the latest global model received from the CPU.
At communication iteration $t$, each AP classifies all UEs into three
categories: synchronous, asynchronous, and need-to-be-synchronized. First, the
synchronous UEs refer to the UEs that are served by at least one AP and
complete their local model update based on the latest global model transmitted
from the CPU. Then, since the channel gains of straggler UEs are not strong
enough, they may dramatically slow down the whole FL process. In order to
mitigate the straggler effect, the local training of straggler UEs is still
based on the last version of the global model they received; such straggler
UEs are therefore also called asynchronous UEs. Besides, all UEs are forced to
synchronize after at most $T_{\text{tol}}$ blocks so that the global model
will not be poisoned by seriously outdated local models, where
$T_{\text{tol}}$ is called the lag tolerance.
In this model, at iteration $t$, the $l$th AP only serves the $K_{0,l}\leq K$
UEs corresponding to the $K_{0,l}$ largest large-scale fading coefficients.
The main question arising immediately is how to choose $K_{0,l}$. Naturally,
we can choose the $K_{0,l}$ UEs satisfying
$\sum\limits_{i=1}^{{K_{0,l}}}{\frac{{{{\bar{\beta}}_{il}}}}{{\sum\limits_{i^{\prime}=1}^{K}{{\beta_{i^{\prime}l}}}}}}\geq\nu\%,$
(54)
where $\left\\{{{{\bar{\beta}}_{1l}},\cdots,{{\bar{\beta}}_{Kl}}}\right\\}$ is
the set $\left\\{{{\beta_{1l}},\cdots,{\beta_{Kl}}}\right\\}$ sorted in
descending order, and $\nu$ is the lag percent. After choosing ${\cal
D}_{1}^{t},\cdots,{\cal D}_{L}^{t}$, where
${\mathcal{D}_{l}^{t}}=\left\\{{i:{d_{il}^{t}}=1,i\in\left\\{{1,\cdots
K}\right\\}}\right\\}$, we can follow the same method as in Section IV-A to
obtain an optimized power control.
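The rule (54) is a cumulative-share threshold on the sorted large-scale fading coefficients; a minimal sketch with made-up $\beta$ values:

```python
import numpy as np

def select_ues(beta_l, nu=85.0):
    """Smallest K_{0,l} such that the largest large-scale fading coefficients
    capture at least nu percent of the total, per (54)."""
    order = np.argsort(beta_l)[::-1]                       # descending beta
    share = np.cumsum(beta_l[order]) / beta_l.sum() * 100.0
    K0 = int(np.searchsorted(share, nu) + 1)
    return set(order[:K0].tolist())

beta = np.array([0.50, 0.10, 0.30, 0.05, 0.05])  # illustrative coefficients
served = select_ues(beta, 85.0)                  # UEs 0, 2, 1 cover 90% >= 85%
```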
## V Numerical Results
In this section, we first evaluate the performance of the proposed power
control algorithm under different DP protection levels in the CF mMIMO
supported FL network. Note that the DP mechanism is realized by the
quantization noise; therefore, we adopt ADCs and DACs with different numbers
of quantization bits to achieve different DP protection levels. More
specifically, we adopt similar parameter settings as in [25]. Note that we do
not propose a new FL framework but rather apply an existing FL framework to CF
mMIMO systems; we focus on how to reduce the training time for different
quantization bits, i.e., under different differential privacy levels.
Simulations on real datasets demonstrating the effectiveness of the considered
FL framework have already been performed in [18] and are hence not repeated in
this paper. Note that the unit of time is seconds in the following figures.
Figure 2: Uplink training time against the number of quantization bits with
$L=10$, $K=3$, $D=1$ km, and $p_{\max}=200$ mW.
Fig. 2 evaluates the uplink training time as a function of the number of
quantization bits in the synchronized model, comparing the proposed power
control method with full-power transmission. It can be seen from Fig. 2 that
the uplink training time decreases as the number of quantization bits
increases, since the quantization error is reduced. Specifically, the
performance of $b=3$ is close to that of $b=10$, which we refer to as the
perfect-ADC case. Therefore, reducing the quantization bits from 10 to 3 has
only a slight impact on the uplink training time, while the fronthaul load can
be relaxed significantly since the data size is reduced. Furthermore, applying
the proposed power control method leads to a large reduction in uplink
training time; in particular, the uplink training time is reduced by 35% and
21% when $b=1$ and $b=10$, respectively.
Figure 3: Uplink training time against the number of APs with $K=3$, $D=1$ km,
and $p_{\max}=200$ mW.
Fig. 3 shows the uplink training time against the number of APs in the
synchronized model. As expected, the uplink training time decreases as the
number of APs increases, which results from the higher macro-diversity gain
enhancing the detection performance. Besides, when $b=1$, applying the
proposed power control method leads to a 42% reduction in uplink training time
compared with simple full-power transmission. Although the gap between the
proposed power control and full-power transmission shrinks as the number of
APs increases, the advantage of our power control method remains obvious for a
reasonable number of APs. Furthermore, the performance with $b=3$ approaches
that of perfect ADCs. Note that in the latter case the quantization error is
no longer dominant and hence the DP guarantee also no longer holds. This
indicates that one can use low-resolution ADCs to realize privacy preservation
with reduced hardware cost and fronthaul load without noticeably increasing
the uplink training time.
Figure 4: Uplink training time against the number of UEs with $L=30$, $D=1$
km, and $p_{\max}=200$ mW.
Fig. 4 compares the uplink training time as a function of the number of UEs
in the synchronized model. As can be seen, the uplink training time grows as
the system serves more UEs, because the mutual interference becomes
stronger. However, our proposed power control method can effectively
mitigate the impact of mutual interference; in particular, the uplink
training time is reduced by 48%. Moreover, the gap between $b=3$ and $b=10$
remains negligible, although it widens as the number of UEs increases. For
example, when $K=10$, taking $b=3$ leads to only a 5% performance loss.
Figure 5: Uplink training time against the number of quantization bits with
$L=10$, $K=4$, $D=1$ km, and $p_{\max}=200$ mW.
Fig. 5 presents the performance of the synchronous and asynchronous modes as
a function of the number of quantization bits when low-resolution DACs are
equipped at the UEs. It can be observed that the asynchronous mode
effectively reduces the training time: it decreases the number of UEs that
each AP needs to serve, so the uplink training time drops sharply. In
particular, when $b=1$, the uplink training time with the asynchronous mode
and power control is 2.5 times shorter than that of the synchronous mode.
Figure 6: Uplink training time against the lag tolerance with $L=10$, $K=4$,
$D=1$ km, and $p_{\max}=200$ mW.
Fig. 6 shows the uplink training time against the lag tolerance with lag
percent $\nu=85\%$. As expected, the uplink training time first decreases as
the number of blocks increases, since data exchange occurs less often. It
then increases again as the SE of the UEs declines. The efficiency of our
power control method is also evident: when the number of blocks is 8,
applying the proposed power control method leads to a 42% reduction in
uplink training time.
Figure 7: Uplink training time against the lag percent $\nu$ with $L=10$,
$K=4$, $D=1$ km, and $p_{\max}=200$ mW.
Fig. 7 shows the uplink training time against the lag percent with a lag
tolerance equal to 4, where $\nu$ reflects the fraction of UEs served by the
APs in each round of communication. It can be observed from Fig. 7 that the
uplink training time decreases as $\nu$ is reduced from $100\%$ towards
$80\%$, because the straggler UEs that significantly slow down the whole FL
process are no longer served in every communication iteration. However, when
$\nu$ is reduced further, the uplink training time begins to increase, since
fewer users are served in each round of communication, which lowers the SE
and in turn lengthens the training time. Therefore, Fig. 7 shows that
choosing an appropriate value of $\nu$ can effectively reduce the training
time.
## VI Conclusions
In this paper, we studied DP in wireless FL enabled by CF mMIMO systems with
low-resolution ADCs and DACs. By introducing the quantization noise as the
DP mechanism, we derived an expression for the privacy-preservation
condition and provided a convergence analysis for the proposed model.
Targeting the minimization of the uplink training time, we jointly optimized
the transmit power and data rate under different DP protection levels. The
simulation results showed that our proposed power control method can
effectively reduce the uplink training time in all considered cases. To
mitigate the straggler effect, we proposed an asynchronous FL protocol that
incorporates a UE selection algorithm based on the large-scale fading
coefficients, which decouples the CPU and the selected UEs, reducing the
uplink training time and tackling the tradeoff between faster convergence
and lower communication overhead. To further improve the system performance,
future extensions such as channel allocation can be considered.
# On the Design of Communication-Efficient Federated Learning for Health
Monitoring
††thanks: D. Chu is with the School of Information and Communication
Engineering, University of Electronic Science and Technology of China,
Chengdu, China, email: <EMAIL_ADDRESS>. W. Jaafar is with the Software and
Information Technology Engineering Department, École de Technologie
Supérieure, QC, Canada, email: <EMAIL_ADDRESS>. H. Yanikomeroglu is with the
Systems and Computer Engineering Department, Carleton University, ON,
Canada, email: <EMAIL_ADDRESS>.
Dong Chu, Wael Jaafar, and Halim Yanikomeroglu
###### Abstract
With the booming deployment of Internet of Things, health monitoring
applications have gradually prospered. Within the recent COVID-19 pandemic
situation, interest in permanent remote health monitoring solutions has
risen, with the aim of reducing contact and preserving the limited medical
resources. Among the technological methods to realize efficient remote
health monitoring, federated learning (FL) has drawn particular attention
due to its robustness in preserving data privacy. However, FL can incur high
communication costs, due to frequent transmissions between the FL server and
clients. To tackle this problem, we propose in this paper a communication-
efficient federated learning (CEFL) framework that involves clients clustering
and transfer learning. First, we propose to group clients through the
calculation of similarity factors, based on the neural networks
characteristics. Then, a representative client in each cluster is selected to
be the leader of the cluster. Differently from the conventional FL, our method
performs FL training only among the cluster leaders. Subsequently, transfer
learning is adopted by the leader to update its cluster members with the
trained FL model. Finally, each member fine-tunes the received model with its
own data. To further reduce the communication costs, we opt for a partial-
layer FL aggregation approach. This method suggests partially updating the
neural network model rather than fully. Through experiments, we show that
CEFL can save up to 98.45% in communication costs while conceding less than
3% in accuracy loss compared to conventional FL. Finally, CEFL
demonstrates a high accuracy for clients with small or unbalanced datasets.
###### Index Terms:
Federated learning, health monitoring, communication cost.
## I Introduction
Internet of Things (IoT) technology has risen in recent years, allowing its
application in several areas such as e-health wearable devices, smart homes,
and autonomous cars. IoT technology has been constantly improving our lives,
and one of its most rapidly developing services is health monitoring.
Indeed, IoT sensors can be used to observe a patient’s condition, detect an
illness early, or alert the medical staff about a critical health condition [1].
In a pandemic situation where the medical staff is constantly under
pressure, remote health monitoring has been developing rapidly to partially
alleviate this burden. Supported by IoT devices, remote health systems can
reliably ensure self-treatment at home, detect and monitor emergency
situations such as a heart attack or the fall of an elderly person, and
automatically call for assistance from the adequate first-responder staff [2].
In order for these services to be efficient, data need to be collected from
IoT devices, filtered, and processed. For instance, to predict a falling
event, motion data need to be analyzed, while electroencephalography (EEG)
signal and heart rate history data can serve to synthesize an in-depth report
and pre-diagnose an illness. Data processing would typically rely on cloud
computing platforms [3]. To make remote health monitoring more accurate,
large-scale machine learning (ML) approaches can be leveraged. However, the
associated centralized data and model training process bring serious data
security and privacy concerns.
Rather than learning from centrally collected user data and being exposed to
the risk of privacy leakage, federated learning (FL) can address this problem
using a collaborative model through the communication of only the training
model, while keeping the training process and data at the local level, i.e.,
close to patients. However, FL raises other concerns, such as high
communication costs and systems heterogeneity [4]. Indeed, frequent
communications can rapidly become the bottleneck of the FL development. This
is caused mainly by the important number of communication rounds between the
server and clients and the size of transmitted data.
In this context, we aim here to reduce the communication costs of FL, applied
for a specific health monitoring service example, i.e., patient activity
detection. To do so, we propose the integration of graph clustering and
transfer learning techniques into FL, which would drastically reduce the data
exchange rounds. Specifically, FL is realized among only a fraction of the
available clients with highly significant data features, and with limited data
exchange. To the best of our knowledge, this work is among the first to
combine such techniques in order to reduce the communication costs of FL.
Subsequently, the main contributions of this paper can be summarized as
follows:
* •
We propose to cluster FL clients based on their mutual similarity, measured
from their neural network weights, then we select a cluster leader for each
one of them.
* •
Next, we propose to perform federated learning among only a fraction of the
clients, i.e., cluster leaders, thus cutting down the number of clients
participating in FL.
* •
To further reduce the communication costs, we opt for partial-layer FL
aggregation, where we select the weights representing the most interesting FL
features only.
* •
Through experiments, we demonstrate the efficiency of our FL method in
drastically reducing the communication costs at the expense of a slight loss
in FL accuracy, compared to the conventional FL approach.
The rest of the paper is organized as follows. Section II reviews related
works. Section III describes the conventional FL system. Section IV presents
the proposed FL framework. In Section V, experimental results are provided to
evaluate the performances of our FL approach and validate its efficiency from
both the accuracy and communication cost perspectives. Finally, Section VI
concludes the paper.
## II Related Works
There is a growing need for health monitoring to be cost-efficient, reliable,
and accessible. Thus, federated learning, as a subdivision of machine learning
that guarantees data privacy, is suitable for healthcare applications.
For electronic health records, Liu et al. proposed federated-autonomous deep
learning to train different parts of the FL model using all or specific data
sources [5]. To cope with the unbalanced distributed datasets in the edge
computing system, Duan et al. built a self-balancing FL framework that uses
data augmentation and multi-client rescheduling [6]. Similarly, a cloud-edge
based FedHome framework was proposed by Wu et al. to handle the unbalanced and
non-independent and identically distributed data via the generative
convolutional autoencoder, thus realizing accurate and personalized health
monitoring [7]. Moreover, an efficient activity recognition application based
on FL has been developed in [8]. FL was used to mitigate the privacy violation
problem and to reduce data collection costs for centralized training, while
Fang et al. proposed in [9] privacy preservation and communication costs
reduction through the use of a lightweight encryption protocol. Focusing on
the FL communication efficiency only, researchers proposed compression-based
methods to reduce the size of the communicated model. For instance, Konečný et
al. proposed to reduce the size of the uplink data through structured and
sketched updates, where an update is learned from a restricted parameterized
space and compressed prior to upload [10]. Also, sparse ternary compression
was proposed in [11], which is proved to be more robust and converge faster
than the federated averaging benchmark. Other communication cost reduction
approaches include FedPAQ [12] that allows only partial device participation
and periodic server averaging for quantized message uploads, and CMFL [13],
which reduces the number of updates by eliminating irrelevant ones to the
global model improvement tendency.
## III FL Preliminaries
In the conventional federated learning, users train a global neural network
model collaboratively without having to share their local data. FL aims to
realize an empirical global optimization through the iterative global
aggregation and edge model update. For a system with $N$ clients, let $D_{n}$
be the dataset of client $n$ and $f_{i}(w)$ the loss of sample $i$ under
model weights $w$. The objective is to minimize the training loss $F_{n}(w)$
for client $n$, where
$\small F_{n}(w)=\frac{1}{|D_{n}|}\sum_{i\in D_{n}}f_{i}(w),$ (1)
where $|D_{n}|$ is the number of data samples in $D_{n}$. In each FL training
round $t$, participating clients get from the FL server the latest global
neural network model $\omega(t)$. Then, each client executes a number of local
training episodes $\varepsilon$ based on its local data. At the end of
$\varepsilon$ episodes, each client sends its local model $\omega({t+1})$ to
the server, and the latter aggregates all received local models, weighted by
dataset size, into its own global model. The corresponding global loss is
$\small F(w)=\sum_{n=1}^{N}\frac{|D_{n}|}{|D|}F_{n}(w),$ (2)
where $|D|$ denotes the number of data samples from all clients. This process
describes one global FL round, where the conventional optimization objective
of FL is given by (2).
Unlike conventional FL, we present next our proposed method for communication
costs reduction in FL, called communication-efficient FL (CEFL).
## IV Proposed CEFL Framework
Figure 1: The CEFL framework.
The CEFL framework is depicted in Fig. 1, where we distinguish between two
training sessions, namely FL and transfer learning, which are detailed below.
### IV-A Federated Learning Session
Before running FL rounds, four steps need to be executed:
* •
Step 1 (Building the clients’ similarity graph): We begin by quantifying the
clients’ mutual similarity. First, we train the local models for a small
number of episodes and extract their neural network weights. Then, to evaluate
the similarity factor of two clients $i$ and $j$, denoted $d_{ij}$, we
calculate the Euclidean distances between their weights corresponding to the
same network layer, and sum them over all layers:
$\small d_{ij}=\sum_{l=1}^{L}\left\|\omega_{i}^{l}-\omega_{j}^{l}\right\|,$
(3)
where $L$ is the number of neural network layers for the clients’ model,
$\omega_{i}^{l}$ is the set of neural network weights at the $l^{th}$ layer of
client $i$’s model, and $||\cdot||$ is the Euclidean distance operator.
Consequently, a graph $G(V,E)$ can be built, where vertices $V$ and edges $E$
represent the clients and similarity factors, respectively. For accurate
representation within the graph, we assign the weights $S_{ij}$ to the edges
rather than $d_{ij}$, where
$\small S_{ij}=-d_{ij}+d_{\min}+d_{\max},$ (4)
and $d_{\min}$ and $d_{\max}$ are the minimum and maximum values of $d_{ij}$,
$\forall i,\forall j$, respectively. Hence, a large $S_{ij}$ refers to high
similarity and a small $S_{ij}$ to low similarity. In Fig. 2.a, a similarity
graph example is illustrated.
* •
Step 2 (Clients clustering): Given the similarity graph, we adopt the Louvain
algorithm to detect community structures, i.e., clients with strong
similarities, within $G(V,E)$ [14]. Our choice of clustering algorithm is
motivated by its fast convergence, implementation simplicity, and
customizability. The Louvain algorithm is a greedy approach that allows to
optimize the modularity as it runs. The modularity score (between -0.5 and 1)
measures the relative density of edges inside communities with respect to
those outside communities. When using the Louvain algorithm, the number of
clusters needs to be specified according to the demand for cluster sizes. In
Figs. 2.b and 2.c, we depict the graph clustering step.
* •
Step 3 (Leader selection): Following step 2, we designate one client to be the
leader of each cluster, whose responsibility consists of participating in the FL
session. The leader is selected as the one sharing the highest similarity with
the clients of its cluster. In other words, a client $c_{k}$ is the leader of
cluster $k$ only if
$\small c_{k}=\arg\max_{i}\sum_{j\in C_{k};j\neq i}S_{ij},$ (5)
where $C_{k}$ is the set of clients in cluster $k$. The above steps lay the
foundation for FL among the cluster leaders.
* •
Step 4 (Partial-layer FL aggregation): Instead of the conventional FL that
updates the whole neural network model weights, we opt here for a partial
aggregation strategy, aiming to preserve more distinctive cluster features.
The partial aggregation strategy assumes that neural networks are divided into
base layers and personalized layers to combat statistical heterogeneity [15].
When performing FL among cluster leaders, each leader uploads all or only a
part of the trained weights to the server, while they receive only the updated
weights for the base layers.
Let $B$ and $(L-B)$ be the number of base layers and personalized layers,
respectively. Since base layers are typically the first ones in the neural
network model, the weight update in the $(t+1)^{th}$ training round is given
by:
$\small\omega_{\rm gl}({t+1})=\sum_{k=1}^{K}a_{k}\omega_{c_{k}}(t),$ (6)
where $K$ is the number of cluster leaders participating in the FL round,
$a_{k}\in[0,1]$ is the weight factor of cluster leader $c_{k}$ in the global
aggregation such that $\sum_{k=1}^{K}a_{k}=1$, and $\omega_{\rm gl}$ (resp.
$\omega_{c_{k}}$) represents the updated global (resp. local) neural network
weights (resp. of leader $c_{k}$, $\forall k=1,\ldots,K$) of base layers. Once
the global neural network model is updated, the FL server broadcasts the
aggregation outcome for the $B$ base layers to the cluster leaders. The latter
update their base layers weights as follows:
$\small\omega_{c_{k}}^{l}(t+1)=\omega_{\rm gl}^{l}(t+1),\quad\forall
l\in[1,B],\;k=1,\ldots,K.$ (7)
The above process is repeated for $T$ FL training rounds, as described in
Algorithm 1, and where the function Louvain is the clustering algorithm.
(a) Similarity graph
(b) Similarity graph prior to clustering
(c) Clustering result
Figure 2: Clustering clients based on similarity: (a) Building the similarity
graph among clients. Each client is represented with its neural network model.
The edges’ values are the similarity factors calculated with eq.(4). (b) A
different representation of the similarity graph prior to clustering. Each
node is a client. (c) Clustering outcome. Nodes with different colors
represent clients clustered together.
Input: Nbr. of clients $N$, nbr. of clusters $K$, nbr. of FL rounds $T$.
Initialization:
1: Get $\omega_{i},\forall i=1,\ldots,N$ after a short local training
Clients clustering:
2: for $i\leftarrow 1$ to $N-1$ do
3: for $j\leftarrow i+1$ to $N$ do
4: Update similarity graph $G(V,E)$ using eq.(4)
5: end for
6: end for
7: Get $\{c_{1},\ldots,c_{K}\}\leftarrow$ Louvain$(G(V,E),K)$
Federated learning:
8: while $t<T$ do
9: Update the global neural network model based on eq.(6)
10: Broadcast $\omega_{\rm gl}(t+1)$ to cluster leaders
11: for $k\leftarrow 1$ to $K$ do
12: Update local model’s base layers using eq.(7)
13: Train local model for $\varepsilon$ episodes
14: end for
15: end while
Output: Cluster leaders’ neural network models.
Algorithm 1 Federated Learning Session
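The similarity computation of Step 1 (eqs. (3)–(4)) can be sketched as
below. Here `models` is a hypothetical list of per-layer NumPy weight
arrays standing in for the clients’ briefly trained networks, not the
paper’s FD-CNN weights.

```python
import numpy as np

def similarity_weights(models):
    # d_ij: layer-wise Euclidean distances summed over all layers, eq. (3).
    N = len(models)
    d = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            dist = sum(np.linalg.norm(wi - wj)
                       for wi, wj in zip(models[i], models[j]))
            d[i, j] = d[j, i] = dist
    # S_ij = -d_ij + d_min + d_max, eq. (4), with extrema taken over the
    # off-diagonal pairs; a larger S_ij means higher similarity.
    off_diag = d[~np.eye(N, dtype=bool)]
    S = -d + off_diag.min() + off_diag.max()
    np.fill_diagonal(S, 0.0)
    return d, S

# Toy example: clients 0 and 1 are near-identical, client 2 is far away.
models = [[np.array([0.0, 0.0])],
          [np.array([0.1, 0.0])],
          [np.array([5.0, 5.0])]]
d, S = similarity_weights(models)
```

As intended by eq. (4), the pair with the smallest distance receives the
largest edge weight in the similarity graph.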
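The leader election of Step 3 (eq. (5)) then reduces to an arg-max over the
summed intra-cluster similarities. The matrix `S` below is a toy example,
not data from the paper.

```python
import numpy as np

def select_leader(cluster, S):
    # Eq. (5): the leader maximizes its summed similarity to the other
    # members of its own cluster.
    scores = {i: sum(S[i, j] for j in cluster if j != i) for i in cluster}
    return max(scores, key=scores.get)

# Toy similarity matrix for a 3-client cluster {0, 1, 2}: client 1 is
# strongly similar to both others, so it should be elected leader.
S = np.array([[0.0, 5.0, 1.0],
              [5.0, 0.0, 6.0],
              [1.0, 6.0, 0.0]])
leader = select_leader([0, 1, 2], S)
```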
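Partial-layer aggregation and the subsequent broadcast (Step 4, eqs.
(6)–(7)) can be sketched as follows. Each model is a hypothetical list of
per-layer arrays whose first $B$ entries are the base layers; the layer
shapes and values are illustrative only.

```python
import numpy as np

def partial_aggregate(leader_models, a, B):
    # Eq. (6): weighted average of the first B (base) layers across the K
    # cluster leaders; personalized layers are excluded from the average.
    K = len(leader_models)
    return [sum(a[k] * leader_models[k][l] for k in range(K))
            for l in range(B)]

def apply_broadcast(model, base_global):
    # Eq. (7): a leader overwrites its base layers with the global ones,
    # keeping its personalized layers untouched.
    return base_global + model[len(base_global):]

# Two leaders, L = 3 layers, B = 2 base layers, equal weights a_k = 1/K.
leaders = [[np.ones(2), np.ones(2), np.full(2, 9.0)],
           [np.full(2, 3.0), np.full(2, 5.0), np.full(2, -9.0)]]
base = partial_aggregate(leaders, a=[0.5, 0.5], B=2)
updated = [apply_broadcast(m, base) for m in leaders]
```

After the broadcast, both leaders share the averaged base layers while each
keeps its own personalized layer, which is the mechanism CEFL uses to
preserve distinctive cluster features.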
### IV-B Transfer Learning Session
After the $T$ rounds of FL, $c_{k}$’s neural network weights
$\omega_{c_{k}}(T)$ include the weights of the base layers, trained through
FL, and the weights of the personalized layers, trained only with the local
data. Transfer learning consists of sending the pre-trained model weights of
each leader to the members of its cluster. Consequently, the members’ models
are initialized with the leader’s model weights as follows:
$\small\omega_{j}=\omega_{c_{k}}(T),\quad\forall j\in C_{k}.$ (8)
Subsequently, each cluster member (other than the leader) trains its model
using its own dataset for at most $\eta$ episodes or until convergence. This
training process is equivalent to individual training and does not require
any further communication among the clients and/or the FL server.
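The transfer learning session of eq. (8) followed by local fine-tuning can
be sketched as below. As a simplifying assumption, the member’s “model” is
a toy linear least-squares fit standing in for FD-CNN, and the
hyperparameters are illustrative.

```python
import numpy as np

def transfer_and_finetune(leader_model, member_data, eta=50, lr=0.2):
    # Eq. (8): initialize the member's weights with the leader's trained
    # model, then fine-tune on the member's own data for at most eta
    # episodes; no further communication with the FL server is needed.
    w = leader_model.copy()
    X, y = member_data
    for _ in range(eta):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Toy member dataset generated from a known linear model; the leader's
# weights (here, zeros) merely serve as the fine-tuning starting point.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])
w_member = transfer_and_finetune(np.zeros(2), (X, y))
```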
### IV-C Communication Cost
The communication cost of CEFL is decomposed into four parts:
1. 1.
The upload of neural network weights of all clients at the short initial
individual training to initialize clustering.
2. 2.
The upload of base layers weights of leaders to the server in each FL round.
3. 3.
The broadcast of base layers’ weights from the server to leaders in each FL
round.
4. 4.
The transmission of all model weights from each leader to its cluster members
in the transfer learning session.
Let $\delta_{l}$ be the data size (in bits) of the weights in layer $l$ of the
neural network model of any client/server. Hence, the total amount of data
transiting in the FL system, denoted by $\Delta$, is given by
$\displaystyle\Delta$ $\displaystyle=$ $\displaystyle
N\sum_{l=1}^{L}\delta_{l}+KT\sum_{l=1}^{B}\delta_{l}+T\sum_{l=1}^{B}\delta_{l}+K\sum_{l=1}^{L}\delta_{l}$
(9) $\displaystyle=$
$\displaystyle\left(N+K\right)\sum_{l=1}^{L}\delta_{l}+T\left(K+1\right)\sum_{l=1}^{B}\delta_{l}\;\rm{(bits)}.$
Assuming that each bit has a unitary cost, then the communication cost is
equal to $\Delta$.
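Eq. (9) can be evaluated directly; the layer sizes and system parameters
below are illustrative, not taken from the paper.

```python
def comm_cost(delta, N, K, T, B):
    # Eq. (9): total bits exchanged in CEFL.
    # delta[l] is the data size (bits) of layer l's weights, l = 0..L-1;
    # the first B entries correspond to the base layers.
    full = sum(delta)        # all L layers (initial upload + transfer)
    base = sum(delta[:B])    # base layers only (per-round FL traffic)
    return (N + K) * full + T * (K + 1) * base

# Toy example: L = 4 layers of 1000 bits each, B = 2 base layers,
# N = 10 clients, K = 3 leaders, T = 100 FL rounds.
total_bits = comm_cost([1000] * 4, N=10, K=3, T=100, B=2)
```

With these numbers, the one-off full-model transfers cost
$(N+K)\cdot 4000 = 52\,000$ bits and the per-round base-layer traffic costs
$T(K+1)\cdot 2000 = 800\,000$ bits, for $\Delta = 852\,000$ bits in total.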
## V Experimental Evaluation
### V-A Dataset and Preprocessing
The FL experiments conducted in this work are related to a health monitoring
application, where data collected from patients is analyzed to identify
their activities. Specifically, we relied on the public dataset MobiAct
[16]. MobiAct is a dataset for activity recognition, where data from 67
patients, between the ages of 20 and 47, is obtained and labelled. Data is
collected using the patients’ smartphones while they perform different
activities. The dataset covers four types of fall activities and nine types
of daily activities. For the sake of simplicity, we focus only on activities
that indicate a possible upcoming fall, thus reducing the number of
recognizable classes to eight, namely the four initial fall activity
classes, i.e., forward-lying, front-knees-lying, sideward-lying, and back-
sitting-chair, three fall-like activity classes, i.e., sit chair, car step
in, and car step out, and one daily activity class including all of the
standing, walking, jogging, jumping, stairs up, and stairs down types.
We opt here for the data preprocessing method proposed in [17], which samples
3-axial accelerations and angular velocities data using sliding windows and
converts them into the bitmap format. Given a subject’s 3-axial sampled signal
of one activity, a sliding window with a given slide interval $I_{\rm type}$
that moves along the entire signal is used to capture signal features (the
slide interval refers to the number of sampling points between the starting
points of two successive windows). Data captured by each sliding window is used to
construct a red-green-blue (RGB) bitmap image, where data of accelerations and
angular velocities from one axis are taken as pixel RGB values. For efficiency
purposes, we optimize the slide interval size between windows. Indeed, in the
MobiAct dataset, different types of activities are recorded for different time
durations, denoted $t_{type}$. Since all fall activities are sampled for 10
seconds, we empirically initialize the reference sliding interval $I_{0}$ to
be 40. However, some activities, such as walking, are recorded for up to 10
minutes. In order to avoid making processed dataset more unbalanced, we
propose to adjust the sliding intervals for different activities, called
$I_{type}$, to their recorded duration. Let $t_{0}=10$ seconds be the
reference duration, then, the slide interval for each activity should be
customized according to
$I_{\rm type}=I_{0}\frac{t_{\rm type}}{t_{0}}.$ (10)
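The window placement implied by Eq. 10 can be sketched as follows; this is a minimal illustration with variable names of our choosing, where only the reference values $I_{0}=40$ and $t_{0}=10$ s come from the text.

```python
def slide_interval(t_type, I0=40, t0=10.0):
    """Eq. 10: scale the slide interval with the recording duration t_type (seconds)."""
    return round(I0 * t_type / t0)

def window_starts(n_samples, window_len, interval):
    """Starting indices of sliding windows over a signal of n_samples points."""
    return list(range(0, n_samples - window_len + 1, interval))
```

A 10 s fall recording keeps $I=40$, while a 600 s walking recording gets $I=2400$, so both yield a comparable number of windows per recording.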
### V-B Experimental Setup
The neural network model used in this paper is the fall detection
convolutional neural network (FD-CNN) proposed in [17]. FD-CNN takes as input
the 3-channel $20\times 20$ RGB bitmap image. It is composed of 2
convolutional layers, 2 subsampling layers, and 2 fully-connected layers. The
filter size of the 2 convolutional layers is $5\times 5$, while the filter
numbers are 3 and 32, respectively. A $2\times 2$ max-pooling layer follows
each of the convolutional layers, while the fully connected layers include 512
and 8 units, respectively. FD-CNN adopts ReLU as the activation function,
while the softmax function is used in the last fully-connected layer to
normalize the output to a probability distribution. The learning rate is set
to $10^{-4}$. The neural network is trained by the Adam optimizer with a batch
size of 32, and the cross-entropy loss function is adopted to measure the
classification performance. Finally, we set $a_{k}=1/K$, $\forall
k=1,\ldots,K$.
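The FD-CNN layer sizes quoted above can be verified with a small shape trace. A sketch only: the filter counts (3 and 32) and layer sizes are from the text, while valid (unpadded) stride-1 convolutions and non-overlapping $2\times 2$ pooling are our assumptions.

```python
def conv_out(size, kernel):
    # valid convolution, stride 1 (assumed)
    return size - kernel + 1

def pool_out(size, pool):
    # non-overlapping max-pooling (assumed)
    return size // pool

# FD-CNN on a 3-channel 20x20 bitmap input
h = conv_out(20, 5)   # conv1 (3 filters, 5x5)  -> 16
h = pool_out(h, 2)    # pool1 (2x2)             -> 8
h = conv_out(h, 5)    # conv2 (32 filters, 5x5) -> 4
h = pool_out(h, 2)    # pool2 (2x2)             -> 2
flat = h * h * 32     # flattened features feeding the 512-unit FC layer
```

The flattened feature vector of length 128 then passes through the 512-unit and 8-unit fully connected layers.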
For our experiments, we compare the performance of CEFL with the following
baselines:
* •
Regular FL: It is conventional federated learning between the server and all
clients, with FD-CNN as the common training model for the server and the
clients.
* •
FedPer: It is a federated learning approach with base layers and personalized
layers that combats the degradation from statistical heterogeneity [15]. The
neural network model for the server and clients is FD-CNN.
* •
Individual Training: Training is conducted by the clients themselves without
any data exchange or model communication. The neural network of each client is
FD-CNN.
### V-C Results and Discussion
First, in the proposed CEFL framework, the number of clusters $K$ is an
important parameter to determine, since its choice influences how
representative the chosen leaders are. To clarify its impact, we evaluate in
Fig. 3 the FL accuracy, calculated as the average clients' accuracy, for
different $K$ values. As $K$ grows from 2 to 20, the accuracy gradually
decreases from 88.2% to 86.81%, making $K=2$ the optimal number of clusters
in CEFL. Consequently, we fix $K=2$ for the remaining experiments.
Figure 3: CEFL accuracy vs. nbr. of rounds (different $K$).
In Table I, we compare the proposed CEFL to the aforementioned baselines in
terms of complexity (number of training/aggregation rounds/episodes),
accuracy, and communication cost (in megabytes, MB). Although Regular FL
achieves the best accuracy, its communication cost is the highest due to
frequent model updates between the server and clients. Through partial-FL
aggregation, FedPer reduces the communication cost by only 0.5% compared to
Regular FL, while the accuracy drops from 91.07% to 88.78%, which is not
consistent with the performance reported in [15]. This outcome might result
from the dataset type and the changes applied in the preprocessing step. Both
Regular FL and FedPer run for $350\times 8=2800$ training episodes. In
contrast, Individual Training requires no communication; however, it reaches
a low accuracy of 84.86% due to the limited scale of the local datasets. In
terms of complexity, it runs the lowest number of episodes, equal to 350.
Finally, the proposed CEFL cuts the communication cost down from 79730 MB to
1231 MB compared to Regular FL, a 98.45% cost saving. This significant
reduction comes at the price of a slightly lower accuracy of 88.2%, which is
comparable to the FedPer performance but at a fraction of the communication
cost. Also, CEFL runs for $100\times 8+350=1150$ episodes, about 60% fewer
than Regular FL and FedPer.
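The episode and cost figures quoted above follow from simple arithmetic; a sketch, with the raw values taken from Table I:

```python
rounds, local_eps, cefl_rounds, transfer_eps = 350, 8, 100, 350

regular_episodes = rounds * local_eps                    # 350 * 8 = 2800 (also FedPer)
cefl_episodes = cefl_rounds * local_eps + transfer_eps   # 100 * 8 + 350 = 1150

cost_regular_mb, cost_cefl_mb = 79730, 1231              # from Table I
saving = 1 - cost_cefl_mb / cost_regular_mb              # ~0.9846, quoted as 98.45%
episode_cut = 1 - cefl_episodes / regular_episodes       # ~0.59, quoted as "about 60%"
```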
TABLE I: Comparison of different training models

Method | Aggregation rounds | Local episodes per aggregation | Local episodes (b) | Accuracy (%) | Communication Cost (MB)
---|---|---|---|---|---
Regular FL | 350 | 8 | – | 91.07 | 79730
FedPer | 350 | 8 | – | 88.78 | 79357
Individual Training | – (a) | – | 350 | 84.86 | 0
CEFL | 100 | 8 | 350 | 88.20 | 1231

(a) – means not applicable. (b) Local training occurs outside the FL process.
The convergence behaviour of the aforementioned methods is depicted in Fig. 4.
Regular FL converges the fastest due to the continuous participation of all
clients in the training process. Our method CEFL also converges quickly thanks
to the transfer learning between leaders and cluster members. In contrast,
FedPer converges slowly since there is no information sharing for the
personalized layers. Finally, Individual Training converges the slowest due
to the independent operation of the clients. The shaded areas around the
curves indicate the standard deviation of the accuracy. As can be seen, CEFL
and Regular FL demonstrate the most stable behaviour, while the remaining two
methods show a higher deviation, indicating a less stable convergence trend.
Figure 4: Accuracy vs. nbr. of rounds (different training methods).
To study the impact of dataset heterogeneity, we evaluate in Fig. 5 the
accuracy of 3 clients with different dataset features. Specifically, we
select clients 4, 31, and 50, characterized as follows: Client 4 owns 831
training samples that are representative of all 8 activity classes; Client 31
has only 101 training samples, drawn from the four fall activity types only;
and Client 50 has 570 training samples, with a predominance of 431 samples
from a single activity class (daily activity). From Fig. 5, we notice that
the accuracy of Client 4 is the highest. This is expected, since it has the
largest and most representative data distribution among the targeted classes.
In contrast, the accuracy of Client 31 is the worst due to its small and
imbalanced dataset. Although Client 50 has a relatively large dataset, its
imbalance significantly impacts the accuracy. Nevertheless, the accuracies of
the different methods are almost the same for Client 50, while a significant
gap is present for Client 31. Consequently, we conclude that our method
performs similarly to Regular FL when the client's dataset is small or highly
imbalanced, while for clients with large or relatively balanced datasets, a
slight performance gap can be noticed, as in Fig. 3.
(a) Client 4
(b) Client 31
(c) Client 50
Figure 5: Accuracy convergence for three clients with different dataset
distributions.
## VI Conclusion
This paper proposes CEFL for health monitoring. CEFL consists of two steps: FL
among cluster leaders and transfer learning from leaders to cluster members.
CEFL reduces communication costs thanks to its small-scale FL among only the
selected leaders, chosen through graph clustering based on the clients'
mutual similarity. By inheriting the trained model of their leader, cluster
members rapidly achieve an acceptable accuracy on their own datasets.
Compared to the baselines, CEFL strikes a good balance between communication
and accuracy. Specifically, the communication cost can be reduced by up to
98.45% at the expense of less than 3% accuracy degradation compared to the
best baseline. Moreover, for clients with small or highly imbalanced
datasets, CEFL yields as high an accuracy as the best baseline at a fraction
of the communication cost.
## Acknowledgment
This research is supported by the Natural Sciences and Engineering Research
Council of Canada, Huawei Canada, and a MITACS Globalink scholarship.
## References
* [1] S. Selvaraj and S. Sundaravaradhan, “Challenges and opportunities in IoT healthcare systems: a systematic review,” _SN Appl. Sci._ , vol. 2, no. 1, pp. 1–8, Jan. 2020.
* [2] M. S. Rahman, N. C. Peeri, N. Shrestha, R. Zaki, U. Haque, and S. H. Ab Hamid, “Defending against the novel coronavirus (COVID-19) outbreak: How can the internet of things (IoT) help to save the world?” _Health Policy Technol._ , vol. 9, no. 2, p. 136, Jun. 2020.
* [3] P. Verma, S. K. Sood, and S. Kalra, “Cloud-centric IoT based student healthcare monitoring framework,” _J. Ambient Intelli. Humanized Comput._ , vol. 9, no. 5, pp. 1293–1309, Jan. 2018.
* [4] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” _IEEE Sig. Process. Mag._ , vol. 37, no. 3, pp. 50–60, May 2020.
* [5] D. Liu, T. Miller, R. Sayeed, and K. D. Mandl, “FADL: Federated-autonomous deep learning for distributed electronic health record,” _arXiv preprint arXiv:1811.11400_ , 2018.
* [6] M. Duan, D. Liu, X. Chen, Y. Tan, J. Ren, L. Qiao, and L. Liang, “Astraea: Self-balancing federated learning for improving classification accuracy of mobile deep learning applications,” in _Proc. IEEE Int. Conf. Comp. Design (ICCD)_ , 2019, pp. 246–254.
* [7] Q. Wu, X. Chen, Z. Zhou, and J. Zhang, “FedHome: Cloud-edge based personalized federated learning for in-home health monitoring,” _IEEE Trans. Mob. Comput. (Early Access)_ , pp. 1–1, 2020.
* [8] K. Sozinov, V. Vlassov, and S. Girdzijauskas, “Human activity recognition using federated learning,” in _Proc. IEEE Int. Conf. Parallel & Dist. Process. Apps. (ISPA)_, 2018, pp. 1103–1111.
* [9] C. Fang, Y. Guo, N. Wang, and A. Ju, “Highly efficient federated learning with strong privacy preservation in cloud computing,” _Computers & Security_, vol. 96, p. 101889, Sep. 2020.
* [10] J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated learning: Strategies for improving communication efficiency,” _arXiv preprint arXiv:1610.05492_ , 2016.
* [11] F. Sattler, S. Wiedemann, K.-R. Müller, and W. Samek, “Robust and communication-efficient federated learning from non-IID data,” _IEEE Trans. Neural Nets. Learn. Syst._ , vol. 31, no. 9, pp. 3400–3413, Sep. 2019.
* [12] A. Reisizadeh, A. Mokhtari, H. Hassani, A. Jadbabaie, and R. Pedarsani, “FedPAQ: A communication-efficient federated learning method with periodic averaging and quantization,” _arXiv preprint arXiv:1909.13014_ , 2020.
* [13] L. Wang, W. Wang, and B. Li, “CMFL: Mitigating communication overhead for federated learning,” in _Proc. IEEE Int. Conf. Dist. Comput. Syst. (ICDCS)_ , 2019, pp. 954–964.
* [14] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre, “Fast unfolding of communities in large networks,” _J. Stat. Mechanics: Theory and Experiment_ , vol. 2008, no. 10, p. P10008, Oct. 2008.
* [15] M. G. Arivazhagan, V. Aggarwal, A. K. Singh, and S. Choudhary, “Federated learning with personalization layers,” _arXiv preprint arXiv:1912.00818_ , 2019.
* [16] G. Vavoulas, C. Chatzaki, T. Malliotakis, M. Pediaditis, and M. Tsiknakis, “The mobiact dataset: Recognition of activities of daily living using smartphones,” in _Proc. Int. Conf. Info. Commun. Technol. Ageing Well and e-Health_ , vol. 2, 2016, pp. 143–151.
* [17] J. He, Z. Zhang, X. Wang, and S. Yang, “A low power fall sensing technology based on FD-CNN,” _IEEE Sensors J._ , vol. 19, no. 13, pp. 5110–5118, Jul. 2019.
# The evaporation of concentrated polymer solutions is insensitive to relative
humidity
Max Huisman SUPA and School of Physics and Astronomy, The University of
Edinburgh, Peter Guthrie Tait Road, Edinburgh EH9 3FD, United Kingdom Paul
Digard Department of Infection and Immunity, The Roslin Institute, The
University of Edinburgh, Easter Bush Campus, Edinburgh EH25 9RG, United
Kingdom Wilson C. K. Poon SUPA and School of Physics and Astronomy, The
University of Edinburgh, Peter Guthrie Tait Road, Edinburgh EH9 3FD, United
Kingdom Simon Titmuss SUPA and School of Physics and Astronomy, The
University of Edinburgh, Peter Guthrie Tait Road, Edinburgh EH9 3FD, United
Kingdom
###### Abstract
A recent theory suggests that the evaporation kinetics of macromolecular
solutions is insensitive to the ambient relative humidity (RH) due to the
formation of a ‘polarisation layer’ of solutes at the air-solution interface.
We confirm this insensitivity up to RH $\approx 80\%$ in the evaporation of
polyvinyl alcohol solutions from open-ended capillaries. To explain the
observed drop in evaporation rate at higher RH, we need to invoke compressive
stresses due to interfacial polymer gelation. Moreover, RH-insensitive
evaporation sets in earlier than theory predicts, suggesting a further role
for a gelled ‘skin’. We discuss the relevance of these observations for
respiratory virus transmission via aerosols.
††preprint: APS/123-QED
‘Hindered evaporation’ from a water-air interface through some barrier is
ubiquitous in applications. For example, as paint or ink dries, a ‘skin’ may
form rapidly at the interface [1]. This affects the ‘open time’ during which a
second coat may be applied without imperfections of the first coat showing
through [2]. The skin may buckle [3] or give rise to bubbles [4] and diminish
coating quality. Such applications have motivated a number of fundamental
studies [5, 6, 7, 4, 8].
Hindered evaporation is also important for understanding biological response
to environmental relative humidity, RH = $a_{e}\times 100\%$, where $a_{e}$ is
the water activity in the air. The evaporation rate of water through the inert
wax cuticle of a leaf is proportional to $(1-a_{e})$, but is humidity
independent in human skin for RH $\lesssim 85\%$ [9]. It is suggested that the
phase behavior of the lipid mixture in the stratum corneum, the outer layer of
skin, turns it into an active barrier [10] that responds to changes in $a_{e}$
to maintain ‘evaporative homeostasis’. The observation of liquid crystallinity
at the water-air interface of lipid solutions displaying RH-independent
evaporation rate up to RH $\approx 80\%$ [9] supports this picture.
A recent theory [11] suggests that RH-independent evaporation does not require
such ‘active’ response, but occurs whenever a concentrated ‘polarisation
layer’ of solutes builds up at the water-air interface due to the advective
flux towards the interface driven by evaporation balanced by an opposite
diffusive flux. The evaporation rate is primarily controlled by
$a(\varphi_{i})$, the water activity at the interface with solute volume
fraction $\varphi_{i}$. If $a(\varphi)$ drops sharply enough at high
$\varphi$, the evaporation rate becomes RH insensitive. This effect is
enhanced if the mutual diffusivity of the solution $D(\varphi)$ has a similar
dependence on $\varphi$. Since such $a(\varphi)$ and $D(\varphi)$ occur widely
in macromolecular solutions, RH-insensitive evaporation should be generic even
without ‘active’ polarisation layers. Using measured $a(\varphi)$ and
$D(\varphi)$, Salmon et al. predict RH-independent evaporation of polyvinyl
alcohol (PVA) solutions up to near saturation (RH $\to 100\%$).
There are multiple motivations for verifying this theory. Fundamentally, it is
important to know whether there is indeed a generic mechanism for RH-
insensitive evaporation, and, if so, what active response [10] may add.
Applications to coatings will also benefit from a verified predictive theory
for the effect of RH.
More topically, the respiratory droplets that transmit SARS-CoV2 and other
pathogens contain a mixture of salt, lipids and glycoproteins (mucins), the
latter at high concentrations deep inside the lungs [12]. Their evaporative
kinetics controls air-borne transmission [13, 14, 15, 16, 17]. Previous
empirical studies suggest an intriguing non-monotonic dependence of viral
viability on RH [18]. A meta-analysis correlates SARS-CoV2 infectivity with RH
but provides no clear picture [19]. As evaporative kinetics probably plays a
role in explaining seasonal variability of respiratory virus transmission [20,
21], recent studies have considered macromolecular effects on the RH
dependence of droplet drying and virus viability [22, 23, 24].
Figure 1: (a) Experimental schematic showing one of the five 50 mm
capillaries emerging from a 30 mm section of 20 mm diameter perspex tubing
glued to a microscope slide. The 5 capillaries increase the evaporation rate,
thus allowing faster experiments. (b) Mass loss, $m(t)$, of pure water (H2O,
open diamonds) and H2O + PVA at $\varphi=0.008$ (filled circles) solutions at
RH $=50\%$ and $T=291$ K. Inset: time exponents $\beta$ of a power-law fit
to the late-time $m(t)$ for PVA solutions at different RH. (c) PVA $m(t)$ at
different RH plotted against time $t$. Inset: PVA $m(t)$ versus $t^{1/2}$.
Dashed lines are linear fits at long times, after a transient period
(indicated by the dotted line) where $t>300~{}$min or $t^{1/2}>15~{}$min1/2.
(d) $\blacktriangle$: the mass loss rate of pure water,
$\alpha_{0}=\dot{m}(t)$, normalised to the value at RH = 25%; $\triangle$:
the same quantity for 0.9% (w/w) NaCl solutions; $\bullet$: the slope of the
linear portions in the inset of part (c) for $\varphi=0.008$ PVA solutions,
${\rm d}m(t)/{\rm d}t^{1/2}=(2\sqrt{D_{0}}\rho A)\alpha$, normalised to the
value at RH = 25%.
Motivated by these reasons, we have tested experimentally the theory of Salmon
et al. for RH-insensitive hindered evaporation [11]. One of the geometries
they treated, unidirectional drying from one end of a capillary whose other
end is connected to a constant solute concentration reservoir, was previously
used to study the drying of lipidic [9, 25, 26] and saliva-containing
solutions [23]. They made testable predictions for PVA solutions up to a
single numerical prefactor.
Based on a previous set-up [9, 25, 27], Fig. 1(a), we connected 5 rectangular
borosilicate capillaries (0.20 mm $\times$ 4 mm, VitroTubes) to a liquid
reservoir. To ensure evaporation only at the open end of each capillary, we
covered the liquid in the reservoir with a thin layer of 1-octadecene (Sigma
Aldrich). The set-up rested on a Sartorius Secura 224-1S high-precision
balance to quantify the evaporative mass loss. A sealed enclosure over the
sample, connected to a Cellkraft P-10A humidifier, controlled the RH and
temperature $T$, with set-points confirmed using external probes. This
obviates the need for blowing constant-RH air at the interface [9], which may
perturb drying. The humidifier airflow was tuned to minimize disturbance of
the mass measurements.
A drop shape analyser (Krüss EasyDrop DSA20E) observes the side view at the
open end of the capillary. We adjusted the reservoir level to obtain an
initially flat water-air interface. The degree of flatness did not visibly
change over an experiment, thus minimising curvature effects [28]. The mass
loss rate of water should then follow
$\dot{m}(t)=kAc_{\rm{sat}}(1-a_{\rm{e}})\stackrel{\rm def}{=}\alpha_{0},$ (1)
with $A$ the capillary's cross section, and $k$ and $c_{\rm{sat}}$ the mass
transfer coefficient and the saturation concentration of water in air [29].
Indeed, $m(t)$ is linear at RH = 50%, Fig. 1(b), and $\alpha_{0}$ is strictly
proportional to $(1-a_{\rm e})$ [30].
At early times, the evaporation of a polymer solution with volume fraction
$\varphi\ll 1$ should follow Eq. 1. Once a significant polarisation layer
forms, the reduced interfacial water activity, $a(\varphi_{i})$, controls the
evaporation rate, which now reads
$\dot{m}(t)=kAc_{\rm{sat}}(a(\varphi_{i})-a_{\rm{e}})$. As the polarisation
layer grows, $\varphi_{i}$ increases asymptotically towards $\varphi^{*}$
given by $a(\varphi^{*})=a_{\rm e}$, and the evaporation rate becomes sub-
linear as $a(\varphi_{i})\to a(\varphi^{*})$. Solving the case for constant
$D(\varphi)=D_{0}$, the single-coil diffusivity, to highlight the role of
water activity, Salmon et al. predict
$\dot{m}(t)=f\rho
A\sqrt{\frac{\varphi^{*}D_{0}}{2\varphi_{0}t}}\stackrel{\rm
def}{=}\alpha\rho A\sqrt{\frac{D_{0}}{t}},$ (2)
where $f$ is a numerical constant of order unity, $\rho\approx
998~\mathrm{kg\,m^{-3}}$ is the mass density of water and $\varphi_{0}$ is
the reservoir concentration. At 298 K, 99% hydrolyzed PVA with molecular
weight $M_{\rm{W}}=$ 85k-124k has a hydrodynamic radius
$R_{\rm{H}}=10.5\pm 0.5$ nm [31]. For our PVA with about 50%
higher $M_{\rm w}$, $R_{\rm{H}}\approx\sqrt{1.5}\times 10.5~{}\rm{nm}\approx
13~{}\rm{nm}$, from which we estimate $D_{0}\approx
16~\mathrm{\mu m^{2}\,s^{-1}}$.
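The $D_{0}$ estimate above can be reproduced by assuming the Stokes-Einstein relation for the single coil; a sketch, where the water viscosity value at the experimental temperature is our assumption (the text does not quote one):

```python
import math

kB = 1.380649e-23    # Boltzmann constant, J/K
T = 291.0            # K, experimental temperature
eta = 1.08e-3        # Pa s, viscosity of water at 291 K (assumed)
R_H = 13e-9          # m, scaled hydrodynamic radius from the text

# Stokes-Einstein single-coil diffusivity
D0 = kB * T / (6 * math.pi * eta * R_H)   # m^2/s
D0_um2_per_s = D0 * 1e12                  # ~15-16 um^2/s
```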
Equation 2 also defines a dimensionless $\alpha\propto{\rm d}m/{\rm d}t^{1/2}$
(cf. Fig. 1(c) inset) to facilitate comparison with data,
$\alpha=(2\rho A\sqrt{D_{0}})^{-1}\frac{{\rm d}m}{{\rm
d}t^{1/2}}=f\sqrt{\frac{\varphi^{*}}{2\varphi_{0}}}.$ (3)
We dissolved PVA ($M_{\rm w}=$ 146k-186k, 99% hydrolyzed, Sigma Aldrich) in
milli-Q water. Figure 1(b) shows $m(t)$ for $\varphi=0.008$ ($=1\%$ (w/w),
using a mass density of PVA, $\rho_{\mathrm{PVA}}=1250\,\mathrm{kg\,m^{-3}}$,
the middle of the 1190-1310 $\mathrm{kg\,m^{-3}}$ range given at
https://pubchem.ncbi.nlm.nih.gov/compound/11199) at $T=291$ K and RH = 50%.
An initially linear regime of constant evaporation rate becomes sub-linear as
time progresses. For each RH studied, Fig. 1(c), the sub-linear ‘falling
rate’ regime is consistent with a $t^{1/2}$ scaling (inset) [11].
Figure 2: Comparison of the measured evaporation prefactor $\alpha$ as a
function of RH ( $\bullet$, with error bars giving the standard deviation)
with theoretical predictions [11] for constant polymer diffusivity and a water
activity $a(\varphi)$ without elasticity, Eq. 4 (dashed line), and with
elasticity, Eq. 6 (full line).
To highlight the near-independence on RH up to 80% and contrast this with the
behavior of pure water, Fig. 1(d) plots $\alpha_{0}/\alpha_{0}^{(25)}$ and
$\alpha/\alpha^{(25)}$ against RH, where the denominators are the respective
values at RH = 25%. (While $\alpha_{0}=\dot{m}(t)$ is the evaporation rate
for H2O, it is incorrect to call $\alpha\propto{\rm d}m/{\rm d}t^{1/2}$ an
‘evaporation rate’ for PVA (or lipid [9]) solutions: see Eq. 2.) This
difference is not seen for 0.9% (w/w) NaCl solutions, Fig. 1(d)
[$\triangle$]: the macromolecular nature of PVA is key. Moreover, no ordering
is needed: crossed-polar observation of the open end of capillaries with
evaporating PVA solution showed no liquid crystallinity.
To test the theory further, consider the absolute value of the evaporative
prefactor, $\alpha$, in the falling rate regime. Experimental values
calculated using measured ${\rm d}m/{\rm d}t^{1/2}$ and known $(\rho,A,D_{0})$
as input, Eq. 3, are shown in Fig. 2 ( $\bullet$). To compare with theory, we
solved $a(\varphi^{*})=a_{e}$ to give $\varphi^{*}=a^{-1}(a_{e})$ using a
parameterisation of the experimentally measured $a(\varphi)$ [34, 11]
$a(\varphi)=(1-\varphi)\exp{[\varphi+\chi\varphi^{2}]},$ (4)
with the Flory-Huggins parameter
$\chi(\varphi)=3.94-3.42(1-\varphi)^{0.09}.$ (5)
This ‘cliff-like’ $a(\varphi)$, Eq. 4, means $\varphi^{*}=a^{-1}(a_{e})$
varies little for $0<\rm RH\lesssim 95\%$, engendering even lower RH-
dependence for $\alpha\propto\sqrt{\varphi^{*}}$, Eq. 3. Our predicted
$\varphi^{*}$ and a prefactor of $f=1.12$, Eq. 3, account quantitatively for
the observed $\alpha(\rm RH)$ up to RH $\approx 80\%$, Fig. 2 (dashed line);
discrepancies at higher RH require explanation.
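The inversion $\varphi^{*}=a^{-1}(a_{e})$ behind the dashed line can be sketched numerically; a minimal bisection sketch of Eqs. 3-5, where the root bracket is our choice and only $f=1.12$ and $\varphi_{0}=0.008$ are values from the text:

```python
import math

def chi(phi):
    # Eq. 5: Flory-Huggins parameter for PVA-water
    return 3.94 - 3.42 * (1 - phi) ** 0.09

def activity(phi):
    # Eq. 4: water activity without elasticity
    return (1 - phi) * math.exp(phi + chi(phi) * phi ** 2)

def phi_star(a_e, lo=0.5, hi=0.9999, tol=1e-10):
    """Solve a(phi*) = a_e by bisection; a(phi) decreases on this bracket."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if activity(mid) > a_e else (lo, mid)
    return 0.5 * (lo + hi)

def alpha(a_e, f=1.12, phi0=0.008):
    # Eq. 3: dimensionless evaporative prefactor
    return f * math.sqrt(phi_star(a_e) / (2 * phi0))
```

At RH = 50% ($a_{e}=0.5$) this gives $\varphi^{*}\approx 0.93$, and $\varphi^{*}$ changes by only a few percent between RH = 25% and 80%, which is the ‘cliff-like’ insensitivity visible in Fig. 2.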
Figure 3: Bright-field image of the polarization layer at the water-air
interface at the end ($t\sim 10^{3}~{}$min) of a typical experiment at
RH$=50\%$.
Bright-field imaging of the polarisation layer, Fig. 3, shows late-stage
delamination consistent with quasi-periodic buckling, allowing air ingress to
give bright-field contrast. Such buckling is typical of a stiff film atop a
compliant substrate when the film is under considerable compressive stress
[35]. In our case, we have a stiff gel skin [36, 37, 1] covering a more
compliant, viscoelastic polarisation layer. The resulting stress contributes
to the osmotic pressure [38] and modifies Eq. 4 to
$a(\varphi)=(1-\varphi)\exp{\left[\varphi+\chi\varphi^{2}-\frac{K_{\rm{g}}\nu_{1}}{k_{\rm{B}}T}\ln{\left(\frac{\varphi}{\varphi_{g}}\right)}\right]},$
(6)
with $\nu_{1}$ the molecular volume of water and $K_{\rm g}$ the osmotic
modulus of the gelled skin [30]. Equation 6 fits our data well up to RH
$\approx 90\%$, Fig. 2 (solid line), with $K_{\rm{g}}=10\pm 1$ MPa and
$\varphi_{g}=0.24\pm 0.02$ as fit parameters.
As already noted, the apparently linear late-time ($t\gtrsim 300$ min) data
in Fig. 1(c) [inset] seem consistent with $m(t)\sim t^{0.5}$. Fitting these
data to $m(t)\sim t^{\beta}$ gives $0.5<\beta<0.6$, with no systematic
dependence on RH, Fig. 1(b) [inset]: evaporation is always somewhat faster
than theory [11] predicts, probably due to the parallel pathways provided by
air ingress following the buckling delamination of the polymer skin from the
glass wall, which gives a higher permeability than an intact polarisation
layer.
Figure 4: The dependence of the evaporation rate on time, plotted on log-log
scales. Dotted lines represent the dilute evaporation regime, Eq. 1, at early
times and the diffusion-limited evaporation regime, Eq. 2 (with $f=1$), at
long times; the transition point is taken as their intersection.
For a final test of theory, we show $\dot{m}(t)$ in Fig. 4. Salmon et al.
[11] predict that, before a polarisation layer is established, $\dot{m}(t)$
should be constant, $\sim t^{0}$, and essentially that of pure water,
decreasing linearly with RH, Eq. 1. Including the nearly-universal $t^{-1/2}$
scaling, Eq. 2 with $f=1$, on the same plot defines a critical time, $t_{c}$,
for the transition between these two regimes. Measured $\dot{m}(t)$ (full
lines) and theory (dashed lines) disagree on two counts. First, rather than
decreasing systematically with RH, the constant $\dot{m}$ at short times
appears to vary randomly with RH. This scatter is comparable to the
reproducibility between runs at a single RH, suggesting that $\dot{m}$ may be
RH independent at early times. Secondly, the observed cross-over from $t^{0}$
to $t^{-1/2}$ scaling occurs at a time that is randomly scattered about
$t\lesssim 10^{4}$ s, while the theoretical ‘knee’ systematically shifts to
longer times as RH increases.
These discrepancies motivate a search for extra physics beyond Salmon et al.
[11], which may again relate to a gelled polymer skin at the air-solution
interface. Its formation, in $\lesssim 1$ s in our case [30], gives rise to
an interfacial concentration that remains close to $\varphi_{g}$ for some
time [37, 36, 39]. In this ‘skin-limited’ period, the water flux into the
skin from the solution is set by the pressure gradient at the skin [40], and
therefore by $\varphi=\varphi_{g}$, rather than by
$\varphi^{*}=a^{-1}(a_{e})$ [11]. This predicts an RH-independent, constant
evaporation rate at early times, consistent with the observed random scatter
of $\dot{m}$ values.
The rate at which the interface concentration increases from $\varphi_{g}$ to
$\varphi^{*}$ is set by an RH-independent skin-limited evaporation rate;
moreover, we have seen that $\varphi^{*}$ is RH insensitive. So, the duration
of the skin-limited period, $t_{c}$, is also insensitive to RH, as observed.
In summary, a PVA-water mixture is found to evaporate almost independently of
the ambient RH during both the early constant-rate and the late falling-rate
regimes. The latter agrees with a theoretical model [11] predicated on the
advection-driven formation of a polymer polarisation layer at the air-water
interface. We find evidence that interfacial gelation of the PVA contributes
to the early-stage RH insensitivity. These findings may prove significant for
the evaporation of pathogen-containing respiratory droplets, which include
surfactants and gel-forming macromolecules that have been shown to
encapsulate dried droplets [41, 18]. Our tractable experimental system
provides a baseline for investigating the role of these more complex solutes
in viral transmission via respiratory droplets [42].
We thank Paul Harris and Andrew Schofield for technical assistance and Jean-
Baptiste Salmon, Jay Marsden, Veronica McKinny, Davide Marenduzzo and Patrick
Warren for insightful discussions. Aidan Brown lent us a humidifier purchased
under EPSRC EP/S001255/1. MH was funded by the University of Edinburgh.
For the purpose of open access, the author has applied a Creative Commons
Attribution (CC BY) licence to any Author Accepted Manuscript version arising
from this submission.
## References
* Tirumkudulu and Punati [2022] M. S. Tirumkudulu and V. S. Punati, Langmuir 38, 2409 (2022).
* Holl _et al._ [2001] Y. Holl, J. L. Keddie, P. J. McDonald, and W. A. Winnik, Drying modes of polymer colloids, in _Film Formation in Coatings_ (American Chemical Society, 2001) Chap. 1, pp. 2–26.
* Pauchard and Allain [2003] L. Pauchard and C. Allain, Europhys. Lett. (EPL) 62, 897 (2003).
* Arai and Doi [2012] S. Arai and M. Doi, Eur. Phys. J. E 35, 57 (2012).
* Bornside _et al._ [1989] D. E. Bornside, C. W. Macosko, and L. E. Scriven, J. Appl. Phys. 66, 5185 (1989).
* de Gennes [2002] P. G. de Gennes, Eur. Phys. J. E 7, 31 (2002).
* Okuzono _et al._ [2006a] T. Okuzono, K. Ozawa, and M. Doi, Phys. Rev. Lett. 97, 136103 (2006a).
# Physical effects of gravitational waves: pedagogical examples
Hayato Motohashi Division of Liberal Arts, Kogakuin University, 2665-1
Nakano-machi, Hachioji, Tokyo 192-0015, Japan Teruaki Suyama Department of
Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo
152-8551, Japan
###### Abstract
General relativity describes gravitation in terms of the geometry of
spacetime. It predicts the existence of gravitational waves (GWs) that stretch
and compress spacetime and were detected recently by state-of-the-art
interferometer observations. Yet, for those who are not familiar with general
relativity, it may be difficult to understand how the GWs actually stretch and
compress spacetime. In this paper, after reviewing the fact that the effects
of GWs can be understood as a force in Newtonian mechanics, we consider
extreme and readily perceivable situations where large-amplitude GWs pass
through the human body and the Earth, and demonstrate that GWs cause phenomena
that commonly occur in daily life by back-of-the-envelope calculations. Our
analysis provides intellectual and pedagogical materials for understanding the
nature of GWs for nonexperts in general relativity.
## I Introduction
On February 11, 2016, the LIGO Scientific Collaboration and Virgo
Collaboration announced the first direct detection of gravitational waves
(GWs) generated by the coalescence of binary black holes (BHs) Abbott:2016blz
; LIGOpress . It was a historical milestone for mankind to finally prove the
prediction of Einstein’s general relativity proposed about 100 years ago. GWs
are ripples in spacetime. Now, it is almost common knowledge not only among
scientists but also the general public that spacetime is stretched and
compressed upon the passage of GWs. Many people are interested in GWs and
general relativity; hence, it is time to think about how to cultivate an
intuitive understanding of them Schutz:2021xns . For a good review on the history of
GWs, see Cervantes-Cota:2016zjc .
Compared with, for example, electromagnetic waves, there are several obstacles
to people being aware of the effects of GWs in their daily lives. First of
all, it is difficult to imagine what happens when GWs stretch and compress
spacetime. In particular, a common misconception is as follows: “Rulers should
also be stretched and compressed similarly to space. Thus, are GWs not
physically observable inherently?” Actually this is not the case, and GWs do
lead to physically observable effects. In principle, one can notice the
passage of GWs if their amplitude is sufficiently large to cause visible
distortions (see Fig. 1). Although this conceptual issue has already been
addressed in depth in the literature (e.g., see Misner:1973prb ; Creighton ;
Maggiore:2007ulw ; Saulson:1997ck ), in the next section, we briefly provide
the physical explanation of the reason why GWs can yield observable effects,
which is basically a restatement of what is already known. Nevertheless, we
believe that it helps readers confronted with this conceptual question
recognize the reality of GWs and easily understand the subsequent sections
where we study how the GWs with extreme amplitudes cause visible effects on
daily phenomena.
The second reason why people do not commonly think about GWs as much as they
do about electromagnetic waves is their extremely small amplitude. For
instance, in the case of GWs emitted from a typical binary BH merger at a
cosmological distance, the change in distance between the Earth and the Sun is
merely about one-tenth of the Bohr radius! It is this smallness of GW
amplitude that had prevented direct detection for about 100 years and rendered
GWs irrelevant to daily physical phenomena.
Figure 1: Artistic image of extremely exaggerated physical effects of GWs.
Owing to the passage of GWs, the Earth is stretched and compressed in a
direction perpendicular to the propagation of GWs. In addition to this effect,
two effects on the light arriving at the observer, i.e., color change of the
light due to the red-/blue-shift and the change in light trajectories, also
distort the appearance of the Earth.
Although these peculiar circumstances have led to the general public being
less aware of GWs, given their scientific significance, it is still
regrettable if only the experts of general relativity can appreciate their
physical properties (there are several educational papers that aim to
spread the basic concepts and recent experimental achievements of general
relativity and GWs Schutz:1984nf ; Farr:2011cw ; Burko:2016vnu ;
Mathur:2016cox ; Hilborn:2017liy ). Fortunately, the physical effects of GWs
can be understood through the familiar Newtonian mechanics Misner:1973prb ;
Maggiore:2007ulw . The aim of this paper is to provide intellectual and
pedagogical materials for understanding the nature of GWs within the framework
of Newtonian mechanics and at the level of the back-of-the-envelope
calculation, by which students who have learned Newtonian mechanics but are
not familiar with relativity can obtain a clear view of the physical effects
of GWs. To this end, we consider extreme and readily perceivable situations.
As is often the case when learning a particular subject in physics, ideal
examples enable us to easily understand the essence of the physical effects of
GWs without being concerned with non-GW effects that are unnecessary for the
current purpose but may become dominant in more realistic situations. We
perform a quantitative analysis of several parameter regions in the
amplitude–frequency space in which GWs would visibly affect us. Additionally,
considering a BH binary system as the source of GWs, we shall estimate the
corresponding BH mass and distance from the Earth.
We stress that our results should not be regarded as a warning of the risk of
damage to human beings posed by GWs in actual daily life. As we shall see
below, GWs may readily and perceivably exert their effects only at a
sufficiently close distance from massive BHs.
## II Physical effects of GWs: general arguments
Before presenting our clear examples of the physical effects of GWs, in this
section we provide a general argument on how we can, in principle, detect GWs
by addressing the following conceptual question that people who are not
familiar with general relativity might ask: “If the passage of GWs causes
nothing but the stretching and the compression of space, do they also stretch
and compress a ruler for measuring the change in the size of the body
simultaneously, making the effects of GWs unobservable?” Although the answer
to this question is already provided in several reports (e.g., Misner:1973prb
; Creighton ; Maggiore:2007ulw ; Saulson:1997ck ), we consider it useful not
to skip this issue but to recapitulate the essential point of the answer here
in order for readers to understand the background and hence the subsequent
results of our analyses.
In a nutshell, an essential point in any experiment to detect the physical
effects of GWs is the existence of an “absolute ruler”. Here, the absolute
ruler refers to the ruler (not any ordinary ruler) that does not change upon
the passage of GWs. The effects of GWs can then be perceived as the change of
the system measured using the absolute ruler.
The absolute ruler is a set of fundamental physical constants that are
literally absolute and do not change in the presence of GWs. The relevant
physical constants depend on the types of system and measurement considered.
Here, we focus on the following two cases of physical phenomena caused by GWs
and clarify how GWs induce physical effects that can be detected.
1. Variation of light travel time
2. Variation of the size of a deformable or semirigid body with time
The relevant physical constants in each phenomenon are the speed of light $c$
(case 1) and those used to determine the electromagnetic force at the
microscopic level, such as the electric charge (case 2).
Case 1 is actually implemented in current laser interferometers (such as
LIGO). In laser interferometers, the distance $L$ between the beam splitter
and the mirror changes upon the passage of GWs, and consequently the travel
time of the laser light over that distance, which the detector actually
measures, also changes. Although the propagation speed of the laser light is
not affected by GWs, the wavelength $\lambda$ is stretched and compressed by
them; hence, the laser light is red-/blue-shifted. Thus, the ratio $L/\lambda$
remains constant. Nevertheless, this does not mean that the laser
interferometer cannot detect GW signals since it can detect the change in
light travel time Saulson:1997ck ; Farr:2011cw .
In case 2, the physical phenomena induced by the distortion of a deformable
body or vibrations of a semirigid body are measured. To understand this
qualitatively, let us model the material of a body as a collection of point
masses connected by springs. This model provides a convenient alternative to
the electromagnetic force acting between atoms. The properties of the springs,
such as their natural length and restoring force, are determined by physical
constants, such as the electric charge and electron mass, and are not affected
by GWs. When GWs pass through, the GWs change the length of the springs from
their natural length. For the atoms and springs, this action is perceived as a
physical force because natural length is absolute. As a result, the atoms
oscillate around the equilibrium position, which, at the macroscopic level,
can be seen as vibrations of the body.
For case 2, we can derive the effective force induced by GWs without resorting
to general relativity. Consider a system consisting of two point masses
floating in space, each with mass $m$ and separated by a distance $L$. When a
gravitational wave with amplitude $h(t)$ passes through the system, the
distance between the masses (as measured by an absolute ruler) becomes
$\sqrt{1+h(t)}L$ according to general relativity. If we assume that $h(t)\ll
1$, which is valid in most cases explored in this paper, we can use the
binomial approximation $(1+\epsilon)^{1/2}\approx 1+\frac{1}{2}\epsilon$ to
say that the distance is approximately $[1+\frac{1}{2}h(t)]L$, meaning that an
observer with an absolute ruler would say that each mass is oscillating toward
or away from the system’s center of mass with an acceleration of
$a=\frac{1}{2}\ddot{h}\left(\frac{1}{2}L\right)=\frac{1}{4}\ddot{h}L$ (since each mass is $\frac{1}{2}L$ from
the center). However we might interpret this in
general relativity, from a Newtonian perspective, this acceleration must be
caused by a force, meaning that each mass behaves as if it were experiencing a
force of magnitude
$F=\frac{1}{4}m{\ddot{h}}(t)L$ (1)
toward or away from the system’s center of mass. If the two masses
happened to be connected by a spring, this effective force would measurably
compress or expand the spring: indeed, the spring would behave as if it were
being compressed by a total force of $F=\frac{1}{2}m{\ddot{h}}(t)L$.
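The effective force of Eq. (1) is easy to evaluate numerically. The sketch below (the masses, separation, and the strain $10^{-21}$ are illustrative values assumed here, not taken from the text) computes the peak force for the monochromatic wave $h(t)=h_{0}\cos(2\pi f_{\rm gw}t)$, for which $|\ddot{h}|_{\max}=(2\pi f_{\rm gw})^{2}h_{0}$.

```python
import math

def gw_force_amplitude(m, L, h0, f_gw):
    """Peak effective GW force on one of two masses separated by L,
    F = (1/4) m |h''|_max L with |h''|_max = (2 pi f_gw)^2 h0  [Eq. (1)]."""
    return 0.25 * m * (2 * math.pi * f_gw) ** 2 * h0 * L

# Two 1-kg masses 1 m apart, hit by a detector-era strain of ~1e-21 at 100 Hz:
F = gw_force_amplitude(m=1.0, L=1.0, h0=1e-21, f_gw=100.0)
# F is of order 1e-16 N -- utterly imperceptible, which is why extreme
# amplitudes are needed for the visible effects discussed below.
```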
The above discussion demonstrates that the stretching and compression of space
by GWs do yield observable physical effects. In the following, we focus on
case 2 and provide three extreme examples (destruction of organs, earthquakes,
and tides) with which readers unfamiliar with general relativity can imagine
the physical effects of GWs intuitively in terms of Newtonian mechanics. As
the waveform of a GW, we consider a monochromatic wave given by
$h(t)=h_{0}\cos(2\pi f_{\rm gw}t).$ (2)
## III Example 1: Organ destruction
The first example we study is the potential destruction of the human body
caused by passing GWs. With the typical realistic strain, the effect of GWs on
the human body is totally negligible. However, as we have discussed in §II,
for a sufficiently large amplitude of GWs, in principle, GWs could destroy
organs or tissues inside our body. Let us provide an order-of-magnitude
estimation of the stretching of GWs at which organs are destroyed by
determining the force exerted by GWs of various amplitudes and frequencies on
the parts of an organ and comparing against the maximum force the organ can
tolerate without being destroyed.
In the order-of-magnitude estimation, we may replace an organ with two massive
objects separated by a distance equivalent to the size of the organ and solve
Eq. (1) as a force acting on the organ. Taking $L=10~{}{\rm cm}$ and
$m=0.1~{}{\rm kg}$ as representative values for an organ (e.g., heart) and
scaling frequency $f_{\rm gw}$ relative to $100~{}{\rm Hz}$, which is the
typical frequency for an event measured by LIGO, Virgo, or KAGRA, we find that
the amplitude of the total compressive force on the organ is approximately
given by
$F\approx 10^{3}~{}{\rm N}~{}{\left(\frac{f_{\rm gw}}{100~{}{\rm
Hz}}\right)}^{2}~{}h_{0}.$ (3)
Roughly speaking, an organ will collapse if more than $\sim 10^{3}~{}{\rm N}$
is exerted on it Rosen:2008 . By adopting this criterion, we find that an
organ is destroyed if the GW amplitude $h_{0}$ satisfies
$h_{0}\gtrsim{\left(\frac{f_{\rm gw}}{100~{}{\rm Hz}}\right)}^{-2}.$ (4)
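These estimates can be reproduced directly. A minimal sketch (using the representative organ mass, size, and $10^{3}$ N threshold quoted above) evaluates the total compressive force of Eq. (3) and inverts it for the critical strain of Eq. (4):

```python
import math

ORGAN_MASS = 0.1   # kg, representative organ (e.g., heart)
ORGAN_SIZE = 0.1   # m
F_CRIT = 1e3       # N, rough destruction threshold (Rosen et al. 2008)

def organ_force(h0, f_gw, m=ORGAN_MASS, L=ORGAN_SIZE):
    # total compressive force amplitude, F = (1/2) m (2 pi f_gw)^2 h0 L
    return 0.5 * m * (2 * math.pi * f_gw) ** 2 * h0 * L

def critical_strain(f_gw):
    # invert organ_force = F_CRIT for h0, cf. Eq. (4); scales as f_gw^(-2)
    return F_CRIT / (0.5 * ORGAN_MASS * (2 * math.pi * f_gw) ** 2 * ORGAN_SIZE)

# At 100 Hz the critical strain is of order unity, consistent with Eq. (4)
# at the order-of-magnitude level.
h0c = critical_strain(100.0)
```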
The red solid line in Fig. 2 shows the boundary of this equality as a function
of $f_{\rm gw}$. The lower limit of $f_{\rm gw}$ around $10^{2}~{}{\rm Hz}$
originates from the criterion $h_{0}<1$ for which our linear approximation is
valid. Cases where $h_{0}>1$ are beyond the scope of this paper. The upper
limit of $f_{\rm gw}$ is the inverse of the time that a sound wave travels
across an organ:
$f_{\rm gw}<\frac{c_{\rm o}}{L}=10^{5}~{}{\rm Hz}~{}{\left(\frac{c_{\rm
o}}{10^{3}~{}{\rm m/s}}\right)}{\left(\frac{L}{10~{}{\rm cm}}\right)}^{-1},$
(5)
where $c_{\rm o}$ is the speed of sound passing across the organ. Above this
frequency, the force from GWs changes its direction before it affects the
whole organ, and it is not clear whether the organ is destroyed even if the
force given by Eq. (3) is applied. Hence, in our analysis, we consider the
frequency range that satisfies this condition. In Fig. 2, the boundaries
$h_{0}=1$ and $f_{\rm gw}=10^{5}~{}{\rm Hz}$ are shown as red dashed lines.
Figure 2: GW amplitudes $h_{0}$ sufficiently strong to cause large effects in
terms of frequency $f_{\rm gw}$. The solid lines indicate the threshold beyond
which human lives would be threatened, whereas the dashed lines indicate the
possible boundaries of the validity of the approximations we employed. The
red, green, and blue lines are, respectively, the thresholds, beyond which the
GWs cause organ destruction, major earthquakes, and large tides.
## IV Example 2: Earthquakes
Next, we consider an earthquake that would be induced by the passage of GWs. A
pioneering work was performed by Dyson, who calculated the seismic response of
the Earth to the passage of 1 Hz GWs Dyson:1997gv . Constraints using
potential seismic signatures have been extensively studied Coughlin:2014sca .
Dyson’s computation heavily relies on tensorial equations and general
relativity, which are beyond the scope of this paper. Instead, here, we
consider a back-of-the-envelope calculation. Quite remarkably, results of our
crude calculation reproduce those of Dyson’s computation within a factor of
${\cal O}(1)$.
An earthquake is simply the vibration of the ground propagating as a sound
wave. When a GW of frequency $f_{\rm gw}$ is passing through the Earth, a
sound wave with the same frequency is induced and propagates. Let us model the
motion of a segment of the Earth medium with a harmonic oscillator in which a
point mass is attached to a spring of length $L$. As we have discussed in §II,
GWs induce the acceleration of the harmonic oscillator of the order of
$\pi^{2}f_{\rm gw}^{2}Lh_{0}$. From the shape of the plane wave $e^{2\pi
ix/\lambda_{\rm e}}$, where $\lambda_{\rm e}$ is the wavelength of a
transverse seismic wave passing through the Earth, the phases of the ground
motion at different locations are nearly coherent (i.e., the phase difference
is $<2\pi$) if the separation is less than $\lambda_{\rm e}$. In other words,
modeling the vibration of the ground as oscillations of the harmonic
oscillator is valid for the segment of the Earth medium with a (typical) size
$\lesssim\lambda_{\rm e}$. This means that we should take $L$ to be
$\lambda_{\rm e}$. Then, the acceleration of the ground is estimated as
$a=\pi^{2}c_{\rm e}f_{\rm gw}h_{0},$ (6)
where $c_{\rm e}=\lambda_{\rm e}f_{\rm gw}$ is the speed of a transverse
seismic wave passing through the Earth, which is typically $3\times 10^{3}$
m/s. Note also that since $f_{\rm gw}=c/\lambda_{\rm gw}=c_{\rm
e}/\lambda_{\rm e}$ it holds that $\lambda_{\rm e}\ll\lambda_{\rm gw}$. The
acceleration (6) is the same as that obtained by Dyson up to a factor of
${\cal O}(1)$ Dyson:1997gv .
The induced earthquake will be catastrophic if the acceleration is greater
than the gravitational acceleration $g\approx 9.8~\mathrm{m/s^{2}}$. Thus, by equating
acceleration (6) to $g$, we obtain the critical GW amplitude $h_{0}$ that
yields a serious earthquake:
$h_{0}=\frac{g}{\pi^{2}c_{\rm e}f_{\rm gw}}.$ (7)
The green solid line in Fig. 2 shows $h_{0}$ that causes an earthquake with
ground acceleration comparable to the gravitational acceleration. As in
Dyson:1997gv , we focus on the GW frequency range $1~{}{\rm mHz}<f_{\rm
gw}<1~{}{\rm kHz}$. In this range, it is a good approximation to treat the
surface of the Earth as a flat plane since the wavelength of the induced
seismic waves is much smaller than the size of the Earth and larger than the
surface irregularities.
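Eq. (7) is a one-line evaluation; the sketch below uses the representative transverse seismic wave speed quoted above and reproduces the order of magnitude of the green curve in Fig. 2.

```python
import math

G_ACC = 9.8   # m/s^2, gravitational acceleration
C_E = 3.0e3   # m/s, typical transverse seismic wave speed in the Earth

def quake_strain(f_gw):
    """Critical GW strain giving ground acceleration ~ g, Eq. (7):
    h0 = g / (pi^2 c_e f_gw)."""
    return G_ACC / (math.pi ** 2 * C_E * f_gw)

# At f_gw = 1 Hz the critical strain is a few times 1e-4,
# matching Dyson's estimate up to an O(1) factor.
h0_quake = quake_strain(1.0)
```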
## V Example 3: Tides
The third effect we consider is tides, i.e., the rise and fall of sea levels,
which are usually generated by the gravitational forces from the Moon and Sun.
Similarly, when GWs pass through the Earth, they can generate tides. Tides
involve very complicated processes and it is difficult to analyze those
generated by the passage of GWs. As in the previous two examples, we will
evaluate tides by a crude approximation. To this end, we treat the Earth as a
rock sphere completely encompassed by a water shell of uniform depth $H$. In
this model, we assume that the rock sphere is so rigid that it is not deformed
by GWs and only the water shell is freely deformed (since it is liquid) by the
force given by Eq. (1).
For this model, the first task is to determine the length scale $L$. We make a
rough estimate; if the period of GWs is longer than one day, the ocean on the
global scale will coherently respond to GWs similarly to the case in response
to the tidal force generated by the Moon’s gravitational force. In this case,
the natural scale of $L$ will be the Earth radius $R_{\oplus}$. Assuming
$L=R_{\oplus}$, the force from the GWs exerted on the volume element $\Delta
V$ is given by
$F\simeq\pi^{2}f_{\rm gw}^{2}\rho_{\rm w}R_{\oplus}h_{0}\Delta V,$ (8)
where $\rho_{\rm w}$ is the density of water. Conversely, if the period of GWs
is much shorter than one day but longer than the time scale of the sound wave
traveling across the depth $H$, the response of the sea level will not be the
same as that in the low-frequency case.
To understand this situation, let us consider the motion of a harmonic
oscillator with a periodic external force as a simple model:
$m{\ddot{x}}=-kx+m\omega^{2}h_{0}L\sin(\omega t)$. Here, $x$ and the external
force are assumed to represent the change in sea level and the effect by GWs,
respectively. Note that we have taken the coefficient of the external force to
be $m\omega^{2}h_{0}L$; with this expression, $h_{0}L$ does not depend on
$\omega$ and is simply proportional to $h_{0}$ [see Eq. (1)]. Assuming
$x=x_{0}\sin(\omega t)$, $x_{0}$ is given by
$x_{0}=\frac{h_{0}L\omega^{2}}{\omega_{0}^{2}-\omega^{2}},$ (9)
where $\omega_{0}=\sqrt{k/m}$. In the low-frequency limit
$\omega\ll\omega_{0}$, we have $x_{0}=h_{0}L\omega^{2}/\omega_{0}^{2}$,
namely, $x_{0}$ is determined by the balance between the term $-kx$ and the
external force. On the other hand, in the high-frequency limit
$\omega\gg\omega_{0}$, we have $x_{0}=-h_{0}L$. At first sight, this might look
inconsistent with the standard result of the forced harmonic oscillator, for
which the amplitude goes to zero in the high-frequency limit. The point is
that, as stressed above, the GW driving force actually increases with
frequency for a given GW amplitude $h_{0}$, and this leads to the conclusion
that in the high-frequency limit $x_{0}$ becomes a constant value $-h_{0}L$
independent of the frequency of the external force $\omega$. Given these
observations, it would be reasonable to consider that the tide induced by GWs
becomes independent of $f_{\rm gw}$ for $f_{\rm gw}\gg f_{\rm day}=1/{\rm
day}$.
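The two limits of the forced oscillator of Eq. (9) can be checked numerically; the sketch below uses dimensionless illustrative values ($\omega_{0}=1$, $h_{0}L=1$).

```python
def tide_response(omega, omega0, h0L=1.0):
    """Steady-state amplitude of m x'' = -k x + m omega^2 h0 L sin(omega t),
    Eq. (9): x0 = h0 L omega^2 / (omega0^2 - omega^2)."""
    return h0L * omega ** 2 / (omega0 ** 2 - omega ** 2)

# Low-frequency limit: amplitude grows as (omega/omega0)^2 ...
lo = tide_response(0.01, 1.0)
# ... while in the high-frequency limit it saturates at -h0 L,
# because the GW driving force itself grows as omega^2.
hi = tide_response(100.0, 1.0)
```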
The next nontrivial task is to estimate the magnitude of the tide induced by
the force given in Eq. (8), which, in principle, can be done by solving the
shallow-water equation. However, even at the level of the order-of-magnitude
estimate, how this can be achieved is nontrivial. To overcome this issue, we
simply compare force (8) with the tidal force induced by the Moon. For $f_{\rm
gw}<f_{\rm day}$, we assume that the level of the tide induced by GWs is the
same as that of an ordinary tide if the two forces have the same magnitude.
The tidal force from the Moon acting on the volume element $\Delta V$ is
estimated as
$F_{\rm L}\simeq\frac{GM_{\rm L}\rho_{\rm w}\Delta V}{D^{3}}R_{\oplus}.$ (10)
Here, $M_{\rm L}$ is the lunar mass, and $D$ is the lunar distance. By
equating Eq. (8) to Eq. (10), we obtain the minimum $h_{0}$ that yields a tide
comparable to the ordinary one as
$h_{0}=\frac{GM_{\rm L}}{\pi^{2}D^{3}f_{\rm gw}^{2}}.$ (11)
On the other hand, for $f_{\rm gw}\gg f_{\rm day}$, on the basis of the above
argument on the forced oscillator, we assume that the critical $h_{0}$ is
given by the right-hand side of Eq. (11) with $f_{\rm gw}$ being replaced by
$f_{\rm day}$.
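Putting numbers into Eq. (11) is straightforward; the sketch below uses standard lunar constants (not given in the text) and implements the plateau for $f_{\rm gw}\gg f_{\rm day}$ argued above.

```python
import math

G = 6.674e-11         # m^3 kg^-1 s^-2
M_MOON = 7.35e22      # kg, lunar mass
D_MOON = 3.84e8       # m, lunar distance
F_DAY = 1.0 / 86400.0 # Hz, inverse of one day

def tide_strain(f_gw):
    """Critical strain for a Moon-comparable tide.
    Eq. (11) for f_gw < f_day; constant (Eq. (11) evaluated at f_day)
    for higher frequencies, per the forced-oscillator argument."""
    f_eff = min(f_gw, F_DAY)
    return G * M_MOON / (math.pi ** 2 * D_MOON ** 3 * f_eff ** 2)

# The plateau value is of order 1e-4, set by the lunar tidal acceleration.
h0_tide = tide_strain(1.0)
```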
The blue solid line in Fig. 2 shows $h_{0}$ that causes a tide comparable to
that caused by the Moon. The increase in sea level to this amount will
threaten the lives of many people residing in coastal regions. The upper limit
for $f_{\rm gw}\lesssim 1~{}{\rm Hz}$ is imposed by the condition for the
validity of the shallow-water approximation that the time for sound to pass
across the depth of the ocean is shorter than the GW period.
## VI GWs from BH binaries
In the previous sections, we considered extreme situations where the GWs cause
visible effects on humans by causing stretching and shrinking of space, and we
obtained the parameter regions in the amplitude–frequency space, as shown in
Fig. 2. Now it is interesting to consider converting the parameter regions for
the GWs to those for BH binaries. The strongest GWs are emitted during the
last moment of the merging of BHs in a binary system. Therefore, it is natural
to try to relate GWs to BH binary systems in such a way.
We assume that a BH binary system consists of two equal-mass BHs each of which
has mass $M$ and is located at the distance $r$ from the Earth. Precise
modeling of the GW waveform from such a phase requires numerous computations
based on general relativity. Since we are working at the level of the order-
of-magnitude estimate, we do not use the results obtained from full
computations but make a rough estimate of the GW amplitude and frequency as
follows.
At the last inspiral phase, the distance between BHs is comparable to the size
of the BHs, i.e., the Schwarzschild radius $r_{\rm s}=2GM/c^{2}$, and the BHs
are orbiting at a relativistic speed. In this extreme situation, the
disturbance of the gravitational field caused by the motion of the BHs, which
propagates outward as GWs, would be comparable to the gravitational field
generated by the BHs when they are not moving. Thus, it is natural to think
that the GW amplitude around the BHs is also ${\cal O}(1)$. The GW amplitude
attenuates inversely proportionally to the propagation distance. Thus, by
denoting $r$ as the distance to the GW source, we find that the GW amplitude
$h_{0}$ on the Earth would be on the order of
$h_{0}\simeq\frac{2GM}{c^{2}r}.$ (12)
The period of GWs is given by the time scale of the change of the GW source,
which is $r_{\rm s}/c$. Therefore, the frequency of the GWs would be on the
order of
$f_{\rm gw}\simeq\frac{c}{r_{\rm s}}=\frac{c^{3}}{2GM}.$ (13)
This clearly shows that there is a one-to-one correspondence between $f_{\rm
gw}$ and $M$.
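Eqs. (12) and (13) define the mapping from the $(f_{\rm gw},h_{0})$ plane of Fig. 2 to the $(M,r)$ plane of Fig. 3, and are simple to invert:

```python
G = 6.674e-11     # m^3 kg^-1 s^-2
C = 2.998e8       # m/s
M_SUN = 1.99e30   # kg

def bh_mass_from_freq(f_gw):
    """Eq. (13): f_gw ~ c^3 / (2 G M)  =>  M = c^3 / (2 G f_gw)."""
    return C ** 3 / (2 * G * f_gw)

def distance_for_strain(M, h0):
    """Eq. (12): h0 ~ 2 G M / (c^2 r)  =>  r = 2 G M / (c^2 h0)."""
    return 2 * G * M / (C ** 2 * h0)

# A 100-Hz merger corresponds to BHs of roughly 1e3 solar masses;
# requiring h0 = 1 puts the observer near their Schwarzschild radius.
M = bh_mass_from_freq(100.0)
r = distance_for_strain(M, 1.0)
```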
In the previous sections, we estimated the critical values of $h_{0}$ above
which phenomena catastrophic to humans occur at various values of $f_{\rm
gw}$. By using Eq. (13) to replace $f_{\rm gw}$ with $M$, we can draw curves
corresponding to the critical values of $h_{0}$ in the $M$–$r$ plane, as shown
in Fig. 3. As expected, the results in Fig. 3 suggest that the GWs would cause
serious damage only if too-heavy BHs merged at too-close distances from the
Earth. It would be a good exercise to think about the farthest and/or lightest
parameter value for each case. For instance, let us consider the farthest
parameter value $(r,M)\sim(10^{15}{\rm km},10^{10}M_{\odot})$ for the tide. It
means that a large tide would be induced by GWs if two supermassive BHs each
with a mass of $\sim 10^{10}M_{\odot}$ merged at about 100 light years from
the Earth. To get a sense of this situation, let us compare the distance and
the mass $(r,M)\sim(10^{15}{\rm km},10^{10}M_{\odot})$ of a BH with those of
Sgr A∗, which is the supermassive BH at the center of the Milky Way galaxy.
The mass of Sgr A∗ is $4.2\times 10^{6}M_{\odot}$ and the distance is
$2.7\times 10^{4}$ light years from the Earth Abuter2019 . Therefore, even the
farthest parameter value means that too-heavy BHs merged at too-close
distances from the Earth. If such supermassive BHs existed, the Solar System
would be destroyed by their gravitational force far before they merge.
Therefore, the earthquakes and tides generated by GWs are totally irrelevant
to reality. However, when space travel to BH binary systems becomes popular
in the future, the effect of GWs on organs may be relevant during such travel.
In that case, the red boundary in Fig. 3 must be included as a caution in the
flight plan for astronauts.
Figure 3: Parameter region corresponding to Fig. 2 in terms of the BH mass
$M$ and the distance $r$ from the Earth assuming that GWs are emitted at the
last inspiral stage of two BHs of equal mass. The BH mass $M$ is normalized by
the solar mass $M_{\odot}\approx 1.99\times 10^{30}$kg.
## VII Conclusion
GWs are ripples of spacetime traveling across the universe. Not only spacetime
but also everything in nature is stretched and compressed by GWs. Therefore,
in principle, they could cause some large effects. After clarifying the
conceptual issue about the detectability of GWs and translating the effects of
GWs into the language of Newtonian mechanics, we performed the order-of-
magnitude estimation for visible phenomena, such as organ destruction,
earthquakes, and tides, that would be caused by large-amplitude GWs. We
obtained the amplitude–frequency space in which GWs would cause some large
effects. As a reference, we converted the parameter regions to those in BH
mass–distance space. These are interesting examples highlighting the nature of
GWs and are useful for gaining some understanding of the theory of relativity.
###### Acknowledgements.
H.M. was supported by Japan Society for the Promotion of Science (JSPS)
Grants-in-Aid for Scientific Research (KAKENHI) Grants No. JP18K13565 and No.
JP22K03639. T.S. was supported by the MEXT Grant-in-Aid for Scientific
Research on Innovative Areas No. 17H06359, No. 19K03864, and No. 21H05453.
## References
* (1) LIGO Scientific, Virgo collaboration, B. Abbott et al., Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116 (2016) 061102 [arXiv:1602.03837].
* (2) http://mediaassets.caltech.edu/gwave.
# HOW TO (VIRTUALLY) TRAIN YOUR SPEAKER LOCALIZER
Prerak Srivastava1, Antoine Deleforge1, Archontis Politis2, Emmanuel Vincent1
###### Abstract
Learning-based methods have become ubiquitous in speaker localization.
Existing systems rely on simulated training sets owing to the lack of
sufficiently large, diverse, and annotated real datasets. Most room acoustics simulators
used for this purpose rely on the image source method (ISM) because of its
computational efficiency. This paper argues that carefully extending the ISM
to incorporate more realistic surface, source and microphone responses into
training sets can significantly boost the real-world performance of speaker
localization systems. It is shown that increasing the training-set realism of
a state-of-the-art direction-of-arrival estimator yields consistent
improvements across three different real test sets featuring human speakers in
a variety of rooms and various microphone arrays. An ablation study further
reveals that every added layer of realism contributes positively to these
improvements.
Index Terms: source localization, direction-of-arrival, image source,
directivity, room acoustic simulation
## 1 Introduction
Far-field deep learning based speech processing systems often require large
amount of training data, as demonstrated by their application to various tasks
such as speech recognition [1, 2], speaker localization [3, 4], speech
enhancement [5, 6] and diarization [7, 8]. For increased generalization, the
study in [9] suggests the use of extensive training sets that cover the range
of variability found in real test sets. Due to the difficulty of obtaining
enough real data with enough diversity, these training sets are typically
simulated, an approach called virtually supervised learning in, e.g., [10,
11]. For those applications on which the effect of reverberation is
detrimental, such as speaker localization or multi-microphone speech
enhancement, the use of room acoustics simulators is widespread since it
allows physical modeling of the room impulse response (RIR) from the source to
the receivers, which includes the inter-channel differences utilized by the
methods as well as reverberation. Depending on the type of simulation method
used, diverse data can be generated in terms of source and receiver position,
acoustic material characteristics, or room geometry, which is necessary for
the generalization of the methods to a wide range of real acoustic scenes.
Among the various room acoustics simulators in use, the most widespread by
far rely on the image source method (ISM) for shoebox rooms [12, 13]. Its
implementation simplicity and speed allow rapid generation of RIRs for
thousands of rooms with randomized dimensions, wall absorption coefficients,
and source-receiver positions. More elaborate geometrical [14] or wave-based
room acoustics simulators [15], that allow complex geometries and more
accurate modeling of propagation effects, are not currently fast enough for
this. In the context of supervised learning, they have been used to pre-
compute a few large-scale single-channel datasets covering a limited set of
conditions only [16, 17]. Even though shoebox ISM simulation employs stronger
acoustical simplifications than more advanced room acoustics simulation
methods, it has proven useful in training localization models that then
perform adequately well on real datasets [18, 19, 20, 21, 22]. Most of these
studies use the simplest form of shoebox ISM, namely, broadband
omnidirectional sources and receivers and a global wall- and frequency-
independent absorption coefficient, which is also the fastest to simulate.
More realistic simulation conditions, such as directional sources and
receivers and frequency-dependent surface absorption profiles, can be
integrated into shoebox ISM with little computational overhead, at the expense
of more complex implementation and specification of simulation parameters.
Very few studies perform simulations under these more realistic conditions,
except when specialized multichannel receivers are used, e.g., spherical [23],
Ambisonic [19, 20], or binaural [24, 25].
Despite the widespread use of shoebox ISM-based simulators for training
speaker localization systems, the effect of integrating more realistic
simulation conditions at training time on the localization performance on real
recordings at test time has hardly been studied. Receiver directivity does not
only concern specialized multichannel receivers, but also common arrays of
omnidirectional microphones above some frequency, due to the microphone
mounting or the size of the capsule. Source directivity also seems crucial
considering that most methods focus on speech signals, with human speakers
being highly directive [26]. Realistic surface absorption profiles can also
have a drastic effect on the reverberation reaching the microphones, in terms
of spatial distribution and power spectrum. A previous work by the authors
showed a significant performance increase on an acoustic parameter estimation
task when measured source and receiver directivities and a natural
distribution of frequency-dependent absorption coefficients were integrated in
the simulations [11]. In the context of speaker localization, we are only
aware of one recent study [27] which analyses the effects of speaker
directivity and diffuse late reverberation modeling in the simulations. While
diffusion showed no significant impact, the source directivity was found to
have a positive impact when testing on speech signals convolved with measured
directive RIRs.
In this work, we claim that more realistic simulation at training time can
significantly improve real-world localization performance, in a wide variety
of scenarios. Results on three multichannel datasets of real human speakers
captured in various rooms and with various microphone arrays support this
claim. The effects of realistic source and receiver directivities and surface
absorption profiles at training time are carefully studied in isolation and in
combination for each dataset.
The structure of the rest of the paper is as follows. Section 2 introduces the
_naive_ and _advanced_ ISM-based simulation modes and Section 3 introduces the
real test sets considered in this paper. The experimental setup is described
in Section 4 and the results are analyzed in Section 5. We conclude in Section
6.
## 2 Image Source Simulation
### 2.1 Generalized Image Source Method
Simulation of multichannel reverberant speech can be expressed as
$\mathbf{x}[t]=(\mathbf{h}*s)[t]+\mathbf{n}[t]$ (1)
where $*$ denotes convolution,
$\mathbf{h}(t)=[h_{1}(t),...,h_{M}(t)]^{\mathrm{T}}$ the vector of RIRs from
the source to $M$ microphones,
$\mathbf{n}(t)=[n_{1}(t),...,n_{M}(t)]^{\mathrm{T}}$ additive noise and $s$
the dry source signal. RIR generation based on the ISM for shoebox geometries
requires at least the dimensions of the (cuboid) room, the source and receiver
coordinates, and a global wall absorption coefficient. An extended version of
the ISM that takes into account frequency-dependent walls, propagation, and
directivity effects can be expressed in the frequency domain as
$\widehat{\mathbf{h}}(f)=\sum_{k=0}^{K}\frac{\exp(-\jmath 2\pi fr_{k}/cF_{\textrm{s}})}{r_{k}}\cdot d_{\textrm{air}}(r_{k},f)\cdot d_{k}(f)\cdot\widehat{g}_{\textrm{src}}(-\widetilde{\mathbf{r}}_{k},f)\cdot\widehat{\mathbf{g}}_{\textrm{mic}}(\widetilde{\mathbf{r}}_{k},f).$ (2)
Here, $c$ is the speed of sound and $F_{s}$ is the sampling frequency.
$\mathbf{r}_{k}$ is the position vector of the $k$-th image source w.r.t. the
microphone array center, $r_{k}=||\mathbf{r}_{k}||$ is the image-source-to-
array distance and $\widetilde{\mathbf{r}}_{k}=\mathbf{r}_{k}/r_{k}$ is the
direction unit vector. $d_{\textrm{air}}(r,f)$ is the distance-dependent air
attenuation while $d_{k}(f)$ is the compound reflection coefficient of all the
surfaces that the $k$-th image has been reflected from.
$\widehat{\mathbf{g}}_{\textrm{mic}}(\widetilde{\mathbf{r}})$ denotes the
vector of $M$ microphone directivity responses defined with their phase center
at the array center, for direction-of-arrival (DOA) $\widetilde{\mathbf{r}}$.
$\widehat{g}_{\textrm{src}}(-\widetilde{\mathbf{r}})$ denotes the source
directivity response for direction-of-departure $-\widetilde{\mathbf{r}}$. An
alternative implementation of the multichannel receivers can instead utilize
individual positions, distances, and DOAs for each microphone in the array,
and integrate its local directivity excluding inter-channel array propagation
effects. That implementation is suitable for open arrays of directional
microphones of known directivity, but it is not suitable for more complex
directional arrays that include scattering effects, such as spherical arrays
or head-related transfer functions.
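To make the frequency-domain sum in Eq. (2) concrete, the sketch below assembles the RIR spectrum from a list of image sources. It is a simplified illustration, not the paper's implementation: sources and microphones are taken to be omnidirectional, air absorption is dropped ($d_{\textrm{air}}=1$), and frequencies are in Hz (the $F_{\textrm{s}}$ normalization is omitted).

```python
import numpy as np

def ism_freq_rir(r_vecs, refl_coeffs, freqs, c=343.0):
    """Frequency-domain RIR per Eq. (2), simplified: omnidirectional
    source/mic, no air absorption.
    r_vecs: (K, 3) image-source positions w.r.t. the array center.
    refl_coeffs: (K,) compound reflection coefficients d_k.
    freqs: (F,) frequency bins in Hz."""
    r = np.linalg.norm(r_vecs, axis=1)                    # distances r_k
    # propagation delay (phase) and spherical spreading (1/r_k) per image
    phase = np.exp(-2j * np.pi * np.outer(freqs, r) / c)  # shape (F, K)
    return (phase * (refl_coeffs / r)).sum(axis=1)        # sum over images
```

For a single direct path at 1 m with unit reflection coefficient, the DC response is simply 1/r = 1.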
### 2.2 Advanced vs. Naive Simulation
In the following, we distinguish the _naive_ mode of ISM simulation often used
in practice for model training, where a) wall absorption is assumed to be
frequency-independent and equal for all 6 room surfaces, i.e.,
$d_{k}(f)=d^{o_{k}}$ with $o_{k}$ being the reflection order and b) sources
and receivers are assumed to be omnidirectional, i.e.,
$\widehat{g}_{\textrm{src}}(\widetilde{\mathbf{r}},f)=\widehat{\mathbf{g}}_{\textrm{mic}}(\widetilde{\mathbf{r}},f)=1$.
Additionally we define an _advanced_ mode of ISM simulation that incorporates
more informed choices on the directivity and absorption components. Regarding
absorption, the coefficients in 6 octave bands are drawn from a naturally
balanced mix of distributions corresponding to reflective and absorptive wall,
ceiling, and floor materials, as described in [10, 11]. These coefficients are
then interpolated using half-cosine octave bands in the discrete Fourier
domain and turned into minimum-phase responses. Note that both modes of
simulation are tuned to yield comparable distributions of reverberation time.
Regarding source directivity, the spatially interpolated measured
directivities of a head-and-torso-with-mouth simulator (Brüel & Kjaer HATS
4128-C) and two directive loudspeakers (Genelec 8020 and YAMAHA DXR8) taken
from the DIRPAT dataset [28] are integrated into the simulation. Regarding
microphone array directivities, scenario-based informed choices are made, as
detailed in Section 4. These extensions of the ISM are detailed in [11] and
are currently available as open source code in the branch dev/dirpat of the
pyroomacoustics simulator [29].
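The octave-band-to-full-band step can be sketched as follows. This is a simplified stand-in, assuming plain linear interpolation in place of the half-cosine scheme and omitting the minimum-phase conversion described in [11]; the band centers are the usual octave-band values.

```python
import numpy as np

OCTAVE_CENTERS = np.array([125, 250, 500, 1000, 2000, 4000], dtype=float)

def band_reflection(alpha_bands, n_fft=512, fs=16000):
    """Turn 6 octave-band absorption coefficients into per-FFT-bin
    reflection magnitudes (linear interpolation; minimum-phase step
    omitted)."""
    bins = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    alpha = np.interp(bins, OCTAVE_CENTERS, np.asarray(alpha_bands, float))
    return np.sqrt(1.0 - alpha)  # energy absorption -> pressure reflection
```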
## 3 Real Test Sets
To examine the impact of increased ISM realism at training time, we evaluate
virtually-supervised localization performance on three real datasets of human
speakers with spatio-temporal annotations of their activity and position with
respect to the microphone array. The selected publicly available datasets are
captured in a variety of rooms, and with a different microphone array each. We
focus specifically on datasets featuring real human speakers as opposed to
ones generated by convolving dry speech signals with measured RIRs. While the
latter are commonly used in the speaker localization literature, the former
are closer to real world conditions. This study focuses on the most elementary
localization task, namely, single-source DOA estimation in $[0,180]^{\circ}$
from a two-second signal recorded using a single microphone pair.
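For reference, the elementary task just described (single-pair DOA estimation in $[0,180]^{\circ}$) can also be solved without learning via GCC-PHAT, the core of the SRP-PHAT baseline used later. The sketch below is a generic textbook version, not the implementation of [29]; the aperture `d` defaults to the VoiceHome-2 sub-array value of 10.4 cm.

```python
import numpy as np

def gcc_phat_doa(x1, x2, fs=16000, d=0.104, c=343.0):
    """Single-source DOA (degrees in [0, 180]) from one microphone pair
    via GCC-PHAT: whiten the cross-spectrum, pick the peak lag within
    the physically feasible range, map the delay to an angle."""
    n = len(x1) + len(x2)
    X = np.fft.rfft(x1, n) * np.conj(np.fft.rfft(x2, n))
    cc = np.fft.irfft(X / (np.abs(X) + 1e-12), n)     # PHAT-weighted GCC
    max_shift = int(fs * d / c)                        # feasible lag range
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tau = (np.argmax(np.abs(cc)) - max_shift) / fs     # TDOA in seconds
    return np.degrees(np.arccos(np.clip(tau * c / d, -1.0, 1.0)))
```

Identical signals (zero delay) map to the broadside direction of 90 degrees.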
### 3.1 VoiceHome-2 [30]
This dataset is specifically made for distant speech processing applications
in domestic environments. It consists of short commands for smart home devices
in French, collected in reverberant conditions and uttered by 12 native French
speakers. The data is recorded in 12 different rooms corresponding to 4
houses, with fully annotated geometry, under quiet or noisy conditions. It is
captured by a microphone array consisting of 8 MEMS placed near the corner of
a cubic baffle. For this study, a two-channel sub-array with aperture 10.4 cm
is selected, and 360 two-second speech recordings in quiet conditions are
used.
### 3.2 DIRHA [31]
This multichannel dataset consists of recordings done in the living room and
kitchen of a typical apartment. Microphones in different configurations are
placed on the walls and ceiling of the 2 rooms. 6 native English speakers are
chosen to speak sentences taken from the Wall Street Journal news text corpus.
Annotations of individual microphone positions and speaker positions are
provided. For this study, a wall-mounted two-channel microphone array with
aperture 30 cm placed in the living room is selected, and 410 two-second
speech recordings from the living room are used.
### 3.3 STARSS22 [32]
This dataset contains recordings of naturally acted scenes of human
interaction with spatio-temporal annotations of events belonging to 13 target
classes, of which speech is a dominant one. This corpus is part of the
development set for Task 3 of the DCASE Challenge 2022. It was captured in
facilities at Tampere University and at Sony, with the recording, annotation,
and organization of acoustic scenes kept similar on both sites. The Eigenmike
spherical microphone array is used to deliver the dataset in two spatial
formats, one of which is a tetrahedral sub-array of omnidirectional
microphones mounted on a rigid spherical baffle. The corpus is more
challenging than the other two in the sense that speakers are free to move and
turn naturally during discussions, and that it contains intentional and
unintentional sound events other than speech with diffuse and directional
ambient noise at substantial levels. We carefully pre-processed the data to
extract 2,100 two-second non-overlapping speech excerpts from microphones 6
and 10 out of the tetrahedral sub-array, with an aperture of 6.8 cm.
The three curated test sets add up to a total of 95 minutes of DOA-annotated,
two-channel, real human speech recordings.
## 4 Scenario-Based Simulation and Training
### 4.1 Model
To select a state-of-the-art learning-based localization method for this
study, we examined the extensive literature review in [4]. We selected the
convolutional neural network architecture proposed by He et al. [33],
specifically its most recent version in [34], due to its multiple
distinguishing features. Namely, it works with arbitrary microphone arrays, it
is initially designed for DOA estimation, it has been successfully applied to
real speech recordings, it is readily extendable to multiple localization,
detection and counting (not addressed in this study), and its architecture is
available through open source code. The model is trained over different
simulated training sets using the ADAM optimizer with a learning rate of
$10^{-4}$ and batches of size 16 for a maximum of 110 epochs, with early
stopping on validation sets. We used the same input features as in [34],
namely, concatenated short-time Fourier transforms with 50% overlap and 42.7
ms windows, except that all the signals considered in this study are down-
sampled to 16 kHz, for consistency.
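The input features can be sketched per channel as follows. Window type and framing details are assumptions (a Hann window is used here); only the 42.7 ms length, 50% overlap, and 16 kHz rate come from the text.

```python
import numpy as np

def stft_features(x, fs=16000, win_ms=42.7):
    """Per-channel STFT with 50% overlap and ~42.7 ms windows
    (window length rounded to samples; Hann window assumed)."""
    n = int(round(fs * win_ms / 1000))   # ~683 samples at 16 kHz
    hop = n // 2                          # 50% overlap
    win = np.hanning(n)
    frames = [x[i:i + n] * win for i in range(0, len(x) - n + 1, hop)]
    return np.fft.rfft(np.stack(frames), axis=-1)   # shape (T, F)
```

A two-second 16 kHz signal yields 92 frames of 342 frequency bins.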
### 4.2 Scenario Based Training
For all the simulated training sets considered, the default air absorption
model of pyroomacoustics is used for $d_{\mathrm{air}}$ and image sources are
simulated up to order 20. A total of 40k shoebox rooms of sizes uniformly
drawn at random in $[3,10]\times[3,10]\times[2,4.5]$ m are simulated, each
containing a source and a two-microphone array placed uniformly at random with
a minimum source-array and device-wall distance of $30$ cm. The obtained RIRs
are convolved with speech excerpts from the Librispeech corpus [35] according
to (1), yielding 40k two-second two-channel reverberated speech samples, of
which 38k are used for training and 2k for validation. We experimented with
supervising the model with sets containing 10k to 60k samples. While a strong
performance improvement was observed from 10k to 60k, diminishing returns were
hit around 40k. Further improvements may nonetheless be achievable using even
larger sets.
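The room sampling just described can be sketched as below; the rejection loop for the source-array distance is an assumption about how the minimum-distance constraint is enforced.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_training_room(margin=0.3):
    """Draw one random shoebox configuration as described above:
    room in [3,10] x [3,10] x [2,4.5] m; source and array placed
    uniformly with >= 30 cm to every wall and to each other."""
    dims = rng.uniform([3.0, 3.0, 2.0], [10.0, 10.0, 4.5])
    place = lambda: rng.uniform(margin, dims - margin)
    src = place()
    mic = place()
    while np.linalg.norm(src - mic) < margin:  # reject too-close draws
        mic = place()
    return dims, src, mic
```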
Uncorrelated white Gaussian noise and diffuse speech-shaped noise convolved
with the late part of a random RIR in the same room are added to the
reverberated signals. As detailed in [11], noise levels are tuned based on
reference scenarios according to the source and receiver considered. This
yields consistent bell-shaped signal-to-noise ratio distributions in the range
$[15,75]$ dB with a peak at $40$ dB for all of the training sets considered in
this study. While a detailed analysis of the specific impact of noise at
training time would be of interest, this is out of the scope of this paper.
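The scenario-based level tuning itself is detailed in [11]; the generic scaling step underlying it can be sketched as:

```python
import numpy as np

def add_noise_at_snr(x, noise, snr_db):
    """Scale `noise` so that x + scaled noise has the requested SNR
    (power ratio in dB). A minimal sketch of the mixing step only."""
    gain = np.sqrt(np.mean(x**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
    return x + gain * noise
```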
For each of the three real test sets described in Section 3, one naive and one
advanced simulated training set is built based on the two modes of simulation
described in Section 2 for walls, sources and receivers. For naive sets, ideal
omnidirectional receivers are placed at random inside the room using the same
apertures as the ones of their corresponding test sets. For VoiceHome-2, the
simulated arrays are identical in the naive and advanced sets. This is because
MEMS are known to be close to omnidirectional and the directivity of the
VoiceHome-2 array is not available. Moreover, early experiments revealed that
using mismatched microphone responses at training time was detrimental to the
results, a phenomenon also reported in [11]. For DIRHA, the advanced
simulation places the arrays on the room walls, which is equivalent to
simulating microphones with a half-sphere directivity. For STARSS22, the
advanced simulation uses the measured directivity pattern of the relevant sub-array of the Eigenmike (we used the measured directivity made available by Franz Zotter et al. from Graz University: https://phaidra.kug.ac.at/o:69292).
## 5 Experiments and Results
To evaluate the virtually-supervised localization systems, two complementary
metrics are used for each test set, namely, the mean angular error (MAE, in
degrees) and the ratio of sources localized with an error less than
$10^{\circ}$ (Recall, in $\%$), which proved to be an adequate threshold to
prune out outliers. Models trained using naive and advanced simulations are
compared to the classical learning-free steered response power with phase
transform (SRP-PHAT) localization method, as implemented in [29].
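The two metrics are straightforward to compute for the $[0,180]^{\circ}$ single-pair setting, where no angular wraparound is needed:

```python
import numpy as np

def doa_metrics(est_deg, ref_deg, thresh=10.0):
    """Mean angular error (degrees) and recall: percentage of sources
    localized with error below `thresh` degrees."""
    err = np.abs(np.asarray(est_deg, float) - np.asarray(ref_deg, float))
    return float(err.mean()), float(100.0 * np.mean(err < thresh))
```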
### 5.1 Simulated Test Sets
We start by comparing the three methods on two test sets simulated in naive
and advanced modes under the VoiceHome-2 scenario. The results are shown in
Table 1. First, while all the methods perform well on the naive test set, with
nearly perfect recall achieved by both trained models, their performance
drastically drops on the advanced test set. This suggests that the presence of
realistic wall, source and receiver responses significantly hardens the
localization task, even under identical noise and reverberation-time
distributions. We have not found direct evidence of this in prior literature
and believe this could provide a helpful guideline to improve the evaluation
of localization methods on synthetic datasets. Second, as expected, the
learning-based methods strongly outperform the learning-free one and the
advanced model generalizes significantly better to advanced conditions than
the naive one. Moreover, the former seems to perform nearly as well as the
latter on naive conditions, despite the mismatch. This strengthens the
evidence that speaker localization is inherently more challenging in more
realistic conditions.
Table 1: Localization results on naive and advanced simulated test sets following the VoiceHome-2 scenario.

| Training | Naive: $\uparrow$ Recall | Naive: $\downarrow$ MAE | Advanced: $\uparrow$ Recall | Advanced: $\downarrow$ MAE |
|---|---|---|---|---|
| Naive | $96\%$ | $2.6^{\circ}$ | $74\%$ | $8.5^{\circ}$ |
| Advanced | $95\%$ | $3.0^{\circ}$ | $80\%$ | $6.7^{\circ}$ |
| SRP-PHAT | $75\%$ | $11.1^{\circ}$ | $50\%$ | $20.8^{\circ}$ |
Table 2: Localization results on three real test sets achieved by the SRP-PHAT baseline and by the supervised model [34] trained using various simulation modes. Mean angular errors (MAE) are displayed with their $95\%$ confidence interval. Bold numbers indicate the best system in each column and the systems statistically equivalent to it. Statistical significance was assessed using McNemar's test for the Recall metric and $95\%$ confidence intervals over angular error differences for the MAE metric.

| Methods | VoiceHome-2 [30]: $\uparrow$ Recall | $\downarrow$ MAE $({}^{\circ})$ | DIRHA [31]: $\uparrow$ Recall | $\downarrow$ MAE $({}^{\circ})$ | STARSS22 [32]: $\uparrow$ Recall | $\downarrow$ MAE $({}^{\circ})$ |
|---|---|---|---|---|---|---|
| SRP-PHAT | $70\%$ | $9.9\pm 1.5$ | $61\%$ | $15.0\pm 2.3$ | $45\%$ | $14.9\pm 0.6$ |
| Naive Training | $78\%$ | $7.6\pm 1.2$ | $77\%$ | $8.4\pm 1.4$ | $57\%$ | $12.9\pm 0.6$ |
| Advanced Training | $\mathbf{85\%}$ | $\mathbf{5.8\pm 0.8}$ | $\mathbf{84\%}$ | $\mathbf{6.3\pm 1.0}$ | $61\%$ | $\mathbf{11.4\pm 0.5}$ |
| _Ablation study_ | | | | | | |
| without wall realism | $83\%$ | $6.2\pm 0.8$ | $81\%$ | $7.5\pm 1.4$ | $59\%$ | $12.1\pm 0.6$ |
| without source realism | $82\%$ | $7.1\pm 1.1$ | $80\%$ | $7.8\pm 1.2$ | $\mathbf{63\%}$ | $\mathbf{11.4\pm 0.6}$ |
| without receiver realism | N/A | N/A | $78\%$ | $8.3\pm 1.5$ | $53\%$ | $13.4\pm 0.6$ |
### 5.2 Real Test Sets
The three methods are compared on the three real datasets. As can be seen in
the top part of Table 2, advanced training significantly outperforms naive
training by 4 to 7 recall points and $2^{\circ}$ MAE margins across all three
datasets, despite using the exact same network architecture. It also largely
outperforms the classical SRP-PHAT baseline by 15 to 23 recall points and
$3^{\circ}$ to $9^{\circ}$ MAE margins. As predicted in Section 3, the STARSS22
dataset proved most challenging for all the methods. Note that even under the
quiet and static conditions of VoiceHome-2 and DIRHA, the baseline results are
far from perfect. This shows that two-channel DOA estimation remains a
challenging task in real-world settings.
The second half of Table 2 presents an ablation study on the proposed advanced
simulation strategy. It reveals that removing any of the three considered
layers of realism results in noticeable performance loss. One exception is the
use of measured source directivity on STARSS22, which does not seem to affect
performance. One explanation could be that the human speakers in STARS22
perform significant head rotations, which is not modeled by our framework. The
use of a measured array directivity seems to have the strongest impact on this
dataset, an observation which we have not found in previous literature for
such a simple two-element array. On the other two datasets, the positive
impact of source directivity previously reported in [27] is confirmed, while
realistic wall absorptions seem to offer a comparable boost in performance.
This is new to the best of our knowledge, and may be explained by the presence
of diverse real-world rooms in these datasets. The Python code to reproduce
these experiments from training data simulation to evaluation is available
here: github.com/prerak23/Dir_SrcMic_DOA.
## 6 Conclusion
This paper revealed that simulating realistic wall absorption and
source/receiver directivities at training time can significantly boost the
performance of a virtually supervised speaker localization model across a
large test corpus, featuring real human speakers and a variety of microphone
arrays and rooms. While these aspects have been mostly ignored in the speaker
localization literature, we argue that they can critically benefit both the
evaluation of models and their real world applicability. Several research
avenues are left to explore. Our preliminary findings revealed that the
results are sensitive to the noise distribution at training time, calling for
a careful dedicated study of such effects. We also observed that the
validation learning curves of models trained on advanced simulation tended to
peak earlier than the naive ones. This leads us to believe that there is still
room for improvement on the reported results, e.g., by enriching the diversity
of directivity profiles through data augmentation. Finally, the inclusion of
source and receiver movements at training time or the use of more efficient
stochastic late reverberation models constitute worthwhile research
directions.
## 7 Acknowledgements
This work was made with the support of the French National Research Agency
through project HAIKUS “Artificial Intelligence applied to augmented acoustic
scenes” (ANR-19-CE23-0023). Experiments presented in this paper were carried
out using the Grid’5000 testbed, supported by a scientific interest group
hosted by Inria and including CNRS, RENATER and several Universities as well
as other organizations (see https://www.grid5000.fr).
## References
* [1] A. Gusev, V. Volokhov, T. Andzhukaev, S. Novoselov, G. Lavrentyeva, M. Volkova, A. Gazizullina, A. Shulipa, A. Gorlanov, A. Avdeeva, A. Ivanov, A. Kozlov, T. Pekhovsky, and Y. Matveev, ``Deep speaker embeddings for far-field speaker recognition on short utterances,'' _arXiv preprint arXiv:2002.06033_ , 2020.
* [2] J. Huang and T. Bocklet, ``Intel far-field speaker recognition system for VOiCES Challenge 2019,'' in _Interspeech_ , 2019, pp. 2473–2477.
* [3] W. Xue, Y. Tong, C. Zhang, G. Ding, X. He, and B. Zhou, ``Sound event localization and detection based on multiple DOA beamforming and multi-task learning,'' in _Interspeech_ , 2020, pp. 5091–5095.
* [4] P.-A. Grumiaux, S. Kitić, L. Girin, and A. Guérin, ``A survey of sound source localization with deep learning methods,'' _Journal of the Acoustical Society of America_ , vol. 152, no. 1, pp. 107–151, 2022.
* [5] Y. Chen, Y. Hsu, and M. R. Bai, ``Multi-channel end-to-end neural network for speech enhancement, source localization, and voice activity detection,'' _arXiv preprint arXiv:2206.09728_ , 2022.
* [6] W. Rao, Y. Fu, Y. Hu, X. Xu, Y. Jv, J. Han, Z. Jiang, L. Xie, Y. Wang, S. Watanabe, Z.-H. Tan, H. Bu, T. Yu, and S. Shang, ``ConferencingSpeech Challenge: Towards far-field multi-channel speech enhancement for video conferencing,'' in _IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)_ , 2021, pp. 679–686.
* [7] S. Horiguchi, Y. Fujita, S. Watanabe, Y. Xue, and K. Nagamatsu, ``End-to-end speaker diarization for an unknown number of speakers with encoder-decoder based attractors,'' _arXiv preprint arXiv:2005.09921_ , 2020.
* [8] N. Ryant, P. Singh, V. Krishnamohan, R. Varma, K. Church, C. Cieri, J. Du, S. Ganapathy, and M. Liberman, ``The third DIHARD diarization challenge,'' _arXiv preprint arXiv:2012.01477_ , 2020.
* [9] E. Vincent, S. Watanabe, A. A. Nugraha, J. Barker, and R. Marxer, ``An analysis of environment, microphone and data simulation mismatches in robust speech recognition,'' _Computer Speech and Language_ , vol. 46, pp. 535–557, 2017.
* [10] C. Foy, A. Deleforge, and D. Di Carlo, ``Mean absorption estimation from room impulse responses using virtually supervised learning,'' _Journal of the Acoustical Society of America_ , vol. 150, no. 2, pp. 1286–1299, 2021.
* [11] P. Srivastava, A. Deleforge, and E. Vincent, ``Realistic sources, receivers and walls improve the generalisability of virtually-supervised blind acoustic parameter estimators,'' in _International Workshop on Acoustic Signal Enhancement (IWAENC)_ , 2022.
* [12] J. B. Allen and D. A. Berkley, ``Image method for efficiently simulating small-room acoustics,'' _Journal of the Acoustical Society of America_ , vol. 65, no. 4, pp. 943–950, 1979.
* [13] P. M. Peterson, ``Simulating the response of multiple microphones to a single acoustic source in a reverberant room,'' _Journal of the Acoustical Society of America_ , vol. 80, no. 5, pp. 1527–1529, 1986.
* [14] S. Siltanen, T. Lokki, S. Kiminki, and L. Savioja, ``The room acoustic rendering equation,'' _Journal of the Acoustical Society of America_ , vol. 122, no. 3, pp. 1624–1635, 2007.
* [15] S. Bilbao, ``Modeling of complex geometries and boundary conditions in finite difference/finite volume time domain room acoustics simulation,'' _IEEE Transactions on Audio, Speech, and Language Processing_ , vol. 21, no. 7, pp. 1524–1533, 2013.
* [16] C. Chen, C. Schissler, S. Garg, P. Kobernik, A. Clegg, P. Calamia, D. Batra, P. W. Robinson, and K. Grauman, ``SoundSpaces 2.0: A simulation platform for visual-acoustic learning,'' _arXiv preprint arXiv:2206.08312_ , 2022.
* [17] Z. Tang, R. Aralikatti, A. J. Ratnarajah, and D. Manocha, ``GWA: A large high-quality acoustic dataset for audio processing,'' in _ACM SIGGRAPH Conference_ , 2022.
* [18] S. Chakrabarty and E. A. Habets, ``Multi-speaker DOA estimation using deep convolutional networks trained with noise signals,'' _IEEE Journal of Selected Topics in Signal Processing_ , vol. 13, no. 1, pp. 8–21, 2019.
* [19] S. Adavanne, A. Politis, J. Nikunen, and T. Virtanen, ``Sound event localization and detection of overlapping sources using convolutional recurrent neural networks,'' _IEEE Journal of Selected Topics in Signal Processing_ , vol. 13, no. 1, pp. 34–48, 2018.
* [20] L. Perotin, R. Serizel, E. Vincent, and A. Guérin, ``CRNN-based multiple DoA estimation using acoustic intensity features for Ambisonics recordings,'' _IEEE Journal of Selected Topics in Signal Processing_ , vol. 13, no. 1, pp. 22–33, 2019.
* [21] D. Diaz-Guerra, A. Miguel, and J. R. Beltran, ``Robust sound source tracking using SRP-PHAT and 3D convolutional neural networks,'' _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 29, pp. 300–311, 2020.
* [22] T. N. T. Nguyen, W.-S. Gan, R. Ranjan, and D. L. Jones, ``Robust source counting and DOA estimation using spatial pseudo-spectrum and convolutional neural network,'' _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 28, pp. 2626–2637, 2020.
# BASiS: Batch Aligned Spectral Embedding Space
Or Streicher Ido Cohen Guy Gilboa
Viterbi Faculty of Electrical and Computer Engineering
Technion - Israel Institute of Technology, Haifa, Israel
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Graphs are a highly generic and diverse representation, suitable for almost any
data processing problem. Spectral graph theory has been shown to provide
powerful algorithms, backed by solid linear algebra theory. It thus can be
extremely instrumental to design deep network building blocks with spectral
graph characteristics. For instance, such a network allows the design of
optimal graphs for certain tasks or obtaining a canonical orthogonal low-
dimensional embedding of the data. Recent attempts to solve this problem were
based on minimizing Rayleigh-quotient type losses. We propose a different
approach of directly learning the graph’s eigenspace. A severe problem of the
direct approach, applied in batch-learning, is the inconsistent mapping of
features to eigenspace coordinates in different batches. We analyze the
degrees of freedom of learning this task using batches and propose a stable
alignment mechanism that can work both with batch changes and with graph-
metric changes. We show that our learnt spectral embedding is better in terms
of NMI, ACC, Grassmann distance, orthogonality and classification accuracy,
compared to SOTA. In addition, the learning is more stable.
## 1 Introduction
Representing information by using graphs and analyzing their spectral
properties has been shown to be an effective classical solution in a wide
range of problems including clustering [8, 21, 32], classification [13],
segmentation [26], dimensionality reduction [5, 10, 23] and more. In this
setting, data is represented by nodes of a graph, which are embedded into the
eigenspace of the graph-Laplacian, a canonical linear operator measuring local
smoothness.
Incorporating analytic data structures and methods within a deep learning
framework has many advantages. It yields better transparency and understanding
of the network, allows the use of classical ideas, which were thoroughly
investigated and can lead to the design of new architectures, grounded in
solid theory. Spectral graph algorithms, however, are hard to incorporate
directly in neural networks since they require eigenvalue computations, which
cannot be integrated in back-propagation training algorithms. Another major
drawback of spectral graph tools is their low scalability. It is not feasible
to hold a large graph containing millions of nodes and to compute its graph-
Laplacian eigenvectors. Moreover, updating the graph with additional nodes is
cumbersome, and one usually resorts to graph-interpolation techniques, referred
to as Out Of Sample Extension (OOSE) methods.
An approach to solve the above problems using deep neural networks (DNNs),
firstly suggested in [24] and recently also in [9], is to train a network that
approximates the eigenspace by minimizing Rayleigh quotient type losses. The
core idea is that the Rayleigh quotient of a sum of $n$ vectors is minimized
by the $n$ eigenvectors with the corresponding $n$ smallest eigenvalues. As a
result, given the features of a data instance (node) as input, these networks
generate the respective coordinate in the spectral embedding space. This space
should be equivalent in some sense to the analytically calculated graph-
Laplacian eigenvector space. A common way to measure the equivalence of these
spaces is using the Grassman distance. Unfortunately, applying this indirect
approach does not guarantee convergence to the desired eigenspace and
therefore the captured embedding might not be faithful.
An alternative approach, suggested in [18] for computing the diffusion map
embedding, is a direct supervised method. The idea is to compute the embedding
analytically, use it as ground-truth and train the network to map features to
eigenspace coordinates in a supervised manner. In order to compute the ground
truth embedding, the authors used the entire training set. This operation is
very demanding computationally in terms of both memory and time and is not
scalable when the training set is very large.
Figure 1: Toy examples. An illustration of trained BASiS models over common
spectral-clustering toy examples. Each figure describes the embedding values
given by the network to each point in the space and the clustering results
over selected points. BASiS performs successful clustering and is able to
interpolate and extrapolate the training data smoothly.
Our proposed method is to learn directly the eigenspace in batches. We treat
each batch as sampling of the full graph and learn the eigenvector values in a
supervised manner. A major problem of this kind of scheme is the inconsistency
in the embedding coordinates. Thus, two instances in different batches with
the same features can be mapped to very different eigenspace coordinates. Our
solution is to use affine registration techniques to align the batches.
Further, we use this alignment strategy to also allow changes in the graph
affinity metric. Our proposed method retains the following main qualities: 1)
Scalability. Data is learnt in batches, allowing training based on large and
complex input sets. 2) OOSE. Out-of-sample extension is immediate. 3) High-quality
approximation of the eigenspace. Since our learning method is direct
and fully supervised, an excellent approximation of the graph eigenspace is
obtained. 4) Robustness to feature changes. We can train the model also when
features and affinities between nodes change. All the above properties yield a
spectral building block which can be highly instrumental in various deep
learning algorithms, containing an inherent orthogonal low dimensional
embedding of the data, based on linear algebraic theory.
## 2 Settings and Notations
Let $\\{x_{i}\\}_{i=1}^{n}$ be a set of data instances denoted as $X$ which is
a finite set in $\mathbb{R}^{d}$. These samples are assumed to lie on a lower
dimensional manifold $\mathcal{M}$.
These instances are represented as nodes on an undirected weighted graph
$G=(V,E,W)$, where $V$ and $E$ are sets of the vertices and edges,
respectively, and $W$ is the adjacency matrix. This matrix is symmetric and
defined by a distance measure between the nodes. For example, a common choice
is a Gaussian kernel and Euclidean distance,
$W_{ij}=\exp\left(-\frac{||x_{i}-x_{j}||_{2}^{2}}{2\sigma^{2}}\right),$ (1)
where $\sigma$ is a soft-threshold parameter.
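For concreteness, Eq. 1 can be evaluated for a whole set of samples at once; the following numpy sketch (function name ours) vectorizes the pairwise squared distances.

```python
import numpy as np

def gaussian_affinity(X, sigma=1.0):
    """Adjacency matrix of Eq. 1: W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # pairwise squared distances
    d2 = np.maximum(d2, 0.0)                          # clip tiny negative round-off
    return np.exp(-d2 / (2.0 * sigma**2))
```

In the experiments below the paper additionally sparsifies $W$ by keeping only the $k$ nearest neighbors of each node; the dense kernel above is the starting point for that step.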
The degree matrix $D$ is a diagonal matrix where $D_{ii}$ is the degree of the
$i$-th vertex, i.e., $D_{ii}=\sum_{j}{W_{ij}}$. The graph-Laplacian operator
is defined by,
$L\mathrel{\mathop{\mathchar 58\relax}}=D-W.$ (2)
The graph-Laplacian is a symmetric, positive semi-definite matrix, its
eigenvalues are real, and its eigenvectors form an orthogonal basis. The
eigenvalues of $L$ are sorted in ascending order
$\lambda_{1}\leq\lambda_{2}\leq...\leq\lambda_{n}$, where the corresponding
eigenvectors are denoted by $u_{1},u_{2}...,u_{n}$. The sample $x_{i}$ is
represented in the spectral embedding space as the $i$th row of the matrix
$U=\begin{bmatrix}u_{1}&\cdots&u_{K}\end{bmatrix}\in\mathbb{R}^{n\times K}$,
denoted as $\varphi_{i}$. Thus, more formally, the dimensionality reduction
process can be formulated as
$x_{i}\longmapsto\varphi_{i}=[u_{1}(i),u_{2}(i),...,u_{K}(i)]\in\mathbb{R}^{K},$
(3)
where $K\ll d$. This representation preserves essential data information well
[10, 15, 17, 22].
Alternatively, one can replace the Laplacian definition (2) with
$L_{N}\mathrel{\mathop{\mathchar
58\relax}}=D^{-\frac{1}{2}}LD^{-\frac{1}{2}}=I-D^{-\frac{1}{2}}WD^{-\frac{1}{2}}.$
(4)
This matrix may yield better performance for certain tasks and datasets [25,
26, 27].
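Putting Eqs. 2-4 together, the embedding of Eq. 3 is obtained from the bottom eigenvectors of the (optionally normalized) graph Laplacian. A minimal numpy sketch (function name ours):

```python
import numpy as np

def spectral_embedding(W, K, normalized=True):
    """Rows of the returned matrix are the embeddings varphi_i of Eq. 3."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                        # Eq. 2
    if normalized:
        Dm12 = np.diag(1.0 / np.sqrt(d))
        L = Dm12 @ L @ Dm12                   # Eq. 4
    vals, vecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    return vecs[:, :K]
```

`np.linalg.eigh` returns eigenvalues in ascending order, so the first $K$ columns correspond to $\lambda_{1}\leq\dots\leq\lambda_{K}$ as required.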
## 3 Related Work
OOSE and scalability of graph-based learning methods are ongoing research
topics. Mathematical analyses and analytical solutions to these problems can
be found, for example, in [1, 4, 7, 12, 30]. However, neural networks learning
the latent space of the data usually yield an efficient, robust and reliable
solution for these problems. Moreover, neural network modules can be easily
integrated in larger networks, employing this embedding. For a recent use of
learnable graphs in semi-supervised learning and data visualization see [2].
The effectiveness of modeling PDEs and certain eigenproblems in a grid-free,
mesh-free manner was shown in [3, 29, 6]. We review below the main advances in
eigenspace embedding.
Diffusion Nets [18]. Diffusion Maps (DM) is a spectral embedding, resulting
from the eigendecomposition of
$P\mathrel{\mathop{\mathchar 58\relax}}=WD^{-1},$ (5)
known as the random-walk matrix [10]. More formally, similarly to Eq. 3,
diffusion maps is defined by
$x_{i}\longmapsto\varphi_{i}=[\gamma_{1}^{t}\Phi_{1}(i),\gamma_{2}^{t}\Phi_{2}(i),...,\gamma_{K}^{t}\Phi_{K}(i)]\in\mathbb{R}^{K},$
(6)
where $\\{\Phi_{j}\\}_{j=1}^{K}$ are the first non-trivial eigenvectors of
$P$, $\\{\gamma_{j}\\}_{j=1}^{K}$ are the corresponding eigenvalues and $t>0$ is
the diffusion time. Note, that $P$ and $L_{N}$ have the same eigenvectors, in
reverse order with respect to their eigenvalues.
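Eqs. 5-6 can be evaluated through the symmetric conjugate of $P$, which keeps the computation numerically convenient. A sketch (function name ours), using that $P=W D^{-1}=D^{1/2}SD^{-1/2}$ with $S=D^{-1/2}WD^{-1/2}$:

```python
import numpy as np

def diffusion_map(W, K, t=1):
    """Diffusion-map coordinates of Eq. 6 from P = W D^{-1} (Eq. 5)."""
    d = W.sum(axis=1)
    # P shares its spectrum with the symmetric S = D^{-1/2} W D^{-1/2}
    S = W / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)                  # ascending order
    idx = np.argsort(vals)[::-1][1:K + 1]           # top K, skipping the trivial pair
    gamma = vals[idx]
    Phi = np.sqrt(d)[:, None] * vecs[:, idx]        # right eigenvectors of P
    return (gamma**t) * Phi                         # rows are the varphi_i of Eq. 6
```

Applying $P$ once advances the diffusion time by one, i.e. $P(\gamma^{t}\Phi)=\gamma^{t+1}\Phi$, which gives a quick sanity check of the implementation.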
Diffusion Net (DN) is an autoencoder trained to map between the data and the
DM. The loss function of the encoder is defined by,
$\mathcal{L}_{DN}^{e}(\theta^{e})=\frac{1}{2n}\sum_{i=1}^{n}\mathinner{\\!\left\lVert
f_{\theta^{e}}^{e}(x_{i})-\phi_{i}\right\rVert}^{2}+F(\theta^{e},X),$ (7)
where $\theta^{e}$ denotes the encoder’s parameters,
$f_{\theta^{e}}^{e}(x_{i})$ is the encoder output and
$F(\theta^{e},X)=\frac{\mu}{2}\sum_{l=1}^{L-1}{\mathinner{\\!\left\lVert\theta^{e}_{l}\right\rVert}_{F}^{2}}+\frac{\eta}{2m}\sum_{j=1}^{d}{||(P-\gamma_{j}I)(o^{e}_{j})^{T}||^{2}}$
is a regularization term such that $\theta^{e}_{l}$ are the weights of the
$l$-th layer, $o^{e}_{j}$ is the $j$-th column of the output matrix, $\mu$ and
$\eta$ are regularization parameters. Note, Diffusion Net requires to compute
the embedding of the training set in advance, meaning _it cannot be trained
with mini-batches_ and therefore has difficulty dealing with large datasets.
SpectralNet1 [24] (SpecNet1). This DNN learns the embedding corresponding to $L$
by minimizing the _ratio-cut_ loss of Ng _et al_. [21], without adding an
orthogonality constraint on the solution, with the loss
$\mathcal{L}_{SN1}(\theta)=\frac{1}{m^{2}}\sum_{i,j=1}^{m}{W_{i,j}||y_{i}-y_{j}||^{2}}=\frac{2}{m^{2}}\textrm{tr}(Y^{T}LY),$
(8)
where $y_{i}=f_{\theta}(x_{i})$ is the network output, $m$ is the batch size,
and tr is the trace operator. In order to calculate the eigenvectors of
$L_{N}$, one should normalize $y_{i},y_{j}$ with the corresponding node
degree. In SpectralNet1, orthogonality of the output is enforced by defining
the last layer of the network as a linear layer set to orthogonalize the
output. The last layer’s weights are calculated during training with QR
decomposition over the DNN’s outputs. The authors point out that in order to
get good generalization and approximate orthogonal output at inference, large
batches are required.
SpectralNet2 [9] (SpecNet2). In this recent work the authors suggested to
solve the eigenpair problem of the matrix pencil $(W,D)$. The loss function is
defined by,
$\mathcal{L}_{SN2}(\theta)=\frac{1}{m^{2}}\textrm{tr}\left(-2Y^{T}WY+\frac{1}{m^{2}}Y^{T}DYY^{T}DY\right),$
(9)
where $Y$ is the network’s output. Given the output $Y$, an approximation to
the eigenvectors of $P$, Eq. 5, can be calculated as $\hat{U}=YO$ where
$O\in\mathbb{R}^{K\times K}$ satisfies
$\displaystyle Y^{T}WYO=Y^{T}DYO\Lambda,$ (10)
where $\Lambda$ is a refined approximation of the eigenvalue matrix of
$(W,D)$. Note that Eq. 10 requires a batch for its computation, which may be
problematic at inference. The authors show qualitatively a successful
approximation to the analytical embedding.
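The post-processing of Eq. 10 is a small $K\times K$ generalized eigenproblem. A numpy sketch (function name ours), assuming $Y^{T}DY$ is positive definite so the problem can be reduced to a standard symmetric one via $B^{-1/2}$:

```python
import numpy as np

def specnet2_postprocess(Y, W):
    """Solve Y^T W Y O = Y^T D Y O Lambda (Eq. 10) and return U_hat = Y O."""
    d = W.sum(axis=1)
    A = Y.T @ (W @ Y)
    B = Y.T @ (d[:, None] * Y)                      # Y^T D Y without forming D
    wb, Vb = np.linalg.eigh(B)
    Bm12 = Vb @ np.diag(1.0 / np.sqrt(wb)) @ Vb.T   # B^{-1/2}
    lam, Q = np.linalg.eigh(Bm12 @ A @ Bm12)        # standard symmetric problem
    O = Bm12 @ Q
    return Y @ O, lam
```

Note that this step needs a batch of outputs $Y$, which is exactly the inference-time limitation pointed out above.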
Figure 2: Illustration. The full dataset of three separated clusters, divided
into two subsets and anchors is shown in Fig. 2(a). Figs. 2(b)-2(c) show the
embedding spaces of subset $\\#1$ and subset $\\#2$, respectively. Fig. 2(d)
shows the embedding space of the entire data after aligning the embedding of
subset $\\#1$ to that of subset $\\#2$.
## 4 Our Method
### 4.1 Motivation
Our goal is to learn a model $f\mathrel{\mathop{\mathchar 58\relax}}\mathcal{M}\rightarrow\mathbb{R}^{K}$ which, given a sample $x\in\mathcal{M}$, approximates well the corresponding $\varphi$ of Eq. 3. As
is common with DNN learning, given a large training set, we would like to
train the model by splitting the dataset into batches. A batch can be viewed
as sampling the graph of the training set. A straightforward approach would be
to compute the eigenspace of each batch and to learn a mapping from $x$ to
$\varphi$, using a data loss similar to Diffusion Nets. The problem is that
different samples of the training set most often lead to different embeddings.
Specifically, the same instance $x_{i}$ can be mapped very differently in each
batch.
This can be demonstrated in a very simple toy example, shown in Fig. 2, which
illustrates the core problem and our proposed solution. Three distinct
clusters in Euclidean space are sampled in two trials (batches) and the
eigenspace embedding is computed analytically. Three samples appear in both
subsets, one for each cluster (red color). We refer to the common samples as
_anchors_. The plots of the instances in the embedding space for the two
subsets are shown in Figs. 2(b)-2(c). One can observe the embeddings are
different. Specifically, all anchors, which appear in both samplings, are
mapped differently in a substantial way. It is well known that eigenvector
embedding has a degree of freedom of rotation (as shown for example in [32]).
However, in the case of uneven sampling of clusters there may be also some
scale changes and slight translation (caused by the orthonormality
constraints). We thus approximate these degrees of freedom in the general case
as an affine transformation estimated from the anchors. Aligning one
embedding space to the other one, using this transformation, yields a
consistent embedding, as can be seen in Fig. 2(d). Following the alignment
process, the embedding can be learnt well using batches.
In the toy example of the Three Moons, see Fig. 3, we show the mapping of $9$
anchor-points, as shown in Fig. 3(b). In Figs. 3(c)-3(d) the analytic
computation of the first two non-trivial eigenvectors are plotted for $20$
different batch samples of size $256$ (out of $9000$ nodes in the entire
dataset), all of which contain the $9$ points. In this simple example anchors
located on the same moon receive approximately the same value. However, in
different batches the embedding of the anchors is clearly inconsistent.
Surely, a network cannot be expected to generalize such a mapping. After
learning the transformation and performing alignment, the embedding values are
consistent. In Figs. 3(e)-3(f) the values are shown after our correction
procedure. This consistency allows to train DNN to learn the desired
embedding, by dividing the data into batches. The result of the trained DNN
model for the Three Moons dataset appears in Fig. 1 (second from left). These
toy examples lead us to the detailed description of our algorithm.
Figure 3: Three-Moons toy example. The full dataset is shown in Fig. 3(a) and
the chosen anchor-nodes in Fig. 3(b). Figs. 3(c)\- 3(d) show the values of the
two leading eigenvectors for the anchors, for 20 different graph-samples.
Figs. 3(e)\- 3(f) show those values after our proposed alignment.
### 4.2 BASiS Algorithm
We propose to calculate the embedding space with batches. To obtain
consistency in this representation, we calculate the first-order approximation
of the distortion obtained in the eigenvector values between different samples
of the data. The main steps of our algorithm are as follows: First we perform
two preliminary initialization steps.
Defining an anchor set. Draw $l$ samples from the data. This subset is denoted
as $V^{a}$ and will be added to every batch in the learning process.
Defining the reference embedding space. We would like to define the embedding
space of the anchor set as a reference space. However, to get more information
about the manifold $\mathcal{M}$, we add $m-l$ samples (randomly) and term it
as the reference set $V^{ref}$. After calculating the embedding
$V^{ref}\to\varphi^{ref}$ (as in Eq. 3), one can extract the coordinates of
the anchor samples,
$V^{a}\to\\{\varphi^{a,ref}_{i}\\}_{i=1}^{l}.$ (11)
Following this initialization, the main steps of the training are as follows:
Calculate the embedding space over a new batch. Draw $m-l$ new samples and add
them to the anchor set. Let us denote the union set as $V^{b}$. We calculate
$\\{\varphi_{i}\\}_{i=1}^{m}$, the embedding of $V^{b}$ and extract the
embedding $\\{\varphi^{a}_{i}\\}_{i=1}^{l}$ corresponding to the anchors.
Calculate the alignment transformation. Now, we calculate the alignment
transformation between $\\{\varphi^{a}_{i}\\}_{i=1}^{l}$ to
$\\{\varphi^{a,ref}_{i}\\}_{i=1}^{l}$. More formally, for
$\varphi^{a},\varphi^{a,ref}\in\mathbb{R}^{K}$ we find
$A\in\mathbb{R}^{K\times K}$ and $b\in\mathbb{R}^{K}$ which minimize
$\min_{A,b}\sum_{i=1}^{l}{\mathinner{\\!\left\lVert\varphi^{a,ref}_{i}-(A\varphi^{a}_{i}+b)\right\rVert}^{2}}.$
(12)
Alternatively, one can define $\hat{\varphi}^{a}=[\varphi^{a},1]$ and find the
transformation $T\in\mathbb{R}^{K\times(K+1)}$ such that
$\min_{T}\sum_{i=1}^{l}{\mathinner{\\!\left\lVert\varphi^{a,ref}_{i}-T\hat{\varphi}^{a}_{i}\right\rVert}^{2}}.$
(13)
In this case there are $K\times(K+1)$ degrees of freedom. Each anchor provides
$K$ constraints, which means that at least $K+1$ anchors are needed in order to
solve this least-squares problem. Since in many real-world problems there is
noise in the measurements, it is customary to solve such problems in an
overdetermined setting, using a higher number of anchors. In addition, given
a large number of anchors, the transformation can be calculated using best
matches - for example by using the RANdom SAmple Consensus (RANSAC) algorithm
[11].
Batch Alignment. Given the transformation $T$, we can align the embedding
$\\{\varphi_{i}\\}_{i=1}^{m}$ of all the instances of $V^{b}$. We define
$\hat{\varphi}=[\varphi,1]$ and update
$\varphi\leftarrow T\hat{\varphi}.$ (14)
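Eqs. 13-14 amount to one ordinary least-squares solve per batch. A minimal numpy sketch (function names ours):

```python
import numpy as np

def fit_alignment(phi_a, phi_a_ref):
    """Least-squares affine map T of Eq. 13: phi_a_ref ~= T [phi_a; 1]."""
    l = phi_a.shape[0]
    phi_hat = np.hstack([phi_a, np.ones((l, 1))])   # append homogeneous 1
    # each row of T solves an ordinary least-squares problem
    T, *_ = np.linalg.lstsq(phi_hat, phi_a_ref, rcond=None)
    return T.T                                      # shape (K, K+1)

def align(phi, T):
    """Apply Eq. 14 to a whole batch of embeddings."""
    phi_hat = np.hstack([phi, np.ones((phi.shape[0], 1))])
    return phi_hat @ T.T
```

When the anchors are noisy, the same fit can be wrapped in RANSAC, as the paper suggests; the least-squares core stays unchanged.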
Gradient Step. Now that we have a mechanism that allows us to obtain a consistent
embedding, we can train the DNN by dividing the data into batches and use a
simple MSE loss function
$\mathcal{L}_{BASiS}(\theta)=\frac{1}{m}\sum_{i=1}^{m}\mathinner{\\!\left\lVert
y_{i}-\varphi_{i}\right\rVert}^{2},$ (15)
where $y_{i}=f_{\theta}(x_{i})$ is the DNN’s output and $\varphi_{i}$ is the
embedding of $x_{i}$ after alignment.
The full training scheme is detailed in Algorithm 1.
Algorithm 1 BASiS Training Scheme
1:Inputs:
2: data features $\\{x_{i}\\}_{i=1}^{n}$, number of eigenvectors $K$, batch
size $m$, number of anchors $l$.
3:Outputs:
4: Spectral embedding model $f\mathrel{\mathop{\mathchar
58\relax}}\mathcal{M}\rightarrow\mathbb{R}^{K}$
5:Initialize:
6: Define anchor set $V^{a}$. Extract $\\{\varphi^{a,ref}_{i}\\}_{i=1}^{l}$,
the reference embedding of $V^{a}$ using Eq. 11
7:while $\mathcal{L}_{BASiS}(\theta)$ not converged do
8: Draw $m-l$ new samples.
9: Define node set $V^{b}$ as union of the anchors with the new sampled nodes.
10: Calculate the embedding $\\{\varphi_{i}\\}_{i=1}^{m}$ of $V^{b}$.
11: Calculate the optimal transformation $T$, Eq. 13.
12: Align $\\{\varphi_{i}\\}_{i=1}^{m}$ with $T$ , Eq. 14.
13: Do a gradient step of $\mathcal{L}_{BASiS}$, Eq. 15.
14:end while
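To make Algorithm 1 concrete, the following self-contained sketch (all names and toy data ours) embeds two batches of a three-cluster toy set that share three anchors, fits the transformation of Eq. 13 on the anchor rows, and aligns the second batch; the anchor coordinates then agree across batches.

```python
import numpy as np

def embed(X, K, sigma=1.0):
    """First K non-trivial Laplacian eigenvectors of one batch graph."""
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    W = np.exp(-d2 / (2 * sigma**2))
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.eigh(L)[1][:, 1:K + 1]         # skip the trivial eigenvector

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
anchors = centers + 0.1 * rng.normal(size=(3, 2))   # one anchor per cluster

def batch(seed):
    r = np.random.default_rng(seed)
    pts = np.concatenate([c + 0.3 * r.normal(size=(20, 2)) for c in centers])
    return np.vstack([anchors, pts])                # anchors occupy rows 0..2

phi_ref = embed(batch(1), K=2)                      # reference batch embedding
phi_b = embed(batch(2), K=2)                        # a new batch: inconsistent axes
H = np.hstack([phi_b[:3], np.ones((3, 1))])         # anchor rows, homogeneous
T, *_ = np.linalg.lstsq(H, phi_ref[:3], rcond=None) # Eq. 13
phi_aligned = np.hstack([phi_b, np.ones((len(phi_b), 1))]) @ T   # Eq. 14
```

In the full algorithm the aligned coordinates `phi_aligned` would serve as regression targets for the gradient step of Eq. 15.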
### 4.3 BASiS for feature perturbation
In the process of network training, the features are often not fixed and
slightly change each iteration during training. In this case the adjacency
values change and hence naturally also the embedding space. We suggest an
algorithm to allow iterative changes in the feature space (inducing different
graph metric). This algorithm is also based on an alignment technique. Similar
to Algorithm 1, we define anchor nodes. Given the current features, we extract
$\\{\varphi^{a,prev}_{i}\\}_{i=1}^{l}$, the current spectral embedding of the
anchors. When the features are updated, we find the new embedding of the
anchors $\\{\varphi^{a,update}_{i}\\}_{i=1}^{l}$. In order to maintain
consistency in the learning process, we find a transformation $T_{G}$, as in
Eq. 13, that aligns the updated anchor embedding to the previous one. Then we
align the entire embedding space according to the calculated transformation.
Algorithm 2 summarizes the proposed method.
Algorithm 2 BASiS under iterative feature change
1:Inputs:
2: $\\{\varphi^{a,prev}_{i}\\}_{i=1}^{l}$ the anchors embedding over previous
features $\\{x_{i}\\}_{i=1}^{n}$, updated features
$\\{\hat{x}_{i}\\}_{i=1}^{n}$.
3:Outputs:
4: $\\{\varphi^{update}_{i}\\}_{i=1}^{n}$ the spectral embedding over the
updated features, aligned to $\\{\varphi^{a,prev}_{i}\\}_{i=1}^{l}$.
5:Calculate the embedding $\\{\varphi^{update}_{i}\\}_{i=1}^{n}$ over the
updated features.
6:Extract $\\{\varphi^{a,update}_{i}\\}_{i=1}^{l}$ the embedding correspond to
the anchors.
7:Calculate the transformation $T_{G}$ between
$\\{\varphi^{a,prev}_{i}\\}_{i=1}^{l}$ and
$\\{\varphi^{a,update}_{i}\\}_{i=1}^{l}$.
8:Align $\\{\varphi^{update}_{i}\\}_{i=1}^{n}$ with $T_{G}$.
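Algorithm 2 reuses the same least-squares machinery, only now the reference frame is the previous embedding of the anchors. A sketch (function name ours):

```python
import numpy as np

def realign_after_feature_update(phi_update, anchor_idx, phi_a_prev):
    """Algorithm 2: map the updated embedding back onto the previous frame.

    phi_update : (n, K) embedding computed from the updated features
    anchor_idx : indices of the l anchor nodes
    phi_a_prev : (l, K) anchor embedding under the previous features
    """
    l = len(anchor_idx)
    H = np.hstack([phi_update[anchor_idx], np.ones((l, 1))])
    T_G, *_ = np.linalg.lstsq(H, phi_a_prev, rcond=None)   # as in Eq. 13
    return np.hstack([phi_update, np.ones((len(phi_update), 1))]) @ T_G
```

If the feature update induces an (approximately) affine drift of the embedding space, this step removes it exactly at the anchors and, in the least-squares sense, everywhere else.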
Measures | Networks | MNIST | Fashion-MNIST | SVHN | CIFAR-10
---|---|---|---|---|---
$d_{G}$↓ | Diffusion-Net | $0.204\pm 0.058$ | $0.488\pm 0.238$ | $1.909\pm 0.238$ | $1.022\pm 0.250$
| SpecNet1 | $0.386\pm 0.074$ | $0.375\pm 0.132$ | $3.526\pm 0.529$ | $2.256\pm 0.471$
| SpecNet2 | $1.388\pm 0.262$ | $1.976\pm 0.210$ | $1.903\pm 0.242$ | $2.970\pm 0.682$
| BASiS (Ours) | $\bf{0.107\pm 0.038}$ | $\bf{0.284\pm 0.073}$ | $\bf{1.656\pm 0.170}$ | $\bf{0.803\pm 0.085}$
$d_{\perp}$↓ | Diffusion-Net | $0.535\pm 0.365$ | $0.823\pm 0.664$ | $1.532\pm 0.354$ | $2.957\pm 1.837$
| SpecNet1 | $6.296\pm 0.922$ | $6.384\pm 0.899$ | $4.507\pm 0.821$ | $5.169\pm 0.775$
| SpecNet2 | $9.486\pm 0.001$ | $8.561\pm 1.397$ | $4.104\pm 0.269$ | $4.922\pm 0.102$
| BASiS (Ours) | $\bf{0.247\pm 0.076}$ | $\bf{0.590\pm 0.144}$ | $\bf{0.488\pm 0.098}$ | $\bf{0.407\pm 0.095}$
NMI↑ | Diffusion-Net | $0.944\pm 0.041$ | $0.759\pm 0.085$ | $0.645\pm 0.016$ | $0.466\pm 0.034$
| SpecNet1 | $0.911\pm 0.008$ | $0.761\pm 0.011$ | $0.665\pm 0.018$ | $0.443\pm 0.012$
| SpecNet2 | $0.925\pm 0.012$ | $0.759\pm 0.010$ | $0.701\pm 0.009$ | $0.466\pm 0.013$
| BASiS (Ours) | $\bf{0.961\pm 0.001}$ | $\bf{0.798\pm 0.001}$ | $\bf{0.736\pm 0.001}$ | $\bf{0.501\pm 0.001}$
ACC↑ | Diffusion-Net | $0.944\pm 0.030$ | $0.781\pm 0.179$ | $0.687\pm 0.303$ | $0.620\pm 0.062$
| SpecNet1 | $0.963\pm 0.005$ | $0.815\pm 0.029$ | $0.811\pm 0.039$ | $0.637\pm 0.029$
| SpecNet2 | $0.966\pm 0.007$ | $0.801\pm 0.023$ | $0.813\pm 0.015$ | $0.606\pm 0.039$
| BASiS (Ours) | $\bf{0.986\pm 0.001}$ | $\bf{0.865\pm 0.003}$ | $\bf{0.880\pm 0.001}$ | $\bf{0.688\pm 0.001}$
Accuracy(%)↑ | Diffusion-Net | $95.508\pm 1.449$ | $86.207\pm 0.196$ | $86.850\pm 1.386$ | $67.316\pm 2.112$
| SpecNet1 | $92.278\pm 4.776$ | $84.123\pm 1.229$ | $85.154\pm 0.377$ | $65.336\pm 0.626$
| SpecNet2 | $97.026\pm 0.546$ | $85.953\pm 0.240$ | $87.469\pm 0.130$ | $67.093\pm 0.644$
| BASiS (Ours) | $\bf{98.522\pm 0.065}$ | $\bf{87.202\pm 0.187}$ | $\bf{88.021\pm 0.064}$ | $\bf{68.887\pm 0.128}$
Table 1: Spectral embedding performance comparison. Average performance
obtained over 10 different installations of each of the four methods. In each
experiment we learn an embedding space in dimension of 10 for 1000 training
iterations using batches of size 512.
## 5 Experimental Results
In this section we examine the ability of BASiS to learn the graph-spectral
embedding over different datasets, quantifying its success using several
performance measures. Our method is able to learn any desired eigenspace embedding
(since it is supervised by analytic calculations). To fairly compare our
method to the ones mentioned in Sec. 3 we calculate the eigenspace of $L_{N}$
(Eq. 4). For all methods the DNN’s architecture includes $5$ fully connected
(FC) layers with ReLU activation in between (see full details in the
supplementary).
### 5.1 Evaluation Metrics
We evaluate our results using several measures. We calculate the Grassmann
distance (projection metric) [14] between the network output and the
analytically calculated eigenvectors. The squared Grassmann distance between
two orthonormal matrices $Y_{1},Y_{2}\in\mathbb{R}^{n\times K}$ is defined as:
$d_{G}(Y_{1},Y_{2})=K-\sum_{i=1}^{K}{\cos^{2}\theta_{i}},$ (16)
where $0\leq\theta_{1}\leq...\leq\theta_{K}\leq\frac{\pi}{2}$ are the
principal angles between the two subspaces $span(Y_{1})$ and $span(Y_{2})$.
The distance is in $[0,K]$ where lower values indicate greater proximity
between the subspaces.
A second measure is the degree of orthogonality of the DNN’s output $Y$. We
use the following orthogonality measure:
$d_{\perp}(Y)=||Y^{T}Y-I||_{F}^{2},$ (17)
where $I$ is an identity matrix and $||\cdot||_{F}$ is the Frobenius norm. For
$Y$ containing columns of orthonormal vectors we get $d_{\perp}(Y)=0$. In
general, we expect this measure to be close to zero in proper embeddings.
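Both subspace measures reduce to a few lines of linear algebra; here is a sketch (function names ours), computing the principal-angle cosines of Eq. 16 via an SVD and the defect of Eq. 17 directly:

```python
import numpy as np

def grassmann_distance_sq(Y1, Y2):
    """Squared Grassmann distance of Eq. 16 between span(Y1) and span(Y2)."""
    Q1, _ = np.linalg.qr(Y1)                          # orthonormal bases
    Q2, _ = np.linalg.qr(Y2)
    cos = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), 0.0, 1.0)
    return Y1.shape[1] - np.sum(cos**2)

def orthogonality_defect(Y):
    """d_perp of Eq. 17: squared Frobenius deviation of Y^T Y from identity."""
    return np.linalg.norm(Y.T @ Y - np.eye(Y.shape[1]), ord='fro')**2
```

The Grassmann distance is invariant to any invertible mixing of the columns (it compares subspaces, not bases), whereas $d_{\perp}$ penalizes exactly such departures from orthonormality.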
To evaluate the potential clustering performance we examined two common
metrics: Normalized mutual information (NMI) and unsupervised clustering
accuracy (ACC). The clustering result is achieved by preforming the K-Means
algorithm over the spectral embedding. Both indicators are in the range
$[0,1]$, where high values indicate a better correspondence between the
clustering result and the true labels. NMI is defined as,
$NMI(c,\hat{c})=\frac{I(c,\hat{c})}{\max\\{H(c),H(\hat{c})\\}},$ (18)
where $I(c,\hat{c})$ is the mutual information between the true labels $c$ and
the clustering result $\hat{c}$ and $H(\cdot)$ denotes entropy. ACC is defined
as,
$ACC(c,\hat{c})=\frac{1}{n}\max_{\pi\in\Pi}{\sum_{i=1}^{n}{\mathbbm{1}\\{c_{i}=\pi(\hat{c}_{i})\\}}},$
(19)
where $\Pi$ is the set of possible permutations of the clustering results. To
choose the optimal permutation $\pi$ we followed [24] and used the Kuhn-
Munkres algorithm [19].
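The optimal permutation in Eq. 19 can be found with the Kuhn-Munkres algorithm, as in the paper; for a handful of clusters an exhaustive search gives the same optimum, which keeps this sketch (function name ours) dependency-free:

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(c, c_hat):
    """ACC of Eq. 19 by brute-force search over label permutations."""
    labels = np.unique(np.concatenate([c, c_hat]))
    best = 0.0
    for perm in permutations(labels):
        relabel = dict(zip(labels, perm))
        acc = np.mean([ci == relabel[chi] for ci, chi in zip(c, c_hat)])
        best = max(best, acc)
    return best
```

The brute-force search is factorial in the number of clusters, so for the 10-class datasets used below the Kuhn-Munkres algorithm is the practical choice.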
Finally, we examine how suitable the embedding is for classification. We
trained (with Cross-Entropy loss) a supervised linear regression model
containing a single fully connected layer without activation, and examined its
accuracy:
$Accuracy(c,\hat{c})=\frac{1}{n}{\sum_{i=1}^{n}{\mathbbm{1}\\{c_{i}=\hat{c}_{i}\\}}}.$
(20)
### 5.2 Spectral Clustering
In this section we examine the ability of our method to learn the spectral
embedding for clustering of different datasets. First, we examine the
performance for well-known spectral clustering toy examples, appearing in Fig.
1. In these examples the dataset includes $9000$ nodes and the model is
trained by calculating the first non-trivial eigenvectors (sampling $256$
nodes, using $1000$ iterations). In all the experiments NMI and ACC of 1.0
were obtained over the test set (i.e., perfect clustering results). In
addition, as demonstrated in Fig. 1, the learnt model is able to generalize the
clustering result and performs smooth interpolation and extrapolation of the
space mapping. We note that no explicit regularization loss is used in our
method; generalization and smoothness are obtained naturally through the
neural training process.
Next we compare the performance of BASiS to those of the models mentioned in
Sec. 3. We examine the results over 4 well-known image datasets: MNIST,
Fashion-MNIST [31], SVHN [20] and CIFAR-10 [16]. For each dataset we first
learn an initial low-dimensional representation, found to be successful for
graph-based learning, by a Siamese network, a convolutional neural network
(CNN) trained in a supervised manner using Contrastive loss
$\displaystyle\mathcal{L}_{Cont}(x_{i},x_{j},\theta)=\mathbbm{1}\\{y_{i}=y_{j}\\}\mathinner{\\!\left\lVert
f^{rep}_{\theta}(x_{i})-f^{rep}_{\theta}(x_{j})\right\rVert}_{2}^{2}$ (21)
$\displaystyle+\mathbbm{1}\\{y_{i}\neq
y_{j}\\}\max(0,\epsilon-\mathinner{\\!\left\lVert
f^{rep}_{\theta}(x_{i})-f^{rep}_{\theta}(x_{j})\right\rVert}_{2})^{2},$
where $f^{rep}_{\theta}(x_{i})$ is the Siamese network’s output for input
image $x_{i}$ labeled as $y_{i}$, and $\epsilon\in\mathbb{R}^{+}$ is a margin parameter. More details on
the architecture of the Siamese network are in the supplementary material. We
use these representations as inputs to the spectral embedding models. In all
the experiments the graph affinity matrix $W$ is defined by Eq. 1, using the
$50$ nearest neighbors of each node. We use $50$ neighbors so that all
methods can perform well (see sensitivity analysis hereafter). The model is
trained to learn the first $K$ eigenvectors, where $K$ is equal to the number
of clusters. The batch size is set to $m=512$. For our method, we randomly
select $25$ anchor-nodes from each cluster and use RANSAC to find the best
transformation.
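For reference, the per-pair contrastive loss of Eq. 21 can be sketched as follows (function name ours):

```python
import numpy as np

def contrastive_loss(fi, fj, same_label, eps=1.0):
    """Eq. 21 for one pair of Siamese outputs: pull same-label pairs
    together, push different-label pairs apart up to the margin eps."""
    dist = np.linalg.norm(fi - fj)
    if same_label:
        return dist**2
    return max(0.0, eps - dist)**2
```

Different-label pairs already separated by more than the margin contribute zero loss, so the network concentrates on hard negatives.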
Comparison between the methods is summarized in Table 1. The numbers are
obtained by showing the average and empirical standard deviation of each
measure, as computed based on $10$ experiments.
Since the training process in Diffusion Net is not scalable, in each
initialization we randomly sampled a single batch from the training set and
trained the model with the analytically calculated spectral embedding. In
relation to SpecNet2, as indicated in Sec. 3, to obtain an approximation of
the spectral space, SpecNet2 requires a post-processing stage over the
network’s output. In order to maintain consistency and obtain reasonable
performance for all measures, the post-processing is performed over the entire
test set (this naturally limits the method and increases the level of
complexity at inference). More implementation details are in the
supplementary. Table 1 shows that our method is superior and more stable in
approximating the analytical embedding and in clustering.
We further examined the robustness of the methods to changes in the number of
neighbors for each node. This parameter defines the connectivity of the graph
in the affinity matrix. Fig. 4 shows the average and the empirical standard
deviation of the performance measures, for $50$ training procedures over the
MNIST dataset. It is shown that our method is highly robust and consistent. We
note the high sensitivity of Diffusion Net to this meta-parameter.
Figure 4: Robustness to the node neighborhood. The average and standard
deviation of $50$ different training experiments over the MNIST dataset, for
different number of neighbors per node.
Figure 5: Diffusion Maps encoding. Data set of 2000 snapshots of bunny on
rotating display [17]. Fig. 5(a) shows snapshot examples. Fig. 5(b) presents
the analytically calculated DM embedding for the full dataset. The test set
analytical embedding is shown in Fig. 5(c) and the network output for the test
set in Fig. 5(d).
### 5.3 Diffusion Maps Encoding
We examine the ability of our model to learn the DM embedding (6). For this
purpose we use the dataset from [17], which includes $2000$ snapshots of a toy
bunny placed on a rotating surface. Six example frames are shown in
Fig. 5(a). We define a graph using the $20$ nearest neighbors of each node,
and calculate the random-walk matrix $P$ (5). Raw pixels are used as features
(dimension $288{,}000$). Dimension reduction is performed with DM to
$\mathbb{R}^{2}$. Fig. 5(b) shows the analytically calculated embedding
obtained from the entire dataset. Our model was trained to approximate this
embedding using $1500$ images; testing is performed on the remaining $500$
images. Fig. 5(c) shows the analytically calculated embedding of the test
snapshots, and Fig. 5(d) the embedding produced by our trained model. Our
method approximates the analytic calculation well.
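The pipeline of this experiment (affinity graph, random-walk matrix $P$ (5), DM embedding (6)) can be sketched as follows. This is an illustrative NumPy implementation of standard Diffusion Maps, not necessarily identical to the one used here.

```python
import numpy as np

def diffusion_maps(W, dim=2, t=1):
    """Diffusion Maps embedding from a symmetric affinity matrix W.

    The random-walk matrix P = D^{-1} W is diagonalized via its symmetric
    conjugate S = D^{-1/2} W D^{-1/2}; each node is embedded by the top
    non-trivial right eigenvectors of P, scaled by eigenvalue^t.
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(d[:, None] * d[None, :])
    lam, V = np.linalg.eigh(S)            # ascending eigenvalues
    order = np.argsort(lam)[::-1]
    lam, V = lam[order], V[:, order]
    Psi = V / np.sqrt(d)[:, None]         # right eigenvectors of P
    # Drop the trivial constant eigenvector (eigenvalue 1).
    return Psi[:, 1:dim + 1] * lam[1:dim + 1] ** t

# Toy check: a ring graph should embed as points on a circle,
# mirroring the circular embedding of the rotating-bunny snapshots.
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
emb = diffusion_maps(W, dim=2)
```

The symmetric conjugate trick keeps the eigendecomposition numerically stable while recovering the eigenvectors of the non-symmetric random-walk matrix.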
### 5.4 Iterative Change of Features
In this section we illustrate the performance of Algorithm 2 for aligning the
spectral embedding space under changing features. We define two DNN models.
The first one extracts features and is trained to minimize the contrastive
loss (21). The second calculates the spectral embedding, using Algorithm 1,
based on the output of the feature model. Both models are trained
simultaneously: the feature model is trained for $1500$ iterations, and every
tenth iteration we perform an update step of the spectral embedding model. To
maintain consistency under the feature change, we align the spectral space
using Algorithm 2 before each update step of the spectral model. Fig. 6 shows
the results of the training process over the
MNIST dataset where the learnt features are of dimension $16$ and the
eigenvectors are of dimension $9$. Fig. 7 shows a visualization (TSNE [28]) of
the test set embedding at the beginning and the end of the training process.
The two modules converge and reach good clustering results. Moreover, once the
loss of the spectral module is sufficiently low (around iteration 800, marked
with a red line in Fig. 6), the clustering performance of the spectral module
is comparable to that of the analytic embedding calculated with the current
features (the orange and green plots tightly follow the blue plot).
To illustrate the role of the transformation $T_{G}$, Fig. 8 shows the results
of another experiment with a similar setting. For better visualization,
training is restricted to three MNIST digits: 4, 7 and 9, and the embedding
(and its visualization) is two-dimensional. It can be seen that $T_{G}$
imposes consistency of the resulting embedding under the feature change,
allowing convergence and a good approximation of the eigenvector space.
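The alignment role of $T_{G}$ can be illustrated with a simpler stand-in: a least-squares orthogonal (Procrustes) fit on anchor embeddings. The paper's $T_{G}$ is found with RANSAC; the closed-form SVD solution below is an assumption made to keep the sketch short.

```python
import numpy as np

def align_embedding(V_ref_old, V_ref_new):
    """Orthogonal map T sending old anchor embeddings onto new ones.

    A Procrustes-style sketch of the alignment performed by T_G: after the
    feature model updates, the old spectral targets are rotated so that they
    stay consistent with the freshly computed embedding of the anchors.
    """
    # SVD of the cross-covariance yields the optimal orthogonal map.
    U, _, Vt = np.linalg.svd(V_ref_old.T @ V_ref_new)
    return U @ Vt

rng = np.random.default_rng(0)
V_old = rng.standard_normal((40, 3))
# Simulate a feature change that rotates (and sign-flips) the eigenspace.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V_new = V_old @ Q
T = align_embedding(V_old, V_new)
```

When the drift of the eigenspace is exactly orthogonal, as simulated above, the fit recovers it exactly; in training the fit is only approximate, which is why a robust estimator is used in practice.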
Figure 6: Training under feature change. Evolution of the measures during
training (MNIST). 6(a)-6(b): losses of the features module and the spectral
embedding module, respectively. 6(c)-6(d): clustering measures. Blue: analytic
calculation of the eigenvectors based on the current features. Orange and
green: output of the spectral embedding module (training and validation sets,
respectively).
Figure 7: Test set embedding. Visualization (TSNE) of the spectral embedding,
MNIST test set. Fig. 7(a) shows the spectral embedding at the beginning of
training, Fig. 7(b) at the end.
Figure 8: Feature change demonstration. Left column: the analytically
calculated spectral embedding of $V^{ref}$ obtained over the features module
output; the distortion is a consequence of the feature change. Middle column:
spectral embedding of $V^{ref}$ after alignment with $T_{G}$. Right column:
network spectral embedding of the test set. The rows show the embeddings at
the 10th, 100th, 500th and 1500th iterations, respectively.
## 6 Conclusion
In this paper we introduced BASiS, a new method for learning the eigenspace of
a graph in a supervised manner, allowing the use of batch training. Our
proposed method has been shown to be highly robust and accurate in
approximating the analytic spectral space, surpassing all other methods with
respect to Grassmann distance, orthogonality, NMI and ACC over various
benchmarks. In addition, we proposed an adaptation of our procedure for
learning the eigenspace under iterative changes of the graph metric (as is
common in neural training). Our method can be viewed as a useful building
block for integrating analytical spectral methods into deep learning
algorithms, enabling effective use of the extensive theory and practice
related to classical spectral embedding.
Acknowledgements. We acknowledge support by the Israel Science Foundation
(Grant No. 534/19), by the Ministry of Science and Technology (Grant No.
5074/22) and by the Ollendorff Minerva Center.
## References
* [1] Carlos Alzate and Johan AK Suykens. Multiway spectral clustering with out-of-sample extensions through weighted kernel PCA. IEEE transactions on pattern analysis and machine intelligence, 32(2):335–347, 2008.
* [2] Angelica I Aviles-Rivero, Philip Sellars, Carola-Bibiane Schönlieb, and Nicolas Papadakis. Graphxcovid: explainable deep graph diffusion pseudo-labelling for identifying covid-19 on chest x-rays. Pattern Recognition, 122:108274, 2022.
* [3] Leah Bar and Nir Sochen. Strong solutions for pde-based tomography by unsupervised learning. SIAM Journal on Imaging Sciences, 14(1):128–155, 2021.
* [4] Mohamed-Ali Belabbas and Patrick J Wolfe. On landmark selection and sampling in high-dimensional data analysis. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 367(1906):4295–4312, 2009.
* [5] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural computation, 15(6):1373–1396, 2003.
* [6] Ido Ben-Shaul, Leah Bar, and Nir Sochen. Solving the functional eigen-problem using neural networks. arXiv preprint arXiv:2007.10205, 2020.
* [7] Yoshua Bengio, Jean-François Paiement, Pascal Vincent, Olivier Delalleau, Nicolas Le Roux, and Marie Ouimet. Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. Advances in neural information processing systems, 16, 2003.
* [8] Xavier Bresson, Thomas Laurent, David Uminsky, and James H Von Brecht. Multiclass total variation clustering. arXiv preprint arXiv:1306.1185, 2013.
* [9] Ziyu Chen, Yingzhou Li, and Xiuyuan Cheng. Specnet2: Orthogonalization-free spectral embedding by neural networks. arXiv preprint arXiv:2206.06644, 2022.
* [10] Ronald R Coifman and Stéphane Lafon. Diffusion maps. Applied and computational harmonic analysis, 21(1):5–30, 2006.
* [11] Martin A Fischler and Robert C Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–395, 1981.
* [12] Charless Fowlkes, Serge Belongie, Fan Chung, and Jitendra Malik. Spectral grouping using the Nyström method. IEEE transactions on pattern analysis and machine intelligence, 26(2):214–225, 2004.
* [13] Cristina Garcia-Cardona, Ekaterina Merkurjev, Andrea L Bertozzi, Arjuna Flenner, and Allon G Percus. Multiclass data segmentation using diffuse interface methods on graphs. IEEE transactions on pattern analysis and machine intelligence, 36(8):1600–1613, 2014.
* [14] Jihun Hamm and Daniel D Lee. Grassmann discriminant analysis: a unifying view on subspace-based learning. In Proceedings of the 25th international conference on Machine learning, pages 376–383, 2008.
* [15] Ori Katz, Ronen Talmon, Yu-Lun Lo, and Hau-Tieng Wu. Alternating diffusion maps for multimodal data fusion. Information Fusion, 45:346–360, 2019.
* [16] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [17] Roy R Lederman and Ronen Talmon. Learning the geometry of common latent variables using alternating-diffusion. Applied and Computational Harmonic Analysis, 44(3):509–536, 2018.
* [18] Gal Mishne, Uri Shaham, Alexander Cloninger, and Israel Cohen. Diffusion nets. Applied and Computational Harmonic Analysis, 47(2):259–285, 2019.
* [19] James Munkres. Algorithms for the assignment and transportation problems. Journal of the society for industrial and applied mathematics, 5(1):32–38, 1957.
* [20] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
* [21] Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. Advances in neural information processing systems, 14, 2001.
* [22] Antonio Ortega, Pascal Frossard, Jelena Kovačević, José MF Moura, and Pierre Vandergheynst. Graph signal processing: Overview, challenges, and applications. Proceedings of the IEEE, 106(5):808–828, 2018.
* [23] Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. science, 290(5500):2323–2326, 2000.
* [24] Uri Shaham, Kelly Stanton, Henry Li, Boaz Nadler, Ronen Basri, and Yuval Kluger. Spectralnet: Spectral clustering using deep neural networks. In Proceedings of the 6th International Conference on Learning Representations, 2018.
* [25] Jianbo Shi and Jitendra Malik. Motion segmentation and tracking using normalized cuts. In Sixth international conference on computer vision (IEEE Cat. No. 98CH36271), pages 1154–1160. IEEE, 1998.
* [26] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on pattern analysis and machine intelligence, 22(8):888–905, 2000.
* [27] Suman Tatiraju and Avi Mehta. Image segmentation using k-means clustering, em and normalized cuts. Department of EECS, 1:1–7, 2008.
* [28] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of machine learning research, 9(11), 2008.
* [29] E Weinan, Jiequn Han, and Arnulf Jentzen. Algorithms for solving high dimensional pdes: from nonlinear monte carlo to machine learning. Nonlinearity, 35(1):278, 2021.
* [30] Christopher Williams and Matthias Seeger. Using the nyström method to speed up kernel machines. Advances in neural information processing systems, 13, 2000.
* [31] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
* [32] Lihi Zelnik-Manor and Pietro Perona. Self-tuning spectral clustering. Advances in neural information processing systems, 17, 2004.
# On regular but non-smooth integral curves
Cesar Hilario Mathematisches Institut, Heinrich-Heine-Universität, 40204
Düsseldorf, Germany<EMAIL_ADDRESS>and Karl-Otto Stöhr IMPA -
Instituto Nacional de Matemática Pura e Aplicada - Estrada Dona Castorina,
110, Rio de Janeiro, Brazil 22460-320<EMAIL_ADDRESS>
###### Abstract.
Given a regular but non-smooth geometrically integral curve over an imperfect
field, we establish a bound for the number of iterated Frobenius pullbacks
needed in order to transform a non-smooth non-decomposed point into a rational
point. This provides an algorithm to compute geometric $\delta$-invariants of
non-smooth points and a procedure to construct fibrations with moving
singularities of prescribed $\delta$-invariants. We show that the bound is
sharp in characteristic 2, and we further study the geometry of a pencil of
plane projective rational quartics in characteristic 2 whose generic fibre
attains our bound. On our way, we prove several results on separable and non-
decomposed points that might be of independent interest.
###### 2010 Mathematics Subject Classification:
14G17, 14H05, 14H45, 14D06
## 1\. Introduction
Bertini’s theorem on variable singular points, also known as the Bertini-Sard
theorem, is nowadays one of the most used theorems in algebraic geometry. In
its modern version, it states that in characteristic zero almost every fibre
of a dominant morphism $\phi:T\to B$ of integral smooth algebraic varieties
over an algebraically closed field $k$ is smooth. This is no longer the case
in positive characteristic, as already pointed out by Zariski [Zar44] in the
1940s. The most familiar counterexamples are the quasi-elliptic fibrations
that arise in the classification of smooth surfaces by Bombieri and Mumford in
characteristics $2$ and $3$ (see [BM76, Lan79]).
From the point of view of Grothendieck’s scheme theory, the generic fibre
$C:=T\times_{B}\operatorname{Spec}k(B)$
of the fibration $\phi:T\to B$ is a regular scheme over the function field
$K:=k(B)$ of the base $B$, yet it may happen that the geometric generic fibre
$C\otimes_{K}\overline{K}=C\times_{\operatorname{Spec}(K)}\operatorname{Spec}\overline{K}$
is non-regular. Since such non-regularity occurs precisely when every special
fibre is singular, this reveals a deep connection between the failure of
Bertini’s theorem and the existence of regular schemes $C$ defined over an
imperfect field $K$ that are non-smooth, i.e., for which the base extension
$C\otimes_{K}\overline{K}$ is singular. Such existence represents a striking
feature of geometry in positive characteristic, that results from the fact
that over imperfect fields the notions of smoothness and regularity differ:
every smooth variety (i.e., smooth scheme of finite type over a field) is
regular, but not every regular variety is smooth.111However, a rational point
is smooth if and only if it is regular [FaSc20, Corollary 2.6]. In several
areas such as birational geometry, and particularly in the Minimal Model
Program, these regular but non-smooth schemes cause difficulties when one
tries to apply zero characteristic techniques to positive characteristic
situations; an explicit example: del Pezzo fibrations, where the picture seems
more involved in characteristic $2$ (see [Ma16, p. 404] and [Ko91, Remark
1.2]). As a result, much effort has been devoted to understand their behaviour
and to bound their occurrence (see e.g. [Sc08, JW21, PW22]).
In this paper we explore the above phenomenon in the specific situation where
the variety is a regular geometrically integral curve $C$ over an imperfect
field $K$. If $C|K$ is the generic fibre of a fibration $f:T\to B$ then its
closed points correspond bijectively to the horizontal divisors on the total
space $T$; a closed point is non-smooth (and hence $C$ is non-smooth) if and
only if the corresponding divisor is a moving singularity of the fibration
[SiSt16, Section 1].
Our approach to the non-smoothness of $C$ relies on a central tool in geometry
in positive characteristic: Frobenius pullbacks. As a non-smooth point
$\mathfrak{p}$ of $C$ cannot be _smoothed_ by performing Frobenius pullbacks,
i.e., as the images of $\mathfrak{p}$ in the sequence of iterated Frobenius
pullbacks
$C\to C^{(p)}\to C^{(p^{2})}\to C^{(p^{3})}\to\cdots$
are non-regular [Sal11, Lemma 2.2] and therefore non-smooth, we consider
instead the images $\mathfrak{p}_{n}\in C_{n}$ of $\mathfrak{p}$ in the
sequence of regular integral curves
$C_{0}=C\to C_{1}\to C_{2}\to C_{3}\to\cdots$
obtained by passing to the normalizations $C_{n}$ of the Frobenius pullbacks
$C^{(p^{n})}$. Our main result, stated below as Theorem 1.1, provides an
explicit description of the integers $n$ such that the image point
$\mathfrak{p}_{n}$ is separable (i.e., the residue field extension
$\kappa(\mathfrak{p}_{n})|K$ is separable) and a fortiori smooth. In
particular, if $\mathfrak{p}$ is non-decomposed, or equivalently purely
inseparable (see Corollary 2.15), then the separable point $\mathfrak{p}_{n}$
is actually rational.
###### Theorem 1.1 (see Theorem 2.21).
Let $C$ be a geometrically integral regular curve over a field $K$ of
characteristic $p>0$. Let $\mathfrak{p}$ be a non-smooth point of geometric
$\delta$-invariant $\delta(\mathfrak{p})>0$.
(i)
The image $\mathfrak{p}_{n}$ of $\mathfrak{p}$ in the normalization $C_{n}$ of
the $n$th Frobenius pullback $C^{(p^{n})}$ of $C$ is separable for all
$n\geq\log_{p}\big{(}2\,\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}+1\big{)}$;
furthermore, if the integer
$\frac{2}{p-1}\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}$ is
not a sum of consecutive $p$-powers then $\mathfrak{p}_{n}$ is separable for
all
$n\geq\log_{p}\big{(}2\,\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}+1\big{)}-1$.
(ii)
Assume in addition that $\mathfrak{p}$ is non-decomposed. Then the image
$\mathfrak{p}_{n}$ is rational for all
$n\geq\log_{p}\big{(}2{\delta(\mathfrak{p})}+1\big{)}$; moreover, if the
integer $\frac{2}{p-1}{\delta(\mathfrak{p})}$ is not a sum of consecutive
$p$-powers then $\mathfrak{p}_{n}$ is rational for all
$n\geq\log_{p}\big{(}2{\delta(\mathfrak{p})}+1\big{)}-1$.
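For concreteness, the bound of part (ii) can be evaluated mechanically. The sketch below is ours, not part of the paper; it computes the smallest guaranteed $n$ for a non-decomposed point of geometric $\delta$-invariant $\delta(\mathfrak{p})$, including the consecutive-$p$-powers refinement (note that $\frac{2}{p-1}\delta(\mathfrak{p})$ is indeed an integer, $\delta(\mathfrak{p})$ being a multiple of $(p-1)/2$; see Proposition 2.4).

```python
def is_sum_of_consecutive_p_powers(m, p):
    """True iff m = p^a + p^(a+1) + ... + p^b for some 0 <= a <= b."""
    a = 0
    while p ** a <= m:
        total, k = 0, a
        while total < m:
            total += p ** k
            k += 1
        if total == m:
            return True
        a += 1
    return False

def rationality_bound(p, delta):
    """Smallest n guaranteed by Theorem 1.1(ii) to make p_n rational."""
    n = 0
    while p ** n < 2 * delta + 1:   # enforce n >= log_p(2*delta + 1)
        n += 1
    m = 2 * delta // (p - 1)        # an integer when delta is a
                                    # multiple of (p - 1)/2
    if not is_sum_of_consecutive_p_powers(m, p):
        n = max(n - 1, 0)
    return n
```

For example, for $p=2$ and $\delta(\mathfrak{p})=3$ (the sharp case studied later in the paper) one gets $n=\lceil\log_{2}7\rceil=3$, and since $6=2+2^{2}$ is a sum of consecutive $2$-powers the refinement does not apply.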
Our motivation originates from the following observation: if an integer $n$ is
known such that the image $\mathfrak{p}_{n}\in C_{n}$ is rational, then an
algorithm developed by Bedoya–Stöhr in [BedSt87] makes it possible to compute the
geometric $\delta$-invariant $\delta(\mathfrak{p})$ and several other
invariants associated to $\mathfrak{p}$.
To prove our results we employ methods of function field theory. Let
$F|K=K(C)|K$ be the function field of the regular integral curve $C|K$. The
function fields of the iterated Frobenius pullbacks $C^{(p^{n})}|K$ and of
their normalizations $C_{n}|K$ are the iterated Frobenius pullbacks of $F|K$:
$F_{n}|K=KF^{p^{n}}|K,\quad(n=0,1,2,\dots).$
In order to study the sequence of normalized curves $C_{n}$ we study the
descending chain of function fields
$F=F_{0}\supset F_{1}\supset F_{2}\supset F_{3}\supset\cdots.$
A closed point of the curve $C$ and its image in $C_{n}$ correspond to a prime
$\mathfrak{p}$ of $F|K$ and its restriction $\mathfrak{p}_{n}$ to $F_{n}|K$.
As a main application of the theorem we get a procedure to construct regular
integral curves $C|K$ equipped with non-decomposed non-smooth closed points
$\mathfrak{p}$ of a given geometric $\delta$-invariant, or equivalently, a
procedure to construct for each natural number $d$ the function fields $F|K$
equipped with non-decomposed singular primes $\mathfrak{p}$ such that
$\delta(\mathfrak{p})=d$. To this end let $n=\lceil\log_{p}(2d+1)\rceil$ or
$n=\lceil\log_{p}(2d+1)\rceil-1$ be the corresponding bound in the theorem.
Each such pair $(F|K,\mathfrak{p})$ can be obtained by starting with a
function field $F_{n}|K$ equipped with a rational prime $\mathfrak{p}_{n}$,
and by constructing an ascending length-$n$ chain of purely inseparable
extensions of function fields
$F_{n}\subset F_{n-1}\subset F_{n-2}\subset\cdots\subset F_{0}=F$
equipped with the (uniquely determined) primes
$\mathfrak{p}_{n},\mathfrak{p}_{n-1},\mathfrak{p}_{n-2},\dots,\mathfrak{p}_{0}=\mathfrak{p}$
lying over the rational prime $\mathfrak{p}_{n}$, such that $F_{i}|K$ is the
Frobenius pullback of $F_{i-1}|K$ for each $i=n,n-1,\dots,1$. The generators
of the purely inseparable extensions $F_{i-1}|F_{i}$ of degree $p$ are
obtained by applying the Riemann-Roch theorem. With the Bedoya–Stöhr algorithm
in mind the generators have to be chosen carefully, so that the sequence of
geometric $\delta$-invariants $d_{n}=0\leq d_{n-1}\leq\dots\leq d_{0}$ ends
with $d=d_{0}$. Looking for decomposed non-smooth points we have to start our
procedure with separable but non-rational primes. We illustrate the method in
our paper [HiSt23] where curves of arithmetic genus $g=3$ equipped with non-
smooth points of geometric $\delta$-invariant $d=3$ are constructed.
On our way to the proof of our theorem we obtain several results on separable
and non-decomposed closed points, which might be of independent interest. In
particular, we prove that the separability degree of the residue field
extension $\kappa(\mathfrak{p})|K$ of a closed point $\mathfrak{p}\in C$
coincides with the number of points in the extended curve
$C\otimes_{K}\overline{K}$ that are mapped into $\mathfrak{p}$ by the natural
morphism $C\otimes_{K}\overline{K}\to C$ (see Proposition 2.10 and Remark
2.18).
We show that the bound in the theorem is sharp in characteristic $p=2$, but
note that an analogous statement is false in characteristic $p>2$.
In the last section of the paper we study a pencil of singular rational
quartics in characteristic $2$, whose generic fibre is a curve $C|K$ for which
the sharp bound for $\delta(\mathfrak{p})=3$ and $p=2$ is attained. We discuss
the geometry of the fibration in detail, and we further find its minimal
regular model, which by a theorem of Lichtenbaum and Shafarevich is uniquely
determined by the function field of $C|K$.
## 2\. Non-smooth points of regular integral curves
There is a one-to-one correspondence between the regular proper integral
curves over (the spectrum of) a given field $K$ and the one-dimensional
function fields with base field $K$ (see [EGA2, Section 7.4]). The function
field $F|K$ corresponding to such a curve $C|K$ is the local ring at the
generic point. Conversely, the points of the curve $C$ different from the
generic point, i.e., the closed points of $C$, are the primes $\mathfrak{p}$
of the function field $F|K$, and their local rings
$\mathcal{O}_{\mathfrak{p}}$ are just the (discrete) valuation rings of $F|K$.
If $U$ is a non-empty open subset of $C$, that is, the complement of a finite
set of closed points, then the space $\Gamma(U,\mathcal{O}_{C})$ of local
sections of the structure sheaf $\mathcal{O}_{C}$ is the intersection of the
local rings $\mathcal{O}_{\mathfrak{p}}$ of the closed points $\mathfrak{p}\in
U$.
In this section we assume that $F|K$ is a one-dimensional separable function
field of genus $g$ in positive characteristic $p$. The separability assumption
on $F|K$ means that the corresponding regular proper integral curve $C|K$ is
geometrically integral, i.e., it remains integral under algebraic extensions
of the base field.
Let $\mathfrak{p}$ be a prime of $F|K$ and consider its (discrete) valuation
ring $\mathcal{O}_{\mathfrak{p}}$. If $K^{\prime}$ is an algebraic extension
of the base field $K$, then the tensor product
$K^{\prime}{\cdot}\mathcal{O}_{\mathfrak{p}}:=K^{\prime}\otimes_{K}\mathcal{O}_{\mathfrak{p}}$
is a semilocal domain with fraction field
$\operatorname{Frac}(K^{\prime}{\cdot}\mathcal{O}_{\mathfrak{p}})=K^{\prime}{\cdot}F:=K^{\prime}\otimes_{K}F$
that coincides with the finite intersection of its localizations (i.e., the
localizations at its maximal ideals). Base extensions of local rings are
therefore semilocal. For this reason, in order to study the behaviour of
$\mathfrak{p}$ under base field extensions it is convenient to work with the
semilocal domains of $F|K$ rather than with its local rings.
Let $\mathcal{O}$ be a semilocal domain of $F|K$ (where
$K\subset\operatorname{Frac}(\mathcal{O})=F$), which is equal to the finite
intersection of its localizations, say
$\mathcal{O}=\mathcal{O}_{1}\cap\dots\cap\mathcal{O}_{r}\,.$
Let $K^{\prime}$ be an algebraic extension of the base field $K$. The
_$K^{\prime}$ -singularity degree_ of $\mathcal{O}$, which is defined as the
$K^{\prime}$-codimension of the extended semilocal ring
$K^{\prime}{\cdot}\mathcal{O}=K^{\prime}\otimes_{K}\mathcal{O}$ in its
integral closure $\widetilde{K^{\prime}{\cdot}\mathcal{O}}$, is finite (see
[Ros52, Theorem 1]) and equal to the sum of the $K^{\prime}$-singularity
degrees of the localizations, i.e.,
$\dim_{K^{\prime}}\widetilde{K^{\prime}{\cdot}\mathcal{O}}/K^{\prime}{\cdot}\mathcal{O}=\sum_{i=1}^{r}\dim_{K^{\prime}}\widetilde{K^{\prime}{\cdot}\mathcal{O}_{i}}/K^{\prime}{\cdot}\mathcal{O}_{i}$
(2.1)
(see [Ros52, p. 172]); indeed, the canonical homomorphism
$\widetilde{K^{\prime}{\cdot}\mathcal{O}}\to\bigoplus_{i=1}^{r}\widetilde{K^{\prime}{\cdot}\mathcal{O}_{i}}/K^{\prime}{\cdot}\mathcal{O}_{i}$
has kernel $K^{\prime}{\cdot}\mathcal{O}$ and is surjective, as follows by
applying the Chinese remainder theorem to the conductor ideals of the rings
$K^{\prime}{\cdot}\mathcal{O}_{i}$.
If $K^{\prime\prime}$ is an algebraic extension field of $K^{\prime}$ then the
$K^{\prime\prime}$-singularity degree of $\mathcal{O}$ is equal to the sum of
the $K^{\prime}$-singularity degree of $\mathcal{O}$ and the
$K^{\prime\prime}$-singularity degree of
$\widetilde{K^{\prime}{\cdot}\mathcal{O}}$, as can be seen from
$\displaystyle\dim_{K^{\prime}}\widetilde{K^{\prime}{\cdot}\mathcal{O}}/K^{\prime}{\cdot}\mathcal{O}$
$\displaystyle=\dim_{K^{\prime\prime}}(K^{\prime\prime}\otimes_{K^{\prime}}\widetilde{K^{\prime}{\cdot}\mathcal{O}})/(K^{\prime\prime}\otimes_{K^{\prime}}K^{\prime}{\cdot}\mathcal{O})$
$\displaystyle=\dim_{K^{\prime\prime}}K^{\prime\prime}{\cdot}\widetilde{K^{\prime}{\cdot}\mathcal{O}}/K^{\prime\prime}{\cdot}\mathcal{O}$
$\displaystyle=\dim_{K^{\prime\prime}}\widetilde{K^{\prime\prime}{\cdot}\mathcal{O}}/K^{\prime\prime}{\cdot}\mathcal{O}-\dim_{K^{\prime\prime}}(\widetilde{K^{\prime\prime}{\cdot}\mathcal{O}}/K^{\prime\prime}{\cdot}\widetilde{K^{\prime}{\cdot}\mathcal{O}}).$
(2.2)
The geometric singularity degree (another name in the literature is
“geometric $\delta$-invariant”) of a prime $\mathfrak{p}$, denoted
$\delta(\mathfrak{p})$, is defined as the $\overline{K}$-singularity degree of
its local ring $\mathcal{O}_{\mathfrak{p}}$, where $\overline{K}$ is the
algebraic closure of $K$. The prime $\mathfrak{p}$ is called singular if
$\delta(\mathfrak{p})>0$, i.e., if
$\overline{K}{\cdot}\mathcal{O}_{\mathfrak{p}}\subsetneqq\widetilde{\overline{K}{\cdot}\mathcal{O}_{\mathfrak{p}}}$,
i.e., if $\mathfrak{p}$ is a non-smooth point of the corresponding regular
integral curve $C|K$. (It might be tempting to use the term “non-smooth
prime”, but the term “singular prime” is already in use in the literature on
function fields.)
By Rosenlicht’s genus drop formula (see [Ros52, Theorem 11]) the genus of the
extended function field $\overline{K}F|\overline{K}$ is equal to
$\overline{g}=g-\sum_{\mathfrak{p}}\delta(\mathfrak{p})$ (2.3)
where $g$ is the genus of the function field $F|K$ and the sum is taken over
the singular primes $\mathfrak{p}$ of $F|K$. In particular, the function field
$F|K$ is conservative (i.e., $\overline{g}=g$) if and only if it does not
admit singular primes, or equivalently, if and only if the corresponding
regular integral curve $C|K$ is smooth.
For every non-negative integer $n$ we consider the $n$th _Frobenius pullback_
$F_{n}|K:=F^{p^{n}}{\cdot}K|K$. This function field is uniquely determined by
the property that the extension $F|F_{n}$ is purely inseparable of degree
$p^{n}$ (see [Sti78, p. 33]). Let $\mathfrak{p}_{n}$ be the restriction of the
prime $\mathfrak{p}$ to $F_{n}$, and let $\mathfrak{p}^{(n)}$ be the only
extension of $\mathfrak{p}$ to the purely inseparable base field extension
$F^{(n)}:=K^{p^{-n}}\\!{\cdot}F$. The valuation ring
$\mathcal{O}_{\mathfrak{p}^{(n)}}$ of $\mathfrak{p}^{(n)}$ is the integral
closure $\widetilde{K^{p^{-n}}\\!{\cdot}\mathcal{O}_{\mathfrak{p}}}$ of the
domain $K^{p^{-n}}\\!{\cdot}\mathcal{O}_{\mathfrak{p}}$ in its field of
fractions $K^{p^{-n}}\\!{\cdot}F$. The $n$th Frobenius map $z\mapsto
z^{p^{n}}$ defines an isomorphism between the function fields
$F^{(n)}|K^{p^{-n}}$ and $F_{n}|K$ which maps $\mathfrak{p}^{(n)}$ onto
$\mathfrak{p}_{n}$. As the ramification index of
$\mathfrak{p}^{(n)}|\mathfrak{p}_{n}$ is equal to $p^{n}$ we get
$e(\mathfrak{p}^{(n)}|\mathfrak{p})\cdot
e(\mathfrak{p}|\mathfrak{p}_{n})=p^{n}$ and therefore
$e(\mathfrak{p}^{(n+1)}|\mathfrak{p}^{(n)})\cdot
e(\mathfrak{p}_{n}|\mathfrak{p}_{n+1})=p\;\text{ for each $n$.}$
As the field extensions $F_{n}|F_{n+1}$ are purely inseparable of degree $p$,
each residue field extension
$\kappa(\mathfrak{p}_{n})|\kappa(\mathfrak{p}_{n+1})$ is purely inseparable of
degree $[\kappa(\mathfrak{p}_{n}):\kappa(\mathfrak{p}_{n+1})]\in\\{1,p\\}$.
In this section we ask for an integer $n$ such that the prime
$\mathfrak{p}_{n}$ is rational, i.e., such that its degree
$\deg(\mathfrak{p}_{n})=[\kappa(\mathfrak{p}_{n}):K]$ is equal to $1$. If such
an integer is known then the algorithm developed in [BedSt87] can be applied
to compute several local invariants of $F|K$ such as the singularity degrees
of $\mathfrak{p}$ or the orders of differentials at $\mathfrak{p}$.
Let
$\Delta_{n}=\Delta(\mathfrak{p}_{n}):=\dim_{K^{1/p}}(\widetilde{K^{1/p}{\cdot}\mathcal{O}_{\mathfrak{p}_{n}}}/K^{1/p}{\cdot}\mathcal{O}_{\mathfrak{p}_{n}})$
be the $K^{1/p}$-singularity degree of $\mathfrak{p}_{n}$. The non-negative
integers $\Delta_{0},\Delta_{1},\Delta_{2},\dots$ are divisible by $(p-1)/2$
if $p>2$ (see [St88, Corollary 2.4]) and they decrease rapidly to zero in the
sense that
$\Delta_{n+1}\leq p^{-1}\Delta_{n}\;\text{ for each $n$}$
(see [St88, Proposition 3.5]). In particular $\Delta_{n}=0$ for $n$
sufficiently large, or more precisely
$\Delta_{n}=0\;\text{ whenever }\;p^{n}>\begin{cases}\Delta_{0}&\text{if $p=2$
or $3$},\\\ \frac{2}{p-1}\Delta_{0}&\text{if $p>3$.}\end{cases}$
###### Proposition 2.4.
For every prime $\mathfrak{p}$ of $F|K$ the following equality holds
$\delta(\mathfrak{p})=\Delta_{0}+\Delta_{1}+\Delta_{2}+\cdots.$
In particular, the geometric singularity degree $\delta(\mathfrak{p})$ is a
multiple of $(p-1)/2$.
###### Proof.
By using the $n$-th Frobenius map we see that $\Delta_{n}$ is equal to the
$K^{p^{-(n+1)}}$-singularity degree of $\mathfrak{p}^{(n)}$. Hence by
considering the chain of local rings
$\mathcal{O}_{\mathfrak{p}}=\mathcal{O}_{\mathfrak{p}^{(0)}}\subset\mathcal{O}_{\mathfrak{p}^{(1)}}\subset\mathcal{O}_{\mathfrak{p}^{(2)}}\subset\cdots\subset\mathcal{O}_{\mathfrak{p}^{(n)}}$
we deduce that the $K^{p^{-(n+1)}}$-singularity degree of $\mathfrak{p}$ is
equal to the sum $\Delta_{0}+\Delta_{1}+\dots+\Delta_{n}$. As
$K^{p^{-\infty}}:=\bigcup K^{p^{-n}}$ and therefore
$K^{p^{-\infty}}\\!{\cdot}F=\bigcup_{n=0}^{\infty}K^{p^{-n}}\\!{\cdot}F=\varinjlim
K^{p^{-n}}\\!{\cdot}F,$
we conclude that the $K^{p^{-\infty}}$-singularity degree of $\mathfrak{p}$ is
equal to $\Delta_{0}+\Delta_{1}+\cdots$.
Let $\mathfrak{p}^{(\infty)}$ be the only extension of $\mathfrak{p}$ to the
purely inseparable base field extension $K^{p^{-\infty}}\\!{\cdot}F$. As the
algebraic closure $\overline{K}$ is separable over $K^{p^{-\infty}}$, the
prime $\mathfrak{p}^{(\infty)}$ is non-singular and so the geometric
singularity degree $\delta(\mathfrak{p})$ of the prime $\mathfrak{p}$
coincides with its $K^{p^{-\infty}}$-singularity degree (see also the proof of
Proposition 2.10 below). ∎
By applying the proposition to the restricted prime $\mathfrak{p}_{n}$ for
each non-negative integer $n$ we obtain
###### Corollary 2.5.
For each prime $\mathfrak{p}$ the following assertions hold
* •
$\delta(\mathfrak{p}_{n})=\Delta_{n}+\Delta_{n+1}+\cdots$;
* •
$\Delta_{n}=\delta(\mathfrak{p}_{n})-\delta(\mathfrak{p}_{n+1})$;
* •
the prime $\mathfrak{p}_{n}$ is non-singular if and only if $\Delta_{n}=0$;
* •
if $\Delta_{n}=0$ and $n>0$, then
$\delta(\mathfrak{p}_{n-1})=\Delta(\mathfrak{p}_{n-1})$;
* •
the prime $\mathfrak{p}_{n}$ is non-singular whenever
$p^{n}>\min\\{\Delta_{0},\frac{2}{p-1}\Delta_{0}\\}$.
Another condition on the values of $\delta(\mathfrak{p})$ and
$\Delta(\mathfrak{p})$ is imposed by the following proposition.
###### Proposition 2.6.
Let $\mathfrak{p}$ be a singular prime of $F|K$. Then the degree of
$\mathfrak{p}_{1}$ is a divisor of the integer
$\frac{2}{p-1}\Delta(\mathfrak{p})$.
###### Proof.
Let $K^{\prime}:=K^{1/p}$. As $K^{\prime}{\cdot}\mathcal{O}_{\mathfrak{p}}$ is
a Gorenstein ring (see [St88, Theorem 1.1(b)]) we obtain
$2\Delta_{0}=\dim_{K^{\prime}}\widetilde{K^{\prime}{\cdot}\mathcal{O}_{\mathfrak{p}}}/\mathfrak{c}^{\prime}_{\mathfrak{p}}$
where $\mathfrak{c}^{\prime}_{\mathfrak{p}}$ denotes the conductor ideal of
the domain $K^{\prime}{\cdot}\mathcal{O}_{\mathfrak{p}}$ in its integral
closure $\widetilde{K^{\prime}{\cdot}\mathcal{O}_{\mathfrak{p}}}$. As
$\widetilde{K^{\prime}{\cdot}\mathcal{O}_{\mathfrak{p}}}$ is the discrete
valuation ring $\mathcal{O}_{\mathfrak{p}^{(1)}}$, the non-zero ideal
$\mathfrak{c}_{\mathfrak{p}}^{\prime}$ is a power of the maximal ideal of
$\mathcal{O}_{\mathfrak{p}^{(1)}}$, and so the $K^{\prime}$-dimension of
$\widetilde{K^{\prime}{\cdot}\mathcal{O}_{\mathfrak{p}}}/\mathfrak{c}_{\mathfrak{p}}^{\prime}$
is a multiple of $\deg(\mathfrak{p}^{(1)})=\deg(\mathfrak{p}_{1})$. By [St88,
Corollary 2.4] this dimension is even a multiple of
$(p-1)\deg(\mathfrak{p}_{1})$. ∎
We say that a prime $\mathfrak{p}$ is _separable_ if the residue field
extension $\kappa(\mathfrak{p})|K$ is separable. Every separable prime is non-
singular (see e.g. [FaSc20, Corollary 2.6]). The proposition below shows a
converse.
###### Proposition 2.7.
A prime $\mathfrak{p}$ is separable if and only if it is non-singular and
$\kappa(\mathfrak{p})=\kappa(\mathfrak{p}_{1})$.
###### Proof.
Since the extension $\kappa(\mathfrak{p})|\kappa(\mathfrak{p}_{1})$ is purely
inseparable, it is trivial if $\mathfrak{p}$ is separable. Thus we may assume
$\kappa(\mathfrak{p})=\kappa(\mathfrak{p}_{1})$. Then [Sti78], Satz 2 (ii) and
Korollar 1 of Satz 4, ensure that $\mathfrak{p}$ is singular if and only if
$\kappa(\mathfrak{p})|K$ is inseparable. ∎
###### Proposition 2.8.
Let $\mathfrak{p}$ be a prime of $F|K$. Then for sufficiently large $n$ the
restricted prime $\mathfrak{p}_{n}$ is separable and its residue field
$\kappa(\mathfrak{p}_{n})$ is the separable closure of $K$ in
$\kappa(\mathfrak{p})$.
###### Proof.
As
$\kappa(\mathfrak{p})\supseteq\kappa(\mathfrak{p}_{1})\supseteq\kappa(\mathfrak{p}_{2})\supseteq\dots\supseteq
K$ and $[\kappa(\mathfrak{p}):K]$ is finite we deduce that
$\kappa(\mathfrak{p}_{n})=\kappa(\mathfrak{p}_{n+1})=\cdots$ for sufficiently
large $n$. Moreover, for sufficiently large $n$ the prime $\mathfrak{p}_{n}$
is non-singular by Corollary 2.5, and so $\mathfrak{p}_{n}$ is separable by
Proposition 2.7. As $\kappa(\mathfrak{p}_{n})|K$ is separable and
$\kappa(\mathfrak{p})|\kappa(\mathfrak{p}_{n})$ is purely inseparable we
conclude that $\kappa(\mathfrak{p}_{n})$ is the separable closure of $K$ in
$\kappa(\mathfrak{p})$. ∎
We ask for an explicit description of the integers $n$ for which the
restricted primes $\mathfrak{p}_{n}$ are separable. The answer is rather easy
if the prime $\mathfrak{p}$ is non-singular.
###### Proposition 2.9.
Let $\mathfrak{p}$ be a non-singular prime of $F|K$, and let
$m:=\log_{p}[\kappa(\mathfrak{p}):K]_{insep}$, i.e., let $p^{m}$ be the
inseparability degree of the residue field extension $\kappa(\mathfrak{p})|K$.
Then
$[\kappa(\mathfrak{p}):\kappa(\mathfrak{p}_{i})]=p^{i}\quad\text{for each
$i=1,\dots,m$,}$
and $m$ is the smallest integer such that $\mathfrak{p}_{m}$ is separable.
###### Proof.
In the descending chain
$\kappa(\mathfrak{p})\supseteq\kappa(\mathfrak{p}_{1})\supseteq\kappa(\mathfrak{p}_{2})\supseteq\dots\supseteq
K$
the extensions $\kappa(\mathfrak{p}_{n})|\kappa(\mathfrak{p}_{n+1})$ are
purely inseparable of degree $p$ or $1$ for each $n$. As the prime
$\mathfrak{p}$ and therefore the restricted primes $\mathfrak{p}_{n}$ are non-
singular, it follows from Proposition 2.7 that
$[\kappa(\mathfrak{p}_{n}):\kappa(\mathfrak{p}_{n+1})]=p$ if and only if
$\mathfrak{p}_{n}$ is inseparable. Since the restrictions of a separable prime
are again separable and, by Proposition 2.8, the residue fields
$\kappa(\mathfrak{p}_{n})$ eventually equal the separable closure of $K$ in
$\kappa(\mathfrak{p})$, exactly the first $m$ steps of the chain have degree
$p$; both assertions follow. ∎
An analogous result is much more involved if the prime $\mathfrak{p}$ is
singular (see Theorem 2.21). The reason is that the extension
$\kappa(\mathfrak{p}_{n})|\kappa(\mathfrak{p}_{n+1})$ may be trivial when
$\mathfrak{p}_{n}$ is singular, in which case the equalities
$[\kappa(\mathfrak{p}):\kappa(\mathfrak{p}_{i})]=p^{i}$ in the proposition no
longer hold.
We now study the primes of the separable base field extension
$K^{sep}F|K^{sep}$.
###### Proposition 2.10.
Let $\mathfrak{p}$ be a prime of $F|K$. Then the number of primes of
$K^{sep}F|K^{sep}$ that lie over $\mathfrak{p}$ is equal to the separability
degree $[\kappa(\mathfrak{p}):K]_{sep}$ of the residue field extension
$\kappa(\mathfrak{p})|K$. Moreover, each such extended prime $\mathfrak{q}$
has degree $\deg(\mathfrak{q})=[\kappa(\mathfrak{p}):K]_{insep}$ and geometric
singularity degree
$\delta(\mathfrak{q})=\delta(\mathfrak{p})/[\kappa(\mathfrak{p}):K]_{sep}$.
###### Proof.
Let $L$ be a finite separable extension of $K$, and let
$\mathfrak{q}_{1},\dots,\mathfrak{q}_{r}$ be the primes of $LF|L$ lying over
$\mathfrak{p}$. Then
$[L:K]=\sum_{i=1}^{r}e_{i}f_{i}$
where $e_{i}$ and $f_{i}$ denote the ramification index and the inertia degree
of $\mathfrak{q}_{i}$ over $\mathfrak{p}$, respectively. As the trace of
$L{\cdot}F|F=(L\otimes_{K}F)|F$ is equal to
$\mathrm{tr}_{L|K}\otimes\mathrm{id}_{F}$ where $\mathrm{tr}_{L|K}$ denotes
the trace of the finite separable extension $L|K$, we deduce that the integral
closure
$\mathcal{O}_{\mathfrak{q}_{1}}\cap\dots\cap\mathcal{O}_{\mathfrak{q}_{r}}$ of
$\mathcal{O}_{\mathfrak{p}}$ in $L{\cdot}F$ is equal to
$L\otimes_{K}\mathcal{O}_{\mathfrak{p}}$ (i.e., the $L$-singularity degree of
$\mathcal{O}_{\mathfrak{p}}$ is zero). It follows that the exponents of the
Dedekind different of $L{\cdot}F|F$ are equal to zero, and so the ramification
indices $e_{i}$ are equal to one. It also follows that each residue field
$\kappa(\mathfrak{q}_{i})$ ($i=1,\dots,r$) is generated by the images of
$\kappa(\mathfrak{p})$ and $L$ inside it.
If $L$ contains the separable closure of $K$ in $\kappa({\mathfrak{p}})$ then
$f_{i}=[L:K]/[\kappa({\mathfrak{p}}):K]_{sep}$ and
$r=[\kappa(\mathfrak{p}):K]_{sep}$, so in particular
$\deg(\mathfrak{q}_{i})=\deg(\mathfrak{p})/r=[\kappa(\mathfrak{p}):K]_{insep}$.
Note that the equality $r=[\kappa(\mathfrak{p}):K]_{sep}$ holds without
assuming that $[L:K]$ is finite, and so it holds for $L=K^{sep}$. Since a
similar remark applies to the identity
$\deg(\mathfrak{q}_{i})=[\kappa(\mathfrak{p}):K]_{insep}$, it follows that there
are precisely $[\kappa(\mathfrak{p}):K]_{sep}$ primes of $K^{sep}F|K^{sep}$
lying over $\mathfrak{p}$, each of degree $[\kappa(\mathfrak{p}):K]_{insep}$.
It remains to compute their geometric singularity degrees. By the preceding
paragraph, the $K^{sep}$-singularity degree of $\mathfrak{p}$ is zero. In
light of (2.2), this means that the geometric singularity degree
$\delta(\mathfrak{p})$ is equal to the $\overline{K}$-singularity degree of
the semilocal ring $K^{sep}{\cdot}\mathcal{O}_{\mathfrak{p}}$, which by (2.1)
is equal to the sum of the geometric singularity degrees of the primes of
$K^{sep}F|K^{sep}$ lying over $\mathfrak{p}$. But these primes are conjugated
because the field extension $K^{sep}F|F$ is normal, so it follows that their
geometric singularity degrees coincide, that is, each of them has geometric
singularity degree $\delta(\mathfrak{p})/[\kappa(\mathfrak{p}):K]_{sep}$. ∎
###### Corollary 2.11.
For a prime $\mathfrak{p}$ the following assertions are equivalent
1. (i)
$\mathfrak{p}$ is separable,
2. (ii)
$\mathfrak{p}$ decomposes into rational primes in the extended function field
$K^{sep}F|K^{sep}$,
3. (iii)
there is a finite separable extension field $L$ of $K$ such that
$\mathfrak{p}$ decomposes into rational primes in $LF|L$.
In particular, if $\mathfrak{p}$ is separable and $L=\kappa(\mathfrak{p})$
then $\mathfrak{p}$ decomposes into rational primes in $LF|L$.
###### Corollary 2.12.
The number of primes of $\overline{K}F|\overline{K}$ lying over a prime
$\mathfrak{p}$ is equal to the separability degree
$[\kappa(\mathfrak{p}):K]_{sep}$ of the residue field extension
$\kappa(\mathfrak{p})|K$.
###### Proof.
Since primes are _non-decomposed_ in purely inseparable base field extensions,
the number of primes of $\overline{K}F|\overline{K}$ lying over
$\mathfrak{p}$ coincides with the number of primes of $K^{sep}F|K^{sep}$
lying over $\mathfrak{p}$. ∎
###### Corollary 2.13.
Let $\mathfrak{p}$ be a prime of $F|K$. Then $[\kappa(\mathfrak{p}):K]_{sep}$
divides the geometric singularity degree $\delta(\mathfrak{p}_{n})$ and the
$K^{1/p}$-singularity degree $\Delta_{n}=\Delta(\mathfrak{p}_{n})$ for each
non-negative integer $n$. If $p>2$ then $[\kappa(\mathfrak{p}):K]_{sep}$ also
divides the integers $\frac{2}{p-1}\delta(\mathfrak{p}_{n})$ and
$\frac{2}{p-1}\Delta(\mathfrak{p}_{n})$.
###### Proof.
For every prime $\mathfrak{q}$ of $K^{sep}F|K^{sep}$ lying over $\mathfrak{p}$
we have
$\delta(\mathfrak{p})=[\kappa(\mathfrak{p}):K]_{sep}\,\delta(\mathfrak{q})$,
where both $\delta(\mathfrak{p})$ and $\delta(\mathfrak{q})$ are divisible by
$\frac{p-1}{2}$ when $p>2$ (see Proposition 2.4). Analogous statements hold
for the restricted primes $\mathfrak{p}_{n}$. Note now that each
$\mathfrak{p}_{n}$ has separability degree
$[\kappa(\mathfrak{p}_{n}):K]_{sep}=[\kappa(\mathfrak{p}):K]_{sep}$ and
$K^{1/p}$-singularity degree
$\Delta_{n}=\delta(\mathfrak{p}_{n})-\delta(\mathfrak{p}_{n+1})$. ∎
We say that a prime $\mathfrak{p}$ of $F|K$ is decomposed if it is decomposed
in the constant field extension $\overline{K}F|\overline{K}$, i.e., if there
is more than one prime of $\overline{K}F|\overline{K}$ lying over
$\mathfrak{p}$, i.e., if its local ring $\mathcal{O}_{\mathfrak{p}}$ is not
geometrically unibranch. Every rational prime is non-decomposed (see Corollary
2.16). For an example of a decomposed singular prime we refer to [Sal14,
Example 2.5].
In general, it may be hard to decide whether a given prime is non-decomposed.
The corollary below provides a sufficient criterion for a singular prime to be
non-decomposed.
###### Corollary 2.14.
If $p=2$ and the integers $\Delta_{0}$, $\Delta_{1}$, $\Delta_{2},\dots$ are
coprime, then the prime $\mathfrak{p}$ is non-decomposed. Likewise, if $p>2$
and the integers $\frac{2}{p-1}\Delta_{0}$, $\frac{2}{p-1}\Delta_{1}$,
$\frac{2}{p-1}\Delta_{2},\dots$ are coprime, then $\mathfrak{p}$ is non-
decomposed.
The following two corollaries provide characterizations of non-decomposedness
and rationality.
###### Corollary 2.15.
For a prime $\mathfrak{p}$ the following assertions are equivalent
1. (i)
$\mathfrak{p}$ is non-decomposed,
2. (ii)
the residue field extension $\kappa(\mathfrak{p})|K$ is purely inseparable,
3. (iii)
there is an integer $n\geq 0$ such that the prime $\mathfrak{p}_{n}$ is
rational.
###### Proof.
The equivalence between (i) and (ii) follows immediately from Proposition
2.10. We note that $\mathfrak{p}$ is non-decomposed if and only if for some
(and any) integer $n\geq 0$ the prime $\mathfrak{p}^{(n)}$, and therefore the
prime $\mathfrak{p}_{n}$, is non-decomposed. By Proposition 2.8 there is an
integer $n\geq 0$ such that $\mathfrak{p}_{n}$ is separable. Clearly, a
separable prime is purely inseparable if and only if it is rational. ∎
###### Corollary 2.16.
A prime $\mathfrak{p}$ is rational if and only if it is separable and non-
decomposed.
Specializing Proposition 2.9 to the non-decomposed case we get
###### Proposition 2.17.
Let $\mathfrak{p}$ be a non-singular non-decomposed prime of $F|K$, so in
particular $\kappa(\mathfrak{p})|K$ is purely inseparable, say of degree
$p^{m}$. Then $m$ is the smallest integer such that $\mathfrak{p}_{m}$ is
rational.
###### Remark 2.18.
Let $C|K$ denote the regular geometrically integral curve associated to the
function field $F|K$.
1. (i)
Over each non-decomposed prime $\mathfrak{p}$ there lies a unique closed point
in the extended curve
$\overline{C}|\overline{K}:=C\otimes_{K}\overline{K}|\overline{K}$, i.e.,
there is a unique point $x\in\overline{C}$ that is mapped into $\mathfrak{p}$
under the natural morphism $\overline{C}\to C$. The geometric singularity
degree $\delta(\mathfrak{p})$ of $\mathfrak{p}$ coincides with the
$\delta$-invariant $\delta(\overline{C},x)$ of $\overline{C}$ at $x$ as
defined in [IIL20, p. 69].
2. (ii)
By Proposition 2.10, over each prime $\mathfrak{p}$ in $F|K$ there are exactly
$[\kappa(\mathfrak{p}):K]_{sep}$ (non-decomposed) primes in
$K^{sep}F|K^{sep}$, each of singularity degree
$\delta(\mathfrak{p})/[\kappa(\mathfrak{p}):K]_{sep}$. In other words, for
each prime $\mathfrak{p}$ there are precisely $[\kappa(\mathfrak{p}):K]_{sep}$
closed points $x_{i}\in\overline{C}$ ($1\leq
i\leq[\kappa(\mathfrak{p}):K]_{sep}$) that are mapped into $\mathfrak{p}$ by
the morphism $\overline{C}\to C$, each of $\delta$-invariant
$\delta(\overline{C},x_{i})=\delta(\mathfrak{p})/[\kappa(\mathfrak{p}):K]_{sep}$.
3. (iii)
Since singularity degrees are divisible by $(p-1)/2$ (see Proposition 2.4) we
deduce that the $\delta$-invariants $\delta(\overline{C},x)$ of the curve
$\overline{C}$ are all multiples of $(p-1)/2$. In particular, they cannot be
strictly smaller than $(p-1)/2$ unless $C$ is smooth. This provides a new
proof of the smoothness criterion in [IIL20, Theorem 5.7].
We now address the question raised after Proposition 2.9. For a given singular
prime $\mathfrak{p}$ we ask for a specific integer $n$ such that the
restriction $\mathfrak{p}_{n}$ is separable. To get an answer we work with the
partitions of the geometric singularity degree $\delta(\mathfrak{p})$ as the
sum of the $K^{1/p}$-singularity degrees $\Delta_{i}$, as indicated in
Proposition 2.4.
Let $d$ be a positive integer. We consider the partitions
$d=d_{1}+\dots+d_{s}$
of $d$ by positive integers $d_{i}$ satisfying
$d_{i+1}\leq p^{-1}d_{i}\;\text{ for each $i=1,\dots,s-1$}.$
We define
$\tau_{p}(d):=\max\\{s+\min\\{v_{p}(d_{1}),\dots,v_{p}(d_{s})\\}\\},$
where the maximum is taken over all such partitions and $v_{p}(d_{i})$ denotes
the exponent of the largest $p$-power that divides $d_{i}$.
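The quantity $\tau_{p}(d)$ can be computed directly from its definition by enumerating the admissible partitions. The following brute-force Python sketch (the helper names `tau` and `v_p` are our own, not from the text) illustrates the definition:

```python
def v_p(p, n):
    """Exponent of the largest power of p dividing n (n > 0)."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def tau(p, d):
    """tau_p(d): maximum of s + min_i v_p(d_i) over all partitions
    d = d_1 + ... + d_s with d_{i+1} <= d_i / p."""
    best = 0

    def search(remaining, parts):
        nonlocal best
        if remaining == 0:
            best = max(best, len(parts) + min(v_p(p, x) for x in parts))
            return
        # the next part is bounded by p^{-1} times the previous one
        cap = parts[-1] // p if parts else remaining
        for nxt in range(1, min(remaining, cap) + 1):
            search(remaining - nxt, parts + [nxt])

    search(d, [])
    return best

print(tau(2, 3), tau(2, 4), tau(2, 5), tau(2, 6))  # 2 3 2 3
```

For instance $\tau_{2}(4)=3$ is realized by the one-part partition $4=2^{2}$, while $\tau_{2}(5)=2$ since $5$ is not a sum of consecutive $2$-powers.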
###### Proposition 2.19.
Let $\mathfrak{p}$ be a singular prime of $F|K$. Then the restricted prime
$\mathfrak{p}_{n}$ is separable for all
$n\geq\tau_{p}\big{(}\frac{2}{p-1}\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}\big{)}$.
Note that according to Corollary 2.13 the integer
$\frac{2}{p-1}\delta(\mathfrak{p})$ is divisible by
$[\kappa(\mathfrak{p}):K]_{sep}$, and so
$\frac{2}{p-1}\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}$ is
indeed an integer.
###### Proof.
We take
$d:=\frac{2}{p-1}\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}$
and
$d_{i}:=\frac{2}{p-1}\frac{\Delta(\mathfrak{p}_{i-1})}{[\kappa(\mathfrak{p}):K]_{sep}}$.
Let $s$ be the largest integer such that $d_{s}>0$, that is,
$\Delta(\mathfrak{p}_{s-1})\neq 0$ but $\Delta(\mathfrak{p}_{s})=0$, i.e.,
$\mathfrak{p}_{s-1}$ is singular but $\mathfrak{p}_{s}$ is non-singular. Let
$m=\log_{p}[\kappa(\mathfrak{p}_{s}):K]_{insep}$. By Proposition 2.9, the
prime $\mathfrak{p}_{s+m}$ is separable. By Proposition 2.6, for each
$i=1,\dots,s$ the degree $\deg(\mathfrak{p}_{i})$ is a divisor of
$[\kappa(\mathfrak{p}):K]_{sep}\,d_{i}$. Because
$\deg(\mathfrak{p}_{s})=p^{m}[\kappa(\mathfrak{p}):K]_{sep}$ is a divisor of
each $\deg(\mathfrak{p}_{i})$ we conclude
$m\leq\min\\{v_{p}(d_{1}),\dots,v_{p}(d_{s})\\}$. Hence $\mathfrak{p}_{n}$ is
separable for all $n\geq s+m$, and
$s+m\leq s+\min\\{v_{p}(d_{1}),\dots,v_{p}(d_{s})\\}\leq\tau_{p}(d)$. ∎
To get the desired bound on $n$ so that $\mathfrak{p}_{n}$ is separable it
remains to solve a combinatorial problem, namely, we have to determine the
precise value of $\tau_{p}(d)$. As this will depend on whether $d$ is a sum of
consecutive $p$-powers we introduce the following notation: for a pair of non-
negative integers $j\leq i$ we write
$P^{i}_{j}:=p^{j}+\dots+p^{i}=\sum_{r=j}^{i}p^{r}=\frac{p^{i+1}-p^{j}}{p-1}.$
Note that for every positive integer $i$ the following inequalities hold
$P_{0}^{i-1}<P_{i}^{i}<P_{i-1}^{i}<\cdots<P_{0}^{i}.$
###### Proposition 2.20.
Let $d$ be a positive integer. If $P_{0}^{i-1}\leq d\leq P_{0}^{i}$, then
$\tau_{p}(d)=\begin{cases}i+1&\text{if $d=P^{i}_{j}$ for some $j\leq i$,}\\\
i&\text{otherwise.}\end{cases}$
Equivalently,
$\tau_{p}(d)=\begin{cases}\lceil\log_{p}((p-1)d+1)\rceil&\text{if $d$ is a sum
of consecutive $p$-powers,}\\\
\lceil\log_{p}((p-1)d+1)\rceil-1&\text{otherwise.}\end{cases}$
###### Proof.
As $\tau_{p}(d)=1$ whenever $1\leq d<p$ and $\tau_{p}(p)=\tau_{p}(p+1)=2$, the
case $i=1$ is clear. So we can assume that $i>1$. The partition
$d=(p^{i-1}+(d-P^{i-1}_{0}))+p^{i-2}+\dots+1$ shows that $\tau_{p}(d)\geq i$
whenever $d\geq P_{0}^{i-1}$. Moreover, if $d=P_{j}^{i}$ for some $j\leq i$
then $\tau_{p}(d)>i$, as follows from the partition $d=p^{i}+\dots+p^{j}$.
Thus it suffices to show that if $d\leq P_{0}^{i}$ then any partition
$d=d_{1}+\dots+d_{s}$ different from the ones of the preceding line satisfies
$s+\min\\{v_{p}(d_{1}),\dots,v_{p}(d_{s})\\}\leq i.$
We prove this claim by induction. The base case $i=1$ has been already
settled. We may assume that $s>1$; indeed, if $s=1$ then $v_{p}(d_{1})<i$
because $d_{1}$ is different from $p^{i}$ and not larger than $P_{0}^{i}$. If
$d\leq P_{0}^{i-1}$ then the claim follows from the induction hypothesis. Thus
we may assume that $d>P_{0}^{i-1}$. Since
$d_{2}+\dots+d_{s}\leq\frac{d_{1}}{p}+\dots+\frac{d_{1}}{p^{s-1}}<\frac{d_{1}}{p-1}=\frac{d-d_{2}-\dots-
d_{s}}{p-1},$
we get $d_{2}+\dots+d_{s}<p^{-1}d\leq p^{-1}P_{0}^{i}=p^{-1}+P_{0}^{i-1}$ and
therefore $d_{2}+\dots+d_{s}\leq P_{0}^{i-1}$. It then follows from the induction
hypothesis that either
$s-1+\min\\{v_{p}(d_{2}),\dots,v_{p}(d_{s})\\}\leq i-1$
or $d_{2}=p^{i-1},\dots,d_{s}=p^{i+1-s}$. In the first case the claim follows.
In the second case it remains to show that $d_{1}$ is not a multiple of
$p^{i+1-s}$. This holds because on the one hand $d_{1}\geq
pd_{2}=p^{i},d_{1}\neq p^{i}$ and therefore $d_{1}-p^{i}>0$, while on the
other hand
$d_{1}-p^{i}=d-p^{i}-d_{2}-\dots-d_{s}=d-P_{i+1-s}^{i}\leq
P_{0}^{i}-P_{i+1-s}^{i}=p^{0}+\dots+p^{i-s}<p^{i-s+1}.\qed$
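The closed form of Proposition 2.20 can be checked mechanically. Note that $d$ is a sum of consecutive $p$-powers exactly when the base-$p$ digits of $d$ form a block of $1$s (this digit criterion is our reformulation, not stated in the text). A minimal sketch:

```python
def tau_closed(p, d):
    """Closed form of Proposition 2.20 for tau_p(d)."""
    # d = p^j + ... + p^i exactly when the base-p digits of d are a block of 1s
    digits = []
    n = d
    while n:
        digits.append(n % p)
        n //= p
    while digits and digits[0] == 0:   # drop the zero digits in positions < j
        digits.pop(0)
    consecutive = all(x == 1 for x in digits)
    # m = ceil(log_p((p-1)*d + 1)), computed with integer arithmetic
    m, power = 0, 1
    while power < (p - 1) * d + 1:
        power *= p
        m += 1
    return m if consecutive else m - 1

print([tau_closed(2, d) for d in range(1, 8)])  # [1, 2, 2, 3, 2, 3, 3]
```

As a sanity check against the discussion of Theorem 2.21 below, `tau_closed(3, 16)` returns `3`, since $16$ is not a sum of consecutive $3$-powers.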
A combination of Propositions 2.19 and 2.20 yields the desired bound on $n$ so
that the prime $\mathfrak{p}_{n}$ is separable. This depends on the
characteristic $p>0$ and on the geometric singularity degree
$\delta(\mathfrak{p})$ of the singular prime $\mathfrak{p}$. In particular,
when $\mathfrak{p}$ is non-decomposed we obtain a bound on $n$ so that
$\mathfrak{p}_{n}$ is rational, thus answering the question raised just before
Proposition 2.4.
###### Theorem 2.21.
Let $F|K$ be a one-dimensional separable function field of characteristic
$p>0$. For a singular prime $\mathfrak{p}$ the following assertions hold.
1. (i)
The restriction $\mathfrak{p}_{n}$ of $\mathfrak{p}$ to the $n$th Frobenius
pullback $F_{n}|K=F^{p^{n}}{\cdot}K|K$ is separable for all
$n\geq\log_{p}\big{(}2\,\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}+1\big{)}$;
moreover, if the integer
$\frac{2}{p-1}\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}$ is
not a sum of consecutive $p$-powers then $\mathfrak{p}_{n}$ is separable for
all
$n\geq\log_{p}\big{(}2\,\frac{\delta(\mathfrak{p})}{[\kappa(\mathfrak{p}):K]_{sep}}+1\big{)}-1$.
2. (ii)
Assume in addition that the prime $\mathfrak{p}$ is non-decomposed. Then
$\mathfrak{p}_{n}$ is rational for all
$n\geq\log_{p}\big{(}2{\delta(\mathfrak{p})}+1\big{)}$; moreover, if the
integer $\frac{2}{p-1}{\delta(\mathfrak{p})}$ is not a sum of consecutive
$p$-powers then $\mathfrak{p}_{n}$ is rational for all
$n\geq\log_{p}\big{(}2{\delta(\mathfrak{p})}+1\big{)}-1$.
In the special case where $p>2$ and $\delta(\mathfrak{p})=p(p-1)/2$, the bound
in (ii) is equal to two. This was obtained previously by Salomão [Sal11,
Corollary 3.3].
In the remainder of this section we show that the bound in Theorem 2.21 (ii)
is sharp in characteristic $p=2$. In characteristic $p>2$, however, an
analogous statement is false. For instance, it can be proved that in
characteristic $p=3$ the restriction $\mathfrak{p}_{2}$ of a non-decomposed
prime $\mathfrak{p}$ of geometric singularity degree $\delta(\mathfrak{p})=16$
is rational, although the theorem only guarantees that $\mathfrak{p}_{n}$ is
rational for $n\geq 3$.
###### Proposition 2.22.
The bound provided by Theorem 2.21 (ii) is sharp in characteristic $p=2$.
To prove the proposition, we must construct for every positive even integer
$d$ a non-decomposed prime $\mathfrak{p}$ with $d=2\delta(\mathfrak{p})$ such
that $\mathfrak{p}_{n-1}$ is non-rational, where $n$ is the smallest integer
allowed by the bound in Theorem 2.21 (ii), that is,
$n=\tau_{2}(d)=\begin{cases}i+1&\text{if $d=P^{i}_{j}$ for some $j\leq i$,}\\\
i&\text{if $d\neq P^{i}_{j}$ for all $j\leq i$ and
$P^{i-1}_{0}<d<P^{i}_{0}$.}\end{cases}$
In Example 2.23 below we build for every $i>j>0$ and every $\ell\geq 0$ a non-
decomposed prime $\mathfrak{p}$ with
$P_{j}^{i}+\ell\cdot 2^{j+1}=2\delta(\mathfrak{p})$
such that $\mathfrak{p}_{i}$ and $\mathfrak{p}_{i+1}$ are non-rational and
rational respectively. Similarly, in Example 2.24 we construct for every $i>0$
a non-decomposed prime $\mathfrak{p}$ with
$2^{i}=2\delta(\mathfrak{p})$
such that $\mathfrak{p}_{i}$ and $\mathfrak{p}_{i+1}$ are non-rational and
rational respectively. Before getting to the examples themselves we show how
the proposition is obtained from them.
###### Proof of Proposition 2.22.
In view of the two examples, it is enough to show that if $d$ is a positive
even integer such that $P^{i-1}_{0}<d<P^{i}_{0}$ and $d\neq P^{i}_{j}$ for all
$j\leq i$ (in particular $i>2$), then it can be written as
$d=P_{j}^{i-1}+\ell\cdot 2^{j+1}$ for some $j$ with $0<j<i-1$ and some
$\ell\geq 0$. Indeed, if this were not the case then
$d\not\equiv 2^{j}\pmod{2^{j+1}}\quad\text{for each $j=0,\dots,i-2$,}$
which means $d\equiv 0\pmod{2^{i-1}}$, and therefore
$d\in\\{P^{i}_{i},P^{i}_{i-1}\\}$ as $P^{i-1}_{0}<d<P^{i}_{0}$, a
contradiction. ∎
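The decomposition claim in this proof, namely that every admissible even $d$ can be written as $P_{j}^{i-1}+\ell\cdot 2^{j+1}$, can also be verified exhaustively for small $i$. A quick sketch (the helper names `P` and `decomposes` are ours):

```python
def P(j, i):
    """P_j^i = 2^j + ... + 2^i (here p = 2)."""
    return sum(2 ** r for r in range(j, i + 1))

def decomposes(d, i):
    """Does d = P_j^{i-1} + l * 2^(j+1) hold for some 0 < j < i-1, l >= 0?"""
    return any(d >= P(j, i - 1) and (d - P(j, i - 1)) % 2 ** (j + 1) == 0
               for j in range(1, i - 1))

# every even d with P_0^{i-1} < d < P_0^i that is not of the form P_j^i
# must admit such a decomposition
for i in range(3, 12):
    excluded = {P(j, i) for j in range(i + 1)}
    for d in range(P(0, i - 1) + 1, P(0, i)):
        if d % 2 == 0 and d not in excluded:
            assert decomposes(d, i), (d, i)
print("claim verified for i < 12")
```

For example, with $i=3$ the only admissible even value is $d=10=P_{1}^{2}+1\cdot 2^{2}$.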
###### Example 2.23.
Let $i>j>0$ and $\ell\geq 0$. We construct a non-decomposed prime
$\mathfrak{p}$ that realizes the partition of length $s:=i-j+1$
$P_{j}^{i}+\ell\cdot 2^{j+1}=(2^{i}+\ell\cdot 2^{j+1})+2^{i-1}+\dots+2^{j},$
i.e.,
$2\delta(\mathfrak{p})=P_{j}^{i}+\ell\cdot
2^{j+1},\,2\Delta_{0}=2^{i}+\ell\cdot
2^{j+1},2\Delta_{1}=2^{i-1},\dots,2\Delta_{s-1}=2^{j}$
(see the proof of Proposition 2.19), with the property that $\mathfrak{p}_{i}$
and $\mathfrak{p}_{i+1}$ are non-rational and rational respectively. Consider
the function field $F|K=K(y,u)|K$ in characteristic $p=2$ given by the
following relation
$(a+z^{2^{j}})z+y^{2^{i-j}}=0,$
where $z:=u^{2}+y^{1+2\ell}$ and $a\in K\setminus K^{2}$. Then
$y^{2^{i-j}}=(a+z^{2^{j}})z$ and $u^{2}=z+y^{1+2\ell}$, whence the Frobenius
pullbacks of $F|K$ take the form
$F_{n}|K=\begin{cases}K(y,u)|K&\text{if $n=0$},\\\ K(z,y^{2^{n-1}})|K&\text{if
$0<n<s$},\\\ K(z)|K&\text{if $n=s$}.\end{cases}$
Let $\mathfrak{p}$ be the zero of the function $z^{2^{j}}+a$, that is, let
$\mathfrak{p}$ be the only prime of $F|K$ such that
$v_{\mathfrak{p}}(z^{2^{j}}+a)>0$. The restricted prime $\mathfrak{p}_{s}$ is
the $(z^{2^{j}}+a)$-adic prime of the rational function field
$F_{s}|K=K(z)|K$, i.e., $v_{\mathfrak{p}_{s}}(z^{2^{j}}+a)=1$, and therefore
it has residue field
$\kappa(\mathfrak{p}_{s})=K(z(\mathfrak{p}_{s}))=K(a^{1/2^{j}})$ and degree
$\deg(\mathfrak{p}_{s})=2^{j}$. The prime
$\mathfrak{p}_{s+j}=\mathfrak{p}_{i+1}$ is rational with local parameter
$x:=z^{2^{j}}+a$, but the prime $\mathfrak{p}_{s+j-1}=\mathfrak{p}_{i}$ is not
(see Proposition 2.17).
We compute the geometric singularity degree of the non-decomposed prime
$\mathfrak{p}$ by applying the algorithm developed in [BedSt87]. Because $y\in
F_{1}$ and $y^{2^{i-j}}=xz$ is a local parameter at the prime
$\mathfrak{p}_{s}$, for every $0<n<s$ the prime $\mathfrak{p}_{n}$ is ramified
over $F_{n+1}$ with local parameter $y^{2^{n-1}}$. As the differential
$dy^{2^{i}}=x^{2^{j}}dx$ of $F_{i+1}|K=K(x)|K$ has order $2^{j}$ at
$\mathfrak{p}_{i+1}$, this implies by [BedSt87, Theorem 2.3] that
$\delta(\mathfrak{p}_{n})=2\delta(\mathfrak{p}_{n+1})+\tfrac{1}{2}v_{\mathfrak{p}_{i+1}}(dy^{2^{i}})=2\delta(\mathfrak{p}_{n+1})+2^{j-1}\qquad(0<n<s).$
Now, since $u(\mathfrak{p})=z(\mathfrak{p}_{1})^{1/2}$ does not lie in the
residue field
$\kappa({\mathfrak{p}_{1}})=K(z(\mathfrak{p}_{1}))=K(a^{1/2^{j}})$ the prime
$\mathfrak{p}$ is unramified over $F_{1}$, so it follows from [BedSt87,
Theorem 2.3] that
$\delta(\mathfrak{p})=2\delta(\mathfrak{p}_{1})+\tfrac{1}{2}v_{\mathfrak{p}_{i+1}}(du^{2^{i+1}})=2\delta(\mathfrak{p}_{1})+2^{j-1}+\ell\cdot
2^{j},$
where the last equality is due to the fact that the differential
$du^{2^{i+1}}=x^{2^{j}(1+2\ell)}(a^{2}+x^{2})^{\ell}dx$ of $F_{i+1}|K$ has
order $2^{j}(1+2\ell)$ at $\mathfrak{p}_{i+1}$. This shows that the non-
decomposed prime $\mathfrak{p}$ realizes the aforementioned partition of
$P^{i}_{j}+\ell\cdot 2^{j+1}$.
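The recursion from [BedSt87, Theorem 2.3] used in this example can be replayed numerically to confirm the claimed partition. In the sketch below, `invariants` (our own helper) computes $\delta(\mathfrak{p}_{n})$ and the $K^{1/2}$-singularity degrees $\Delta_{n}$ for small parameters:

```python
def invariants(i, j, l):
    """Singularity degrees of the prime in Example 2.23, from the recursion
    delta(p_n) = 2*delta(p_{n+1}) + 2^(j-1) for 0 < n < s, delta(p_s) = 0,
    and delta(p) = 2*delta(p_1) + 2^(j-1) + l*2^j."""
    s = i - j + 1
    delta = [0] * (s + 1)                 # delta[n] = delta(p_n), n = 0..s
    for n in range(s - 1, 0, -1):
        delta[n] = 2 * delta[n + 1] + 2 ** (j - 1)
    delta[0] = 2 * delta[1] + 2 ** (j - 1) + l * 2 ** j
    Delta = [delta[n] - delta[n + 1] for n in range(s)]
    return delta[0], Delta

for i in range(2, 8):
    for j in range(1, i):
        for l in range(4):
            d0, Delta = invariants(i, j, l)
            Pji = 2 ** (i + 1) - 2 ** j                # P_j^i
            assert 2 * d0 == Pji + l * 2 ** (j + 1)    # 2*delta(p) = P_j^i + l*2^(j+1)
            assert 2 * Delta[0] == 2 ** i + l * 2 ** (j + 1)
            assert [2 * x for x in Delta[1:]] == [2 ** r for r in range(i - 1, j - 1, -1)]
print("partition confirmed")
```

For instance, $i=2$, $j=1$, $\ell=0$ gives $\delta(\mathfrak{p})=3$ with $(2\Delta_{0},2\Delta_{1})=(4,2)$, realizing $P_{1}^{2}=6$.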
###### Example 2.24.
Let $i>0$. We construct a non-decomposed prime $\mathfrak{p}$ that realizes
the partition of $2^{i}$ of length $s:=1$, i.e.,
$2\delta(\mathfrak{p})=2^{i},\,2\Delta_{0}=2^{i},$
with the property that $\mathfrak{p}_{i}$ and $\mathfrak{p}_{i+1}$ are non-
rational and rational respectively. So let $F|K=K(z,y)|K$ be the function
field in characteristic $p=2$ defined by the equation
$y^{2}=(a+z^{2^{i}})z,$
where $a\in K\setminus K^{2}$. The first Frobenius pullback is equal to
$F_{1}|K=K(z)|K.$
Let $\mathfrak{p}$ be the zero of the function $z^{2^{i}}+a$, so that its
restriction $\mathfrak{p}_{1}$ is the $(z^{2^{i}}+a)$-adic prime of the
rational function field $F_{1}|K=K(z)|K$, i.e.,
$v_{\mathfrak{p}_{1}}(z^{2^{i}}+a)=1$. This implies that $\mathfrak{p}_{1}$ is
a non-decomposed prime of degree $\deg(\mathfrak{p}_{1})=2^{i}$ and that the
primes $\mathfrak{p}_{i+1}$ and $\mathfrak{p}_{i}$ are rational and non-
rational respectively. As $y^{2}=xz$ is a local parameter at
$\mathfrak{p}_{1}$ we conclude
$\delta(\mathfrak{p})=2\delta(\mathfrak{p}_{1})+\frac{1}{2}v_{\mathfrak{p}_{i+1}}(dy^{2^{i+1}})=2^{i-1}.$
## 3. A pencil of singular quartics in characteristic 2
In this section we study the geometry of a fibration by singular rational
plane projective quartics over the projective line in characteristic $2$. The
generic fibre of this fibration has a singular non-decomposed prime
$\mathfrak{p}$ of geometric singularity degree $\delta(\mathfrak{p})=3$, with
the property that its restriction $\mathfrak{p}_{2}$ is non-rational. This
means that the generic fibre attains the bound provided by Theorem 2.21 (ii) for
$p=2$ and $\delta(\mathfrak{p})=3$. We determine as well the minimal regular
model of the fibration.
Let $k$ be an algebraically closed field of characteristic $p=2$. Consider the
integral projective algebraic surface over $k$
$S\subset\mathbb{P}^{2}\times\mathbb{P}^{1}$
cut out by the bihomogeneous polynomial equation
$T_{0}(Z^{4}+X^{2}Y^{2}+X^{3}Z)+T_{1}(Y^{4}+X^{2}Z^{2})=0,$ (3.1)
where $X,Y,Z$ and $T_{0},T_{1}$ represent the homogeneous coordinates of
$\mathbb{P}^{2}$ and $\mathbb{P}^{1}$ respectively. The surface $S$ has a
unique singular point, namely $P=((1:0:0),(0:1))$, as follows from the
Jacobian criterion. The second projection
$\phi:S\longrightarrow\mathbb{P}^{1},$
which is proper and flat [Har77, Chapter III, Proposition 9.7], yields a
fibration by plane projective quartic curves over $\mathbb{P}^{1}$. The fibre
over each point of the form $(1:c)$ in $\mathbb{P}^{1}$ is isomorphic to the
plane projective quartic $S_{c}$ cut out by the equation
$Z^{4}+X^{2}Y^{2}+X^{3}Z+c(Y^{4}+X^{2}Z^{2})=0,$
which has a unique singular point at
$P_{c}:=(0:1:c^{1/4}).$
This curve is rational and integral, and its arithmetic genus is equal to $3$,
as follows from the genus-degree formula for plane curves. The singular point
$P_{c}$ is unibranch of singularity degree $3$ and multiplicity $2$ (if
$c^{3}\neq 1$) or $3$ (if $c^{3}=1$), and its tangent line
$\\{(x:y:z)\in\mathbb{P}^{2}\mid x=0\\}$
cuts the quartic curve only at $P_{c}$. We note that the quartic curve is
_strange_ , that is, all its tangent lines pass through the unique common
point $(0:1:0)$. If $c=0$, then this point coincides with the singular point
$P_{c}$, and so each tangent line at a non-singular point intersects the curve
at two points but is not a bitangent. In the opposite case $c\neq 0$, every
such tangent line is a bitangent.
In analogy to the theory of elliptic curves, we notice that the curve $S_{c}$
is _homogeneous_ , that is, for any two non-singular points there is an
automorphism mapping the first point into the second point. Indeed, given a
non-singular point $(x_{0}:y_{0}:z_{0})\in S_{c}$, the projective
transformation
$(x:y:z)\longmapsto(x_{0}x:x_{0}y+y_{0}x:x_{0}z+z_{0}x)$
defines an automorphism of $S_{c}$ mapping $(x_{0}:y_{0}:z_{0})$ into the
point $(1:0:0)$.
Over the point $(0:1)$ of the base $\mathbb{P}^{1}$ the fibre of $\phi$
degenerates to the non-reduced curve
$(Y^{2}+XZ)^{2}=0.$
This is the bad fibre of the fibration in the sense that its behaviour differs
from the generic behaviour of the fibres.
The generic fibre $C$ of the fibration $\phi:S\to\mathbb{P}^{1}$ is the
quartic curve over the function field $K=k(t):=k(T_{1}/T_{0})$ of the base
$\mathbb{P}^{1}$ defined by the homogeneous equation (3.1). Its function field
$F:=K(C)$ coincides with the function field $k(S)$ of the total space $S$.
Dehomogenizing $X\mapsto 1$ and $T_{0}\mapsto 1$ in equation (3.1) we obtain
$F=k(S)=k(t,y,z)=K(y,z)$
where the affine coordinate functions $t$, $y$ and $z$ of the surface $S$
satisfy the equation
$(z^{4}+y^{2}+z)+t(y^{4}+z^{2})=0.$ (3.2)
The function field $F|K=K(y,z)|K$ of $C$ is therefore generated over $K=k(t)$
by the functions $y$ and $z$, which satisfy equation (3.2). The following
proposition lists some properties of the generic fibre $C$ and its singular
primes.
###### Proposition 3.3.
The regular curve $C$ has arithmetic genus $h^{1}(C,\mathcal{O}_{C})=3$. The
genus of the normalization of its extension $C\otimes_{K}\overline{K}$ is
equal to zero. Furthermore, there is a unique singular prime $\mathfrak{p}$ in
$C$, which is non-decomposed and satisfies
1. (i)
$\delta(\mathfrak{p})=3$, $\delta(\mathfrak{p}_{1})=1$, and
$\delta(\mathfrak{p}_{n})=0$ for each $n\geq 2$;
2. (ii)
$\deg(\mathfrak{p})=4$, $\deg(\mathfrak{p}_{1})=\deg(\mathfrak{p}_{2})=2$, and
$\deg(\mathfrak{p}_{n})=1$ for each $n\geq 3$.
In particular, the prime $\mathfrak{p}$ attains the bound in Theorem 2.21 (ii)
for $p=2$ and $\delta(\mathfrak{p})=3$.
###### Proof.
By the genus-degree formula for plane curves, the curve $C$ has arithmetic
genus $h^{1}(C,\mathcal{O}_{C})=3$; equivalently, the function field $F|K$ has
genus $g=3$. Consider the function $u:=z+y^{2}$ in $F=K(y,z)$, and notice that
it satisfies the relation
$tu^{2}+u=z^{4}.$
The first three iterated Frobenius pullbacks of $F|K$ are then given by
$F_{1}|K=K(u,z)|K,\quad F_{2}|K=K(u,z^{2})|K,\quad F_{3}|K=K(u)|K.$
As the latter function field is rational, so is the extended function field
$\overline{K}F|\overline{K}$, i.e., the extended curve
$C\otimes_{K}\overline{K}$ is rational and its normalization has genus
$\overline{g}=0$.
Let $\mathfrak{p}$ denote the only pole of $u$, i.e., let $\mathfrak{p}$ be
the only prime of $F|K$ such that $v_{\mathfrak{p}}(u)<0$. It is non-
decomposed because its restriction $\mathfrak{p}_{3}$ to $F_{3}|K$ is a
rational prime with local parameter $u^{-1}\in F_{3}$. In particular, we can
determine its invariants by applying the algorithm in [BedSt87]. Since the
function $(z^{2}u^{-1})^{2}+t=u^{-1}$ belongs to the maximal ideal of the
local ring $\mathcal{O}_{\mathfrak{p}_{3}}$, the value
$(z^{2}u^{-1})(\mathfrak{p}_{2})=t^{1/2}$ lies outside
$\kappa(\mathfrak{p}_{3})=K$. Thus the prime $\mathfrak{p}_{2}$ of $F_{2}|K$
is unramified over $F_{3}$ with residue field
$\kappa({\mathfrak{p}_{2}})=K(t^{1/2})$, and therefore
$\delta(\mathfrak{p}_{2})=\frac{1}{2}v_{\mathfrak{p}_{3}}\big{(}d(z^{2}u^{-1})^{2}\big{)}=0$
by [BedSt87, Theorem 2.3]. As the function $zu^{-1}\in F_{1}$ has fourth power
$(zu^{-1})^{4}=tu^{-2}+u^{-3}$, it follows that the prime $\mathfrak{p}_{1}$
of $F_{1}|K$ is ramified over $F_{2}$ with local parameter $zu^{-1}$, and thus
$\delta(\mathfrak{p}_{1})=\frac{1}{2}v_{\mathfrak{p}_{3}}(d(zu^{-1})^{4})=1$
by [BedSt87, Theorem 2.3].
It remains to determine the invariants of $\mathfrak{p}$. Note that
$\delta(\mathfrak{p})=3$, since on the one hand $\delta(\mathfrak{p})\leq
g=3$, while on the other hand $\Delta_{0}\geq 2\Delta_{1}=2$ (see the
paragraph before Proposition 2.4). Because $g-\overline{g}=3$, it follows from
Rosenlicht’s genus drop formula (2.3) that $\mathfrak{p}$ is the only singular
prime of $F|K$. Moreover, as the function $(\frac{z}{y})^{8}+t^{2}\in F_{3}$
has order $2$ at $\mathfrak{p}_{3}$, the value
$(\frac{z}{y})(\mathfrak{p})=t^{1/4}$ does not belong to
$\kappa(\mathfrak{p}_{1})=K(t^{1/2})$. This proves that $\mathfrak{p}$ has
residue field $\kappa(\mathfrak{p})=K(t^{1/4})$ and degree
$\deg(\mathfrak{p})=4$. ∎
By a theorem of Lichtenbaum–Shafarevich, a (relatively) minimal regular model
of the fibration $S\to\mathbb{P}^{1}$ exists and is unique up to isomorphism
(see [Liu02, Chapter 9, Theorem 3.21 and Corollary 3.24]). In general, it is
difficult to unveil the structure of such a minimal model, but here we can
achieve an explicit description by performing blowups, as described below. We
note that similar results for families of curves on rational normal scrolls
can be found in [St04].
The only singular point $P=((1:0:0),(0:1))$ of the projective surface $S$ is a
rational double point of type $A_{15}$, which can be resolved by blowing up
the surface eight times over the point $P$. This in turn gives a smooth
surface $\widetilde{S}$ and a new proper flat fibration
$f:\widetilde{S}\longrightarrow
S\overset{\phi}{\longrightarrow}\mathbb{P}^{1}$
whose fibers over the points $(1:c)$ of $\mathbb{P}^{1}$ coincide with the
corresponding fibers of $\phi$. Over the point $(0:1)$, the exceptional fibre
$f^{*}(0:1)$ is given by a linear combination of smooth rational curves
$\displaystyle f^{*}(0:1)$
$\displaystyle=2Z+E_{1}^{(1)}+E_{2}^{(1)}+2E_{1}^{(2)}+2E_{2}^{(2)}+3E_{1}^{(3)}+3E_{2}^{(3)}+4E_{1}^{(4)}+4E_{2}^{(4)}$
(3.4)
$\displaystyle\qquad+5E_{1}^{(5)}+5E_{2}^{(5)}+6E_{1}^{(6)}+6E_{2}^{(6)}+7E_{1}^{(7)}+7E_{2}^{(7)}+8E^{(8)},$
which intersect transversely according to the Coxeter-Dynkin diagram in Figure
1. In this diagram the vertex $Z$ represents the strict transform of the bad
fibre, while the dashed line means that the strict transform $H$ of the
horizontal curve $(1:0:0)\times\mathbb{P}^{1}\subset S$ intersects the
exceptional fibre $f^{*}(0:1)$ transversely at the component $E_{2}^{(1)}$ but
does not belong to $f^{*}(0:1)$.
Figure 1. Dual diagram of the exceptional fibre $f^{*}(0:1)$: a chain $E^{(1)}_{1}$–$E^{(2)}_{1}$–$\cdots$–$E^{(7)}_{1}$–$E^{(8)}$–$E^{(7)}_{2}$–$\cdots$–$E^{(1)}_{2}$, with $Z$ attached to $E^{(8)}$ and $H$ joined to $E^{(1)}_{2}$ by a dashed edge.
Since a fibre meets its components with intersection number zero, equation
(3.4) allows us to compute the self-intersection number of each component of
$f^{*}(0:1)$. Thus
$Z\cdot Z=-4,\qquad E^{(i)}_{j}\cdot E^{(i)}_{j}=-2\>\text{ for each
$i,j$},\quad E^{(8)}\cdot E^{(8)}=-2.$
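As a quick arithmetic consistency check (our own sketch, with the chain adjacency transcribed from Figure 1), one can solve $f^{*}(0:1)\cdot C=0$ for the self-intersection of each component $C$, using the multiplicities from (3.4):

```python
from fractions import Fraction

# Multiplicities of the fibre components in f^*(0:1), eq. (3.4)
mult = {"Z": 2, "E8": 8}
for i in range(1, 8):
    mult[f"E{i}_1"] = i
    mult[f"E{i}_2"] = i

# Adjacency of the dual diagram (Figure 1): two chains meeting at E^(8),
# with Z attached to E^(8).  H is omitted: it is not a fibre component.
edges = [("Z", "E8")]
for j in (1, 2):
    edges += [(f"E{i}_{j}", f"E{i+1}_{j}") for i in range(1, 7)]
    edges += [(f"E7_{j}", "E8")]

def self_intersection(c):
    """Solve F.C = mult(C)*C^2 + (sum of adjacent multiplicities) = 0."""
    s = sum(mult[b] for a, b in edges if a == c) \
      + sum(mult[a] for a, b in edges if b == c)
    return Fraction(-s, mult[c])

print(self_intersection("Z"))     # -4
print(self_intersection("E8"))    # -2
print(self_intersection("E3_1"))  # -2
```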
In particular, by Castelnuovo’s contractibility criterion the smooth
projective surface $\widetilde{S}$ is relatively minimal over
$\mathbb{P}^{1}$, and hence the fibration $\widetilde{S}\to\mathbb{P}^{1}$ is
the minimal regular model of the original fibration $S\to\mathbb{P}^{1}$.
However, as we will see in a moment, the surface $\widetilde{S}$ is not
relatively minimal as an algebraic surface over $\operatorname{Spec}(k)$. In
other words, it contains a smooth rational curve of self-intersection $-1$,
namely the strict transform $H$ of the horizontal curve
$(1:0:0)\times\mathbb{P}^{1}\subset S$. To see this in detail, we note that
the first projection
$S\longrightarrow\mathbb{P}^{2}$
is a birational morphism whose inverse
$\mathbb{P}^{2}\dashrightarrow
S,\quad(x:y:z)\mapsto\big{(}(x:y:z),(y^{4}+x^{2}z^{2}:z^{4}+x^{2}y^{2}+x^{3}z)\big{)}$
is regular at all points of $\mathbb{P}^{2}$ except $(1:0:0)$. The composition
$\widetilde{S}\to S\to\mathbb{P}^{2}$ is then a birational morphism between
smooth projective surfaces that contracts the sixteen smooth rational curves
$E_{1}^{(1)},E_{1}^{(2)},\dots,E_{2}^{(1)}$ and $H$ to the point $(1:0:0)$.
Thus the morphism $\widetilde{S}\to S\to\mathbb{P}^{2}$ factors as a
composition of sixteen blowups, which means that $\widetilde{S}$ can also be
obtained by blowing up the projective plane $\mathbb{P}^{2}$ sixteen times
over $(1:0:0)$. This shows that the smooth projective surface $\widetilde{S}$
is rational, and that the smooth rational curve $H$ has self-intersection
$-1$.
(Commutative square: the resolution $\widetilde{S}\to S$ and the contraction $\widetilde{S}\to\mathbb{P}^{2}$, together with $\phi:S\to\mathbb{P}^{1}$ and $\tau:\mathbb{P}^{2}\dashrightarrow\mathbb{P}^{1}$.)
Alternatively, we can view the fibration $\phi:S\to\mathbb{P}^{1}$ as a pencil
of quartics in $\mathbb{P}^{2}$ via the rational map
$\tau:\mathbb{P}^{2}\dashrightarrow\mathbb{P}^{1},\quad(x:y:z)\mapsto(y^{4}+x^{2}z^{2}:z^{4}+x^{2}y^{2}+x^{3}z)$
obtained by composing $\phi$ with the inverse of $S\to\mathbb{P}^{2}$. It is
not difficult to see that by resolving the indeterminacy locus of $\tau$,
which is equivalent to resolving the indeterminacy locus of
$\mathbb{P}^{2}\dashrightarrow S$, we obtain precisely the birational morphism
$\widetilde{S}\to\mathbb{P}^{2}$. Indeed, resolving such indeterminacy locus
entails blowing up $\mathbb{P}^{2}$ sixteen times over $(1:0:0)$, which in
turn produces a smooth surface $\overline{S}$ isomorphic to $\widetilde{S}$
and sixteen smooth rational curves $E_{1},E_{2},\dots,E_{16}$ in
$\overline{S}$ of self-intersection $-2,-2,\dots,-1$ respectively, whose
configuration is given by the Dynkin diagram in Figure 2. As in Figure 1, the
dashed line in Figure 2 means that the strict transform $E$ of the bad fibre
$V(Y^{2}+XZ)\subset\mathbb{P}^{2}$ intersects the exceptional fibre of
$\overline{S}$ transversely at the curve $E_{8}$ but does not actually lie in
it. One then sees that under the isomorphism $\overline{S}\cong\widetilde{S}$
the two diagrams correspond, i.e., the isomorphism identifies $E_{16}=H$,
$E_{15}=E_{2}^{(1)}$, $E_{14}=E_{2}^{(2)}$, and so on.
Figure 2. Dual diagram of the exceptional fibre of $\overline{S}$: a chain $E_{1}$–$E_{2}$–$\cdots$–$E_{16}$, with $E$ joined to $E_{8}$ by a dashed edge.
We collect our results on the geometry of the surface $\widetilde{S}$ in the
following theorem.
###### Theorem 3.5.
The fibration $f:\widetilde{S}\to\mathbb{P}^{1}$ is the minimal regular model
of the fibration $\phi:S\to\mathbb{P}^{1}$. Its fibres over the points $(1:c)$
coincide with the corresponding fibres of $\phi$, while its fibre over the
point $(0:1)$ is a linear combination of smooth rational curves as in (3.4),
which intersect transversely according to the diagram in Figure 1.
The strict transform $H\subset\widetilde{S}$ of the curve
$(1:0:0)\times\mathbb{P}^{1}\subset S$ is a horizontal smooth rational curve
of self-intersection $-1$. If we blow down successively the curves $H$,
$E_{2}^{(1)}$, $E_{2}^{(2)}$, $\dots$, $E_{1}^{(2)}$ and $E_{1}^{(1)}$, then
we obtain a surface isomorphic to the projective plane.
By the variant of the Faltings-Mordell theorem for function fields [Sam66,
Vol91], the generic fibre $C|K$ has only finitely many $K$-rational points.
This means that the fibration $\widetilde{S}\to\mathbb{P}^{1}$ has only
finitely many horizontal prime divisors of degree 1 over the base
$\mathbb{P}^{1}$.
###### Proposition 3.6.
The generic fibre $C|K$ has only one $K$-rational point. The corresponding
horizontal prime divisor of degree 1 is the contractible curve $H$.
###### Proof.
Let $\mathfrak{q}$ be the $K$-rational point of $C$ corresponding to the
horizontal curve $H\subset\widetilde{S}$, i.e., let $\mathfrak{q}$ be the only
prime of $F|K$ such that the rational functions $z,y\in K(C)$ in (3.2) satisfy
$z(\mathfrak{q})=y(\mathfrak{q})=0$. Clearly, the function $u:=z+y^{2}\in F$
introduced in the proof of Proposition 3.3 also satisfies $u(\mathfrak{q})=0$.
Seeking a contradiction, we assume that there is another rational prime
$\mathfrak{q}^{\prime}\neq\mathfrak{q}$. As follows from $z^{4}=u+tu^{2}$ and
$y^{2}=z+u$, the value $u(\mathfrak{q}^{\prime})\in K$ is necessarily non-
zero, and so is the value $z(\mathfrak{q}^{\prime})\in K$ because otherwise
the equality
$y(\mathfrak{q}^{\prime})^{2}+tu(\mathfrak{q}^{\prime})^{2}=z(\mathfrak{q}^{\prime})^{4}=0$
contradicts $t\notin K^{2}$. It follows that there exist non-zero polynomials
$f,g,F,G\in k[T]$ with $(f,g)=(F,G)=1$ satisfying the identity
$(\frac{F}{G})^{4}=\frac{f}{g}+t(\frac{f}{g})^{2}$, i.e.,
$F^{4}g^{2}=G^{4}f(g+tf).$
Since $G^{4}$ divides $g^{2}$, the polynomial $f$ is coprime with $G$ and
therefore it is a fourth power in $k[T]$. This implies that
$g=G^{2}g^{\prime}$, $f=f^{\prime 4}$ and $F=f^{\prime}F^{\prime}$ for some
polynomials $f^{\prime},g^{\prime},F^{\prime}$ in $k[T]$. From the relation
$F^{\prime 4}g^{\prime 2}=G^{2}g^{\prime}+tf^{\prime 4}$ we deduce that
$g^{\prime}$ divides $t$, i.e., $g^{\prime}$ is either a constant or a
constant times $t$. In light of $k=k^{2}$, both possibilities yield the
contradiction $t\in K^{2}$. ∎
## References
[BedSt87] H. Bedoya and K.-O. Stöhr, An algorithm to calculate discrete invariants of singular primes in function fields, J. Number Theory 27 (1987), no. 3, 310–323.

[BM76] E. Bombieri and D. Mumford, Enriques' classification of surfaces in char. $p$. III, Invent. Math. 35 (1976), 197–232.

[EGA2] J. Dieudonné and A. Grothendieck, Éléments de géométrie algébrique. II. Étude globale élémentaire de quelques classes de morphismes, Inst. Hautes Études Sci. Publ. Math. 8 (1961), 222 pp.

[FaSc20] A. Fanelli and S. Schröer, Del Pezzo surfaces and Mori fiber spaces in positive characteristic, Trans. Amer. Math. Soc. 373 (2020), no. 3, 1775–1843.

[Har77] R. Hartshorne, Algebraic geometry, Graduate Texts in Mathematics, No. 52, Springer-Verlag, New York-Heidelberg, 1977.

[HiSt23] C. Hilario and K.-O. Stöhr, Fibrations by plane quartic curves with a canonical moving singularity, arXiv:2306.08579.

[IIL20] K. Ito, T. Ito, and C. Liedtke, Deformations of rational curves in positive characteristic, J. Reine Angew. Math. 769 (2020), 55–86.

[JW21] L. Ji and J. Waldron, Structure of geometrically non-reduced varieties, Trans. Amer. Math. Soc. 374 (2021), no. 12, 8333–8363.

[Ko91] J. Kollár, Extremal rays on smooth threefolds, Ann. Sci. École Norm. Sup. (4) 24 (1991), no. 3, 339–361.

[Lan79] W. E. Lang, Quasi-elliptic surfaces in characteristic three, Ann. Sci. École Norm. Sup. (4) 12 (1979), no. 4, 473–500.

[Liu02] Q. Liu, Algebraic geometry and arithmetic curves, Oxford Graduate Texts in Mathematics, No. 6, Oxford University Press, Oxford, 2002.

[Ma16] Z. Maddock, Regular del Pezzo surfaces with irregularity, J. Algebraic Geom. 25 (2016), no. 3, 401–429.

[PW22] Z. Patakfalvi and J. Waldron, Singularities of general fibers and the LMMP, Amer. J. Math. 144 (2022), no. 2, 505–540.

[Ros52] M. Rosenlicht, Equivalence relations on algebraic curves, Ann. of Math. (2) 56 (1952), 169–191.

[Sal11] R. Salomão, Fibrations by nonsmooth genus three curves in characteristic three, J. Pure Appl. Algebra 215 (2011), no. 8, 1967–1979.

[Sal14] R. Salomão, Fibrations by curves with more than one nonsmooth point, Bull. Braz. Math. Soc. (N.S.) 45 (2014), no. 2, 267–292.

[Sam66] P. Samuel, Lectures on old and new results on algebraic curves, Tata Institute of Fundamental Research Lectures on Mathematics, No. 36 (notes by S. Anantharaman), Tata Institute of Fundamental Research, Bombay, 1966.

[Sc08] S. Schröer, Singularities appearing on generic fibers of morphisms between smooth schemes, Michigan Math. J. 56 (2008), no. 1, 55–76.

[SiSt16] A. Simarra Cañate and K.-O. Stöhr, Fibrations by non-smooth projective curves of arithmetic genus two in characteristic two, J. Pure Appl. Algebra 220 (2016), no. 9, 3282–3299.

[Sti78] H. Stichtenoth, Zur Konservativität algebraischer Funktionenkörper, J. Reine Angew. Math. 301 (1978), 30–45.

[St88] K.-O. Stöhr, On singular primes in function fields, Arch. Math. (Basel) 50 (1988), no. 2, 156–163.

[St04] K.-O. Stöhr, On Bertini's theorem in characteristic $p$ for families of canonical curves in $\mathbb{P}^{(p-3)/2}$, Proc. London Math. Soc. (3) 89 (2004), no. 2, 291–316.

[Vol91] J. F. Voloch, A Diophantine problem on algebraic curves over function fields of positive characteristic, Bull. Soc. Math. France 119 (1991), no. 1, 121–126.

[Zar44] O. Zariski, The theorem of Bertini on the variable singular points of a linear system of varieties, Trans. Amer. Math. Soc. 56 (1944), 130–140.
# A filtered Hénon map
Vinícius S. Borges and Marcio Eisencraft
Telecommunication and Control Engineering Department, Escola Politécnica, University of São Paulo, Brazil
Published in Chaos, Solitons & Fractals
https://doi.org/10.1016/j.chaos.2022.112865
###### Abstract
In this paper, we use Lyapunov exponents to analyze how the dynamical
properties of the Hénon map change as a function of the coefficients of a
linear filter inserted in its feedback loop. We show that the generated orbits
can be chaotic or not, depending on the filter coefficients. The dynamics of
the system presents complex behavior, including cascades of bifurcations,
coexistence of attractors, crises, and “shrimps”. The obtained results are
relevant in the context of bandlimited chaos-based communication systems, that
have recently been proposed in the literature.
###### keywords:
Dynamical Systems. Discrete-time filters. Lyapunov exponents. Chaos-based
communication.
## 1 Introduction
A chaotic signal has three main characteristics: it is bounded, presents
aperiodicity and sensitive dependence on the initial conditions (SDIC) [1].
These properties have stimulated proposals for using chaotic signals in
Telecommunications and Signal Processing since the seminal work of Pecora and
Carroll [2]. They showed that two identical systems generating
chaotic signals could be synchronized despite SDIC. Since then, many possible
applications such as chaos-based communication systems (CBCS) [3, 4],
watermarking [5], compressed-sensing [6], image encryption [7, 8], ultra-
wideband communications [9], memristor models [10] and others have appeared.
Recently, the performance of CBCS has been assessed in real transmission
scenarios involving channel distortion, noise, bandwidth constraints and delay
[11, 4]. Since transmission channels are always bandlimited [12], it is
necessary to know and control the bandwidth of the transmitted chaotic
signals. In this regard, the authors of [13] employed a discrete-time non
recursive linear filter built into the chaos generator as a way to control the
bandwidth of chaotic signals. After that, it was demonstrated that the filter
insertion does not affect chaotic synchronization [14], which is essential for
chaos communication. However, the question remained as to under what
conditions the generated signals were still chaotic. The issue of the chaotic
nature of the transmitted signals was only briefly touched upon in [15].
Bearing this in mind, in the present paper, we analyze the dynamics of the
discrete-time dynamical system obtained when we add a two coefficients non
recursive linear filter in the feedback loop of the Hénon map [16]. This system
is a simplified version of the one considered in [13, 14]. Using only two
coefficients permits a more insightful analysis using dynamical systems tools.
Besides, non recursive filters are always BIBO (bounded-input, bounded-output)
stable [17], so any divergence presented by the orbits is caused by
the dynamics and not by the filter itself.
This paper is organized as follows: the system under consideration is
described in Section 2. Its dynamical analysis and the main results are
presented in Section 3 and our conclusions are drawn in Section 4.
## 2 The filtered Hénon map
The Hénon map is given by [16]
$\begin{cases}x_{1}(n+1)=\alpha-\left(x_{1}(n)\right)^{2}+\beta x_{2}(n)\\\
x_{2}(n+1)=x_{1}(n)\end{cases}$ (1)
where ${\alpha,\beta}$ are real parameters, $n=0,1,\ldots$, and the state
variables are $\bm{x}=\left[x_{1},x_{2}\right]$. It has been used as a
paradigm to generate 2-dimensional discrete-time chaotic signals [18, 19].
In [13] it was proposed to filter $x_{1}(n)$ using a non recursive or finite
impulse response (FIR) filter [17] so that
$x_{3}(n)=\sum_{j=0}^{N_{S}-1}c_{j}x_{1}(n-j),$ (2)
where $c_{j},0\leq j\leq N_{S}-1$ are the filter coefficients. Given $N_{S}$
and a frequency response specification, there is a number of different FIR
filter design techniques to obtain the coefficients $c_{j}$ [17].
To allow synchronization in the receiver, $x_{3}(n)$ is fed back in the
dynamical system, so that the resulting system is
$\begin{cases}x_{1}(n+1)=\alpha-\left(x_{3}(n)\right)^{2}+\beta x_{2}(n)\\\
x_{2}(n+1)=x_{1}(n)\\\
x_{3}(n+1)=\sum_{j=0}^{N_{S}-1}c_{j}x_{1}(n-j+1)\end{cases}.$
(3)
In [14] this system was used as a way to generate a chaotic low-pass signal so
that it could be transmitted through a communication channel without
distortion. In addition, the authors were able to prove that synchronization
is achieved independently of the filter coefficients. When it comes to the
chaotic nature of the orbits of (3), preliminary numerical results show that
depending on $c_{j}$, the orbits can still be chaotic, can converge to
periodic attractors or diverge towards infinity [15].
Here we consider the simplest non-trivial case of $N_{S}=2$ coefficients. In
this case,
$x_{3}(n+1)=c_{0}x_{1}(n+1)+c_{1}x_{1}(n)$ (4)
and so (3) becomes
$\begin{cases}x_{1}(n+1)=\alpha-\left(x_{3}(n)\right)^{2}+\beta x_{2}(n)\\\
x_{2}(n+1)=x_{1}(n)\\\ x_{3}(n+1)=c_{0}x_{1}(n+1)+c_{1}x_{1}(n)\end{cases}$
(5)
or
$\begin{cases}x_{1}(n+1)=\alpha-(c_{0}x_{1}(n)+c_{1}x_{2}(n))^{2}+\beta
x_{2}(n)\\\ x_{2}(n+1)=x_{1}(n).\end{cases}$ (6)
Thus, the resulting system is a 2-dimensional non-linear dynamical system that
has the original Hénon map (1) as a particular case for $c_{0}=1$ and
$c_{1}=0$. In what follows we unravel the dynamical properties of this system
as a function of the filter parameters $c_{0}$ and $c_{1}$, considering
$\alpha=1.4$ and $\beta=0.3$ fixed as in [16].
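To make this concrete, a minimal numerical sketch of (6) follows (the initial condition and iteration count are illustrative choices of ours, not taken from the paper); setting $c_{0}=1$, $c_{1}=0$ reduces it to the original Hénon map (1):

```python
import numpy as np

def filtered_henon_orbit(c0, c1, n_iter, x0=(0.1, 0.1), alpha=1.4, beta=0.3):
    """Iterate the filtered Henon map (6) and return the orbit of x1."""
    x1, x2 = x0
    orbit = np.empty(n_iter)
    for n in range(n_iter):
        # x1(n+1) = alpha - (c0*x1(n) + c1*x2(n))^2 + beta*x2(n); x2(n+1) = x1(n)
        x1, x2 = alpha - (c0 * x1 + c1 * x2) ** 2 + beta * x2, x1
        orbit[n] = x1
    return orbit

# c0 = 1, c1 = 0 recovers the original Henon map (1)
orbit = filtered_henon_orbit(1.0, 0.0, 1000)
```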
## 3 Analysis of the map dynamics
We begin studying the fixed points of (6) and their stability. Then, we
perform a numerical analysis of the periodicity of its orbits and of the
largest Lyapunov exponent of the attractors.
### 3.1 Fixed Points
From (6), we can calculate the fixed points
$\bm{P}_{i}=\left(p_{i},p_{i}\right)$, $i=1,2$ of the filtered Hénon map
solving
$p_{i}=\alpha-\left(c_{0}p_{i}+c_{1}p_{i}\right)^{2}+\beta
p_{i}\Rightarrow(c_{0}+c_{1})^{2}p_{i}^{2}+(1-\beta)p_{i}-\alpha=0.$ (7)
We obtain
$\displaystyle p_{1}$
$\displaystyle=\frac{-(1-\beta)-\sqrt{(1-\beta)^{2}+4\alpha(c_{0}+c_{1})^{2}}}{2(c_{0}+c_{1})^{2}}<0$
(8)
and
$\displaystyle p_{2}$
$\displaystyle=\frac{-(1-\beta)+\sqrt{(1-\beta)^{2}+4\alpha(c_{0}+c_{1})^{2}}}{2(c_{0}+c_{1})^{2}}>0.$
(9)
This way, except for the case $c_{0}+c_{1}=0$, we have two fixed points
$\bm{P}_{1}=\left(p_{1},p_{1}\right)$ and
$\bm{P}_{2}=\left(p_{2},p_{2}\right)$ that depend only on the squared sum of
the filter coefficients $\left(c_{0}+c_{1}\right)^{2}$. For $c_{0}+c_{1}=0$,
equation (7) becomes linear and the system has only one fixed point at
$\left(\frac{\alpha}{1-\beta},\frac{\alpha}{1-\beta}\right)$.
Stabilities of $\bm{P}_{1}$ and $\bm{P}_{2}$ are determined by the largest
absolute value of the eigenvalues of the Jacobian matrix of (6)
$\bm{J}=\begin{bmatrix}-2c_{0}(c_{0}x_{1}+c_{1}x_{2})&-2c_{1}(c_{0}x_{1}+c_{1}x_{2})+\beta\\\
1&0\end{bmatrix},$ (10)
calculated on $\bm{P}_{1}$ and $\bm{P}_{2}$, respectively. They are given by
$\displaystyle\lambda=\max$
$\displaystyle\left\\{\left|-pc_{0}\left(c_{0}+c_{1}\right)+\sqrt{\left(pc_{0}\left(c_{0}+c_{1}\right)\right)^{2}-2pc_{1}\left(c_{0}+c_{1}\right)+\beta}\right|,\right.$
$\displaystyle\left.\left|-pc_{0}\left(c_{0}+c_{1}\right)-\sqrt{\left(pc_{0}\left(c_{0}+c_{1}\right)\right)^{2}-2pc_{1}\left(c_{0}+c_{1}\right)+\beta}\right|\right\\},$
(11)
where $p$ should be replaced by $p_{1}$ from (8) for $\bm{P_{1}}$ or by
$p_{2}$ from (9) for $\bm{P_{2}}$. If $\lambda>1$ the fixed point is unstable,
and if $\lambda<1$ it is stable [1].
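These formulas can be checked numerically; the following sketch (our own, using the example parameters of Figure 1) evaluates the fixed points (8)–(9) and the spectral radius of the Jacobian (10) at $(p,p)$:

```python
import math
import numpy as np

ALPHA, BETA = 1.4, 0.3

def fixed_points(c0, c1):
    """Roots p1 < 0 < p2 of (7), assuming (c0 + c1)^2 != 0."""
    s2 = (c0 + c1) ** 2
    disc = math.sqrt((1 - BETA) ** 2 + 4 * ALPHA * s2)
    return (-(1 - BETA) - disc) / (2 * s2), (-(1 - BETA) + disc) / (2 * s2)

def spectral_radius(c0, c1, p):
    """Largest |eigenvalue| of the Jacobian (10) at the fixed point (p, p)."""
    s = (c0 + c1) * p
    J = np.array([[-2 * c0 * s, -2 * c1 * s + BETA],
                  [1.0, 0.0]])
    return np.max(np.abs(np.linalg.eigvals(J)))

# P2 is stable for c0 = c1 = 0.5 and unstable in the Henon case c0 = 1, c1 = 0
for c0, c1 in [(0.5, 0.5), (1.0, 0.0)]:
    p1, p2 = fixed_points(c0, c1)
    print(c0, c1, spectral_radius(c0, c1, p2) < 1)
```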
For $\bm{P_{1}}$ we numerically found that $\lambda>1$ for all values of
$c_{0}$ and $c_{1}$ so that it is always unstable. When it comes to
$\bm{P_{2}}$, there is a region of stability in the $c_{0}\times c_{1}$ plane,
where $\lambda<1$. This region is shown in black in Figure 1(a). It presents
an odd symmetry with respect to the axis $c_{1}=0$ and is unbounded in the
direction of the bisector of the second and fourth quadrants, $c_{0}+c_{1}=0$.
Typical examples of $x_{1}(n)$ for parameters leading to stable and unstable
$\bm{P_{2}}$ are shown in Figure 1(b) and (c), respectively. In case (c), we
considered $c_{0}=1$ and $c_{1}=0$, which leads to the original chaotic Hénon
map [16].
Figure 1: (a) Stability region of $\bm{P}_{2}$ is shown in black in the
$c_{0}\times c_{1}$ plane; examples of $x_{1}(n)$ for (b) $c_{0}=c_{1}=0.5$
(yellow cross in (a)) leading to a stable $\bm{P}_{2}$ and (c) $c_{0}=1$,
$c_{1}=0$ (red cross in (a)) leading to an unstable $\bm{P}_{2}$ of the
original Hénon map, i.e, without filter.
### 3.2 Largest Lyapunov Exponent
The $k$-th Lyapunov exponent for an orbit with initial condition $\bm{x}(0)$
is defined as
$h_{k}\left(\bm{x}_{0}\right)=\lim_{n\rightarrow\infty}\frac{1}{n}\ln\left(\left\|\bm{J}^{n}\left(\bm{x}(0)\right)\bm{u}_{\bm{k}}\right\|\right),$
(12)
where $\bm{J}^{n}$ is the Jacobian matrix of the $n$-time iterated map
evaluated at point $\bm{x}(0)$ and $\bm{u}_{\bm{k}}$ is the eigenvector
corresponding to the $k$-th largest eigenvalue of this Jacobian matrix. For
the sake of notational simplicity we define $h\triangleq h_{1}$.
In order to verify the presence of chaotic orbits generated by (5), we
numerically analyze the largest Lyapunov exponent $h$ of its orbits. Chaotic
behavior can be identified in a bounded aperiodic orbit when $h>0$ [1].
Numerical estimators of (12) can be obtained by a variety of techniques. Here,
we consider the tangent map method [1].
For the numerical evaluations of $h$, we considered random initial conditions
uniformly distributed in the unit square and 3000 iterations, excluding the
first 500 iterations. For each estimate, we took 25 different initial
conditions, and present the mean value. Our numerical experiments have shown
that these numbers of initial conditions and iterations are sufficient to
determine $h$ of the attractor with an accuracy better than $10^{-2}$.
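A minimal sketch of such a tangent-map estimator follows (the divergence guard and the random seeding are our own illustrative choices): the Jacobian (10) propagates a unit tangent vector along the orbit, and $h$ is the average of the logarithmic stretching factors after the transient.

```python
import numpy as np

def largest_lyapunov(c0, c1, n_iter=3000, n_trans=500,
                     alpha=1.4, beta=0.3, rng=None):
    """Largest Lyapunov exponent of (6) via the tangent map method."""
    rng = np.random.default_rng(rng)
    x1, x2 = rng.random(2)            # random i.c. in the unit square
    v = np.array([1.0, 0.0])          # tangent vector
    acc, count = 0.0, 0
    for n in range(n_iter):
        s = c0 * x1 + c1 * x2
        J = np.array([[-2 * c0 * s, -2 * c1 * s + beta],
                      [1.0, 0.0]])    # Jacobian (10) at the current point
        x1, x2 = alpha - s * s + beta * x2, x1
        if not np.isfinite(x1) or abs(x1) > 1e6:
            return np.inf             # divergent orbit
        v = J @ v
        norm = np.linalg.norm(v)
        v = v / norm                  # renormalize every step
        if n >= n_trans:              # discard the transient
            acc += np.log(norm)
            count += 1
    return acc / count
```

For instance, `largest_lyapunov(1.0, 0.0)` returns a positive value close to the known exponent of the Hénon attractor, while `largest_lyapunov(0.5, 0.5)` is negative, consistent with the stable fixed point of Figure 1(b).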
Figure 2 shows a general picture of how $h$ varies in the plane $c_{0}\times
c_{1}$ inside the square $[-1.5,1.5]\times[-1.5,1.5]$.
Figure 2: Largest Lyapunov exponent $h$ for the map (6) as a function of
$c_{0}$ and $c_{1}$. White regions represent parameters that generate
divergent orbits. The region marked with a black rectangle is further analyzed
in Figure 5.
We can clearly see a central region where $h<0$, shown in different shades of
blue. As expected, this region encompasses the black one in Figure 1(a) where
there is a stable fixed point. This central region contains orbits that
converge towards $\bm{P_{2}}$ and period-2 orbits, as can be seen in the
bifurcation diagrams of Figure 3. They were plotted as a function of $c_{1}$
for constant values of $c_{0}$, indicated by the dashed lines in Figure 2. The
strategy of following the attractor was applied, using as initial condition
for each value of $c_{1}$ a point of the attractor obtained for the previous
value of $c_{1}$.
Figure 3: Bifurcation diagram as a function of $c_{1}$ for (a) $c_{0}=0.94$,
(b) $c_{0}=0.70$, (c) $c_{0}=0.50$ and (d) $c_{0}=0$. These values of $c_{0}$
are marked by dashed lines in Figure 2.
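The attractor-following strategy can be sketched as follows (the sweep range and sample counts below are our own illustrative choices; real diagrams such as Figure 3 use much denser grids):

```python
import numpy as np

def bifurcation_follow(c0, c1_values, n_trans=500, n_keep=100,
                       alpha=1.4, beta=0.3, x0=(0.1, 0.1)):
    """For each c1, discard n_trans transient steps and record n_keep values
    of x1; the final state for one c1 seeds the orbit for the next c1."""
    x1, x2 = x0
    data = []
    for c1 in c1_values:
        for k in range(n_trans + n_keep):
            x1, x2 = alpha - (c0 * x1 + c1 * x2) ** 2 + beta * x2, x1
            if k >= n_trans:
                data.append((c1, x1))
    return np.array(data)

# Sweep along the line c0 = 0 (cf. Figure 3(d)) inside the stable region
pts = bifurcation_follow(0.0, np.linspace(0.3, 0.6, 7))
```

Plotting `pts[:, 1]` against `pts[:, 0]` reproduces a bifurcation diagram of the kind shown in Figure 3.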
Surrounding the blue central region in Figure 2, there are “chaotic regions”
where $h>0$ predominates (in shades of yellow and red), interspersed with
islands where $h<0$ but with higher periodicity, as the bifurcation diagrams
of Figure 3 show.
Increasing $c_{0}$ and $c_{1}$ even further results in orbits that diverge
towards infinity, except in the vicinity of the bisector of the second and
fourth quadrants, $c_{0}+c_{1}=0$. These divergence regions are shown in white
in Figure 2 and appear as the regions without attractor points in the
bifurcation diagrams of Figure 3.
In Figure 3(a), one can see that for $c_{0}=0.94$, the transitions from the
blue region to the “chaotic regions” sometimes present period-doubling
cascades and sometimes are abrupt, via crisis. Inside the chaotic regions
there are windows of periodicity associated with the blue islands in their
inside. The transitions from chaos to divergence are abrupt in both ends of
the diagram.
Figure 3(b) shows a large region with a period-2 stable orbit inside the blue
region of Figure 2 for $c_{0}=0.7$. The transition to chaos is abrupt in the
end of the fixed point $\bm{P_{2}}$ stability region.
A similar pattern is presented in Figure 3(c) for $c_{0}=0.5$, with chaos or
quasi-periodicity appearing and disappearing abruptly as $c_{1}$ changes.
As an example of this behavior, Figure 4(a) presents a zoom for
$c_{1}\in[0.68;0.72]$ (region marked with a red rectangle in Figure 3(c)).
Figures 4(b) and (c) show examples of an aperiodic orbit for $c_{1}=0.707$ and
a period-3 orbit for $c_{1}=0.708$, respectively. These values of $c_{1}$ are
marked by red dashed lines in Figure 4(a).
Figure 4: (a) Zoom of the region indicated by the red rectangle in Figure
3(c); example of orbits for (b) $c_{1}=0.707$ and initial condition
$\bm{x}(0)=[0.8,0.8]$; (c) $c_{1}=0.708$. These values of $c_{1}$ are marked
by red dashed lines in (a).
It is relevant to note that in the region of Figure 4 where there are
aperiodic orbits, we observed the presence of two coexistent attractors with
large basins of attraction. For instance, in $c_{1}=0.707$ the aperiodic orbit
with $h=0.00052\pm 0.00044$ is obtained for
$\left\\{x_{1}(0),x_{2}(0)\right\\}\subset[0.7,0.9]$ and a period-3 orbit with
$h=-0.4764\pm 0.0020$ is obtained for
$\left\\{x_{1}(0),x_{2}(0)\right\\}\subset[0,0.7]$.
In Figure 3(d), the diagram was obtained for $c_{0}=0$. In this case, the part
of the cut in the $c_{0}\times c_{1}$ plane that crosses the blue region
in Figure 2 is completely contained in the stability region of $\bm{P_{2}}$
(black region in Figure 1). Hence there are no period-2 orbits, and the fixed
point $\bm{P_{2}}$ persists along this whole part. When $c_{1}$ leaves the
blue central region of Figure 2, we have a period-doubling cascade transition
to chaos.
In Figure 5 we explore in more detail the region $0.74\leq c_{0}\leq 1.14$,
$0.68\leq c_{1}\leq 0.83$ marked with a black rectangle in Figure 2.
One can clearly notice fractal-like patterns of periodic islands, popularly
known as _shrimps_ [20, 21, 22], surrounded by chaotic regions. Each shrimp
presents orbits with a different periodicity, as can be seen in the
enlargements shown in Figure 6.
Figure 5: Zoom of the region $0.74\leq c_{0}\leq 1.14$, $0.68\leq c_{1}\leq
0.83$ of Figure 2. The regions within the indicated rectangles are analyzed in
more detail in Figures 6 (largest and medium rectangles) and 7 (smallest
rectangle). Figure 6: Zooms of the (a) larger and (b) medium rectangles in
Figure 5 highlighting the presence of different periodic shrimps. Different
colors represent different periods, as shown in the color bar. White regions
represent chaos or divergence.
The fractal nature of the distribution of shrimps and their different periods
are highlighted in the zoom of the smallest rectangle in Figure 5 presented in
Figure 7(a). In Figure 7(b) a bifurcation diagram for $c_{0}=0.932$ (indicated
by the dashed line in Figure 7(a)) is shown. The presence of multiple periodic
windows of high period and a cascade of windows is clearly visible. The
chaotic structure appears immediately when crossing the boundaries of the
periodic shrimps, without a period-doubling or other classical bifurcation
scenario.
Figure 7: (a) Zoom of the smallest rectangle in Figure 5. Periods of the
orbits inside some shrimps are indicated; (b) Bifurcation diagram for
$c_{0}=0.932$ (indicated by a dashed line in (a)).
The presence of shrimps, especially their thin antennae, in the parameter
region in which chaotic orbits are generated is a challenge from the point of
view of practical applications of chaotic signals. Small perturbations of the
chosen coefficient values can cause the chaotic behavior to be replaced by
stable periodic behavior, losing the DSCI that is critical in applications
involving secure CBCS [23, 24, 25]. This issue seems to have gone unnoticed in
many previous papers employing map-generated chaos in communications
[26, 13, 14], and analyses like the ones presented here need to be taken into
account when choosing the map and the filters used to generate bandlimited
chaotic signals.
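The filtered map itself is specified earlier in the paper, so the following is only a minimal, hypothetical sketch of the numerical technique behind diagrams such as Figure 7(b): estimating the largest Lyapunov exponent by the standard two-trajectory (Benettin-type) method, shown here for the plain Hénon map with the classical parameters. Substituting the filtered iteration for `henon` would produce a single point of the coefficient scans.

```python
import numpy as np

def henon(x, y, a=1.4, b=0.3):
    # classical Hénon map; in the filtered system studied in the paper the
    # quadratic term would act on the filter output instead of x itself
    return 1.0 - a * x * x + y, b * x

def largest_lyapunov(steps=20000, d0=1e-9, transient=1000):
    """Benettin-style estimate: track a reference and a perturbed orbit,
    renormalize their separation to d0 every step, and average the
    logarithmic growth rates."""
    x, y = 0.1, 0.1
    for _ in range(transient):           # settle onto the attractor
        x, y = henon(x, y)
    xp, yp = x + d0, y
    acc = 0.0
    for _ in range(steps):
        x, y = henon(x, y)
        xp, yp = henon(xp, yp)
        d = np.hypot(xp - x, yp - y)
        acc += np.log(d / d0)
        # rescale the perturbed orbit back to distance d0
        xp = x + d0 * (xp - x) / d
        yp = y + d0 * (yp - y) / d
    return acc / steps

print(largest_lyapunov())  # roughly 0.42 for the classical parameters
```

A positive estimate indicates chaos; in a coefficient scan, the sign change of this quantity is precisely what distinguishes the white (chaotic) regions from the periodic shrimps.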
## 4 Conclusions
In this paper we present an analysis of the Hénon map with a linear
non-recursive filter with two coefficients in the feedback loop. The use of
such filters is a way of generating bandlimited chaotic signals for use in CBCS.
Our numerical simulations, by means of the largest Lyapunov exponent, show
that the filter coefficients change the chaotic properties of the orbits in a
complex way, including the appearance of shrimps. The presence of these
structures shows that small changes in the filter coefficients, especially in
the parameter space near their thin antennae, can completely change the
dynamical properties of the generated orbits. They can become chaotic or
periodic due to, for example, the quantization errors expected in any
implementation. Thus, bandlimited CBCS must be carefully designed to guarantee
that the generated signals remain chaotic.
A natural next step is to analyze what happens if filters with more
coefficients or with a recurrent structure are employed, as is usual in
communication systems. One possibility is to study how the order and cut-off
frequency of low-pass filters obtained by different design techniques
influence the Lyapunov exponent.
## Acknowledgments
The authors thank Prof. Antonio M. Batista for interesting discussions
throughout the writing of this paper.
This study was financed in part by CNPq-Brazil (grants 140081/2022-4 and
311039/2019-7) and the CAPES-Brazil (Finance Code 001).
## References
* [1] K. T. Alligood, T. D. Sauer, J. A. Yorke, Chaos, Textbooks in Mathematical Sciences, Springer New York, 2000.
* [2] L. M. Pecora, T. L. Carroll, Synchronization in chaotic systems, Physical Review Letters 64 (8) (1990) 821–824.
* [3] C. E. C. Souza, D. P. B. Chaves, C. Pimentel, Digital communication systems based on three-dimensional chaotic attractors, IEEE Access 7 (2019) 10523–10532.
* [4] M. S. Baptista, Chaos for communication, Nonlinear Dynamics 105 (2) (2021) 1821–1841.
* [5] N. A. Loan, N. N. Hurrah, S. A. Parah, J. W. Lee, J. A. Sheikh, G. M. Bhat, Secure and robust digital image watermarking using coefficient differencing and chaotic encryption, IEEE Access 6 (2018) 19876–19897.
* [6] D. Rontani, D. Choi, C. Y. Chang, A. Locquet, D. S. Citrin, Compressive sensing with optical chaos, Scientific Reports 6 (2016).
* [7] X. Li, C. Li, I.-K. Lee, Chaotic image encryption using pseudo-random masks and pixel mapping, Signal Processing 125 (2016) 48–63.
* [8] N. Zhou, S. Pan, S. Cheng, Z. Zhou, Image compression-encryption scheme based on hyper-chaotic system and 2d compressive sensing, Optics & Laser Technology 82 (2016) 121 – 133.
* [9] A. Dmitriev, A. Kletsov, A. Laktyushkin, A. Panas, S. Starkov, Ultrawideband wireless communications based on dynamic chaos, Journal of Communications Technology and Electronics 51 (10) (2006) 1126–1140.
* [10] Y. Peng, K. Sun, S. He, A discrete memristor model and its application in hénon map, Chaos, Solitons & Fractals 137 (2020).
* [11] Z. Liu, L. Zhang, Z. Wu, J. Bian, A secure and robust frequency and time diversity aided OFDM–DCSK modulation system not requiring channel state information, IEEE Transactions on Communications (2020).
* [12] B. P. Lathi, Z. Ding, Modern Digital and Analog Communication Systems (The Oxford Series in Electrical and Computer Engineering), The Oxford Series in Electrical and Computer Engineering, Oxford University Press, 2009.
* [13] M. Eisencraft, R. D. Fanganiello, L. A. Baccala, Synchronization of discrete-time chaotic systems in bandlimited channels, Mathematical Problems in Engineering 2009 (2009) 1–12.
* [14] R. T. Fontes, M. Eisencraft, A digital bandlimited chaos-based communication system, Communications in Nonlinear Science and Numerical Simulation 37 (2016) 374–385.
* [15] M. Eisencraft, R. D. Fanganiello, L. H. A. Monteiro, Chaotic synchronization in discrete-time systems connected by bandlimited channels, IEEE Communications Letters 15 (6) (2011) 671–673.
* [16] M. Hénon, A two-dimensional mapping with a strange attractor, in: The Theory of Chaotic Attractors, Springer, 1976, pp. 94–102.
* [17] A. V. Oppenheim, R. W. Schafer, Discrete-Time Signal Processing, 3rd Edition, Pearson, Upper Saddle River, NJ, USA, 2009.
* [18] M. Lellep, J. Prexl, M. Linkmann, B. Eckhardt, Using machine learning to predict extreme events in the hénon map, Chaos: An Interdisciplinary Journal of Nonlinear Science 30 (1) (2020) 013113.
* [19] H.-X. Zhao, S.-C. Xie, J.-Z. Zhang, T. Wu, Efficient image encryption using two-dimensional enhanced hyperchaotic henon map, Journal of Electronic Imaging 29 (2) (2020) 023007.
* [20] S. de Souza, A. A. Lima, I. L. Caldas, R. Medrano-T, Z. d. O. Guimarães-Filho, Self-similarities of periodic structures for a discrete model of a two-gene system, Physics Letters A 376 (15) (2012) 1290–1294.
* [21] J. A. Gallas, Structure of the parameter space of the Hénon map, Physical Review Letters 70 (18) (1993) 2714.
* [22] V. Dos Santos, J. D. Szezech Jr, M. S. Baptista, A. M. Batista, I. L. Caldas, Unstable dimension variability structure in the parameter space of coupled Hénon maps, Applied Mathematics and Computation 286 (2016) 23–28.
* [23] W. Shao, Y. Fu, M. Cheng, L. Deng, D. Liu, Chaos synchronization based on hybrid entropy sources and applications to secure communication, IEEE Photonics Technology Letters 33 (18) (2021) 1038–1041.
* [24] B. Vaseghi, S. S. Hashemi, S. Mobayen, A. Fekih, Finite time chaos synchronization in time-delay channel and its application to satellite image encryption in ofdm communication systems, IEEE Access 9 (2021).
* [25] C.-C. Wang, J.-P. Su, A new adaptive variable structure control for chaotic synchronization and secure communication, Chaos, Solitons & Fractals 20 (5) (2004) 967–977.
* [26] M. Eisencraft, R. D. Fanganiello, J. M. V. Grzybowski, D. C. Soriano, R. F. Attux, A. M. Batista, E. E. N. Macau, L. H. A. Monteiro, J. M. T. Romano, R. Suyama, T. Yoneyama, Chaos-based communication systems in non-ideal channels, Communications in Nonlinear Science and Numerical Simulation 17 (12) (2012) 4707 – 4718.
# Properties of uniformly $3$-connected graphs
Frank Göring and Tobias Hofmann
Chemnitz University of Technology, Germany
(2022-11-30)
This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) – Project-ID 416228727 – SFB 1410.
###### Abstract
A graph on at least ${{k+1}}$ vertices is uniformly $k$-connected if each pair
of its vertices is connected by $k$ and not more than $k$ independent paths.
We reinvestigate a recent constructive characterization of uniformly
$3$-connected graphs and obtain a more detailed result that relates the number
of vertices to the operations involved in constructing a respective uniformly
$3$-connected graph. Furthermore, we investigate how crossing numbers and
treewidths behave under the mentioned constructions. We demonstrate how these
results can be utilized to study the structure and properties of uniformly
$3$-connected graphs with minimum number of vertices of minimum degree.
###### keywords:
uniform connectivity, graph constructions, crossing number, treewidth,
vertices of minimum degree
## 1 Introduction
Among the many connectivity concepts in graph theory, requiring the same
connectivity between each pair of a graph’s vertices may seem to be quite
restrictive. Yet it might be a valuable feature of certain communication or
supply networks and, from a theoretical point of view, uniform connectivity
nicely complements the notions of ordinary, minimal, or average connectivity.
When studying the latter, Beineke, Oellermann, and Pippert (3) introduced
uniformly connected graphs as they became interested in determining for which
graphs the connectivity equals the average connectivity. Let us begin by
recalling the following definition; for basic graph-theoretic terminology we
refer to Diestel (9).
###### Definition 1
For a number $k\in\mathbb{N}$ a graph on at least ${{k+1}}$ vertices is called
_uniformly $k$-connected_ if each pair of its vertices is connected by $k$ and
not more than $k$ independent paths.
It is not hard to see that uniformly $1$-connected graphs are exactly all
trees and uniformly $2$-connected graphs are exactly all cycles. Further
examples are wheel graphs for ${{k=3}}$ or $k$-regular, $k$-connected graphs
for ${{k\in\mathbb{N}}}$. In more detail, such relations as well as uniformly
edge-connected graphs, in which each pair of vertices is connected by $k$ and
not more than $k$ edge-disjoint paths, are discussed by Göring, Hofmann, and
Streicher (10). This article also contains the following characterization.
###### Theorem 2
A graph is uniformly $3$-connected if and only if it is contained in the
following recursively defined class $\mathcal{C}$.
Figure 1: Constructing uniformly $3$-connected graphs
1. (i)
If a graph $G$ is $3$-regular and $3$-connected, then $G$ shall be contained
in $\mathcal{C}$.
2. (ii)
For two graphs ${{G_{1},G_{2}\in\mathcal{C}}}$ with vertices ${{v_{1}\in
V(G_{1})}}$ and ${{v_{2}\in V(G_{2})}}$ whose neighborhoods are
${{N(v_{1})=\\{x_{1},y_{1},z_{1}\\}}}$ and
${{N(v_{2})=\\{x_{2},y_{2},z_{2}\\}}}$, we include in $\mathcal{C}$ the graph
$(G_{1}-v_{1})\cup(G_{2}-v_{2})+x_{1}x_{2}+y_{1}y_{2}+z_{1}z_{2}.$
3. (iii)
For a graph ${{G\in\mathcal{C}}}$ with distinct vertices ${{v,w,x\in V(G)}}$,
containing ${{vw\in E(G)}}$, and satisfying ${{\deg(z)=3}}$ for all ${{z\in
V(G)\setminus\\{x\\}}}$, we include in $\mathcal{C}$ the graph
$G+y-vw+vy+wy+xy,$
where ${{y\notin V(G)}}$ is a new vertex to be added to $G$.
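As an illustrative sketch (function and variable names are our own; the networkx library is assumed), operation (ii) can be implemented and checked on two copies of $K_{4}$:

```python
import networkx as nx

def bridge(G1, v1, G2, v2):
    """Operation (ii): remove the degree-3 vertices v1 and v2 and join
    their former neighborhoods by a perfect matching."""
    assert G1.degree(v1) == 3 and G2.degree(v2) == 3
    n1, n2 = list(G1.neighbors(v1)), list(G2.neighbors(v2))
    # take disjoint copies of the two input graphs
    H1 = nx.relabel_nodes(G1, {u: ("a", u) for u in G1})
    H2 = nx.relabel_nodes(G2, {u: ("b", u) for u in G2})
    G = nx.union(H1, H2)
    G.remove_nodes_from([("a", v1), ("b", v2)])
    for x, y in zip(n1, n2):    # any pairing of the two neighborhoods works
        G.add_edge(("a", x), ("b", y))
    return G

# bridging two copies of K4 yields the triangular prism, which is
# 3-regular and 3-connected, hence uniformly 3-connected
G = bridge(nx.complete_graph(4), 0, nx.complete_graph(4), 0)
print(G.number_of_nodes(), nx.node_connectivity(G))  # 6 3
```

The resulting prism is 3-regular, so it falls back into the base class (i); larger examples arise when the inputs already contain vertices of higher degree.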
The operations (ii) and (iii) are illustrated in Figure 1. We refer to (ii) as
a _bridge operation_ and to (iii) as a _spoke operation_. More precisely, if
${{\deg(x)=3}}$ in (iii), we call it a _primary spoke operation_ and if
${{\deg(x)>3}}$, we call it a _secondary spoke operation_. Note that the class
of $3$-regular $3$-connected graphs is contained in the class of uniformly
$3$-connected graphs. In turn, the class of uniformly $3$-connected graphs is
contained in the class of $3$-connected graphs. So Theorem 2 is in a sense
complementary to the classical constructions by Tutte (15; 16) for $3$-regular
$3$-connected and $3$-connected graphs. A natural question to ask when
learning about a class of graphs is what degrees one might see. In extremal
graph theory, this led to extensive research on the minimum number of vertices
of minimum degree. Formally, for a graph $G$ one asks for the parameter
$\nu(G)\coloneqq\big{|}\big{\\{}v\in V(G):\deg(v)=\,\min_{\mathclap{v\in
V(G)}}\;\deg(v)\big{\\}}\big{|}.$
A cornerstone on which many related investigations build is the result by
Halin (11), who proved that a minimally $k$-connected graph contains a vertex
of degree $k$. A series of results on that topic is concluded by Mader (13),
who gave the tight bound ${{\nu(G)\geq\lceil((k-1)\mkern 1.0mun+2\mkern
1.0muk)/(2\mkern 1.0muk-1)\rceil}}$ for a minimally $k$-connected graph $G$ on
$n$ vertices. This result also holds for uniformly $3$-connected graphs, as
those are minimally $3$-connected. See Beineke, Oellermann, and Pippert (3)
for a proof of that result. But as minimally $k$-connected graphs do not have
to be uniformly $k$-connected, there can be stronger bounds on $\nu(G)$, and
indeed there is the following result.
###### Theorem 3
A uniformly $3$-connected graph $G$ on $n$ vertices satisfies
$\nu(G)\geq\lceil(2n+2)/3\rceil.$
This result is proven in (10). We call a uniformly $3$-connected graph
_extremal_ if it attains the bound from Theorem 3. The results of Section 2
shall help us to learn more about that class. There we show in detail how the
number of vertices of a uniformly $3$-connected graph depends on the
operations involved in constructing it. Furthermore, we show that the bridge
operation preserves in a sense crossing numbers and under certain conditions
tree widths larger than two. Section 3 is intended to demonstrate how these
results can be used, for example, to find out when extremal uniformly
$3$-connected graphs are planar.
## 2 Main results
In what follows, we build on one of the characterizations by Tutte (16,
Chapter 12), which says that all $3$-regular $3$-connected graphs can be
obtained from a complete graph on four vertices by a sequence of _edge joins_.
Formally, for a graph $G$ and two edges ${{st,vw\in E(G)}}$ _joining_ them
means to build the graph
$G+x+y-st-vw+sx+xt+vy+yw+xy$
where ${{x,y\notin V(G)}}$ are new vertices to be added to $G$. This
construction is illustrated in Figure 2. Note also that $st$ and $vw$ are two
distinct edges, but they may share one endvertex.
Figure 2: Joining two edges of a $3$-regular graph
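The edge join admits an equally short sketch (again assuming networkx; names are illustrative). Joining two disjoint edges of $K_{4}$ yields a $3$-regular $3$-connected graph on six vertices, in fact $K_{3,3}$:

```python
import networkx as nx

def edge_join(G, st, vw):
    """G + x + y - st - vw + sx + xt + vy + yw + xy for distinct edges st, vw."""
    (s, t), (v, w) = st, vw
    x, y = max(G) + 1, max(G) + 2        # two fresh vertex labels
    H = G.copy()
    H.remove_edges_from([st, vw])
    H.add_edges_from([(s, x), (x, t), (v, y), (y, w), (x, y)])
    return H

# joining the disjoint edges 01 and 23 of K4
H = edge_join(nx.complete_graph(4), (0, 1), (2, 3))
print(sorted(d for _, d in H.degree()), nx.node_connectivity(H))
```

Iterating this operation from $K_{4}$ enumerates exactly the base class (i) of Theorem 2, by Tutte's characterization quoted above.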
###### Theorem 4
A uniformly $3$-connected graph $G$ on $n$ vertices satisfies
$n=4+2\mkern 1.0muj+2\mkern 1.0mut+p+s$
if $G$ is constructed from complete graphs on four vertices by a sequence of
$j$ bridge operations, $t$ edge joins, $p$ primary spoke operations and $s$
secondary spoke operations.
Proof: The smallest uniformly $3$-connected graph is the complete graph on
four vertices, for which ${{j=t=p=s=0}}$ and our claim holds. Now suppose we
are given a graph $G$ on $n$ vertices and that our statement is true for all
graphs on fewer than $n$ vertices.
First, take the case where an edge join is the final operation in the sequence
of operations to build $G$. Then $G$ arises from a graph $G^{\prime}$ with
${{n=|V(G)|=|V(G^{\prime})|+2}}$, as an edge join adds two vertices. Denoting
the number of edge joins to build $G^{\prime}$ by $t^{\prime}$, we have
${{t=t^{\prime}+1}}$. By induction, we obtain
$\displaystyle n$ $\displaystyle=|V(G)|=|V(G^{\prime})|+2$
$\displaystyle=4+2\mkern 1.0muj+2\mkern 1.0mut^{\prime}+2+p+s$
$\displaystyle=4+2\mkern 1.0muj+2\mkern 1.0mut+p+s.$
Primary or secondary spoke operations add one vertex, as is illustrated in
Figure 1. If such an operation is the final operation to build $G$, we can
argue as in the previous case. It remains to consider the case where a bridge
operation is the final operation to build $G$. Then $G$ arises from two graphs $G_{1}$ and
$G_{2}$. In view of Figure 1, we have ${{n=|V(G)|=|V(G_{1})|+|V(G_{2})|-2}}$,
as well as ${{j=j_{1}+j_{2}+1}}$, ${{t=t_{1}+t_{2}}}$, ${{p=p_{1}+p_{2}}}$,
and ${{s=s_{1}+s_{2}}}$, where $j_{i},t_{i},p_{i},s_{i}$ are the respective
numbers of bridge operations, edge joins, primary and secondary spoke
operations used when constructing $G_{i}$, where ${{i\in\\{1,2\\}}}$. By
induction, we obtain
$\displaystyle n$ $\displaystyle=|V(G)|=|V(G_{1})|+|V(G_{2})|-2$
$\displaystyle=4+2\mkern 1.0muj_{1}+2\mkern 1.0mut_{1}+p_{1}+s_{1}+4+2\mkern
1.0muj_{2}+2\mkern 1.0mut_{2}+p_{2}+s_{2}-2$
$\displaystyle=4+2\mkern 1.0mu(j_{1}+j_{2}+1)+2\mkern
1.0mu(t_{1}+t_{2})+(p_{1}+p_{2})+(s_{1}+s_{2})$
$\displaystyle=4+2\mkern 1.0muj+2\mkern 1.0mut+p+s.\qquad\square$
This allows us to reprove Theorem 3 as well as to obtain some additional
conditions on the numbers of operations involved.
###### Proof 2.5 (of Theorem 3).
For a uniformly $3$-connected graph $G$ on $n$ vertices, Theorem 4 tells us
that
$n=4+2\mkern 1.0muj+2\mkern 1.0mut+p+s.$ (1)
Let us recall that a primary spoke operation, by definition, can only be
applied to $3$-regular graphs, and it raises one of the respective degrees to
four. A graph whose construction involves $j$ bridge operations is formed by
recursively combining ${{j+1}}$ input graphs. For each input graph, one is
allowed to use at most one primary spoke operation. In other words,
$j+1\geq p\;\Rightarrow\;2\mkern 1.0muj\geq 2\mkern 1.0mup-2.$ (2)
Combining Equations (1) and (2), we obtain
$n\geq 2+2\mkern 1.0mut+3\mkern 1.0mup+s\geq 2+3\mkern
1.0mup\;\Rightarrow\;p\leq\lfloor(n-2)/3\rfloor.$ (3)
The primary spoke operation is the only operation that reduces the number of
vertices of minimum degree. It does so by exactly one. Consequently,
$\nu(G)\geq n-p\geq\lceil(2\mkern 1.0mun+2)/3\rceil,$ (4)
which was to be shown.
Another property we shall verify in this section is that the bridge operation
preserves the crossing numbers of the input graphs. In our proof, we build on
the following basic fact about graph embeddings, presented by West (17,
Chapter 6).
###### Lemma 2.6.
If $E$ is the edge set of a face of some planar embedding of a graph $G$, then
there is an embedding of $G$ such that $E$ is the edge set of the outer face.
###### Theorem 2.7.
Let $G$ be the result of applying the bridge operation to graphs $G_{1}$ and
$G_{2}$. Then
$\operatorname{cro}(G)\leq\operatorname{cro}(G_{1})+\operatorname{cro}(G_{2}).$
###### Proof 2.8.
We are given two graphs ${{G_{1},G_{2}}}$ with vertices ${{v_{1}\in
V(G_{1})}}$ and ${{v_{2}\in V(G_{2})}}$ whose neighborhoods are
${{N(v_{1})=\\{x_{1},y_{1},z_{1}\\}}}$ and
${{N(v_{2})=\\{x_{2},y_{2},z_{2}\\}}}$ and a graph
$G\coloneqq(G_{1}-v_{1})\cup(G_{2}-v_{2})+x_{1}x_{2}+y_{1}y_{2}+z_{1}z_{2}.$
At first, let us consider some drawing of $G_{1}$ in the plane, possibly with
crossings. We obtain a _planarization_ $P$ of this drawing by replacing each
occurring crossing by a new vertex. In this process, we may have to subdivide
some of the edges in
$\\{x_{1}v_{1},y_{1}v_{1},z_{1}v_{1},x_{2}v_{2},y_{2}v_{2},z_{2}v_{2}\\}$. The
vertex on the former edge $x_{1}v_{1}$ that is closest to $v_{1}$, excluding
$v_{1}$ itself but possibly being $x_{1}$, shall be denoted by $x_{1}^{\prime}$. Analogously,
we define
$y_{1}^{\prime},z_{1}^{\prime},x_{2}^{\prime},y_{2}^{\prime},z_{2}^{\prime}$.
Since ${{\deg(v_{1})=3}}$, we know that two of the three edges
$x_{1}^{\prime}v_{1},y_{1}^{\prime}v_{1},z_{1}^{\prime}v_{1}$, say
$x_{1}^{\prime}v_{1}$ and $y_{1}^{\prime}v_{1}$, are both contained in the
edge set of some face of $P$. Lemma 2.6 tells us that there is an embedding of
$P$ such that $\\{x_{1}^{\prime}v_{1},y_{1}^{\prime}v_{1}\\}$ is contained in
the edge set of the outer face. Turning the vertices we introduced when
planarizing the drawing of $G_{1}$ back into crossings, we obtain a drawing of
$G_{1}$ where parts of both edges $x_{1}v_{1}$ and $y_{1}v_{1}$ are incident
to the outer face.
Even more, since we can reflect the embedding of $G_{1}$ across a line through
$v_{1}$, it is possible to choose the orientation of
$\\{x_{1}v_{1},y_{1}v_{1}\\}$. Likewise, we can take a drawing of $G_{2}$
where parts of $x_{2}v_{2}$ and $y_{2}v_{2}$ are incident to the outer face.
In other words, our situation is essentially as in Figure 3.
Figure 3: The bridge operation acting on graphs embedded in the plane
Since we embedded finite graphs in the plane, we can find radii
${{\varepsilon,\delta>0}}$ such that the discs $U_{\varepsilon}(v_{1})$ and
$U_{\delta}(v_{2})$ do not contain
$x_{1}^{\prime},y_{1}^{\prime},z_{1}^{\prime},x_{2}^{\prime},y_{2}^{\prime}$,
or $z_{2}^{\prime}$. We denote the intersection of the edge $x_{1}v_{1}$ with
the disc $U_{\varepsilon}(v_{1})$ by $x_{1}^{\prime\prime}$ and the
intersection of the edge $x_{2}v_{2}$ with the disc $U_{\delta}(v_{2})$ by
$x_{2}^{\prime\prime}$. This provides us with a polygonal arc, leading from
$x_{1}$ to $x_{1}^{\prime\prime}$ to $x_{2}^{\prime\prime}$ to $x_{2}$. There
are analogous polygonal arcs linking $y_{1}$ with $y_{2}$ and $z_{1}$ with
$z_{2}$. Those polygonal arcs can be drawn without intersections when choosing
the orientation of the embeddings of $G_{1}$ or $G_{2}$ as in Figure 3. This
tells us that we can build $G$ out of $G_{1}$ and $G_{2}$ by the bridge
operation without adding any additional crossings. So
${{\operatorname{cro}(G)\leq\operatorname{cro}(G_{1})+\operatorname{cro}(G_{2})}}$.
Finally, we will ask how the bridge operation affects the treewidths of the
input graphs. So let us recall the following terms.
###### Definition 2.9.
A _tree decomposition_ of a graph $G$ is a pair ${{(\\{X_{i}:i\in
I\\},T=(I,F))}}$ where $T$ is a tree and each _node_ $i\in I$ has a _bag_
${{X_{i}\subseteq V(G)}}$ such that the following properties hold.
1. 1.
Each vertex of $V(G)$ belongs to some bag, i.e., ${{\cup_{i\in I}X_{i}=V(G)}}$.
2. 2.
For all ${{vw\in E(G)}}$ there exists an $i\in I$ such that ${{v,w\in
X_{i}}}$.
3. 3.
For all ${{v\in V(G)}}$ the set of nodes ${{\\{i\in I:v\in X_{i}\\}}}$ induces a
subtree of $T$.
The _width_ of a tree decomposition ${{(\\{X_{i}:i\in I\\},T=(I,F))}}$ is
$\max_{i\in I}|X_{i}|-1$ and the _treewidth_ of a graph $G$ is the minimum
width of all tree decompositions of $G$. We shall denote the latter by
$\operatorname{tw}(G)$.
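The three conditions above can be checked mechanically. The following sketch (the checker is our own; the networkx library and its min-degree treewidth heuristic, which returns an upper bound on the treewidth together with a decomposition whose tree nodes are the bags, are assumed) validates such a decomposition for the Petersen graph:

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

def is_tree_decomposition(G, T):
    """Check Conditions 1-3 of Definition 2.9; the nodes of T are the bags."""
    bags = list(T.nodes)
    if set().union(*bags) != set(G.nodes):                     # Condition 1
        return False
    if not all(any(v in B and w in B for B in bags)
               for v, w in G.edges):                           # Condition 2
        return False
    return all(nx.is_connected(T.subgraph([B for B in bags if v in B]))
               for v in G.nodes)                               # Condition 3

G = nx.petersen_graph()
width, T = treewidth_min_degree(G)   # heuristic: width >= tw(G)
print(width, is_tree_decomposition(G, T))
```

The width of any valid decomposition only bounds $\operatorname{tw}(G)$ from above; exact treewidth computation is NP-hard in general.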
Before we focus on how the treewidth behaves under the bridge operation, let
us recall the following facts, whose proofs can be found in Bodlaender (5).
###### Lemma 2.10.
If $H$ is a minor of $G$, then
${{\operatorname{tw}(H)\leq\operatorname{tw}(G)}}$.
###### Lemma 2.11.
If ${{(\\{X_{i}:i\in I\\},T=(I,F))}}$ is a tree decomposition of a graph $G$
and ${{W\subseteq V(G)}}$ a clique in $G$, then there is a node ${{i\in I}}$
such that ${{W\subseteq X_{i}}}$.
Furthermore, given a graph $G$, we will call a vertex ${{v\in V(G)}}$ with
${{\deg(v)=3}}$ _safe_ if $G$ admits a tree decomposition having a bag that
contains $v$ and two of its neighbors. In view of Lemma 2.11, a vertex of
degree three is safe if it has two neighbors that are adjacent or that can be
joined by an edge without increasing the treewidth of $G$. Furthermore, we
call a vertex of degree three _unsafe_ if it is not safe. By definition, an
unsafe vertex has an independent neighborhood, as is the case for the vertex
$v$ in Figure 4. Suppose $v$ is an unsafe vertex of the indicated graph $G$,
with neighborhood ${{N(v)=\\{x,y,z\\}}}$. Then adding a clique on four
vertices by the bridge operation results in a graph that has ${{G+xy}}$ as
minor, which can be seen by contracting the vertices shaded in gray. So, in
general, the bridge operation can increase the treewidth, but only if we
combine graphs at unsafe vertices, as we will show next.
Figure 4: Adding a clique at an unsafe vertex
###### Theorem 2.12.
Consider two graphs $G_{1}$ and $G_{2}$ with vertices ${{v_{1}\in V(G_{1})}}$
and ${{v_{2}\in V(G_{2})}}$ whose neighborhoods are
${{N(v_{1})=\\{x_{1},y_{1},z_{1}\\}}}$ and
${{N(v_{2})=\\{x_{2},y_{2},z_{2}\\}}}$ and a graph
$G\coloneqq(G_{1}-v_{1})\cup(G_{2}-v_{2})+x_{1}x_{2}+y_{1}y_{2}+z_{1}z_{2}.$
If $v_{1}$ and $v_{2}$ are safe and
${{\max\\{\operatorname{tw}(G_{1}),\operatorname{tw}(G_{2})\\}\geq 3}}$, then
$\operatorname{tw}(G)=\max\\{\operatorname{tw}(G_{1}),\operatorname{tw}(G_{2})\\}.$
###### Proof 2.13.
Let ${{(\\{X_{i}:i\in
I_{1}\\},T_{1}=(I_{1},F_{1}))}}\eqqcolon(\mathcal{X},T_{1})$ and
${{(\\{Y_{j}:j\in I_{2}\\},T_{2}=(I_{2},F_{2}))\eqqcolon(\mathcal{Y},T_{2})}}$
be minimum width tree decompositions of $G_{1}$ and $G_{2}$, respectively,
having a bag ${{X_{s}\in\mathcal{X}}}$ containing $v_{1}$ and two of its
neighbors, say $x_{1}$ and $y_{1}$, and a bag ${{Y_{t}\in\mathcal{Y}}}$
containing $v_{2}$ and two of its neighbors. We can assume the existence of
such bags because $v_{1}$ and $v_{2}$ are safe. To verify
${{\operatorname{tw}(G)\leq\max\\{\operatorname{tw}(G_{1}),\operatorname{tw}(G_{2})\\}}}$,
our goal is to define a tree decomposition of width at most
${{\max\\{\operatorname{tw}(G_{1}),\operatorname{tw}(G_{2})\\}}}$ for $G$.
While we may denote the neighbors of $v_{1}$ in bag $X_{s}$ by $x_{1}$ and
$y_{1}$ without loss of generality, there are two cases to consider with
respect to how the vertices of $X_{s}$ and $Y_{t}$ are joined by edges in $G$.
Setting ${{F\coloneqq\\{x_{1}x_{2},y_{1}y_{2},z_{1}z_{2}\\}\cap E(G[X_{s}\cup
Y_{t}])}}$, either ${{|F|=1}}$ or ${{|F|\geq 2}}$.
Let us begin with the case ${{|F|=1}}$, denoting the neighbors of $v_{2}$ in
$G_{2}$ that are contained in $Y_{t}$ by $x_{2}$ and $z_{2}$. Since in $G$ the
vertices $v_{1}$ and $v_{2}$ do not exist, we may safely replace them.
Formally, for each ${{i\in I_{1}}}$ where ${{v_{1}\in X_{i}}}$ set
${{X_{i}^{\prime}\coloneqq X_{i}\setminus\\{v_{1}\\}\cup\\{z_{2}\\}}}$ and for
each ${{i\in I_{1}}}$ where ${{v_{1}\notin X_{i}}}$ set
${{X_{i}^{\prime}\coloneqq X_{i}}}$. Furthermore, for each ${{j\in I_{2}}}$
where ${{v_{2}\in Y_{j}}}$ set ${{Y_{j}^{\prime}\coloneqq
Y_{j}\setminus\\{v_{2}\\}\cup\\{y_{1}\\}}}$ and for each ${{j\in I_{2}}}$
where ${{v_{2}\notin Y_{j}}}$ set ${{Y_{j}^{\prime}\coloneqq Y_{j}}}$. Note
that we have not changed the cardinalities of the bags. Now take a new node
${{v\notin I_{1}\cup I_{2}}}$ to define the tree ${{T\coloneqq T_{1}\cup
T_{2}+v+sv+vt}}$ as well as the bag
${{X_{v}\coloneqq\\{x_{1},x_{2},y_{1},z_{2}\\}}}$. Because ${{|X_{v}|=4}}$ and
our assumption that
${{\max\\{\operatorname{tw}(G_{1}),\operatorname{tw}(G_{2})\\}\geq 3}}$, we
observe
$\max\big{\\{}\max\limits_{i\in I_{1}}|X_{i}|,\max\limits_{j\in
I_{2}}|Y_{j}|\big{\\}}=\max\big{\\{}\max\limits_{i\in
I_{1}}|X_{i}^{\prime}|,\max\limits_{j\in
I_{2}}|Y_{j}^{\prime}|,|X_{v}|\big{\\}}.$
Figure 5: Combining two tree decompositions at bags of safe vertices when
${{|F|=1}}$
It remains to be checked that ${{D\coloneqq(\\{X_{i}^{\prime}:i\in
I_{1}\\}\cup\\{Y_{j}^{\prime}:j\in I_{2}\\}\cup\\{X_{v}\\},T)}}$ is a tree
decomposition of $G$. When building the bags of $D$, the only vertices we
removed were $v_{1}$ and $v_{2}$, which are not present in $G$. So $D$
satisfies Condition 1 of Definition 2.9. For the same reason, for each edge in
${{E(G_{1})\cup E(G_{2})}}$ we find a bag in $D$ containing its endvertices.
Furthermore, by Condition 2 of Definition 2.9, there must be some ${{k\in
I_{1}}}$ such that ${{v_{1},z_{1}\in X_{k}}}$, which implies that
${{z_{1},z_{2}\in X_{k}^{\prime}}}$. Likewise, there is an ${{\ell\in I_{2}}}$
such that ${{y_{1},y_{2}\in Y_{\ell}^{\prime}}}$. Since the edge $x_{1}x_{2}$
is covered by the bag $X_{v}$, this verifies Condition 2 of Definition 2.9. We
have to check Condition 3 of Definition 2.9 essentially for the vertices in
$X_{v}$. This is because $T$ by construction is a tree having $T_{1}$ and
$T_{2}$ as subtrees, the only vertices we removed when building $D$ were
$v_{1}$ and $v_{2}$, and the only vertices we included in some bag were those
of $X_{v}$. Figure 5 illustrates the construction of the tree decomposition.
Herein, we placed $z_{2}$ in every bag that contained $v_{1}$, indicated by
$z_{2}$ in a gray box with subscript $v_{1}$. Therefore, we observe that
${{\\{i\in I_{1}:z_{2}\in X_{i}^{\prime}\\}}}$ induces a subtree of $T_{1}$.
Since ${{\\{j\in I_{2}:z_{2}\in Y_{j}^{\prime}\\}=\\{j\in I_{2}:z_{2}\in
Y_{j}\\}}}$ induces a subtree of $T_{2}$ and ${{z_{2}\in X_{v}}}$, we find
that the nodes whose bags contain $z_{2}$ induce a subtree of $T$. By
investigating Figure 5, we can argue similarly for the remaining vertices of
$X_{v}$.
For the case ${{|F|=2}}$, denote the neighbors of $v_{2}$ in $G_{2}$ that are
contained in $Y_{t}$ by $x_{2}$ and $y_{2}$. For each ${{i\in I_{1}}}$ where
${{v_{1}\in X_{i}}}$ set ${{X_{i}^{\prime}\coloneqq
X_{i}\setminus\\{v_{1}\\}\cup\\{z_{1}\\}}}$ and for each ${{i\in I_{1}}}$
where ${{v_{1}\notin X_{i}}}$ set ${{X_{i}^{\prime}\coloneqq X_{i}}}$.
Likewise, for each ${{j\in I_{2}}}$ where ${{v_{2}\in Y_{j}}}$ set
${{Y_{j}^{\prime}\coloneqq Y_{j}\setminus\\{v_{2}\\}\cup\\{z_{1}\\}}}$ and for
each ${{j\in I_{2}}}$ where ${{v_{2}\notin Y_{j}}}$ set
${{Y_{j}^{\prime}\coloneqq Y_{j}}}$. For two new nodes ${{v,w\notin I_{1}\cup
I_{2}}}$ define the tree ${{T\coloneqq T_{1}\cup T_{2}+v+w+sv+vw+wt}}$ as well
as the bags ${{X_{v}\coloneqq\\{x_{1},y_{1},y_{2},z_{1}\\}}}$ and
${{X_{w}\coloneqq\\{x_{1},x_{2},y_{2},z_{1}\\}}}$. This defines a tree
decomposition of width at most
${{\max\\{\operatorname{tw}(G_{1}),\operatorname{tw}(G_{2})\\}}}$, which can
be checked by investigating Figure 6, analogous to the previous case.
Figure 6: Combining two tree decompositions at bags of safe vertices when
${{|F|=2}}$
For the other inequality, note that both $G_{1}$ and $G_{2}$ are minors of
$G$. For example, contracting all vertices in $G$ that stem from $G_{2}$ to a
single vertex yields $G_{1}$. This implies
${{\operatorname{tw}(G)\geq\max\\{\operatorname{tw}(G_{1}),\operatorname{tw}(G_{2})\\}}}$
by Lemma 2.10, which concludes our proof.
Quite a few difficult combinatorial problems on graphs can be solved in
polynomial, or even linear, time by dynamic programming approaches if the
input graph has bounded treewidth, about which Bodlaender and Koster (6) give
an overview. This makes statements such as that of Theorem 2.12 useful. In
what follows, however, we will encounter situations where we cannot assume the
vertices involved in our bridge construction to be safe. Nevertheless, there
are some tools that will help us to show that extremal uniformly $3$-connected
graphs have bounded treewidth. To this end, let us recall the notion of a
_line graph_ $L(G)$ of a graph $G$. This is the graph on vertex set $E(G)$
whose vertices are adjacent exactly when they are incident in $G$.
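For illustration, networkx constructs line graphs directly, and the defining adjacency rule can be verified on a small example (the library is assumed; the checks are our own):

```python
import networkx as nx

# L(G) has one vertex per edge of G; two vertices of L(G) are adjacent
# exactly when the corresponding edges of G share an endvertex
G = nx.complete_graph(4)
L = nx.line_graph(G)

print(L.number_of_nodes() == G.number_of_edges())   # one vertex per edge
print(all(set(e) & set(f) for e, f in L.edges))     # adjacent => incident in G
print(L.has_edge((0, 1), (0, 2)))                   # both edges contain 0
```

Combined with a treewidth bound for $L(G)$, Lemma 2.14 then yields a bound for $G$ itself, which is how Theorem 2.18 below is applied.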
###### Lemma 2.14.
Every graph $G$ satisfies
$\operatorname{tw}(G)\leq 2\operatorname{tw}(L(G))+1.$
This bound and related results are presented by Harvey and Wood (12).
Furthermore, Bodlaender, Van Leeuwen, Tan, and Thilikos (7) give the following
relation.
###### Lemma 2.15.
Let $G_{1}$ and $G_{2}$ be two graphs containing cliques ${{S\subseteq
V(G_{1})}}$ and ${{T\subseteq V(G_{2})}}$ with ${{|S|=|T|}}$ and let $G$ be a
_clique-sum_ of $G_{1}$ and $G_{2}$, meaning a graph obtained by taking the
disjoint union of $G_{1}$ and $G_{2}$ and identifying $S$ and $T$. Then
$\operatorname{tw}(G)=\max\\{\operatorname{tw}(G_{1}),\operatorname{tw}(G_{2})\\}.$
Figure 7: The bridge operation and its effect on the corresponding line graphs
###### Lemma 2.16.
Consider two graphs $G_{1}$ and $G_{2}$ with vertices ${{v_{1}\in V(G_{1})}}$
and ${{v_{2}\in V(G_{2})}}$ whose neighborhoods are
${{N(v_{1})=\\{x_{1},y_{1},z_{1}\\}}}$ and
${{N(v_{2})=\\{x_{2},y_{2},z_{2}\\}}}$ and a graph
$G\coloneqq(G_{1}-v_{1})\cup(G_{2}-v_{2})+x_{1}x_{2}+y_{1}y_{2}+z_{1}z_{2}.$
Furthermore, let $H$ be a clique-sum of $L(G_{1})$ and $L(G_{2})$ formed by
identifying ${{E(\\{v_{1}\\},V(G_{1})\setminus\\{v_{1}\\})}}$ and
${{E(\\{v_{2}\\},V(G_{2})\setminus\\{v_{2}\\})}}$. Then $L(G)$ is a (proper)
subgraph of $H$.
###### Proof 2.17.
When forming $G$ by the bridge operation, adding the edges $x_{1}x_{2}$,
$y_{1}y_{2}$, and $z_{1}z_{2}$ corresponds to identifying $v_{1}x_{1}$ with
$v_{2}x_{2}$, $v_{1}y_{1}$ with $v_{2}y_{2}$, and $v_{1}z_{1}$ with
$v_{2}z_{2}$ in the respective line graphs, as is indicated by dashed green
lines in Figure 7. Deleting $v_{1}$ and $v_{2}$ when forming $G$ by the bridge
operation removes the cliques, indicated by dotted gray lines in Figure 7, at
which the clique-sum of $L(G_{1})$ and $L(G_{2})$ is formed. This is why
$L(G)$ is a _proper_ subgraph of $H$.
###### Theorem 2.18.
Let $\mathcal{C}$ be a class of graphs which arises by successively taking the
bridge operation to join graphs from a base class whose line graphs have
treewidth bounded by $w$. Then every $G\in\mathcal{C}$ satisfies
$\operatorname{tw}(G)\leq 2w+1.$
###### Proof 2.19.
This follows directly from Lemmas 2.14, 2.15, and 2.16.
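Spelled out (our unpacking of the cited lemmas, not part of the original proof), for a single bridge operation joining $G_{1}$ and $G_{2}$ the chain of inequalities is, using that treewidth is monotone under subgraphs:

```latex
\operatorname{tw}(G)
  \le 2\operatorname{tw}(L(G)) + 1                                      % Lemma 2.14
  \le 2\operatorname{tw}(H) + 1                                         % L(G) \subseteq H by Lemma 2.16
  =   2\max\{\operatorname{tw}(L(G_1)), \operatorname{tw}(L(G_2))\} + 1 % Lemma 2.15
  \le 2w + 1.
```

In particular $\operatorname{tw}(L(G))\le\operatorname{tw}(H)\le w$, so the invariant "the line graph has treewidth at most $w$" is preserved, and induction over the successive bridge operations yields the bound for every $G\in\mathcal{C}$.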
## 3 Applications
Let us proceed with an example that illustrates how to use the Equations (1)
to (4), which we obtained in the course of our proof of Theorem 3, to get a
precise picture of extremal uniformly $3$-connected graphs.
###### Example 3.20.
Let us ask for the graphs on ${{n=10}}$ vertices with minimum number of
vertices of minimum degree. Condition (4) tells us that the extremal graphs
are those where $p$ is maximal. In view of Condition (3), we choose ${{p=2}}$.
Condition (1) then reads ${{4=2\mkern 1.0mut+2\mkern 1.0muj+s}}$ and by
Condition (2), we obtain ${{j\geq 1}}$. This leaves us exactly with the
settings where $p=2$ and
$t=1,j=1,s=0\quad\text{or}\quad t=0,j=2,s=0\quad\text{or}\quad t=0,j=1,s=2.$
A graph for the setting ${{t=1,j=1,p=2,s=0}}$ is illustrated in Figure 8.
Figure 8: An extremal uniformly $3$-connected graph on ten vertices
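The case analysis in this example can be checked mechanically. The sketch below (ours, hard-coding Conditions (1) and (2) as they are instantiated above for $n=10$, $p=2$) enumerates all nonnegative $(t,j,s)$ with $2t+2j+s=4$ and $j\geq p-1=1$:

```python
# Enumerate the settings from Example 3.20: with n = 10 and p = 2,
# Condition (1) reads 4 = 2t + 2j + s, and Condition (2) gives j >= p - 1 = 1.
n, p = 10, 2
budget = n - 4 - p  # = 4

settings = sorted(
    (t, j, s)
    for t in range(budget // 2 + 1)
    for j in range(budget // 2 + 1)
    for s in range(budget + 1)
    if 2 * t + 2 * j + s == budget and j >= p - 1
)
print(settings)  # the three settings listed in the example
```

The enumeration recovers exactly the three settings $t=1,j=1,s=0$; $t=0,j=2,s=0$; and $t=0,j=1,s=2$.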
In what follows, we shall generalize the findings from this example, and so
identify the conditions under which extremal uniformly $3$-connected graphs
are planar.
###### Theorem 3.21.
Given an extremal uniformly $3$-connected graph on ${{n=3\mkern
1.0muk+\ell\geq 5}}$ vertices, for some ${{k\in\mathbb{N}\setminus\\{1\\}}}$
and ${{\ell\in\\{-1,0,1\\}}}$, let $j,t,p$, and $s$ be the respective numbers
of bridge operations, edge joins, primary and secondary spoke operations
involved in constructing $G$.
1. 1.
Then $p=k-1$.
2. 2.
If $\ell=-1$, then $j=k-2$, $t=s=0$.
3. 3.
If $\ell=0$,
then ${{j=k-2}}$, ${{t=0}}$, ${{s=1}}$.
4. 4.
If $\ell=1$,
then ${{j=k-1}}$, ${{t=s=0}}$, or ${{j=k-2}}$, ${{t=1}}$, ${{s=0}}$, or
${{j=k-2}}$, ${{t=0}}$, ${{s=2}}$.
###### Proof 3.22.
In view of Conditions (3) and (4), building an extremal graph involves
$p=\lfloor(n-2)/3\rfloor=\lfloor(3\mkern
1.0muk+\ell-2)/3\rfloor=k+\lfloor(\ell-2)/3\rfloor=k-1$
primary spoke operations. Thus Statement 1 holds. Condition (2) requires that
${{j\geq p-1=k-2}}$ and so Condition (1) tells us that
$\displaystyle n$ $\displaystyle=4+2\mkern 1.0mut+2\mkern 1.0muj+p+s$
$\displaystyle\Rightarrow 3\mkern 1.0muk+\ell$ $\displaystyle\geq 4+2\mkern
1.0mut+2\mkern 1.0mu(k-2)+k-1+s$ $\displaystyle\Rightarrow\phantom{3\mkern
1.0muk+\ell}\mathchoice{}{}{}{}1+\ell$ $\displaystyle\geq 2\mkern 1.0mut+s.$
For ${{\ell=-1}}$, we obtain ${{j=k-2}}$, ${{t=s=0}}$, which is Statement 2.
For ${{\ell=0}}$, we obtain ${{j=k-2}}$, ${{t=0}}$, ${{s=1}}$, which is
Statement 3. For ${{\ell=1}}$ and ${{j=k-2}}$, we obtain ${{t=0}}$ and
${{s=2}}$ or ${{t=1}}$ and ${{s=0}}$, which are the last two alternatives in
Statement 4. If ${{\ell=1}}$ and ${{j=k-1}}$, then Condition (1) implies
$t=s=0$, which is the remaining alternative in Statement 4. Finally, note that
$j$ cannot be larger than ${{k-1}}$, since otherwise the right hand side of
Equation (1) exceeds the left hand side.
Let us see what we now know about small extremal uniformly $3$-connected
graphs.
###### Observation 3.23.
The _wheel_ graph on ${{n\geq 4}}$ vertices is the graph resulting from a cycle on
${{n-1}}$ vertices by adding a new vertex which is adjacent to all other
vertices. We denote such a graph by $W_{n}$. The graph $W_{4}$ is complete and
all its vertices have degree three. Performing a primary spoke operation on
$W_{4}$ results in $W_{5}$. Similarly, performing a secondary spoke operation
on $W_{n}$ results in $W_{n+1}$ for all ${{n\geq 4}}$.
Figure 9: Small extremal uniformly $3$-connected graphs built out of a wheel
graph by edge joins ($\rightsquigarrow_{t}$) and primary spoke operations
($\rightsquigarrow_{p}$)
Let us consider an extremal uniformly $3$-connected graph $G$ in whose
construction an edge join is involved. Recall that edge joins in Tutte’s
characterization (16), and so in Theorem 2, are only allowed to be applied on
$3$-regular $3$-connected graphs. It is not hard to see that extremal
uniformly $3$-connected graphs are nonregular for all ${{n\geq 5}}$. So when
an edge join is involved in building $G$, it can only take the graph $W_{4}$
as input. This can only produce the complete bipartite graph $K_{3,3}$ or the
envelope graph, depicted in the middle of Figure 9. Out of those graphs, we
can obtain the graphs on the right in Figure 9 by a primary spoke operation.
The dashed green edges drawn in the bottom right graph are to be understood as
alternatives. They indicate the three nonisomorphic graphs that can be built
out of the envelope graph by a primary spoke operation. In fact, one can check
that the alternative where edge $f$ is added to the envelope graph is
isomorphic to the top right graph in Figure 9. The alternative where edge $e$
is added to the envelope graph is isomorphic to the graph which results from
combining the wheel graphs $W_{4}$ and $W_{5}$ by the bridge operation.
Similarly, the envelope graph can be combined out of two wheel graphs $W_{4}$
by the bridge operation. So nonplanar graphs might arise even if we forbid
edge joins.
With Theorem 2.7, we have the key to combine our present findings as follows.
###### Theorem 3.24.
Let $G$ be an extremal uniformly $3$-connected graph on ${{n=3k+\ell\geq 4}}$
vertices, for suitable $k\in\mathbb{N}$ and ${{\ell\in\\{-1,0,1\\}}}$. Then
${{\operatorname{cro}(G)\leq 1}}$, and if ${{n=4}}$ or
${{\ell\in\\{-1,0\\}}}$, then $G$ is planar.
###### Proof 3.25.
The only uniformly $3$-connected graph for ${{n=4}}$ is the complete graph on
four vertices. It is an extremal one and it is planar. Consider now an
extremal uniformly $3$-connected graph $G$ on ${{n=3\mkern 1.0muk+\ell\geq
5}}$ vertices, where ${{k\in\mathbb{N}\setminus\\{1\\}}}$. If
${{\ell\in\\{-1,0\\}}}$, then Items 1 to 3 of Theorem 3.21 tell us that $G$ is
built by ${{k-1}}$ primary spoke operations, one secondary spoke operation if
${{\ell=0}}$, and ${{k-2}}$ bridge operations. In other words, $G$ results
from using the bridge operation recursively to combine wheels $W_{5}$, and one
wheel $W_{6}$ if ${{\ell=0}}$. So $G$ is planar by Theorem 2.7.
If ${{\ell=1}}$, then Items 1 and 4 of Theorem 3.21 tell us that $G$ is built
by ${{k-1}}$ primary spoke operations. If ${{j=k-1}}$, then ${{t=s=0}}$. So
$G$ results from recursively using the bridge operation to combine one wheel
$W_{4}$ and ${{k-1}}$ wheels $W_{5}$ or, in view of Observation 3.23, to
combine one of the graphs in the bottom right corner of Figure 9 with
${{k-2}}$ wheels $W_{5}$. So $\operatorname{cro}(G)\leq 1$ by Theorem 2.7.
It remains to consider the case where ${{\ell=1}}$ and ${{j=k-2}}$. If ${{t=1}}$, then
${{s=0}}$ and $G$ results from using the bridge operation recursively to
combine wheels $W_{5}$ with one of the graphs on the right of Figure 9. So
${{\operatorname{cro}(G)\leq 1}}$ by Theorem 2.7. If ${{t=0}}$, then ${{s=2}}$
and $G$ results from using the bridge operation recursively to combine wheels
$W_{5}$ with two $W_{6}$ or one $W_{7}$. So ${{\operatorname{cro}(G)\leq 1}}$
by Theorem 2.7.
Figure 10: Extremal uniformly $3$-connected graphs
Questions concerning the colorability of uniformly $3$-connected graphs are
addressed by Aboulker, Brettell, Havet, Marx, and Trotignon (1). There, it is
shown that uniformly $3$-connected graphs, except the wheels on an even number
of vertices, are $3$-colorable. Moreover, the authors demonstrate that such a
coloring can be determined in polynomial time.
Another aspect one may notice is a certain similarity between extremal
uniformly $3$-connected graphs and _Halin graphs_ , surveyed by Brandstadt,
Le, and Spinrad (8). Those are graphs that can be obtained by embedding a tree
without vertices of degree two in the plane and connecting its leaves by a
cycle without crossing any of the tree edges. By the previous proof, we can
obtain those Halin graphs where the inner vertices are of degree four, with
few exceptions. If ${{\ell=0}}$, we may have one vertex of degree five. If
${{\ell=1}}$, we may have two vertices of degree five or one of degree six. An
example is illustrated on the left in Figure 10. In general, Halin graphs can
be seen to be uniformly $3$-connected, but not the other way around.
Counterexamples are certainly nonplanar (extremal) uniformly $3$-connected
graphs and even for ${{\ell=-1}}$, we find for example the graph depicted on
the right in Figure 10.
Figure 11: An extremal uniformly $3$-connected graph containing an unsafe
vertex $v$
The overlap with the class of Halin graphs motivates the question of whether
extremal uniformly $3$-connected graphs have a similar tree-like structure.
Whereas, as Bodlaender (4) shows, Halin graphs have treewidth three, general
uniformly $3$-connected graphs have unbounded treewidth. Meeks (14)
demonstrates this by an example illustrating that for any ${{k\in\mathbb{N}}}$
there are $3$-regular, $3$-connected graphs having a ${{k\times k}}$ grid as
minor. In contrast, the small extremal uniformly $3$-connected graphs in
Figure 9 as well as wheel graphs have treewidth three. In view of Observation
3.23 and the proof of Theorem 2.7, those are the elemental building blocks for
the bridge operation when constructing extremal uniformly $3$-connected
graphs. Theorem 2.12 guarantees that the bridge operation preserves the
treewidth of the input graphs if no unsafe vertices arise in the course of the
construction. But Figure 11 shows an extremal uniformly $3$-connected graph
that has an unsafe vertex. The depicted graph is that from the bottom right
corner of Figure 9 where edge $f$ is added. In Figure 11, the neighborhood of
$v$ is independent and connecting any of its neighbors, indicated by the
dashed green lines, produces a $K_{5}$ minor, which can be seen by contracting
the respective vertices shaded gray. As illustrated in Figure 4, adding a
wheel by the bridge operation at $v$, produces a graph of treewidth four.
Although this is the only unsafe situation we have identified so far, we are not
sure whether others exist, which precludes us from showing that the treewidth is
bounded by four using Theorem 2.12 alone. However, with Theorem 2.18, we can at least
verify bounded treewidth.
###### Corollary 3.26 (of Theorem 2.18).
The treewidth of any extremal uniformly $3$-connected graph $G$ is bounded by
${{\operatorname{tw}(G)\leq 13}}$.
Figure 12: Small extremal uniformly $3$-connected graphs and their
corresponding line graphs
###### Proof 3.27.
In Observation 3.23 and the proof of Theorem 3.24, we argued that any extremal
uniformly $3$-connected graph can be built by successively taking the bridge
operation to join graphs from Figure 9 and wheel graphs on up to six vertices.
The treewidth of the line graphs of those graphs is bounded by ${{w=6}}$. This
can be seen by investigating Figure 12. The graphs depicted there satisfy
${{\operatorname{tw}(G_{1})\leq 5}}$, ${{\operatorname{tw}(G_{2})\leq 5}}$,
and ${{\operatorname{tw}(G_{3})\leq 6}}$. To certify this, let us determine
respective tree decompositions. In all three graphs, removing the vertices
highlighted by gray circles, and incident edges, leaves us with a tree, for
which we easily find a tree decomposition of width one. Putting the vertices
highlighted by gray circles in every bag, provides us with suitable tree
decompositions. Also recall that the graph from the bottom right corner of
Figure 9 where edge $e$ is added can be obtained by joining a wheel on five
vertices with one on four vertices by the bridge operation. Clearly, all
remaining line graphs of graphs from Figure 9 as well as line graphs of wheels
on four and five vertices are minors of one of the graphs depicted in Figure
12. So Theorem 2.18 provides us with the asserted bound.
Open Question. In view of Corollary 3.26 and Figures 11 and 4, we know that
there is a general upper bound ${{4\leq C\leq 13}}$ such that
${{\operatorname{tw}(G)\leq C}}$ holds for any extremal uniformly
$3$-connected graph $G$. It is open what the smallest such bound $C$ is. We
tend to believe that ${{C=4}}$.
## References
* Aboulker et al. (2017) Pierre Aboulker, Nick Brettell, Frédéric Havet, Dániel Marx, and Nicolas Trotignon. Coloring graphs with constraints on connectivity. _Journal of Graph Theory_ , 85(4):814–838, 2017.
* Arnborg and Proskurowski (1989) Stefan Arnborg and Andrzej Proskurowski. Linear time algorithms for $NP$-hard problems restricted to partial $k$-trees. _Discrete Applied Mathematics_ , 23(1):11–24, 1989.
* Beineke et al. (2002) Lowell W. Beineke, Ortrud R. Oellermann, and Raymond E. Pippert. The average connectivity of a graph. _Discrete Mathematics_ , 252(1-3):31–45, 2002.
* Bodlaender (1988) Hans L. Bodlaender. Planar graphs with bounded treewidth. _Technical Report RUU-CS-88-14, Utrecht University_ , 1988.
* Bodlaender (1998) Hans L. Bodlaender. A partial k-arboretum of graphs with bounded treewidth. _Theoretical Computer Science_ , 209(1-2):1–45, 1998.
* Bodlaender and Koster (2008) Hans L. Bodlaender and Arie M. C. A. Koster. Combinatorial optimization on graphs of bounded treewidth. _The Computer Journal_ , 51(3):255–269, 2008.
* Bodlaender et al. (1997) Hans L. Bodlaender, Jan Van Leeuwen, Richard Tan, and Dimitrios M. Thilikos. On interval routing schemes and treewidth. _Information and Computation_ , 139(1):92–109, 1997.
* Brandstadt et al. (1999) Andreas Brandstadt, Van Bang Le, and Jeremy P. Spinrad. _Graph Classes: A Survey_. Monographs on Discrete Mathematics and Applications. Society for Industrial and Applied Mathematics, 1999.
* Diestel (2017) Reinhard Diestel. _Graph Theory_. Springer, 2017.
* Göring et al. (2022) Frank Göring, Tobias Hofmann, and Manuel Streicher. Uniformly connected graphs. _Journal of Graph Theory_ , pages 1–16, 2022.
* Halin (1969) Rudolf Halin. A theorem on $n$-connected graphs. _Journal of Combinatorial Theory_ , 7(2):150–154, 1969.
* Harvey and Wood (2018) Daniel J Harvey and David R Wood. The treewidth of line graphs. _Journal of Combinatorial Theory, Series B_ , 132:157–179, 2018.
* Mader (1979) Wolfgang Mader. Zur Struktur minimal $n$-fach zusammenhängender Graphen. _Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg_ , 49(1):49–69, 1979.
* Meeks (2016) Kitty Meeks. The challenges of unbounded treewidth in parameterised subgraph counting problems. _Discrete Applied Mathematics_ , 198:170–194, 2016.
* Tutte (1961) William T. Tutte. A theory of 3-connected graphs. _Indagationes Mathematicae_ , 23:441–455, 1961.
* Tutte (1966) William T. Tutte. _Connectivity in Graphs_. University of Toronto Press, 1966.
* West (2001) Douglas B. West. _Introduction to Graph Theory_. Prentice Hall, 2001.
# On the boundedness of solutions of the difference equation
$x_{n+1}=ax^{\alpha}_{n}+bx^{\alpha}_{n-1}$, $0<\alpha\leq 2$, and its application
in medicine
Zeraoulia Rafik
University of Batna 2, Algeria
Department of Mathematics
Yabous, Khenchela
<EMAIL_ADDRESS>
&Alvaro Humberto Salas
Universidad Nacional de Colombia
Department of Physics
Bogotá, Colombia
<EMAIL_ADDRESS>
&Lorenzo Martinez
Universidad Nacional de Colombia
Department of Mathematics
Manizales, Caldas
<EMAIL_ADDRESS>
###### Abstract
Recently, mathematicians have been interested in the theory of discrete
dynamical systems, and in difference equations in particular. Considerable work
on the behavior of their solutions (boundedness and unboundedness) has been
published across many areas of mathematics, with interesting results and
applications in applied mathematics and physics. Among the discrete dynamics
that have attracted researchers in the field are rational dynamical systems. In
this paper we discuss the qualitative behavior and properties of the difference
equation $x_{n+1}=ax^{2}_{n}+bx^{2}_{n-1}$, where $a$ and $b$ are two
parameters, and we present an application to medicine.
_Keywords:_ difference equation $\cdot$ boundedness $\cdot$ number theory
$\cdot$ dynamical system
## 1 Introduction
The theory of difference equations finds applications in almost all areas
of natural science [Daniel J. Duffy(2006)]. Difference equations with discrete
and continuous argument play an increasingly fundamental role in understanding
nonlinear dynamics and related phenomena; they are also used in combinatorics
and in the approximation of solutions of partial differential equations
[Y. Ordokhan1 ,S. Davaei far(2017)]. The increased interest in difference
equations is partly due to their ease of handling: a minimum of computing and
graphical tools suffices to see how the solutions of difference equations trace
their bifurcations as parameters change [Josef Diblik ,Miroslava Ruzickova ,
Barbora Vaclavikova (2008)]. This opens a path to a deeper understanding of
invariant manifolds for linear and nonlinear dynamical systems. Nonlinear
difference equations and systems are of wide interest due to their applications
in real life: such equations appear naturally as mathematical models describing
biological, physical and economical phenomena.
Although difference equations have very simple forms, it is extremely
difficult to completely understand the global behavior of their solutions.
One can refer to [Charlie y Routh(1992)], [Camouzis and G.Ladas(2008)],
[E.A.Grove and G. Ladas(2005)] and
the references therein. Difference equations have always played an important
role in the construction and analysis of mathematical models of biology,
ecology, physics, economic processes, etc. The study of nonlinear rational
difference equations of higher order is of paramount importance, since we
still know so little about such equations. In [A. Q. Khan,S. M.
Qureshi (2020)], A. Q. Khan and S. M. Qureshi discussed dynamical properties of
some rational systems of difference equations: they explored the
equilibrium points, local and global dynamics, rate of convergence,
instability and boundedness of positive solutions of some rational systems of
difference equations. As an application to modern science, namely to
mathematical biology, they also explored the local dynamics about the
equilibrium points of the discrete-time Levin’s model. Meanwhile, A. Q. Khan
studied the global dynamics of a $3\times 6$ exponential system of
difference equations defined as:
$\begin{cases}x_{n+1}=\frac{\alpha_{1}+\beta_{1}\exp(-x_{n})}{\gamma_{1}+y_{n-1}}\\\
y_{n+1}=\frac{\alpha_{2}+\beta_{2}\exp(-y_{n})}{\gamma_{2}+z_{n-1}}\\\
z_{n+1}=\frac{\alpha_{3}+\beta_{3}\exp(-z_{n})}{\gamma_{3}+y_{n-1}}\end{cases}$
$n=0,1,\ldots$
where parameters $\alpha_{i},\beta_{i},\gamma_{i}(i=1,2,3)$ and initial
conditions $x_{i},y_{i},z_{i}(i=0,-1)$ are nonnegative real numbers.
In [R. Abo-Zeid (2017)] .R. Abo Zeid has discussed the global behavior of all
solutions of the difference equation:
$x_{n+1}=\frac{x_{n}x_{n-1}}{ax_{n}+bx_{n-1}},\quad n\in\mathbb{N}_{0}$
(1)
where $a,b$ are real numbers and the initial conditions $x_{-1},x_{0}$ are
real numbers .In this paper, we discuss the global behavior of the difference
equation :
$x_{n+1}=Ax_{n}^{\alpha}+Bx_{n-1}^{\alpha}$ (2)
where $A,B$ are two real parameters and $\alpha$ is a real number such
that $0<\alpha\leq 2$. For $\alpha=1$ the dynamics defined in (2) is obtained
from (1) by the substitution $y_{n}=\frac{1}{x_{n}}$; the global behavior of
all solutions of (1) is discussed in [R. Abo-Zeid (2017)] by R. Abo-Zeid.
For $\alpha=1$, the difference equation (2) becomes, namely,
$y_{n+1}=by_{n}+ay_{n-1},n\in\mathbb{N}$ (3)
The characteristic equation of equation (3) is :
$\lambda^{2}-b\lambda-a=0$ (4)
Equation (4) has the two roots
$\lambda_{1}=\frac{b-\sqrt{b^{2}+4a}}{2},\qquad\lambda_{2}=\frac{b+\sqrt{b^{2}+4a}}{2}$
The form of the solution depends on the sign of the discriminant $b^{2}+4a$.
The following theorem [S.Elaydi(2005)] is useful in studying the solutions of
the difference equation (3).
###### Theorem 1.1
The following statements hold:
* •
1) All solutions of (3) oscillate (about zero) if and only if the
characteristic equation has no positive roots.
* •
2) All solutions of (3) converge to zero if and only if
$\max\\{|\lambda_{1}|,|\lambda_{2}|\\}<1$
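As an illustrative sketch (not from the paper; the function names are ours), the closed form $y_{n}=c_{1}\lambda_{1}^{n}+c_{2}\lambda_{2}^{n}$ built from these roots can be checked against direct iteration of (3), assuming distinct real roots:

```python
import math

def roots(a, b):
    # Roots of lambda^2 - b*lambda - a = 0, the characteristic equation of
    # y_{n+1} = b*y_n + a*y_{n-1}; assumes a real discriminant b^2 + 4a >= 0.
    d = math.sqrt(b * b + 4 * a)
    return (b - d) / 2, (b + d) / 2

def closed_form(a, b, y0, y1, n):
    # Solve y_0 = c1 + c2 and y_1 = c1*l1 + c2*l2 for c1, c2 (distinct roots),
    # then evaluate y_n = c1*l1^n + c2*l2^n.
    l1, l2 = roots(a, b)
    c2 = (y1 - l1 * y0) / (l2 - l1)
    c1 = y0 - c2
    return c1 * l1 ** n + c2 * l2 ** n

def iterate(a, b, y0, y1, n):
    # Direct iteration of y_{n+1} = b*y_n + a*y_{n-1} for comparison.
    prev, cur = y0, y1
    for _ in range(n - 1):
        prev, cur = cur, b * cur + a * prev
    return cur
```

With $a=b=1$ and $y_{0}=0$, $y_{1}=1$ the recurrence reduces to the Fibonacci recursion, and both evaluations agree.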
For the boundedness of solutions of (3) one can refer to [R. Abo-Zeid (2017)].
Now for $\alpha=2$, which is the aim of this paper, we are ready to do some
analysis and discussion of the global behavior of solutions of the
difference equation:
$y_{n+1}=by_{n}^{2}+ay_{n-1}^{2},n\in\mathbb{N}$ (5)
## 2 Analysis and discussion
Case 1: $|a|,|b|<1$. For this case we may use a simple trick: we assume $a,b$
are two bounded functions such that
$a=\sin(\theta),b=\cos(\theta),\theta\in\mathbb{R}$; the dynamics defined
in (5) then becomes:
$y_{n+1}=\cos(\theta)y_{n}^{2}+\sin(\theta)y_{n-1}^{2},n\in\mathbb{N},\theta\in\mathbb{R}$
(6)
Now we may ask: for which values of $\theta$ does the equation
$x_{n+1}=\cos(\theta)x^{2}_{n}+\sin(\theta)x^{2}_{n-1}$ have bounded
solutions?
We ran a small computation. The plot in Figure 1 is created as
follows: for each point $r(\cos\theta,\sin\theta)$, we use $x_{-1}=0$,
$x_{0}=r$ as initial values, and $\theta$ as the parameter.
The white area is where the iterates stay below $2$ (bounded) for the first
30 iterations. As we see, a small disc around the origin is white, meaning
that there are small initial values whose orbits are bounded for every
$\theta$. This is not a proof, but the picture suggests it.
Changing the cut-off to $40$ iterations does not change the picture much.
Figure 1: Bounded solution for $x_{-1}=0,r=x_{0}$
For an analytical proof we have: let $r=1/\sqrt{2}$. If $|x_{-1}|,|x_{0}|\leq
r$, then the sequence is bounded for all $\theta$.
We note that $|\cos(\theta)x_{n-1}^{2}+\sin(\theta)x_{n}^{2}|\leq
r^{2}(|\cos\theta|+|\sin\theta|)\leq r^{2}\sqrt{2}=1/\sqrt{2}=r$, and the
statement follows by induction.
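The induction argument can be spot-checked numerically. The sketch below is ours; it mirrors the computation described above (cutoff $2$, 30 iterations) and samples random initial values inside the disc of radius $r=1/\sqrt{2}$:

```python
import math
import random

def bounded_orbit(theta, x_prev, x_cur, steps=30, cutoff=2.0):
    # Iterate x_{n+1} = cos(theta)*x_n^2 + sin(theta)*x_{n-1}^2 and report
    # whether all iterates stay below the cutoff used in the paper's plots.
    for _ in range(steps):
        x_prev, x_cur = x_cur, math.cos(theta) * x_cur ** 2 + math.sin(theta) * x_prev ** 2
        if abs(x_cur) > cutoff:
            return False
    return True

# Spot-check the analytical bound r = 1/sqrt(2): initial values within the
# square [-r, r]^2 should remain bounded for every theta.
r = 1 / math.sqrt(2)
random.seed(0)
ok = all(
    bounded_orbit(random.uniform(0, 2 * math.pi),
                  random.uniform(-r, r), random.uniform(-r, r))
    for _ in range(1000)
)
```

Every sampled orbit stays bounded, consistent with the induction above.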
Case 2: $|a|=|b|=1$. For this case we note, after running the same computation
with the same initial conditions for each point $r(\cos\theta,\sin\theta)$
(20 iterations), that the obtained picture changes (it is rotated)
and the white region becomes larger than in Case 1. This means that there are
additional initial values whose orbits are bounded for every $\theta$ (the
number of such initial values increases); see Figure 2.
Figure 2: New initial values giving bounded solutions for every $\theta$;
$x_{-1}=0,r=x_{0}$
Case 3: In this case we may assume
$a^{\prime}=|a|k,b^{\prime}=|b|k,k\in\mathbb{R}$, with
$|a^{\prime}|,|b^{\prime}|>1$; our dynamics then becomes:
$y_{n+1}=k|\cos(t)|y_{n}^{2}+k|\sin(t)|y_{n-1}^{2},n\in\mathbb{N},t\in\mathbb{R}$
(7)
With the same data and conditions used in Cases 1 and 2, we noticed after
running Mathematica that the white region becomes smaller as $|k|$ grows,
which indicates that the dynamics defined in (7) becomes unbounded. As
numerical evidence we take $k=5,15,-150$, as shown in the plots below
(Figures 3, 4 and 5).
Figure 3: Boundedness of solutions of the dynamics (7), $k=5$
Figure 4: Boundedness of solutions of the dynamics (7), $k=15$
Figure 5: Boundedness of solutions of the dynamics (7), $k=-150$
For the remaining case $0<\alpha\leq 1$, we noticed that iterating the
dynamics (2) shows many more initial values whose orbits are bounded for every
$\theta$; we took $\alpha=\frac{1}{2},\frac{1}{3}$ and $\alpha=1$ as
comparative examples, as shown in the figures below:
Figure 6: Bounded solutions for $\alpha=1$ (white circle around the origin)
Figure 7: Bounded solutions for $\alpha=\frac{1}{2},\frac{1}{3},\cdots$ (the
disc becomes entirely white, indicating that all solutions of our dynamics are
bounded for arbitrary initial values)
## 3 Stability and analysis
Dynamical modeling is the study of change, and changes take place everywhere in
life. As a result, dynamical systems have a wide range of application areas in
applied science and engineering. With these systems, real-life situations can
be turned into the language of mathematics. If one can create a good model for
a real-life situation, one can predict the future states of the system simply
by iterating this model.
Stability, as one of the most important concepts in discrete dynamical theory,
can tell much about the behavior of a dynamical system. In this section a
symbolic Mathematica package for the analysis and control of chaos in discrete
two-dimensional nonlinear dynamical systems is presented. Computer codes are
constructed to find the stability types of the fixed points, covering also the
stability of one-dimensional nonlinear dynamical systems. Applications are
taken from chemical kinetics and population dynamics (the logistic model).
Since our dynamics is a two-dimensional discrete dynamical system, it is enough
to run the code below to get the stability and oscillation of points around
the origin.
Example 1: Let us apply this code to our dynamics; let $a=2,b=9$:
twoDimStab[2 x^2 + y + 1, 9 x^2 - 1]
{Null, {1/11, -(112/121)} -> oscillatory source}
One can do more examples by simply applying the above code with other values
of $a$ and $b$.
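For readers without Mathematica, the fixed-point analysis behind this output can be reproduced directly. The sketch below is our code, not the cited package: it computes the fixed points of the map $x^{\prime}=ax^{2}+y+1$, $y^{\prime}=bx^{2}-1$ and classifies them by the moduli of the Jacobian eigenvalues:

```python
import numpy as np

def fixed_points(a, b):
    # Fixed points satisfy y = b*x^2 - 1 and x = a*x^2 + y + 1, hence
    # x = (a + b)*x^2, so x = 0 or x = 1/(a + b).
    pts = [(0.0, -1.0)]
    if a + b != 0:
        x = 1.0 / (a + b)
        pts.append((x, b * x * x - 1.0))
    return pts

def classify(a, b, x, y):
    # Jacobian of the map at (x, y); note it is independent of y here.
    # Both eigenvalue moduli > 1: source; both < 1: sink; otherwise saddle.
    J = np.array([[2 * a * x, 1.0], [2 * b * x, 0.0]])
    mags = np.abs(np.linalg.eigvals(J))
    if mags.min() > 1:
        return "source"
    if mags.max() < 1:
        return "sink"
    return "saddle"
```

With $a=2$, $b=9$ this recovers the fixed point $(1/11,-112/121)$ reported by the package output above, classified as a source (its negative eigenvalue accounts for the oscillation); the second fixed point $(0,-1)$ turns out to be a sink.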
The periodicity of our dynamics for $\alpha=2$ is already discussed in
[R. Abo-Zeid (2017)]; the dynamics has a two-periodic solution.
## 4 PHASE PLANE DIAGRAMS
Continuous systems are often approximated as discrete processes, meaning that
we look only at the solutions for positive integer inputs. Difference
equations are recurrence relations, and first order difference equations only
depend on the previous value. Using difference equations, we can model
discrete dynamical systems. The observations we can determine, by analyzing
phase plane diagrams of difference equations, are if we are modeling decay or
growth, convergence, stability, and equilibrium points.In this section we may
give a diagram plot of our dynamics for some values $a$ and $b$ and $\alpha=2$
, The dynamics defined in (2) can be written as sytem of two difference
equations :
$\begin{cases}x(t+1)=ax_{t}^{2}+y(t)+1\\\ y(t+1)=bx_{t}^{2}-1\end{cases}$ (8)
The phase portrait of the dynamics (8) is shown in the figure below; one can
change the values of $a,b$ to analyze the slope field.
Figure 8: Phase diagram of the nonlinear dynamics (8) for $a=0.5,b=2$
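Since the Mathematica snippet itself is not reproduced here, a minimal Python substitute (ours, under the system (8) as written above) for generating orbit points for a phase-plane plot is:

```python
def orbit(a, b, x0, y0, steps=20):
    # Collect the orbit of the system x_{t+1} = a*x_t^2 + y_t + 1,
    # y_{t+1} = b*x_t^2 - 1 as a list of points for a phase-plane plot.
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        x, y = a * x * x + y + 1, b * x * x - 1
        pts.append((x, y))
    return pts
```

Starting at the fixed point $(0,-1)$ the orbit stays put, while for larger initial values (e.g. $x_{0}=1$, $y_{0}=0$ with $a=0.5$, $b=2$) the iterates grow rapidly, matching the divergence seen away from the origin in the plots.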
## 5 Time series analysis and modeling (application to medicine)
Nowadays mathematics is being successfully applied to a number of important
fields in medicine including biofluids, cardiovascular diseases, clinical
schedules and tests, data analysis, drug design and discovery, epidemiology,
genetics, image processing, immunology, instrumentation, microbiology [José M.
Amigó , Michael Small (2017)], neuroscience, oncology, virology and more. The
list of tools includes virtually the whole of applied mathematics. To cite the
most familiar ones: difference equations and discrete-time dynamical systems,
information and coding theory, graph and network theory, integral transforms,
numerical and computational mathematics, ordinary differential equations and
continuous-time dynamical systems, partial differential equations, stochastic
and time-delay differential equations, statistics, probability and time-series
analysis. All this research has contributed, and increasingly continues to
contribute, both to a better understanding of medical phenomena and to finding
practical ways of action.
Time series modeling for predictive purposes has been an active research area
of machine learning for many years [Fatoumata Dama and Christine Sinoquet
(2019)]. However, no sufficiently comprehensive and substantive survey had been
offered before; that survey strives to meet this need, adopting a unified
presentation for all parts of the compilation. Time series data are amongst the
most ubiquitous data types that capture information and record activity in most
areas. In any domain involving temporal measurements via sensors, censuses, or
transaction records, capturing a sequence of observations indexed by time
stamps first provides insights into the past evolution of some measurable
quantity. Beyond this goal, the pervasiveness of time series has generated an
increasing demand for performing various tasks on time series data
(visualization, discovery of recurrent patterns, correlation discovery,
classification, clustering, outlier detection, segmentation, forecasting, data
simulation).
Time series classification (TSC) is one of data mining’s persistent challenges.
Applications of TSC abound in fields including agriculture, medicine, and
engine prognostics [Abdoli, A., Murillo, A. C., Yeh, C. C. M., Gerry, A. C.,
and Keogh, E. J. (2018, December)],and [Yu, W., Kim, I. Y., Mechefske, C.
(2021)] a common goal being to detect instances of sub optimal behavior or
decreased health (biologically or mechanically) as just one real-world
example. Dozens of new TSC algorithms were introduced in the last four years
alone [Bagnall, A., Lines, J., Bostrom, A., Large, J., Keogh, E.(2017)]. This
trend has been intensified by the increasing availability of real-world
datasets. In fact, the classification of any inherently ordered data
(temporally or otherwise) can be treated as a TSC problem [Gamboa, J. C. B.
(2017)] , making for a vast breadth of real-world applications. Deep learning
methods have shown suitability for time series classification in the health
and medical domain, with promising results for electrocardiogram data
classification. Successful identification of myocardial infarction holds
life-saving potential, and any meaningful improvement upon deep learning models
in this area is of great interest. In this section we show that the discussed
dynamics, namely (8), presents new classes of heartbeat: a new dynamical model
for generating synthetic electrocardiogram signals. The
surface ECG is obtained by recording the potential difference between two
electrodes placed on the surface of the skin. A single normal cycle of the ECG
represents the successive atrial depolarization/repolarization and ventricular
depolarization/repolarization which occurs with every heartbeat. These can be
approximately associated with the peaks and troughs of the ECG waveform
labeled $P,Q,R,S$, and $T$ as shown in figure 9
Figure 9: Morphology of a mean PQRST-complex of an ECG recorded from a normal
human
Extracting useful clinical information from the real (noisy) ECG requires
reliable signal processing techniques [AL Goldberger,E Goldberger (1977)].
These include R-peak detection [Jiapu Pan; Willis J. Tompkins (1985)], [C. Li,
C. Zheng, and C. Tai(1995)], QT-interval detection and the derivation of heart
rate and respiration rate from the ECG [P.E. McSharry , G.D. Clifford, L.
Tarassenko, L.A. Smith (2003)]. The RR-interval is the time between successive
R-peaks; the inverse of this time interval gives the instantaneous heart rate.
A series of RR-intervals is known as an RR tachogram, and the variability of
these RR-intervals reveals important information about the underlying
physiological state. The ECG may be divided into the following sections.
* •
P-wave: A small low-voltage deflection away from the baseline caused by the
depolarization of the atria prior to atrial contraction as the activation
(depolarization) wavefront propagates from the SA node through the atria.
* •
PQ-interval: The time between the beginning of atrial depolarization and the
beginning of ventricular depolarization.
* •
QRS-complex: The largest-amplitude portion of the ECG, caused by currents
generated when the ventricles depolarize prior to their contraction. Although
atrial repolarization occurs before ventricular depolarization, the latter
waveform (i.e. the QRS-complex) is of much greater amplitude and atrial
repolarization is therefore not seen on the ECG.
* •
QT-interval: The time between the onset of ventricular depolarization and the
end of ventricular repolarization. Clinical studies have demonstrated that the
QT-interval increases linearly as the RR-interval increases. A prolonged
QT-interval may be associated with delayed ventricular repolarization, which
may cause ventricular tachyarrhythmias leading to sudden cardiac death.
* •
ST-interval: The time between the end of S-wave and the beginning of T-wave.
Significantly elevated or depressed amplitudes away from the baseline are
often associated with cardiac illness.
* •
T-wave: Ventricular repolarization, whereby the cardiac muscle is prepared for
the next cycle of the ECG.
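The RR tachogram and instantaneous heart rate described above can be computed in a few lines; a minimal sketch (the R-peak times below are hypothetical):

```python
import numpy as np

def rr_tachogram(r_peak_times):
    """RR-intervals (s) and the instantaneous heart rate (beats/min)
    derived from a sequence of R-peak times, as described above."""
    rr = np.diff(np.asarray(r_peak_times, dtype=float))
    hr = 60.0 / rr          # inverse RR-interval, in beats per minute
    return rr, hr

# Four R peaks of a hypothetical recording (times in seconds):
rr, hr = rr_tachogram([0.0, 0.8, 1.6, 2.5])
```

The variability of the resulting `rr` series is exactly what HRV analysis, discussed next, operates on.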
Analysis of variations in the instantaneous heart rate time series using the
beat-to-beat RR-intervals (the RR tachogram) is known as HRV analysis [M Malik
, AJ camm(1995)], [Task force of the european society of cardiology(1996)].
HRV analysis has been shown to provide an assessment of cardiovascular disease
[M H Crawford,S Bernstein and P Deedwania(1999)]. A dynamical model based on
three coupled ordinary differential equations is introduced which is capable
of generating realistic synthetic electrocardiogram (ECG) signals. The
dynamical equations of motion are given by a set of three ordinary
differential equations defined as follows:
$\begin{cases}\dot{x}=\alpha x-\omega y\\\ \dot{y}=\alpha y+\omega x\\\
\dot{z}=-\sum_{i\in\\{P,Q,R,S,T\\}}a_{i}\,\Delta\theta_{i}\exp\left(-\frac{\Delta\theta_{i}^{2}}{2b_{i}^{2}}\right)-(z-z_{0})\end{cases}$
(9)
where $\alpha=1-\sqrt{x^{2}+y^{2}}$,
$\Delta\theta_{i}=(\theta-\theta_{i})\bmod 2\pi$, and
$\theta=\operatorname{atan2}(y,x)$ (the four-quadrant arctangent of $y$ and
$x$, with $-\pi\leq\operatorname{atan2}(y,x)\leq\pi$); $a_{i}$, $b_{i}$, and
$\theta_{i}$ are the amplitude, width, and angular position of the Gaussian
bump associated with event $i$, and $\omega$ is the angular velocity of the
trajectory as it moves around the limit cycle. Baseline wander was introduced
by coupling the baseline value $z_{0}$ in (9) to the respiratory frequency
$f_{2}$ using $z_{0}(t)=A\sin(2\pi f_{2}t)$, where $A=0.15\,\mathrm{mV}$.
Figure 10: ECG generated by dynamical model: (a) 10 s and (b) 50 s Figure 11:
Comparison between (a) synthetic ECG with additive normally distributed
measurement errors and (b) real ECG signal from a normal human.
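The model (9) can be integrated numerically to produce a synthetic trace like the one in Figure 10. A minimal sketch with SciPy follows; the numerical values of the event parameters $a_i$, $b_i$, $\theta_i$ are taken from the McSharry et al. (2003) paper and are assumptions here, since the text does not list them.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Angular positions, amplitudes and widths of the P, Q, R, S, T events.
# These numeric values follow McSharry et al. (2003); they are quoted
# here as assumptions, since the text above does not list them.
theta_i = np.array([-np.pi/3.0, -np.pi/12.0, 0.0, np.pi/12.0, np.pi/2.0])
a_i = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
b_i = np.array([0.25, 0.1, 0.1, 0.1, 0.4])

def ecg_rhs(t, state, omega=2.0*np.pi, A=0.15, f2=0.25):
    """Right-hand side of the three coupled ODEs in (9), including the
    respiratory baseline wander z0(t) = A*sin(2*pi*f2*t)."""
    x, y, z = state
    alpha = 1.0 - np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    # Wrap theta - theta_i into [-pi, pi) so each Gaussian bump is
    # centred on its event angle.
    dtheta = (theta - theta_i + np.pi) % (2.0*np.pi) - np.pi
    z0 = A*np.sin(2.0*np.pi*f2*t)
    dz = -np.sum(a_i*dtheta*np.exp(-dtheta**2/(2.0*b_i**2))) - (z - z0)
    return [alpha*x - omega*y, alpha*y + omega*x, dz]

# Ten seconds of synthetic ECG at one beat per second (omega = 2*pi).
sol = solve_ivp(ecg_rhs, (0.0, 10.0), [1.0, 0.0, 0.04],
                t_eval=np.linspace(0.0, 10.0, 2560), max_step=0.005)
z_ecg = sol.y[2]   # the synthetic ECG trace
```

Plotting `z_ecg` against `sol.t` yields a PQRST-like waveform; adding zero-mean Gaussian noise of standard deviation 0.025 mV gives a signal comparable to the noisy trace of Figure 11.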
## 6 Introduction to Electrocardiography
An electrocardiogram is a recording of the electrical activity of the heart.
The heart can be viewed as a three-dimensional vector; therefore, its
electrical activity can, in theory, be recorded by three orthogonal leads. In
practice, however, a standard clinical EKG is recorded with 12 leads: 6 limb
leads and 6 precordial leads. A normal EKG reflects the electrical activity of
each of the four heart chambers: left and right ventricles and left and right
atria (Figure 12).
Figure 12: A normal EKG reflects the electrical activity of each of the four
heart chambers: left and right ventricles and left and right atria
The P wave marks the beginning of atrial depolarization. The onset of the Q
wave is produced by the stimulus from the Purkinje system. The depolarization
of the ventricle produces the R wave. The S wave is initiated when the
excitation reaches the base of the ventricle. The T wave is produced by
ventricular repolarization (Tompkins 1993; Goldman 1992). The synthetic ECG
(Figure 10) illustrates the modulation of the QRS-complex due to RSA and Mayer
waves. Observational uncertainty is incorporated by adding normally
distributed measurement errors with mean zero and standard deviation
$0.025\,\mathrm{mV}$ (Figure 11), yielding a signal similar to a segment of
real ECG from a normal human (Figure 11). Electrocardiogram (ECG) recording is
currently the most effective and direct way to detect the heart's electrical
signals [Z. Tang, G. Zhao, and T. Ouyang(2021)]. At present, the diagnosis of
cardiac diseases is mainly determined by medical doctors and clinicians
through manual detection and ECG analysis. ECG is a diagnostic technology that
records the electrocardiographic activity of the heart over a certain time
unit through the chest of biological objects. It collects and records signals
from electrodes connected to the skin of specific parts of biological objects
and preserves the relevant contents in a certain form [D. A. Winter, P. M.
Rautaharju, and H. K. Wolf(1997)].
The pre-ejection-period (PEP) is the time span between the depolarization of
the left ventricle (R onset) and opening of the aortic valve (B point). The R
onset is the beginning of the Q-wave signal; it indicates the beginning of the
depolarization and can be picked up from the ECG signal. As signature for the
beginning of the Q wave, we take the minimum of the ECG’s second derivative.
Its peak indicates the maximum curvature at the transition of the ECG signal
into the Q wave. However, as the Q wave is relatively small, other signatures
in the ECG signal can be misinterpreted as the R onset in an automated
evaluation of the data. In general, first and higher-order derivatives of
noisy signals suffer from containing spurious peaks. We therefore restrict the
possible occurrences of R onset to a time window after the, easily
identifiable, R peak (peak of the QRS complex). Within that window, the R
onset is typically seen as a clear negative peak of the ECG signal’s second
derivative, which can be located reliably and with high precision and thus
allows a reliable identification of the Q wave onset. Heart rate (HR) is then
calculated from the time difference of subsequent R points.[M. Thomas, M. K.
Das, and S. Ari(2015)]
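The windowed second-derivative search described above can be sketched as follows. This is an illustrative implementation using SciPy; the smoothing window, search-window length, and peak threshold are assumptions, since the text does not fix them.

```python
import numpy as np
from scipy.signal import find_peaks, savgol_filter

def detect_r_onsets(ecg, fs):
    """Locate R peaks, then take the R onset (beginning of the Q wave)
    as the minimum of the ECG's second derivative inside a short search
    window adjacent to each R peak, as described in the text.  The 50 ms
    smoothing window, 60 ms search window and the 50%-of-maximum peak
    threshold are illustrative assumptions."""
    win = max(5, int(0.05 * fs) | 1)          # odd Savitzky-Golay window
    d2 = savgol_filter(ecg, window_length=win, polyorder=3, deriv=2)
    r_peaks, _ = find_peaks(ecg, height=0.5 * np.max(ecg),
                            distance=int(0.4 * fs))  # beats >= 0.4 s apart
    w = int(0.06 * fs)                        # 60 ms search window
    onsets = np.array([max(0, p - w) + int(np.argmin(d2[max(0, p - w):p]))
                       for p in r_peaks if p > 0])
    return r_peaks, onsets

# Instantaneous heart rate (beats/min) from successive R peaks:
# hr = 60.0 * fs / np.diff(r_peaks)
```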
The time point for the opening of the aortic valve (B point) is derived from
the impedance cardiogram (ICG). The impedance Z, and thus the ICG, is
sensitive to a variation of blood volume in the thorax. The first derivative,
$\frac{dZ}{dt}$, corresponds to blood flow. The second derivative
$\frac{d^{2}Z}{dt^{2}}$, in turn, corresponds to a change of the blood flow
and is thus indicative for the opening of the heart valves. The B point is the
onset of the aortic valve’s opening, indicated by a negative peak in the third
derivative, $\frac{d^{3}Z}{dt^{3}}$. While, compared to ECG, the ICG signal is
smooth and devoid of characteristic spikes, its first, second, and third
derivative show distinct features. As selection criterion for picking the
correct peak of the third derivative, we use the, easily identifiable, peak of
the first derivative, $\frac{dZ}{dt}$. The B point is obtained as the minimum
of $\frac{d^{3}Z}{dt^{3}}$ that occurs just before the maximum in
$\frac{dZ}{dt}$. This strategy allows for an automated evaluation of the PEP
interval for the large data sets, with few outliers and the required
precision. To calculate the derivatives of the measured signals, we use the
Savitzky-Golay filter [Jianwen Luo, Kui Ying, Lijing Bai (2005)]. This
method allows data smoothing, while keeping intact signatures like peaks, and
the simultaneous determination of derivatives. Similar to a moving average, a
moving section of the data is selected. However, instead of a simple
averaging, the algorithm fits a polynomial of given degree to the selected
sequence. Then, one point of the fitted polynomial (usually the central point)
is taken as value for the smoothed curve. Higher derivatives are taken from
the corresponding derivatives of the fitted polynomial at the respective
point. The Savitzky-Golay filter is implemented numerically by a list
convolution with a kernel. That kernel is calculated in advance for the number
of points for the moving fitting, the order of the polynomial, and the order
of the derivative. We use a kernel length of 100 points, corresponding to a
time interval of 50 ms, and a 3rd-order polynomial for all kernels and for the
ICG and ECG signals. The third derivative of the ICG signal is calculated from
the first derivative of the ICG signal, which, together with Z, is provided by
the Biopac MP36 system (i.e., the system we used to measure ICG/ECG). We
ensure that no time lag gets introduced between the ICG and ECG signals and
their derivatives by the Savitzky-Golay filter. Thus, PEP and LVET data get
extracted from the ICG and ECG measurements in a semi-automated way and with a
by-heartbeat resolution. The Mathematica Notebook output is stored in a text
file, with every row containing a timestamp, the corresponding length of
cardiac PEP, LVET, and HR for each heartbeat. Here is the Graphical display of
$Z,\frac{dZ}{dt}$,EKG
Figure 13: Graphical display of $Z,\frac{dZ}{dt}$ ,EKG , Istart=1000,Iend
=16000
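The B-point and PEP extraction described above can be sketched in a few lines. A minimal illustration assuming SciPy; the text's 100-point kernel is rounded up to the odd length 101 that `savgol_filter` requires, and all other numeric choices in the test signal are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter

def pep_from_icg(dz, fs, r_onset):
    """Sketch of the PEP extraction described above.  `dz` is the first
    derivative dZ/dt of the impedance signal (as provided by the
    acquisition system); the B point is taken as the minimum of
    d^3Z/dt^3 just before the maximum of dZ/dt, and
    PEP = t(B point) - t(R onset).  The 101-point kernel (the text's
    100-point kernel rounded to an odd length) and the 3rd-order
    polynomial follow the text."""
    # Differentiating dZ/dt twice with Savitzky-Golay gives d^3Z/dt^3.
    d3 = savgol_filter(dz, window_length=101, polyorder=3,
                       deriv=2, delta=1.0 / fs)
    i_max = int(np.argmax(dz))              # easily identifiable dZ/dt peak
    b_point = int(np.argmin(d3[:i_max]))    # negative d3 peak just before it
    return (b_point - r_onset) / fs         # PEP in seconds
```

Because the Savitzky-Golay kernel smooths and differentiates in a single convolution, no extra low-pass step is needed and no time lag is introduced between the signals and their derivatives.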
## 7 Medical interpretation of our discrete dynamics
To analyze an ECG signal, the most essential step is to extract its QRS wave
group. The QRS complex reflects the changes in depolarization potential and
timing of the left and right ventricles. Regarding robustness and stability,
analysis of ECG signals using delay differential equations (DDEs) is
computationally fast (one can refer to the system of ODEs in (9)) and could be
the basis for a real-time diagnostic system. DDEs reveal nonlinear properties
as well as spectral properties of the data, as shown by Claudia Lainscsek and
Terrence J. Sejnowski [Claudia Lainscsek and Terrence J. Sejnowski(2013)], who
analyzed ECG signals using delay differential equations and obtained good
classification of electrocardiograms; that classification might not work as
well using a discrete map. Some authors have discussed electrocardiogram
signal classification in the diagnosis of heart disease based on an RBF neural
network [Yan Fang , Jianshe Shi, Yifeng Huang, Taisheng Zeng,Yuguang Ye ,
Lianta Su, Daxin Zhu,and Jianlong Huang(2022)], extracting the QRS wave using
a discrete map (difference equations). Recall that in this paper we are
interested in the behavior of the following dynamics:
$x_{n+1}=ax^{2}_{n}-bx^{2}_{n-1},\quad n=0,1,\ldots$
where $a,b$ are two real parameters. It is known that the ECG signal has a
very obvious periodicity. Taking $T$ as the sampling period, we may rewrite
our dynamics using $T$ as:
$x((n+1)T)=ax^{2}(nT)-bx^{2}((n-1)T),\quad n=0,1,\ldots$ (10)
The high-frequency characteristics can be enhanced by a nonlinear squaring
function [Yan Fang , Jianshe Shi, Yifeng Huang, Taisheng Zeng,Yuguang Ye ,
Lianta Su, Daxin Zhu,and Jianlong Huang(2022)] (see page 3, equation 4
therein), whose equation can be expressed as:
$y(nT)=x^{2}(nT)$ (11)
This suggests that our discrete dynamics defined in (10) can be interpreted,
from a medical point of view, as an enhancement of high frequencies in the
short-time behavior of the heartbeat. We may consider the discrete dynamics
(10) as a discretized boundary value problem (a delay differential equation)
obtained through a standard numerical method such as the one-dimensional
finite difference method, and thereby give a short analysis of the ECG signal
produced by the dynamics defined in (10). Assume the corresponding nonlinear
boundary value problem satisfies $x_{1}=0$ and $x_{n+1}=1$, and apply the
finite difference technique [R. P. AGARWAL and Y. M. CHOW(1985)]. The first
step is to partition the domain $[0,1]$ into a number of sub-domains or
intervals of length $h$; if the number of intervals is equal to $n$, then
$nh=1$. We denote by $x_{i}$ the interval end points or nodes; in general, we
have $x_{i}=(i-1)h$, $i=1,2,\cdots,n+1$, and we denote the value at the $i$th
node by $C_{i}$. For a short enough time (say between $t_{i}$ and $t_{i+1}$),
the dynamics (10) reduce to a simple linear differential equation: as $h\to
0$, the condition $\dot{x}\leq 0$ implies decreasing frequencies settling
toward a constant signal $x(t)$; in that case the dynamics (10) can be
interpreted as heart attack or heart failure, which in general indicates
cardiac disease (see the figure in Example 1). Enhanced high frequencies
appear whenever $\dot{x}>0$, that is, when the derivative of the analyzed
signal is always positive (one can refer to the figures in Examples 2-4). It
is hard to analyze an ECG signal using the coupled system which is defined in
(12).
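The dynamics (10) and the squaring enhancement (11) can be iterated directly; a minimal sketch, in which the parameter values and the two initial conditions are illustrative assumptions, not taken from the text:

```python
def iterate_dynamics(a, b, x_prev, x_curr, n_steps):
    """Iterate x_{n+1} = a*x_n**2 - b*x_{n-1}**2 (Eq. (10)) and also
    return the squared, high-frequency-enhanced sequence y_n = x_n**2
    (Eq. (11)).  The parameter and initial values used below are
    illustrative assumptions."""
    xs = [x_prev, x_curr]
    for _ in range(n_steps):
        xs.append(a * xs[-1]**2 - b * xs[-2]**2)
    ys = [x * x for x in xs]
    return xs, ys

xs, ys = iterate_dynamics(a=0.15, b=0.45, x_prev=0.07, x_curr=0.08,
                          n_steps=100)
```

With these small parameter values the trajectory decays toward the constant signal discussed above; larger values of $a$ and $b$ produce the enhanced high-frequency behavior.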
We have plotted many figures for various values of $a$ and $b$. For the
medical interpretation, we call $a$ a factor of life (electrocardiogram, EKG);
identification of that factor (EKG) represents an attempt to improve patient
survival, and $b$ may represent blood loss, which causes cardiac arrest. We
may also introduce a new parameter $\sigma$ as a toxin factor, for a better
modelization of the discussed phenomena. Now, let us try to show the heartbeat
classes. The dynamics (8) for $\alpha=2$ can be written as:
$\begin{cases}x(t+1)=ax(t)^{2}+\sigma y(t)+1\\\
y(t+1)=bx(t)^{2}-1\end{cases}$ (12)
We have noticed that, to obtain the heartbeat-class plots, we should have
$a\leq 0.15$; we may then vary $b$ and $\sigma<0$ (it should be negative),
with initial values $x_{0}=0.07$, $y_{0}=0.08$. Increasing the value of
$\sigma$ means an attempt to reduce the toxin factor, and increasing the value
of $b$ means raising the EKG factor, hence a good attempt to improve patient
survival; under the reverse assumption we would record cardiac arrest.
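The heartbeat-class trajectories of system (12) can be reproduced by iterating the coupled map directly; a minimal sketch using the Example 1 parameters:

```python
def simulate_heartbeat_class(a, sigma, b, x0, y0, n_steps):
    """Iterate the coupled map (12):
         x(t+1) = a*x(t)**2 + sigma*y(t) + 1
         y(t+1) = b*x(t)**2 - 1
    and return both trajectories; plotting them against t reproduces
    the heartbeat-class figures."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        xs.append(a * x**2 + sigma * y + 1.0)
        ys.append(b * x**2 - 1.0)
    return xs, ys

# Example 1 parameters from the text:
xs, ys = simulate_heartbeat_class(a=0.15, sigma=-0.45, b=0.45,
                                  x0=0.07, y0=0.08, n_steps=200)
```

For these parameter values the orbit remains bounded and oscillates, which is what gives the plotted trajectories their heartbeat-like appearance.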
Example 1
We start with the first heartbeat plot: $a=0.15,\sigma=-0.45,b=0.45$.
Figure 14: heartbeat class for
$a=0.15,\sigma=-0.45,b=0.45,x_{0}=0.07,y_{0}=0.08$
Example 2
Let $a=0.15,\sigma=-0.6,b=0.5$.
Figure 15: heartbeat case for $a=0.15,\sigma=-0.6,b=0.5,x_{0}=0.07,y_{0}=0.08$
Example 3
Let $a=0.15,\sigma=-0.65,b=0.58$.
Figure 16: heartbeat case for
$a=0.15,\sigma=-0.65,b=0.58,x_{0}=0.07,y_{0}=0.08$
Example 4
Let $a=0.15,\sigma=-0.75,b=0.6$.
Figure 17: heartbeat case for
$a=0.15,\sigma=-0.75,b=0.6,x_{0}=0.07,y_{0}=0.08$
## 8 Conclusion
This study of difference equations with public health applications develops
the methodology for the solution of the general $k$th-order linear difference
equation using the generating function approach and computer tools such as
Mathematica. It includes an examination of the dynamics of disease spread and
containment in populations using illness-death models. Cardiovascular disease
is one of the major hazards to human health today. ECG stands for
electrocardiogram, and it is an important tool for the clinical diagnosis of
cardiovascular disease. The ECG refers to the small voltages
($\sim 1\,\mathrm{mV}$) found on the skin as a result of the electrical
activity of the heart. These electrical actions trigger various electrical and
muscular activities in the heart. The health and function of the heart can be
assessed from the shape of the ECG waveform; typical heart problems are
leaking valves and blocked coronary arteries. In this paper we have discussed
a new discrete dynamics which leads to the discovery of a new illness-death
model (an enhanced-high-frequencies model) using time series diagrams and ECG
analyses, with data defined by iterative discrete dynamics such as the
heartbeat classes; in particular, it represents an attempt to improve patient
survival, and we may call it a model for prolonging life.
## 9 Data Availability
The data that support the findings of this study can be obtained from the
corresponding author upon reasonable request.
## 10 Conflicts of Interest
The authors declare that they have no conflicts of interest.
## 11 Authors’ Contributions
All authors contributed to the study conception and design of this research.
Material preparation, data collection, analysis and medical interpretation
were performed by Zeraoulia Rafik. The first draft of the manuscript was
written by Zeraoulia Rafik, and all authors commented on previous versions of
the manuscript. All authors read and approved the revised manuscript.
## References
* [Daniel J. Duffy(2006)] Daniel J. Duffy, 2006.Finite Difference Methods in Financial Engineering: A Partial Differential Equation Approach, ISBN:=9780470858820, 9780470858820
* [Y. Ordokhani, S. Davaei far (2011)] Y. Ordokhani, S. Davaei far, 2011. Approximate Solutions of Differential Equations by Using the Bernstein Polynomials. DOI: 10.5402/2011/787694
* [Josef Diblik ,Miroslava Ruzickova , Barbora Vaclavikova (2008)] Josef Diblik ,Miroslava Ruzickova , Barbora Vaclavikova, 2008. Bounded Solutions of Dynamic Equations on Time Scales,International Journal of Difference Equations (IJDE). ,ISSN 0973-6069 Volume 3 Number 1 (2008), pp. 61–69
* [R. P. Agarwal (1992)] R. P. Agarwal, Difference Equations and Inequalities, 1992. First edition, Marcel Dekker
* [E. Camouzis and G. Ladas (2008)] E. Camouzis and G. Ladas, Dynamics of Third-Order Rational Difference Equations: With Open Problems and Conjectures, 2008. Chapman and Hall/CRC, Boca Raton
* [E.A.Grove and G. Ladas(2005)] E.A. Grove and G. Ladas, Periodicities in Nonlinear Difference Equations, 2005. Chapman and Hall/CRC
* [R. Abo-Zeid (2017)] R. Abo-Zeid, On the solutions of a second order difference equation, 2017. Mathematica Moravica, Vol. 21, No. 2, pp. 61-73
* [S.Elaydi(2005)] S.Elaydi, An Introduction to Difference Equations,2005. Third Edition, Springer, New York.
* [Fatoumata Dama and Christine Sinoquet (2019)] Fatoumata Dama and Christine Sinoquet,2019. Time Series Analysis and Modeling to Forecast: a Survey .LS2N / UMR CNRS 6004, Nantes University, France
* [Abdoli, A., Murillo, A. C., Yeh, C. C. M., Gerry, A. C., and Keogh, E. J. (2018, December)] Abdoli, A., Murillo, A. C., Yeh, C. C. M., Gerry, A. C., Keogh, E. J. 2018. Time series classification to improve poultry welfare. In 2018 17TH IEEE International conference on machine learning and applications (ICMLA) (pp. 635, 642). IEEE
* [Yu, W., Kim, I. Y., Mechefske, C. (2021)] Yu, W., Kim, I. Y., Mechefske, C. ,2021. Analysis of different RNN autoencoder variants for time series classification and machine prognostics. Mechanical Systems and Signal Processing, 149, 107322
* [Bagnall, A., Lines, J., Bostrom, A., Large, J., Keogh, E.(2017)] Bagnall, A., Lines, J., Bostrom, A., Large, J., Keogh, E. ,2017. The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data mining and knowledge discovery, 31(3), 606-660
* [Gamboa, J. C. B. (2017)] Gamboa, J. C. B. 2017. Deep learning for time series analysis.
* [A. Q. Khan,S. M. Qureshi (2020)] A. Q. Khan, S. M. Qureshi (2020). Dynamical properties of some rational systems of difference equations. Mathematical Methods in the Applied Sciences
* [José M. Amigó , Michael Small (2017)] José M. Amigó, Michael Small (2017). Mathematical methods in medicine: neuroscience, cardiology and pathology. National Library of Medicine
* [AL Goldberger,E Goldberger (1977)] AL Goldberger, E Goldberger (1977). Clinical Electrocardiography. St. Louis, MO
* [Jiapu Pan; Willis J. Tompkins (1985)] Jiapu Pan; Willis J. Tompkins (1985).A Real-Time QRS Detection Algorithm
* [P.E. McSharry , G.D. Clifford, L. Tarassenko, L.A. Smith (2003)] P.E. McSharry , G.D. Clifford, L. Tarassenko, L.A. Smith (2003).A dynamical model for generating synthetic electrocardiogram signals .IEEE Transactions on Biomedical Engineering ( Volume: 50, Issue: 3)
* [M Malik , AJ camm(1995)] M Malik, AJ Camm (1995). Heart Rate Variability. Armonk, NY: Futura
* [Task force of the european society of cardiology(1996)] Task force of the european society of cardiology(1996). Heart rate variability :standard of measurement physiological interpretation and clinical use the north americain society of pacing and electrophysiology,sophia antipolis,France
* [M H Crawford,S Bernstein and P Deedwania(1999)] M H Crawford,S Bernstein and P Deedwania(1999). ACC AHA , guidlines for ambulatory ,electrocardiography.circulation vol 100, pp 886-893
* [Jianwen Luo, Kui Ying, Lijing Bai (2005)] Jianwen Luo, Kui Ying, Lijing Bai (2005). Savitzky–Golay smoothing and differentiation filter for even number data
* [C. Li, C. Zheng, and C. Tai(1995)] C. Li, C. Zheng, and C. Tai(1995).Detection of ECG characteristic points using wavelet transforms. IEEE Transactions on Biomedical Engineering, vol. 42, no. 1, pp. 21–28
* [Z. Tang, G. Zhao, and T. Ouyang(2021)] Z. Tang, G. Zhao, and T. Ouyang(2021).Two-phase deep learning model for short-term wind direction forecasting,Renewable Energy, vol. 173, pp. 1005–1016
* [D. A. Winter, P. M. Rautaharju, and H. K. Wolf(1997)] D. A. Winter, P. M. Rautaharju, and H. K. Wolf. Measurement and characteristics of over-all noise content in exercise electrocardiograms. American Heart Journal, vol. 74, no. 3, pp. 324–331, 1967
* [M. Thomas, M. K. Das, and S. Ari(2015)] M. Thomas, M. K. Das, and S. Ari(2015). Automatic ECG arrhythmia classification using dual tree complex wavelet based features, AEU - International Journal of Electronics and Communications, vol. 69, no. 4, pp. 715–721, 2015
* [Yan Fang , Jianshe Shi, Yifeng Huang, Taisheng Zeng,Yuguang Ye , Lianta Su, Daxin Zhu,and Jianlong Huang(2022)] Yan Fang , Jianshe Shi, Yifeng Huang, Taisheng Zeng,Yuguang Ye , Lianta Su, Daxin Zhu,and Jianlong Huang(2022).Electrocardiogram Signal Classification in the Diagnosis of Heart Disease Based on RBF Neural Network .Hindawi Computational and Mathematical Methods in Medicine Volume 2022, Article ID 9251225, 9 pages
* [Claudia Lainscsek and Terrence J. Sejnowski(2013)] Claudia Lainscsek and Terrence J. Sejnowski(2013).Electrocardiogram classification using delay differential equations doi: 10.1063/1.4811544
* [R. P. AGARWAL and Y. M. CHOW(1985)] R. P. AGARWAL and Y. M. CHOW(1985). FINITE-DIFFERENCE METHODS FOR BOUNDARY-VALUE PROBLEMS OF DIFFERENTIAL EQUATIONS WITH DEVIATING ARGUMENTS
# Fully Distributed Continuous-Time Algorithm for Nonconvex Optimization over
Unbalanced Directed Networks
Jin Zhang, Yahui Hao, Lu Liu, and Haibo Ji J. Zhang is with the Department of
Automation, University of Science and Technology of China, Hefei 230027,
China, and also with the Department of Biomedical Engineering, City University
of Hong Kong, Hong Kong (e-mail: zj55555@mail.ustc.edu.cn). Y. Hao and L. Liu
are with the Department of Biomedical Engineering, City University of Hong
Kong, Hong Kong (e-mail<EMAIL_ADDRESS>luliu45@cityu.edu.hk).
H. Ji is with the Department of Automation, University of Science and
Technology of China, Hefei 230027, China (e-mail: jihb@ustc.edu.cn).
###### Abstract
This paper investigates the distributed continuous-time nonconvex optimization
problem over unbalanced directed networks. The objective is to cooperatively
drive all the agent states to an optimal solution that minimizes the sum of
the local cost functions. Based on the topology balancing technique and
adaptive control approach, a novel fully distributed algorithm is developed
for each agent with neither prior global information concerning network
connectivity nor convexity of local cost functions. By viewing the proposed
algorithm as a perturbed system, its input-to-state stability with a vanishing
perturbation is first established, and asymptotic convergence of the decision
variables toward the optimal solution is then proved under the relaxed
condition. A key feature of the algorithm design is that it removes the
dependence on the smallest strong convexity constant of local cost functions,
and the left eigenvector corresponding to the zero eigenvalue of the Laplacian
matrix of unbalanced directed topologies. The effectiveness of the proposed
fully distributed algorithm is illustrated with two examples.
###### Index Terms:
Fully distributed, nonconvex optimization, continuous-time optimization,
unbalanced directed networks, adaptive control.
## I Introduction
The distributed optimization problem (DOP) has experienced significant advances in
the past decade because of its great potential in a wide range of
applications. Typical examples of application include resource allocation,
sensor networks, and power systems [1, 2]. In the typical DOP of large-scale
networks, each agent is often endowed with an individual local cost function.
Seminal works on this topic primarily focus on discrete-time cases with convex
local cost functions, which can be traced back to [3, 4]. For more recent
developments on distributed discrete-time convex optimization, one may refer
to [5, 6, 7] and references therein.
In parallel, the distributed continuous-time convex optimization problem has
also been extensively studied (see, for example, [8, 9, 10, 11, 12]) because
many practical systems (e.g., unmanned vehicles and robots) operate in a
continuous-time setting [13]. A pioneering distributed gradient-based control
approach was developed in [8] to address the DOP of multi-agent systems with
single integrator dynamics over undirected graphs. In the subsequent work [9],
the restriction on the communication network topologies was relaxed to
balanced digraphs. By virtue of the proportional-integral control approach, a
modified distributed Lagrangian-based algorithm was then proposed in [10] at
the cost of special initialization so that communication and computation can
be reduced. Other works that involve distributed continuous-time convex
optimization, over either undirected or balanced directed networks, can be
found in [14, 15] and references therein.
A central and standard assumption for the analysis of gradient-based
minimization method in convex optimization is the convexity of the
corresponding local cost functions [16, 10], which is exploited to not only
refrain from the existence of local optima but also facilitate convergence
analysis [17]. However, it cannot be satisfied in a broad class of practical
applications such as sparse approximations of images, matrix factorization,
and compressed sensing [18]. In fact, cost functions in engineering practice
are often nonconvex. More recently, there has been an increasing number of
studies on distributed discrete-time nonconvex optimization, provided that the
local cost functions satisfy additional but relaxed conditions [19, 20, 21],
such as the
Polyak-Łojasiewicz (P-Ł) condition [22], the $\rho$-weakly convex condition
[23], the $\mu$-gradient dominated condition [24], and the second-order
sufficiency condition [25], among others. On the contrary, few works
focus on developing continuous-time algorithms for distributed nonconvex
optimization problems. Based on the canonical duality theory, a continuous-
time algorithm was proposed in [26] for a class of nonconvex optimization, but
it can only be applied when undirected networks are considered.
The above-mentioned works, whether studying convex optimization or nonconvex
optimization, assume that the concerned network topologies are undirected or
balanced directed. It is of much more theoretical and practical significance
to study unbalanced directed topologies as the information exchange between
neighboring agents may be unidirectional due to limited bandwidth or other
physical constraints. In the discrete-time case, by employing a row or a
column stochastic matrix, several consensus-based strategies were proposed in
[5, 6] to tackle unbalanced digraphs in distributed convex optimization
problem. These strategies were then extended to address the more challenging
scenario of distributed nonconvex optimization over unbalanced digraphs [20,
23]. However, the agents in those works were required to know their out-degree
[5, 20], which is a form of global information.
In the continuous-time case, a new distributed control strategy was developed
based on the topology balancing technique to tackle the distributed convex
optimization over unbalanced directed networks [27]. Nevertheless, it cannot
be adopted when the left eigenvector corresponding to the zero eigenvalue of
the concerned Laplacian matrix is not available in advance. For the same
problem, a distributed estimator was designed in [28] to remove the explicit
dependence on the left eigenvector, and the gradient term therein was divided
by the state of the distributed estimator. However, the control gains of the
algorithm proposed in [28] still involve certain global information concerning
the network connectivity such as the second smallest eigenvalue of the
Laplacian matrix, and the smallest strong convexity constant of local cost
functions. To the best of our knowledge, no distributed continuous-time
algorithm has been proposed to address the distributed nonconvex optimization
over unbalanced directed networks up till now. To sum up, the existing
algorithms that tackle unbalanced digraphs more or less rely on global
information concerning the network connectivity and/or cost functions, and are
thus not fully distributed.
Motivated by the above observations, this paper aims at developing a fully
distributed continuous-time algorithm to address the distributed nonconvex
optimization over unbalanced directed networks. The main challenge lies in
establishing asymptotic convergence of the agent states in the absence of
symmetric Laplacian matrix and the convexity of local cost functions. A novel
algorithm is developed over unbalanced directed network topologies based on
the topology balancing technique [29, 28] and adaptive control approach [27,
30]. The developed continuous-time algorithm is fully distributed in the sense
that it does not depend on any global information about the network
connectivity or the local cost functions. The main contributions of this paper
are summarized as follows.
1) In contrast with those works addressing the DOP for undirected graphs or
balanced digraphs [8, 9, 10, 14, 15, 31], this work considers unbalanced
digraphs that are more general and also more challenging. Contrary to the
algorithms in [28, 27, 20, 23] that deal with unbalanced digraphs, our
distributed adaptive continuous-time algorithm does not depend on any global
information concerning network connectivity or cost functions, and can be
applied in a fully distributed manner. Specifically, the left eigenvector
corresponding to the zero eigenvalue of the Laplacian matrix, which plays a
crucial role in [27], no longer needs to be known a priori. The algorithm
proposed in this work is thus expected to be adopted in a wider range of
applications.
2) The requirement on the convexity of the local cost functions, which are
exploited to facilitate the convergence analysis in [8, 9, 10, 14, 15, 28], is
removed. Instead, in this work, the local cost functions are allowed to be
nonconvex. Such a relaxed condition greatly broadens the application scope of
distributed optimization.
The layout of this paper is as follows. Section II reviews some preliminaries
on graph theory and convex analysis, and provides the problem formulation and
control objective of this paper. Section III presents the algorithm design and
convergence analysis by means of adaptive control. Section IV provides two
examples to illustrate the effectiveness of the proposed algorithm, which is
followed by the conclusion and future challenge in Section V.
Notation: Let $\mathbb{R}$, $\mathbb{R}^{n}$ and $\mathbb{R}^{N\times N}$ be
the sets of real numbers, $n$-order real vectors and $N$-dimensional real
square matrices, respectively. $I_{n}$ refers to the $n$-dimensional identity
matrix. Let $\mathbf{0}_{n}$ and $\mathbf{1}_{n}$, or simply $\mathbf{0}$ and
$\mathbf{1}$, represent the $n$-dimensional column vector in which all entries
are equal to $0$ and $1$, respectively. $A_{i}$ and $A_{i}^{j}$ denote the
$i$th row elements and the $(i,j)$ entry of matrix $A$, respectively. The
transpose of vector $x$ and matrix $A$ are denoted by $x^{\mathrm{T}}$ and
$A^{\mathrm{T}}$, respectively. $\|\cdot\|$ represents the Euclidean norm of
vectors or induced 2-norm of matrices. The Kronecker product of matrices $A$
and $B$ is represented by $A\otimes B$.
$\operatorname{col}\left(x_{1},x_{2},\ldots,x_{n}\right)$ and
$\operatorname{diag}\left(x_{1},x_{2},\ldots,x_{n}\right)$ represent a column
vector and a diagonal matrix, respectively, with $x_{1},x_{2},\ldots,x_{n}$
being their elements.
## II Preliminaries and Problem Formulation
In this section, we first present some preliminaries on graph theory and
convex analysis, and then formulate the problem.
### II-A Graph Theory
A directed graph (in short, a digraph) of order $N$ can be described by a
triplet $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{A})$, where
$\mathcal{V}=\{1,\ldots,N\}$ is a set of nodes,
$\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is a collection of edges,
and $\mathcal{A}$ is an adjacency matrix. For $i,j\in\mathcal{V}$, the ordered
pair $(j,i)\in\mathcal{E}$ refers to an edge from $j$ to $i$. A directed path
in a digraph is an ordered sequence of nodes in which any two consecutive
nodes form a directed edge. A digraph is said to be strongly connected if, for
each node, there exists a directed path from any other node to itself. The
adjacency matrix is defined as
$\mathcal{A}=\left[a_{ij}\right]\in\mathbb{R}^{N\times N}$, where $a_{ii}=0$
for all $i$, $a_{ij}>0$ if $(j,i)\in\mathcal{E}$, otherwise $a_{ij}=0$. The
Laplacian matrix $\mathcal{L}=\left[l_{ij}\right]\in\mathbb{R}^{N\times N}$
associated with the digraph $\mathcal{G}$ is defined as
$l_{ii}=\sum_{j=1}^{N}a_{ij}$ and $l_{ij}=-a_{ij}$ for $i\neq j$. A digraph
$\mathcal{G}$ is called balanced if and only if
$\mathbf{1}_{N}^{\mathrm{T}}\mathcal{L}=\mathbf{0}_{N}^{\mathrm{T}}$,
otherwise it is called unbalanced. One can consult [29] for more details.
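The construction above is easy to check numerically. The following sketch (assuming NumPy; the 4-node digraph and its weights are our own illustrative choice, not from the paper) builds the Laplacian of an unbalanced digraph and verifies that its row sums vanish while its column sums do not:

```python
import numpy as np

# A strongly connected 4-node digraph with a_ij > 0 iff (j, i) is an edge.
# The weights are illustrative choices, not taken from the paper.
A = np.array([[0.0, 1.0, 0.0, 2.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# Laplacian: l_ii = sum_j a_ij and l_ij = -a_ij for i != j.
L = np.diag(A.sum(axis=1)) - A

# Row sums of any Laplacian vanish: L @ 1_N = 0_N.
assert np.allclose(L @ np.ones(4), 0.0)

# The digraph is balanced iff 1_N^T L = 0_N^T as well; here it is not.
is_balanced = np.allclose(np.ones(4) @ L, 0.0)
print(is_balanced)  # False: this digraph is unbalanced
```

The same check distinguishes the balanced setting of [8, 9, 10, 14, 15, 31] from the unbalanced one studied here.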
###### Lemma 1.
(see [32, 33]) Let $\mathcal{L}$ be the Laplacian matrix associated with a
strongly connected digraph $\mathcal{G}$. Then the following statements hold.
* i)
There exists a left eigenvector
$\xi=(\xi_{1},\xi_{2},\ldots,\xi_{N})^{\mathrm{T}}$ associated with the zero
eigenvalue of the Laplacian matrix such that $\sum_{i=1}^{N}\xi_{i}=1$,
$\xi_{i}>0,~{}i=1,2,\ldots,N$, and
$\xi^{\mathrm{T}}\mathcal{L}=\mathbf{0}_{N}^{\mathrm{T}}$.
* ii)
Define $\bar{\mathcal{L}}=\mathcal{R}\mathcal{L}+\mathcal{L}^{\mathrm{T}}\mathcal{R}$ with
$\mathcal{R}=\mathrm{diag}\left(\xi_{1},\xi_{2},\ldots,\xi_{N}\right)$. Then
$\min_{\varsigma^{\mathrm{T}}x=0,\,x\neq\mathbf{0}}x^{\mathrm{T}}\bar{\mathcal{L}}x>\frac{\lambda_{2}(\bar{\mathcal{L}})}{N}x^{\mathrm{T}}x$
for any positive vector $\varsigma$, where $\lambda_{2}(\bar{\mathcal{L}})$
represents the second smallest eigenvalue of $\bar{\mathcal{L}}$.
* iii)
$e^{-\mathcal{L}t}$ is a nonnegative matrix with positive diagonal entries for
all $t>0$, and
$\lim_{t\to\infty}e^{-\mathcal{L}t}=\mathbf{1}_{N}\xi^{\mathrm{T}}$.
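The statements of Lemma 1 can be verified numerically. The sketch below (assuming NumPy and SciPy; the digraph is our own illustrative choice) computes the left eigenvector $\xi$ for a strongly connected unbalanced digraph and checks items i) and iii):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative strongly connected, unbalanced 4-node digraph.
A = np.array([[0.0, 1.0, 0.0, 2.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A

# i) Left eigenvector of L for the zero eigenvalue: a null vector of L^T,
# normalized so that its entries sum to one (this also fixes the sign).
w, V = np.linalg.eig(L.T)
xi = np.real(V[:, np.argmin(np.abs(w))])
xi = xi / xi.sum()
assert np.all(xi > 0) and np.allclose(xi @ L, 0.0)

# iii) e^{-Lt} -> 1_N xi^T as t grows.
Et = expm(-L * 50.0)
assert np.allclose(Et, np.outer(np.ones(4), xi))
print(xi)  # for this digraph: [0.15 0.45 0.1 0.3]
```

Positivity of $\xi$ reflects the Perron–Frobenius structure of strongly connected digraphs invoked in [32, 33].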
### II-B Convex Analysis
This subsection reviews the definitions of convexity and Lipschitz continuity.
One can refer to [16, 34] for more details.
A subset $\varOmega$ of $\mathbb{R}^{n}$ is called convex if, for all
$\alpha\in[0,1]$,
$\alpha x+(1-\alpha)y\in\varOmega,\quad\forall x,y\in\varOmega.$
A function $f:\varOmega\subset\mathbb{R}^{n}\mapsto\mathbb{R}$ is called
convex if, for all $\alpha\in[0,1]$,
$f(\alpha x+(1-\alpha)y)\leq\alpha f(x)+(1-\alpha)f(y),\quad\forall
x,y\in\varOmega,$ (1)
otherwise it is called nonconvex. A convex function
$f:\varOmega\mapsto\mathbb{R}$ is called strictly convex if, for all
$\alpha\in(0,1)$, the inequality (1) is strict for all $x,y\in\varOmega$ with
$x\neq y$. A convex function $f:\varOmega\mapsto\mathbb{R}$ is called strongly
convex if there exists a positive constant $m$ such that $(x-y)^{T}(\nabla
f(x)-\nabla f(y))\geq m\|x-y\|^{2}$ for all $x,y\in\varOmega$, where $\nabla
f$ denotes the gradient.
A function $g:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}$ is said to be Lipschitz
continuous, or simply Lipschitz, if there exists a constant $l>0$ such that
the following Lipschitz condition is satisfied,
$\|g(x)-g(y)\|\leq l\|x-y\|,\quad\forall x,y\in\mathbb{R}^{n}.$ (2)
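As a concrete illustrative instance of these definitions (our own choice, not from the paper), $f(x)=x^{2}+3\sin x$ is nonconvex, since $f''(x)=2-3\sin x$ changes sign, while its gradient $g(x)=2x+3\cos x$ satisfies the Lipschitz condition (2) with $l=5$, because $|g'(x)|=|2-3\sin x|\leq 5$. A quick numerical check, assuming NumPy:

```python
import numpy as np

# g is the gradient of the nonconvex function f(x) = x^2 + 3*sin(x).
g = lambda x: 2*x + 3*np.cos(x)

rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, 1000)
y = rng.uniform(-10.0, 10.0, 1000)

# Lipschitz condition (2) with l = 5 holds for all sampled pairs,
# since cos is 1-Lipschitz: |g(x)-g(y)| <= 2|x-y| + 3|x-y|.
assert np.all(np.abs(g(x) - g(y)) <= 5.0*np.abs(x - y) + 1e-12)

# f is nonconvex: its second derivative is negative at x = pi/2.
assert 2.0 - 3.0*np.sin(np.pi/2) < 0.0
```

Functions of exactly this type are admissible under Assumption 2 below, although they violate the strong convexity assumed in [8, 9, 10, 14, 15, 28].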
### II-C Problem Formulation
Consider a multi-agent system composed of $N$ identical agents over an
unbalanced directed network. Each agent $i$ is endowed with a private local
cost function $f_{i}(s):\mathbb{R}^{n}\to\mathbb{R}$, where
$s\in\mathbb{R}^{n}$ represents the local decision variable. The global cost
function and the corresponding optimal solution are defined as
$f(s)=\sum_{i=1}^{N}f_{i}(s)$ and $s^{\star}$, respectively. In this work, the
objective is to design a fully distributed continuous-time algorithm in the
sense that neither strong convexity of local cost functions nor global
information about network connectivity is required, such that the following
DOP can be solved,
$\min_{s\in\mathbb{R}^{n}}~{}f(s).$ (3)
To solve the problem, necessary assumptions are introduced.
###### Assumption 1.
The digraph $\mathcal{G}$ is strongly connected.
###### Remark 1.
In contrast with those works [8, 9, 10, 14, 15, 31] that address the
distributed continuous-time optimization for undirected graphs or balanced
digraphs, this paper concentrates on the more general and also more
challenging case of unbalanced directed communication topologies. Unbalanced
topologies make it harder to establish convergence of the agent states to the
exact optimal solution, because consensus on the optimizer cannot be reached
with unweighted gradient information exchanged over an asymmetric
topology.
###### Assumption 2.
Each local cost function $f_{i}$ is differentiable, and its gradient $\nabla
f_{i}$ is globally Lipschitz on $\mathbb{R}^{n}$ with constant $l_{i}$. The
optimal solution $s^{*}\in\mathbb{R}^{n}$ to the DOP in (3) exists and is
unique.
###### Remark 2.
In most existing works that study the DOP [8, 9, 10, 14, 15, 28], the local
cost functions are assumed to be strongly convex. Under such a stringent
assumption, the admissible local cost functions are confined to those that
can be bounded from below by a quadratic function. In this work, on the
contrary, the local cost functions are allowed to take nonconvex forms, which
greatly enlarges the set of admissible cost functions and thus broadens
the scope of application.
###### Remark 3.
The existence and uniqueness of the optimal solution can be guaranteed when
the global cost function is strictly convex or the set of global minimizers is
a singleton [16, 18]. The differentiability of local cost functions is required
only to facilitate the convergence analysis. Illustrative Example 2 shows that
our proposed adaptive algorithm can still solve the concerned problem even if
the cost function is nondifferentiable.
## III Main Result
In this section, we propose a fully distributed adaptive algorithm to solve
the DOP in (3) over unbalanced directed networks without any global
information concerning the network connectivity or local cost functions.
### III-A Fully Distributed Algorithm
In this subsection, based on adaptive control approach, a fully distributed
algorithm is designed for each agent $i,~{}i=1,2,\ldots,N$ as follows,
$\left\{\begin{aligned}
\dot{x}_{i}=&-\tfrac{\nabla f_{i}(x_{i})}{w_{i}^{i}}-(\sigma_{i}+\rho_{i})\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j})-\sum_{j=1}^{N}a_{ij}(v_{i}-v_{j}),\\
\dot{v}_{i}=&(\sigma_{i}+\rho_{i})\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j}),\\
\dot{w}_{i}=&-\sum_{j=1}^{N}a_{ij}\left(w_{i}-w_{j}\right),\\
\dot{\sigma}_{i}=&\Big(\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j})\Big)^{\mathrm{T}}\Big(\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j})\Big),
\end{aligned}\right.$ (4)
where $x_{i}\in\mathbb{R}^{n}$ is the state of agent $i$,
$v_{i}\in\mathbb{R}^{n}$ and $w_{i}\in\mathbb{R}^{N}$ are two auxiliary
variables, $w_{i}^{i}$ is the $i$th component of $w_{i}$, and the initial
value $w_{i}(0)$ satisfies $w_{i}^{i}(0)=1$ and $w_{i}^{k}(0)=0$ for
all $k\neq i$; $\sigma_{i}$ is an adaptive gain with initial condition
$\sigma_{i}(0)>0$, and the dynamic gain $\rho_{i}$ is designed as
$\rho_{i}=\big(\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j})\big)^{\mathrm{T}}\big(\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j})\big)$.
Define $w=\mathrm{col}(w_{1},w_{2},\ldots,w_{N})$. It follows that
$\dot{w}=-\left(\mathcal{L}\otimes I_{N}\right)w$. Recalling that
$w_{i}^{i}(0)=1$ and $w_{i}^{k}(0)=0$ for all $k\neq i$, as well as iii) of
Lemma 1, one has
$w_{i}^{i}(t)=\big(e^{-(\mathcal{L}\otimes I_{N})t}\big)_{(i-1)N+i}\cdot w(0)=\big(e^{-(\mathcal{L}\otimes I_{N})t}\big)_{(i-1)N+i}^{(i-1)N+i}\cdot w_{i}^{i}(0)>0,$ (5)
for all $t\geq 0$. Therefore, the term $-\tfrac{\nabla
f_{i}(x_{i})}{w_{i}^{i}}$ in algorithm (4) is well defined. Moreover, by
applying iii) of Lemma 1 and recalling the initial condition of $w(0)$, we can
obtain that
$\lim_{t\to\infty}w(t)=\lim_{t\to\infty}e^{-(\mathcal{L}\otimes I_{N})t}w(0)=\left(\mathbf{1}_{N}\xi^{\mathrm{T}}\otimes I_{N}\right)w(0)=\mathbf{1}_{N}\otimes\xi.$ (6)
This implies that $w_{i}^{i}(t),~{}i=1,2,\ldots,N$ are bounded for all $t>0$.
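These properties of the $w$-dynamics can be seen directly in a small example. Stacking the vectors $w_{i}$ as the rows of a matrix $W$, the initial condition $w_{i}^{i}(0)=1$, $w_{i}^{k}(0)=0$ gives $W(0)=I_{N}$ and hence $W(t)=e^{-\mathcal{L}t}$, so $w_{i}^{i}(t)$ is the $i$th diagonal entry of $e^{-\mathcal{L}t}$. A sketch, assuming NumPy and SciPy and an illustrative digraph of our own choosing:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative strongly connected, unbalanced 4-node digraph.
A = np.array([[0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
Lp = np.diag(A.sum(axis=1)) - A

# w_i^i(t) = (e^{-L t})_{ii} stays strictly positive for all t > 0,
# so the division by w_i^i in algorithm (4) is always well defined ...
for t in (0.1, 1.0, 10.0, 50.0):
    assert np.all(np.diag(expm(-Lp*t)) > 0.0)

# ... and converges to xi_i; for this digraph xi = (0.2, 0.4, 0.2, 0.2).
print(np.diag(expm(-Lp*50.0)))  # approximately [0.2 0.4 0.2 0.2]
```

In this way each agent recovers its own entry $\xi_{i}$ asymptotically without any global knowledge of the left eigenvector.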
###### Remark 4.
Compared to the algorithm proposed in [27], we use $\tfrac{\nabla
f_{i}(x_{i})}{w_{i}^{i}}$ instead of $\tfrac{\nabla f_{i}(x_{i})}{\xi_{i}}$ in
the adaptive algorithm (4), so that it not only compensates for the imbalance
caused by unbalanced directed topologies but also removes the restrictive
requirement of knowing the exact value of the left eigenvector corresponding
to the zero eigenvalue of the Laplacian matrix. The algorithm (4) does not
depend on any global information concerning network connectivity or cost
functions, and is thus fully distributed.
###### Remark 5.
The design of algorithm (4) builds on the modified Lagrangian structure in
[10] and equips it with an adaptive control scheme. By using the adaptive gain
$\sigma_{i}$ and the dynamic gain $\rho_{i}$ instead of constant gains as in
[10, 28], global information concerning the cost functions and network
connectivity is no longer needed, such as the smallest strong convexity
constant of the local cost functions and the second smallest or largest
eigenvalue of the Laplacian matrix.
Define
$x=\operatorname{col}(x_{1},x_{2},\ldots,x_{N}),\quad
v=\operatorname{col}(v_{1},v_{2},\ldots,v_{N}),$
$\mathcal{C}=\operatorname{diag}(\sigma_{1},\sigma_{2},\ldots,\sigma_{N}),\quad\mathcal{B}=\operatorname{diag}(\rho_{1},\rho_{2},\ldots,\rho_{N}),$
$\nabla\tilde{f}(x)=\operatorname{col}\big(\nabla f_{1}(x_{1}),\nabla
f_{2}(x_{2}),\ldots,\nabla f_{N}(x_{N})\big),$
$\mathcal{W}^{-1}=\operatorname{diag}(\tfrac{1}{w_{1}^{1}},\tfrac{1}{w_{2}^{2}},\ldots,\tfrac{1}{w_{N}^{N}}),\quad
e_{i}=\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j}).$
It can be seen from (5) that the matrix $\mathcal{W}^{-1}$ is well
defined. Given any $\sigma_{i}(0)>0$, it can be proved that the adaptive gain
$\sigma_{i}(t)$ remains positive for all $t>0$. Thus, the dynamics of
$(x,v,w,\sigma_{i})$ can be written in the following form,
$\left\{\begin{aligned}
\dot{x}=&-(\mathcal{W}^{-1}\otimes I_{n})\nabla\tilde{f}(x)-\big((\mathcal{C}+\mathcal{B})\mathcal{L}\otimes I_{n}\big)x-(\mathcal{L}\otimes I_{n})v,\\
\dot{v}=&\big((\mathcal{C}+\mathcal{B})\mathcal{L}\otimes I_{n}\big)x,\\
\dot{w}=&-\left(\mathcal{L}\otimes I_{N}\right)w,\quad\dot{\sigma}_{i}=e_{i}^{\mathrm{T}}e_{i}.
\end{aligned}\right.$ (7)
In what follows, our goal is to show that the agent states
$x_{i},~{}i=1,2,\ldots,N$ of (4) will converge to the optimal solution
$s^{\star}$ of the DOP in (3).
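As a sanity check ahead of the formal analysis, algorithm (4) can be simulated directly. The sketch below (assuming NumPy and SciPy; the digraph, the nonconvex local costs $f_{i}(s)=(s-c_{i})^{2}+b_{i}\sin s$, and all parameter values are our own illustrative choices, not from the paper) integrates the coupled dynamics of $(x,v,w,\sigma)$ for four scalar agents; since $\sum_{i}b_{i}=0$, the global cost is strongly convex with unique minimizer $s^{\star}=2.5$:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 4
# Strongly connected, unbalanced digraph (a_ij > 0 iff edge (j, i)).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])
Lp = np.diag(A.sum(1)) - A

# Nonconvex local costs f_i(s) = (s - c_i)^2 + b_i*sin(s); their sum is
# strongly convex because sum(b) = 0, with minimizer s* = mean(c) = 2.5.
c = np.array([1., 2., 3., 4.])
b = np.array([3., -3., 2.5, -2.5])
grad = lambda i, s: 2.0*(s - c[i]) + b[i]*np.cos(s)

def rhs(t, z):
    x, v = z[:N], z[N:2*N]
    W = z[2*N:2*N + N*N].reshape(N, N)  # row i is agent i's vector w_i
    sig = z[2*N + N*N:]
    e = Lp @ x                          # e_i = sum_j a_ij (x_i - x_j)
    rho = e**2                          # dynamic gains (scalar case n = 1)
    dx = np.array([-grad(i, x[i])/W[i, i] for i in range(N)]) \
         - (sig + rho)*e - Lp @ v
    dv = (sig + rho)*e
    dW = -Lp @ W                        # dw_i = -sum_j a_ij (w_i - w_j)
    return np.concatenate([dx, dv, dW.ravel(), e**2])

# x_i(0) = c_i, v_i(0) = 0, w_i(0) = i-th unit vector, sigma_i(0) = 1.
z0 = np.concatenate([c, np.zeros(N), np.eye(N).ravel(), np.ones(N)])
sol = solve_ivp(rhs, (0.0, 60.0), z0, rtol=1e-8, atol=1e-8)
print(sol.y[:N, -1])  # all agent states approach s* = 2.5
```

Consistent with Theorem 1 below, the adaptive gains settle to finite values and the four states reach consensus at $s^{\star}$, even though the individual $f_{i}$ are nonconvex and the left eigenvector $\xi$ is never used.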
### III-B Convergence Analysis
To proceed, a preliminary result on the optimality condition will be first
established. Define
$\mathcal{R}^{-1}=\operatorname{diag}(\frac{1}{\xi_{1}},\frac{1}{\xi_{2}},\ldots,\frac{1}{\xi_{N}})$,
where $\xi_{i},~{}i=1,2,\ldots,N$ is the $i$th component of the left
eigenvector $\xi=\left(\xi_{1},\xi_{2},\ldots,\xi_{N}\right)^{\mathrm{T}}$.
The following lemma reveals the optimality condition in terms of a set of
equations that the optimal solution $s^{\star}$ satisfies.
###### Lemma 2.
Under Assumptions 1–2, suppose that the point $(\bar{x},\bar{v})$ satisfies
the following equations,
$\displaystyle\mathbf{0}=-(\mathcal{R}^{-1}\otimes I_{n})\nabla\tilde{f}(\bar{x})-\big((\mathcal{C}+\mathcal{B})\mathcal{L}\otimes I_{n}\big)\bar{x}-(\mathcal{L}\otimes I_{n})\bar{v},$ (8a)
$\displaystyle\mathbf{0}=\big((\mathcal{C}+\mathcal{B})\mathcal{L}\otimes I_{n}\big)\bar{x}.$ (8b)
Then, one has $\bar{x}=\mathbf{1}_{N}\otimes s^{\star}$, with $s^{\star}$
being the optimal solution of the DOP in (3).
###### Proof.
Define
$\bar{x}=\operatorname{col}(\bar{x}_{1},\bar{x}_{2},\ldots,\bar{x}_{N})$. It
can be seen from (8b) that $\bar{x}=\mathbf{1}_{N}\otimes q$ holds for some
vector $q\in\mathbb{R}^{n}$. On the one hand, by pre-multiplying both sides of
equation (8a) with $\xi^{\mathrm{T}}\otimes I_{n}$, it follows from i) of
Lemma 1 that $\sum_{i=1}^{N}\nabla f_{i}(\bar{x}_{i})=\mathbf{0}_{n}$. On the
other hand, under Assumption 2, the optimality condition $\sum_{i=1}^{N}\nabla
f_{i}(s^{\star})=\mathbf{0}_{n}$ is satisfied. Therefore, it can be obtained
that $\bar{x}=\mathbf{1}_{N}\otimes s^{\star}$. ∎
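The key step of the proof, premultiplying by $\xi^{\mathrm{T}}\otimes I_{n}$ to recover the optimality condition $\sum_{i=1}^{N}\nabla f_{i}(s^{\star})=\mathbf{0}_{n}$, can be illustrated numerically. Assuming NumPy and SciPy, and taking illustrative nonconvex scalar costs $f_{i}(s)=(s-c_{i})^{2}+b_{i}\sin s$ (our choice, not from the paper):

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative nonconvex local costs f_i(s) = (s - c_i)^2 + b_i*sin(s).
c = np.array([1., 2., 3., 4.])
b = np.array([3., -3., 2.5, -2.5])

# Summed gradient: sum_i grad f_i(s) = 8s - 20 + sum(b)*cos(s) = 8s - 20,
# since the b_i cancel pairwise.
grad_sum = lambda s: np.sum(2.0*(s - c) + b*np.cos(s))

# The optimality condition sum_i grad f_i(s*) = 0 pins down s* uniquely.
s_star = brentq(grad_sum, -10.0, 10.0)
print(s_star)  # 2.5 (up to solver tolerance)
```

Note that the individual equations $\nabla f_{i}(s)=\mathbf{0}$ have different roots; only the summed condition characterizes $s^{\star}$.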
With Lemma 2 in hand, to prove that the agent states $x_{i},~{}i=1,2,\ldots,N$
of (4) converge to the optimal solution $s^{\star}$ of the DOP in (3), it is
sufficient to show that $(x,v)$ of (7) converges to $(\bar{x},\bar{v})$ in
Lemma 2.
To proceed, define $\tilde{x}=x-\bar{x}$ and $\tilde{v}=v-\bar{v}$. By
subtracting the equations (7) from (8), and noting that
$\mathcal{W}^{-1}\neq\mathcal{R}^{-1}$, the dynamics of
$(\tilde{x},\tilde{v},w,\sigma_{i})$ can be written in the following form,
$\displaystyle\dot{\tilde{x}}=-\big(\mathcal{W}^{-1}\otimes I_{n}\big)h+\big((\mathcal{R}^{-1}-\mathcal{W}^{-1})\otimes I_{n}\big)\nabla\tilde{f}(\bar{x})-\big((\mathcal{C}+\mathcal{B})\mathcal{L}\otimes I_{n}\big)\tilde{x}-(\mathcal{L}\otimes I_{n})\tilde{v},$ (9a)
$\displaystyle\dot{\tilde{v}}=\big((\mathcal{C}+\mathcal{B})\mathcal{L}\otimes I_{n}\big)\tilde{x},$ (9b)
$\displaystyle\dot{w}=-\left(\mathcal{L}\otimes I_{N}\right)w,\quad\dot{\sigma}_{i}=e_{i}^{\mathrm{T}}e_{i},$ (9c)
where $h=\nabla\tilde{f}(\bar{x}+\tilde{x})-\nabla\tilde{f}(\bar{x})$.
Therefore, to show that $\lim_{t\to\infty}x_{i}(t)=s^{\star}$, by
$\tilde{x}=x-\bar{x}$ in (9) and $\bar{x}=\mathbf{1}_{N}\otimes s^{\star}$ in
Lemma 2, we only need to prove that the state $\tilde{x}$ of (9) converges to
zero as time tends to infinity.
However, the origin $(\tilde{x},\tilde{v})=(\mathbf{0},\mathbf{0})$ is not an
equilibrium point of (9), since $\mathcal{R}^{-1}\neq\mathcal{W}^{-1}$. This
brings an extra challenge to the convergence analysis. To tackle this issue,
we introduce the coordinate transformations
$\zeta=(\mathcal{L}\otimes I_{n})\tilde{x}$ and $\eta=(\mathcal{L}\otimes
I_{n})\tilde{v}$. In what follows, we will first prove that
$\lim_{t\to\infty}\zeta(t)=\mathbf{0}$ and
$\lim_{t\to\infty}\eta(t)=\mathbf{0}$, which imply that
$\lim_{t\to\infty}\tilde{x}(t)=\mathbf{1}_{N}\otimes\tau_{1}$ and
$\lim_{t\to\infty}\tilde{v}(t)=\mathbf{1}_{N}\otimes\tau_{2}$ for two
constant vectors $\tau_{1},\tau_{2}\in\mathbb{R}^{n}$. Then, we will show that
$\tau_{1}=\mathbf{0}$ and that $\tau_{2}$ is finite by seeking a contradiction.
Now we are ready to present the main result of this work.
###### Theorem 1.
Suppose Assumptions 1–2 hold. For $i=1,2,\ldots,N$, let $\sigma_{i}(0)>0$,
$w_{i}^{i}(0)=1$, and $w_{i}^{k}(0)=0$ for all $k\neq i$. Then, for any
initial conditions $x_{i}(0)$ and $v_{i}(0)$, the DOP in (3) is solved by the
fully distributed algorithm (4).
###### Proof.
To prove Theorem 1, it is sufficient to prove that the state $\tilde{x}$ of
(9) will converge to zero as time tends to infinity in the case of
$\mathcal{R}^{-1}\neq\mathcal{W}^{-1}$. The proof is composed of the following
two parts.
Part 1. Show that $\lim_{t\to\infty}\zeta(t)=\mathbf{0}$ and
$\lim_{t\to\infty}\eta(t)=\mathbf{0}$.
To proceed, let $\zeta_{i}\in\mathbb{R}^{n},~{}i=1,2,\ldots,N$ be a column
vector stacked from the $((i-1)\times n+1)$th element to the $(i\times n)$th
element of vector $\zeta$. Recalling that
$e_{i}=\sum_{j=1}^{N}a_{ij}(x_{i}-x_{j})$, a simple derivation gives
$e_{i}=\zeta_{i}$. Then, the dynamics of $(\zeta,\eta,w,\sigma_{i})$ can be
written as follows,
$\left\{\begin{aligned}
\dot{\zeta}=&-\big(\mathcal{L}\mathcal{W}^{-1}\otimes I_{n}\big)h+\big(\mathcal{L}(\mathcal{R}^{-1}-\mathcal{W}^{-1})\otimes I_{n}\big)\nabla\tilde{f}(\bar{x})\\
&-\big(\mathcal{L}(\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta-(\mathcal{L}\otimes I_{n})\eta,\\
\dot{\eta}=&\big(\mathcal{L}(\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta,\\
\dot{w}=&-\left(\mathcal{L}\otimes I_{N}\right)w,\quad\dot{\sigma}_{i}=\zeta_{i}^{\mathrm{T}}\zeta_{i}.
\end{aligned}\right.$ (10)
Define $\chi=\operatorname{col}(\zeta,\eta)$. Then the dynamics of $\chi$ can
be rewritten as
$\dot{\chi}=\varphi(\chi)+\phi(t),$ (11)
where $\varphi(\chi)$ and $\phi(t)$ are defined in (16). We first establish
the asymptotic convergence of $\chi$ to the origin in system (11) with
$\mathcal{R}^{-1}\neq\mathcal{W}^{-1}$ and $\phi(t)\neq 0$.
$\displaystyle\varphi(\chi)=\left(\begin{array}{c}-(\mathcal{L}\mathcal{W}^{-1}\otimes I_{n})h-\big(\mathcal{L}(\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta-(\mathcal{L}\otimes I_{n})\eta\\ \big(\mathcal{L}(\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta\end{array}\right),\quad\phi(t)=\left(\begin{array}{c}\big(\mathcal{L}(\mathcal{R}^{-1}-\mathcal{W}^{-1})\otimes I_{n}\big)\nabla\tilde{f}(\bar{x})\\ \mathbf{0}\end{array}\right)$ (16)
Under Assumption 2,
$h=\nabla\tilde{f}(\bar{x}+\tilde{x})-\nabla\tilde{f}(\bar{x})$, and thus
$\varphi(\chi)$, are locally Lipschitz in $\chi\in\varOmega$ for any compact
subset $\varOmega\subset\mathbb{R}^{n}$. In addition, it follows from (6) that
$\phi(t)$ is bounded for all $t\geq 0$. According to Theorem 4.19 in [34], to
establish the asymptotic convergence of $\chi$ to the origin in (11), global
uniform asymptotic stability of the unperturbed system
$\dot{\chi}=\varphi(\chi)$ should be established first, and input-to-state
stability (ISS) of the system (11) will then be shown. The proof is
accomplished in two steps.
Step 1. Show that the equilibrium point $\chi=\mathbf{0}$ of the unperturbed
system $\dot{\chi}=\varphi(\chi)$ is globally uniformly asymptotically stable.
By referring to [35], one can obtain that
$\operatorname{rank}(\mathcal{L}^{\operatorname{T}}\mathcal{L})=\operatorname{rank}(\mathcal{L})=N-1$.
Thus, zero is a simple eigenvalue of matrix
$\mathcal{L}^{\operatorname{T}}\mathcal{L}$. Note that $\xi_{i}$ and
$\sigma_{i}$ are positive for all $i=1,2,\ldots,N$. Consider the following
Lyapunov function candidate,
$V=V_{1}+V_{2}+\tfrac{33N\bar{\lambda}_{N}(\mathcal{L}^{\operatorname{T}}\mathcal{L})}{\lambda_{2}(\bar{\mathcal{L}})^{2}}V_{3},$
(17)
where $\bar{\lambda}_{N}(\mathcal{L}^{\operatorname{T}}\mathcal{L})$ denotes
the largest eigenvalue of $\mathcal{L}^{\operatorname{T}}\mathcal{L}$,
$\lambda_{2}(\bar{\mathcal{L}})$ denotes the second smallest eigenvalue of
$\bar{\mathcal{L}}=\mathcal{R}\mathcal{L}+\mathcal{L}^{\operatorname{T}}\mathcal{R}$,
and
$\left\{\begin{aligned}
V_{1}=&\tfrac{1}{2}\sum_{i=1}^{N}(\sigma_{i}-\sigma_{0})^{2},\\
V_{2}=&\tfrac{1}{2}\sum_{i=1}^{N}\xi_{i}(2\sigma_{i}+\rho_{i})\zeta_{i}^{\mathrm{T}}\zeta_{i},\\
V_{3}=&\tfrac{1}{2}(\zeta+\eta)^{\mathrm{T}}(\mathcal{R}\otimes I_{n})(\zeta+\eta),
\end{aligned}\right.$
with $\sigma_{0}$ being a positive constant to be determined later.
The derivatives of $V_{1}$ and $V_{2}$ along the trajectories of the
unperturbed system $\dot{\chi}=\varphi(\chi)$ satisfy
$\displaystyle\dot{V}_{1}=\sum_{i=1}^{N}\zeta_{i}^{\mathrm{T}}(\sigma_{i}-\sigma_{0})\zeta_{i},$ (18)
$\displaystyle\dot{V}_{2}=2\sum_{i=1}^{N}\xi_{i}(\sigma_{i}+\rho_{i})\zeta_{i}^{\mathrm{T}}\dot{\zeta}_{i}+\sum_{i=1}^{N}\xi_{i}\rho_{i}\zeta_{i}^{\mathrm{T}}\zeta_{i}.$ (19)
By combining (18)–(19), and recalling $\varphi(\chi)$ in (16), we can obtain
that
$\displaystyle\dot{V}_{1}+\dot{V}_{2}\leq-2\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})\mathcal{R}\mathcal{L}(\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta+\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{R}\mathcal{B}-\sigma_{0}I_{N})\otimes I_{n}\big)\zeta-2\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})\mathcal{R}\mathcal{L}\mathcal{W}^{-1}\otimes I_{n}\big)h-2\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})\mathcal{R}\mathcal{L}\otimes I_{n}\big)\eta.$ (20)
Define $\hat{\zeta}=\big((\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta$.
Given that $\mathcal{C}$ and $\mathcal{B}$ are both positive diagonal
matrices, there exists a positive vector
$\varsigma=(\mathcal{C}+\mathcal{B})^{-1}\xi\otimes\mathbf{1}_{n}$ such that
$\varsigma^{\mathrm{T}}\hat{\zeta}=\big(\xi^{\mathrm{T}}(\mathcal{C}+\mathcal{B})^{-1}\otimes\mathbf{1}_{n}^{\mathrm{T}}\big)\cdot\big((\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta=\big(\xi^{\mathrm{T}}\mathcal{L}\otimes\mathbf{1}_{n}^{\mathrm{T}}\big)\tilde{x}=0.$
Thus, it follows from ii) of Lemma 1 that
$-2\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})\mathcal{R}\mathcal{L}(\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta=-\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})(\mathcal{R}\mathcal{L}+\mathcal{L}^{\mathrm{T}}\mathcal{R})(\mathcal{C}+\mathcal{B})\otimes I_{n}\big)\zeta\leq-\tfrac{\lambda_{2}(\bar{\mathcal{L}})}{N}\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})^{2}\otimes I_{n}\big)\zeta.$
Since it is proved in (5)–(6) that $w_{i}^{i}(t)>0,~{}i=1,2,\ldots,N$ are
bounded for all $t>0$,
$\check{w}=\min\big\{\inf_{t>0}w_{i}^{i}(t),~{}i=1,2,\ldots,N\big\}$ is
well defined. Then, the following two inequalities are satisfied,
$\left\{\begin{aligned}
-2\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})\mathcal{R}\mathcal{L}\mathcal{W}^{-1}\otimes I_{n}\big)h\leq&\tfrac{\lambda_{2}(\bar{\mathcal{L}})}{4N}\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})^{2}\otimes I_{n}\big)\zeta+\tfrac{4N\bar{\lambda}_{N}(\mathcal{L}^{\mathrm{T}}\mathcal{L})}{\lambda_{2}(\bar{\mathcal{L}})\check{w}^{2}}\|h\|^{2},\\
-2\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})\mathcal{R}\mathcal{L}\otimes I_{n}\big)\eta\leq&\tfrac{\lambda_{2}(\bar{\mathcal{L}})}{4N}\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})^{2}\otimes I_{n}\big)\zeta+\tfrac{4N\bar{\lambda}_{N}(\mathcal{L}^{\mathrm{T}}\mathcal{L})}{\lambda_{2}(\bar{\mathcal{L}})}\|\eta\|^{2}.
\end{aligned}\right.$
For convenience, we subsequently abbreviate $\lambda_{2}(\bar{\mathcal{L}})$
and $\bar{\lambda}_{N}(\mathcal{L}^{\mathrm{T}}\mathcal{L})$ as
$\lambda_{2}$ and $\bar{\lambda}_{N}$, respectively. Then, (20) can be
rewritten as follows,
$\displaystyle\dot{V}_{1}+\dot{V}_{2}\leq-\tfrac{\lambda_{2}}{2N}\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})^{2}\otimes I_{n}\big)\zeta+\tfrac{4N\bar{\lambda}_{N}}{\lambda_{2}\check{w}^{2}}\|h\|^{2}+\tfrac{4N\bar{\lambda}_{N}}{\lambda_{2}}\|\eta\|^{2}+\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{R}\mathcal{B}-\sigma_{0}I_{N})\otimes I_{n}\big)\zeta.$ (21)
The derivative of $V_{3}$ along the trajectories of the unperturbed system
$\dot{\chi}=\varphi(\chi)$ is given as follows,
$\displaystyle\dot{V}_{3}=\big(\zeta+\eta\big)^{\mathrm{T}}\big(\mathcal{R}\otimes I_{n}\big)\big(\dot{\zeta}+\dot{\eta}\big)=-\zeta^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}\mathcal{W}^{-1}\otimes I_{n}\big)h-\zeta^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}\otimes I_{n}\big)\eta-\eta^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}\mathcal{W}^{-1}\otimes I_{n}\big)h-\eta^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}\otimes I_{n}\big)\eta.$ (22)
Applying ii) of Lemma 1 leads to
$-\eta^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}\otimes I_{n}\big)\eta\leq-\tfrac{\lambda_{2}}{2}\|\eta\|^{2}.$ (23)
Moreover, it can be verified that the following inequalities hold,
$\left\{\begin{aligned}
-\zeta^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}\otimes I_{n}\big)\eta\leq&\tfrac{\lambda_{2}}{8}\|\eta\|^{2}+\tfrac{2\bar{\lambda}_{N}}{\lambda_{2}}\|\zeta\|^{2},\\
-\eta^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}\mathcal{W}^{-1}\otimes I_{n}\big)h\leq&\tfrac{\lambda_{2}}{8}\|\eta\|^{2}+\tfrac{2\bar{\lambda}_{N}}{\lambda_{2}\check{w}^{2}}\|h\|^{2},\\
-\zeta^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}\mathcal{W}^{-1}\otimes I_{n}\big)h\leq&\|\zeta\|^{2}+\tfrac{\bar{\lambda}_{N}}{4\check{w}^{2}}\|h\|^{2}.
\end{aligned}\right.$ (24)
Then, substituting inequalities (23)–(24) into (22) yields
$\displaystyle\dot{V}_{3}\leq-\tfrac{\lambda_{2}}{4}\|\eta\|^{2}+\tfrac{\lambda_{2}+2\bar{\lambda}_{N}}{\lambda_{2}}\|\zeta\|^{2}+\tfrac{(8+\lambda_{2})\bar{\lambda}_{N}}{4\lambda_{2}\check{w}^{2}}\|h\|^{2}.$
(25)
By combining (21) and (25), the derivative of $V$ in (17) along the
trajectories of the unperturbed system $\dot{\chi}=\varphi(\chi)$ satisfies
the following inequality,
$\displaystyle\dot{V}\leq\zeta^{\mathrm{T}}\Big(\big(-\tfrac{\lambda_{2}}{2N}(\mathcal{C}+\mathcal{B})^{2}+(\mathcal{C}+\mathcal{R}\mathcal{B})-\sigma_{0}I_{N}+\tfrac{33N\bar{\lambda}_{N}(\lambda_{2}+2\bar{\lambda}_{N})}{\lambda_{2}^{3}}I_{N}\big)\otimes I_{n}\Big)\zeta-\tfrac{17N\bar{\lambda}_{N}}{4\lambda_{2}}\|\eta\|^{2}+\Big(\tfrac{4N\bar{\lambda}_{N}}{\lambda_{2}\check{w}^{2}}+\tfrac{33N\bar{\lambda}_{N}^{2}(8+\lambda_{2})}{4\lambda_{2}^{3}\check{w}^{2}}\Big)\|h\|^{2}.$ (26)
By the Lipschitz condition of the gradients in Assumption 2, we have
$\|h\|=\|\nabla\tilde{f}(\bar{x}+\tilde{x})-\nabla\tilde{f}(\bar{x})\|\leq\hat{l}\|\tilde{x}\|$,
where $\hat{l}=\max\{l_{1},l_{2},\ldots,l_{N}\}$.
Denote the second smallest eigenvalue of matrix
$\mathcal{L}^{\operatorname{T}}\mathcal{L}$ by
$\bar{\lambda}_{2}(\mathcal{L}^{\operatorname{T}}\mathcal{L})$, or simply
$\bar{\lambda}_{2}$. Recall that $\zeta=(\mathcal{L}\otimes I_{n})\tilde{x}$
and $\mathcal{L}\mathbf{1}_{N}=\mathbf{0}_{N}$. It can be obtained from the
symmetry of $\mathcal{L}^{\operatorname{T}}\mathcal{L}$ and
$\bar{\lambda}_{2}(\mathcal{L}^{\operatorname{T}}\mathcal{L})>0$ that
$\|\zeta\|^{2}=\tilde{x}^{\operatorname{T}}\big{(}\mathcal{L}^{\operatorname{T}}\mathcal{L}\otimes
I_{n}\big{)}\tilde{x}\geq\bar{\lambda}_{2}\|\tilde{x}\|^{2}$. Thus, one has
$\|h\|^{2}\leq(\hat{l}^{2}/\bar{\lambda}_{2})\|\zeta\|^{2}$. Define
$\omega_{1}=\tfrac{33N\bar{\lambda}_{N}(\lambda_{2}+2\bar{\lambda}_{N})}{\lambda_{2}^{3}}$
and
$\omega_{2}=\frac{\hat{l}^{2}}{\bar{\lambda}_{2}}\Big(\tfrac{4N\bar{\lambda}_{N}}{\lambda_{2}\check{w}^{2}}+\tfrac{33N\bar{\lambda}_{N}^{2}(8+\lambda_{2})}{4\lambda_{2}^{3}\check{w}^{2}}\Big)$.
Then, (26) can be rewritten as follows,
$\displaystyle\dot{V}\leq-\zeta^{\mathrm{T}}\Big(\tfrac{\lambda_{2}}{2N}\big((\mathcal{C}+\mathcal{B})-\tfrac{N}{\lambda_{2}}I_{N}\big)^{2}\otimes I_{n}\Big)\zeta-\Big(\sigma_{0}-\omega_{1}-\omega_{2}-\tfrac{N}{2\lambda_{2}}\Big)\|\zeta\|^{2}-\tfrac{17N\bar{\lambda}_{N}}{4\lambda_{2}}\|\eta\|^{2}.$ (27)
Choose $\sigma_{0}=1+\omega_{1}+\omega_{2}+\tfrac{N}{2\lambda_{2}}$. One thus
has
$\displaystyle\dot{V}\leq-\|\zeta\|^{2}-\tfrac{17N\bar{\lambda}_{N}}{4\lambda_{2}}\|\eta\|^{2}.$ (28)
Therefore, it is proved that the equilibrium point $\chi=\mathbf{0}$ of the
unperturbed system $\dot{\chi}=\varphi(\chi)$ is globally uniformly
asymptotically stable, and the adaptive control gains
$\sigma_{i},~{}i=1,2,\ldots,N$ converge to some finite positive constants.
Step 2. Show that the perturbed system (11) is ISS, and that the state
$\chi(t)$ converges to zero as $t\to\infty$.
Reconsider the Lyapunov function candidate $V$ in (17), but with $\sigma_{0}$
being another positive constant to be specified. Similarly, by referring to
(26), the derivative of $V$ along the trajectories of (11) can be given as
follows,
$\displaystyle\dot{V}\leq-\zeta^{\mathrm{T}}\Big(\tfrac{\lambda_{2}}{2N}\big(\mathcal{C}+\mathcal{B}-\tfrac{N}{\lambda_{2}}I_{N}\big)^{2}\otimes I_{n}\Big)\zeta-\Big(\sigma_{0}-\omega_{1}-\omega_{2}-\tfrac{N}{2\lambda_{2}}\Big)\|\zeta\|^{2}-\tfrac{17N\bar{\lambda}_{N}}{4\lambda_{2}}\|\eta\|^{2}+2\zeta^{\mathrm{T}}\Big(\big(\mathcal{C}+\mathcal{B}\big)\mathcal{R}\mathcal{L}(\mathcal{R}^{-1}-\mathcal{W}^{-1}(t))\otimes I_{n}\Big)\nabla\tilde{f}(\bar{x})+\epsilon\big(\zeta+\eta\big)^{\mathrm{T}}\Big(\mathcal{R}\mathcal{L}\big(\mathcal{R}^{-1}-\mathcal{W}^{-1}(t)\big)\otimes I_{n}\Big)\nabla\tilde{f}(\bar{x}),$ (29)
where
$\epsilon=\tfrac{33N\bar{\lambda}_{N}(\mathcal{L}^{\mathrm{T}}\mathcal{L})}{\lambda_{2}(\bar{\mathcal{L}})^{2}}$.
Define
$u(t)=\big(\mathcal{R}\mathcal{L}(\mathcal{R}^{-1}-\mathcal{W}^{-1}(t))\otimes I_{n}\big)\nabla\tilde{f}(\bar{x})$. Then the following two inequalities are
satisfied,
$\left\{\begin{aligned}
2\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})\mathcal{R}\mathcal{L}\big(\mathcal{R}^{-1}-\mathcal{W}^{-1}(t)\big)\otimes I_{n}\big)\nabla\tilde{f}(\bar{x})\leq&\tfrac{\lambda_{2}}{4N}\zeta^{\mathrm{T}}\big((\mathcal{C}+\mathcal{B})^{2}\otimes I_{n}\big)\zeta+\tfrac{4N}{\lambda_{2}}\|u(t)\|^{2},\\
\big(\zeta+\eta\big)^{\mathrm{T}}\big(\mathcal{R}\mathcal{L}(\mathcal{R}^{-1}-\mathcal{W}^{-1}(t))\otimes I_{n}\big)\nabla\tilde{f}(\bar{x})\leq&\tfrac{\lambda_{2}}{8}\|\eta\|^{2}+\|\zeta\|^{2}+\tfrac{8+\lambda_{2}}{4\lambda_{2}}\|u(t)\|^{2}.
\end{aligned}\right.$ (30)
By substituting (30) into (29), one has
$\displaystyle\dot{V}\leq-\zeta^{\mathrm{T}}\Big(\tfrac{\lambda_{2}}{4N}\big(\mathcal{C}+\mathcal{B}-\tfrac{2N}{\lambda_{2}}I_{N}\big)^{2}\otimes I_{n}\Big)\zeta-\Big(\sigma_{0}-\omega_{1}-\omega_{2}-\epsilon-\tfrac{N}{\lambda_{2}}\Big)\|\zeta\|^{2}-\tfrac{N\bar{\lambda}_{N}}{8\lambda_{2}}\|\eta\|^{2}+\Big(\tfrac{4N}{\lambda_{2}}+\tfrac{\epsilon(8+\lambda_{2})}{4\lambda_{2}}\Big)\|u(t)\|^{2}.$
Define
$\varrho=\tfrac{4N}{\lambda_{2}}+\tfrac{\epsilon(8+\lambda_{2})}{4\lambda_{2}}$
and $\kappa=\min\big\{\tfrac{N\bar{\lambda}_{N}}{8\lambda_{2}},1\big\}$. By
choosing $\sigma_{0}=1+\omega_{1}+\omega_{2}+\epsilon+\frac{N}{\lambda_{2}}$,
one has
$\displaystyle\dot{V}\leq-\|\zeta\|^{2}-\tfrac{N\bar{\lambda}_{N}}{8\lambda_{2}}\|\eta\|^{2}+\Big(\tfrac{4N}{\lambda_{2}}+\tfrac{\epsilon(8+\lambda_{2})}{4\lambda_{2}}\Big)\|u(t)\|^{2}\leq-\kappa\|\chi(t)\|^{2}+\varrho\|u(t)\|^{2}.$
For any $0<\theta<1$, it can then be obtained that
$\displaystyle\dot{V}\leq-(1-\theta)\kappa\|\chi(t)\|^{2},\quad\forall~{}\|\chi(t)\|\geq\sqrt{\frac{\varrho}{\theta\kappa}}\|u(t)\|.$
It then follows from Theorem 4.19 in [34] that the system (11) is ISS.
Note that it is proved in (6) that
$\lim_{t\to\infty}w(t)=\mathbf{1}_{N}\otimes\xi$. One then has
$\lim_{t\to\infty}w_{i}^{i}(t)=\xi_{i}$ and
$\lim_{t\to\infty}\mathcal{W}^{-1}(t)=\mathcal{R}^{-1}$. Thus,
$u(t)=\big{(}\mathcal{R}\mathcal{L}(\mathcal{R}^{-1}-\mathcal{W}^{-1}(t))\otimes
I_{n}\big{)}\nabla\tilde{f}(\bar{x})$ converges to zero as $t\to\infty$. By
the definition of ISS, it can be proved (see Exercise 4.58 in [34]) that the
state $\chi(t)=\operatorname{col}(\zeta(t),\eta(t))$ converges to zero as time
goes to infinity.

Part 2. Show that $\lim_{t\to\infty}\tilde{x}(t)=\mathbf{0}$
and $\lim_{t\to\infty}\tilde{v}(t)=\mathbf{1}_{N}\otimes\tau_{2}$, for a
constant vector $\tau_{2}\in\mathbb{R}^{n}$.
Note that $\zeta=(\mathcal{L}\otimes I_{n})\tilde{x}$ and
$\eta=(\mathcal{L}\otimes I_{n})\tilde{v}$. By the obtained facts that
$\lim_{t\to\infty}\zeta(t)=\mathbf{0}$ and
$\lim_{t\to\infty}\eta(t)=\mathbf{0}$, one has
$\lim_{t\to\infty}\tilde{x}(t)=\mathbf{1}_{N}\otimes\tau_{1}$ and
$\lim_{t\to\infty}\tilde{v}(t)=\mathbf{1}_{N}\otimes\tau_{2}$, for two
constant vectors $\tau_{1},\tau_{2}\in\mathbb{R}^{n}$. Next, we will show that
$\tau_{1}=\mathbf{0}_{n}$ and that $\tau_{2}$ is finite.
To this end, by taking limits on both sides of the equation in (9a), one has
$\displaystyle\mathbf{0}=$
$\displaystyle\lim_{t\to\infty}\Big{(}-(\mathcal{W}^{-1}(t)\otimes
I_{n})\nabla\tilde{f}(\bar{x}+\tilde{x}(t))+\big{(}\mathcal{R}^{-1}\otimes
I_{n}\big{)}\nabla\tilde{f}(\bar{x})\Big{)}$ (31) $\displaystyle=$
$\displaystyle\big{(}\mathcal{R}^{-1}\otimes
I_{n}\big{)}\big{(}\nabla\tilde{f}(\bar{x})-\nabla\tilde{f}(\bar{x}+\mathbf{1}_{N}\otimes\tau_{1})\big{)}.$
By pre-multiplying both sides of equation (31) with $\xi^{\mathrm{T}}\otimes
I_{n}$, one then has $\sum_{i=1}^{N}\nabla
f_{i}(\bar{x}_{i})=\sum_{i=1}^{N}\nabla f_{i}(\bar{x}_{i}+\tau_{1})$. Under
Assumption 2, the optimal solution to the DOP in (3) is unique, which implies
that $\tau_{1}=\mathbf{0}_{n}$. Meanwhile, it follows from (9b) that
$\lim_{t\to\infty}\dot{\tilde{v}}(t)=\mathbf{0}$. One thus has
$\tau_{2}<\infty$. To sum up, the trajectories of (9) are bounded for all
$t>0$, and $\tilde{x}(t)$ tends to zero as time goes to infinity. Therefore,
the convergence of the states $x_{i},~{}i=1,2,\ldots,N$ of (4) to the optimal
solution $s^{\star}$ of the problem (3) is presented, and the proof is thus
completed.
∎
###### Remark 6.
The adaptive gain $\sigma_{i}$ in (4) is updated based on relative state
errors, so that it keeps increasing as long as consensus is not achieved,
eventually enforcing state consensus among the agents. By adopting the
adaptive control approach, the requirement on the smallest strong convexity
constant of local cost functions is no longer needed to generate a negative
term related to agent states. Thus, the proposed adaptive algorithm is able
to solve the DOP when local cost functions are nonconvex.
###### Remark 7.
The DOP over unbalanced digraphs is investigated in [28]. The advantages of
the distributed adaptive algorithm (4) in this work over the algorithm in [28]
can be stated in the following two aspects. First, the control gains in [28]
rely on some prior global information concerning network connectivity and cost
functions, such as the second smallest eigenvalue of the Laplacian matrix, the
smallest strong convexity constant of local cost functions as well as the
largest Lipschitz constant of their gradients. On the contrary, the algorithm
(4) developed in this work does not require such information and is thus
fully distributed. Second, to guarantee the convergence of their algorithm,
sufficiently large control gains are needed in [28]. A potential problem with
high-gain feedback is that it may result in some undesirable issues, such as
increased sensitivity to unmodeled dynamics and noise, oscillations, and even
instability. The algorithm developed in this work can avoid this problem by
adopting an adaptive control mechanism.
## IV Illustrative Examples
In this section, the effectiveness of the developed fully distributed
algorithm (4) over unbalanced directed networks is illustrated by two
examples.
### IV-A Example 1
Consider five networked agents with their communication network topology being
described by the unbalanced digraph $\mathcal{G}$ in Fig. 1. It can be
verified that this digraph is strongly connected, and Assumption 1 is thus
satisfied. Suppose that the agents $i$, $i=1,\ldots,5$, are endowed with the
following local cost functions, respectively:
$\displaystyle f_{1}$
$\displaystyle=5\sin\big{(}\|s+[4,5]^{\mathrm{T}}\|\big{)},~{}f_{2}=10\cos\big{(}\ln(\|s+[8,10]^{\mathrm{T}}\|)\big{)},$
$\displaystyle f_{3}$
$\displaystyle=4\times\|s+[2,3]^{\mathrm{T}}\|^{\frac{4}{3}},\quad
f_{4}=2\times\|s-[3,5]^{\mathrm{T}}\|^{2},$ $\displaystyle f_{5}$
$\displaystyle=\|s+[1,2]^{\mathrm{T}}\|^{2}/\sqrt{\|s+[1,2]^{\mathrm{T}}\|^{2}+2},$
where $s\in\mathbb{R}^{2}$. Note that the local cost functions $f_{1}(\cdot)$
and $f_{2}(\cdot)$ are nonconvex. However, it can be verified that the global
cost function $f(s)=\sum_{i=1}^{5}f_{i}(s)$ is strictly convex, which implies
that the global minimizer $s^{\star}$ is unique. Moreover, it can be verified
that the gradients of $f_{i}(\cdot),~{}i=1,\ldots,5$ are globally Lipschitz on
$\mathbb{R}^{2}$, and Assumption 2 is thus satisfied.
Figure 1: An unbalanced directed network. Figure 2: Convergence
performance of the fully distributed algorithm (4) over the unbalanced
directed network in Fig. 1 under the relaxed condition that the local cost
functions are nonconvex.
We now present the convergence performance of the fully distributed algorithm
(4) under the relaxed condition that the global cost function is strictly
convex while local cost functions may be nonconvex. For $i=1,\ldots,5$, let
the initial values $x_{i}(0)$ and $v_{i}(0)$ be arbitrarily chosen, and the
initial values of the adaptive gains $\sigma_{i}$’s be chosen as
$\sigma_{i}(0)=1$. The simulation results are shown in Fig. 2. It can be
observed that the trajectories of agent states
$x_{i}=[x_{i1},x_{i2}]^{\mathrm{T}},~{}i=1,\ldots,5$ converge to the optimal
solution $s^{\star}=[1.4136,~{}2.53658]^{\mathrm{T}}$, which minimizes the
global cost function $f(s)=\sum_{i=1}^{5}f_{i}(s)$. Therefore, without either
strong convexity of local cost functions or prior global information of
network connectivity, it has been shown that the fully distributed algorithm
(4) solves the DOP in (3).
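As a sanity check on the claims of Example 1, one can minimize the global cost numerically. The sketch below is a centralized check only, not the distributed algorithm (4) itself; the step size, iteration count and finite-difference gradients are our own illustrative choices.

```python
import math

def f(s):
    """Global cost f(s) = sum of the five local costs of Example 1."""
    n1 = math.hypot(s[0] + 4, s[1] + 5)
    n2 = math.hypot(s[0] + 8, s[1] + 10)
    n3 = math.hypot(s[0] + 2, s[1] + 3)
    n4 = math.hypot(s[0] - 3, s[1] - 5)
    n5sq = (s[0] + 1) ** 2 + (s[1] + 2) ** 2
    return (5 * math.sin(n1)
            + 10 * math.cos(math.log(n2))
            + 4 * n3 ** (4 / 3)
            + 2 * n4 ** 2
            + n5sq / math.sqrt(n5sq + 2))

def grad(s, h=1e-6):
    """Central finite-difference gradient of f."""
    return [(f([s[0] + h, s[1]]) - f([s[0] - h, s[1]])) / (2 * h),
            (f([s[0], s[1] + h]) - f([s[0], s[1] - h])) / (2 * h)]

# Plain gradient descent on the global cost as a centralized sanity check.
s = [0.0, 0.0]
for _ in range(5000):
    g = grad(s)
    s = [s[0] - 0.01 * g[0], s[1] - 0.01 * g[1]]

print(s)  # a stationary point of the global cost
```

If run, the stationary point found here can be compared against the reported optimizer $s^{\star}$.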
### IV-B Example 2
Figure 3: Convergence performance of algorithm (4) for solving the distributed
parameter estimation problem. The measurements are independent Gaussian
samples from $\mathcal{N}(\mu,\sigma_{i})$, with mean
vector $\mu=[1,2,3]^{\operatorname{T}}$ and covariance matrix
$\sigma_{i}=0.05iI_{3},i=1,2,\ldots,5$. Huber loss parameter $\varsigma=0.5$.
In this example, we apply the fully distributed algorithm (4) to solve the
distributed parameter estimation problem, which is one of the most important
research topics in wireless sensor networks. The objective is to estimate a
parameter or function based on large amounts of data collected by sensors from
the environment. The distributed parameter estimation problem can be
reformulated as a DOP. Moreover, the estimator is typically the optimal
solution of the corresponding optimization problem. Please refer to [36] and
the references therein for more details.
More specifically, consider a group of $N$ sensors over general directed
networks that cooperatively estimate a parameter $s\in\mathbb{R}^{n}$ by
collecting private measurements. Suppose that each sensor $i$ collects $n_{i}$
measurements $Q_{ij}\in\mathbb{R}^{n},j=1,\ldots,n_{i}$. Let
$Q_{i}=\\{Q_{ij},j=1,\ldots,n_{i}\\}$ denote the private data set of sensor
$i$. Then the local cost function of sensor $i$ can be defined as follows,
$f_{i}(s)=\big{\|}\sum_{Q_{ij}\in Q_{i}}H(Q_{ij},s)\big{\|}_{1},$
where $H(\cdot,\cdot)$ represents the Huber loss function. It is noted that
$f_{i}(s)$ is nondifferentiable. In particular, the Huber loss function for
one-dimensional variable is defined as follows,
$H(Q_{ij},s)=\left\\{\begin{array}[]{l}\tfrac{\left(Q_{ij}-s\right)^{2}}{2},\text{
for }\left|Q_{ij}-s\right|\leq\varsigma,\\\
\varsigma\left|Q_{ij}-s\right|-\tfrac{\varsigma^{2}}{2},\text{ for
}\left|Q_{ij}-s\right|>\varsigma,\end{array}\right.$
where $\varsigma$ is a positive constant. Compared with the typical least
squares loss function, the Huber loss function takes a smaller value when the
difference between the measurements and the parameter to be estimated exceeds
the tolerance $\varsigma$. Therefore, it is less sensitive to outliers and
improves the robustness of the estimators. The corresponding DOP can be
formulated as follows,
$\min_{s\in\mathbb{R}^{n}}f(s)=\sum_{i=1}^{N}\big{\|}\sum_{Q_{ij}\in
Q_{i}}H(Q_{ij},s)\big{\|}_{1}.$ (32)
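The scalar Huber loss defined above can be sketched directly; the default threshold below matches the $\varsigma=0.5$ used in the simulation.

```python
def huber(q, s, varsigma=0.5):
    """Scalar Huber loss H(q, s) with threshold varsigma."""
    r = abs(q - s)
    if r <= varsigma:
        return r * r / 2.0                     # quadratic branch
    return varsigma * r - varsigma ** 2 / 2.0  # linear branch

# Quadratic for small residuals, linear (outlier-robust) for large ones;
# the two branches meet continuously at |q - s| = varsigma.
print(huber(0.5, 0.0))   # 0.125
print(huber(10.0, 0.0))  # 4.875
```

The linear branch is what keeps a single outlying measurement from dominating the estimator, unlike a least-squares loss.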
To proceed, consider a sensor network with communication topology depicted by
Fig. 1. Let the parameter to be estimated be taken as
$s=[1,2,3]^{\operatorname{T}}$. Assume that the data set of each sensor $i$
contains $500$ measurements. The measurements are independent Gaussian
samples from the distribution $\mathcal{N}(\mu,\sigma_{i})$
with mean vector $\mu=[1,2,3]^{\operatorname{T}}$ and covariance matrix
$\sigma_{i}=0.01iI_{3},i=1,\ldots,5$. Let the Huber loss parameter
$\varsigma=0.5$. We apply the proposed fully distributed algorithm (4) to
solve the distributed parameter estimation problem in (32). Let the initial
values of adaptive gains $\sigma_{i}$ be chosen as $\sigma_{i}(0)=1$, for
$i=1,\ldots,5$. The initial values of agent states $x_{i}$ and auxiliary
variables $v_{i}$ are randomly chosen.
The simulation result is shown in Fig. 3. It can be observed that
$|x_{i}^{1}-1|$, $|x_{i}^{2}-2|$ and $|x_{i}^{3}-3|$, $i=1,2,\ldots,5$, all
fall below $0.02$ within $100$ s. In other words, the trajectories of states
$x_{i},i=1,\ldots,5$ converge to a small neighborhood of
$s=[1,2,3]^{\operatorname{T}}$, which is the parameter to be estimated.
Therefore, it is shown that the distributed parameter estimation problem over
unbalanced directed networks can be solved by the fully distributed algorithm
(4).
## V Conclusion
In this paper, a novel fully distributed continuous-time algorithm has been
developed to solve the distributed nonconvex optimization problem over
unbalanced directed networks. The proposed adaptive algorithm design does not
need any prior global information concerning network connectivity or convexity
of local cost functions. Under mild assumptions, the proposed algorithm has
guaranteed that the agent states converge to the optimal solution that
minimizes the sum of the local cost functions. The effectiveness of the
developed fully distributed algorithm has been illustrated by two simulation
examples. Future work will focus on solving the fully distributed optimization
problem for systems with more general agent dynamics.
## References
* [1] D. K. Molzahn, F. Dörfler, H. Sandberg, S. H. Low, S. Chakrabarti, R. Baldick, and J. Lavaei, “A survey of distributed optimization and control algorithms for electric power systems,” _IEEE Transactions on Smart Grid_ , vol. 8, no. 6, pp. 2941–2962, 2017.
* [2] S. S. Ram, V. V. Veeravalli, and A. Nedić, “Distributed and recursive parameter estimation in parametrized linear state-space models,” _IEEE Transactions on Automatic Control_ , vol. 55, no. 2, pp. 488–492, 2010.
* [3] J. N. Tsitsiklis, “Problems in decentralized decision making and computation,” Ph.D. dissertation, MIT, Cambridge, 1984.
* [4] A. Nedić and A. Ozdaglar, “Distributed subgradient methods for multi-agent optimization,” _IEEE Transactions on Automatic Control_ , vol. 54, no. 1, pp. 48–61, 2009.
* [5] A. Nedić and A. Olshevsky, “Distributed optimization over time-varying directed graphs,” _IEEE Transactions on Automatic Control_ , vol. 60, no. 3, pp. 601–615, 2014.
* [6] C. Xi, V. S. Mai, R. Xin, E. H. Abed, and U. A. Khan, “Linear convergence in optimization over directed graphs with row-stochastic matrices,” _IEEE Transactions on Automatic Control_ , vol. 63, no. 10, pp. 3558–3565, 2018.
* [7] Q. Lü, X. Liao, H. Li, and T. Huang, “A nesterov-like gradient tracking algorithm for distributed optimization over directed networks,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , vol. 51, no. 10, pp. 6258–6270, 2020.
* [8] J. Wang and N. Elia, “Control approach to distributed optimization,” in _Proc. 48th IEEE Annual Allerton Conference on Communication, Control, and Computing (Allerton)_ , 2010, pp. 557–561.
* [9] B. Gharesifard and J. Cortes, “Distributed continuous-time convex optimization on weight-balanced digraphs,” _IEEE Transactions on Automatic Control_ , vol. 59, no. 3, pp. 781–786, 2013.
* [10] S. S. Kia, J. Cortés, and S. Martínez, “Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication,” _Automatica_ , vol. 55, pp. 254–264, 2015.
* [11] Z. Chen and H. Ji, “Distributed quantized optimization design of continuous-time multiagent systems over switching graphs,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , vol. 51, no. 11, pp. 7152–7163, 2020.
* [12] X. He, T. Huang, J. Yu, C. Li, and Y. Zhang, “A continuous-time algorithm for distributed optimization based on multiagent networks,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , vol. 49, no. 12, pp. 2700–2709, 2017.
* [13] T. Yang, X. Yi, J. Wu, Y. Yuan, D. Wu, Z. Meng, Y. Hong, H. Wang, Z. Lin, and K. H. Johansson, “A survey of distributed optimization,” _Annual Reviews in Control_ , vol. 47, pp. 278–305, 2019.
* [14] X. Li, G. Feng, and L. Xie, “Distributed proximal algorithms for multiagent optimization with coupled inequality constraints,” _IEEE Transactions on Automatic Control_ , vol. 66, no. 3, pp. 1223–1230, 2020.
* [15] X. Li, X. Yi, and L. Xie, “Distributed online optimization for multi-agent networks with coupled inequality constraints,” _IEEE Transactions on Automatic Control_ , vol. 66, no. 8, pp. 3575–3591, 2021.
* [16] D. P. Bertsekas, _Convex Optimization Theory_. Nashua NH, USA: Athena Scientific, 2009.
* [17] Y. Nesterov, _Introductory Lectures on Convex Programming: Volume I: Basic Course_ , 1998.
* [18] J. Bolte, S. Sabach, and M. Teboulle, “Proximal alternating linearized minimization for nonconvex and nonsmooth problems,” _Mathematical Programming_ , vol. 146, no. 1, pp. 459–494, 2014.
* [19] G. Scutari and Y. Sun, “Distributed nonconvex constrained optimization over time-varying digraphs,” _Mathematical Programming_ , vol. 176, no. 1, pp. 497–544, 2019.
* [20] T. Tatarenko and B. Touri, “Non-convex distributed optimization,” _IEEE Transactions on Automatic Control_ , vol. 62, no. 8, pp. 3744–3757, 2017.
* [21] A. Engelmann, Y. Jiang, B. Houska, and T. Faulwasser, “Decomposition of nonconvex optimization via bi-level distributed aladin,” _IEEE Transactions on Control of Network Systems_ , vol. 7, no. 4, pp. 1848–1858, 2020\.
* [22] X. Yi, S. Zhang, T. Yang, T. Chai, and K. H. Johansson, “Linear convergence of first-and zeroth-order primal-dual algorithms for distributed nonconvex optimization,” _IEEE Transactions on Automatic Control_ , vol. 67, no. 8, pp. 4194–4201, 2021.
* [23] S. Chen, A. Garcia, and S. Shahrampour, “On distributed nonconvex optimization: Projected subgradient method for weakly convex problems in networks,” _IEEE Transactions on Automatic Control_ , vol. 67, no. 2, pp. 662–675, 2021.
* [24] Y. Tang, J. Zhang, and N. Li, “Distributed zero-order algorithms for nonconvex multiagent optimization,” _IEEE Transactions on Control of Network Systems_ , vol. 8, no. 1, pp. 269–281, 2020.
* [25] F. Farina, A. Garulli, A. Giannitrapani, and G. Notarstefano, “A distributed asynchronous method of multipliers for constrained nonconvex optimization,” _Automatica_ , vol. 103, pp. 243–253, 2019.
* [26] X. Ren, D. Li, Y. Xi, and H. Shao, “Distributed global optimization for a class of nonconvex optimization with coupled constraints,” _IEEE Transactions on Automatic Control_ , vol. 67, no. 8, pp. 4322–4329, 2021.
* [27] Z. Li, Z. Ding, J. Sun, and Z. Li, “Distributed adaptive convex optimization on directed graphs via continuous-time algorithms,” _IEEE Transactions on Automatic Control_ , vol. 63, no. 5, pp. 1434–1441, 2017.
* [28] Y. Zhu, W. Yu, G. Wen, and W. Ren, “Continuous-time coordination algorithm for distributed convex optimization over weight-unbalanced directed networks,” _IEEE Transactions on Circuits and Systems II: Express Briefs_ , vol. 66, no. 7, pp. 1202–1206, 2018.
* [29] F. Bullo, _Lectures on Network Systems_. Kindle Direct Publishing, 2019.
* [30] P. A. Ioannou and J. Sun, _Robust Adaptive Control_. North Chelmsford, Massachusetts, USA: Courier Corporation, 2012\.
* [31] Z. Wu, Z. Li, Z. Ding, and Z. Li, “Distributed continuous-time optimization with scalable adaptive event-based mechanisms,” _IEEE Transactions on Systems, Man, and Cybernetics: Systems_ , vol. 50, no. 9, pp. 3252–3257, 2018\.
* [32] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” _IEEE Transactions on Automatic Control_ , vol. 49, no. 9, pp. 1520–1533, 2004.
* [33] J. Mei, W. Ren, and J. Chen, “Distributed consensus of second-order multi-agent systems with heterogeneous unknown inertias and control gains under a directed graph,” _IEEE Transactions on Automatic Control_ , vol. 61, no. 8, pp. 2019–2034, 2015.
* [34] H. K. Khalil, _Nonlinear Systems_ , 3rd ed. Upper Saddle River, NJ, USA: Prentice Hall, 2002.
* [35] D. S. Bernstein, _Matrix Mathematics: Theory, Facts, and Formulas_ , 2nd ed. Princeton University Press, 2009\.
* [36] M. G. Rabbat and R. D. Nowak, “Quantized incremental algorithms for distributed optimization,” _IEEE Journal on Selected Areas in Communications_ , vol. 23, no. 4, pp. 798–808, 2005.
The goal of NEAT is to make topological units modular. These can then be
combined in ways that are not predetermined. Our two questions for the
combination thus become:
1. How can we make CNNs modular?
2. How can these units be combined in a meaningful way?
Our first approach was simply taking NEAT and exchanging some of the neurons
for filters.
An example network can be seen here:
[Figure: approachone.png]
This approach is probably as modular as it gets; however, it brings various
problems when combining:
1. We ignore one of the main advantages of CNNs: being able to drastically lower the number of inputs through subsampling.
2. We don’t use pooling or ReLU layers.
3. The significance of a single classic neuron in such a system is questionable.
4. The filters in the same layer have to have some way of communicating to form a convolution.
5. Adding a new filter in a convolution conflicts with previously learned parameters.
We can’t address all of these conflicts in a satisfactory way, so we decided
to move on to the next approach.
We addressed issue 3 by separating the whole network into a convolutional and
a fully connected part. This allows us to tackle issue 1 by adding the concept
of a _minimal network_ , inspired by NEAT’s practice of always starting with
combining all inputs with all outputs.
In our case, the minimal network would incorporate some combination of
convolution and pooling to reduce the input space. While the exact form of it
is debatable, we think a good starting point is LeNet, as it has proved itself
to be flexible in its applications [YannLeCun1998].
The overhauled version would start out like this:
[Figure: approachtwoinit.png]
And could evolve into something like this:
[Figure: approachtwoevolved.png]
This setup is problematic because filters are supposed to work together to
form convolutions.
To process the same input (issue 4), the filters need to have the same size,
which we cannot guarantee once we randomly insert new filters or, as per issue
2, pooling layers.
We can only scale the weight matrices in the filters to the same size by
either filling the smaller ones with meaningless zeros or pooling the bigger
one down, which, being a lossy compression, makes our matrix less accurate.
We come to the conclusion that we have to limit the modularity of the filters,
as doing otherwise brings too many drawbacks. Instead of letting the filters connect
to whatever they want, we group them in convolutions.
These can alter the dimensionality of all filters in them at once,
guaranteeing homogeneity and encapsulation.
With the filters now being synchronized in their convolutions, we have no more
problems introducing poolers or ReLUs, as a convolution as a whole doesn’t
care about the size of its input matrix.
Our updated pool of available units for stochastic insertion is now:
Convolutional | Fully connected
---|---
Convolution | Neuron
Pooler |
ReLU |
Our starting topology now looks like this:
[Figure: approachthree.png]
The possible developments consist of a chain of random units right after
LeNet. This raises a new question: _How is the meaning of the fully connected
part altered when we add a new unit in the convolutional part?_
After detailed evaluation, we came to the conclusion that all of the
parameters in the fully connected part would be fine-tuned to a specific
expected input. This expectation, however, ceases to be met once the
dimensionality of the convolutions changes, as this shifts a lot of weight
parameters towards a new meaning.
This means that we have two choices on how to process the fully connected part
in case of a topological change in the convolutional part:
1. Adjust the weights for the new meaning.
2. Trash the fully connected part and train it anew.
Neither of these possibilities is satisfactory. Option 1 will take a long
time, since the already trained fully connected structure is now basically
meaningless. Option 2 throws away big, otherwise perfectly usable, parts of
the network.
After some research into this problem we found a recent paper describing how
to get rid of the fully connected layer completely by using a global average
pooler [Lin2014].
If we treat the feature map matrix $F$ of the $l$th channel as a vector
$F^{\prime}_{l}$, the global average pooler is defined as follows:
$f(F^{\prime}_{l})=\frac{\sum_{i=1}^{n}F^{\prime}_{li}}{n}$
We then forward the pooled result of every feature map to the softmax layer.
Provided the last layer of the convolutional part outputs a tensor with
exactly as many feature maps as the number of possible network outputs, we can
exchange the complete fully connected part for this global average pooler
while achieving the same results with a drastically improved performance in
both evaluation and search space [Lin2014].
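A minimal sketch of the global average pooler defined above, using plain Python lists and hypothetical feature-map values:

```python
def global_average_pool(feature_maps):
    """Collapse each feature map (a 2-D list) to its scalar mean.
    Input: list of l feature maps; output: vector of l confidences."""
    out = []
    for fmap in feature_maps:
        vals = [v for row in fmap for v in row]
        out.append(sum(vals) / len(vals))
    return out

# Three 2x2 feature maps -> three class confidences for the softmax.
maps = [[[1, 1], [1, 1]],
        [[0, 2], [4, 6]],
        [[0, 0], [0, 8]]]
print(global_average_pool(maps))  # [1.0, 3.0, 2.0]
```

The pooler has no trainable parameters, which is exactly why a topology change in the convolutional part no longer invalidates anything downstream of it.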
The reason is, in a nutshell, that we stop imagining the output of a filter as
the detection of a feature.
We now treat it as a confidence score: the bigger the numbers, the more
confident we are that the feature is present.
This means that the feature detection is no longer performed by the fully
connected part, but instead by every single filter in the network together
[Lin2014]. Our standard network now looks like this:
[Figure: approachfour.png]
And could develop into something like this:
[Figure: approachfourevol.png]
We now seem to have resolved all issues. However, when looking at the layers
of the example, we see that it has a depth of 8 logical layers (ReLU layers
are not counted because they do not result in a feature extraction, as they
are merely activation functions). Such depth is very atypical and has been
shown to result in various problems, such as very high hardware requirements
and lower accuracies [Simonyan2015].
The fundamental problem is that the effect of a change in the parameters of a
lower layer becomes vanishingly small compared to a change in the higher ones
[Simonyan2015] [Hochreiter1991]. A network of this size is not realistically
trainable by us. A recent paper by researchers now at the Facebook AI Research
group deals with these issues.
They introduce the concept of Residual Networks, in short ResNets
[KaimingHe2015].
Their goal was to create a convolutional network by combining an arbitrary
number of well-defined residual units for which these problems are of no
concern. Overly simplified, they address the problem of varying influence by
adding a new kind of connection, called a shortcut.
What it does is simply add matrices. If they have different dimensionalities,
the smaller matrix gets projected onto the bigger one by a one-by-one
convolution with the appropriate number of filters.
A residual block looks like this:
[Figure: shortcut.png]
On the left side, a convolutional action takes place (in this case two
convolutions with one ReLU activation in between). On the right side, the
original input of the residual block is added to its output.
This overlay guarantees that the convolutions cannot alter the original state
too much, as they now merely highlight features as opposed to extracting them.
The issue of performance is addressed by applying a bottleneck.
This means downsampling the input dimensionality of the residual block by
applying one-by-one convolutions before performing the convolution and then
upscaling it again. This procedure is inspired by Google’s Network In Network
Inception structure [KaimingHe2015] [Lin2014].
The overhauled residual block now looks like this:
[Figure: bottleneck.png]
While more convolutions would in theory be possible, only one is used, as the
bottleneck dimension poolers introduce new parameters themselves.
This method has been demonstrated to achieve very similar levels of accuracy
while eliminating a big chunk of the computational cost [KaimingHe2015].
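A minimal sketch of such a bottleneck residual unit, restricted to a single spatial position so that each one-by-one convolution reduces to a channel-mixing matrix. All weights below are hypothetical, and the middle convolution is modelled as just another channel mix for brevity.

```python
def conv1x1(x, w):
    """A 1x1 convolution at one spatial position: x is the channel
    vector, w is an (out_channels x in_channels) weight matrix."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def bottleneck_block(x, w_down, w_mid, w_up):
    """Bottleneck residual unit: 1x1 down-projection, a middle channel
    mix (standing in for the 3x3 convolution), 1x1 up-projection, and
    finally the identity shortcut x added back onto the result."""
    h = [max(0.0, v) for v in conv1x1(x, w_down)]   # reduce channels + ReLU
    h = [max(0.0, v) for v in conv1x1(h, w_mid)]    # bottleneck transform
    h = conv1x1(h, w_up)                            # restore channel count
    return [hi + xi for hi, xi in zip(h, x)]        # shortcut addition

x = [1.0, 2.0, 3.0, 4.0]                 # 4 input channels
w_down = [[0.25, 0.25, 0.25, 0.25],      # 4 -> 2 channels
          [0.0, 0.0, 0.0, 1.0]]
w_mid = [[1.0, 0.0], [0.0, 1.0]]         # identity mix in the bottleneck
w_up = [[1.0, 0.0], [0.0, 1.0],          # 2 -> 4 channels
        [1.0, 1.0], [0.0, 0.0]]
print(bottleneck_block(x, w_down, w_mid, w_up))  # [3.5, 6.0, 9.5, 4.0]
```

Note how the output stays close to the input `x`: the convolutional branch only highlights features on top of the identity shortcut rather than replacing them.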
# QED Fermions in a noisy magnetic field background
Jorge David Castaño-Yepes, Facultad de Física, Pontificia Universidad Católica de Chile, Vicuña Mackenna 4860, Santiago, Chile

Marcelo Loewe, Facultad de Física, Pontificia Universidad Católica de Chile, Vicuña Mackenna 4860, Santiago, Chile; Centre for Theoretical and Mathematical Physics, and Department of Physics, University of Cape Town, Rondebosch 7700, South Africa; Centro Científico Tecnológico de Valparaíso CCTVAL, Universidad Técnica Federico Santa María, Casilla 110-V, Valparaíso, Chile; Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, Santiago, Chile

Enrique Muñoz, Facultad de Física, Pontificia Universidad Católica de Chile, Vicuña Mackenna 4860, Santiago, Chile; Center for Nanotechnology and Advanced Materials CIEN-UC, Avenida Vicuña Mackenna 4860, Santiago, Chile

Juan Cristóbal Rojas, Departamento de Física, Universidad Católica del Norte, Angamos 610, Antofagasta, Chile

Renato Zamora, Centro de Investigación y Desarrollo en Ciencias Aeroespaciales (CIDCA), Fuerza Aérea de Chile, Casilla 8020744, Santiago, Chile; Instituto de Ciencias Básicas, Universidad Diego Portales, Casilla 298-V, Santiago, Chile
###### Abstract
We consider the effects of a noisy magnetic field background over the fermion
propagator in QED, as an approximation to the spatial inhomogeneities that
would naturally arise in certain physical scenarios, such as heavy-ion
collisions or the quark-gluon plasma in the early stages of the evolution of
the Universe. We consider a classical, finite and uniform average magnetic
field background $\langle\mathbf{B}(\mathbf{x})\rangle=\mathbf{B}$, subject to
white-noise spatial fluctuations with auto-correlation of magnitude
$\Delta_{B}$. Using the Schwinger representation of the propagator in the
average magnetic field as a reference system, we apply the replica formalism
to study the effects of the magnetic noise in the form of renormalized
quasi-particle parameters, leading to an effective charge and an effective
refraction index that depend not only on the energy scale, as usual, but also
on the magnitude of the noise $\Delta_{B}$ and the average field $\mathbf{B}$.
## I Introduction
High-energy physics under the presence of strong magnetic fields is an
important subject of research in many scenarios, such as heavy-ion collisions
Alam _et al._ (2021); Ayala _et al._ (2022); Inghirami, Gabriele _et al._
(2020); Ayala _et al._ (2020a, 2017), the quark-gluon plasma Busza _et al._
(2018); Hattori and Satow (2016, 2018); Buballa (2005) and the early-universe
evolution Inghirami, Gabriele _et al._ (2020); Blaschke _et al._ (2020). In
such systems, rather strong magnetic fields can emerge in comparatively small
regions of space and, moreover, strong spatial anisotropies and fluctuations
can develop in the magnitude of such fields Inghirami, Gabriele _et al._
(2020); Alam _et al._ (2021).
Remarkably, magnetic fields can influence the physical properties of both
charged and neutral particles, the latter due to the quantum mechanical
fluctuations of the vacuum that lead to the creation of virtual charged
fermion-antifermion pairs. In the context of high-energy physics, the effect
of a constant and “classical” magnetic field background has been studied since
the seminal work of Schwinger Schwinger (1951), followed with extensive
discussions in the literature in the context of semi-classical effective
Lagrangians Dittrich and Reuter (1985); Dittrich and Gies (2000). More
recently, the effect of magnetic fields on nucleon parameters has been
discussed in the context of QCD Dominguez _et al._ (2020). In addition,
several studies have been reported concerning the effects of a classical,
static and uniform background magnetic field on the charged vacuum
fluctuations leading to the gluon polarization tensor Hattori and Itakura
(2013); Hattori and Satow (2016); Ayala _et al._ (2020b); Hattori and Satow
(2018), in particular the role of the field in the breaking of the Lorentz
invariance, thus predicting the emergence of the vacuum birefringence
phenomena Hattori and Itakura (2013); Ayala _et al._ (2020b). On the other
hand, vacuum fluctuations also affect the propagation of fermions in such
magnetized background Ayala, Alejandro _et al._ (2021), as expressed by the
self-energy, that leads to the definition of a magnetic mass and, according to
recent studies in QED Ayala _et al._ (2021), to an spectral width involving
the contribution of all the magnetic Landau levels. Moreover, non-perturbative
theoretical approaches Miransky and Shovkovy (2015) reveal the magnetic
catalysis effect, where the presence of a strong uniform magnetic field leads
to the emergence of effective masses for the fermion species, regardless of
their bare mass.
Interestingly, in most of these studies, the background magnetic field is
always idealized as static and uniform, and hence the presence of spatial
anisotropies or fluctuations in its magnitude are disregarded in the state of
the art of such calculations. A non-uniform but deterministic background
magnetic field has been studied in QED by means of a path-integral formulation
Gies and Roessler (2011). On the other hand, statistical fluctuations near a
zero average magnetic field have been studied in the context of
QED${}_{2+1}$ Zhao _et al._ (2017); Gusynin _et al._ (2001), as it arises as
an effective continuum theory for certain low-dimensional Dirac materials
such as graphene Miransky and Shovkovy (2015). In the latter works, however,
the average background field is assumed to be zero, and hence the reference
system is characterized
by a free fermion propagator rather than by a Schwinger propagator as we
consider in the present study. Since spatial fluctuations with respect to a
finite background magnetic field may indeed exist in the different
aforementioned physical scenarios Inghirami, Gabriele _et al._ (2020), in the
present work we shall study their effect over the renormalization of the
fermion propagator itself in QED. As we shall discuss in this article, a
perturbative treatment of such fluctuations in the framework of the replica
method Mèzard and Parisi (1991); Kardar _et al._ (1986) allows us to show
that their effect can be captured in terms of a renormalization of the charge
$e\rightarrow z_{3}e$ and an effective refractive index $v^{\prime}/c=z^{-1}$.
Moreover, we show that $z_{3}$ and $z$ depend not only on the energy scale, as
usual, but also on the magnitude of the average magnetic field $|\mathbf{B}|$,
as well as on the strength of its spatial fluctuations, that we define as
$\Delta_{B}$.
## II The model
We shall consider a physical scenario where a classical and static magnetic
field background, possessing random spatial fluctuations, modifies the quantum
dynamics of a system of fermions. For this purpose, we shall assume the
standard QED theory involving fermionic fields $\psi(x)$, as well as gauge
fields $A^{\mu}(x)$. For the latter, we shall distinguish three physically
different contributions
$\displaystyle A^{\mu}(x)\rightarrow A^{\mu}(x)+A^{\mu}_{\text{BG}}(x)+\delta
A^{\mu}_{\text{BG}}(\mathbf{x}).$ (1)
Here, $A^{\mu}(x)$ represents the dynamical photonic quantum field, while BG
stands for “background”, representing the presence of a classical external
field imposed by the experimental conditions. Moreover, for this BG
contribution, we consider the effect of static (quenched) white noise spatial
fluctuations $\delta A^{\mu}_{\text{BG}}(\mathbf{x})$ with respect to the mean
value $A_{\text{BG}}^{\mu}(x)$, satisfying the statistical properties
$\displaystyle\langle\delta A^{j}_{\text{BG}}(\mathbf{x})\delta
A^{k}_{\text{BG}}(\mathbf{x}^{\prime})\rangle$ $\displaystyle=$
$\displaystyle\Delta_{B}\delta_{j,k}\delta^{3}(\mathbf{x}-\mathbf{x}^{\prime}),$
$\displaystyle\langle\delta A^{\mu}_{\text{BG}}(\mathbf{x})\rangle$
$\displaystyle=$ $\displaystyle 0.$ (2)
These statistical properties are represented by a Gaussian functional
distribution of the form
$\displaystyle dP\left[\delta A^{\mu}_{\text{BG}}\right]=\mathcal{N}e^{-\int
d^{3}x\,\frac{\left[\delta
A_{\text{BG}}^{\mu}(\mathbf{x})\right]^{2}}{2\Delta_{B}}}\mathcal{D}\left[\delta
A_{\text{BG}}^{\mu}(\mathbf{x})\right].$ (3)
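For a concrete picture of the quenched noise defined by Eqs. (2) and (3), one may discretize space on a cubic lattice of spacing $a$, where the Dirac delta becomes $\delta_{\mathbf{x},\mathbf{x}^{\prime}}/a^{3}$. The following sketch (with arbitrary toy values of $\Delta_{B}$ and $a$, not taken from the text) samples such configurations and checks the first two moments:

```python
import numpy as np

# Quenched white-noise fluctuations of the background vector potential,
# Eq. (2), discretized on a lattice of spacing a: the continuum delta
# function becomes delta_{x,x'} / a**3, so each site-component is an
# independent Gaussian of variance Delta_B / a**3.
rng = np.random.default_rng(0)
Delta_B, a, n_sites, n_samples = 0.5, 0.1, 4, 20000

sigma = np.sqrt(Delta_B / a**3)
# shape: (disorder samples, spatial components j=1..3, lattice sites)
dA = rng.normal(0.0, sigma, size=(n_samples, 3, n_sites))

mean = dA.mean(axis=0)                           # should vanish, Eq. (2)
var_same = (dA[:, 0, 0] * dA[:, 0, 0]).mean()    # ~ Delta_B / a**3
cov_diff = (dA[:, 0, 0] * dA[:, 1, 1]).mean()    # ~ 0 (different j and x)

print(var_same * a**3)   # ~ Delta_B
```

The same-point variance reproduces $\Delta_{B}/a^{3}$, while distinct components and sites decorrelate, as required by the delta functions in Eq. (2).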
Therefore, we write the Lagrangian for this model as a superposition of two
terms
$\displaystyle\mathcal{L}=\mathcal{L}_{\text{FBG}}+\mathcal{L}_{\text{NBG}},$
(4)
where the first represents the system of fermions (and photons) immersed in
the deterministic background field (FBG)
$\displaystyle\mathcal{L}_{\text{FBG}}=\bar{\psi}\left(\mathrm{i}\not{\partial}-e\not{A}_{\text{BG}}-e\not{A}-m\right)\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu},$
(5)
while the second term represents the interaction between the fermions and the
classical noise (NBG)
$\displaystyle\mathcal{L}_{\text{NBG}}=\bar{\psi}\left(-e\delta\not{A}_{\text{BG}}\right)\psi.$
(6)
The generating functional (in the absence of sources) for a given realization
of the noisy fields is given by
$\displaystyle Z[A]=\int\mathcal{D}[\bar{\psi},\psi]e^{\mathrm{i}\int
d^{4}x\left[\mathcal{L}_{\text{FBG}}+\mathcal{L}_{\text{NBG}}\right]}.$ (7)
To study the physics of this system, we need to calculate the statistical
average $\overline{\ln Z}$ of the logarithm of the generating functional over
the magnetic background noise $\delta A_{\text{BG}}^{\mu}$. For this purpose,
we apply the replica method, which is
based on the following identity Mézard and Parisi (1991)
$\displaystyle\overline{\ln Z[A]}=\lim_{n\rightarrow
0}\frac{\overline{Z^{n}[A]}-1}{n}.$ (8)
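The replica identity Eq. (8) can be illustrated with a toy partition function $Z=e^{g}$, with $g$ Gaussian of mean $\mu$ and variance $s^{2}$, for which both sides are known in closed form: $\overline{Z^{n}}=e^{n\mu+n^{2}s^{2}/2}$ and $\overline{\ln Z}=\mu$. The numerical values below are arbitrary:

```python
import numpy as np

# Toy illustration of the replica identity, Eq. (8): for Z = exp(g) with
# g ~ N(mu, s^2), both sides are known in closed form,
#   mean(Z^n) = exp(n*mu + n^2 s^2 / 2)  and  mean(ln Z) = mu,
# so (mean(Z^n) - 1)/n must approach mu as n -> 0.
mu, s = 0.7, 1.3

def replica_estimate(n):
    # expm1 avoids catastrophic cancellation for very small n
    return np.expm1(n * mu + 0.5 * n**2 * s**2) / n

for n in (0.1, 0.01, 0.001):
    print(n, replica_estimate(n))   # approaches mu = 0.7
```

The estimate deviates from $\mu$ only at $\mathcal{O}(n)$, in line with the $n\rightarrow 0$ limit of Eq. (8).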
Here, we defined the statistical average according to the Gaussian functional
measure of Eq. (3), and $Z^{n}$ is obtained by incorporating an additional
“replica” component for each of the Fermion fields, i.e.
$\psi(x)\rightarrow\psi^{a}(x)$, for $1\leq a\leq n$. The “replicated”
Lagrangian has the same form as Eqs. (5) and (6), but with an additional sum
over the replica components of the Fermion fields. Therefore, the averaging
procedure leads to
$\displaystyle\overline{Z^{n}[A]}$ $\displaystyle=$
$\displaystyle\int\prod_{a=1}^{n}\mathcal{D}[\bar{\psi}^{a},\psi^{a}]\int\mathcal{D}\left[\delta
A_{\text{BG}}^{\mu}\right]e^{-\int d^{3}x\,\frac{\left[\delta
A_{\text{BG}}^{\mu}(\mathbf{x})\right]^{2}}{2\Delta_{B}}}$ (9)
$\displaystyle\times e^{\mathrm{i}\int
d^{4}x\sum_{a=1}^{n}\left(\mathcal{L}_{\text{FBG}}[\bar{\psi}^{a},\psi^{a}]+\mathcal{L}_{\text{NBG}}[\bar{\psi}^{a},\psi^{a}]\right)}$
$\displaystyle=$
$\displaystyle\int\prod_{a=1}^{n}\mathcal{D}[\bar{\psi}^{a},\psi^{a}]e^{\mathrm{i}\bar{S}\left[\bar{\psi}^{a},\psi^{a};A\right]},$
where in the last step we explicitly performed the Gaussian integral over the
background noise, leading to the definition of the effective averaged action
for the replica system
$\displaystyle\bar{S}\left[\bar{\psi}^{a},\psi^{a};A\right]$ $\displaystyle=$
$\displaystyle\int
d^{4}x\left(\sum_{a}\bar{\psi}^{a}\left(\mathrm{i}\not{\partial}-e\not{A}_{\text{BG}}-e\not{A}-m\right)\psi^{a}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\right)$
(10) $\displaystyle+$ $\displaystyle\mathrm{i}\frac{e^{2}\Delta_{B}}{2}\int
d^{4}x\int
d^{4}y\sum_{a,b}\sum_{j=1}^{3}\bar{\psi}^{a}(x)\gamma^{j}\psi^{a}(x)\bar{\psi}^{b}(y)\gamma_{j}\psi^{b}(y)\delta^{3}(\mathbf{x}-\mathbf{y}).$
Clearly, we end up with an effective interacting theory, with an instantaneous
local interaction proportional to the fluctuation amplitude $\Delta_{B}$ that
characterizes the magnetic noise, as defined in Eq. (2). The “free” part of
the action corresponds to Fermions in the average background classical field
$A_{\text{BG}}^{\mu}(x)$. We choose this background to represent a uniform,
static magnetic field along the $z$-direction $\mathbf{B}=\hat{e}_{3}B$, using
the gauge Dittrich and Reuter (1985)
$\displaystyle A_{\text{BG}}^{\mu}(x)=\frac{1}{2}(0,-Bx^{2},Bx^{1},0).$ (11)
Therefore, this allows us to use directly the Schwinger proper-time
representation of the free-Fermion propagator dressed by the background field,
as follows Schwinger (1951); Dittrich and Reuter (1985)
$\displaystyle\left[S_{\text{F}}(k)\right]_{a,b}$
$\displaystyle=-\mathrm{i}\delta_{a,b}\int_{0}^{\infty}\frac{d\tau}{\cos(eB\tau)}e^{\mathrm{i}\tau\left(k_{\parallel}^{2}-\mathbf{k}_{\perp}^{2}\frac{\tan(eB\tau)}{eB\tau}-m^{2}+\mathrm{i}\epsilon\right)}$
(12)
$\displaystyle\times\left\\{\left[\cos(eB\tau)+\mathrm{i}\gamma^{1}\gamma^{2}\sin(eB\tau)\right](m+\not{k}_{\parallel})\right.$
$\displaystyle\left.+\frac{\not{k}_{\perp}}{\cos(eB\tau)}\right\\},$
which is clearly diagonal in replica space. Here, as usual, we separated the
parallel from the perpendicular directions with respect to the background
external magnetic field by splitting the metric tensor as
$g^{\mu\nu}=g_{\parallel}^{\mu\nu}+g_{\perp}^{\mu\nu}$, with
$\displaystyle g_{\parallel}^{\mu\nu}$ $\displaystyle=$
$\displaystyle\text{diag}(1,0,0,-1),$ $\displaystyle g_{\perp}^{\mu\nu}$
$\displaystyle=$ $\displaystyle\text{diag}(0,-1,-1,0),$ (13)
thus implying that for any 4-vector, such as the momentum $k^{\mu}$, we write
$\displaystyle\not{k}=\not{k}_{\perp}+\not{k}_{\parallel},$ (14)
and
$\displaystyle k^{2}=k_{\parallel}^{2}-\mathbf{k}_{\perp}^{2}.$ (15)
In particular, $k_{\parallel}^{2}=k_{0}^{2}-k_{3}^{2}$, while
$\mathbf{k}_{\perp}=(k^{1},k^{2})$ is the Euclidean 2-vector lying in the
plane perpendicular to the field, such that its square-norm is
$\mathbf{k}_{\perp}^{2}=k_{1}^{2}+k_{2}^{2}$. The Schwinger propagator can be
expressed as
$\displaystyle\left[S_{\text{F}}(k)\right]_{a,b}=-\mathrm{i}\delta_{a,b}\left[\left(m+\not{k}\right)\mathcal{A}_{1}\right.$
$\displaystyle\left.+(\mathrm{i}eB)\mathrm{i}\gamma^{1}\gamma^{2}\left(m+\not{k}_{\parallel}\right)\frac{\partial\mathcal{A}_{1}}{\partial\mathbf{k}_{\perp}^{2}}+\left(\mathrm{i}eB\right)^{2}\not{k}_{\perp}\frac{\partial^{2}\mathcal{A}_{1}}{\partial(\mathbf{k}_{\perp}^{2})^{2}}\right]$
$\displaystyle=-\mathrm{i}\delta_{a,b}\left[\left(m+\not{k}_{\parallel}\right)\mathcal{A}_{1}+\mathrm{i}\gamma^{1}\gamma^{2}\left(m+\not{k}_{\parallel}\right)\mathcal{A}_{2}+\mathcal{A}_{3}\not{k}_{\perp}\right]$
(16)
Here, we defined the function
$\displaystyle\mathcal{A}_{1}(k,B)=\int_{0}^{\infty}d\tau
e^{\mathrm{i}\tau\left(k_{\parallel}^{2}-m^{2}+\mathrm{i}\epsilon\right)-\mathrm{i}\frac{\mathbf{k}_{\perp}^{2}}{eB}\tan(eB\tau)},$
(17)
which clearly reproduces the scalar propagator (with Feynman
prescription) in the zero-field limit
$\displaystyle\lim_{B\rightarrow
0}\mathcal{A}_{1}(k,B)=\frac{\mathrm{i}}{k^{2}-m^{2}+\mathrm{i}\epsilon}\equiv\frac{\mathrm{i}}{\mathcal{D}_{0}(k)},$
(18)
with
$\displaystyle\mathcal{D}_{0}(k)=k^{2}-m^{2}+\mathrm{i}\epsilon,$ (19)
and its derivatives
$\displaystyle\mathcal{A}_{2}(k,B)$ $\displaystyle\equiv$
$\displaystyle\int_{0}^{\infty}d\tau~{}\tan(eB\tau)e^{\mathrm{i}\tau\left(k_{\parallel}^{2}-t_{B}({\tau})\mathbf{k}_{\perp}^{2}-m^{2}+\mathrm{i}\epsilon\right)}$
(20a) $\displaystyle=$
$\displaystyle\mathrm{i}eB\frac{\partial\mathcal{A}_{1}}{\partial(\mathbf{k}_{\perp}^{2})},$
$\displaystyle\mathcal{A}_{3}(k,B)$ $\displaystyle\equiv$
$\displaystyle\int_{0}^{\infty}\frac{d\tau}{\cos^{2}(eB\tau)}e^{\mathrm{i}\tau\left(k_{\parallel}^{2}-t_{B}({\tau})\mathbf{k}_{\perp}^{2}-m^{2}+\mathrm{i}\epsilon\right)}$
(20b) $\displaystyle=$
$\displaystyle\mathcal{A}_{1}+(\mathrm{i}eB)^{2}\frac{\partial^{2}\mathcal{A}_{1}}{\partial(\mathbf{k}_{\perp}^{2})^{2}}.$
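The zero-field limit Eq. (18) can be checked numerically; the underlying proper-time identity holds exactly for any finite $\epsilon>0$, and the values of $k^{2}-m^{2}$ and $\epsilon$ below are arbitrary test choices:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the zero-field limit, Eq. (18): for B -> 0 the
# proper-time integral reduces to
#   int_0^inf dtau exp[i tau (k^2 - m^2 + i eps)] = i / D_0(k),
# with D_0(k) = k^2 - m^2 + i eps.  We pick k^2 - m^2 = -2 (spacelike);
# the damping exp(-eps*tau) makes the oscillatory integral convergent.
a, eps = -2.0, 0.1

re, _ = quad(lambda t: np.cos(a * t) * np.exp(-eps * t), 0, 300, limit=500)
im, _ = quad(lambda t: np.sin(a * t) * np.exp(-eps * t), 0, 300, limit=500)
numeric = re + 1j * im
exact = 1j / (a + 1j * eps)

print(abs(numeric - exact))   # ~ 0
```

The finite upper limit of 300 is harmless here, since the integrand has already decayed by a factor $e^{-30}$.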
Moreover, with these definitions it is straightforward to verify that the
inverse of the Schwinger propagator Eq. (16) is given by:
$\displaystyle\hat{S}_{\text{F}}^{-1}(k)$ $\displaystyle=$
$\displaystyle\frac{\mathrm{i}}{\mathcal{D}(k)}\left[\left(m-\not{k}_{\parallel}\right)\mathcal{A}_{1}-\mathrm{i}\gamma^{1}\gamma^{2}\left(m-\not{k}_{\parallel}\right)\mathcal{A}_{2}\right.$
(21) $\displaystyle\left.-\mathcal{A}_{3}\not{k}_{\perp}\right],$
where
$\displaystyle\mathcal{D}(k)=\mathcal{A}_{3}^{2}\mathbf{k}_{\perp}^{2}-\left(\mathcal{A}_{1}^{2}-\mathcal{A}_{2}^{2}\right)\left(k_{\parallel}^{2}-m^{2}\right).$
(22)
Then, all the relevant expressions will be given in terms of
$\mathcal{A}_{1}$.
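As an algebraic consistency check (not part of the derivation itself), the inversion relation between Eqs. (16) and (21)-(22) can be verified with explicit Dirac matrices; the numerical values of $m$, $k^{\mu}$ and of the coefficients $\mathcal{A}_{1,2,3}$ below are arbitrary:

```python
import numpy as np

# Consistency check of Eqs. (16), (21), (22): with Dirac matrices in the
# Dirac representation, the product S_F^{-1} S_F must be the 4x4 identity
# for generic scalar coefficients A1, A2, A3.
I2 = np.eye(2)
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, 0 * I2], [0 * I2, -I2]])
g = [np.block([[0 * I2, sk], [-sk, 0 * I2]]) for sk in s]   # gamma^1,2,3

m, k0, k1, k2, k3 = 1.0, 0.9, 0.4, 0.3, 0.2
A1, A2, A3 = 1.3 + 0.2j, 0.5 - 0.1j, 0.7 + 0.4j

kpar = k0 * g0 - k3 * g[2]        # slash k_parallel
kperp = -k1 * g[0] - k2 * g[1]    # slash k_perp
G = 1j * g[0] @ g[1]              # i gamma^1 gamma^2
I4 = np.eye(4)

# Eq. (16) and Eqs. (21)-(22), with the replica delta stripped off
S = -1j * ((m * I4 + kpar) * A1 + G @ (m * I4 + kpar) * A2 + A3 * kperp)
D = A3**2 * (k1**2 + k2**2) - (A1**2 - A2**2) * (k0**2 - k3**2 - m**2)
Sinv = (1j / D) * ((m * I4 - kpar) * A1 - G @ (m * I4 - kpar) * A2
                   - A3 * kperp)

print(np.allclose(Sinv @ S, I4))   # True
```

The check relies only on the Clifford algebra: $(\mathrm{i}\gamma^{1}\gamma^{2})^{2}=1$, $\not{k}_{\perp}^{2}=-\mathbf{k}_{\perp}^{2}$, and the (anti)commutation of $\mathrm{i}\gamma^{1}\gamma^{2}$ with the parallel and perpendicular slashes.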
## III Perturbation theory: Self-energy and vertex corrections
Our goal is to develop a perturbation theory in powers of $\Delta$ where, as
described in Section II and particularly in Eq. (10), the effective fermion-
fermion interaction arises as a result of averaging over the background
magnetic noise. Starting from a free Fermion propagator, as defined by Eq.
(12), we include the magnetic noise-induced interaction effects by “dressing”
the propagator with a self-energy, as shown diagrammatically in the Dyson
equation depicted in Fig. 1. We remark that for this theory, the skeleton
diagram for the self-energy is represented in Fig. 2.
Figure 1: Dyson equation for the “dressed” propagator (double line), in terms
of the free propagator (single line) and the self-energy $\Sigma$. Figure 2:
Skeleton diagram representing the self-energy for the effective interacting
theory. The dashed line is the disorder-induced interaction $\Delta_{B}$,
while the box $\hat{\Gamma}$ represents the 4-point vertex function.
## IV Self-energy at order $\Delta$
Figure 3: Self-energy diagram at first order in $\Delta=e^{2}\Delta_{B}$.
It is possible to express the background-noise contribution to the self-energy
(where for notational simplicity, we define the parameter $\Delta\equiv
e^{2}\Delta_{B}$), as depicted in the Feynman diagram in Fig. 3, by the
integral expression
$\displaystyle\hat{\Sigma}_{\Delta}(q)$ $\displaystyle=$
$\displaystyle(\mathrm{i}\Delta)\int\frac{d^{3}p}{(2\pi)^{3}}\gamma^{j}\hat{S}_{F}(p+q;p_{0}=0)\gamma_{j}$
$\displaystyle=$ $\displaystyle\frac{\mathrm{i}(\mathrm{i}\Delta)}{(2\pi)^{3}}\int
d^{3}p\left\\{3\left(\gamma^{0}q_{0}-m\right)\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp})\right.$
$\displaystyle\left.+\mathrm{i}\gamma^{1}\gamma^{2}\left(m-q_{0}\gamma^{0}\right)(\mathrm{i}eB)\frac{\partial}{\partial\mathbf{p}_{\perp}^{2}}\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp})\right\\}.$
The derivative term in the last expression can be integrated in cylindrical
coordinates $d^{3}p=\pi dp_{3}d(\mathbf{p}_{\perp}^{2})$, as follows
$\displaystyle\int
d^{3}p\,\frac{\partial}{\partial\mathbf{p}_{\perp}^{2}}\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp})$
$\displaystyle=\pi\int_{-\infty}^{+\infty}dp_{3}\int_{0}^{\infty}d(\mathbf{p}_{\perp}^{2})\frac{\partial}{\partial\mathbf{p}_{\perp}^{2}}\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp})$
$\displaystyle=-\pi\int_{-\infty}^{+\infty}dp_{3}\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp}=0),$
(24)
where the identity
$\lim_{\mathbf{p}_{\perp}^{2}\rightarrow\infty}\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp})=0$
was applied. Substituting this result into the expression for
$\hat{\Sigma}_{\Delta}(q)$ above, we finally obtain the exact expression
(valid at all orders in the average background magnetic field $B$)
$\displaystyle\hat{\Sigma}_{\Delta}(q)$ $\displaystyle=$
$\displaystyle\frac{\mathrm{i}(\mathrm{i}\Delta)}{(2\pi)^{3}}\left[3\left(\gamma^{0}q_{0}-m\right)\widetilde{\mathcal{A}}_{1}(q_{0})\right.$
(25) $\displaystyle\left.-\mathrm{i}\gamma^{1}\gamma^{2}(\mathrm{i}\pi
eB)\left(m-q_{0}\gamma^{0}\right)\widetilde{\mathcal{A}}_{2}(q_{0})\right],$
where we have defined:
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})$ $\displaystyle\equiv$
$\displaystyle\int d^{3}p\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp}),$
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})$ $\displaystyle\equiv$
$\displaystyle\int_{-\infty}^{+\infty}dp_{3}\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp}=0).$
(26)
Inserting the first-order in $\Delta$ expression for the self-energy Eq. (25)
into the Dyson equation, as depicted diagrammatically in Fig. 1, we obtain the
dressed inverse propagator at first-order in $\Delta$
$\displaystyle\hat{S}_{\Delta}^{-1}(k)=\hat{S}_{\text{F}}^{-1}(k)-\hat{\Sigma}_{\Delta},$
(27)
so that by using Eqs. (21) and (25) we explicitly obtain:
$\displaystyle\hat{S}_{\Delta}^{-1}(q)$ $\displaystyle=$
$\displaystyle\mathrm{i}\left[\frac{m\mathcal{A}_{1}(q)}{\mathcal{D}(q)}+\frac{3m(\mathrm{i}\Delta)}{(2\pi)^{3}}\widetilde{\mathcal{A}}_{1}(q_{0})\right]$
$\displaystyle-$
$\displaystyle\mathrm{i}\left[\frac{\mathcal{A}_{1}(q)}{\mathcal{D}(q)}+\frac{3(\mathrm{i}\Delta)}{(2\pi)^{3}}\widetilde{\mathcal{A}}_{1}(q_{0})\right]\left(q_{0}\gamma^{0}\right)$
$\displaystyle-$
$\displaystyle\mathrm{i}\left[\frac{m\mathcal{A}_{2}(q)}{\mathcal{D}(q)}-\mathrm{i}\frac{m\pi(\mathrm{i}\Delta)(eB)}{(2\pi)^{3}}\widetilde{\mathcal{A}}_{2}(q_{0})\right]\left(\mathrm{i}\gamma^{1}\gamma^{2}\right)$
$\displaystyle+$
$\displaystyle\mathrm{i}\left[\frac{\mathcal{A}_{2}(q)}{\mathcal{D}(q)}-\frac{\mathrm{i}\pi(\mathrm{i}\Delta)(eB)}{(2\pi)^{3}}\widetilde{\mathcal{A}}_{2}(q_{0})\right]\left(\mathrm{i}\gamma^{1}\gamma^{2}q_{0}\gamma^{0}\right)$
$\displaystyle-$
$\displaystyle\mathrm{i}\frac{\mathcal{A}_{1}(q)}{\mathcal{D}(q)}\left(q_{3}\gamma^{3}\right)+\mathrm{i}\frac{\mathcal{A}_{2}}{\mathcal{D}(q)}\left(\mathrm{i}\gamma^{1}\gamma^{2}q_{3}\gamma^{3}\right)-\mathrm{i}\frac{\mathcal{A}_{3}}{\mathcal{D}(q)}\not{q}_{\perp}.$
## V Renormalization of the propagator
Let us define $m^{\prime}$, $z$, and $z_{3}$ as the renormalization factors
for the mass, the wave function, and the charge, respectively. While $z$ will
emerge as a global factor in the dressed propagator, the factor $z_{3}$ will
only be associated with the tensor structures involving the spin-magnetic-field
interaction $e\sigma_{\mu\nu}F^{\mu\nu}_{\text{BG}}=\mathrm{i}\gamma^{1}\gamma^{2}eB$.
Therefore, we can compare Eq. (21) with the dressed inverse propagator
obtained above, in order to identify the corresponding scalar factors for
each tensor structure in both expressions, thus leading to the definition of
the renormalized coefficients as follows
* •
For $\mathbb{1}$:
$\displaystyle\frac{m\mathcal{A}_{1}(q)}{\mathcal{D}(q)}+\frac{3m(\mathrm{i}\Delta)}{(2\pi)^{3}}\widetilde{\mathcal{A}}_{1}(q_{0})\equiv
z\frac{m^{\prime}\mathcal{A}_{1}(q)}{\mathcal{D}(q)}$ (29a)
* •
For $\gamma^{1}\gamma^{2}\gamma^{0}$:
$\displaystyle\frac{\mathcal{A}_{2}(q)}{\mathcal{D}(q)}-\frac{\mathrm{i}\pi(\mathrm{i}\Delta)(eB)}{(2\pi)^{3}}\widetilde{\mathcal{A}}_{2}(q_{0})\equiv
z\cdot z_{3}\frac{\mathcal{A}_{2}(q)}{\mathcal{D}(q)}$ (29b)
* •
For $\gamma^{1}\gamma^{2}$:
$\displaystyle\frac{m\mathcal{A}_{2}(q)}{\mathcal{D}(q)}-\frac{\mathrm{i}m\pi(\mathrm{i}\Delta)(eB)}{(2\pi)^{3}}\widetilde{\mathcal{A}}_{2}(q_{0})\equiv
z\cdot z_{3}\frac{m^{\prime}\mathcal{A}_{2}(q)}{\mathcal{D}(q)}$ (29c)
Then, solving the system of equations we obtain
$\displaystyle
z=1+\frac{3\mathrm{i}\Delta}{(2\pi)^{3}}\frac{\widetilde{\mathcal{A}}_{1}(q_{0})}{\mathcal{A}_{1}(q)}\mathcal{D}(q),$
(30a) $\displaystyle
z_{3}=\frac{1-\frac{\mathrm{i}\pi(\mathrm{i}\Delta)(eB)}{(2\pi)^{3}}\frac{\widetilde{\mathcal{A}}_{2}(q_{0})}{\mathcal{A}_{2}(q)}\mathcal{D}(q)}{1+\frac{3\mathrm{i}\Delta}{(2\pi)^{3}}\frac{\widetilde{\mathcal{A}}_{1}(q_{0})}{\mathcal{A}_{1}(q)}\mathcal{D}(q)},$
(30b) $\displaystyle m^{\prime}=m,$ (30c) and
$\displaystyle\frac{v^{\prime}}{c}=z^{-1}=\left(1+\frac{3\mathrm{i}\Delta}{(2\pi)^{3}}\frac{\widetilde{\mathcal{A}}_{1}(q_{0})}{\mathcal{A}_{1}(q)}\mathcal{D}(q)\right)^{-1}$
(30d)
Inserting these definitions into the “magnetic noise-dressed” inverse
propagator obtained above, and organizing the different tensor structures, we
obtain the expression
$\displaystyle
S_{\Delta}^{-1}(q)=\frac{\mathrm{i}z}{\mathcal{D}(q)}\left[\left(m-q_{0}\gamma^{0}-z^{-1}q_{3}\gamma^{3}\right)\mathcal{A}_{1}(q)\right.$
(31)
$\displaystyle\left.-z_{3}\left(\mathrm{i}\gamma^{1}\gamma^{2}\right)\left(m-q_{0}\gamma^{0}-z^{-1}q_{3}\gamma^{3}\right)\mathcal{A}_{2}(q)\right.$
$\displaystyle\left.-\mathrm{i}\mathcal{A}_{3}(q)z^{-1}\not{q}_{\perp}\right]$
$\displaystyle=$
$\displaystyle\frac{\mathrm{i}z}{\mathcal{D}(q)}\left[\left(m-\tilde{\not{q}}_{\parallel}\right)\mathcal{A}_{1}(q)-z_{3}\left(\mathrm{i}\gamma^{1}\gamma^{2}\right)\left(m-\tilde{\not{q}}_{\parallel}\right)\mathcal{A}_{2}(q)\right.$
$\displaystyle\left.-\mathrm{i}\mathcal{A}_{3}(q)\tilde{\not{q}}_{\perp}\right],$
where in the last line we defined the four-vector
$\tilde{q}^{\mu}=(q^{0},z^{-1}\mathbf{q})$ that incorporates the definition of
the effective refractive index $v^{\prime}/c=z^{-1}$ due to the random
magnetic fluctuations. By comparing Eq. (31) with Eq. (21), it is clear that
they possess the same tensor structure. Therefore, by means of the elementary
properties of the Dirac matrices, this expression can be readily inverted to
obtain the “magnetic noise-dressed” fermion propagator
$\displaystyle
S_{\Delta}(q)=-\mathrm{i}z^{-1}\frac{\mathcal{D}(q)}{\tilde{\mathcal{D}}(q)}\left[\left(m+\tilde{\not{q}}_{\parallel}\right)\mathcal{A}_{1}(q)\right.$
$\displaystyle\left.+\mathrm{i}z_{3}\gamma^{1}\gamma^{2}\left(m+\tilde{\not{q}}_{\parallel}\right)\mathcal{A}_{2}(q)+\mathcal{A}_{3}(q)\tilde{\not{q}}_{\perp}\right],$
(32)
where $\mathcal{D}(q)$ was defined in Eq. (22), and
$\displaystyle\tilde{\mathcal{D}}(q)=\mathcal{A}_{3}^{2}z^{-2}\mathbf{q}_{\perp}^{2}-\left(\mathcal{A}_{1}^{2}-\mathcal{A}_{2}^{2}\right)\left(z^{-2}q_{\parallel}^{2}-m^{2}\right)$
(33)
Let us now discuss the explicit magnetic field and magnetic noise dependence
of the renormalized parameters defined in Eqs. (30a) and (30b), i.e., $z$ and
$z_{3}$. For this purpose, we shall distinguish three different regimes,
corresponding to the very weak, the intermediate, and the ultra-intense
magnetic field, respectively.
### V.1 Very weak field $eB/m^{2}\ll 1$
As shown in detail in the Appendix A, for very weak fields $eB/m^{2}\ll 1$ the
function $\mathcal{A}_{1}(k,B)$ can be expanded in terms of the power series
$\displaystyle\mathcal{A}_{1}(k,B)=\frac{\mathrm{i}}{\mathcal{D}_{\parallel}}\left(1+\sum_{j=1}^{\infty}\left(\frac{\mathrm{i}eB}{\mathcal{D}_{\parallel}}\right)^{j}\mathcal{E}_{j}(x)\right),$
(34)
where for notational simplicity, we defined the “parallel” inverse scalar
propagator
$\displaystyle\mathcal{D}_{\parallel}=k_{\parallel}^{2}-m^{2}+\mathrm{i}\epsilon,$
(35)
and the dimensionless variable $x=\mathbf{k}_{\perp}^{2}/eB$. We also defined
the polynomials $\mathcal{E}_{j}(x)$, as those generated by the function
$e^{-\mathrm{i}x\tan v}$, i.e.
$\displaystyle\mathcal{E}_{j}(x)=\lim_{v\rightarrow
0}\frac{\partial^{j}}{\partial v^{j}}\left(e^{-\mathrm{i}x\tan v}\right).$
(36)
For instance, the explicit analytical expressions for the first three
polynomials ($j=1,2,3$) are as follows
$\displaystyle\mathcal{E}_{1}(x)$ $\displaystyle=$
$\displaystyle-\mathrm{i}x,$ $\displaystyle\mathcal{E}_{2}(x)$
$\displaystyle=$ $\displaystyle-x^{2},$ $\displaystyle\mathcal{E}_{3}(x)$
$\displaystyle=$ $\displaystyle-2\mathrm{i}x+\mathrm{i}x^{3}.$
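The polynomials of Eq. (36) can be generated symbolically; a quick check of the three expressions listed above:

```python
import sympy as sp

# Symbolic check of the polynomials E_j(x) of Eq. (36), generated by
# derivatives of exp(-i x tan v) evaluated at v = 0.
x, v = sp.symbols('x v')
f = sp.exp(-sp.I * x * sp.tan(v))

def E(j):
    return sp.simplify(sp.diff(f, v, j).subs(v, 0))

E1, E2, E3 = E(1), E(2), E(3)
print(E1, E2, E3)   # -I*x, -x**2 and I*x**3 - 2*I*x (up to ordering)
```

The outputs reproduce $\mathcal{E}_{1}=-\mathrm{i}x$, $\mathcal{E}_{2}=-x^{2}$, and $\mathcal{E}_{3}=-2\mathrm{i}x+\mathrm{i}x^{3}$ listed in the text.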
At the lowest order, after subtracting the divergent vacuum contribution from
Eq. (34), we have (see Appendix A for details)
$\displaystyle\mathcal{A}_{1}(k,B)-\mathcal{A}_{1}(k,0)=\frac{-2\mathrm{i}\left(eB\right)^{2}\mathbf{k}_{\perp}^{2}}{\left[k^{2}-m^{2}+\mathrm{i}\epsilon\right]^{4}}+O((eB)^{4}).$
(37)
Therefore, using this weak field expansion of the propagator, we calculate the
integral (details in Appendix C)
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})$ $\displaystyle=$
$\displaystyle-2\mathrm{i}(eB)^{2}\int
d^{3}p\frac{\mathbf{p_{\perp}}^{2}}{(q_{0}^{2}-p_{3}^{2}-\mathbf{p_{\perp}}^{2}-m^{2}+\mathrm{i}\epsilon)^{4}}$
(38) $\displaystyle=$
$\displaystyle-\frac{\pi^{2}}{6}\frac{(eB)^{2}}{(q_{0}^{2}-m^{2})^{3/2}}.$
In addition, at order $\mathcal{O}((eB)^{2})$ we also need to evaluate the
integral (details in Appendix C)
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})$ $\displaystyle=$
$\displaystyle\mathrm{i}\int_{-\infty}^{+\infty}\frac{dp_{3}}{q_{0}^{2}-(p^{3})^{2}-m^{2}+\mathrm{i}\epsilon}$
(39) $\displaystyle=$ $\displaystyle\frac{\pi}{\sqrt{q_{0}^{2}-m^{2}}}.$
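Below threshold ($q_{0}<m$) the $p_{3}$ integral of Eq. (39) is real and elementary, and the quoted result follows on the principal branch $(q_{0}^{2}-m^{2})^{1/2}=\mathrm{i}\sqrt{m^{2}-q_{0}^{2}}$. A numerical check with arbitrary values of $q_{0}$ and $m$:

```python
import numpy as np
from scipy.integrate import quad

# Check of Eq. (39) below threshold (q0 < m), where the p3 integral is
# real:  int dp3 / (q0^2 - p3^2 - m^2) = -pi / sqrt(m^2 - q0^2),
# so that A2_tilde = i * integral = pi / sqrt(q0^2 - m^2) on the
# principal branch of the square root.
q0, m = 0.3, 1.0
b = q0**2 - m**2   # negative below threshold: no poles on the real axis

integral, _ = quad(lambda p: 1.0 / (b - p**2), -np.inf, np.inf)
A2_tilde = 1j * integral

exact = np.pi / np.sqrt(complex(b))   # principal branch: i*sqrt(-b)
print(A2_tilde, exact)
```

Both evaluations give the same purely imaginary number, confirming the branch assignment implicit in Eq. (39).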
Therefore, from Eqs. (30a), (37), (38), and (39), we can directly evaluate the
renormalization parameters to obtain
$\displaystyle z$ $\displaystyle=$ $\displaystyle
1+\frac{3\mathrm{i}\Delta}{(2\pi)^{3}}\frac{\widetilde{\mathcal{A}}_{1}(q_{0})}{\mathcal{A}_{1}(q)}\mathcal{D}(q)$
(40) $\displaystyle=$ $\displaystyle
1+\frac{\Delta(eB)^{4}}{8\pi}\frac{\mathbf{q}_{\perp}^{2}}{(q^{2}-m^{2}+\mathrm{i}\epsilon)^{3}(q_{0}^{2}-m^{2}+\mathrm{i}\epsilon)^{3/2}}$
$\displaystyle=$ $\displaystyle 1+O((eB)^{4}),$
and similarly from Eq. (30b)
$\displaystyle z_{3}$ $\displaystyle=$ $\displaystyle 1+O((eB)^{4}).$ (41)
### V.2 Intermediate field
For intermediate magnetic field intensities, we can calculate the integral
$\mathcal{A}_{1}$ by means of an expansion in terms of Landau levels. For this
purpose, let us consider the generating function of the Laguerre
polynomials Gradshteyn and Ryzhik (2000a)
$\displaystyle
e^{-\frac{x}{2}\frac{1-t}{1+t}}=(1+t)e^{-x/2}\sum_{n=0}^{\infty}(-t)^{n}L_{n}^{0}(x),$
(42)
since
$\displaystyle e^{-\mathrm{i}x\tan v}$ $\displaystyle=$
$\displaystyle\exp\left[-x\left(1-e^{-2\mathrm{i}v}\right)/\left(1+e^{-2\mathrm{i}v}\right)\right]$
$\displaystyle=$
$\displaystyle\left(1+e^{-2\mathrm{i}v}\right)e^{-x}\sum_{n=0}^{\infty}(-1)^{n}e^{-2\mathrm{i}nv}L_{n}^{0}(2x)$
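As a quick numerical check of the generating-function identity Eq. (42), with arbitrary test values of $x$ and $t$ (any $|t|<1$ works):

```python
import numpy as np
from scipy.special import eval_laguerre

# Truncated numerical check of the Laguerre generating function, Eq. (42):
#   exp(-(x/2)(1-t)/(1+t)) = (1+t) exp(-x/2) sum_n (-t)^n L_n(x).
# For |t| < 1 the series converges geometrically, so a modest truncation
# suffices.
x, t, N = 1.7, 0.4, 200

lhs = np.exp(-0.5 * x * (1.0 - t) / (1.0 + t))
rhs = (1.0 + t) * np.exp(-0.5 * x) * sum(
    (-t)**n * eval_laguerre(n, x) for n in range(N + 1))

print(lhs, rhs)   # the two sides agree to machine precision
```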
Therefore, we have (for $x=\mathbf{k}_{\perp}^{2}/eB$)
$\displaystyle\mathcal{A}_{1}(k)=e^{-x}\left(\sum_{n=0}^{\infty}(-1)^{n}L_{n}^{0}(2x)\int_{0}^{\infty}d\tau
e^{\mathrm{i}(\mathcal{D}_{\parallel}-2(n+1)eB)\tau}\right.$
$\displaystyle\left.+\sum_{n=0}^{\infty}(-1)^{n}L_{n}^{0}(2x)\int_{0}^{\infty}d\tau
e^{\mathrm{i}(\mathcal{D}_{\parallel}-2neB)\tau}\right)$ (44)
Evaluating the exponential integrals, we obtain
$\displaystyle\mathcal{A}_{1}(k)$ $\displaystyle=$
$\displaystyle\mathrm{i}e^{-x}\left(\sum_{n=0}^{\infty}(-1)^{n}\frac{L_{n}^{0}(2x)}{\mathcal{D}_{\parallel}-2(n+1)eB}\right.$
$\displaystyle\left.+\sum_{n=0}^{\infty}(-1)^{n}\frac{L_{n}^{0}(2x)}{\mathcal{D}_{\parallel}-2neB}\right)$
$\displaystyle=$
$\displaystyle\mathrm{i}\frac{e^{-x}}{\mathcal{D}_{\parallel}}\left[1+\sum_{n=1}^{\infty}\frac{(-1)^{n}\left[L_{n}^{0}(2x)-L_{n-1}^{0}(2x)\right]}{1-2n\frac{eB}{\mathcal{D}_{\parallel}}}\right]$
Inserting this expression into the definition of $\widetilde{\mathcal{A}}_{1}$
of Eq. (26), we obtain
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})$ $\displaystyle=$
$\displaystyle\int d^{3}p\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp})$ (46)
$\displaystyle=$ $\displaystyle\mathrm{i}\int
d^{3}p\Bigg{[}\frac{e^{-\mathbf{p}_{\perp}^{2}/eB}}{q_{0}^{2}-p_{3}^{2}-m^{2}+\mathrm{i}\epsilon}$
$\displaystyle+$
$\displaystyle\sum_{n=1}^{\infty}(-1)^{n}e^{-\mathbf{p}_{\perp}^{2}/eB}\frac{L_{n}\left(\frac{2\mathbf{p}_{\perp}^{2}}{eB}\right)-L_{n-1}\left(\frac{2\mathbf{p}_{\perp}^{2}}{eB}\right)}{q_{0}^{2}-p_{3}^{2}-m^{2}-2neB+\mathrm{i}\epsilon}\Bigg{]}$
$\displaystyle=$
$\displaystyle\mathcal{I}_{1}+\sum_{n=1}^{\infty}(-1)^{n}\mathcal{I}_{2,n}$
Here, we defined
$\displaystyle\mathcal{I}_{1}$ $\displaystyle=$ $\displaystyle\mathrm{i}\int
d^{3}p\frac{e^{-\mathbf{p}_{\perp}^{2}/eB}}{q_{0}^{2}-p_{3}^{2}-m^{2}+\mathrm{i}\epsilon}$
(47) $\displaystyle=$
$\displaystyle\frac{\pi^{2}eB}{\sqrt{q_{0}^{2}-m^{2}+\mathrm{i}\epsilon}},$
and
$\displaystyle\mathcal{I}_{2,n}=\mathrm{i}\int
d^{3}p\,e^{-\mathbf{p}_{\perp}^{2}/eB}\frac{L_{n}\left(\frac{2\mathbf{p}_{\perp}^{2}}{eB}\right)-L_{n-1}\left(\frac{2\mathbf{p}_{\perp}^{2}}{eB}\right)}{q_{0}^{2}-p_{3}^{2}-m^{2}-2neB+\mathrm{i}\epsilon}$
(48)
We calculate the momentum integrals in cylindrical coordinates, making use of
the azimuthal symmetry, such that $d^{3}p=dp_{3}\pi
d(\mathbf{p}_{\perp}^{2})$. Moreover, in the integral over
$\mathbf{p}_{\perp}$, we define the auxiliary variable
$x=\frac{2\mathbf{p}_{\perp}^{2}}{eB}$, such that
$\displaystyle\mathcal{I}_{2,n}$ $\displaystyle=$ $\displaystyle\frac{\pi
eB}{2}\int_{-\infty}^{\infty}\frac{dp_{3}}{q_{0}^{2}-m^{2}-p_{3}^{2}-2neB+\mathrm{i}\epsilon}$
(49)
$\displaystyle\times\int_{0}^{\infty}dx\,e^{-x/2}\left[L_{n}(x)-L_{n-1}(x)\right]$
$\displaystyle=$ $\displaystyle 2\pi
eB(-1)^{n}\int_{-\infty}^{\infty}\frac{dp_{3}}{q_{0}^{2}-m^{2}-p_{3}^{2}-2neB+\mathrm{i}\epsilon}$
where we used the identity Gradshteyn and Ryzhik (2000b)
$\int_{0}^{\infty}dx\,e^{-bx}L_{n}(x)=\left(b-1\right)^{n}b^{-n-1},\qquad\text{Re}\,b>0.$
(50)
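The identity Eq. (50), and the specific combination it produces in Eq. (49), can be verified numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

# Numerical check of Eq. (50):
#   int_0^inf e^{-b x} L_n(x) dx = (b-1)^n b^{-(n+1)},  Re b > 0.
# For b = 1/2 the right-hand side is 2*(-1)^n, so
#   int_0^inf e^{-x/2} [L_n(x) - L_{n-1}(x)] dx = 4*(-1)^n,
# which is precisely the factor producing 2*pi*eB*(-1)^n in Eq. (49).
def laguerre_laplace(n, b):
    val, _ = quad(lambda s: np.exp(-b * s) * eval_laguerre(n, s), 0, np.inf)
    return val

b = 0.5
vals = [laguerre_laplace(n, b) for n in range(5)]
exact = [(b - 1.0)**n / b**(n + 1) for n in range(5)]
print(vals)   # alternating values +2, -2, +2, ...
```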
Figure 4: Spectral density for the Landau level spectrum $\rho(E)$, as a
function of the dimensionless energy scale $E/m$, for different values of the
average background magnetic field $eB/m^{2}$. The inset is shown in order to
appreciate in detail the staircase pattern produced by the discrete Landau
levels.
Inserting Eq. (47) and Eq. (49) into Eq. (46), we obtain (after shifting the
index $n\rightarrow n+1$)
$\displaystyle\tilde{\mathcal{A}}_{1}(q_{0})=\frac{\pi^{2}(eB)}{\sqrt{q_{0}^{2}-m^{2}+\mathrm{i}\epsilon}}$
(51) $\displaystyle+2\pi
eB\int_{-\infty}^{+\infty}dp_{3}\sum_{n=0}^{\infty}\frac{1}{q_{0}^{2}-m^{2}-p_{3}^{2}-2(n+1)eB+\mathrm{i}\epsilon}$
Let us introduce the density of states for Landau levels
$\displaystyle\rho(E)=\int_{-\infty}^{\infty}\frac{dp_{3}}{2\pi}\sum_{n=0}^{\infty}\delta\left(E-E_{n}(p_{3})\right),$
(52)
with the dispersion relation for the spectrum
$\displaystyle E_{n}(p_{3})=\sqrt{p_{3}^{2}+m^{2}+2(n+1)eB}.$ (53)
As shown in detail in Appendix B,
$\displaystyle\rho(E)$ $\displaystyle=$
$\displaystyle\Theta(E-\sqrt{m^{2}+2eB})\frac{E}{\pi\sqrt{eB}}$ (54)
$\displaystyle\times\left[\zeta\left(\frac{1}{2},\frac{E^{2}-m^{2}-2eB}{2eB}-N_{max}(E)\right)\right.$
$\displaystyle\left.-\zeta\left(\frac{1}{2},\frac{E^{2}-m^{2}}{2eB}\right)\right],$
where we defined
$\displaystyle N_{max}(E)=\lfloor\frac{E^{2}-m^{2}}{2eB}-1\rfloor,$ (55)
with $\lfloor x\rfloor$ the integer part of $x$, and $\zeta(s,z)$ the
Hurwitz zeta function. The spectral density Eq. (54) is represented in Fig. 4,
as a function of the dimensionless energy scale $E/m$. For large magnetic
fields $eB/m^{2}\gg 1$, the spectral density displays a clear staircase
pattern, where each step represents the contribution arising from a single
Landau level $n=0,1,\ldots$. On the other hand, for weak magnetic fields
$eB/m^{2}\ll 1$, the spectral density exhibits a denser, quasi-continuum
behavior.
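The Hurwitz-zeta structure of Eq. (54) arises from resumming the truncated Landau-level sum. A minimal numerical check of that resummation identity, for an arbitrary test value of $x=(E^{2}-m^{2}-2eB)/(2eB)$:

```python
import mpmath as mp

# Hurwitz-zeta resummation behind Eq. (54): the truncated Landau sum
#   sum_{n=0}^{N} (x - n)^(-1/2)
# equals  zeta(1/2, x - N) - zeta(1/2, x + 1)  in terms of the
# analytically continued Hurwitz zeta function, with N = floor(x)
# playing the role of N_max in Eq. (55).
x = mp.mpf('7.3')
N = int(mp.floor(x))   # N = 7, so x - N = 0.3 > 0

direct = mp.fsum(mp.power(x - n, -mp.mpf('0.5')) for n in range(N + 1))
resummed = mp.zeta(mp.mpf('0.5'), x - N) - mp.zeta(mp.mpf('0.5'), x + 1)

print(direct, resummed)
```

The agreement rests on the shift identity $\zeta(s,a)=a^{-s}+\zeta(s,a+1)$, which survives the analytic continuation needed at $s=1/2$.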
With these definitions, we obtain from the Landau-level expansion of $\mathcal{A}_{1}$ the exact expression
$\displaystyle\tilde{\mathcal{A}}_{1}(q_{0})=\frac{\pi^{2}eB}{\sqrt{q_{0}^{2}-m^{2}+\mathrm{i}\epsilon}}+4\pi^{2}eB\int_{-\infty}^{+\infty}dE\frac{\rho(E)}{q_{0}^{2}-E^{2}+\mathrm{i}\epsilon},$
On the other hand, from Eq. (26) we have
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})$ $\displaystyle=$
$\displaystyle\int_{-\infty}^{\infty}dp_{3}\,\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp}=0)$
$\displaystyle=$
$\displaystyle\mathrm{i}\int_{-\infty}^{\infty}dp_{3}\frac{1}{q_{0}^{2}-p_{3}^{2}-m^{2}+\mathrm{i}\epsilon}$
$\displaystyle+$
$\displaystyle\mathrm{i}\int_{-\infty}^{\infty}dp_{3}\sum_{n=1}^{\infty}(-1)^{n}\frac{L_{n}(0)-L_{n-1}(0)}{q_{0}^{2}-p_{3}^{2}-m^{2}-2neB+\mathrm{i}\epsilon},$
where the second term vanishes, given that $L_{n}(0)=1~{}\forall n$. Hence, we
end up with the simple expression
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})=\frac{\pi}{\sqrt{q_{0}^{2}-m^{2}+\mathrm{i}\epsilon}}.$
(58)
In order to evaluate the formulas for $z$ in Eq. (30a) and $z_{3}$ in Eq.
(30b), we also need to evaluate the following coefficients (for
$x=\mathbf{k}_{\perp}^{2}/(eB)$)
$\displaystyle\mathcal{A}_{2}$ $\displaystyle=$ $\displaystyle
\mathrm{i}eB\frac{\partial\mathcal{A}_{1}}{\partial\mathbf{k}_{\perp}^{2}}=\mathrm{i}\frac{\partial\mathcal{A}_{1}}{\partial
x}$ $\displaystyle=$
$\displaystyle\frac{e^{-x}}{\mathcal{D}_{\parallel}}\left[1+\sum_{n=1}^{\infty}\frac{(-1)^{n}\left(L_{n}(2x)-2L_{n}^{{}^{\prime}}(2x)-L_{n-1}(2x)+2L_{n-1}^{{}^{\prime}}(2x)\right)}{1-2\frac{neB}{\mathcal{D}_{\parallel}}}\right]$
$\displaystyle\mathcal{A}_{3}$ $\displaystyle=$
$\displaystyle\mathcal{A}_{1}+(\mathrm{i}eB)^{2}\frac{\partial^{2}\mathcal{A}_{1}}{\partial(\mathbf{k}_{\perp}^{2})^{2}}=\mathcal{A}_{1}-\frac{\partial^{2}\mathcal{A}_{1}}{\partial
x^{2}}$ (59) $\displaystyle=$ $\displaystyle
i\frac{e^{-x}}{\mathcal{D}_{\parallel}}\sum_{n=1}^{\infty}\frac{(-1)^{n}\left(L_{n}(2x)-L_{n-1}(2x)-2L_{n}^{{}^{\prime}}(2x)+4L_{n}^{{}^{\prime\prime}}(2x)+2L_{n-1}^{{}^{\prime}}(2x)-4L_{n-1}^{{}^{\prime\prime}}(2x)\right)}{1-2\frac{neB}{\mathcal{D}_{\parallel}}}$
Finally, in order to further simplify these expressions, it is convenient to
use the identities
$\displaystyle L_{n}^{{}^{\prime}}(2x)$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{cc}-L_{n-1}^{(1)}(2x),&n\geq 1\\\
0,&\text{otherwise}\end{array}\right.$ (62) $\displaystyle=$
$\displaystyle-\theta_{n-1}\cdot L_{n-1}^{(1)}(2x),$ (63) $\displaystyle
L_{n}^{{}^{\prime\prime}}(2x)$ $\displaystyle=$
$\displaystyle\theta_{n-2}L_{n-2}^{(2)}(2x),$ (64)
where $\theta_{n-k}$ is the Heaviside step function
$\displaystyle\theta_{n-k}=\left\\{\begin{array}[]{cc}1,&n\geq k\\\
0,&\text{otherwise}\end{array}\right.$ (67)
With these identities, we obtain the final expressions
$\displaystyle\mathcal{A}_{2}$ $\displaystyle=$
$\displaystyle\frac{e^{-x}}{\mathcal{D}_{\parallel}}\left[1+\sum_{n=1}^{\infty}\frac{(-1)^{n}\left(L_{n}(2x)+2L_{n-1}^{(1)}(2x)-L_{n-1}(2x)-2\theta_{n-2}\cdot
L_{n-2}^{(1)}(2x)\right)}{1-2\frac{neB}{\mathcal{D}_{\parallel}}}\right]$ (68)
$\displaystyle\mathcal{A}_{3}$ $\displaystyle=$ $\displaystyle
i\frac{e^{-x}}{\mathcal{D}_{\parallel}}\sum_{n=1}^{\infty}\frac{(-1)^{n}\left(L_{n}(2x)-L_{n-1}(2x)+2L_{n-1}^{(1)}(2x)+4\theta_{n-2}\cdot
L_{n-2}^{(2)}(2x)-2L_{n-1}^{(1)}(2x)-4\theta_{n-3}\cdot
L_{n-3}^{(2)}(2x)\right)}{1-2\frac{neB}{\mathcal{D}_{\parallel}}}$
With these expressions, we evaluate
$\mathcal{D}(k)=\mathcal{A}_{3}^{2}\mathbf{k}_{\perp}^{2}-\left(\mathcal{A}_{1}^{2}-\mathcal{A}_{2}^{2}\right)(k_{\parallel}^{2}-m^{2}),$
(69)
and finally evaluate $z$ and $z_{3}$ with Eq. (30a) and Eq. (30b),
respectively. These results are shown in Figs. 5-10, as a function of the
energy scale $q_{0}/m$ and of the magnitude of the average background
magnetic field $eB/m^{2}$, respectively.
### V.3 Ultra-intense (LLL) field $eB/m^{2}\gg 1$
Let us now analyze the asymptotic behaviour of the quasi-particle
renormalization parameters $z$, $z_{3}$, and $v^{\prime}/c$, respectively, in
the ultra-intense magnetic field regime $eB/m^{2}\gg 1$. Here, we obtain the
corresponding asymptotic expression for $\mathcal{A}_{1}(q)$ by considering
only the lowest Landau level (LLL), $n=0$, in the Landau-level expansion of
Section V.2. Therefore, we have
$\displaystyle\mathcal{A}_{1}(q)\sim\mathrm{i}\frac{e^{-\mathbf{q}_{\perp}^{2}/eB}}{q_{\parallel}^{2}-m^{2}},$
(70)
and the corresponding expressions for its derivatives are
$\displaystyle\mathcal{A}_{2}(q)=\frac{e^{-\mathbf{q}_{\perp}^{2}/eB}}{q_{\parallel}^{2}-m^{2}},$
(71)
and
$\displaystyle\mathcal{A}_{3}(q)=0.$ (72)
Similarly, we also have
$\displaystyle\mathcal{D}(q)=2\frac{e^{-2\mathbf{q}_{\perp}^{2}/eB}}{q_{\parallel}^{2}-m^{2}}.$
(73)
Finally, the integrals of $\mathcal{A}_{1}(q)$ are given, in this
approximation, by the expressions
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})=\frac{\pi^{2}eB}{\sqrt{q_{0}^{2}-m^{2}}},$
(74a)
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})=\frac{\pi}{\sqrt{q_{0}^{2}-m^{2}}}.$
(74b)
Applying these asymptotic results for the ultra-strong field regime, and
substituting into the general definitions Eq. (30a) and Eq. (30b), we obtain
explicit analytical expressions for the renormalization factors $z$ and
$z_{3}$, respectively, as follows
$\displaystyle z$ $\displaystyle=$ $\displaystyle
1+\frac{3}{4}\frac{\Delta(eB)e^{-\mathbf{q}_{\perp}^{2}/eB}}{\pi\sqrt{q_{0}^{2}-m^{2}}},$
(75)
and
$\displaystyle z_{3}$ $\displaystyle=$
$\displaystyle\left(1+\frac{\Delta(eB)e^{-\mathbf{q}_{\perp}^{2}/eB}}{4\pi\sqrt{q_{0}^{2}-m^{2}}}\right)z^{-1}$
(76) $\displaystyle=$
$\displaystyle\frac{1+\frac{\Delta(eB)e^{-\mathbf{q}_{\perp}^{2}/(eB)}}{4\pi\sqrt{q_{0}^{2}-m^{2}}}}{1+\frac{3}{4}\frac{\Delta(eB)e^{-\mathbf{q}_{\perp}^{2}/(eB)}}{\pi\sqrt{q_{0}^{2}-m^{2}}}}$
Remarkably, while $z\sim eB/m^{2}$ grows linearly for very large magnetic
fields, $z_{3}$ instead converges asymptotically to the constant limit
$\displaystyle\lim_{\frac{eB}{m^{2}}\rightarrow\infty}z_{3}=\frac{1}{3}.$ (77)
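The limit of Eq. (77) follows directly from the ratio structure of Eq. (76). The short sketch below encodes Eqs. (75)-(76) in terms of a single effective parameter `delta_eff`, which stands for the combination $\Delta(eB)e^{-\mathbf{q}_{\perp}^{2}/eB}/(\pi\sqrt{q_{0}^{2}-m^{2}})$ (a bookkeeping variable introduced here, not a symbol from the paper), and verifies numerically that $z_{3}\rightarrow 1/3$ as this parameter grows without bound.

```python
def z_factor(delta_eff):
    """Wavefunction renormalization z of Eq. (75):
    z = 1 + (3/4) * delta_eff, with delta_eff the combination
    Delta(eB) * exp(-q_perp^2/eB) / (pi * sqrt(q0^2 - m^2))."""
    return 1.0 + 0.75 * delta_eff

def z3_factor(delta_eff):
    """Charge renormalization z3 of Eq. (76): (1 + delta_eff/4) / z."""
    return (1.0 + 0.25 * delta_eff) / z_factor(delta_eff)
```

For `delta_eff` large, the constant terms become negligible and $z_{3}\rightarrow (1/4)/(3/4)=1/3$, reproducing Eq. (77).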
Figure 5: Wavefunction renormalization factor $z$ as a function of the dimensionless energy scale $q_{0}/m$. Here $\mathbf{p}_{\perp}^{2}=p_{\parallel}^{2}-m^{2}$.
Figure 6: Wavefunction renormalization factor $z$ as a function of the dimensionless magnetic field scale $eB/m^{2}$. Here $\mathbf{p}_{\perp}^{2}=p_{\parallel}^{2}-m^{2}$, and $eB/m^{2}\in[10,500]$.
Figure 7: Charge renormalization factor $z_{3}$ as a function of the dimensionless energy scale $q_{0}/m$. Here $\mathbf{p}_{\perp}^{2}=p_{\parallel}^{2}-m^{2}$.
Figure 8: Charge renormalization factor $z_{3}$ as a function of the dimensionless magnetic field scale $eB/m^{2}$. Here $\mathbf{p}_{\perp}^{2}=p_{\parallel}^{2}-m^{2}$, and $eB/m^{2}\in[10,500]$.
Figure 9: Effective refraction index $v^{\prime}/c$ as a function of the dimensionless energy scale $q_{0}/m$. Here $\mathbf{p}_{\perp}^{2}=p_{\parallel}^{2}-m^{2}$.
Figure 10: Effective refraction index $v^{\prime}/c$ as a function of the dimensionless magnetic field scale $eB/m^{2}$. Here $\mathbf{p}_{\perp}^{2}=p_{\parallel}^{2}-m^{2}$, and $eB/m^{2}\in[10,500]$.
Let us now summarize the behavior of the renormalization factors $z$ and
$z_{3}$ as functions of the energy scale $q_{0}/m$ and of the average
background magnetic field $eB/m^{2}$, over the whole range of both
parameters, as displayed in Figs. 5–8. As can be appreciated in
Fig. 5, $z$ presents a monotonically decreasing behaviour as a function of the
energy $q_{0}/m$, that asymptotically reaches the limit $z\rightarrow 1$ as
$q_{0}/m\gg 1$, for all values of the average background magnetic field
$eB/m^{2}$. In physical terms, this shows that the quasi-particle
renormalization due to the random magnetic field fluctuations tends to be
negligible as the energy of the propagating fermions becomes very large, but
in contrast it can be quite significant at low energy scales. This trend is
also consistent with the effective refraction index $v^{\prime}/c=z^{-1}$, as
shown in Figs. 9 and 10. For low energy scales, $v^{\prime}/c<1$, indicating a
strong renormalization of the effective group velocity of the propagating
quasi-particles due to the presence of the magnetic background fluctuations.
In contrast, for larger energy scales the effect becomes weaker, thus
recovering the asymptotic limit $v^{\prime}/c\rightarrow 1$ as $q_{0}/m\gg 1$.
Since low energy and momentum components in the Fourier representation of the
propagator correspond to long-wavelength components in the space of
configurations, our results are consistent with the fact that such long-
wavelength components are more sensitive to the spatial distribution of the
magnetic fluctuations, and hence experience a higher degree of decoherence,
thus reducing the corresponding group velocity. In contrast, the high-energy
Fourier modes of the propagator, that correspond to short-wavelength
components in the configuration space, are less sensitive to the presence of
spatial fluctuations of the background magnetic field.
Concerning the charge renormalization factor $z_{3}$, as can be appreciated in
Fig. 7 it experiences a strong effect $z_{3}<1$ at low quasi-particle energies
$q_{0}/m$, but this effect becomes negligible at large energy scales
$q_{0}/m\gg 1$, since $z_{3}\rightarrow 1$ is the asymptotic limit. This
behavior, which can be interpreted physically as a charge screening due to the
spatial magnetic fluctuations in the background, is consistent with the
aforementioned interpretation of the effective index of refraction as a
function of energy. On the other hand, as can be appreciated in Fig. 8,
$z_{3}$ tends to decrease as a function of the average background magnetic
field intensity, achieving an asymptotic limit $z_{3}\rightarrow 1/3$ as shown
in Eq. (77).
## VI Vertex corrections at $O(\Delta^{2})$
Let us now consider the renormalization of the effective interaction term
$\Delta\rightarrow\tilde{\Delta}$, that characterizes the strength of the
effective interaction vertex in the effective, averaged action for the replica
system, Eq. (10). Following the skeleton diagrams of the perturbation theory
depicted in Fig. 2, the diagrams contributing at order $\Delta^{2}$ to the
4-point vertex $\hat{\Gamma}$ are shown in Fig. 11.
Figure 11: Diagrams contributing to the 4-point vertex function $\hat{\Gamma}$
at order $\Delta^{2}$.
Therefore, the matrix elements corresponding to each diagram are given by the
following integral expressions
$\displaystyle\hat{\Gamma}_{\text{(a)}}=\int\frac{d^{3}q}{(2\pi)^{3}}\,\gamma^{i}S_{\text{F}}(p-q)\gamma^{j}\otimes\gamma_{i}S_{\text{F}}(p^{\prime}-q)\gamma_{j},$
(78a)
$\displaystyle\hat{\Gamma}_{\text{(b)}}=\int\frac{d^{3}q}{(2\pi)^{3}}\,\gamma^{i}S_{\text{F}}(p-q)\gamma^{j}\otimes\gamma_{i}S_{\text{F}}(p^{\prime}+q)\gamma_{j},$
(78b) and
$\displaystyle\hat{\Gamma}_{\text{(c)}}=\int\frac{d^{3}q}{(2\pi)^{3}}\,\gamma^{i}S_{\text{F}}(p+q)\gamma^{j}\otimes\gamma_{i}S_{\text{F}}(p^{\prime}-q)\gamma_{j}.$
(78c)
In order to compute the expressions above, it is convenient to introduce the
notation
$\displaystyle\hat{\Gamma}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}q}{(2\pi)^{3}}\,\gamma^{i}S_{\text{F}}(p+\lambda
q)\gamma^{j}\otimes\gamma_{i}S_{\text{F}}(p^{\prime}+\sigma q)\gamma_{j},$
where $\lambda,\sigma=\pm 1$. Then, we have the correspondence
$\hat{\Gamma}_{\text{(a)}}=\hat{\Gamma}^{(-,-)}$,
$\hat{\Gamma}_{\text{(b)}}=\hat{\Gamma}^{(-,+)}$, and
$\hat{\Gamma}_{\text{(c)}}=\hat{\Gamma}^{(+,-)}$, respectively.
Figure 12: Real and imaginary parts of the correction coefficient from Eq.
(82) as a function of the fermion energy. Here, we take $Q=0$ and
$\mathbf{p}_{\perp}^{2}=p_{0}^{2}-m^{2}$.
By considering the tensor structure of the propagator, it is straightforward
to realize that the full vertex, taking into account the multiplicity and
symmetry factors for each diagram, is given by
$\displaystyle\hat{\Gamma}=2\hat{\Gamma}^{(-,-)}+2\hat{\Gamma}^{(-,+)}+4\hat{\Gamma}^{(+,-)}.$
(80)
This combination leads to an effective interaction of the form
$\displaystyle\hat{\Gamma}=\tilde{\Delta}(\bar{\psi}\gamma^{i}\psi)(\bar{\psi}\gamma_{i}\psi)+{\rm{other\,\,tensor\,\,structures}},$
(81)
where the renormalized coefficient $\tilde{\Delta}$ is given, up to second
order in $\Delta$ (see Appendix D for details) by the expression
$\displaystyle\tilde{\Delta}=\Delta+2\Delta^{2}\left(\mathcal{J}_{2}^{(-,-)}+\mathcal{J}_{2}^{(-,+)}+2\mathcal{J}_{2}^{(+,-)}\right.$
$\displaystyle\left.+\left(1-\partial_{x}^{2}\right)(1-\partial_{y}^{2})\mathcal{J}_{3}^{(-,-)}+\left(1-\partial_{x}^{2}\right)(1-\partial_{y}^{2})\mathcal{J}_{3}^{(-,+)}\right.$
$\displaystyle\left.+2\left(1-\partial_{x}^{2}\right)(1-\partial_{y}^{2})\mathcal{J}_{3}^{(+,-)}\right).$
(82)
Here, we defined the integrals
$\displaystyle\mathcal{J}_{1}^{(\lambda,\sigma)}(p,p^{\prime})\equiv\int\frac{d^{3}q}{(2\pi)^{3}}\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q),$
$\displaystyle\mathcal{J}_{2}^{(\lambda,\sigma)}(p,p^{\prime})\equiv\int\frac{d^{3}q}{(2\pi)^{3}}q_{\parallel}^{2}\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q),$ and
$\displaystyle\mathcal{J}_{3}^{(\lambda,\sigma)}(p,p^{\prime})\equiv\int\frac{d^{3}q}{(2\pi)^{3}}\mathbf{q}_{\perp}^{2}\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q).$ (83c)
In order to calculate the integrals $\mathcal{J}_{i}$, we shall use the
analytical expression for $\mathcal{A}_{1}(k)$ in terms of the confluent
hypergeometric function $U$ (details in Appendix A):
$\displaystyle\mathcal{A}_{1}(k)$ $\displaystyle=$
$\displaystyle\frac{\mathrm{i}e^{-\mathbf{k}_{\perp}^{2}/eB}}{2eB}\exp\left[-\frac{\mathrm{i}\pi\left(k_{\parallel}^{2}-m^{2}\right)}{2eB}\right]$
$\displaystyle\times$
$\displaystyle\Gamma\left(-\frac{k_{\parallel}^{2}-m^{2}}{2eB}\right)U\left(-\frac{k_{\parallel}^{2}-m^{2}}{2eB},0,\frac{2\mathbf{k}_{\perp}^{2}}{eB}\right).$
### VI.1 The integral $\mathcal{J}_{1}$
Let us first consider the integral
$\displaystyle\mathcal{J}_{1}^{(\lambda,\sigma)}(p,p^{\prime})=\int\frac{d^{3}q}{(2\pi)^{3}}\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q).$ (85)
For the case $(\lambda,\sigma)=(-1,-1)$ we change the integration variables as
follows
$\displaystyle p^{\prime}-q$ $\displaystyle=$ $\displaystyle q^{\prime}+Q,$
$\displaystyle p-q$ $\displaystyle=$ $\displaystyle q^{\prime}-Q.$ (86)
For notational simplicity, in what follows we shall use $q$ instead of
$q^{\prime}$, and we shall define the parameters
$\displaystyle a$ $\displaystyle=$
$\displaystyle-\frac{\mathcal{D}_{\parallel}(q_{3}+Q_{\parallel})}{2eB},$
$\displaystyle a^{\prime}$ $\displaystyle=$
$\displaystyle-\frac{\mathcal{D}_{\parallel}(q_{3}-Q_{\parallel})}{2eB}.$ (87)
Furthermore, we shall use the identity (for $z=2\mathbf{q}_{\perp}^{2}/eB$)
$\displaystyle\Gamma(a)U(a,\epsilon,z)=\frac{1}{a}M(a,\epsilon,z)+\Gamma(-1+\epsilon)zM(1+a,2,z),$
along with the singular expansion for $\epsilon\rightarrow 0^{+}$
$\displaystyle\Gamma(-1+\epsilon)=\frac{-1}{\epsilon}+\gamma_{e}-1+O(\epsilon),$
(89)
where $\gamma_{e}\approx 0.577$ is the Euler-Mascheroni constant. In addition, given
that there is a strong exponential damping in the integral, we consider the
expansion of the Kummer function for small values of its argument, that is
given by
$\displaystyle M(a,b,z)=1+\frac{a}{b}z+O(z^{2}),$ (90)
so that, after regularization by removing the divergences in $1/\epsilon$, we
end up with the integral
$\displaystyle\mathcal{J}_{1}^{(-,-)}(p,p^{\prime})$ $\displaystyle=$
$\displaystyle\left(\frac{\mathrm{i}}{2eB}\right)^{2}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}\int_{-\infty}^{\infty}\frac{dq_{3}}{2\pi}~{}e^{\mathrm{i}\pi\left(a+a^{\prime}\right)}\int_{0}^{\infty}\frac{d^{2}q_{\perp}}{(2\pi)^{2}}e^{-\frac{2\mathbf{q}^{2}_{\perp}}{eB}}~{}\Gamma\left(a\right)U\left(a,\epsilon,\frac{(q+Q)_{\perp}^{2}}{eB}\right)\Gamma\left(a^{\prime}\right)U\left(a^{\prime},\epsilon,\frac{(q-Q)_{\perp}^{2}}{eB}\right)$
(91) $\displaystyle=$
$\displaystyle\frac{1}{(2\pi)^{3}}\left(\frac{\mathrm{i}}{2eB}\right)^{2}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}\int_{-\infty}^{\infty}dq_{3}~{}e^{\mathrm{i}\pi\left(a+a^{\prime}\right)}$
$\displaystyle\times$
$\displaystyle\int_{0}^{\infty}d^{2}q_{\perp}e^{-\frac{2\mathbf{q}^{2}_{\perp}}{eB}}\left[\frac{1}{a}+\frac{\gamma_{e}-1}{eB}\left(\mathbf{q}_{\perp}^{2}+2\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp}+\mathbf{Q}_{\perp}^{2}\right)\right]\left[\frac{1}{a^{\prime}}+\frac{\gamma_{e}-1}{eB}\left(\mathbf{q}_{\perp}^{2}-2\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp}+\mathbf{Q}_{\perp}^{2}\right)\right].$
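The small-argument truncation of Eq. (90), which produces the bracketed factors in the integrand above, is easy to check numerically against SciPy's confluent hypergeometric function $M={}_{1}F_{1}$; the values of $a$, $b$, $z$ below are arbitrary small-argument test values, not parameters of the paper.

```python
from scipy.special import hyp1f1

def kummer_small_z(a, b, z):
    """Leading small-argument expansion of the Kummer function,
    M(a, b, z) = 1 + (a/b) z + O(z^2), Eq. (90)."""
    return 1.0 + (a / b) * z

# compare the truncation against the exact 1F1 for a small argument
a, b, z = 0.3, 2.0, 1e-3
err = abs(hyp1f1(a, b, z) - kummer_small_z(a, b, z))
```

The neglected term is $\frac{a(a+1)}{2b(b+1)}z^{2}$, so for $z\sim 10^{-3}$ the truncation error is of order $10^{-8}$.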
Furthermore, we shall set the external 3-momenta to zero, except for the
$Q_{\perp}$ factors, which we shall keep finite in order to use this
expression as a generating function. Therefore, the integral reduces
to the simpler expression
$\displaystyle\mathcal{J}_{1}^{(-,-)}(p,p^{\prime})=-\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}$
$\displaystyle\times$
$\displaystyle\int_{-\infty}^{\infty}\frac{dq_{3}}{\left(q_{3}^{2}+m^{2}+\mathrm{i}\epsilon\right)^{2}}\exp\left[\frac{\mathrm{i}\pi}{2eB}\left(q_{3}^{2}+m^{2}\right)\right]$
$\displaystyle\times$
$\displaystyle\Bigg{[}1+\frac{(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}\left(q_{3}^{2}+m^{2}\right)}{(eB)^{2}}+\frac{(\gamma_{e}-1)\left(q_{3}^{2}+m^{2}\right)}{eB}\Bigg{]}$
Performing the last integral explicitly, we obtain (details in Appendix D)
$\displaystyle\mathcal{J}_{1}^{(-,-)}(p,p^{\prime})=-\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}\Bigg{\\{}\frac{(1-\mathrm{i})\pi}{2\sqrt{eB}m^{2}}e^{\frac{\mathrm{i}\pi
m^{2}}{2eB}}$ $\displaystyle+$ $\displaystyle\frac{(eB+\mathrm{i}\pi
m^{2})\pi}{2(eB)m^{3}}\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]$
$\displaystyle+$
$\displaystyle\frac{\pi(\gamma_{e}-1)}{(eB)m}\left(1+\frac{\mathbf{Q}_{\perp}^{2}}{eB}\right)\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]\Bigg{\\}},$
where $\text{erf}(x)$ is the error function.
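Evaluating closed forms such as Eq. above requires the error function at the complex argument $\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}$. A minimal, self-contained way to do this is Gauss-Legendre quadrature of the defining integral (the integrand $e^{-t^{2}}$ is entire, so integrating along the straight line $t=sz$ is legitimate); the ratio $m/\sqrt{eB}=0.8$ below is purely illustrative.

```python
import numpy as np
from scipy.special import erf

def erf_complex(z, n=200):
    """erf(z) = (2/sqrt(pi)) * integral_0^z exp(-t^2) dt, evaluated along
    the straight line t = s*z, s in [0, 1], by Gauss-Legendre quadrature."""
    s, w = np.polynomial.legendre.leggauss(n)
    s = 0.5 * (s + 1.0)   # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w
    t = s * z
    return (2.0 / np.sqrt(np.pi)) * z * np.sum(w * np.exp(-t * t))

# the argument appearing in J_1, J_2, J_3 for an illustrative m/sqrt(eB) = 0.8
val = erf_complex((1 - 1j) / np.sqrt(2) * 0.8)
```

On the real axis the quadrature reproduces SciPy's `erf` to machine precision, and the construction is exactly odd, $\mathrm{erf}(-z)=-\mathrm{erf}(z)$, as required of the error function.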
### VI.2 The integral $\mathcal{J}_{2}$
For this second integral, we notice that $q_{\parallel}^{2}=-q_{3}^{2}$, so
that after integration over $z=2\mathbf{q}_{\perp}^{2}/(eB)$ we obtain
(details in Appendix D)
$\displaystyle\mathcal{J}_{2}^{(-,-)}(p,p^{\prime})=\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}$
$\displaystyle\times$
$\displaystyle\int_{-\infty}^{\infty}dq_{3}\frac{q_{3}^{2}}{\left(q_{3}^{2}+m^{2}+\mathrm{i}\epsilon\right)^{2}}\exp\left[\frac{\mathrm{i}\pi}{2eB}\left(q_{3}^{2}+m^{2}\right)\right]$
$\displaystyle\times$
$\displaystyle\Bigg{[}1+\frac{(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}\left(q_{3}^{2}+m^{2}\right)}{(eB)^{2}}+\frac{(\gamma_{e}-1)\left(q_{3}^{2}+m^{2}\right)}{eB}\Bigg{]}$
After performing the remaining momentum integral over $q_{3}$, as shown in
detail in Appendix D, we finally obtain
$\displaystyle\mathcal{J}_{2}^{(-,-)}(p,p^{\prime})=\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}$
$\displaystyle\times$
$\displaystyle\Bigg{\\{}\frac{(\mathrm{i}-1)\pi}{\sqrt{eB}}e^{\frac{\mathrm{i}\pi
m^{2}}{2eB}}$ $\displaystyle+$ $\displaystyle\frac{\left(eB-\mathrm{i}\pi
m^{2}\right)\pi}{2(eB)m}\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]$
$\displaystyle-$
$\displaystyle\frac{m\pi(\gamma_{e}-1)}{(eB)}\left(1+\frac{\mathbf{Q}_{\perp}^{2}}{eB}\right)\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]\Bigg{\\}}.$
### VI.3 The integral $\mathcal{J}_{3}$
By following the same procedure as in the previous two cases, as shown in
Appendix D, it is straightforward to obtain the analytical expression
$\displaystyle\mathcal{J}_{3}^{(-,-)}(p,p^{\prime})=-\frac{(eB)^{2}}{8\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}\Bigg{\\{}\frac{(1-\mathrm{i})\pi}{2\sqrt{eB}m^{2}}e^{\frac{\mathrm{i}\pi
m^{2}}{2eB}}$ $\displaystyle+$ $\displaystyle\frac{(eB+\mathrm{i}\pi
m^{2})\pi}{2(eB)m^{3}}\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]$
$\displaystyle+$
$\displaystyle\frac{\pi(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}}{(eB)^{2}m}\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]\Bigg{\\}}.$
Moreover, it is straightforward to verify the following relations
$\displaystyle\mathcal{J}_{n}^{(+,+)}(p,p^{\prime})$ $\displaystyle=$
$\displaystyle\mathcal{J}_{n}^{(-,-)}(p,p^{\prime})$
$\displaystyle\mathcal{J}_{n}^{(+,-)}(p,p^{\prime})$ $\displaystyle=$
$\displaystyle\mathcal{J}_{n}^{(-,+)}(p,p^{\prime})=\mathcal{J}_{n}^{(-,-)}(p,p^{\prime})\mid_{Q\rightarrow
P},$
for $n=1,2,3$, which allow us to generate all the remaining expressions from
these three explicit analytical results.
An explicit numerical evaluation of our analytical expressions for the
renormalized effective interaction, expressed by the combination
$(\tilde{\Delta}-\Delta)/(2\Delta^{2})$, is displayed in Fig. 12. Clearly,
this effective coupling develops both a real and an imaginary part
(left and right panels in Fig. 12, respectively). In particular, the emergence
of an imaginary component implies an imaginary contribution to the self-
energy, corresponding to a relaxation time (spectral broadening) of the quasi-
particle spectrum. This is a natural consequence of the decoherence mechanism
induced by the random fluctuating magnetic environment. On the other hand, as
can be appreciated in Fig. 12, both the real and imaginary contributions to
the effective interaction $\tilde{\Delta}$ display a large enhancement (in
absolute value) at low energy scales $p_{0}/m<1$, while asymptotically
$\tilde{\Delta}\rightarrow\Delta$ at higher energies $p_{0}/m\gg 1$. This
strong renormalization effect at low-energies is consistent with the effect
observed in the previous section for the charge $z_{3}$ and refraction index
$v^{\prime}/c$, respectively, and can be explained in similar terms due to the
short-range spatial distribution of the magnetic noise, that therefore
renormalizes mainly the long-wavelength components of the propagator,
corresponding to the small energy-momentum components in Fourier space.
## VII Conclusions
We have studied the effects of quenched, white-noise spatial fluctuations in
an otherwise uniform background magnetic field on the properties of the QED
fermion propagator. This configuration is important in different physical
scenarios, including heavy-ion collisions and the quark-gluon plasma, where
spatial anisotropies of the background magnetic field may be present. We
developed explicit results, which we obtained by combining the replica
method, used to average over spatial fluctuations, with a perturbation theory
based on the Schwinger propagator for the average background field. Upon averaging
over magnetic fluctuations, we obtained an effective action in the replica
fields, with an effective particle-particle interaction proportional to the
strength $\Delta$ of the spatial auto-correlation function of the background
noise. Our perturbative results show that, up to first order in $\Delta$, the
propagator retains its form, thus representing renormalized quasi-particles
with the same mass $m^{\prime}=m$, but propagating in the medium with a
magnetic field and noise-dependent index of refraction $v^{\prime}/c=z^{-1}$,
and effective charge $e^{\prime}=z_{3}e$, where $z$ and $z_{3}$ are
renormalization factors. We showed that $z$ presents a monotonically
decreasing behaviour as a function of the energy $q_{0}/m$, that reaches the
asymptotic limit $z\rightarrow 1$ as $q_{0}/m\gg 1$, for all values of the
average background magnetic field $eB/m^{2}$. In physical terms, this shows
that the quasi-particle renormalization due to the random magnetic field
fluctuations, while being quite significant at low energy scales, tends to be
negligible as the energy of the propagating fermions becomes very large. This
trend is also observed in the effective refraction index
$v^{\prime}/c=z^{-1}$, since at low energy scales $v^{\prime}/c<1$, indicating
a strong renormalization of the effective group velocity of the propagating
quasi-particles due to the presence of the magnetic background fluctuations.
In contrast, for larger energy scales the effect becomes weaker, thus
recovering the asymptotic limit $v^{\prime}/c\rightarrow 1$ as $q_{0}/m\gg 1$.
Our results show that the effective quasi-particle charge experiences a strong
renormalization $z_{3}<1$ at low energies $q_{0}/m$, while the effect becomes
negligible at large energy scales $q_{0}/m\gg 1$, since $z_{3}\rightarrow 1$ as
an asymptotic limit. We interpret this as a charge screening due to the
spatial magnetic fluctuations in the background. On the other hand, $z_{3}$
tends to decrease as a function of the average background magnetic field
intensity, achieving an asymptotic limit $z_{3}\rightarrow 1/3$ as shown in
Eq. (77).
We remark that both the effective refraction index $v^{\prime}/c=z^{-1}$ and
charge screening $z_{3}$ display a similar, and therefore consistent,
renormalization behaviour of the quasi-particle properties in the magnetically
fluctuating environment. In order to understand such effects in physical
terms, we remark that the low energy and momentum components in the Fourier
representation of the propagator correspond to long-wavelength components in
the space of configurations. Therefore, our results are consistent with the
fact that such long-wavelength components are more sensitive to the spatial
distribution of the background magnetic noise, and hence experience a higher
degree of decoherence, thus reducing the corresponding group velocity and
enhancing the charge screening. In contrast, the high-energy Fourier
components of the propagator, that correspond to short-wavelength components
in the configuration space, are less sensitive to the presence of spatial
fluctuations of the background magnetic field. In addition, we remark that the
intensity of the average background magnetic field defines a characteristic
length-scale known as the Landau radius $l_{B}=1/\sqrt{eB}$, that determines
the support of the quasi-particle propagator in configuration space. Moreover,
in the semi-classical picture this length-scale represents the typical size of
the “cyclotron radius” of the helicoidal trajectories that propagate along the
magnetic field axis. Therefore, the stronger the magnetic field, the smaller
the Landau radius, and hence the quasi-particle propagator is modulated
towards higher momentum and energy components that, as previously discussed,
are more sensitive to the magnetic noise renormalization effects, as is
verified by the trend observed both in $z_{3}$ and in $v^{\prime}/c$, that
strongly decrease as the average magnetic field intensity increases
$eB/m^{2}\gg 1$.
Moreover, we also showed that the 4-point vertex corrections at order
$\Delta^{2}$ lead to a renormalized $\tilde{\Delta}=\Delta+O(\Delta^{2})$,
whose relative magnitude grows with the average magnetic field intensity
$eB/m^{2}$ and tends to decrease with the quasi-particle energy scale
$p_{0}/m$, in agreement with the behavior of $v^{\prime}/c$ and $z_{3}$ and
the physical interpretation previously discussed.
The analysis and results presented in this work only concern the study of the
quasi-particle fermion propagator in the noisy magnetic field background.
However, the effective model obtained via the replica method and its
consequences can be extended towards the study of other physical quantities,
such as the photon polarization tensor. We are currently investigating this
extension, and the results will be communicated in a separate article.
###### Acknowledgements.
J.D.C.-Y. and E.M. acknowledge financial support from ANID PIA Anillo
ACT/192023. E.M. also acknowledges financial support from Fondecyt 1190361. M.
L. acknowledges support from ANID/CONICYT FONDECYT Regular (Chile) under
grants No. 1200483, 1190192 and 1220035. M. L. also acknowledges support from
ANID/PIA/APOYO AFB180002 (Chile).
## References
* Alam _et al._ (2021) Sk Noor Alam, Victor Roy, Shakeel Ahmad, and Subhasis Chattopadhyay, “Electromagnetic field fluctuation and its correlation with the participant plane in $\mathrm{Au}+\mathrm{Au}$ and isobaric collisions at $\sqrt{{s}_{NN}}=200\text{ }\text{ }\mathrm{GeV}$,” Phys. Rev. D 104, 114031 (2021).
* Ayala _et al._ (2022) Alejandro Ayala, Jorge David Castaño-Yepes, LA Hernández, Ana Julia Mizher, María Elena Tejeda-Yeomans, and R Zamora, “Anisotropic photon emission from gluon fusion and splitting in a strong magnetic background I: The two-gluon one-photon vertex,” (2022), arXiv:2209.09364 [hep-ph] .
* Inghirami, Gabriele _et al._ (2020) Inghirami, Gabriele, Mace, Mark, Hirono, Yuji, Del Zanna, Luca, Kharzeev, Dmitri E., and Bleicher, Marcus, “Magnetic fields in heavy ion collisions: flow and charge transport,” Eur. Phys. J. C 80, 293 (2020).
* Ayala _et al._ (2020a) Alejandro Ayala, Jorge David Castaño Yepes, Isabel Dominguez Jimenez, Jordi Salinas San Martín, and María Elena Tejeda-Yeomans, “Centrality dependence of photon yield and elliptic flow from gluon fusion and splitting induced by magnetic fields in relativistic heavy-ion collisions,” Eur. Phys. J. A 56, 53 (2020a).
* Ayala _et al._ (2017) Alejandro Ayala, Jorge David Castano-Yepes, Cesareo A. Dominguez, Luis A. Hernandez, Saul Hernandez-Ortiz, and Maria Elena Tejeda-Yeomans, “Prompt photon yield and elliptic flow from gluon fusion induced by magnetic fields in relativistic heavy-ion collisions,” Phys. Rev. D 96, 014023 (2017), [Erratum: Phys.Rev.D 96, 119901 (2017)].
* Busza _et al._ (2018) Wit Busza, Krishna Rajagopal, and Wilke van der Schee, “Heavy ion collisions: The big picture and the big questions,” Annual Review of Nuclear and Particle Science 68, 339–376 (2018), https://doi.org/10.1146/annurev-nucl-101917-020852 .
* Hattori and Satow (2016) Koichi Hattori and Daisuke Satow, “Electrical conductivity of quark-gluon plasma in strong magnetic fields,” Phys. Rev. D 94, 114032 (2016).
* Hattori and Satow (2018) Koichi Hattori and Daisuke Satow, “Gluon spectrum in a quark-gluon plasma under strong magnetic fields,” Phys. Rev. D 97, 014023 (2018).
* Buballa (2005) Michael Buballa, “Njl-model analysis of dense quark matter,” Physics Reports 407, 205–376 (2005).
* Blaschke _et al._ (2020) D. Blaschke, A. Ayriyan, and A. Friesen (Editors), “Compact stars in the QCD phase diagram,” (Universe, MDPI, 2020).
* Schwinger (1951) Julian Schwinger, “On gauge invariance and vacuum polarization,” Phys. Rev. 82, 664–679 (1951).
* Dittrich and Reuter (1985) W. Dittrich and M. Reuter, “Effective Lagrangians in Quantum Electrodynamics,” Lecture Notes in Physics (Springer-Verlag, Berlin-Heidelberg, 1985).
* Dittrich and Gies (2000) W. Dittrich and H. Gies, “Probing the Quantum Vacuum: Perturbative Effective Action Approach in Quantum Electrodynamics and its Application,” Springer Tracts in Modern Physics (Springer-Verlag, Berlin-Heidelberg, 2000).
* Dominguez _et al._ (2020) C. A. Dominguez, Luis A. Hernández, Marcelo Loewe, Cristian Villavicencio, and R. Zamora, “Magnetic field dependence of nucleon parameters from qcd sum rules,” Phys. Rev. D 102, 094007 (2020).
* Hattori and Itakura (2013) Koichi Hattori and Kazunori Itakura, “Vacuum birefringence in strong magnetic fields: (i) photon polarization tensor with all the landau levels,” Annals of Physics 330, 23–54 (2013).
* Ayala _et al._ (2020b) Alejandro Ayala, Jorge David Castaño Yepes, M. Loewe, and Enrique Muñoz, “Gluon polarization tensor in a magnetized medium: Analytic approach starting from the sum over landau levels,” Phys. Rev. D 101, 036016 (2020b).
* Ayala, Alejandro _et al._ (2021) Ayala, Alejandro, Hernández, Luis A., Loewe, Marcelo, and Villavicencio, Cristian, “Qcd phase diagram in a magnetized medium from the chiral symmetry perspective: the linear sigma model with quarks and the nambu-jona-lasinio model effective descriptions,” Eur. Phys. J. A 57, 234 (2021).
* Ayala _et al._ (2021) Alejandro Ayala, Jorge David Castaño Yepes, M. Loewe, and Enrique Muñoz, “Fermion mass and width in qed in a magnetic field,” Phys. Rev. D 104, 016006 (2021).
* Miransky and Shovkovy (2015) Vladimir A. Miransky and Igor A. Shovkovy, “Quantum field theory in a magnetic field: From quantum chromodynamics to graphene and Dirac semimetals,” Physics Reports 576, 1–209 (2015).
* Gies and Roessler (2011) Holger Gies and Lars Roessler, “Vacuum polarization tensor in inhomogeneous magnetic fields,” Phys. Rev. D 84, 065035 (2011).
* Zhao _et al._ (2017) P.-L. Zhao, A.-M. Wang, and G.-Z. Liu, “Effects of random potentials in three-dimensional quantum electrodynamics,” Phys. Rev. B 95, 235144 (2017).
* Gusynin _et al._ (2001) Valery Gusynin, Anthony Hams, and Manuel Reenders, “Nonperturbative infrared dynamics of three-dimensional qed with a four-fermion interaction,” Phys. Rev. D 63, 045025 (2001).
* Mèzard and Parisi (1991) M. Mèzard and G. Parisi, “Replica field theory for random manifolds,” J. Phys. I France 1, 809–836 (1991).
* Kardar _et al._ (1986) M. Kardar, G. Parisi, and Y.-C. Zhang, “Dynamic scaling of growing interfaces,” Phys. Rev. Lett. 56, 889–892 (1986).
* Gradshteyn and Ryzhik (2000a) I. S. Gradshteyn and I. M. Ryzhik, “Table of Integrals, Series and Products,” 6th ed. (Academic Press, San Diego, CA, 2000), p. 992, formula 8.975-1.
* Gradshteyn and Ryzhik (2000b) I. S. Gradshteyn and I. M. Ryzhik, “Table of Integrals, Series and Products,” 6th ed. (Academic Press, San Diego, CA, 2000), p. 803, formula 7.414-6.
## Appendix A The coefficient $\mathcal{A}_{1}$
We can calculate the integral $\mathcal{A}_{1}$ in terms of Landau levels by
means of the generating function of the Laguerre polynomials
$\displaystyle
e^{-\frac{x}{2}\frac{1-t}{1+t}}=(1+t)e^{-x/2}\sum_{n=0}^{\infty}(-t)^{n}L_{n}^{0}(x),$
(97)
since
$\displaystyle e^{-\mathrm{i}x\tan v}$ $\displaystyle=$ $\displaystyle
e^{-x\left(1-e^{-2\mathrm{i}v}\right)/\left(1+e^{-2\mathrm{i}v}\right)}$
$\displaystyle=$
$\displaystyle\left(1+e^{-2\mathrm{i}v}\right)e^{-x}\sum_{n=0}^{\infty}(-1)^{n}e^{-2\mathrm{i}nv}L_{n}^{0}(2x)$
Therefore, we have (for $x=\mathbf{k}_{\perp}^{2}/eB$)
$\displaystyle\mathcal{A}_{1}=e^{-x}\left(\sum_{n=0}^{\infty}(-1)^{n}L_{n}^{0}(2x)\int_{0}^{\infty}d\tau
e^{\mathrm{i}(\mathcal{D}_{\parallel}-2(n+1)eB)\tau}\right.$
$\displaystyle\left.+\sum_{n=0}^{\infty}(-1)^{n}L_{n}^{0}(2x)\int_{0}^{\infty}d\tau
e^{\mathrm{i}(\mathcal{D}_{\parallel}-2neB)\tau}\right)$ (99)
Evaluating the exponential integrals,
$\displaystyle\mathcal{A}_{1}$ $\displaystyle=$
$\displaystyle\mathrm{i}e^{-x}\left(\sum_{n=0}^{\infty}(-1)^{n}\frac{L_{n}^{0}(2x)}{\mathcal{D}_{\parallel}-2(n+1)eB}\right.$
(100)
$\displaystyle\left.+\sum_{n=0}^{\infty}(-1)^{n}\frac{L_{n}^{0}(2x)}{\mathcal{D}_{\parallel}-2neB}\right)$
$\displaystyle=$
$\displaystyle\mathrm{i}\frac{e^{-x}}{\mathcal{D}_{\parallel}}\left[1+\sum_{n=1}^{\infty}\frac{(-1)^{n}\left[L_{n}^{0}(2x)-L_{n-1}^{0}(2x)\right]}{1-2n\frac{eB}{\mathcal{D}_{\parallel}}}\right]$
This expansion is well defined for $eB>0$. However, we would like to inspect
whether it can be used to generate a valid expansion near $eB=0$.
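The generating-function identity Eq. (97), on which the whole Landau-level expansion rests, can be cross-checked numerically with SciPy's Laguerre polynomials; the values of $x$ and $t$ below are arbitrary test points inside the convergence region $|t|<1$.

```python
import numpy as np
from scipy.special import eval_laguerre

def gen_lhs(x, t):
    """Left-hand side of Eq. (97): exp(-(x/2)(1-t)/(1+t))."""
    return np.exp(-(x / 2.0) * (1.0 - t) / (1.0 + t))

def gen_rhs(x, t, nmax=80):
    """Right-hand side of Eq. (97): (1+t) e^{-x/2} sum_n (-t)^n L_n(x),
    truncated at nmax (terms decay like |t|^n for |t| < 1)."""
    acc = sum((-t) ** n * eval_laguerre(n, x) for n in range(nmax + 1))
    return (1.0 + t) * np.exp(-x / 2.0) * acc
```

With $|t|\leq 0.4$ the truncation error at `nmax=80` is far below double precision, so the two sides agree to machine accuracy.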
### A.1 Expansion for low magnetic fields
Notice that, from Eq. (V.2) we have the identities
$\displaystyle
e^{-x}\sum_{n=0}^{\infty}(-1)^{n}e^{-2\mathrm{i}nv}L_{n}^{0}(2x)=\frac{e^{-\mathrm{i}x\tan
v}}{1+e^{-2\mathrm{i}v}}$ (101) $\displaystyle
e^{-x}\sum_{n=0}^{\infty}(-1)^{n}e^{-2\mathrm{i}(n+1)v}L_{n}^{0}(2x)=\frac{e^{-2\mathrm{i}v}e^{-\mathrm{i}x\tan
v}}{1+e^{-2\mathrm{i}v}}$ (102)
Therefore, we can calculate the following expansions
$\displaystyle
e^{-x}\sum_{n=0}^{\infty}\frac{(-1)^{n}L_{n}^{0}(2x)}{\mathcal{D}_{\parallel}-2(n+1)eB}=\frac{1}{\mathcal{D}_{\parallel}}\sum_{k=0}^{\infty}\left(\frac{2eB}{\mathcal{D}_{\parallel}}\right)^{k}$
$\displaystyle\times\left[e^{-x}\sum_{n=0}^{\infty}(-1)^{n}(n+1)^{k}L_{n}^{0}(2x)\right]$
$\displaystyle=\frac{1}{\mathcal{D}_{\parallel}}\sum_{k=0}^{\infty}\left(\frac{2eB}{\mathcal{D}_{\parallel}}\right)^{k}\frac{1}{(-2\mathrm{i})^{k}}\left.\frac{\partial^{k}}{\partial
v^{k}}\left(\frac{e^{-2\mathrm{i}v}e^{-\mathrm{i}x\tan
v}}{1+e^{-2\mathrm{i}v}}\right)\right|_{v\rightarrow 0}$ (103)
and similarly
$\displaystyle
e^{-x}\sum_{n=0}^{\infty}\frac{(-1)^{n}L_{n}^{0}(2x)}{\mathcal{D}_{\parallel}-2neB}=\frac{1}{\mathcal{D}_{\parallel}}\sum_{k=0}^{\infty}\left(\frac{2eB}{\mathcal{D}_{\parallel}}\right)^{k}$
$\displaystyle\times\left[e^{-x}\sum_{n=0}^{\infty}(-1)^{n}n^{k}L_{n}^{0}(2x)\right]$
$\displaystyle=\frac{1}{\mathcal{D}_{\parallel}}\sum_{k=0}^{\infty}\left(\frac{2eB}{\mathcal{D}_{\parallel}}\right)^{k}\frac{1}{(-2\mathrm{i})^{k}}\left.\frac{\partial^{k}}{\partial
v^{k}}\left(\frac{e^{-\mathrm{i}x\tan
v}}{1+e^{-2\mathrm{i}v}}\right)\right|_{v\rightarrow 0}$ (104)
Substituting both expressions into Eq. (100), we obtain the infinite series
$\displaystyle\mathcal{A}_{1}=\frac{\mathrm{i}}{\mathcal{D}_{\parallel}}\left(1+\sum_{k=1}^{\infty}\left(\frac{\mathrm{i}eB}{\mathcal{D}_{\parallel}}\right)^{k}\mathcal{E}_{k}(x)\right)$
(105)
Here, we have defined the polynomials $\mathcal{E}_{k}(x)$ (with
$x=\mathbf{k}_{\perp}^{2}/eB$) as generated by the function $e^{-\mathrm{i}x\tan v}$,
$\displaystyle\mathcal{E}_{k}(x)=\lim_{v\rightarrow
0}\frac{\partial^{k}}{\partial v^{k}}\left(e^{-\mathrm{i}x\tan v}\right)$
(106)
The first few cases are
$\displaystyle\mathcal{E}_{1}(x)$ $\displaystyle=$ $\displaystyle-\mathrm{i}x$
$\displaystyle\mathcal{E}_{2}(x)$ $\displaystyle=$ $\displaystyle-x^{2}$
$\displaystyle\mathcal{E}_{3}(x)$ $\displaystyle=$
$\displaystyle-2\mathrm{i}x+\mathrm{i}x^{3}$ $\displaystyle\mathcal{E}_{4}(x)$
$\displaystyle=$ $\displaystyle x^{4}-8x^{2}$
$\displaystyle\mathcal{E}_{5}(x)$ $\displaystyle=$
$\displaystyle-\mathrm{i}x^{5}+20\mathrm{i}x^{3}-16\mathrm{i}x$
$\displaystyle\mathcal{E}_{6}(x)$ $\displaystyle=$
$\displaystyle-x^{6}+40x^{4}-136x^{2}$ (107) $\displaystyle\vdots$ (108)
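These coefficients can be verified numerically. The sketch below (plain Python with illustrative values; not part of the derivation) composes the series of $\tan v$ with the standard exponential-series recurrence and recovers $\mathcal{E}_{k}(x)$ as $k!$ times the $v^{k}$ coefficient of $e^{-\mathrm{i}x\tan v}$:

```python
from math import factorial

def E_coeffs(x, N=6):
    # tan v = v + v^3/3 + 2 v^5/15 + ... (enough terms for orders <= N)
    tan = [0.0, 1.0, 0.0, 1.0 / 3.0, 0.0, 2.0 / 15.0, 0.0]
    a = [-1j * x * t for t in tan]          # series of -i x tan v
    b = [1.0 + 0j] + [0j] * N               # series of exp(-i x tan v)
    for n in range(1, N + 1):
        # exp-of-series recurrence: b_n = (1/n) sum_k k a_k b_{n-k}
        b[n] = sum(k * a[k] * b[n - k] for k in range(1, n + 1)) / n
    # E_k(x) = k! * [v^k] exp(-i x tan v)
    return [factorial(k) * b[k] for k in range(N + 1)]

Ek = E_coeffs(0.7)   # Ek[k] should match E_k evaluated at x = 0.7
```

The recurrence $b_{n}=\frac{1}{n}\sum_{k}k\,a_{k}b_{n-k}$ is the usual rule for exponentiating a power series with vanishing constant term, so no symbolic algebra package is needed.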
Substituting these expressions into the series Eq. (105), we can reorganize it
as an expansion in terms of the variable
$y=\mathbf{k}_{\perp}^{2}/\mathcal{D}_{\parallel}$, as follows
$\displaystyle\mathcal{A}_{1}$ $\displaystyle=$
$\displaystyle\frac{\mathrm{i}}{\mathcal{D}_{\parallel}}\left[(1+y+y^{2}+y^{3}+\ldots)\right.$
(109)
$\displaystyle\left.-2\left(\frac{eB}{\mathcal{D}_{\parallel}}\right)^{2}y\left(1+4y+10y^{2}+20y^{3}+\ldots\right)\right.$
$\displaystyle\left.+8\left(\frac{eB}{\mathcal{D}_{\parallel}}\right)^{4}y\left(2+17y+77y^{2}+\ldots\right)\right]+O\left(\frac{eB}{\mathcal{D}_{\parallel}}\right)^{6}$
$\displaystyle=$
$\displaystyle\frac{\mathrm{i}}{\mathcal{D}_{\parallel}}\left[\frac{1}{1-y}-2\left(\frac{eB}{\mathcal{D}_{\parallel}}\right)^{2}\frac{y}{(1-y)^{4}}\right.$
$\displaystyle\left.+8\left(\frac{eB}{\mathcal{D}_{\parallel}}\right)^{4}\frac{y(3y+2)}{(1-y)^{7}}\right]+O\left(\frac{eB}{\mathcal{D}_{\parallel}}\right)^{6}$
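The resummations used in the last step can be checked term by term: the binomial expansion $(1-y)^{-4}=\sum_{n}\binom{n+3}{3}y^{n}$ reproduces $1+4y+10y^{2}+20y^{3}+\ldots$, while $(2+3y)(1-y)^{-7}$ reproduces $2+17y+77y^{2}+\ldots$. A minimal check of the coefficients:

```python
from math import comb

# Taylor coefficients of 1/(1-y)^4 are binom(n+3, 3)
geom4 = [comb(n + 3, 3) for n in range(4)]

# Taylor coefficients of (2 + 3y)/(1-y)^7 are 2*binom(n+6, 6) + 3*binom(n+5, 6)
mixed7 = [2 * comb(n + 6, 6) + 3 * comb(n + 5, 6) for n in range(3)]
```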
Finally, using the simple identity
$\displaystyle\mathcal{D}_{\parallel}(1-y)=\mathcal{D}_{\parallel}(1-\frac{\mathbf{k}_{\perp}^{2}}{\mathcal{D}_{\parallel}})=k^{2}-m^{2}+\mathrm{i}\epsilon$
(110)
we obtain the final form
$\displaystyle\mathcal{A}_{1}$ $\displaystyle=$
$\displaystyle\frac{\mathrm{i}}{k^{2}-m^{2}+\mathrm{i}\epsilon}\left[1-2\left(\frac{eB}{\mathcal{D}_{\parallel}}\right)^{2}\frac{y}{(1-y)^{3}}\right.$
$\displaystyle\left.+8\left(\frac{eB}{\mathcal{D}_{\parallel}}\right)^{4}\frac{y(3y+2)}{(1-y)^{6}}\right]+O(\frac{eB}{\mathcal{D}_{\parallel}})^{6}$
$\displaystyle=$
$\displaystyle\frac{\mathrm{i}}{k^{2}-m^{2}+\mathrm{i}\epsilon}+\frac{-2\mathrm{i}(eB)^{2}\mathbf{k}_{\perp}^{2}}{\left[k^{2}-m^{2}+\mathrm{i}\epsilon\right]^{4}}+O((eB)^{4}).$
### A.2 A closed expression in terms of Hypergeometric functions
In order to simplify the integrals $\mathcal{J}_{i}$, let us provide an
analytical expression for $\mathcal{A}_{1}(k)$. From the generating function
of the Laguerre polynomials:
$\displaystyle\sum_{n=0}^{\infty}(-1)^{n}t^{n}L_{n}^{\alpha}(x)=\frac{1}{(1+t)^{\alpha+1}}\exp\left(\frac{t}{1+t}x\right),$
(112)
then, by defining:
$\displaystyle b\equiv\frac{t}{1+t},$ (113)
we get:
$\displaystyle(1-b)^{1+\alpha}e^{bx}=\sum_{n=0}^{\infty}(-1)^{n}\frac{b^{n}}{(1-b)^{n}}L_{n}^{\alpha}(x).$
(114)
Multiplying by $b^{\beta}/(1-b)^{\beta+2}$:
$\displaystyle
b^{\beta}(1-b)^{\alpha-\beta-1}e^{bx}=\sum_{n=0}^{\infty}(-1)^{n}b^{n+\beta}(1-b)^{-n-\beta-2}L_{n}^{\alpha}(x),$
so that:
$\displaystyle\int_{0}^{-\infty}db~{}b^{\beta}(1-b)^{\alpha-\beta-1}e^{bx}=-\sum_{n=0}^{\infty}\frac{(-1)^{n}L_{n}^{\alpha}(x)}{n+\beta+1}.$
Now by setting $b\to-b$:
$\displaystyle\sum_{n=0}^{\infty}\frac{(-1)^{n}L_{n}^{\alpha}(x)}{n+\beta+1}$
$\displaystyle=$
$\displaystyle(-1)^{\beta}\int_{0}^{\infty}db~{}b^{\beta}(1+b)^{\alpha-\beta-1}e^{-bx}$
$\displaystyle=$ $\displaystyle
e^{\mathrm{i}\pi\beta}\Gamma(1+\beta)U(1+\beta,1+\alpha,x),$
where $\Gamma(z)$ and $U(a,b,z)$ are the gamma function and the confluent
hypergeometric function of the second kind, respectively.
Now, from Eq. (100):
$\displaystyle\mathcal{A}_{1}$ $\displaystyle=$
$\displaystyle\mathrm{i}e^{-x}\sum_{n=0}^{\infty}\left(\frac{(-1)^{n}L_{n}^{0}(2x)}{\mathcal{D}_{\parallel}-2(n+1)eB}+\frac{(-1)^{n}L_{n}^{0}(2x)}{\mathcal{D}_{\parallel}-2neB}\right)$
$\displaystyle=$
$\displaystyle-\frac{\mathrm{i}e^{-x}}{2eB}\sum_{n=0}^{\infty}\left[\frac{(-1)^{n}L_{n}^{0}(2x)}{n+\left(1-\frac{\mathcal{D}_{\parallel}}{2eB}\right)}+\frac{(-1)^{n}L_{n}^{0}(2x)}{n-\frac{\mathcal{D}_{\parallel}}{2eB}}\right].$
Therefore, by using the summation formula derived above:
$\displaystyle\mathcal{A}_{1}$ $\displaystyle=$
$\displaystyle-\frac{\mathrm{i}e^{-x}}{2eB}$ $\displaystyle\times$
$\displaystyle\Bigg{[}e^{-\mathrm{i}\pi\frac{\mathcal{D}_{\parallel}}{2eB}}\Gamma\left(1-\frac{\mathcal{D}_{\parallel}}{2eB}\right)U\left(1-\frac{\mathcal{D}_{\parallel}}{2eB},1,2x\right)$
$\displaystyle+$ $\displaystyle
e^{-\mathrm{i}\pi\left(1+\frac{\mathcal{D}_{\parallel}}{2eB}\right)}\Gamma\left(-\frac{\mathcal{D}_{\parallel}}{2eB}\right)U\left(-\frac{\mathcal{D}_{\parallel}}{2eB},1,2x\right)\Bigg{]}.$
The latter can be simplified with the properties of the gamma and the
hypergeometric functions. First, from the identity $\Gamma(z+1)=z\Gamma(z)$:
$\displaystyle\mathcal{A}_{1}=\frac{\mathrm{i}e^{-x}}{2eB}\exp\left(-\frac{\mathrm{i}\pi\mathcal{D}_{\parallel}}{2eB}\right)\Gamma\left(-\frac{\mathcal{D}_{\parallel}}{2eB}\right)$
$\displaystyle\times$
$\displaystyle\Bigg{[}\frac{\mathcal{D}_{\parallel}}{2eB}U\left(1-\frac{\mathcal{D}_{\parallel}}{2eB},1,2x\right)+U\left(-\frac{\mathcal{D}_{\parallel}}{2eB},1,2x\right)\Bigg{]}.$
Moreover, given that:
$\displaystyle U(a,b,z)-aU(a+1,b,z)=U(a,b-1,z),$ (121)
we finally arrive at:
$\displaystyle\mathcal{A}_{1}(k)$ $\displaystyle=$
$\displaystyle\frac{\mathrm{i}e^{-\mathbf{k}_{\perp}^{2}/eB}}{2eB}\exp\left[-\frac{\mathrm{i}\pi\left(k_{\parallel}^{2}-m^{2}\right)}{2eB}\right]$
$\displaystyle\times$
$\displaystyle\Gamma\left(-\frac{k_{\parallel}^{2}-m^{2}}{2eB}\right)U\left(-\frac{k_{\parallel}^{2}-m^{2}}{2eB},0,\frac{2\mathbf{k}_{\perp}^{2}}{eB}\right).$
## Appendix B The density of states $\rho(E)$
In this appendix, we show the details for the calculation of the density of
states for the Landau level spectrum $\rho(E)$ defined in the main text. We
start from the definition of the density of states
$\displaystyle\rho(E)$ $\displaystyle=$
$\displaystyle\int_{-\infty}^{\infty}\frac{dp_{3}}{2\pi}\sum_{n=0}^{\infty}\delta\left(E-\sqrt{p_{3}^{2}+m^{2}+2(n+1)eB}\right)$
(123) $\displaystyle=$ $\displaystyle
2\sum_{n=0}^{\infty}\int_{0}^{\infty}\frac{dp_{3}}{2\pi}\frac{\delta\left(p_{3}-\sqrt{E^{2}-m^{2}-2(n+1)eB}\right)}{p_{3}/E}
$\displaystyle=$
$\displaystyle\frac{E}{\pi}\sum_{n=0}^{\infty}\frac{\Theta\left(E-\sqrt{m^{2}+2(n+1)eB}\right)}{\sqrt{E^{2}-m^{2}-2(n+1)eB}}$
For each fixed value of the energy $E$, the Heaviside step function truncates
the sum at a maximum integer $n=N_{max}(E)$, determined by the condition
$\displaystyle E-\sqrt{m^{2}+2(N_{max}+1)eB}=0,$ (124)
that leads to the definition (with $\lfloor z\rfloor$ the floor function)
$\displaystyle N_{max}(E)=\lfloor\frac{E^{2}-m^{2}}{2eB}-1\rfloor.$ (125)
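This floor-function definition can be sanity-checked with illustrative numbers: every level $n\le N_{max}$ must lie below $E$, while $n=N_{max}+1$ must lie above it.

```python
import math

eB, m, E = 1.0, 0.5, 3.0   # illustrative values
Nmax = math.floor((E ** 2 - m ** 2) / (2 * eB) - 1)

# levels n = 0..Nmax are below E; the next one is not
below = all(E > math.sqrt(m ** 2 + 2 * (n + 1) * eB) for n in range(Nmax + 1))
above = E <= math.sqrt(m ** 2 + 2 * (Nmax + 2) * eB)
```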
Hence, we have
$\displaystyle\rho(E)$ $\displaystyle=$
$\displaystyle\Theta(E-\sqrt{m^{2}+2eB})\frac{E}{\pi}$ (126)
$\displaystyle\times\sum_{n=0}^{N_{max}(E)}\frac{1}{\sqrt{E^{2}-m^{2}-2(n+1)eB}}$
In this finite sum, $0\leq n\leq N_{max}(E)$, we can redefine the index by
$\displaystyle\ell\equiv N_{max}(E)-n\Longrightarrow 0\leq\ell\leq
N_{max}(E),$ (127)
and hence we have the equivalent expression
$\displaystyle\rho(E)$ $\displaystyle=$
$\displaystyle\Theta(E-\sqrt{m^{2}+2eB})\frac{E}{\pi\sqrt{2eB}}$ (128)
$\displaystyle\times\sum_{\ell=0}^{N_{max}(E)}\frac{1}{\sqrt{\frac{E^{2}-m^{2}-2(N_{max}(E)+1)eB}{2eB}+\ell}}$
Finally, using the property of the Hurwitz zeta function
$\displaystyle\sum_{\ell=0}^{N}\left(z+\ell\right)^{-s}=\zeta(s,z)-\zeta(s,z+N+1),$
(129)
we obtain
$\displaystyle\rho(E)$ $\displaystyle=$
$\displaystyle\Theta(E-\sqrt{m^{2}+2eB})\frac{E}{\pi\sqrt{2eB}}$ (130)
$\displaystyle\times\left[\zeta\left(\frac{1}{2},\frac{E^{2}-m^{2}-2eB}{2eB}-N_{max}(E)\right)\right.$
$\displaystyle\left.-\zeta\left(\frac{1}{2},\frac{E^{2}-m^{2}}{2eB}\right)\right]$
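The zeta-function resummation can be cross-checked numerically against the direct truncated Landau sum. The sketch below (plain Python, illustrative parameters) implements $\zeta(s,a)$ by Euler–Maclaurin continuation; note that a $1/\sqrt{2eB}$ prefactor arises when $2eB$ is factored out of each square root.

```python
import math

def hurwitz_zeta(s, a, K=50):
    # Euler-Maclaurin continuation of sum_{k>=0} (a+k)^(-s); valid at s = 1/2
    total = sum((a + k) ** (-s) for k in range(K))
    x = a + K
    total += x ** (1 - s) / (s - 1) + 0.5 * x ** (-s)
    total += s * x ** (-s - 1) / 12.0                        # B_2 correction
    total -= s * (s + 1) * (s + 2) * x ** (-s - 3) / 720.0   # B_4 correction
    return total

eB, m, E = 1.0, 0.5, 4.0
u = (E * E - m * m) / (2 * eB)
Nmax = math.floor(u - 1)

# direct truncated Landau sum
rho_sum = (E / math.pi) * sum(
    1.0 / math.sqrt(E * E - m * m - 2 * (n + 1) * eB) for n in range(Nmax + 1))

# Hurwitz-zeta resummation of the same finite sum
rho_zeta = (E / (math.pi * math.sqrt(2 * eB))) * (
    hurwitz_zeta(0.5, u - Nmax - 1) - hurwitz_zeta(0.5, u))
```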
## Appendix C Computing $\tilde{A}_{1}$ and $\tilde{A}_{2}$
### C.1 Weak magnetic field limit
We need to compute:
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})=-2\mathrm{i}(eB)^{2}\int
d^{3}p\frac{p_{\perp}^{2}}{(q_{0}^{2}-p_{3}^{2}-p_{\perp}^{2}-m^{2}+\mathrm{i}\epsilon)^{4}}$
so that in spherical coordinates, with $p_{3}=p\cos\theta$ and
$p_{\perp}=p\sin\theta$:
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})$ (132) $\displaystyle=$
$\displaystyle-4\pi\mathrm{i}(eB)^{2}\int_{0}^{\pi}d\theta\sin^{3}\theta\int_{0}^{\infty}dp\frac{p^{4}}{(q_{0}^{2}-p^{2}-m^{2}+\mathrm{i}\epsilon)^{4}}$
$\displaystyle=$
$\displaystyle-4\pi\mathrm{i}(eB)^{2}\frac{4}{3}\int_{0}^{\infty}dp\frac{p^{4}}{(q_{0}^{2}-p^{2}-m^{2}+\mathrm{i}\epsilon)^{4}}$
By defining
$\displaystyle a^{2}\equiv q_{0}^{2}-m^{2}+\mathrm{i}\epsilon,$ (133)
and
$\displaystyle a_{\pm}=\sqrt{q_{0}^{2}-m^{2}}\pm\mathrm{i}\epsilon,$ (134)
we can use complex integration:
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})$ $\displaystyle=$
$\displaystyle-4\pi\mathrm{i}(eB)^{2}\frac{4}{3}\frac{1}{2}\int_{-\infty}^{\infty}dz\frac{z^{4}}{(z^{2}-a^{2})^{4}}$
(135) $\displaystyle=$
$\displaystyle-4\pi\mathrm{i}(eB)^{2}\frac{4}{3}\frac{1}{2}\frac{2\pi\mathrm{i}}{3!}\lim_{z\to
a_{+}}\frac{d^{3}}{dz^{3}}\left[\frac{z^{4}}{(z-a_{-})^{4}}\right]$
$\displaystyle=$
$\displaystyle-4\pi\mathrm{i}(eB)^{2}\frac{4}{3}\frac{1}{2}\frac{2\pi\mathrm{i}}{3!}\frac{(-3)}{16(q_{0}^{2}-m^{2})^{3/2}}$
$\displaystyle=$
$\displaystyle-\frac{\pi^{2}}{6}\frac{(eB)^{2}}{(q_{0}^{2}-m^{2})^{3/2}}.$
On the other hand, at order $\mathcal{O}((eB)^{2})$:
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})$ $\displaystyle=$
$\displaystyle\mathrm{i}\int_{-\infty}^{+\infty}\frac{dp_{3}}{q_{0}^{2}-p_{3}^{2}-m^{2}+\mathrm{i}\epsilon}$
(136) $\displaystyle=$
$\displaystyle-\mathrm{i}\int_{-\infty}^{+\infty}\frac{dz}{z^{2}-a^{2}}$
$\displaystyle=$
$\displaystyle-\mathrm{i}\int_{-\infty}^{+\infty}\frac{dz}{(z-a_{+})(z+a_{-})}$
By closing the contour in the upper half-plane, we get from the residue theorem:
$\displaystyle-\mathrm{i}\int_{-\infty}^{+\infty}\frac{dz}{(z-a_{+})(z+a_{-})}=\frac{\pi}{a_{+}}.$
(137)
Then:
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})=\frac{\pi}{\sqrt{q_{0}^{2}-m^{2}}}$
(138)
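Both weak-field results can be checked by direct numerical integration below threshold ($q_{0}^{2}<m^{2}$), where the integrals are real and no contour deformation is needed. The sketch below uses illustrative parameters, a composite Simpson rule, and trigonometric substitutions to map the integration ranges onto finite intervals:

```python
import cmath
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

eB, m, q0 = 0.1, 1.0, 0.3          # below threshold: q0^2 < m^2
c = m * m - q0 * q0
ieps = 1e-30j                      # +i epsilon prescription

# A~_1: substituting p = sqrt(c) tan(theta) maps the radial integral to [0, pi/2]
I = simpson(lambda t: math.sin(t) ** 4 * math.cos(t) ** 2, 0.0, math.pi / 2) / c ** 1.5
A1_num = -4j * math.pi * eB ** 2 * (4.0 / 3.0) * I
A1_formula = -(math.pi ** 2 / 6) * eB ** 2 / cmath.sqrt(q0 * q0 - m * m + ieps) ** 3

# A~_2: with p = tan(t), dp/(p^2 + c) = dt/(sin^2 t + c cos^2 t)
A2_num = -1j * simpson(lambda t: 1.0 / (math.sin(t) ** 2 + c * math.cos(t) ** 2),
                       -math.pi / 2, math.pi / 2)
A2_formula = math.pi / cmath.sqrt(q0 * q0 - m * m + ieps)
```

Below threshold the branch of the square root makes both closed-form results purely imaginary, which the direct integrals reproduce.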
### C.2 Arbitrary magnetic field
From the definition of $\widetilde{\mathcal{A}}_{1}$:
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})$ $\displaystyle=$
$\displaystyle\int d^{3}p\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp})$
(139) $\displaystyle=$ $\displaystyle\mathrm{i}\int
d^{3}p\Bigg{[}\frac{e^{-\mathbf{p}_{\perp}^{2}/eB}}{q_{0}^{2}-p_{3}^{2}-m^{2}+\mathrm{i}\epsilon}$
$\displaystyle+$
$\displaystyle\sum_{n=1}^{\infty}(-1)^{n}e^{-\mathbf{p}_{\perp}^{2}/eB}\frac{L_{n}\left(\frac{2\mathbf{p}_{\perp}^{2}}{eB}\right)-L_{n-1}\left(\frac{2\mathbf{p}_{\perp}^{2}}{eB}\right)}{q_{0}^{2}-p_{3}^{2}-m^{2}-2neB+\mathrm{i}\epsilon}\Bigg{]}$
$\displaystyle=$
$\displaystyle\mathcal{I}_{1}+\sum_{n=1}^{\infty}(-1)^{n}\mathcal{I}_{2,n}$
Here, we defined
$\displaystyle\mathcal{I}_{1}$ $\displaystyle=$ $\displaystyle\mathrm{i}\int
d^{3}p\frac{e^{-\mathbf{p}_{\perp}^{2}/eB}}{q_{0}^{2}-p_{3}^{2}-m^{2}+\mathrm{i}\epsilon}$
(140) $\displaystyle=$ $\displaystyle-\mathrm{i}\int
d^{2}p_{\perp}e^{-\mathbf{p}_{\perp}^{2}/eB}\int_{-\infty}^{\infty}\frac{dz}{(z-a-\mathrm{i}\epsilon)(z+a+\mathrm{i}\epsilon)}$
$\displaystyle=$
$\displaystyle-\mathrm{i}\pi(eB)\frac{2\pi\mathrm{i}}{2(a+\mathrm{i}\epsilon)}=\frac{\pi^{2}eB}{\sqrt{q_{0}^{2}-m^{2}+\mathrm{i}\epsilon}},$
and
$\displaystyle\mathcal{I}_{2,n}=\mathrm{i}\int
d^{3}p\,e^{-\mathbf{p}_{\perp}^{2}/eB}\frac{L_{n}\left(\frac{2\mathbf{p}_{\perp}^{2}}{eB}\right)-L_{n-1}\left(\frac{2\mathbf{p}_{\perp}^{2}}{eB}\right)}{q_{0}^{2}-p_{3}^{2}-m^{2}-2neB+\mathrm{i}\epsilon}$
(141)
In cylindrical coordinates, with azimuthal symmetry, $d^{3}p=dp_{3}\pi
d(\mathbf{p}_{\perp}^{2})$. Moreover, in the integral over
$\mathbf{p}_{\perp}$, define $x=\frac{2\mathbf{p}_{\perp}^{2}}{eB}$, such that
$\displaystyle\mathcal{I}_{2,n}$ $\displaystyle=$ $\displaystyle\frac{\pi
eB}{2}\int_{-\infty}^{\infty}\frac{dp_{3}}{q_{0}^{2}-m^{2}-p_{3}^{2}-2neB+\mathrm{i}\epsilon}$
(142)
$\displaystyle\times\int_{0}^{\infty}dx\,e^{-x/2}\left[L_{n}(x)-L_{n-1}(x)\right]$
$\displaystyle=$ $\displaystyle 2\pi
eB(-1)^{n}\int_{-\infty}^{\infty}\frac{dp_{3}}{q_{0}^{2}-m^{2}-p_{3}^{2}-2neB+\mathrm{i}\epsilon}$
where we used the identity (Gradshteyn–Ryzhik)
$\int_{0}^{\infty}dx\,e^{-bx}L_{n}(x)=\left(b-1\right)^{n}b^{-n-1},\qquad\text{Re}\,b>0$
(143)
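This identity, and the resulting value $4(-1)^{n}$ for the $x$-integral of $e^{-x/2}\left[L_{n}(x)-L_{n-1}(x)\right]$, can be verified numerically (a sketch with illustrative values; $L_{n}$ is built from the three-term recurrence):

```python
import math

def laguerre(n, x):
    # L_n(x) from (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}
    p0, p1 = 1.0, 1.0 - x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1 - x) * p1 - k * p0) / (k + 1)
    return p1

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

b, n = 0.5, 5
lhs = simpson(lambda t: math.exp(-b * t) * laguerre(n, t), 0.0, 120.0)
rhs = (b - 1) ** n * b ** (-n - 1)    # = -2 for b = 1/2, n = 5

diff = simpson(lambda t: math.exp(-t / 2) * (laguerre(5, t) - laguerre(4, t)),
               0.0, 120.0)            # should equal 4*(-1)^5 = -4
```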
Inserting Eqs. (140) and (142) into Eq. (139), we obtain (after shifting the
index $n\rightarrow n+1$)
$\displaystyle\tilde{\mathcal{A}}_{1}(q_{0})=\frac{\pi^{2}(eB)}{\sqrt{q_{0}^{2}-m^{2}+\mathrm{i}\epsilon}}$
(144) $\displaystyle+2\pi
eB\int_{-\infty}^{+\infty}dp_{3}\sum_{n=0}^{\infty}\frac{1}{q_{0}^{2}-m^{2}-p_{3}^{2}-2(n+1)eB+\mathrm{i}\epsilon}$
Let us introduce the density of states for Landau levels
$\displaystyle\rho(E)=\int_{-\infty}^{\infty}\frac{dp_{3}}{2\pi}\sum_{n=0}^{\infty}\delta\left(E-E_{n}(p_{3})\right),$
(145)
with the dispersion relation for the spectrum
$\displaystyle E_{n}(p_{3})=\sqrt{p_{3}^{2}+m^{2}+2(n+1)eB}.$ (146)
With these definitions, we obtain from Eq. (V.2) the exact expression
$\displaystyle\tilde{\mathcal{A}}_{1}(q_{0})=\frac{\pi^{2}eB}{\sqrt{q_{0}^{2}-m^{2}+\mathrm{i}\epsilon}}+4\pi^{2}eB\int_{-\infty}^{+\infty}dE\frac{\rho(E)}{q_{0}^{2}-E^{2}+\mathrm{i}\epsilon},$
where, as shown in detail in Appendix B
$\displaystyle\rho(E)$ $\displaystyle=$
$\displaystyle\Theta(E-\sqrt{m^{2}+2eB})\frac{E}{\pi\sqrt{2eB}}$ (148)
$\displaystyle\times\left[\zeta\left(\frac{1}{2},\frac{E^{2}-m^{2}-2eB}{2eB}-N_{max}(E)\right)\right.$
$\displaystyle\left.-\zeta\left(\frac{1}{2},\frac{E^{2}-m^{2}}{2eB}\right)\right],$
where we defined
$\displaystyle N_{max}(E)=\lfloor\frac{E^{2}-m^{2}}{2eB}-1\rfloor,$ (149)
with $\lfloor x\rfloor$ the floor function, and $\zeta(s,z)$ the Hurwitz zeta
function.
On the other hand:
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})$ $\displaystyle=$
$\displaystyle\int_{-\infty}^{\infty}dp_{3}\,\mathcal{A}_{1}(q_{0},p_{3};\mathbf{p}_{\perp}=0)$
$\displaystyle=$
$\displaystyle\mathrm{i}\int_{-\infty}^{\infty}dp_{3}\frac{1}{q_{0}^{2}-p_{3}^{2}-m^{2}+\mathrm{i}\epsilon}$
$\displaystyle+$
$\displaystyle\mathrm{i}\int_{-\infty}^{\infty}dp_{3}\sum_{n=1}^{\infty}(-1)^{n}\frac{L_{n}(0)-L_{n-1}(0)}{q_{0}^{2}-p_{3}^{2}-m^{2}-2neB+\mathrm{i}\epsilon},$
where the second term vanishes, given that $L_{n}(0)=1~{}\forall n$. Hence
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})=\mathcal{I}_{1}=\frac{\pi^{2}eB}{\sqrt{q_{0}^{2}-m^{2}+\mathrm{i}\epsilon}}.$
(151)
### C.3 Strong magnetic field limit
In this limit:
$\displaystyle\mathcal{A}_{1}(q)=\mathrm{i}\frac{e^{-\mathbf{q}_{\perp}^{2}/eB}}{q_{\parallel}^{2}-m^{2}},$ (152a)
$\displaystyle\mathcal{A}_{2}(q)=\frac{e^{-\mathbf{q}_{\perp}^{2}/eB}}{q_{\parallel}^{2}-m^{2}},$
(152b) $\displaystyle\mathcal{A}_{3}(q)=0,$ (152c)
$\displaystyle\mathcal{D}(q)=2\frac{e^{-2\mathbf{q}_{\perp}^{2}/eB}}{q_{\parallel}^{2}-m^{2}}$
(152d)
and
$\displaystyle\widetilde{\mathcal{A}}_{1}(q_{0})=\frac{\pi^{2}eB}{\sqrt{q_{0}^{2}-m^{2}}}$
(153a)
$\displaystyle\widetilde{\mathcal{A}}_{2}(q_{0})=\frac{\pi}{\sqrt{q_{0}^{2}-m^{2}}}.$
(153b)
## Appendix D Vertex corrections at $O(\Delta^{2})$
The diagrams contributing at order $\Delta^{2}$ to the 4-point vertex are
depicted in Fig. 11; their corresponding matrix elements are given by the
following integral expressions
$\displaystyle\hat{\Gamma}_{\text{(a)}}=\int\frac{d^{3}q}{(2\pi)^{3}}\,\gamma^{i}S_{\text{F}}(p-q)\gamma^{j}\otimes\gamma_{i}S_{\text{F}}(p^{\prime}-q)\gamma_{j},$
$\displaystyle\hat{\Gamma}_{\text{(b)}}=\int\frac{d^{3}q}{(2\pi)^{3}}\,\gamma^{i}S_{\text{F}}(p-q)\gamma^{j}\otimes\gamma_{i}S_{\text{F}}(p^{\prime}+q)\gamma_{j},$
and
$\displaystyle\hat{\Gamma}_{\text{(c)}}=\int\frac{d^{3}q}{(2\pi)^{3}}\,\gamma^{i}S_{\text{F}}(p+q)\gamma^{j}\otimes\gamma_{i}S_{\text{F}}(p^{\prime}-q)\gamma_{j}$
In order to compute the above expressions, it is convenient to introduce the
notation
$\displaystyle\hat{\Gamma}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}q}{(2\pi)^{3}}\,\gamma^{i}S_{\text{F}}(p+\lambda
q)\gamma^{j}\otimes\gamma_{i}S_{\text{F}}(p^{\prime}+\sigma q)\gamma_{j}.$
where $\lambda,\sigma=\pm 1$. Then, we have the correspondence
$\hat{\Gamma}_{\text{(a)}}=\hat{\Gamma}^{(-,-)}$,
$\hat{\Gamma}_{\text{(b)}}=\hat{\Gamma}^{(-,+)}$, and
$\hat{\Gamma}_{\text{(c)}}=\hat{\Gamma}^{(+,-)}$, respectively. By considering
the tensor structure of the propagator, it is straightforward to realize that
the full vertex, taking into account the multiplicity factors for each
diagram,
$\displaystyle\hat{\Gamma}=2\hat{\Gamma}^{(-,-)}+2\hat{\Gamma}^{(-,+)}+4\hat{\Gamma}^{(+,-)},$
(156)
leads to an effective interaction of the form
$\displaystyle\hat{\Gamma}=\tilde{\Delta}(\bar{\psi}\gamma^{i}\psi)(\bar{\psi}\gamma_{i}\psi)+{\rm{other\,\,tensor\,\,structures}},$
(157)
where the renormalized coefficient $\tilde{\Delta}$ is given up to second
order in $\Delta$ by
$\displaystyle\tilde{\Delta}=\Delta+2(\Delta)^{2}\left(\mathcal{J}_{2}^{(-,-)}+\mathcal{J}_{2}^{(-,+)}+2\mathcal{J}_{2}^{(+,-)}\right.$
$\displaystyle\left.+\left(1-\partial_{x}^{2}\right)(1-\partial_{y}^{2})\mathcal{J}_{3}^{(-,-)}+\left(1-\partial_{x}^{2}\right)(1-\partial_{y}^{2})\mathcal{J}_{3}^{(-,+)}\right.$
$\displaystyle\left.+2\left(1-\partial_{x}^{2}\right)(1-\partial_{y}^{2})\mathcal{J}_{3}^{(+,-)}\right)$
(158)
Now, from Eq. (16):
$\displaystyle\hat{\Gamma}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle-\left(\hat{\Gamma}_{11}^{{}^{(\lambda,\sigma)}}+\hat{\Gamma}_{12}^{{}^{(\lambda,\sigma)}}+\hat{\Gamma}_{13}^{{}^{(\lambda,\sigma)}}+\hat{\Gamma}_{21}^{{}^{(\lambda,\sigma)}}\right.$
$\displaystyle+$
$\displaystyle\left.\hat{\Gamma}_{22}^{{}^{(\lambda,\sigma)}}+\hat{\Gamma}_{23}^{{}^{(\lambda,\sigma)}}+\hat{\Gamma}_{31}^{{}^{(\lambda,\sigma)}}+\hat{\Gamma}_{32}^{{}^{(\lambda,\sigma)}}+\hat{\Gamma}_{33}^{{}^{(\lambda,\sigma)}}\right),$
where
$\displaystyle\hat{\Gamma}_{11}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}q}{(2\pi)^{3}}\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(160a) $\displaystyle\times\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q),$
$\displaystyle\hat{\Gamma}_{12}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\mathrm{i}\gamma^{1}\gamma^{2}\int\frac{d^{3}q}{(2\pi)^{3}}\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(160b) $\displaystyle\times\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{2}(p^{\prime}+\sigma q),$
$\displaystyle\hat{\Gamma}_{13}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}q}{(2\pi)^{3}}\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(\not{p}^{\prime}_{\perp}+\sigma\not{q}_{\perp}\right)$
(160c) $\displaystyle\times$ $\displaystyle\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{3}(p^{\prime}+\sigma q),$
$\displaystyle\hat{\Gamma}_{21}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\mathrm{i}\gamma^{1}\gamma^{2}\int\frac{d^{3}q}{(2\pi)^{3}}\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(160d) $\displaystyle\times$ $\displaystyle\mathcal{A}_{2}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q),$
$\displaystyle\hat{\Gamma}_{22}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle-\int\frac{d^{3}q}{(2\pi)^{3}}\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(160e) $\displaystyle\times$ $\displaystyle\mathcal{A}_{2}(p+\lambda
q)\mathcal{A}_{2}(p^{\prime}+\sigma q),$
$\displaystyle\hat{\Gamma}_{23}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\mathrm{i}\gamma^{1}\gamma^{2}\int\frac{d^{3}q}{(2\pi)^{3}}\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(\not{p}^{\prime}_{\perp}+\sigma\not{q}_{\perp}\right)$
(160f) $\displaystyle\times$ $\displaystyle\mathcal{A}_{2}(p+\lambda
q)\mathcal{A}_{3}(p^{\prime}+\sigma q),$
$\displaystyle\hat{\Gamma}_{31}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}q}{(2\pi)^{3}}\left(\not{p}_{\perp}+\lambda\not{q}_{\perp}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(160g) $\displaystyle\times$ $\displaystyle\mathcal{A}_{3}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q),$
$\displaystyle\hat{\Gamma}_{32}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle-\mathrm{i}\gamma^{1}\gamma^{2}\int\frac{d^{3}q}{(2\pi)^{3}}\left(\not{p}_{\perp}+\lambda\not{q}_{\perp}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(160h) $\displaystyle\times$ $\displaystyle\mathcal{A}_{3}(p+\lambda
q)\mathcal{A}_{2}(p^{\prime}+\sigma q),$
$\displaystyle\hat{\Gamma}_{33}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle\int\frac{d^{3}q}{(2\pi)^{3}}\left(\not{p}_{\perp}+\lambda\not{q}_{\perp}\right)\left(\not{p}_{\perp}^{\prime}+\sigma\not{q}_{\perp}\right)$
(160i) $\displaystyle\times$ $\displaystyle\mathcal{A}_{3}(p+\lambda
q)\mathcal{A}_{3}(p^{\prime}+\sigma q).$
The latter equations can be condensed by defining a single master integral in
terms of $\mathcal{A}_{1}$ and its derivatives. To do so, note that:
$\displaystyle\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(161) $\displaystyle=$ $\displaystyle
m^{2}+m\left[\not{p}_{\parallel}+\not{p}_{\parallel}^{\prime}+(\sigma+\lambda)\not{q}_{\parallel}\right]+\not{p}_{\parallel}\not{p}_{\parallel}^{\prime}$
$\displaystyle+$
$\displaystyle\sigma\not{p}_{\parallel}\not{q}_{\parallel}+\lambda\not{q}_{\parallel}\not{p}_{\parallel}^{\prime}+\lambda\sigma\left(\not{q}_{\parallel}\right)^{2},$
and given that $\mathcal{A}_{i}$ are even functions of $q$, the linear terms
can be ignored. Then:
$\displaystyle\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(162) $\displaystyle\to$ $\displaystyle
m^{2}+m\left(\not{p}_{\parallel}+\not{p}_{\parallel}^{\prime}\right)+\not{p}_{\parallel}\not{p}_{\parallel}^{\prime}+\lambda\sigma
q_{\parallel}^{2}.$
Now, it is convenient to define:
$\displaystyle P$ $\displaystyle\equiv$ $\displaystyle\frac{p^{\prime}+p}{2},$
$\displaystyle Q$ $\displaystyle\equiv$ $\displaystyle\frac{p^{\prime}-p}{2},$
(163)
from which:
$\displaystyle\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
$\displaystyle\to$ $\displaystyle
m^{2}+2m\not{P}_{\parallel}+\left(\not{P}_{\parallel}-\not{Q}_{\parallel}\right)\left(\not{P}_{\parallel}+\not{Q}_{\parallel}\right)+\lambda\sigma
q_{\parallel}^{2}.$
Similarly:
$\displaystyle\left(m+\not{p}_{\parallel}+\lambda\not{q}_{\parallel}\right)\left(\not{p}^{\prime}_{\perp}+\sigma\not{q}_{\perp}\right)$
(165) $\displaystyle\to$ $\displaystyle
m\left(\not{P}_{\perp}+\not{Q}_{\perp}\right)+\left(\not{P}_{\parallel}-\not{Q}_{\parallel}\right)\left(\not{P}_{\perp}+\not{Q}_{\perp}\right),$
$\displaystyle\left(\not{p}_{\perp}+\lambda\not{q}_{\perp}\right)\left(m+\not{p}_{\parallel}^{\prime}+\sigma\not{q}_{\parallel}\right)$
(166) $\displaystyle\to$ $\displaystyle
m\left(\not{P}_{\perp}-\not{Q}_{\perp}\right)+\left(\not{P}_{\perp}-\not{Q}_{\perp}\right)\left(\not{P}_{\parallel}+\not{Q}_{\parallel}\right),$
and
$\displaystyle\left(\not{p}_{\perp}+\lambda\not{q}_{\perp}\right)\left(\not{p}_{\perp}^{\prime}+\sigma\not{q}_{\perp}\right)$
(167) $\displaystyle\to$
$\displaystyle\left(\not{P}_{\perp}-\not{Q}_{\perp}\right)\left(\not{P}_{\perp}+\not{Q}_{\perp}\right)-\lambda\sigma\mathbf{q}_{\perp}^{2}.$
Moreover, Eqs. (20) provide relations between $\mathcal{A}_{1}$,
$\mathcal{A}_{2}$, and $\mathcal{A}_{3}$, so that by introducing the variables
$\displaystyle
x=\frac{\mathbf{p}_{\perp}^{2}}{eB},~{}~{}y=\frac{\mathbf{p}_{\perp}^{{}^{\prime}2}}{eB},$
(168)
we obtain
$\displaystyle\hat{\Gamma}^{{}^{(\lambda,\sigma)}}$ $\displaystyle=$
$\displaystyle-\left[\left(\not{P}_{\parallel}-\not{Q}_{\parallel}\right)\left(\not{P}_{\parallel}+\not{Q}_{\parallel}\right)+2m\not{P}_{\parallel}+m^{2}\right]\left[1+\partial_{x}\partial_{y}-\gamma^{1}\gamma^{2}\left(\partial_{x}-\partial_{y}\right)\right]\mathcal{J}_{1}^{(\lambda,\sigma)}(p,p^{\prime})$
(169)
$\displaystyle-\left[\left(\not{P}_{\parallel}-\not{Q}_{\parallel}\right)\left(\not{P}_{\perp}+\not{Q}_{\perp}\right)+m\left(\not{P}_{\perp}+\not{Q}_{\perp}\right)\right]\left(1-\gamma^{1}\gamma^{2}\partial_{x}\right)\left(1-\partial^{2}_{y}\right)\mathcal{J}_{1}^{(\lambda,\sigma)}(p,p^{\prime})$
$\displaystyle-\left[\left(\not{P}_{\perp}-\not{Q}_{\perp}\right)\left(\not{P}_{\parallel}+\not{Q}_{\parallel}\right)+m\left(\not{P}_{\perp}-\not{Q}_{\perp}\right)\right]\left(1+\gamma^{1}\gamma^{2}\partial_{y}\right)\left(1-\partial^{2}_{x}\right)\mathcal{J}_{1}^{(\lambda,\sigma)}(p,p^{\prime})$
$\displaystyle-\left(\not{P}_{\perp}-\not{Q}_{\perp}\right)\left(\not{P}_{\perp}+\not{Q}_{\perp}\right)\left(1-\partial_{x}^{2}\right)\left(1-\partial_{y}^{2}\right)\mathcal{J}_{1}^{(\lambda,\sigma)}(p,p^{\prime})$
$\displaystyle-\lambda\sigma\left[1+\partial_{x}\partial_{y}-\gamma^{1}\gamma^{2}\left(\partial_{x}-\partial_{y}\right)\right]\mathcal{J}_{2}^{(\lambda,\sigma)}(p,p^{\prime})-\lambda\sigma\left(1-\partial_{x}^{2}\right)\left(1-\partial_{y}^{2}\right)\mathcal{J}_{3}^{(\lambda,\sigma)}(p,p^{\prime}),$
where
$\displaystyle\mathcal{J}_{1}^{(\lambda,\sigma)}(p,p^{\prime})\equiv\int\frac{d^{3}q}{(2\pi)^{3}}\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q),$
$\displaystyle\mathcal{J}_{2}^{(\lambda,\sigma)}(p,p^{\prime})\equiv\int\frac{d^{3}q}{(2\pi)^{3}}q_{\parallel}^{2}\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q),$
and
$\displaystyle\mathcal{J}_{3}^{(\lambda,\sigma)}(p,p^{\prime})\equiv\int\frac{d^{3}q}{(2\pi)^{3}}\mathbf{q}_{\perp}^{2}\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q).$
In order to simplify the integrals $\mathcal{J}_{i}$, we shall use the
analytical expression for $\mathcal{A}_{1}(k)$ derived in Appendix A:
$\displaystyle\mathcal{A}_{1}(k)$ $\displaystyle=$
$\displaystyle\frac{\mathrm{i}e^{-\mathbf{k}_{\perp}^{2}/eB}}{2eB}\exp\left[-\frac{\mathrm{i}\pi\left(k_{\parallel}^{2}-m^{2}\right)}{2eB}\right]$
$\displaystyle\times$
$\displaystyle\Gamma\left(-\frac{k_{\parallel}^{2}-m^{2}}{2eB}\right)U\left(-\frac{k_{\parallel}^{2}-m^{2}}{2eB},0,\frac{2\mathbf{k}_{\perp}^{2}}{eB}\right).$
### D.1 The integral $\mathcal{J}_{1}$
We shall consider the integral:
$\displaystyle\mathcal{J}_{1}^{(\lambda,\sigma)}(p,p^{\prime})=\int\frac{d^{3}q}{(2\pi)^{3}}\mathcal{A}_{1}(p+\lambda
q)\mathcal{A}_{1}(p^{\prime}+\sigma q)$ (172)
For the case $(\lambda,\sigma)=(-1,-1)$ we change the integration variables as
follows
$\displaystyle p^{\prime}-q$ $\displaystyle=$ $\displaystyle q^{\prime}+Q$
$\displaystyle p-q$ $\displaystyle=$ $\displaystyle q^{\prime}-Q$ (173)
In what follows, we shall use $q$ instead of $q^{\prime}$. For notational
simplicity, we define the parameters
$\displaystyle a$ $\displaystyle=$
$\displaystyle-\frac{\mathcal{D}_{\parallel}(q_{3}+Q_{\parallel})}{2eB},$
$\displaystyle a^{\prime}$ $\displaystyle=$
$\displaystyle-\frac{\mathcal{D}_{\parallel}(q_{3}-Q_{\parallel})}{2eB},$
(174)
and we shall use the identity
$\displaystyle\Gamma(a)U(a,\epsilon,z)=\frac{1}{a}M(a,\epsilon,z)+\Gamma(-1+\epsilon)zM(1+a,2,z),$ (175)
together with, for $\epsilon\rightarrow 0^{+}$,
$\displaystyle\Gamma(-1+\epsilon)=\frac{-1}{\epsilon}+\gamma_{e}-1+O(\epsilon),$
(176)
where $\gamma_{e}\approx 0.577$ is the Euler–Mascheroni constant. Also, given that
there is a strong exponential damping in the integral, we consider the
expansion of the Kummer function for small values of its argument, which is
given by
$\displaystyle M(a,b,z)=1+\frac{a}{b}z+O(z^{2}),$ (177)
so that, after removing the divergences, we end up with the integral
$\displaystyle\mathcal{J}_{1}^{(-,-)}(p,p^{\prime})$ $\displaystyle=$
$\displaystyle\left(\frac{\mathrm{i}}{2eB}\right)^{2}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}\int_{-\infty}^{\infty}\frac{dq_{3}}{2\pi}~{}e^{\mathrm{i}\pi\left(a+a^{\prime}\right)}\int_{0}^{\infty}\frac{d^{2}q_{\perp}}{(2\pi)^{2}}e^{-\frac{2\mathbf{q}^{2}_{\perp}}{eB}}~{}\Gamma\left(a\right)U\left(a,\epsilon,\frac{(q+Q)_{\perp}^{2}}{eB}\right)\Gamma\left(a^{\prime}\right)U\left(a^{\prime},\epsilon,\frac{(q-Q)_{\perp}^{2}}{eB}\right)$
(178) $\displaystyle=$
$\displaystyle\frac{1}{(2\pi)^{3}}\left(\frac{\mathrm{i}}{2eB}\right)^{2}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}\int_{-\infty}^{\infty}dq_{3}~{}e^{\mathrm{i}\pi\left(a+a^{\prime}\right)}$
$\displaystyle\times$
$\displaystyle\int_{0}^{\infty}d^{2}q_{\perp}e^{-\frac{2\mathbf{q}^{2}_{\perp}}{eB}}\left[\frac{1}{a}+\frac{\gamma_{e}-1}{eB}\left(\mathbf{q_{\perp}^{2}}+2\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp}+\mathbf{Q}_{\perp}^{2}\right)\right]\left[\frac{1}{a^{\prime}}+\frac{\gamma_{e}-1}{eB}\left(\mathbf{q_{\perp}^{2}}-2\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp}+\mathbf{Q}_{\perp}^{2}\right)\right]$
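The two expansions invoked in this step, the Laurent expansion of $\Gamma(-1+\epsilon)$ and the small-argument behavior of the Kummer function $M(a,b,z)$, can be spot-checked numerically (a sketch; the Euler–Mascheroni value is hard-coded and the parameter values are illustrative):

```python
import math

eps = 1e-4
euler_gamma = 0.5772156649015329

# Gamma(-1 + eps) = -1/eps + gamma_e - 1 + O(eps)
gamma_exact = math.gamma(-1.0 + eps)
gamma_approx = -1.0 / eps + euler_gamma - 1.0

def kummer_M(a, b, z, terms=40):
    # M(a,b,z) = sum_n (a)_n z^n / ((b)_n n!)
    s, t = 1.0, 1.0
    for k in range(terms):
        t *= (a + k) * z / ((b + k) * (k + 1))
        s += t
    return s

a, b, z = 0.3, 2.0, 1e-3
M_exact = kummer_M(a, b, z)
M_approx = 1.0 + a * z / b   # leading small-z behavior
```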
Let us focus on the integrand. At order
$\mathcal{O}(\mathbf{q}_{\perp}^{2})$, we have:
$\displaystyle\left[\frac{1}{a}+\frac{\gamma_{e}-1}{eB}\left(\mathbf{q_{\perp}^{2}}+2\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp}+\mathbf{Q}_{\perp}^{2}\right)\right]\left[\frac{1}{a^{\prime}}+\frac{\gamma_{e}-1}{eB}\left(\mathbf{q_{\perp}^{2}}-2\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp}+\mathbf{Q}_{\perp}^{2}\right)\right]$
$\displaystyle\simeq$
$\displaystyle\frac{1}{aa^{\prime}}+\frac{(\gamma_{e}-1)\mathbf{q}_{\perp}^{2}}{eB}\left(\frac{1}{a}+\frac{1}{a^{\prime}}+\frac{2(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}}{eB}\right)+\frac{(\gamma_{e}-1)(2\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp})}{eB}\left(\frac{1}{a^{\prime}}-\frac{1}{a}\right)$
$\displaystyle-$
$\displaystyle\frac{(\gamma_{e}-1)^{2}(2\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp})^{2}}{eB^{2}}+\frac{(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}}{eB}\left(\frac{1}{a}+\frac{1}{a^{\prime}}\right).$
Then, by defining:
$\displaystyle z\equiv\frac{2\mathbf{q}_{\perp}^{2}}{eB},$ (180)
such that
$\displaystyle\mathbf{Q}_{\perp}\cdot\mathbf{q}_{\perp}$ $\displaystyle=$
$\displaystyle|\mathbf{Q}_{\perp}|~{}|\mathbf{q}_{\perp}|\cos\theta=\sqrt{\frac{eB}{2}}|\mathbf{Q}_{\perp}|z^{1/2}\cos\theta,$
$\displaystyle d^{3}q$ $\displaystyle=$ $\displaystyle
dq_{3}\,d^{2}q_{\perp}=\frac{eB}{4}dzd\theta dq_{3},$ (181)
so that, after angular integration, we obtain:
$\displaystyle\mathcal{J}_{1}^{(-,-)}(p,p^{\prime})=\frac{1}{(2\pi)^{3}}\frac{\pi
eB}{2}\left(\frac{\mathrm{i}}{2eB}\right)^{2}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}$
(182) $\displaystyle\times$
$\displaystyle\int_{-\infty}^{\infty}dq_{3}e^{-\frac{\mathrm{i}\pi}{2eB}\left(\mathcal{D}_{\parallel}(q_{3}+Q_{\parallel})+\mathcal{D}_{\parallel}(q_{3}-Q_{\parallel})\right)}$
$\displaystyle\times$
$\displaystyle\int_{0}^{\infty}dze^{-z}\Bigg{\\{}\frac{1}{aa^{\prime}}+\frac{(\gamma_{e}-1)}{2}\left(\frac{1}{a}+\frac{1}{a^{\prime}}\right)z$
$\displaystyle+$
$\displaystyle\frac{(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}}{eB}\left(\frac{1}{a}+\frac{1}{a^{\prime}}\right)\Bigg{\\}}.$
Performing the integration over $z$:
$\displaystyle\mathcal{J}_{1}^{(-,-)}(p,p^{\prime})$ $\displaystyle=$
$\displaystyle-\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}e^{-\frac{\mathrm{i}\pi(Q_{\parallel}^{2}-m^{2})}{eB}}$
(183) $\displaystyle\times$
$\displaystyle\int_{-\infty}^{\infty}dq_{3}\frac{\exp\left(\frac{\mathrm{i}\pi}{2eB}q_{3}^{2}\right)}{\left(Q_{\parallel}^{2}-q_{3}^{2}-m^{2}+\mathrm{i}\epsilon\right)^{2}-4Q_{3}^{2}q_{3}^{2}}$
$\displaystyle\times$
$\displaystyle\Bigg{[}1-\frac{(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}\left(Q_{\parallel}^{2}-q_{3}^{2}-m^{2}\right)}{(eB)^{2}}$
$\displaystyle-$
$\displaystyle\frac{(\gamma_{e}-1)\left(Q_{\parallel}^{2}-q_{3}^{2}-m^{2}\right)}{eB}\Bigg{]}$
In what follows, we shall set the external 3-momenta to zero, except for the
$Q_{\perp}$ factors, which we shall keep in order to use this expression as a
generating function. Then:
$\displaystyle\mathcal{J}_{1}^{(-,-)}(p,p^{\prime})=-\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}$
$\displaystyle\times$
$\displaystyle\int_{-\infty}^{\infty}\frac{dq_{3}}{\left(q_{3}^{2}+m^{2}+\mathrm{i}\epsilon\right)^{2}}\exp\left[\frac{\mathrm{i}\pi}{2eB}\left(q_{3}^{2}+m^{2}\right)\right]$
$\displaystyle\times$
$\displaystyle\Bigg{[}1+\frac{(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}\left(q_{3}^{2}+m^{2}\right)}{(eB)^{2}}+\frac{(\gamma_{e}-1)\left(q_{3}^{2}+m^{2}\right)}{eB}\Bigg{]}$
Now, we use the well-known results
$\displaystyle\int_{-\infty}^{\infty}\frac{dx}{x^{2}+m^{2}}\exp\left[\frac{\mathrm{i}\pi}{2b}(x^{2}+m^{2})\right]$
$\displaystyle=$
$\displaystyle\frac{\pi}{m}\left[1-(1-\mathrm{i})C\left(\frac{m}{\sqrt{b}}\right)-(1+\mathrm{i})S\left(\frac{m}{\sqrt{b}}\right)\right],$
and
$\displaystyle\int_{-\infty}^{\infty}\frac{dx}{(x^{2}+m^{2})^{2}}\exp\left[\frac{\mathrm{i}\pi}{2b}(x^{2}+m^{2})\right]$
$\displaystyle=$
$\displaystyle\frac{(1-\mathrm{i})\pi}{2\sqrt{b}m^{2}}e^{\frac{\mathrm{i}\pi
m^{2}}{2b}}$ $\displaystyle+$ $\displaystyle\frac{(b+\mathrm{i}\pi
m^{2})\pi}{2bm^{3}}\left[1-(1-\mathrm{i})C\left(\frac{m}{\sqrt{b}}\right)-(1+\mathrm{i})S\left(\frac{m}{\sqrt{b}}\right)\right],$
where $C(x)$ and $S(x)$ are the cosine and sine Fresnel integrals,
respectively. From the property:
$\displaystyle
C(x)+\mathrm{i}S(x)=\sqrt{\frac{\pi}{2}}\frac{1+\mathrm{i}}{2}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}x\right),$
(186)
we have:
$\displaystyle\mathcal{J}_{1}^{(-,-)}(p,p^{\prime})=-\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}\Bigg{\\{}\frac{(1-\mathrm{i})\pi}{2\sqrt{eB}m^{2}}e^{\frac{\mathrm{i}\pi
m^{2}}{2eB}}$ $\displaystyle+$ $\displaystyle\frac{(eB+\mathrm{i}\pi
m^{2})\pi}{2(eB)m^{3}}\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]$
$\displaystyle+$
$\displaystyle\frac{\pi(\gamma_{e}-1)}{(eB)m}\left(1+\frac{\mathbf{Q}_{\perp}^{2}}{eB}\right)\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]\Bigg{\\}},$
where $\text{erf}(x)$ is the error function.
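The Fresnel–erf identity used above is easy to spot-check numerically. A minimal sketch (Python/scipy), assuming the un-normalized convention $C(x)=\int_0^x\cos t^2\,dt$ and $S(x)=\int_0^x\sin t^2\,dt$; in the normalization $C(x)=\int_0^x\cos(\pi t^2/2)\,dt$ the right-hand side would instead read $\frac{1+\mathrm{i}}{2}\,\mathrm{erf}\!\left(\frac{\sqrt{\pi}}{2}(1-\mathrm{i})x\right)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf  # scipy's erf accepts complex arguments

def fresnel_t2(x):
    # un-normalized Fresnel integrals: C(x) = int_0^x cos(t^2) dt, etc.
    C, _ = quad(lambda t: np.cos(t**2), 0.0, x)
    S, _ = quad(lambda t: np.sin(t**2), 0.0, x)
    return C, S

for x in (0.3, 1.0, 2.5):
    C, S = fresnel_t2(x)
    rhs = np.sqrt(np.pi/2) * (1 + 1j)/2 * erf((1 - 1j)/np.sqrt(2) * x)
    assert abs((C + 1j*S) - rhs) < 1e-9
```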
### D.2 The integral $\mathcal{J}_{2}$
For this integral, note that $q_{\parallel}^{2}=-q_{3}^{2}$, so that after
integration over $z$ we get:
$\displaystyle\mathcal{J}_{2}^{(-,-)}(p,p^{\prime})=\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}$
$\displaystyle\times$
$\displaystyle\int_{-\infty}^{\infty}dq_{3}\frac{q_{3}^{2}}{\left(q_{3}^{2}+m^{2}+\mathrm{i}\epsilon\right)^{2}}\exp\left[\frac{\mathrm{i}\pi}{2eB}\left(q_{3}^{2}+m^{2}\right)\right]$
$\displaystyle\times$
$\displaystyle\Bigg{[}1+\frac{(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}\left(q_{3}^{2}+m^{2}\right)}{(eB)^{2}}+\frac{(\gamma_{e}-1)\left(q_{3}^{2}+m^{2}\right)}{eB}\Bigg{]}$
By using:
$\displaystyle\int_{-\infty}^{\infty}dx\frac{x^{2}}{x^{2}+m^{2}}\exp\left[\frac{\mathrm{i}\pi}{2b}(x^{2}+m^{2})\right]$
$\displaystyle=$ $\displaystyle(1+\mathrm{i})e^{\frac{\mathrm{i}\pi
m^{2}}{2b}}\sqrt{b}$ $\displaystyle-$ $\displaystyle
m\pi\left[1-(1-\mathrm{i})C\left(\frac{m}{\sqrt{b}}\right)-(1+\mathrm{i})S\left(\frac{m}{\sqrt{b}}\right)\right],$
and
$\displaystyle\int_{-\infty}^{\infty}dx\frac{x^{2}}{(x^{2}+m^{2})^{2}}\exp\left[\frac{\mathrm{i}\pi}{2b}(x^{2}+m^{2})\right]$
$\displaystyle=$
$\displaystyle\frac{(\mathrm{i}-1)\pi}{\sqrt{b}}e^{\frac{\mathrm{i}\pi
m^{2}}{2b}}$ $\displaystyle+$ $\displaystyle\frac{(\mathrm{i}b+\pi
m^{2})\pi}{2bm}\left[-\mathrm{i}+(1+\mathrm{i})C\left(\frac{m}{\sqrt{b}}\right)-(1-\mathrm{i})S\left(\frac{m}{\sqrt{b}}\right)\right],$
we get:
$\displaystyle\mathcal{J}_{2}^{(-,-)}(p,p^{\prime})=\frac{eB}{4\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}$
$\displaystyle\times$
$\displaystyle\Bigg{\\{}\frac{(\mathrm{i}-1)\pi}{\sqrt{eB}}e^{\frac{\mathrm{i}\pi
m^{2}}{2eB}}$ $\displaystyle+$ $\displaystyle\frac{\left(eB-\mathrm{i}\pi
m^{2}\right)\pi}{2(eB)m}\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]$
$\displaystyle-$
$\displaystyle\frac{m\pi(\gamma_{e}-1)}{(eB)}\left(1+\frac{\mathbf{Q}_{\perp}^{2}}{eB}\right)\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]\Bigg{\\}}$
### D.3 The integral $\mathcal{J}_{3}$
By following the same procedure, it is straightforward to obtain at order
$\mathcal{O}(z)$:
$\displaystyle\mathcal{J}_{3}^{(-,-)}(p,p^{\prime})=-\frac{eB^{2}}{8\pi^{2}}e^{-\frac{2\mathbf{Q}_{\perp}^{2}}{eB}}\Bigg{\\{}\frac{(1-\mathrm{i})\pi}{2\sqrt{eB}m^{2}}e^{\frac{\mathrm{i}\pi
m^{2}}{2eB}}$ $\displaystyle+$ $\displaystyle\frac{(eB+\mathrm{i}\pi
m^{2})\pi}{2(eB)m^{3}}\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]$
$\displaystyle+$
$\displaystyle\frac{\pi(\gamma_{e}-1)\mathbf{Q}_{\perp}^{2}}{(eB)^{2}m}\left[1-\sqrt{\frac{\pi}{2}}\text{erf}\left(\frac{1-\mathrm{i}}{\sqrt{2}}\frac{m}{\sqrt{eB}}\right)\right]\Bigg{\\}}.
Moreover, it is easy to check that
$\displaystyle\mathcal{J}_{n}^{(+,+)}(p,p^{\prime})$ $\displaystyle=$
$\displaystyle\mathcal{J}_{n}^{(-,-)}(p,p^{\prime})$
$\displaystyle\mathcal{J}_{n}^{(+,-)}(p,p^{\prime})$ $\displaystyle=$
$\displaystyle\mathcal{J}_{n}^{(-,+)}(p,p^{\prime})=\mathcal{J}_{n}^{(-,-)}(p,p^{\prime})\mid_{Q\rightarrow
P},$
for $n=1,2,3$.
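The closed-form integrals used in this appendix can be spot-checked numerically. A minimal sketch (Python/scipy) for the first master integral, $\int dx\,e^{\mathrm{i}\pi(x^2+m^2)/(2b)}/(x^2+m^2)$, assuming the standard Fresnel normalization $C(x)=\int_0^x\cos(\pi t^2/2)\,dt$ (scipy's convention); the $45^\circ$ contour rotation $x\to e^{\mathrm{i}\pi/4}t$ turns the oscillatory factor into a Gaussian without crossing the poles at $x=\pm\mathrm{i}m$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import fresnel  # returns (S(x), C(x)) in the pi*t^2/2 convention

m, b = 1.0, 2.0

# Left-hand side via contour rotation x -> exp(i*pi/4) t:
# exp(i*pi*x^2/(2b)) becomes the Gaussian exp(-pi*t^2/(2b)).
def integrand(t):
    return np.exp(-np.pi * t**2 / (2.0*b)) / (1j*t**2 + m**2)

re, _ = quad(lambda t: integrand(t).real, -np.inf, np.inf)
im, _ = quad(lambda t: integrand(t).imag, -np.inf, np.inf)
lhs = np.exp(1j*np.pi*m**2/(2.0*b)) * np.exp(1j*np.pi/4) * (re + 1j*im)

# Right-hand side: the quoted closed form
S, C = fresnel(m/np.sqrt(b))
rhs = (np.pi/m) * (1 - (1-1j)*C - (1+1j)*S)

assert abs(lhs - rhs) < 1e-6
```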
# An Optical Study of the Black Widow Population
D. Kandel Department of Physics, Stanford University, Stanford, CA, 94305, USA
Roger W. Romani Department of Physics, Stanford University, Stanford, CA,
94305, USA
(Received 2022 June)
###### Abstract
The optical study of the heated substellar companions of ‘Black Widow’ (BW)
millisecond pulsars (MSP) provides unique information on the MSP particle and
radiation output and on the neutron star mass. Here we present analysis of
optical photometry and spectroscopy of a set of relatively bright BWs, many
newly discovered in association with Fermi $\gamma$-ray sources. Interpreting
the optical data requires sophisticated models of the companion heating. We
provide a uniform analysis, selecting the preferred heating model and
reporting on the companion masses and radii, the pulsar heating power and
neutron star mass. The substellar companions are substantially degenerate,
with average densities $15-30\times$ Solar, but are inflated above their zero
temperature radii. We find evidence that the most extreme recycled BW pulsars
have both large $>0.8M_{\odot}$ accreted mass and low $<10^{8}$G magnetic
fields. Examining a set of heavy BWs, we infer that neutron star masses larger
than $2.19M_{\odot}$ ($1\sigma$ confidence) or $2.08M_{\odot}$ ($3\sigma$
confidence) are required; these bounds exclude all but the stiffest equations
of state in standard tabulations.
pulsars: general — pulsars: individual (PSR J0023$+$0923, J0636$+$5128,
J0952$-$0607, J1301$+$0833, J1311$-$3430, J1653$-$0158, J1810$+$1744,
J1959$+$2048, J2052$+$1219)
## 1 Introduction
Black widows (BWs) with sub-stellar companions and redbacks (RBs) with low-
mass star companions, together known as ‘spiders’, are binary systems of MSP
in tight $<1$ d orbits, with the companion heated and evaporated by the pulsar
spindown power. Since the discovery of BW PSR J1959+2048’s companion (Fruchter
et al., 1988; Kulkarni et al., 1988), it has been known that the heating
pattern and the resulting light curve are important probes of the binary
geometry (Djorgovski & Evans 1988, Callanan et al. 1995) and the mass of the
MSP (Aldcroft et al. 1992, van Kerkwijk et al. 2011). These pulsars are
difficult to discover in the radio band, due to scattering and obscuration by
the evaporation wind. The pulsars’ peak photon output is in the penetrating
GeV gamma-rays, and with the advent of the Fermi LAT sky survey and attendant
follow-up searches, the number of known ‘spiders’ has greatly increased. We
now have detailed optical light curves of many of these objects (Draghis et
al., 2019). Photometric data and its modeling provide an important probe of
the physics of these systems and can constrain the size and heating of the
companion. Most importantly, such modeling constrains the binary inclination
$i$, which together with spectroscopic measurements can help determine the
pulsar and companion masses.
Traditional modeling of these spider systems assumes direct pulsar irradiation
of the companion, which then spontaneously re-emits thermal radiation. For
such models, the optical light curve is symmetric with its maximum at pulsar
inferior conjunction. While some objects show optical modulation broadly
consistent with this picture, many light curves, especially those with higher
S/N, show significant peak asymmetries (Stappers et al., 2001) and phase-
shifts (Schroeder & Halpern, 2014). Past work has discussed the possible
physical origin of such asymmetries, including companion irradiation by an
asymmetric intrabinary shock (IBS; Romani & Sanchez 2016), IBS particles
ducting along magnetic field lines to companion poles (Sanchez & Romani,
2017), heat transfer from a global wind (Kandel & Romani 2020, hereafter KR20)
or general surface diffusion (Voisin et al., 2020). As described in Kandel &
Romani (2020), and Romani et al. (2021), properly accounting for these effects
is important to get an unbiased estimate of the NS mass. We caution that, in
many cases, masses based on simple direct heating modeling will not be
accurate.
In this paper, we discuss ten BW systems, presenting uniform LC modeling of
nine for which we have photometric data. Six of these also have radial
velocity data, which gives us an opportunity to estimate the system masses. We
show that BW systems host heavy neutron stars, and by combining the inferred
neutron star masses, we can put a strong lower bound on the maximal neutron
star mass, appreciably above the values inferred from radio Shapiro delay
measurements. This should have significant ramifications for the equation of
state of dense matter.
Table 1: Summary of BW parameters and photometric observations Name | $P_{s}$ (ms) | $P_{b}$ (hr) | $\dot{E}(10^{34}$erg/s) | $x_{1}$(lt-s) | $A_{V}$(mag) | $B\,(10^{8}$G) | Bands | Ref.
---|---|---|---|---|---|---|---|---
J$0023+0923$ | 3.05 | 3.33 | 1.60 | 0.035 | 0.382 | 1.90 | $gri$ | Draghis et al. (2019)
J$0636+5128$ | 2.87 | 1.60 | 0.58 | 0.009 | 0.218 | 1.01 | $grizHK$ | Draghis & Romani (2018)
J$0952-0607$ | 1.41 | 6.42 | 6.65 | 0.063 | 0.137 | 0.82 | $ugriz$ | Romani et al. (2022)
J$1301+0833$ | 1.84 | 6.48 | 6.65 | 0.078 | 0.082 | 1.41 | $griz$ | Draghis et al. (2019)
J$1311-3430$ | 2.56 | 1.56 | 5.00 | 0.011 | 0.137 | 2.36 | $ugriz$ | Romani et al. (2015)
J$1653-0158$ | 1.97 | 1.25 | 1.20 | 0.011 | 0.710 | 0.68 | $u^{\prime}g^{\prime}r^{\prime}i^{\prime}$ | Nieder et al. (2020)
J$1810+1744$ | 1.66 | 3.56 | 3.97 | 0.095 | 0.390 | 0.88 | $ugriz$ | Romani et al. (2021)
J$1959+2048$ | 1.61 | 9.17 | 16.0 | 0.089 | 0.600 | 1.66 | $BVRIK$ | Reynolds et al. (2007)
J$2051-0827$ | 4.51 | 2.37 | 0.55 | 0.045 | 0.296 | 2.42 | $ugriz$ | Dhillon et al. (2022)
J$2052+1219$ | 1.99 | 2.75 | 3.37 | 0.061 | 0.328 | 1.17 | $gri$ | Draghis et al. (2019)
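The companion-mass estimates quoted below follow from the binary mass function built from $P_b$ and $x_1$ in Table 1. A minimal sketch for J0023+0923, where $M_{\rm NS}=1.5\,M_\odot$ and $i=74^\circ$ are illustrative assumptions:

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
Msun = 1.989e30      # kg

def companion_mass(Pb_hr, x1_lts, M_ns=1.5, i_deg=74.0):
    """Solve f(M) = (Mc sin i)^3 / (M_ns + Mc)^2 for Mc (solar masses)."""
    Pb = Pb_hr * 3600.0
    f = 4*np.pi**2 * (x1_lts*c)**3 / (G * Pb**2) / Msun  # mass function in Msun
    sini = np.sin(np.radians(i_deg))
    Mc = 0.02  # initial guess; iterate the implicit relation to convergence
    for _ in range(50):
        Mc = (f * (M_ns + Mc)**2)**(1/3) / sini
    return Mc

print(companion_mass(3.33, 0.035))  # ~0.018 Msun, as quoted for J0023+0923
```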
## 2 Observations
Table 1 summarises some basic properties of the ten BWs in our population
study. It also shows the photometric bands used; the acquisition and
processing of these data have been described in earlier papers. Photometry of
five systems – J0023+0923, J0636+5128, J1301+0833, J1959+2048 and J2052+1219 –
is described in Draghis et al. (2019) and references therein. Optical
photometry and spectroscopy of J0952$-$0607 is described in Romani et al.
(2022a). For J1311$-$3430, we use synthesized photometry from LRIS spectra, as
described in Romani et al. (2015). For J1653$-$0158, we use ULTRACAM data from
Nieder et al. (2020). Optical observations of J1810+1744 are described in
Romani et al. (2021). Photometry and fitting of PSR J2051$-$0827 are presented
in Dhillon et al. (2022).
## 3 Photometric Fitting
Table 2: Light Curve Fit Results Name | $i\,(\mathrm{deg})$ | $f$ | $T_{N}\,(\mathrm{K})$ | $L_{\mathrm{H}}\,(10^{33}\,\mathrm{erg/s})$ | $d\,({\rm kpc})$ | $\theta_{\rm hs}\,(\mathrm{deg})$ | $\phi_{\rm hs}\,(\mathrm{deg})$ | $\mathcal{A}_{\rm hs}$ | $\sigma_{\rm hs}\,(\mathrm{deg})$ | $\epsilon$ | $\chi^{2}/{\rm DoF}$
---|---|---|---|---|---|---|---|---|---|---|---
J$0023+0923$ | $74.1^{+8.8}_{-12}$ | $0.87^{+0.04}_{-0.05}$ | $2992^{+94}_{-92}$ | $1.22^{+0.30}_{-0.30}$ | $0.59^{+1.39}_{-0.39}$ | $353.0^{+2.7}_{-5.5}$ | $-22.6^{+75.8}_{-42.1}$ | $0.62^{+0.16}_{-0.15}$ | $11.4^{+3.8}_{-2.6}$ | $-$ | $105/53$
J$0636+5128$ | $34.2^{+2.9}_{-2.5}$ | $0.88\pm 0.02$ | $2284^{+100}_{-100}$ | $1.9^{+0.3}_{-0.3}$ | $0.63^{+1.44}_{-0.43}$ | $258.2^{+11.3}_{-12.4}$ | $66.1^{+6.9}_{-8.9}$ | $1.0^{+0.10}_{-0.13}$ | $11.0^{+3.0}_{-2.2}$ | - | $235/155$
J$0952-0607$ | $59.8^{+2.0}_{-1.9}$ | $0.79\pm 0.01$ | $3085^{+85}_{-80}$ | $3.81^{+0.46}_{-0.43}$ | $6.26^{+0.36}_{-0.40}$ | - | - | - | - | - | 286/287
J$1301+0833$ | $46.6^{+2.5}_{-2.2}$ | $0.61\pm 0.06$ | $2288^{+80}_{-84}$ | $4.7^{+0.5}_{-0.5}$ | $1.77^{+0.10}_{-0.11}$ | - | - | - | - | - | 157/121
J$1311-3430$ | $68.7^{+2.1}_{-2.0}$ | $0.99$ | $821^{+959}_{-402}$ | $228^{+56}_{-44}$ | $3.01\pm 0.15$ | $119.8^{+4.3}_{-4.3}$ | $27.5^{+15.7}_{-17.5}$ | $11.2^{+10.6}_{-5.9}$ | $16.8^{+3.4}_{-3.1}$ | $-0.072^{+0.001}_{-0.001}$ | $74/84$
J$1653-0158$ | $72.8^{+4.0}_{-4.0}$ | $0.84^{+0.02}_{-0.02}$ | $2253^{+499}_{-389}$ | $4.3^{+0.4}_{-0.6}$ | $0.87^{+0.07}_{-0.08}$ | $287.1^{+5.7}_{-5.2}$ | $-48.1^{+6.9}_{-5.2}$ | $1.23^{+0.47}_{-0.36}$ | $21.8^{+3.4}_{-3.2}$ | - | 1296/941
J$1810+1744$ | $66.3^{+0.5}_{-0.5}$ | 0.99 | $3101^{+77}_{-94}$ | $41.8^{+3.7}_{-4.8}$ | $2.86\pm 0.13$ | $86.9^{+1.1}_{-0.8}$ | $23.0^{+3.2}_{-2.6}$ | $0.86\pm 0.23$ | $23.4^{+1.2}_{-1.1}$ | $-0.19\pm 0.02$ | 229/168
J$1959+2048^{*}$ | $85.3^{+2.9}_{-1.2}$ | $0.74^{+0.03}_{-0.04}$ | $2710^{+25}_{-23}$ | $42.7^{+2.7}_{-2.2}$ | $2.25\pm 0.11$ | - | - | - | - | - | 128/88
J$2051-0827^{*}$ | $55.9^{+4.8}_{-4.1}$ | $0.88^{+0.02}_{-0.02}$ | $2750^{+120}_{-150}$ | $1.7^{+0.3}_{-0.3}$ | $2.48^{+0.39}_{-0.38}$ | - | - | - | - | - | 1288/1341
J$2052+1219$ | $59.3^{+2.3}_{-1.8}$ | $0.99$ | $2710^{+332}_{-1218}$ | $11.8^{+3.5}_{-3.4}$ | $5.57^{+0.47}_{-0.55}$ | - | - | - | - | $0.03\pm 0.01$ | 153/80
Our fits are performed with an outgrowth of the ICARUS light-curve model
(Breton et al., 2012) with additions described by Kandel et al. (2020). An
additional update replaces the simplified limb-darkening laws in the base code
with the more detailed limb-darkening coefficients computed by Claret &
Bloemen (2011) for two models, the ATLAS and PHOENIX atmospheres. We generally
find that the ATLAS coefficients perform better. Note that gravity- and limb-
darkening serve to rescale the local fluxes; to monitor the heating budget we
take care to integrate the emergent flux to determine the total companion
heating $L_{H}$, subtracting the thermal base emission (characterized by the
night-side temperature $T_{N}$). In all fits, we explore a simple direct
heating (DH) model, a wind model (WH), and a hotspot model (HS) and perform
model selection using Akaike Information Criterion (AIC, Akaike, 1974). In
general, asymmetry of the light-curve maximum and bluer colors to one side of
the peak can indicate the presence of a hotspot, whereas peak broadening and a
flux gradient across the maximum tend to indicate heat advection along
latitude lines from global winds.
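The AIC comparison can be sketched as follows; for Gaussian errors AIC reduces to $\chi^2+2k$ with $k$ free parameters (the $\chi^2$ values below are hypothetical, not this paper's fits):

```python
def aic(chi2, k):
    # For Gaussian errors, AIC = chi^2 + 2k up to a model-independent constant
    return chi2 + 2*k

# Hypothetical example: a hotspot with 4 extra parameters must lower chi^2
# by more than 8 relative to direct heating to be preferred.
models = {"DH": aic(157.0, 5), "DH+HS": aic(147.0, 9), "DH+WH": aic(154.0, 6)}
best = min(models, key=models.get)
print(best, models)  # -> DH+HS wins despite the 2k penalty
```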
In our fits, we choose the gravity darkening coefficient to be $\beta_{\rm
D}=$0.08 for companions with low front-side temperature ($T\lesssim 10,000$ K)
and 0.25 for front-side temperature $T\gtrsim 10,000$ K. To check whether
radiative conditions apply, for intermediate front-side temperature
$5,000\lesssim T\lesssim 9,000$ K, we tried fitting the gravity darkening
coefficient and found that $\beta_{\rm D}$ of 0.08 was strongly preferred over
0.25; for such cases, $\beta_{\rm D}=0.08$ was fixed during model fitting.
The observed lightcurves are also affected by the intervening extinction
$A_{V}$. For many of our objects, we adopt the extinction estimates from the
three-dimensional dust model of Green et al. (2019) available as a query at
argonaut.skymaps.info, using the Bayestar2019 model and $A_{V}=2.73\,E(g-r)$
values at the estimated distance. For southern sources, we take the total
extinction column through the Galaxy from the NASA Extragalactic Database
(Schlafly & Finkbeiner, 2011) as an upper limit.
For the light curve modeling, the principal geometrical and physical fit
parameters are the inclination of the binary system $i$, the Roche lobe
filling factor $f$ (defined as the ratio of the companion nose radius to the
radius of $L_{1}$ point), the base temperature of the star $T_{N}$, the pulsar
irradiation power $L_{H}$, and the distance to the binary system $d$ in kpc.
For HS models, we also fit the Gaussian spot amplitude $\mathcal{A}_{\rm hs}$
(the multiplicative increase to the local temperature), radial size $r_{\rm
hs}$, and angular position $\theta_{\rm hs}$ (measured from the sub-pulsar
point on the companion), $\phi_{\rm hs}$ [measured from the orbital plane, 0
toward the dusk (East) terminator of the companion and $\pi/2$ toward the
North (spin) pole]. For WH models, we fit for the wind parameter
$\epsilon=\tau_{rad}/\tau_{\rm adv}$, the ratio of the radiation cooling and
advection timescales. For $i$, a uniform prior in $\cos\,i$ was applied; for
$L_{H}$, a uniform logarithmic prior was applied, whereas for all other
parameters a uniform prior was used. For all our fits, we explore models with
increasing model complexity: a direct heating (DH) model, DH+wind heating
(WH), DH+ hot spot(HS), and DH+WH+HS. To penalize increased model complexity,
the final model selection is based on Akaike Information Criterion.
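The $L_1$ distance entering the fill-factor definition follows from the root of the axial effective-potential gradient in the co-rotating frame; a minimal sketch (the mass ratio $q$ is an assumed, BW-like value):

```python
from scipy.optimize import brentq

def l1_distance(q):
    """Distance of the L1 point from the companion, in units of the
    orbital separation; q = M_c / M_NS (companion over pulsar)."""
    m1 = 1.0/(1.0 + q)   # pulsar at x = 0
    m2 = q/(1.0 + q)     # companion at x = 1
    xcm = m2             # center of mass
    # net axial force in the co-rotating frame (units with G = a = Omega = 1);
    # f is strictly increasing on (0, 1), so the root is the unique L1 point
    f = lambda x: -m1/x**2 + m2/(1.0 - x)**2 + (x - xcm)
    xL1 = brentq(f, 1e-6, 1.0 - 1e-6)
    return 1.0 - xL1

print(l1_distance(0.02))  # roughly (q/3)**(1/3) of the separation for small q
```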
Photometric fitting gives us constraints on heating parameters as well as
binary geometry. This can be combined with center-of-mass velocity obtained
from radial velocity (RV) fitting to estimate the NS and companion masses. In
general, the observed RV amplitude is different from the center-of-mass (CoM)
radial velocity amplitude $K_{\rm CoM}$ because strong pulsar irradiation
shifts the photometric center of the secondary star (“center of light” CoL)
away from its CoM. Moreover, different absorption lines have strengths varying
with the surface temperature; thus the CoL shift is different for each
spectral line. The temperature sensitivity of the line profile can be
characterized by monitoring its equivalent-width $EW(T)$ variation across the
companion surface. Therefore, we model the CoL RVs for a given $K_{\rm CoM}$,
by averaging up the radial velocities over the companion surface elements,
weighted by flux and temperature-dependent equivalent-width $EW(T)$ of the
lines that dominate the radial velocity template. We fit these predicted CoL
RVs to the cross-correlation velocities, estimating $K_{\rm CoM}$ and hence
the binary masses. Although we do not perform a simultaneous photometry fit,
we do marginalize the spectroscopic fit over the geometrical parameters from
the end of the photometric Markov Chain Monte Carlo chains, sampling $\sim
2\sigma$ uncertainties. Thus, the mass errors do include all uncertainties in
the model fitting, spectroscopic and photometric.
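Schematically, the CoL velocity is a flux- and $EW$-weighted average of element velocities; a minimal 1-D toy sketch in which the temperature map, $EW(T)$ law, and velocity field are all illustrative assumptions, not the fitted models:

```python
import numpy as np

# Toy companion: surface elements along the orbital axis; x > 0 faces the pulsar.
x = np.linspace(-1.0, 1.0, 201)                  # position on the star (CoM at x = 0)
T = 3000.0 + 3000.0*np.clip(x, 0.0, None)        # heated day side
flux = T**4                                      # crude bolometric weight
ew = np.clip(1.0 - (T - 3000.0)/4000.0, 0.0, None)  # lines weaken on the hot side

K_com = 300.0                 # true CoM semi-amplitude (km/s), an assumed value
v = K_com*(1.0 - 0.1*x)       # elements nearer the pulsar orbit at smaller radius

w = flux*ew                   # weighted contribution of each surface element
K_col = np.sum(w*v)/np.sum(w)
print(K_com, K_col)           # the CoL amplitude is biased away from K_com
```

With these toy weights the hot day side dominates and the measured amplitude is biased low; the sign and size of the bias depend on the adopted $EW(T)$ law.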
Below, we describe photometric and RV fitting of individual BW objects. Our
population study includes J0952-0607 and J1810+1744, but since these objects
were already fitted with the most updated prescription in Romani et al. (2022)
and Romani et al. (2021), we skip the modeling details and present only the
fit results. The values for J2051$-$0827 are from Dhillon et al. (2022).
### PSR J0023+0923
Lacking radial velocity measurements, we do not have enough constraints for a
detailed NS mass estimate, so we fix it at $M_{\rm NS}=1.5\,M_{\odot}$. We find
that a HS model fits the data the best, with $\chi^{2}/$DoF $=107/57$ and a
poorly constrained $i(^{\circ})=74.5^{+9.7}_{-15.5}$. With this inclination,
the companion mass is rather small $M_{c}\sim 0.018M_{\odot}\,(M_{\rm
NS}/1.5\,M_{\odot})^{2/3}$, typical of other BWs. The maximum is relatively
well sampled, thus the fill-factor is modestly constrained at $0.87\pm 0.05$,
implying a volume-averaged companion radius of $0.10\,R_{\odot}$. As noted in
Draghis & Romani (2018), BW companions appear to be inflated by the heating
and hence we expect companion radius to be larger than the radius $R_{\rm
cold}$ of a cold (degenerate) remnant of a stellar core. This heating-induced
inflation is discussed in §4.2. Due to limited photometry near optical
minimum, our inclination and hot-spot position estimates have substantial
uncertainty. While the hotspot’s nose angle $\theta_{\rm hs}$ is near the
nose, its phase angle is highly unconstrained. Additional photometry covering
the minimum, especially in the $i$ and $z$ bands, would help constrain $T_{N}$
and $i$, and $u$ band data would help constrain hot-spot parameters.
Figure 1: Four BW LCs. $\phi_{\rm B}=0$ is defined as pulsar TASC (time of the
pulsar ascending node). Two cycles are shown. For J0952 and J1301 direct
heating provides the best-fit model and is shown in both cycles. For the
others the first cycle shows the best-fit direct heating model, and the second
is the best-fit hotspot or wind model. Lower panels show residuals to the
model (for a red band and a blue band) of the corresponding cycle – the model
differences are often subtle, best seen in the residuals, but are
statistically significant (see text). Colors denote the various photometric
bands (blue=$u$, green=$g$, red=$r$, magenta=$i$, black=$z$, yellow=$H$,
cyan=$K$).
Figure 2: As for figure 1, except for J1959+2048, where blue=$B$, green=$V$,
red=$R$, magenta=$I$.
### PSR J0636+5128
Fig 1 shows the $grizHK$ LC of this object. The LC is relatively shallow for
all colors, indicating a low binary inclination. Since radial velocity
measurements are unavailable, we cannot fit the NS mass, so we fix it at
$M_{\rm NS}=1.5\,M_{\odot}$. The best model is achieved with an $11.0^{\circ}$ Gaussian
hotspot located close to the pole of the ‘northern’ hemisphere, with a
substantial temperature excess of 64%. The direct heating power is $1.9\times
10^{33}\,$erg/s representing about $36\%$ of the spin-down power. With the HS
model, the inclination is $i\approx 34.0^{\circ}$, slightly larger than the DH
estimate in Draghis et al. (2019). With this inclination, the companion mass
is rather small $M_{c}\sim 0.01M_{\odot}(M_{\rm NS}/1.5\,M_{\odot})^{2/3}$, so
for a modest $M_{\rm NS}>1.5\,M_{\odot}$, J0636 has a BW-type companion. The
timing parallax (Stovall et al., 2014) of J0636 gave a very small distance,
but this has been superseded by more extended timing, giving
$d=1.1^{+0.6}_{-0.3}$ kpc (Arzoumanian et al., 2018), and our photometric
distance fit of $0.63^{+1.44}_{-0.43}$ is in good agreement with this value
although the constraint is rather loose. At this distance, the fill factor of
$0.88$ gives a companion radius of $0.095\,R_{\odot}$.
### PSR J1301+0833
Romani et al. (2016) found that for J1301, the combination of faint magnitude
and $\sim 4500$ K effective temperature companion requires a distance of $\sim
6$ kpc for direct full-surface heating of a Roche-lobe filling star. This is
dramatically larger than the $d=1.23$ kpc implied by the pulsar’s
$DM=13.2\,{\rm pc\,cm}^{-3}$ (Ray et al., 2012) in the YMW17 model. We have
fit with DH, HS and WH models and find that all prefer a companion that
substantially under-fills its Roche lobe at $f\sim 0.61$. This is the smallest
fill factor found for any of our BWs, but with this value the fit distance is
a plausible 1.77 kpc. While the HS and WH models do slightly decrease
$\chi^{2}$ (by 10 and 4, respectively), with the extra parameters AIC prefers
the basic DH model. All models give a similar $i\approx 46.6^{\circ}$. This
small inclination is quite consistent with the relatively small RV amplitude
of Romani et al. (2016). At our fit distance the Shklovskii-corrected spin-
down power is $\dot{E}_{c}=\dot{E}I_{45}\left[1-(0.31\pm 0.07)d_{\rm
kpc}\right]=3.0\times 10^{34}\,$erg/s for the moment of inertia
$10^{45}I_{45}$ g cm$^{2}$. Thus, our best-fit $L_{H}=4.7\times 10^{33}$ erg/s is a
modest $16\%$ of ${\dot{E}}$.
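As a quick arithmetic check of these numbers (inputs from Table 1 and the fit above):

```python
Edot = 6.65e34     # erg/s, spin-down power of J1301+0833 from Table 1
d_kpc = 1.77       # best-fit distance
I45 = 1.0          # moment of inertia in units of 10^45 g cm^2

# Shklovskii-corrected spin-down power and heating efficiency
Edot_c = Edot * I45 * (1.0 - 0.31*d_kpc)
L_H = 4.7e33       # best-fit heating power, erg/s
print(Edot_c, L_H/Edot_c)  # ~3.0e34 erg/s and ~16%
```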
### PSR J1311-3430
J1311 is particularly interesting since Romani et al. (2012) found that the NS
might be very massive. Romani et al. (2015) used early versions of several HS
models to show that the fit $i$ could range between $64^{\circ}-85^{\circ}$,
with the NS mass as low as $1.8\,M_{\odot}$ and as large as $2.7\,M_{\odot}$.
With our improved modeling we can reduce these ranges.
Figure 2 shows the LCs of this object from simultaneous photometry synthesized
from Keck LRIS spectra. Simultaneous light curves are essential for this
object, since J1311 is known to have intense optical flaring, with detectable
flares occurring nearly every orbit and large flares appearing every dozen
orbits. With simultaneous colors and multiple orbits we can prune these flares
leaving the underlying quiescent thermal flux to be described by the heating
model. J1311’s LCs show three main features: i) the maximum is particularly
flat, indicating strong gravity darkening, ii) the maximum is asymmetrical,
especially in bluer colors, iii) a slight gradient is present across maximum.
All fits find that the companion is very close to Roche-lobe filling, so we
set the fill factor at $f=0.99$. The large fill factor (low surface gravity
near the $L_{1}$ point) and the high average front-side temperature $>10,000$
K should require a large $\beta_{\rm D}$. Indeed, if freed in the fits,
$\beta_{\rm D}=0.30\pm 0.06$, consistent with the $\beta_{\rm D}=0.25$
expected for a radiative atmosphere.
In our fitting, the best model invokes both HS + WH, with a sub-Alfvénic wind
with $\epsilon=-0.07\pm 0.01$, and a hotspot just past the day-night
terminator with a large $\approx 12\times$ excess over the local nightside
temperature. The resulting binary inclination is quite well constrained at
$68.7^{\circ}\pm 2.1^{\circ}$. We also find that the base temperature $T_{N}$
is very low, with samples converging below the lowest temperature limit of
1000K of our spectral library. Such extrapolation to low temperature will
likely be imprecise as unmodelled effects such as dust settling and clouds are
important for low-temperature atmosphere spectra (Husser et al., 2013). Thus,
we consider $T_{N}$ low, but poorly constrained. Note that the inclination is
hardly affected by this uncertainty as the model light curves are dominated by
hot surface elements resulting from extreme pulsar irradiation.
Using the DM$=37.84$ pc cm$^{-3}$ (Ray et al., 2013), Antoniadis (2021) estimated
a distance of $2.43\pm 0.48$ kpc. Our distance estimate of $3.01\pm 0.15$ kpc is consistent
with this DM estimate. The best-fit $L_{H}=2.28\times 10^{35}$ erg/s is
$\approx 4\times$ larger than the nominal spin-down power. For J1311, a substantial
portion of the flux seems to be from IBS particle precipitation (a strong hot
spot) and some leakage from this particle-mediated heating may also contribute
to the large $L_{H}$.
### PSR J1653-0158
The lightcurve shows shallow modulation implying either a low inclination or a
strong veiling flux. However, with the companion RV amplitude $K_{CoL}=669\pm
7.5\,{\rm km\,s^{-1}}$ we find a large mass function $f(M)=1.60\pm
0.05M_{\odot}$ (Romani et al., 2014), so small inclination would lead to an
unphysically large NS mass. Noting that the minimum is flat and that the
modulation decreases for bluer colors, we infer that a strong blue veiling
flux dominates at orbital minimum. This is also visible in the phase-resolved
spectra (Romani et al., 2014). Although this veiling flux is likely associated
with the IBS, we model it here as a simple power-law with form
$f_{\nu}=f_{A}(\nu/10^{14}{\rm Hz})^{-p}$, with $f_{A}\sim 101\pm 10\,\mu$Jy
and $p=0.50\pm 0.03$. This flux is fairly constant through the orbit, although
there are hints of sharp phase structures in the light curve, e.g. in $r$ and
$i$ at $\phi_{\rm B}=0.72$. Any model without such a hard-spectrum component
is completely unacceptable. These fits prefer an $A_{V}\sim 1.0$ mag slightly
higher than, but consistent with, the maximum in this direction (which is found
for all $d>300$ pc).
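For scale, the fitted power law can be evaluated at a band frequency; a quick sketch in which the $r$-band effective frequency is an assumed illustrative value:

```python
f_A, p = 101.0, 0.50   # fitted veiling-flux normalization (uJy) and spectral index
nu_r = 4.8e14          # Hz, approximate r-band effective frequency (assumed)

# f_nu = f_A * (nu / 1e14 Hz)**(-p)
f_veil = f_A * (nu_r / 1e14)**(-p)
print(f_veil)  # ~46 uJy of blue veiling flux in the r band
```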
We find that the HS model best fits the data. However, with the fine structure
near maximum, the model is not yet fully acceptable ($\chi^{2}/DoF\sim 1.38$).
More detailed models, including modulated veiling flux from the IBS, may be
needed to fully model the light curves. Such modeling would be greatly helped
by light curves over an even broader spectral range, with IBS effects
increasingly dominant in the UV and low-temperature companion emission better
constrained in the IR. With many cycles, we could also gauge the reality (and
stability) of the apparent fine structure and test for hot spot motion.
### PSR J1810+1744
Our fit follows the assumptions of Romani et al. (2021), except that we allow
for possible errors in the photometric zero points. This results in small
changes in the fit parameters and a substantial increase in the distance and
$L_{H}$ uncertainties. The neutron star mass is decreased from that paper’s
value by $\sim 0.5\sigma$.
### PSR J1959+2048
This, the original BW pulsar, is of special interest, since van Kerkwijk et
al. (2011), with an approximate treatment of DH effects, infer that it may
have a pulsar mass as large as $2.4\pm 0.12M_{\odot}$. Recently, evidence of
an eclipse of the pulsed $\gamma$-ray emission from J1959 has been presented
by Clark et al. (2021). With a small $\sim 0.1R_{\odot}$ companion, this
requires a binary inclination $i>83^{\circ}$, far from the result of optical
LC modeling in van Kerkwijk et al. (2011). Some support for this edge-on view
also comes from evidence for an X-ray eclipse in Kandel et al. (2021).
With our improved treatment of gravity darkening, a simple DH model gives
$i=65.7^{\circ}\pm 2.4^{\circ}$ and $\chi^{2}/$DoF = $138/89$. However,
looking at the light curves, one can see a small phase shift in the maximum,
with the peak somewhat broadened and excess flux at phases $\phi\sim
0.75–0.9$, especially in the bluer bands. In Kandel & Romani (2020), we showed
how this excess can be explained by both a Gaussian hotspot and a banded wind.
Because the LC maxima are relatively flat, the best-fit $i$ is $\sim
64^{\circ}$. Strong gravity darkening might also produce such a flat maximum,
but the low front-side temperature of J1959 ($\lesssim 5000\,$K) together with
a relatively low fill-factor of $\sim 0.8$ precludes this possibility.
For our analysis, we impose a prior on $i$ of $(83^{\circ},90^{\circ})$, where
the bounds follow from the geometrical constraints imposed by the observed
$\gamma$-ray eclipse. At such a high inclination, the
lightcurve becomes relatively narrow at any reasonable value of the fill
factor. Thus, to match the observational data, we require a spread of heat
away from the maximum. While a banded wind can achieve this, a model with heat
diffusion fits the data the best, resulting in $i=85.2^{+2.9}_{-1.2}$, with
$\chi^{2}/$DoF=$128/88$. The angular scale of diffusion is $17.1^{\circ}\pm
4.2^{\circ}$. While this model fits data reasonably well, it is statistically
worse than a model with an inclination as a free parameter. Fig 2 shows the
light curves for the best-fit model with inclination bound by $\gamma$-ray
eclipse (first cycle) and one with free inclination (second cycle). $V$ and
$R$ fit residuals for the two models are shown in the lower panels. This
substantial tension between the gamma-ray eclipse and the optical light curve
modeling is worrisome, and shows that this binary should be re-measured with
modern multi-band photometry, to see if systematics affect the limited
existing data or if additional physical effects are required to obtain a good
fit.
### PSR J2051$-$0827
Recently Dhillon et al. (2022) have described simultaneous multi-color
observations and ICARUS fitting of this BW. We do not attempt to re-fit here,
but for comparison report the parameters of that study. They find significant
evidence for a hot spot before 2011 (parameters not given), but that this spot
is no longer prominent in 2021.
### PSR J2052+1219
All model fits require a fill factor $\sim 1$, so we fix $f=0.99$. Lacking RV
data, we fix $M_{\rm NS}=1.5\,M_{\odot}$. A DH model gives $\chi^{2}/$DoF of
$165/81$, with best-fit $i=59.8^{\circ}\pm 2.2^{\circ}$. Adding a hotspot
reduces the $\chi^{2}$/DoF to $145/76$, with $i=60.6^{\circ}\pm 2.0^{\circ}$.
However, the hotspot is rather large with a radius $\sim 39^{\circ}$ and a
temperature-excess of $\sim 200\%$ of the base temperature. A WH model gives
$\chi^{2}$/DoF of $153/80$, with a very similar inclination estimate of
$i=59.3^{\circ}\pm 2.0^{\circ}$. After penalizing for the extra DoF in the HS
model, the WH model is preferred at a marginal level of 60% over the HS model.
Physically, very shallow gradients (large effective spot radius) seem more
natural for wind flow than a magnetic pole, so we prefer WH on these grounds.
With very similar $i$, either model appears to capture the overall heating
pattern. With good overall light curve matches (Fig 2), we infer that the
large $\chi^{2}/$ DoF is the result of low-level stochastic flaring or
possibly under-estimation of photometry errors. With this inclination, the
companion mass is $M_{c}\sim 0.041(M_{\rm NS}/1.5\,M_{\odot})^{2/3}$, typical
for a BW.
Table 3: Mass estimates of some BWs and RBs for which RV measurements are available. Name | $i\,(\mathrm{deg})$ | $K_{\rm CoM}\,(\mathrm{km/s})$ | $M_{\rm PSR}\,(M_{\odot})$ | $M_{\rm C}\,(M_{\odot})$
---|---|---|---|---
J$0952-0607$ | $59.8^{+2.0}_{-1.9}$ | $376.1\pm 5.1$ | $2.35\pm 0.17$ | $0.032\pm 0.002$
J$1301+0833$ | $46.6^{+2.5}_{-2.2}$ | $274.8\pm 8.1$ | $1.60_{-0.25}^{+0.22}$ | $0.036\pm 0.006$
J$1311-3430$ | $68.7^{+2.1}_{-2.0}$ | $641.2\pm 3.6$ | $2.22\pm 0.10$ | $0.012\pm 0.0006$
J$1653-0158$ | $72.8^{+4.0}_{-4.0}$ | $700.2\pm 8.0$ | $2.15^{+0.16}_{-0.16}$ | $0.014\pm 0.001$
J$1810+1744$ | $66.3^{+0.5}_{-0.5}$ | $462.9\pm 2.2$ | $2.11\pm 0.04$ | $0.064\pm 0.001$
J$1959+2048$ | $85.3^{+2.3}_{-1.2}$ | $334.6\pm 3.6$ | $1.55_{-0.05}^{+0.06}$ | $0.024\pm 0.001$
## 4 Discussion and Summary
### 4.1 Thermal emission from the companion
We find that a basic direct (photon) heating model is insufficient to
represent the light curves of many BWs and RBs. There is statistically
significant improvement in most model fits when adding a localized hot spot
and/or global winds, and improved treatments of gravity and limb darkening. We
compute multiple models for each object, including HS, WH, and diffusion,
before deciding on the best, using the AIC. In the cases where the AIC did not
select a preferred model and the competing models fit the light curve well,
both give very similar binary kinematic parameters. In other words, a
well-matched light curve gives a reliable mass, even when the fit model is not
unique.
However, variability can be an important factor in this modeling. The hot spot
phases and brightness can vary, especially for RBs, and may even show secular
trends (e.g., van Staden & Antoniadis, 2016). When correctly modeled, the
underlying heating pattern, and the fit binary kinematic parameters should be
stable. Other variability issues arise from the optical/X-ray flaring activity
seen in strongly heated spiders. Such flares are best isolated in simultaneous
multi-color photometric observations. It is important to excise these events
before forming the ‘quiescent’ light curve needed for the steady heating model
and fit of the orbital parameters. An important, but often subtle effect is
the presence of a non-thermal veiling flux. If associated with the IBS this
may be phase dependent and may also exhibit secular variations. When bright
enough to dominate at binary minimum (as for J1653), this too can be an
important complication in fitting the quiescent heating pattern.
We find that $\sim$ half of the BW sources in our study prefer models with a
hotspot. In Table 5 we separate the companions’ integrated thermal surface
emission into day side $L_{\rm comp}$ and hotspot $L_{\rm hs}$ components. The
hotspots often appear far from the direct heating maximum at the sub-pulsar
point, and so can represent substantial changes in the local surface flux (and
corresponding light curve features) with only modest energy input. The largest
$L_{\rm hs}/L_{\rm comp}$ ratio is $\sim 0.25$ for J1653. Such hotspots are
even more common in the RBs where the heavier ($\geq 0.08\,M_{\odot}$)
companions should have core fusion and strongly convective envelopes. Since
tidal locking ensures rapid rotation, the RBs should support strong
$\alpha-\Omega$ dynamos resulting in large (but constantly refreshing) dipole
magnetic fields. Such fields can direct IBS-generated particles to the
magnetic poles (Sanchez & Romani, 2017), resulting in substantial hotspots,
whose positions may vary with epoch. Although the BW companions lack core
fusion, the very large front-back temperature gradients may well drive
internal convection, resulting in similar dynamo-supported global fields.
Indeed the objects preferring hotspots tend to have stronger heating. A recent
two-epoch photometric study of BW J2051$-$0827 (Dhillon et al., 2022) suggests
that hot spot variability may occur in BWs, as well.
We conclude that direct heating dominates energetically. Since pulsar SEDs are
dominated by the GeV $\gamma$-rays, we naturally assume that $\gamma$-rays will
typically dominate the companion heating. But it is important to recall that
the Fermi-observed $\gamma$-ray flux is from a single slice through the
$\gamma$-ray beam along the Earth line-of-sight (for pulsars with aligned spin
and orbital angular momenta this will be at co-latitude $i$). This cut may not
represent the $\gamma$-ray flux intercepted near the orbital plane by the
companion; for spin aligned pulsars this is the spin equator. Most modern
outer magnetosphere models have the $\gamma$-ray flux concentrated to this
equatorial plane, so in general we expect the companion-intercepted heating
will be larger than that seen by Fermi, although the correction factors are
highly model dependent (Draghis et al., 2019).
For most MSP, the observed photon (i.e. $\gamma$-ray) flux represents only a
modest fraction of the spin-down power; the remainder is assumed to be carried
off as the $e^{\pm}/B$ pulsar wind. For the BWs in our sample if we (naively,
incorrectly) assume that the $\gamma$-ray flux is isotropic, we find that it
represents $<0.25I\Omega{\dot{\Omega}}={\dot{E}}/4$. Table 5 lists this
isotropic $\gamma$-ray efficiency $f_{iso}$ for a standard moment of inertia
$I=10^{45}{\rm g\,cm^{2}}$ (i.e. $I_{45}=1$). The very low $\gamma$-ray flux
of J0636 indicates that our small $i$ view is outside of the main $\gamma$-ray
beam. Three sources have a large isotropic $\gamma$-ray efficiency: For J2052,
our photometric $d$ is substantially larger than the Table 4 $DM$ distance
values and may be an overestimate; at the 2.4 kpc DM distance it would have a
more typical $f_{iso}=0.13$. From our fitting below, J1810 has a large pulsar
mass (Table 3), so that we expect $I_{45}=2-3$, reducing $f_{iso}$ to
0.19-0.28. For J1311, the very large $f_{iso}=1.4$ may well be produced by a
combination of the two factors; the inferred NS mass is large and $d$ exceeds
the $DM$ estimates. Here an independent (e.g., optical) parallax distance would
be quite valuable. Of course all these large values should be mitigated by
$\gamma$-ray beaming preferentially to the Earth line-of-sight.
Since in modern models the pulsar wind power (as well as the $\gamma$-ray
pulsed emission) is concentrated to the spin equator, we approximate this in
our direct heating models by distributing the heating power as $\propto{\rm
sin}^{2}\theta$, with $\theta$ measured from the spin axis. With this
distribution we can write the model-fit heating flux’s value for the Earth
line-of-sight $f_{H}$. It now becomes interesting to examine
$\eta_{\gamma}=f_{H}/f_{\gamma}$. If we assume that the direct heating is
$\gamma$-ray pulsed flux $\eta_{\gamma}>1$ tells us that there is more
$\gamma$-emission heating the companion near the spin equator than at the
Earth sightline, and vice-versa. Of course, we can also compare the integral
heating flux required with ${\dot{E}}$, via $\eta_{\dot{E}}=L_{H}/{\dot{E}}$.
At $\eta_{\dot{E}}=1$, ${\rm sin}^{2}\theta$ heating would require the full
$I_{45}=1$ spindown power. For $\eta_{\dot{E}}>1$ we infer some combination of
spindown power more equatorially focused, large $I_{45}$ and smaller $d$.
J1311 certainly requires some or all of these effects to reconcile the
observed heating with the pulsar energy loss rate.
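The efficiency definitions above can be evaluated numerically. The sketch below is ours, not code from the paper; it assumes cgs inputs and the Table 5 footnote relation $f_{H}=3\sin^{2}i\,L_{H}/(8\pi d^{2})$ for aligned spin and orbital angular momenta, and the function name and unit conventions are illustrative:

```python
import math

KPC_CM = 3.086e21  # centimeters per kiloparsec


def heating_efficiencies(L_H, d_kpc, incl_deg, f_gamma, E_dot):
    """Evaluate f_H, eta_gamma, and eta_Edot for sin^2(theta)-beamed heating
    with aligned spin and orbital angular momenta.
    L_H and E_dot in erg/s, f_gamma in erg/s/cm^2, d_kpc in kpc."""
    d_cm = d_kpc * KPC_CM
    f_H = 3.0 * math.sin(math.radians(incl_deg))**2 * L_H / (8.0 * math.pi * d_cm**2)
    eta_gamma = f_H / f_gamma   # equatorial vs. Earth line-of-sight gamma flux
    eta_Edot = L_H / E_dot      # integral heating vs. spin-down power
    return f_H, eta_gamma, eta_Edot
```

By construction $\eta_{\dot E}=1$ when the fitted heating luminosity equals the assumed spin-down power, so values above unity flag some combination of equatorial focusing, larger $I_{45}$, or a smaller distance, as discussed for J1311.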
Table 4: BW distances and Shklovskii-corrected magnetic fields Name | $d$ (kpc) | CL02(kpc) | Y17(kpc) | Parallax(kpc) | $B_{\rm int}(10^{8}$G)
---|---|---|---|---|---
J$0023+0923$ | $0.59^{+1.39}_{-0.39}$ | 0.69 | 1.25 | $-$ | $1.82^{+0.05}_{-0.17}$
J$0636+5128$ | $0.63^{+1.44}_{-0.43}$ | 0.49 | 0.21 | $1.1^{+0.6}_{-0.3}$aaTiming parallax Arzoumanian et al. (2018); bGAIA DR3; cVLBI parallax Romani et al. (2022b) | $1.0^{+0.01}_{-0.06}$
J$0952-0607$ | $6.26^{+0.36}_{-0.40}$ | 0.97 | 1.74 | $-$ | $0.61\pm 0.02$
J$1301+0833$ | $1.77^{+0.10}_{-0.11}$ | 0.67 | 1.23 | $-$ | $0.96\pm 0.03$
J$1311-3430$ | $3.01\pm 0.15$ | 1.4 | 2.43 | $-$ | $2.29\pm 0.03$
J$1653-0158$ | $0.87^{+0.07}_{-0.08}$ | | | $0.57^{+0.44}_{-0.18}$bbGAIA DR3 | $0.36\pm 0.04$
J$1810+1744$ | $2.86\pm 0.13$ | 2.00 | 2.36 | $1.54^{+7.61}_{-0.70}$bbGAIA DR3 | $0.77\pm 0.01$
J$1959+2048$ | $2.25\pm 0.11$ | 2.49 | 1.73 | $2.57^{+1.84}_{-0.77}$ccVLBI parallax Romani et al. (2022b) | $1.19\pm 0.01$
J$2052+1219$ | $5.57^{+0.47}_{-0.55}$ | 2.4 | 3.92 | $-$ | $0.26\pm 0.22$
Table 5: BW Derived and Observed Heating Name | $f_{\gamma}$ | $\eta_{iso}$ | $f_{\rm H}$aa$f_{H}=3{\rm sin^{2}}i\,L_{h}/(8\pi d^{2})$, i.e. assuming $\sin^{2}\theta$ $\gamma$-ray beaming, orbital and spin momenta aligned. | $\eta_{\dot{E}}$ | $\eta_{\gamma}$ | $L_{\rm comp}$bbBolometric luminosity of the hotspot and the companion star without hotspot. Uncertainties generated through MCMC chain of the photometric fitting. | $L_{\rm hs}$bbBolometric luminosity of the hotspot and the companion star without hotspot. Uncertainties generated through MCMC chain of the photometric fitting.
---|---|---|---|---|---|---|---
| $10^{-12}$erg/s/cm2 | | $10^{-12}$erg/s/cm2 | | | $10^{30}$erg/s | $10^{30}$erg/s
J$0023+0923$ | $7.3\pm 1.4$ | 0.019 | $40.6^{+430}_{-38.3}$ | 0.076 | 5.6 | $8.0\pm 1.0$ | $1.5^{+0.7}_{-0.5}$
J$0636+5128$ | $<0.5$ | $<0.004$ | $18.9^{+230}_{-17.7}$ | 0.33 | $>$38 | $4.1^{+0.8}_{-0.7}$ | $0.1^{+0.04}_{-0.02}$
J$0952-0607$ | $2.2\pm 0.5$ | 0.154 | $0.91^{+0.23}_{-0.22}$ | 0.057 | 0.41 | $108^{+10}_{-11}$ | $-$
J$1301+0833$ | $10.6\pm 1.5$ | 0.060 | $10.8^{+3.9}_{-2.84}$ | 0.071 | 1.0 | $16.6^{+1.7}_{-1.8}$ | $-$
J$1311-3430$ | $64.7\pm 1.9$ | 1.40 | $273^{+106}_{-85.6}$ | 4.56 | 4.2 | $321^{+80}_{-62}$ | $16^{+19}_{-7}$
J$1653-0158$ | $33.7\pm 1.8$ | 0.26 | $65.0^{+26.3}_{-18.1}$ | 0.11 | 1.9 | $7.3\pm 0.9$ | $1.9^{+0.9}_{-0.6}$
J$1810+1744$ | $22.4\pm 1.3$ | 0.56 | $47.8^{+5.9}_{-5.6}$ | 1.05 | 2.1 | $187^{+17}_{-19}$ | $19\pm 2$
J$1959+2048$ | $17.9\pm 1.9$ | 0.068 | $105\pm 9.5$ | 0.27 | 5.9 | $110^{+7}_{-8}$ | $-$
J$2052+1219$ | $6.5\pm 1.0$ | 0.720 | $3.5^{+2.3}_{-1.5}$ | 0.35 | 0.54 | $72.1^{+13.8}_{-12.0}$ | $-$
### 4.2 Mass distributions
Figure 3: The companion mass versus NS mass for all BWs (black points) and RBs
(red points) for which a reliable mass measurement exists. Masses for objects
with circled points come from this paper; cross points are RBs from Strader et
al. (2019) and BW J1555 from Kennedy et al. (2022).
Figure 3 shows the distribution of NS and companion masses for both BWs and
RBs, with most reliable measurements of BWs coming from this paper. The NS
masses of RBs have a wide spread and are on average lower than those of BWs.
For companion masses, the BW/RB separation remains clear, with BWs having a
low companion mass $\sim 0.01-0.06\,M_{\odot}$, and RBs having a higher
companion mass $\sim 0.2-0.5\,M_{\odot}$. However, the detection of relatively
heavy BW companions in J1810 and J1555 suggests that the mass gap between
these two populations is only a factor of $\sim 3$.
In Chen et al. (2013), it was argued that RBs and BWs are distinct
evolutionary populations, with irradiation efficiency determining the
formation channel; in this picture RBs do not evolve into BWs. On the other
hand, numerical simulations in Benvenuto et al. (2014) suggest that $P_{B}<1$
d RBs will indeed evolve into BWs. Our sample does not distinguish these two
scenarios, although a decrease in the BW-RB companion mass gap could be seen
as supporting an evolutionary connection.
Table 6: BW Companion Mean Densities Name | $P_{b}$(h) | $\rho$ (g/cm3) | $R/R_{\rm Y_{e}=Y_{\odot}}$
---|---|---|---
J$0023+0923$ | 3.33 | 22.8$\pm 0.8$ | 0.92
J$0636+5128$ | 1.60 | 21.3$\pm 0.3$ | 1.77aaUCXRB progenitor; cold radius for $Y_{e}=0.5$.
J$0952-0607$ | 6.42 | 40.1$\pm 2.5$ | 1.12
J$1301+0833$ | 6.48 | 36.1$\pm 7.5$ | 1.25
J$1311-3430$ | 1.56 | 28.6$\pm 1.4$ | 1.52aaUCXRB progenitor; cold radius for $Y_{e}=0.5$.
J$1653-0158$ | 1.25 | 33.3$\pm 2.5$ | 1.61aaUCXRB progenitor; cold radius for $Y_{e}=0.5$.
J$1810+1744$ | 3.56 | 30.8$\pm 0.5$ | 1.94
J$1959+2048$ | 9.17 | 28.6$\pm 1.9$ | 1.04
J$2051-0827$ | 2.37 | 20.2$\pm 0.6$ | 1.60bbDhillon et al. (2022)
J$2052+1219$ | 2.75 | 33.3 | 1.64
In Table 6, we list the inferred mean density of our companions. With our fit
parameters, these are surprisingly similar, varying by less than $2\times$. We
also list the ratio of their volume equivalent radius to that of a cold
degenerate object of the companion mass. The $T=0$ radius for a $\Gamma=5/3$
degenerate object with $Y_{e}$ electrons per nucleon is $R_{\rm
cold}=0.0126\left(\frac{M_{C}}{M_{\odot}}\right)^{-1/3}(2Y_{e})^{5/3}R_{\odot}$.
In most BW evolutionary scenarios, mass transfer and subsequent companion
evaporation are initiated well before core exhaustion. In this case we might
expect a $\sim$ solar metallicity for the companion with $Y_{e}\approx 0.833$.
We see that most companions are inflated above this value (up to nearly
$2\times$ for the strongly heated J1810). A few objects have small
$R/R_{Y_{\odot}}$ (e.g., J0023); for these we might assume some core
enrichment before evaporation commences, giving smaller cold $R$ and stronger
inflation. On the other hand the shortest period BWs are believed to be the
descendants of ultra-compact X-ray binaries, with hydrogen depleted
secondaries, so for the three objects with $P_{b}<2$ h we can assume
$Y_{e}=0.5$. This is supported by spectroscopy of J1311 and J1653, where the
absence of H features shows that even the photosphere is H depleted (Romani et
al., 2014).
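The inflation ratios in Table 6 follow directly from the cold-radius expression above. A minimal sketch (the function name is ours):

```python
def r_cold(m_c_msun, y_e):
    """T=0 radius (in solar radii) of a Gamma=5/3 degenerate object of mass
    m_c_msun (solar masses) with Y_e electrons per nucleon, per the
    expression in the text:
    R_cold = 0.0126 (M_C/M_sun)^(-1/3) (2 Y_e)^(5/3) R_sun."""
    return 0.0126 * m_c_msun**(-1.0 / 3.0) * (2.0 * y_e)**(5.0 / 3.0)


# A hydrogen-depleted (Y_e = 0.5) companion has a smaller cold radius than
# one of roughly solar composition (Y_e ~ 0.833), so the same measured
# volume-equivalent radius implies a larger inflation ratio R/R_cold.
```

This is why the three $P_{b}<2\,$h systems, evaluated with $Y_{e}=0.5$, show substantially larger $R/R_{\rm cold}$ in Table 6.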
Figure 4: Shklovskii-corrected surface dipole magnetic field vs. accreted mass
for some known BWs (blue) and RBs (red), assuming photometric distances except
when more accurate parallax distances are available. Note that for three PSRs
(J1311, J2039 and J2215) we assume that they were born as higher mass neutron
stars. While the RBs show no trend the BWs seem to have lower magnetic fields
at higher accreted (and final) mass.
### 4.3 Relationship between NS mass and B field
Binary pulsars have lower inferred dipole fields than isolated neutron stars,
and it has long been proposed (e.g., Bisnovatyi-Kogan, 1974; Romani, 1990)
that past accretion has decreased young pulsar magnetic fields to millisecond-
pulsar values. It is unclear whether the amount of mass accreted (as in simple
burial models) or the duration of the accretion phase (as in heating-driven
ohmic decay) is most important (see Mukherjee, 2017, for a review). Note that
small MSP $P_{s}$ are not, of themselves, precise probes of accretion history.
Since the angular momentum needed for spin-up to breakup periods is only $\sim
0.1\,M_{\odot}$ and since birth masses seem to vary by at least this amount it
is unlikely that even high precision mass measurements can determine the mass
gain with sufficient accuracy to probe angular moment addition (although
statistical studies might prove useful). Instead $P_{s}$ tracks the neutron
star dipole field in equilibrium spin-up models, thus telling us something
about the accretion rate at the end of spin-up, and subsequent spindown. The
best hope for a probe of dipole field reduction occurs if large amounts of
mass are needed to decrease the field from initial teragauss values to the
$\sim 10^{8}\,$G levels required for small $P_{s}$. Even then the situation is
complicated, since MSP population studies suggest that the initial masses
$M_{i}$ of these pulsars are bimodal with a dominant component of $M_{i}\leq
1.4M_{\odot}$ and a 20% sub-population with $M_{i}\sim 1.8M_{\odot}$
(Antoniadis et al., 2016, and references therein).
We make a first attempt at such comparison in Figure 4 where the Shklovskii-
corrected intrinsic dipole surface field $B_{\rm int}$ for an assumed moment
of inertia $I_{45}=1$ is plotted against inferred mass increase. Table 4
$B_{\rm int}$ values are computed assuming the nominal photometric distance
uncertainty; for the plot we add a 10% additional uncertainty in quadrature
to acknowledge $I_{45}$ variation with the uncertain mass. Three BWs that have
particularly high accreted mass (J0952 $\Delta M\approx 1.0M_{\odot}$,
J1653$-$0158 $\Delta M\approx 0.8M_{\odot}$, and J1810+1744 $\Delta M\approx
0.76M_{\odot}$, assuming a start at typical $1.35\,M_{\odot}$) also have some
of the lowest $B_{\rm int}$ known. However, J1723 has a relatively low surface
field and has apparently accreted very little mass (although its mass
measurement has large error bars, and this object has not been subject to the
uniform fits of our study). Conversely the well-measured J1311, J2039 and
J2215 are heavy neutron stars. Their $B_{\rm int}$ fields also seem
substantial at $>1.5\times 10^{8}$ G, although with the possible photometric
distance errors above, the Shklovskii corrections remain quite uncertain. A
possible solution is that these systems may be from the 20% subset of MSP
starting the accretion phase from $\sim 1.8\,M_{\odot}$. This is assumed in
Figure 4. At this point we can only conclude that the largest $M$, smallest
$B_{\rm int}$ and shortest $P_{s}$ appear associated with substantial mass
increase – it is likely that other factors are important in determining
$B_{\rm int}$ and we will need more, and better ($B_{\rm int}$, $\Delta M$)
pairs to draw detailed conclusions.
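For reference, the Shklovskii correction behind the $B_{\rm int}$ values in Table 4 can be sketched as below. This is the standard textbook relation, not code from this paper; the $3.2\times 10^{19}$ prefactor assumes $I_{45}=1$ and a 10 km stellar radius:

```python
import math

C_CGS = 2.998e10                   # speed of light, cm/s
KPC_CM = 3.086e21                  # centimeters per kiloparsec
MASYR_RADS = 4.848e-9 / 3.156e7    # mas/yr -> rad/s


def b_intrinsic(P_s, Pdot_obs, mu_mas_yr, d_kpc):
    """Shklovskii-corrected surface dipole field (Gauss):
    Pdot_int = Pdot_obs - P * mu^2 * d / c  (mu = total proper motion),
    B = 3.2e19 * sqrt(P * Pdot_int), assuming I_45 = 1."""
    mu = mu_mas_yr * MASYR_RADS
    pdot_int = Pdot_obs - P_s * mu**2 * (d_kpc * KPC_CM) / C_CGS
    return 3.2e19 * math.sqrt(P_s * pdot_int)
```

Larger distances or proper motions remove more of the observed spin-down, which is why the uncertain photometric distances propagate directly into the $B_{\rm int}$ (and Shklovskii-corrected $\dot{E}$) values discussed above.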
Figure 5: Normalized probability distributions of mass estimates for MSPs with
$M_{\rm NS}>2.0\,M_{\odot}$. The dashed curves show the mass estimates for the
two most massive radio-selected pulsars with white dwarf (WD) companions,
measured from pulse timing (supplemented by WD atmosphere modeling for J0348).
The solid curves show spider binary mass estimates, relying on companion
spectrophotometry. The bottom panel shows the cumulative probability
distributions for $M<M_{\rm max}$, for the radio objects, the spiders, and all
six pulsars. Lower bounds on $M_{\rm max}$ ($1\sigma$, $2\sigma$, and
$3\sigma$) for these distributions are listed in the legend at top.
### 4.4 Mass implications on NS EoS
Shapiro delay measurements of radio pulsar PSR J0348+0432 give $2.01\pm
0.04\,M_{\odot}$ (Antoniadis et al., 2013) and PSR J0740+6620 give $2.08\pm
0.07\,M_{\odot}$ (Fonseca et al., 2021). However, the quoted errors are at the
$1\sigma$ level, and so these data do not require $M_{\rm max}>2\,M_{\odot}$
at high statistical confidence. Since the impact on EoS modeling is driven by
the minimum required $M_{\rm max}$, higher masses with tighter estimates should
provide even greater physics impact. The spider pulsars are good candidates
for such measurements and, indeed, J1810 is the first individual object for
which a $\sim 3\sigma$ lower bound on the mass exceeds $2\,M_{\odot}$ (Romani
et al., 2021). We have argued above that our improved, uniform treatment of BW
LC and spectroscopy data results in more stable and better-determined mass
measurements, relatively immune to modeling uncertainty. Our task now is to
combine these into a global constraint on $M_{\rm max}$. Figure 5 shows the mass
uncertainty ranges for the objects in this paper with central values
$>2M_{\odot}$.
Several approaches can be used to estimate $M_{\rm max}$. One option is to
model the full distribution of (binary) NS masses and see if an upper cutoff
is required; Alsing et al. (2018), for example, determine that $M_{\rm max}$
is within a $1\sigma$ range of 2.0–2.2 $M_{\odot}$. Here we only attempt to
determine a lower bound to $M_{\rm max}$. Assuming that our heavy NS sample is
drawn from a uniform population of lower bound $M_{1}\equiv 1.8\,M_{\odot}$
and upper bound $M_{\rm max}$, $U(M_{1},M_{\rm max})$, the marginal
distribution of individual observations $\{M_{i},\sigma_{i}\}$ is:
$$p(M_{i})=\int_{0}^{\infty}{\rm d}\mu_{i}\,N(M_{i}|\mu_{i};\sigma_{i})\,U(\mu_{i}|M_{1},M_{\rm max})\qquad(1)$$
$$=\frac{{\rm Erf}\left[\frac{M_{i}-M_{1}}{\sqrt{2}\sigma_{i}}\right]-{\rm Erf}\left[\frac{M_{i}-M_{\rm max}}{\sqrt{2}\sigma_{i}}\right]}{2(M_{\rm max}-M_{1})}\qquad(2)$$
For $n$ observations, the log-likelihood is
$$\log\mathcal{L}=-n\log(M_{\rm max}-M_{1})+\sum_{i=1}^{n}\log\left({\rm Erf}\left[\frac{M_{i}-M_{1}}{\sqrt{2}\sigma_{i}}\right]-{\rm Erf}\left[\frac{M_{i}-M_{\rm max}}{\sqrt{2}\sigma_{i}}\right]\right)\qquad(3)$$
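This likelihood transcribes directly into code. The sketch below is our illustration (function name and the use of the standard-library `math.erf` are our choices); $M_{1}=1.8\,M_{\odot}$ as assumed in the text:

```python
import math


def log_likelihood(M_max, masses, sigmas, M1=1.8):
    """Log-likelihood of Eq. (3): observed masses M_i with Gaussian errors
    sigma_i, drawn from a uniform population U(M1, M_max)."""
    logL = -len(masses) * math.log(M_max - M1)
    for M_i, s_i in zip(masses, sigmas):
        a = math.erf((M_i - M1) / (math.sqrt(2.0) * s_i))
        b = math.erf((M_i - M_max) / (math.sqrt(2.0) * s_i))
        logL += math.log(a - b)  # positive for M_max > M1, since Erf is monotonic
    return logL
```

Scanning `M_max` over a grid of trial values then yields the likelihood curves and one-sided lower bounds of the kind shown in the lower panel of Fig. 5.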
The log-likelihood curves for different sample sets (Radio and Spiders) are
plotted in the lower panel of Fig. 5 and the median value, as well as the
$1-,2-$ and $3-\sigma$ lower bounds on $M_{\rm max}$ are listed for each
sample. At the $1\sigma$ level, $M_{\rm max}$ inferred from spiders is higher
than that inferred from radio measurements by $0.18\,M_{\odot}$, suggesting
that NSs in the spider systems are significantly heavier than those in
NS-white dwarf binaries. In our uniform re-fit of these spiders we have
marginalized over the photometric parameters. We have also shown that with
good fits to high quality LC data, we can select between various physically-
motivated heating models. When different models’ fit-quality is similar, we
find that the orbital parameters determining the fit mass are similar too.
Thus we have reduced systematic uncertainties, and it becomes worth discussing
statistical bounds at higher confidence levels. We can now say with high
($3\sigma$ one-sided lower bound) statistical confidence that $M_{\rm
max}>2.08\,M_{\odot}$, and that at $\sim 1\sigma$ significance $M_{\rm
max}>2.19\,M_{\odot}$ is preferred.
Figure 6: Mass-radius curves for a range of EoS. Figure adapted from Özel &
Freire (2016); see that paper for the EoS labels. Note that any viable EOS
must rise above the $M_{\rm max}$ lower bound. As this increases many
otherwise viable EoS are excluded at high confidence level.
For each EoS, integration of the Oppenheimer-Volkoff equations gives a
different $M-R$ curve. Curves for various literature EoS are shown in Figure
6. Although some of these are already deprecated by radius and tidal
deformation measurements of common $\sim 1.4\,M_{\odot}$ NS, if one is
interested in the physics of extreme conditions, the most important aspect of
these curves is probed by the roll-off at their maximum. This is because only
high mass objects achieve the core densities to probe such physics. ‘Soft’,
compressible EoS are already strongly excluded by the $\sim 2M_{\odot}$ radio
MSP. But the compressibility at $\rho>4\rho_{\rm sat}$ is best probed at
higher masses. In general, EoS with non-nucleonic phases, such as hyperons or
condensates, cannot reach these masses. Conversely, to reach masses far above
$2M_{\odot}$ a phase transition from nucleonic to even less compressible
matter might be required. In any event, the bounds from Figure 5 are the
tightest constraints in this region. We have transferred the 1$\sigma$ and
$3\sigma$ limits from all spiders to Figure 6. Note that only five of the
listed models (two with implausibly large radii at $\sim 1.4M_{\odot}$)
survive the $1\sigma$ bound. Extending to $3\sigma$, only two more survive. It
is clear that the astrophysics community's efforts to pin down neutron stars
in the $M-R$ plane, substantially enhanced by our spider measurements, are
dramatically narrowing the options for the dense matter EoS.
We thank Alex Filippenko and colleagues for collaboration on many of the
observations re-analyzed in this paper. We also thank Rene Breton for useful
discussions on spider light curve fitting. DK and RWR were supported in part
by NASA grants 80NSSC17K0024390 and 80NSSC17K0502.
## References
* Akaike (1974) Akaike, H. 1974, IEEE Transactions on Automatic Control, 19, 716
* Aldcroft et al. (1992) Aldcroft, T. L., Romani, R. W., & Cordes, J. M. 1992, The Astrophysical Journal, 400, 638
* Alsing et al. (2018) Alsing, J., Silva, H. O., & Berti, E. 2018, MNRAS, 478, 1377
* Antoniadis (2021) Antoniadis, J. 2021, Monthly Notices of the Royal Astronomical Society, 501, 1116
* Antoniadis et al. (2016) Antoniadis, J., Tauris, T. M., Ozel, F., et al. 2016, arXiv e-prints, arXiv:1605.01665. https://arxiv.org/abs/1605.01665
* Antoniadis et al. (2013) Antoniadis, J., Freire, P. C., Wex, N., et al. 2013, Science, 340
* Arzoumanian et al. (2018) Arzoumanian, Z., Brazier, A., Burke-Spolaor, S., et al. 2018, The Astrophysical Journal Supplement Series, 235, 37
* Benvenuto et al. (2014) Benvenuto, O. G., De Vito, M. A., & Horvath, J. E. 2014, The Astrophysical Journal Letters, 786, L7
* Bisnovatyi-Kogan (1974) Bisnovatyi-Kogan, G. 1974, Soviet Astronomy, 18, 261
* Breton et al. (2012) Breton, R. P., Rappaport, S. A., van Kerkwijk, M. H., & Carter, J. A. 2012, The Astrophysical Journal, 748, 115
* Callanan et al. (1995) Callanan, P. J., Van Paradijs, J., & Rengelink, R. 1995, The Astrophysical Journal, 439, 928
* Chen et al. (2013) Chen, H.-L., Chen, X., Tauris, T. M., & Han, Z. 2013, The Astrophysical Journal, 775, 27
* Claret & Bloemen (2011) Claret, A., & Bloemen, S. 2011, Astronomy & Astrophysics, 529, A75
* Clark et al. (2021) Clark, C., M., K., & Breton, R. P. 2021, in 9th International Fermi Symposium
* Dhillon et al. (2022) Dhillon, V. S., Kennedy, M. R., Breton, R. P., et al. 2022, arXiv e-prints, arXiv:2208.09249. https://arxiv.org/abs/2208.09249
* Djorgovski & Evans (1988) Djorgovski, S., & Evans, C. R. 1988, The Astrophysical Journal, 335, L61
* Draghis & Romani (2018) Draghis, P., & Romani, R. W. 2018, The Astrophysical Journal Letters, 862, L6
* Draghis et al. (2019) Draghis, P., Romani, R. W., Filippenko, A. V., et al. 2019, The Astrophysical Journal, 883, 108
* Fonseca et al. (2021) Fonseca, E., Cromartie, H., Pennucci, T. T., et al. 2021, The Astrophysical Journal Letters, 915, L12
* Fruchter et al. (1988) Fruchter, A. S., Stinebring, D. R., & Taylor, J. H. 1988, Nature, 333, 237, doi: 10.1038/333237a0
* Green et al. (2019) Green, G. M., Schlafly, E., Zucker, C., Speagle, J. S., & Finkbeiner, D. 2019, The Astrophysical Journal, 887, 93
* Husser et al. (2013) Husser, T.-O., Wende-von Berg, S., Dreizler, S., et al. 2013, Astronomy & Astrophysics, 553, A6
* Kandel & Romani (2020) Kandel, D., & Romani, R. W. 2020, The Astrophysical Journal, 892, 101
* Kandel et al. (2021) Kandel, D., Romani, R. W., & An, H. 2021, The Astrophysical Journal Letters, 917, L13
* Kandel et al. (2020) Kandel, D., Romani, R. W., Filippenko, A. V., Brink, T. G., & Zheng, W. 2020, ApJ, 903, 39, doi: 10.3847/1538-4357/abb6fd
* Kennedy et al. (2022) Kennedy, M. R., Breton, R. P., Clark, C. J., et al. 2022, MNRAS, 512, 3001, doi: 10.1093/mnras/stac379
* Kulkarni et al. (1988) Kulkarni, S. R., Djorgovski, S., & Fruchter, A. S. 1988, Nature, 334, 504, doi: 10.1038/334504a0
* Mukherjee (2017) Mukherjee, D. 2017, Journal of Astrophysics and Astronomy, 38, 1
* Nieder et al. (2020) Nieder, L., Clark, C., Kandel, D., et al. 2020, The Astrophysical journal letters, 902, L46
# SparsePose: Sparse-View Camera Pose Regression and Refinement
Samarth Sinha1 Jason Y. Zhang2
Andrea Tagliasacchi1,3,4 Igor Gilitschenski1 David B. Lindell1,5
1University of Toronto 2Carnegie Mellon University 3Simon Fraser University
4Google 5Vector Institute
###### Abstract
Camera pose estimation is a key step in standard 3D reconstruction pipelines
that operate on a dense set of images of a single object or scene. However,
methods for pose estimation often fail when only a few images are available
because they rely on the ability to robustly identify and match visual
features between image pairs. While these methods work reliably with dense
camera views, capturing a large set of images can be time-consuming or
impractical. We propose SparsePose for recovering accurate camera poses given
a sparse set of wide-baseline images (fewer than 10). The method learns to
regress initial camera poses and then iteratively refine them after training
on a large-scale dataset of objects (CO3D: Common Objects in 3D). SparsePose
significantly outperforms conventional and learning-based baselines in
recovering accurate camera rotations and translations. We also demonstrate our
pipeline for high-fidelity 3D reconstruction using only 5-9 images of an
object.
## 1 Introduction
Figure 1: SparsePose – Given sparse input views, our method predicts initial
camera poses and then refines the poses based on learned image features
aggregated using projective geometry. SparsePose outperforms conventional
methods for camera pose estimation based on feature matching within a single
scene and enables high-fidelity novel view synthesis (shown for each iteration
of pose refinement) from as few as five input views.
Computer vision has recently seen significant advances in photorealistic novel-view synthesis of individual objects [41, 52, 59, 24] or entire scenes [5, 61,
80]. Some of these multiview methods take tens to hundreds of images as input
[41, 36, 35, 5], while others estimate geometry and appearance from a few
sparse camera views [52, 75, 43]. To produce high-quality reconstructions,
these methods currently require accurate estimates of the camera position and
orientation for each captured image.
Recovering accurate camera poses, especially from a limited number of images,
is an important problem for practically deploying 3D reconstruction
algorithms, since it can be challenging and expensive to capture a dense set
of images of a given object or scene. While some recent methods for appearance
and geometry reconstruction jointly tackle the problem of camera pose
estimation, they typically require dense input imagery and approximate
initialization [34, 66] or specialized capture setups such as imaging rigs
[27, 77]. Most conventional pose estimation algorithms learn the 3D structure
of the scene by matching image features between pairs of images [58, 46], but
they typically fail when only a few wide-baseline images are available. The
main reason is that features cannot be matched robustly, causing the entire
reconstruction process to fail.
In such settings, it may be outright impossible to find correspondences
between image features. Thus, reliable camera pose estimation requires
learning a prior over the geometry of objects. Based on this insight, the
recent RelPose method [78] proposed a probabilistic energy-based model that
learns a prior over a large-scale object-centric dataset [52]. RelPose is
limited to predicting camera rotations (i.e., translations are not predicted).
Moreover, it operates directly on image features without leveraging explicit
3D reasoning.
To alleviate these limitations, we propose SparsePose, a method that predicts
camera rotation and translation parameters from a few sparse input views based
on 3D consistency between projected image features (see Figure 1). We train
the model to learn a prior over the geometry of common objects [52], such that
after training, we can estimate the camera poses for sparse images and
generalize to unseen object categories. More specifically, our method performs
a two-step coarse-to-fine image registration: (1) we predict coarse
approximate camera locations for each view of the scene, and (2) these initial
camera poses are used in a pose refinement procedure that is simultaneously
iterative and autoregressive, which allows learning fine-grained camera poses.
We evaluate the utility of the proposed method by demonstrating its impact on
sparse-view 3D reconstruction.
Our method outperforms other methods for camera pose estimation in sparse view
settings. This includes conventional image registration pipelines, such as
COLMAP [58], as well as recent learning-based methods, such as RelPose [78].
Overall, SparsePose enables real-life, sparse-view reconstruction with as few
as five images of common household objects, and is able to predict accurate
camera poses with only three source images of previously unseen objects.
In summary, we make the following contributions.
* •
We propose SparsePose, a method that predicts camera poses from a sparse set
of input images;
* •
We demonstrate that the method outperforms other techniques for camera pose
estimation in sparse settings;
* •
We evaluate our approach on 3D reconstruction from sparse input images via an
off-the-shelf method, where our camera estimation enables much higher-fidelity
reconstructions than competing methods.
## 2 Related Work
Camera pose estimation from RGB images is a classical task in computer vision
[42, 47]. It finds applications in structure-from-motion (SfM [20]), visual
odometry [44], simultaneous localization and mapping (SLAM [17]), rigid pose
estimation [9], and novel view synthesis with neural rendering (NVS [41]). In
our discussion of related work, we focus on several related types of pose
inference as well as few-shot reconstruction.
#### Local pose estimation
A variety of techniques estimate camera poses by extracting keypoints and
matching their local features across input images [57, 38, 3, 60, 37, 54, 26].
Features can be computed in a hand-crafted fashion [39, 6] or can be learnt
end-to-end [74, 14]. Cameras are then refined via bundle adjustment, where
camera poses and 3D keypoint locations are co-optimized so as to minimize
reprojection errors [62, 1]. Generally speaking, feature matching methods fail
in few-shot settings due to occlusion and limited overlap between images,
where a sufficient number of common keypoints cannot be found. These methods
fail because they are local – and therefore do not learn priors about the
solution of the problem.
#### Global pose optimization
Global pose optimization methods rely on differentiable rendering techniques
to recover camera poses by minimizing a photometric reconstruction error [68,
71]. Within the realm of neural radiance fields [41], we find techniques for
estimating pose between images [73] and between a pair of NeRFs [19], as well
as models that co-optimize the scene’s structure altogether with camera pose
[34, 66, 12], even including camera distortion [25]. SAMURAI [7] is able to
give very accurate camera poses by performing joint material decomposition and
pose refinement, but relies on coarse initial pose estimates, many images for
training ($\approx$80), and trains from scratch for each new image sequence.
In contrast, our method is trained once and can then perform forward inference
on unseen scenes in seconds. Overall, such global pose optimization methods
often fail in few-shot settings since they require sufficiently accurate pose
initializations to converge. While local methods are often used for
initialization, they too fail with sparse-view inputs, and so cannot reliably
provide pose initializations in this regime.
Figure 2: Method Overview. We propose Sparse-View Camera Pose Regression and
Refinement (SparsePose), which takes as input a few views of an object from wide
baselines, and predicts the camera poses. SparsePose is trained on a large-scale dataset of “common objects” to learn a prior over the 3D geometry of the
scene and the object. Our method works by first predicting coarse initial
camera poses by performing cross-image reasoning. The initial camera pose
estimates are then iteratively refined in an auto-regressive manner, which
learns to implicitly encode the 3D geometry of the scene based on sampled
image features. For notational convenience and simplicity, we use $\mathbf{T}$
to represent the rotations $\mathbf{R}$ and translations $\mathbf{t}$ in
homogeneous coordinates (as used in the text). Figure 3: Stage 1
architecture: we initialize the camera poses by direct regression, using
pretrained features and joint reasoning over the source images. We note that $\bigoplus$
denotes a skip connection (or addition) between the input and the output of
the transformer $\mathcal{T}_{\text{init}}$, and for learnable positional
encoding $\mathbf{\gamma}$. For simplicity we use $\mathbf{T}$ to represent
the rotations $\mathbf{R}$ and translations $\mathbf{t}$ in homogeneous
coordinates (as used in the text). Stage 1 is described in detail in
Section 3.1. Figure 4: Stage 2 architecture: After obtaining the initial
camera poses from Stage 1, we iteratively and autoregressively refine the
camera poses using a local feature reasoning module, which learns the
optimization dynamics of the camera poses. Since the optimization is non-
linear, the model iteratively updates the camera poses by resampling points
and predicting pose offsets. We note that $\bigoplus$ denotes a skip
connection between the input and the output of the transformer
$\mathcal{T}_{\text{refine}}$, and for simplicity we use $\mathbf{T}$ to
represent the rotations $\mathbf{R}$ and translations $\mathbf{t}$ in
homogeneous coordinates (as used in the text). Stage 2 is described in detail
in Section 3.2.
#### Global pose regression
Given a set of input images, camera poses can also be directly regressed. In
visual odometry, we find techniques that use neural networks to auto-
regressively estimate pose [72, 65], but these methods assume a small baseline
between subsequent image pairs, rendering them unsuitable for the problem at
hand. Category-specific priors can be learnt by robustly regressing pose
w.r.t. a “canonical” 3D shape [28, 79, 70], or by relying on strong semantic
priors such as human shape [63, 40] or (indoor) scene appearance [10]. Closest
to our method, recent category-agnostic techniques, such as RelPose [78], are
limited to estimating only rotations by learning an energy-based probabilistic
model over $\mathrm{SO(3)}$. However, RelPose only considers the global image
features from the sparse views and does not enforce local 3D consistency. Even
for predicting rotations, our method performs significantly better than
RelPose since it takes into account both global and local features from
images.
#### Direct few-shot reconstruction
Rather than regressing pose and then performing reconstruction, it is also
possible to reconstruct objects directly from (one or more) images using data-
driven category priors [52], or by training directly on the scene. Category-specific single-image 3D reconstruction methods estimate geometry and pose by matching
pixels or 2D keypoints to a 3D template [45, 31, 32, 33], learning to
synthesize class-specific 3D meshes [18, 33], or exploiting image symmetry
[69, 68]. Recent progress has also enabled few-shot novel view synthesis,
where images of the scene from a novel viewpoint are generated conditioned on
only a small set of images [52, 55, 67, 16, 76, 43, 23, 13, 21]. Such methods
are either trained to learn category-centric features [52, 21, 68], or are
trained on a large-scale dataset to encode the 3D geometry of the scenes [55,
67], or propose regularization schemes for neural radiance based methods [43,
13, 23]. However, 3D consistency in these models is learnt by augmentation
rather than by construction, resulting in lower visual quality compared to our
approach.
## 3 Method
Estimating camera parameters typically involves predicting the intrinsics
(i.e. focal length, principal point, skew) and extrinsics (i.e. rotation,
translation) from a set of images. In this paper, we only consider the task of
estimating extrinsics; we assume the intrinsics are known as they can be
calibrated once for a camera a priori and are often provided by the camera
manufacturer. More formally, our goal is to jointly predict the rotation
$\mathbf{R}_{c}{\in}\mathrm{SO}(3)$ and translation
$\mathbf{t}_{c}{\in}\mathbb{R}^{3}$ for all input images $\mathbf{C}_{c}$. Our
proposed method for this task consists of two phases, which are illustrated in
Fig. 2:
* •
Section 3.1 – we first initialize the camera poses in a coarse prediction step
which considers the global image features in the scene.
* •
Section 3.2 – we refine the poses using an iterative procedure in which an
autoregressive network predicts updates to align local image features to match
the 3D geometry of the scene.
The goal of the coarse pose estimation is to use global image features to
provide an initial 3D coordinate frame and estimates of the relative camera
rotations and translations which can then be iteratively refined.
### 3.1 Initializing camera poses
Given a sparse set of $C$ images $\\{\mathbf{C}_{1},\ldots,\mathbf{C}_{C}\\}$
of a single object, the first task is to predict initial camera poses and
establish a coordinate frame. The initialization:
* •
extracts low-resolution image features $\mathbf{f}\in\mathbb{R}^{F}$;
* •
combines image features into a global representation;
* •
regresses rotation and translation for each camera.
We use a pre-trained, self-supervised encoder $\mathcal{E}_{\text{init}}$ [8]
to extract features from each image. Following ViT [15], we also add a
learnable positional embedding $\boldsymbol{\gamma}_{c}\in\mathbb{R}^{F}$ to
each feature:
$\mathbf{f}_{c}=\overbrace{\mathcal{E}_{\text{init}}(\mathbf{C}_{c})}^{\text{pre-trained features}}+\overbrace{\boldsymbol{\gamma}_{c}}^{\text{learnable encoding}}.$ (1)
These features are passed to a transformer $\mathcal{T}_{\text{init}}$ [64,
15] and a skip connection to aggregate global context and predict a new set of
features:
$\mathbf{g}_{c}=\mathcal{T}_{\text{init}}(\\{\mathbf{f}_{1},\ldots,\mathbf{f}_{C}\\};\boldsymbol{\theta})~{}+~{}\mathbf{f}_{c}.$
(2)
Finally, a shallow fully-connected network $\mathcal{N}_{\text{init}}$ (we use
two hidden layers) predicts quaternions representing the initial camera
rotations $\mathbf{R}{\in}\mathrm{SO}(3)$, and translations
$\mathbf{t}{\in}\mathbb{R}^{3}$:
$\mathbf{R}^{(0)}_{c},\mathbf{t}^{(0)}_{c}=\mathcal{N}_{\text{init}}(\mathbf{g}_{c};\boldsymbol{\theta}).$
(3)
The initial rotation and translation estimates are then refined using the
iterative procedure described in Section 3.2.
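To make the last step concrete, here is a minimal NumPy sketch (not the paper's implementation; the layer sizes and helper names are hypothetical) of how $\mathcal{N}_{\text{init}}$ can map an aggregated feature $\mathbf{g}_{c}$ to a quaternion plus translation, with the quaternion normalized and converted to $\mathbf{R}^{(0)}_{c}\in\mathrm{SO}(3)$:

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix,
    normalizing first so the result is always a valid rotation."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def init_head(g_c, W1, b1, W2, b2):
    """Shallow MLP head: aggregated feature g_c -> (R^(0), t^(0)).
    The 7 outputs split into 4 quaternion and 3 translation components."""
    h = np.maximum(W1 @ g_c + b1, 0.0)  # hidden layer with ReLU
    out = W2 @ h + b2
    return quat_to_rotmat(out[:4]), out[4:]
```

Normalizing before the conversion guarantees a valid rotation regardless of the raw network output, which is also why the training loss is taken over normalized quaternions.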
### 3.2 Refining camera poses
After obtaining the initial poses, $\mathbf{R}^{(0)},\mathbf{t}^{(0)}$, we can
leverage geometric reasoning to iteratively refine the pose estimates
$\mathbf{R}^{(t)},\mathbf{t}^{(t)}$, where $1\leq t\leq T$. We achieve this by
probing a collection of 3D points within the scene given the current camera
estimate (i.e. the points are re-sampled at each step). After projecting the
points back into the images, features are fetched and aggregated into global
feature vectors from which camera pose updates are computed.
#### Sampling
We aim to uniformly sample within the volume where we expect the imaged object
to be located. Given the initially estimated camera poses, the center of the
capture volume $\mathbf{c}$ is predicted considering principal rays (i.e. rays
passing through the principal point of each image). We compute the point
closest to the principal rays (in the least-squares sense) and calculate the
average camera radius as
$r^{(t)}=\mathbb{E}_{c}\|\mathbf{c}-\mathbf{c}_{c}^{(t)}\|_{2}$, where
$\mathbf{c}_{c}^{(t)}$ is the camera center of image $c$ at iteration $t$. We
then uniformly sample $P$ points within a Euclidean ball:
$\displaystyle\\{\mathbf{p}^{(t)}_{1},\ldots,\mathbf{p}^{(t)}_{P}\\}\sim\mathcal{B}(\mathbf{c}^{(t)},r^{(t)}/2).$
(4)
To increase robustness of the optimization and to ensure the camera poses do
not get stuck in a local minimum, we re-sample the 3D points after each camera pose
update to jitter the 3D points and image features, analogous to how PointNet
jitters input pointcloud data [49].
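A NumPy sketch of this sampling step, assuming the principal rays are given as origin/direction pairs (function names are illustrative, not from the paper):

```python
import numpy as np

def closest_point_to_rays(origins, dirs):
    """Least-squares point closest to a set of rays (origin o_i, unit dir d_i):
    minimizes sum_i ||(I - d_i d_i^T)(x - o_i)||^2."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        M = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

def sample_ball(rng, center, radius, P):
    """Uniformly sample P points inside the Euclidean ball B(center, radius)."""
    v = rng.normal(size=(P, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform directions
    u = rng.uniform(size=(P, 1)) ** (1.0 / 3.0)    # inverse radial CDF
    return center + radius * u * v
```

With the ball center so obtained and the average camera radius $r^{(t)}$, the points of Eq. (4) would be `sample_ball(rng, c, r / 2, P)`.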
#### Featurization
Let $\mathbf{R}_{c}^{(t)}$ and $\mathbf{t}_{c}^{(t)}$ denote the estimated
rotation and translation at the $t$-th iteration of the refinement
procedure. Given the set of 3D points and a known camera intrinsic matrix
$\mathbf{K}$, we project them into the coordinate frame of each camera using
3D geometry:
$\displaystyle\mathbf{p}_{c,p}^{(t)}=\mathbf{\Pi}_{c}^{(t)}(\mathbf{p}_{p}^{(t)})=\mathbf{K}(\mathbf{R}_{c}^{(t)}\mathbf{p}^{(t)}_{p}+\mathbf{t}_{c}^{(t)}).$
(5)
We interpolate samples of the previously extracted image features $\mathbf{f}$
at the projected 2D pixel coordinates for each source image [21], resulting in
a set of feature embeddings for each camera image and each point at the
current refinement iteration. We concatenate the positional encoding for the
current predicted rotations and translations and the original 3D points to the
embedding to generate a joint local feature vector:
$\displaystyle\mathbf{f}_{c,p}^{(t)}$
$\displaystyle=\mathcal{E}_{\text{init}}(\mathbf{C}_{c})[\mathbf{p}_{c,p}^{(t)}],\quad[\cdot]\equiv\text{bilinear},$ (6)
$\displaystyle\tilde{\mathbf{f}}_{c,p}^{(t)}$
$\displaystyle=[\mathbf{f}_{c,p}^{(t)},\>\>\gamma(\mathbf{p}_{p}^{(t)}),\>\>\gamma(\mathbf{R}^{(t)}_{c}),\>\>\gamma(\mathbf{t}^{(t)}_{c})]\in\mathbb{R}^{134},$
(7)
where $\gamma(.)$ is a Fourier positional encoding [41]. We then reduce the
dimensionality of this large vector with a single linear layer
$\mathcal{E}_{\text{refine}}$, and then concatenate along the samples
dimension:
$\tilde{\mathbf{f}}_{c}^{(t)}=\left[\mathcal{E}_{\text{refine}}(\tilde{\mathbf{f}}_{c,1}^{(t)};\boldsymbol{\theta}),\ldots,\mathcal{E}_{\text{refine}}(\tilde{\mathbf{f}}_{c,P}^{(t)};\boldsymbol{\theta})\right]\in\mathbb{R}^{\text{P}\cdot
32},$ (8)
where $P$ is the number of sampled points, chosen to be $P{=}1000$, resulting
in a 32,000-dimensional local feature vector for each pose iteration
step. With this _local_ feature vector $\tilde{\mathbf{f}}_{c}^{(t)}$ we
summarize the appearance and geometry of the scene as sampled by the $c$-th
camera, allowing learned refinement of the camera poses in a 3D consistent
manner to predict the camera pose updates.
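The projection of (5) followed by bilinear feature lookup can be sketched as below (an illustrative sketch: the perspective divide, left implicit in (5), is written out, and array shapes are assumptions):

```python
import numpy as np

def project_points(K, R, t, pts):
    """Project (P, 3) world points to (P, 2) pixel coords: x ~ K (R p + t)."""
    cam = pts @ R.T + t              # camera-frame points
    uvw = cam @ K.T                  # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

def bilinear_sample(feat, uv):
    """Bilinearly interpolate an (H, W, F) feature map at (P, 2) pixel coords."""
    H, W, _ = feat.shape
    u = np.clip(uv[:, 0], 0, W - 1.001)
    v = np.clip(uv[:, 1], 0, H - 1.001)
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = (u - u0)[:, None], (v - v0)[:, None]
    return (feat[v0, u0] * (1 - du) * (1 - dv) + feat[v0, u0 + 1] * du * (1 - dv)
            + feat[v0 + 1, u0] * (1 - du) * dv + feat[v0 + 1, u0 + 1] * du * dv)
```

Each projected point thus yields one interpolated feature per camera, which is then concatenated with the positional encodings as in (7).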
#### Optimization
Similar to (2), we use a multi-headed self-attention module along with a skip
connection to perform joint reasoning over the source views:
$\tilde{\mathbf{g}}_{c}=\mathcal{T}_{\text{refine}}(\tilde{\mathbf{f}}_{0}^{(t)},\ldots,\tilde{\mathbf{f}}_{c}^{(t)})+\tilde{\mathbf{f}}_{c}^{(t)},$
(9)
and regress pose updates using a long short-term memory network lstm [22] and
a 2-layer MLP $\mathcal{N}_{\text{pose}}$:
$\displaystyle\bar{\mathbf{g}}_{c}^{(t)}=\textsc{lstm}(\tilde{\mathbf{g}}_{c}^{(t)};\\{\tilde{\mathbf{g}}_{c}^{(t-1)},\dots,\tilde{\mathbf{g}}_{c}^{(0)}\\})$
(10) $\displaystyle\begin{aligned}
\Delta\mathbf{R}^{(t)}_{c},\Delta\mathbf{t}_{c}^{(t)}&=\mathcal{N}_{\text{pose}}\big{(}\bar{\mathbf{g}}_{c}^{(t)}\big{)}\\\
\mathbf{R}_{c}^{(t+1)}&=\mathbf{R}_{c}^{(t)}\cdot\Delta\mathbf{R}_{c}^{(t)}\\\
\mathbf{t}_{c}^{(t+1)}&=\mathbf{t}_{c}^{(t)}+\Delta\mathbf{t}_{c}^{(t)}.\end{aligned}$
(11)
Such auto-regressive models have been shown effective in implementing meta-
optimization routines [2], as they can learn priors on the dynamics of the
optimization in few-shot settings [51]. In practice, we perform 10 steps of
the lstm pose refinement to allow the camera poses to converge. An
additional ablation study for the number of steps is provided in the
supplementary.
To train the model, we minimize the loss
$\displaystyle\mathcal{L}_{\text{pose}}$
$\displaystyle=\mathbb{E}_{c}\sum_{t\in\\{0,T\\}}\>\>\mathcal{L}_{\text{pose}}^{\mathbf{R}}+\mathcal{L}_{\text{pose}}^{\mathbf{t}},$
(12) $\displaystyle\mathcal{L}_{\text{pose}}^{\mathbf{R}}$
$\displaystyle=d(||\mathbf{R}^{(t)}_{c}(\mathbf{R}^{\text{GT}}_{c})^{\top}-\mathbb{I}||_{\mathcal{F}}),$
(13) $\displaystyle\mathcal{L}_{\text{pose}}^{\mathbf{t}}$
$\displaystyle=d(||\mathbf{t}_{c}^{(t)}-\mathbf{t}_{c}^{\text{GT}}||_{2}^{2}),$
(14)
where GT denotes ground truth data obtained by applying COLMAP [58] to densely
sampled video footage—note that we require dense frames at training time only,
while our inference procedure uses sparse views. To make the regression
invariant to choices of coordinate frames, we align the ground truth rotation
and translation such that the first source image is always at the unit camera
location $\mathbf{R}{=}\mathbb{I}$, $\mathbf{t}{=}0$. The model then predicts
relative rotations and translations in this canonical coordinate space. To
stabilize training, we follow [11, 29] and take the loss over normalized
quaternions. For the penalty function $d(.)$ we use an adaptive and robust
loss function [4]. The losses are applied only to the initial pose estimator
and the output of the pose refinement module (i.e., the last iteration of the
LSTM update). Both prediction stages are trained jointly end-to-end.
### 3.3 Implementation Details
#### Training data
We train the model on the CO3D dataset [52], which contains 19,000 videos,
with 1.5 M individual frames and camera poses across 50 different categories
of common objects. Training the model on this large dataset with diverse
objects facilitates learning object-appearance priors, which ultimately
enables pose prediction from sparse views. We split the dataset into 30 train
and 20 test categories, to verify the method’s ability to adapt to novel
classes.
#### Testing data
To construct the test set, we use the 20 test categories, and sample 100
sequences for each number of source images $C\in[3,9]$. To sample a test set,
we follow the uniform variant of the evaluation protocol from RelPose [78],
and perform stratified sampling along the frame indices in the CO3D dataset to
select for wide-baseline views [52]. Furthermore, we use batch sampling from
PyTorch3D [50] to shuffle batches with a random number of source images
$C\in\\{3,\ldots,9\\}$, so that the model learns how to aggregate features and
jointly predict camera poses given varying numbers of source images. We
note that the model architecture is designed to work with an arbitrary number
of unposed source images. As mentioned previously, the camera intrinsic matrix
$\mathbf{K}$ is assumed to be known for all training and testing sequences,
which is a reasonable assumption, given that the CO3D dataset was collected
from smartphones and such parameters can be easily obtained from smartphone
manufacturers.
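One plausible reading of the stratified sampling of frame indices (the exact protocol is that of RelPose [78]; this sketch only illustrates drawing one frame per equal-width stratum to obtain wide-baseline views):

```python
import numpy as np

def stratified_frame_indices(rng, num_frames, C):
    """Draw C wide-baseline frames from a video of `num_frames` frames:
    split the index range into C equal strata and pick one frame per stratum."""
    edges = np.linspace(0, num_frames, C + 1)
    return np.array([rng.integers(int(lo), int(hi))
                     for lo, hi in zip(edges[:-1], edges[1:])])
```

Since consecutive video frames are nearly identical, one draw per stratum guarantees the selected views are spread across the whole capture trajectory.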
#### Training details
The model is trained on two A6000 48 GB GPUs for 3 days until convergence. The
model is jointly trained using an Adam optimizer [30] with initial learning
rate of $10^{-4}$, which is decayed once by a factor of 10 after 250 epochs of
training. The other Adam optimizer parameters are left to the default values
from the PyTorch implementation [48]. A pre-trained DINO Vision Transformer is
used to compute the pre-trained embeddings $\mathcal{E}_{\text{init}}$ [8,
15]. We use the frozen weights from the official release of the ViT-B/8
variant of the DINO ViT model.
Figure 5: Quantitative evaluation of sparse-view camera pose estimation. We
evaluate the quality of rotations and translations for varying numbers of
source views. We show the percentage of cameras that were predicted to be
within $15^{\circ}$ of the ground truth (left) and translations that were
predicted within 20% of the scale of the scene compared to ground truth
(right). SparsePose outperforms both classical and learning-based methods.
Figure 6: Quantitative evaluation of sparse-view, novel view synthesis. We
use the camera poses predicted by each method to perform novel view synthesis;
SparsePose significantly outperforms other baseline methods for this task in
terms of PSNR (higher is better) and LPIPS (lower is better). Figure 7:
Visualization of camera poses. We compare the predicted camera poses for
various methods and different numbers of images; camera centers are projected
to the $x$–$y$ plane for comparison. The ground-truth poses are shown in black,
predicted poses in red, and the first camera for each sequence (used to align
predictions) in green. Gray boxes indicate failure to converge. Figure 8:
Visualizing renders from a few sparse, unposed images. We use the initial
predicted poses by each of the methods, and refine them using BARF [34], and a
category-centric pretrained NeRFormer model, trained on the category. The
importance of predicting accurate initial poses can be seen since SparsePose
is able to generate photorealistic renders. Gray boxes indicate failure to
converge. Figure 9: Ablation results over different design choices for
SparsePose. We perform an exhaustive ablation to justify the design choices
made in the paper, and report the ability of the model to correctly predict
the rotations within $15^{\circ}$ of the ground truth rotations, over
different number of source views.
## 4 Experiments
For the task of sparse-view camera pose estimation, we consider both classical
Structure-from-Motion baselines and modern deep-learning-based techniques.
More specifically, we compare SparsePose against three classical SfM
baselines:
* •
COLMAP with SIFT features [58, 39];
* •
Hierarchical Localization (HLOC) [56] which uses COLMAP with SuperPoint for
feature extraction [14] and SuperGlue for image matching [57];
* •
Pixel-Perfect SfM [37] which is a state-of-the-art SfM method that refines
COLMAP camera poses using “featuremetric bundle adjustment”.
We further compare SparsePose against:
* •
Scene Representation Transformer (SRT) [55] by adding an additional layer to
the transformer output which jointly learns 3D reconstruction and pose
estimation over the large dataset;
* •
MetaPose [63] where we initialize the camera estimates using our initial pose
estimation model, and perform pose refinement using their architecture;
* •
RelPose [78] which only predicts rotations by learning an energy-based
probabilistic model over $\mathrm{SO}(3)$, given a set of images by only
considering the global features across the images in the scene.
For camera pose estimation, since the ground-truth cameras from CO3D [52] are
in arbitrary coordinate frames, we measure the relative rotations and
translations. That is, we measure the absolute angle difference between the
ground-truth and predicted rotations using Rodrigues' formula
[53], and measure the $\ell_{2}$ norm between the
translations. Following RelPose [78], we report the percentage of cameras that
were predicted within $15^{\circ}$ of the ground truth rotation. For
translations, since the scale of the scene changes between sequences, we
report the percentage of cameras that were within $20\%$ of the scale of the
scene.
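Both metrics reduce to simple computations; as a sketch, the angle of the relative rotation can be recovered from its trace (equivalent to inverting Rodrigues' formula):

```python
import numpy as np

def rotation_angle_deg(R_pred, R_gt):
    """Absolute angle (degrees) of the relative rotation R_pred R_gt^T,
    recovered from the trace: cos(theta) = (tr(R_rel) - 1) / 2."""
    R_rel = R_pred @ R_gt.T
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def pct_within(R_preds, R_gts, thresh_deg=15.0):
    """Percentage of cameras whose rotation error is below the threshold."""
    errs = [rotation_angle_deg(Rp, Rg) for Rp, Rg in zip(R_preds, R_gts)]
    return 100.0 * np.mean(np.array(errs) < thresh_deg)
```

The translation metric is analogous, with the $\ell_{2}$ error compared against 20% of the scene scale instead of an angular threshold.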
We then evaluate the predicted cameras for a downstream task of few-view 3D
reconstruction on 20 unseen test categories. For the novel-view synthesis
task, we report the Peak-Signal-to-Noise-Ratio (PSNR) which measures the
difference in the RGB space, and Learned Perceptual Image Patch Similarity
(LPIPS) which measures the difference as a “perceptual” score.
### 4.1 Wide-baseline camera pose estimation
We report quantitative results with different numbers of source views in
Figure 5 which shows the percentage of predicted rotations within $15^{\circ}$
of ground truth and predicted translations within $20\%$ of the scale of the
scene. The ground truth is obtained using SfM on dense videos with more than
$300$ images [58]. SparsePose outperforms both
classical SfM and learning-based baselines by a significant margin. Moreover,
SparsePose consistently improves in performance as the number of source views
increases, which is not the case across all baselines (e.g., MetaPose [63]).
Using our method, $65-80\%$ of predicted camera locations and orientations
fall within the thresholds described above. In contrast, correspondence based
approaches such as COLMAP [58, 39], HLOC [56], and Pixel-Perfect SfM [37], are
only able to recover $20-40\%$ of the cameras within the thresholds for
rotation and translation with $C{<}10$. This significant difference in
performance demonstrates the effectiveness of learning appearance priors and
learning to perform geometry-based pose refinement. We note that RelPose [78]
only predicts rotations, and therefore cannot be evaluated on the translation
prediction task.
We also show a visualization of the predicted and ground truth cameras in
Figure 7, where we project the camera centers onto the $x$–$y$ plane to help
visualize 3D offsets in the camera centers. SparsePose predicts accurate
camera poses given a sparse set of images with very wide baselines,
significantly outperforming other methods. Even on very challenging sequences
with uneven lighting and low textural information (e.g., $C{=}7$), our method
predicts accurate cameras, which is important in practical cases. We note that
on many sequences HLOC fails to register all the source images and so does not
converge to a usable output; we cannot include a result in this case.
### 4.2 Sparse-view 3D reconstruction
We test the predicted cameras on the downstream task of sparse-view 3D
reconstruction using a NeRFormer that is trained on the test categories [52]
(but not any of the test sequences we evaluate on). Training is performed
using the default hyperparameters from PyTorch3D [50]. During evaluation, we
further finetune the cameras using BARF [34], an off-the-shelf 3D
reconstruction and pose refinement technique which updates the camera poses by
minimizing the photometric loss. In addition to the previous baselines, we add
a “BARF-only” baseline, which performs refinement after a unit camera
initialization for finetuning with BARF (i.e.,
$\mathbf{R}=\mathbb{I},\mathbf{t}=0$).
Quantitative results are shown in Figure 6. Here, SparsePose significantly
outperforms all baselines. Most importantly, we show that, while predicted
rotations and translations are not perfect, we can yet recover high-fidelity
3D reconstructions in the downstream task. Additionally, we find that BARF
without good initial pose estimates does not converge to accurate camera
poses. This further highlights the importance of accurate initial pose
estimates. For the comparison to RelPose, we use the ground truth translations
as predicted by CO3D (since the method does not predict translations); yet,
SparsePose significantly outperforms RelPose across all numbers of the source
views in both visual metrics. Finally, we also show qualitative novel-view
synthesis results in Figure 8. SparsePose results in significantly better
novel-view synthesis compared to baselines such as RelPose (with ground truth
translations) and HLOC. Note that when using significantly more source images,
the performance of conventional methods such as HLOC [56] or COLMAP [58]
typically improves. For example, see the analysis in Zhang et al. [78] which
shows performance of such methods with up to 20 images.
### 4.3 Ablation study
We provide an ablation study of SparsePose using different variants of the
model to justify the design choices (see Figure 9): (1) only initial pose
(i.e., no pose refinement), (2) no resampling of the 3D points between
iterations, (3) using an MLP instead of an LSTM, (4) no positional encoding on
the inputs to $\mathcal{T}_{\text{refine}}$, (5) using RGB values instead of
features from $\mathcal{E}_{\text{init}}$ for the refinement step, (6) no
robust kernel [4]. We show that SparsePose with the proposed design
outperforms the other variants. Interestingly, using the initial poses
$\mathbf{R}^{(0)},\mathbf{t}^{(0)}$, we achieve performance similar to
classical SfM methods, which highlights the importance of learned appearance
priors from large datasets. The best performance is achieved when refining
camera poses using the proposed method, including positional encoding, auto-
regressive prediction, etc.
## 5 Conclusion
In this paper, we presented SparsePose, a learning-based solution to perform
sparse-view camera pose estimation from wide-baseline input images. The strong
performance of the method highlights the utility of leveraging large object-
centric datasets for learning pose regression and refinement. Moreover, we
show that accurate few-view pose estimation can enable few-view novel-view
synthesis even from challenging “in-the-wild” datasets, where our method
outperforms other baselines.
There remain many potential directions for future work, among which, we
believe that joint methods for pose regression and 3D scene geometry
prediction may further improve novel-view synthesis from sparse views.
Additional work on learning where and how to sample the 3D
points used in our pose refinement step may help extend the approach to other
camera motions beyond the “tracked-to-object” camera poses common in the CO3D
dataset. Furthermore, it may be interesting to apply variants of our approach
to the challenge of non-rigid structure from motion, where correspondence-
based methods tend to fail. Learning a prior over motion may help with camera
pose estimation for highly non-rigid scenes, such as for scenes with smoke,
loose clothing, or humans performing complex movements. Finally, our work may
be broadly relevant to improving the robustness of robotic vision, autonomous
navigation systems, and for efficient digital asset creation. We envision that
creators will be able to take a few sparse images of common objects and
generate photorealistic 3D assets for applications in augmented or virtual
reality.
#### Acknowledgements
This project was supported in part by NSERC under the RGPIN program.
## References
* [1] Sameer Agarwal, Noah Snavely, Steven M Seitz, and Richard Szeliski. Bundle adjustment in the large. In ECCV, 2010.
* [2] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. NeurIPS, 2016.
* [3] Relja Arandjelović and Andrew Zisserman. Three things everyone should know to improve object retrieval. In CVPR, 2012.
* [4] Jonathan T Barron. A general and adaptive robust loss function. In CVPR, 2019.
* [5] Jonathan T Barron, Ben Mildenhall, Dor Verbin, Pratul P Srinivasan, and Peter Hedman. Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In CVPR, 2022.
* [6] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (surf). Computer vision and image understanding, 2008.
* [7] Mark Boss, Andreas Engelhardt, Abhishek Kar, Yuanzhen Li, Deqing Sun, Jonathan T Barron, Hendrik Lensch, and Varun Jampani. Samurai: Shape and material from unconstrained real-world arbitrary image collections. NeurIPS, 2022.
* [8] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In CVPR, 2021.
* [9] Jiale Chen, Lijun Zhang, Yi Liu, and Chi Xu. Survey on 6d pose estimation of rigid object. In Chinese Control Conference (CCC), 2020.
* [10] Kefan Chen, Noah Snavely, and Ameesh Makadia. Wide-baseline relative camera pose estimation with directional learning. In CVPR, 2021.
* [11] Yun-Chun Chen, Haoda Li, Dylan Turpin, Alec Jacobson, and Animesh Garg. Neural shape mating: Self-supervised object assembly with adversarial shape priors. In CVPR, 2022.
* [12] Shin-Fang Chng, Sameera Ramasinghe, Jamie Sherrah, and Simon Lucey. Garf: Gaussian activated radiance fields for high fidelity reconstruction and pose estimation. arXiv e-prints, 2022.
* [13] Kangle Deng, Andrew Liu, Jun-Yan Zhu, and Deva Ramanan. Depth-supervised nerf: Fewer views and faster training for free. In CVPR, 2022.
* [14] Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. Superpoint: Self-supervised interest point detection and description. In CVPR (workshops), 2018.
* [15] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
* [16] Emilien Dupont, Miguel Bautista Martin, Alex Colburn, Aditya Sankar, Josh Susskind, and Qi Shan. Equivariant neural rendering. In International Conference on Machine Learning. PMLR, 2020.
* [17] Hugh Durrant-Whyte and Tim Bailey. Simultaneous localization and mapping: part i. IEEE robotics & automation magazine, 2006.
* [18] Shubham Goel, Angjoo Kanazawa, and Jitendra Malik. Shape and viewpoint without keypoints. In ECCV, 2020.
* [19] Lily Goli, Daniel Rebain, Sara Sabour, Animesh Garg, and Andrea Tagliasacchi. nerf2nerf: pairwise registration of neural radiance fields. arXiv preprint arXiv:2211.01600, 2022.
* [20] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
* [21] Philipp Henzler, Jeremy Reizenstein, Patrick Labatut, Roman Shapovalov, Tobias Ritschel, Andrea Vedaldi, and David Novotny. Unsupervised learning of 3d object categories from videos in the wild. In CVPR, 2021.
* [22] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8), 1997.
* [23] Ajay Jain, Matthew Tancik, and Pieter Abbeel. Putting nerf on a diet: Semantically consistent few-shot view synthesis. In ICCV, 2021.
* [24] Wonbong Jang and Lourdes Agapito. Codenerf: Disentangled neural radiance fields for object categories. In ICCV, 2021.
* [25] Yoonwoo Jeong, Seokjun Ahn, Christopher Choy, Anima Anandkumar, Minsu Cho, and Jaesik Park. Self-calibrating neural radiance fields. In ICCV, 2021.
* [26] Yuhe Jin, Dmytro Mishkin, Anastasiia Mishchuk, Jiri Matas, Pascal Fua, Kwang Moo Yi, and Eduard Trulls. Image matching across wide baselines: From paper to practice. IJCV, 129(2), 2021.
* [27] Hanbyul Joo and Hao Liu. Panoptic studio: A massively multiview system for social motion capture. In ICCV, 2015.
* [28] Wadim Kehl, Fabian Manhardt, Federico Tombari, Slobodan Ilic, and Nassir Navab. Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again. In ICCV, 2017.
* [29] Alex Kendall, Matthew Grimes, and Roberto Cipolla. Posenet: A convolutional network for real-time 6-dof camera relocalization. In ICCV, 2015.
* [30] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [31] Filippos Kokkinos and Iasonas Kokkinos. To the point: Correspondence-driven monocular 3d category reconstruction. NeurIPS, 2021.
* [32] Nilesh Kulkarni, Abhinav Gupta, and Shubham Tulsiani. Canonical surface mapping via geometric cycle consistency. In ICCV, 2019.
* [33] Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Varun Jampani, Ming-Hsuan Yang, and Jan Kautz. Self-supervised single-view 3d reconstruction via semantic consistency. In ECCV, 2020.
* [34] Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, and Simon Lucey. Barf: Bundle-adjusting neural radiance fields. In ICCV, 2021.
* [35] David B Lindell, Julien NP Martel, and Gordon Wetzstein. AutoInt: Automatic integration for fast neural volume rendering. In CVPR, 2021.
* [36] David B Lindell, Dave Van Veen, Jeong Joon Park, and Gordon Wetzstein. BACON: Band-limited coordinate networks for multiscale scene representation. In CVPR, 2022.
* [37] Philipp Lindenberger, Paul-Edouard Sarlin, Viktor Larsson, and Marc Pollefeys. Pixel-perfect structure-from-motion with featuremetric refinement. In ICCV, 2021.
* [38] Ce Liu, Jenny Yuen, and Antonio Torralba. Sift flow: Dense correspondence across scenes and its applications. IEEE TPAMI, 2010.
* [39] David G Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
* [40] Wei-Chiu Ma, Anqi Joyce Yang, Shenlong Wang, Raquel Urtasun, and Antonio Torralba. Virtual correspondence: Humans as a cue for extreme-view geometry. In CVPR, 2022.
* [41] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In ECCV, 2020.
* [42] Hans Peter Moravec. Obstacle avoidance and navigation in the real world by a seeing robot rover. Stanford University, 1980.
* [43] Michael Niemeyer, Jonathan T Barron, Ben Mildenhall, Mehdi SM Sajjadi, Andreas Geiger, and Noha Radwan. Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. In CVPR, 2022.
* [44] David Nistér, Oleg Naroditsky, and James Bergen. Visual odometry. In CVPR, 2004.
* [45] David Novotny, Nikhila Ravi, Benjamin Graham, Natalia Neverova, and Andrea Vedaldi. C3dpo: Canonical 3d pose networks for non-rigid structure from motion. In ICCV, 2019.
* [46] Onur Özyeşil, Vladislav Voroninski, Ronen Basri, and Amit Singer. A survey of structure from motion*. Acta Numerica, 2017.
* [47] Onur Özyeşil, Vladislav Voroninski, Ronen Basri, and Amit Singer. A survey of structure from motion*. Acta Numerica, 2017.
* [48] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, et al. Pytorch: An imperative style, high-performance deep learning library. In NeurIPS, 2019.
* [49] Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3D classification and segmentation. In CVPR, 2017.
* [50] Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with pytorch3d. arXiv preprint arXiv:2007.08501, 2020.
* [51] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
* [52] Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, and David Novotny. Common objects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction. In ICCV, 2021.
* [53] Olinde Rodrigues. Des lois géométriques qui régissent les déplacements d’un système solide dans l’espace, et de la variation des coordonnées provenant de ces déplacements considérés indépendamment des causes qui peuvent les produire. J. Math. Pures Appl, 1840.
* [54] Barbara Roessle and Matthias Nießner. End2end multi-view feature matching using differentiable pose optimization. arXiv preprint arXiv:2205.01694, 2022.
* [55] Mehdi SM Sajjadi, Henning Meyer, Etienne Pot, Urs Bergmann, Klaus Greff, Noha Radwan, Suhani Vora, Mario Lučić, Daniel Duckworth, Alexey Dosovitskiy, et al. Scene representation transformer: Geometry-free novel view synthesis through set-latent scene representations. In CVPR, 2022.
* [56] Paul-Edouard Sarlin, Cesar Cadena, Roland Siegwart, and Marcin Dymczyk. From coarse to fine: Robust hierarchical localization at large scale. In CVPR, 2019.
* [57] Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. Superglue: Learning feature matching with graph neural networks. In CVPR, 2020.
* [58] Johannes L Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In CVPR, 2016.
* [59] Shih-Yang Su, Frank Yu, Michael Zollhöfer, and Helge Rhodin. A-nerf: Articulated neural radiance fields for learning human shape, appearance, and pose. NeurIPS, 2021.
* [60] Jiaming Sun, Zehong Shen, Yuang Wang, Hujun Bao, and Xiaowei Zhou. Loftr: Detector-free local feature matching with transformers. In CVPR, 2021.
* [61] Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P Srinivasan, Jonathan T Barron, and Henrik Kretzschmar. Block-nerf: Scalable large scene neural view synthesis. In CVPR, 2022.
* [62] Bill Triggs, Philip F McLauchlan, Richard I Hartley, and Andrew W Fitzgibbon. Bundle adjustment—a modern synthesis. In International workshop on vision algorithms, 1999.
* [63] Ben Usman, Andrea Tagliasacchi, Kate Saenko, and Avneesh Sud. Metapose: Fast 3d pose from multiple views without 3d supervision. In CVPR, 2022.
* [64] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. NeurIPS, 30, 2017.
* [65] Sen Wang, Ronald Clark, Hongkai Wen, and Niki Trigoni. Deepvo: Towards end-to-end visual odometry with deep recurrent convolutional neural networks. In Proc. ICRA. IEEE, 2017.
* [66] Zirui Wang, Shangzhe Wu, Weidi Xie, Min Chen, and Victor Adrian Prisacariu. Nerf–: Neural radiance fields without known camera parameters. arXiv preprint arXiv:2102.07064, 2021.
* [67] Daniel Watson, William Chan, Ricardo Martin-Brualla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. Novel view synthesis with diffusion models. arXiv preprint arXiv:2210.04628, 2022.
* [68] Shangzhe Wu, Tomas Jakab, Christian Rupprecht, and Andrea Vedaldi. Dove: Learning deformable 3d objects by watching videos. arXiv preprint arXiv:2107.10844, 2021.
* [69] Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi. Unsupervised learning of probably symmetric deformable 3d objects from images in the wild. In CVPR, 2020.
* [70] Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017.
* [71] Gengshan Yang, Deqing Sun, Varun Jampani, Daniel Vlasic, Forrester Cole, Huiwen Chang, Deva Ramanan, William T Freeman, and Ce Liu. Lasr: Learning articulated shape reconstruction from a monocular video. In CVPR, 2021.
* [72] Nan Yang, Lukas von Stumberg, Rui Wang, and Daniel Cremers. D3vo: Deep depth, deep pose and deep uncertainty for monocular visual odometry. In CVPR, 2020.
* [73] Lin Yen-Chen, Pete Florence, Jonathan T Barron, Alberto Rodriguez, Phillip Isola, and Tsung-Yi Lin. inerf: Inverting neural radiance fields for pose estimation. In IROS, 2021.
* [74] Kwang Moo Yi, Eduard Trulls, Vincent Lepetit, and Pascal Fua. Lift: Learned invariant feature transform. In ECCV. Springer, 2016.
* [75] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelNeRF: Neural radiance fields from one or few images. In CVPR, 2021.
* [76] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In CVPR, 2021.
* [77] Zhixuan Yu, Jae Shin Yoon, In Kyu Lee, Prashanth Venkatesh, Jaesik Park, Jihun Yu, and Hyun Soo Park. Humbi: A large multiview dataset of human body expressions. In CVPR, 2020.
* [78] Jason Y Zhang, Deva Ramanan, and Shubham Tulsiani. Relpose: Predicting probabilistic relative rotation for single objects in the wild. arXiv preprint arXiv:2208.05963, 2022.
* [79] Kaifeng Zhang, Yang Fu, Shubhankar Borse, Hong Cai, Fatih Porikli, and Xiaolong Wang. Self-supervised geometric correspondence for category-level 6d object pose estimation in the wild. arXiv preprint arXiv:2210.07199, 2022.
* [80] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492, 2020.
SparsePose: Sparse-View Camera Pose Regression and Refinement
Supplemental Material
## Appendix A Further Ablation study
### A.1 LSTM iterations
Figure 10: Ablation results varying the number of LSTM steps.
We add an additional ablation experiment over the number of steps required for
the LSTM. We vary the number of LSTM iterations between 0 and 50, and report
the percentage of cameras that were predicted within $15^{\circ}$ of the ground
truth. We report the results in Figure 10. As previously noted, all the
experiments were performed with 10 LSTM iterations, which balances out speed
and accuracy of predictions. While we observe slight improvements with 50 LSTM
iterations, overall, using 10 LSTM iterations performs similarly.
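The accuracy metric used throughout these ablations (the fraction of cameras predicted within a rotation threshold of the ground truth) can be computed from the geodesic angle between rotation matrices. Below is a minimal numpy sketch; the function names and the alignment-free setup are illustrative assumptions, not the paper's exact evaluation code:

```python
import numpy as np

def rotation_angle_deg(R_pred, R_gt):
    """Geodesic distance between two rotation matrices, in degrees."""
    R_rel = R_pred.T @ R_gt
    # trace(R) = 1 + 2*cos(theta); clip for numerical safety.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def accuracy_at_threshold(preds, gts, threshold_deg=15.0):
    """Fraction of predicted cameras within `threshold_deg` of ground truth."""
    errors = [rotation_angle_deg(Rp, Rg) for Rp, Rg in zip(preds, gts)]
    return float(np.mean([e <= threshold_deg for e in errors]))
```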
### A.2 Timing Analysis
Method | Time (seconds)
---|---
HLOC [56] | 38s
COLMAP + SIFT [58, 39] | 18s
Pix. Perfect SFM [37] | 55s
RelPose [78] | 48s
SRT [55] | 2.7s
MetaPose [63] | 2.6s
SparsePose | 3.6s
Table 1: Time (in seconds) to perform registration on a single sequence with
9 source images. To enable fair comparison between all methods, only sequences
where all baseline methods were able to register all the source images were
included in the analysis. Each method is run on the same NVIDIA A6000 GPU
for fair comparison.
### A.3 Different rotation threshold
Figure 11: Evaluating the percentage of cameras accurately predicted within
$10^{\circ}$ and $30^{\circ}$ thresholds.
## Appendix B Baseline details
#### MetaPose
For the MetaPose baseline [63], we use the initialization from SparsePose
and adapt the MetaPose architecture from the officially released code to
perform pose updates on the current estimates. We do not use the human-specific
information proposed, since our data is more general than the data used to
evaluate MetaPose. We train MetaPose on the same subset of CO3D [52] that was
used in training SparsePose.
#### Scene Representation Transformer (SRT)
SRT [55] proposes to learn a prior over the 3D geometry from data implicitly
by learning a “set-latent scene representation” from sparse or dense images of
the scene using transformer encoder and decoder layers. Although SRT does not
learn a direct 3D geometry of the scene, it does learn a prior over the 3D
geometry, as it can perform novel-view synthesis. To adapt SRT to our
evaluation protocol, we train the “unposed” version of SRT and add a
3-layer MLP head that predicts the relative rotations and translations for
the input image sequence, training the method with the same dataset and loss
as SparsePose. For training, we use the same hyperparameters as suggested in
the original paper.
#### Hierarchical Localization (HLOC)
For HLOC [56], we use the officially released code from
https://github.com/cvg/Hierarchical-Localization, which uses SuperPoint [14]
for generating correspondences and SuperGlue [57] for image matching.
#### COLMAP + SIFT
For the COLMAP baseline with SIFT features, we used the officially released
code from HLOC https://github.com/cvg/Hierarchical-Localization, which
supports SIFT image features.
#### RelPose
For the RelPose baseline, we used the officially released code from
https://github.com/jasonyzhang/relpose, and trained on the same dataset used
to train SparsePose. We use the default RelPose hyperparameters.
#### Pixel Perfect SFM
For the Pixel Perfect SFM [37], we used the officially released codebase from
https://github.com/cvg/pixel-perfect-sfm.
## Appendix C More implementation details
Hyperparameter | Value
---|---
Number of training steps | 500,000
Number of source views during step | $\mathcal{U}[3,9]$
Number of sequences sampled per step | 1
Choice of $\mathcal{E}_{\text{init}}$ | DINO [8]
Architecture of $\mathcal{E}_{\text{init}}$ | ViT-B/8 [15]
Number of heads $\mathcal{T}_{\text{init}}$ | 8
Number of heads $\mathcal{T}_{\text{refine}}$ | 2
Number of hidden dim. $\mathcal{T}_{\text{init}}$, $\mathcal{T}_{\text{refine}}$ | 2048
Number of hidden layers $\mathcal{N}_{\text{init}}$, $\mathcal{N}_{\text{pose}}$ | 3
Number of hidden dim. $\mathcal{N}_{\text{init}}$, $\mathcal{N}_{\text{pose}}$ | 512
Activation for $\mathcal{N}_{\text{init}}$, $\mathcal{T}_{\text{init}}$, $\mathcal{T}_{\text{refine}}$, $\mathcal{N}_{\text{pose}}$ | GELU
Number of LSTM steps | 10
Optimizer | Adam [30]
Learning rate | $10^{-4}$
Learning rate decay iterations | 250,000
Learning rate decay factor | 10
Table 2: Hyperparameters and implementation details. These hyperparameters
are shared through all experiments for SparsePose, unless stated otherwise.
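The learning-rate entries in Table 2 (base rate $10^{-4}$, decay iterations 250,000, decay factor 10) suggest a simple step-decay schedule. The table does not state whether the decay is applied once or repeatedly, so the repeated form in the sketch below is an assumption:

```python
def learning_rate(step, base_lr=1e-4, decay_iters=250_000, decay_factor=10):
    """Step-decay schedule read off Table 2: divide the base learning rate
    by `decay_factor` every `decay_iters` training steps (repetition assumed)."""
    return base_lr / (decay_factor ** (step // decay_iters))
```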
## Appendix D Qualitative results
In Figure 12 we provide additional qualitative results with different numbers
of source views and visualize the predicted camera poses by our method
compared to baselines. We also include additional qualitative novel-view
synthesis results for different categories over different numbers of source
views in Figure 13 and Figure 14. In both cases, we see that SparsePose
predicts more accurate camera poses, resulting in higher quality novel-view
synthesis compared to other baselines.
Figure 12: More qualitative results for the predicted camera poses. The
camera centers are projected to the $x-y$ plane for easy visual comparison.
The ground truth poses are shown in black, predicted poses in red, and the
first camera for each sequence (used to align predictions) in green. Gray
boxes indicate failure to converge. Figure 13: More qualitative renders from
a sparse set of unposed images. Figure 14: More qualitative renders from a
sparse set of unposed images.
# Extreme Audio Time Stretching using Neural Synthesis
###### Abstract
A deep neural network solution for time-scale modification (TSM) focused on
large stretching factors is proposed, targeting environmental sounds.
Traditional TSM artifacts such as transient smearing, loss of presence, and
phasiness are heavily accentuated and cause poor audio quality when the TSM
factor is four or larger. The weakness of established TSM methods, often based
on a phase vocoder structure, lies in the poor description and scaling of the
transient and noise components, or nuances, of a sound. Our novel solution
combines a sines-transients-noise decomposition with an independent WaveNet
synthesizer to provide a better description of the noise component and
improved sound quality for large stretching factors. Results of a subjective
listening test against four other TSM algorithms are reported, showing the
proposed method to be often superior. The proposed method is stereo compatible
and has a wide range of applications related to the slow motion of media
content.
## 1 Introduction
Time-scale modification (TSM) refers to a change in the duration or the
playback speed of a sound that does not affect its spectral characteristics,
such as pitch, timbre, and brightness [1, 2, 3]. If an audio signal is simply
played at a different sample rate, the frequency content is deemed to be
changed as the formants of the sound are moved. TSM methods are applied to
avoid this phenomenon, aiming to preserve or retrieve the original spectral
characteristics of the sound.
TSM has long been used in speech, e.g. in audio books and language-learning
services [4, 5, 6], music and remixing [7], broadcasting services [2], and
streaming platforms [8]. The ratio between the modified and the original time
support is controlled by the TSM factor $\alpha$, defining time stretching for
$\alpha>1$ and time compression for $\alpha<1$. All of the speech TSM
applications typically involve small TSM factors (0.25 $\leq\alpha\leq$ 4),
and do not allow for more extreme time-scaling operations, as state-of-the-art
TSM algorithms present poor audio quality at large stretch factors [9, 10]. In
such cases, the sounds typically present strong phasiness and heavily smeared
transients, both known artifacts in phase-vocoder-based TSM implementations
[11, 9]. There is, however, interest in extreme audio time stretching in
applications such as slow motion [12] and ambient sound generation [13, 14].
Recent work by Fierro and Välimäki [15] showed that the quality of a TSM
algorithm can be improved by providing a better description of its sines,
transients, and noise (STN) components, which can be separated and individually
processed according to their classification. It was also hinted that the
potential weakness of the fuzzy phase vocoder [9], which received the highest
overall score both on average and for the largest tested $\alpha$ in a recent
comparison of audio TSM methods [10], lies in the poor description and scaling
of the “noisy” component of percussive events. The smearing of this component
is not countered by the phase randomization, and fully preserving it would
also be incorrect, since the time support of the time-stretched event would
then be wrong.
The audio synthesis landscape has changed since the introduction of WaveNet, a
deep generative model capable of synthesizing raw audio waveforms [16]. The
WaveNet synthesizer can be used for TSM by adjusting the number of audio
samples generated per frame of the local conditioning signal, similarly to how
the hop size between consecutive frames is adjusted from analysis to synthesis
in traditional DSP methods. This idea was first proposed by Huang et al. [17],
although the TSM performance of such a model was not evaluated, as time
stretching was outside the scope of that work.
In this work, we propose a deep learning based method for TSM capable of
producing state-of-the-art results for real-world environmental sounds when
large TSM factors are used. The proposed solution combines established DSP
algorithms and deep learning to produce a hybrid TSM method, whose novelty
lies in the neural resynthesis of the noise component, which is separated from
the sines and transients using the STN decomposition.
The rest of this paper is structured as follows. Sec. 2 summarizes the STN
decomposition technique. Sec. 3 describes the proposed WaveNet architecture,
used to synthesize the stretched noise component. Sec. 4 details the TSM
pipeline. Sec. 5 evaluates the proposed method against four previous
techniques, and Sec. 6 concludes.
## 2 STN Decomposition
The STN separation method proposed by Fierro and Välimäki [15] decomposes a
sound into three abstract classes: sines (tonal content), transients
(impulsive events), and noise (sound nuances). The decomposition is achieved
through soft spectral masks that are derived from the signal spectrogram and
allow for perfect reconstruction.
To derive the class masks for an audio signal $x(n)$, median filtering is
first applied to its magnitude spectrum $X(m,k)$ to highlight vertical
(frequency) and horizontal (time) structures [18]:
$X_{\textrm{v}}(m,k)=\textrm{med}\Big{[}|X(m,k-\frac{L_{\textrm{v}}}{2}+1)|,...,|X(m,k+\frac{L_{\textrm{v}}}{2})|\Big{]}$
(1)
and
$X_{\textrm{h}}(m,k)=\textrm{med}\Big{[}|X(m-\frac{L_{\textrm{h}}}{2}+1,k)|,...,|X(m+\frac{L_{\textrm{h}}}{2},k)|\Big{]},$
(2)
where $\textrm{med}[\cdot]$ is the median function, and $X_{\textrm{v}}$ and
$X_{\textrm{h}}$ are the resulting vertically- and horizontally-enhanced
magnitude spectrograms, respectively. Parameters $L_{\textrm{h}}$ and
$L_{\textrm{v}}$ are the median filter lengths (in samples) in the time and
frequency directions, respectively.
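Equations (1) and (2) amount to 1-D median filtering of the magnitude spectrogram along the frequency and time axes, respectively. A direct (slow, purely illustrative) numpy sketch follows; the truncated handling of windows at the spectrogram edges is an assumption:

```python
import numpy as np

def median_filter_spectrogram(X_mag, L_h, L_v):
    """Median-filter |X(m, k)| along time (length L_h) and along frequency
    (length L_v), following Eqs. (1)-(2). Rows index time frames m, columns
    index frequency bins k."""
    M, K = X_mag.shape
    X_h = np.zeros_like(X_mag)  # horizontally (time) enhanced, Eq. (2)
    X_v = np.zeros_like(X_mag)  # vertically (frequency) enhanced, Eq. (1)
    for m in range(M):
        for k in range(K):
            t0, t1 = max(0, m - L_h // 2 + 1), min(M, m + L_h // 2 + 1)
            f0, f1 = max(0, k - L_v // 2 + 1), min(K, k + L_v // 2 + 1)
            X_h[m, k] = np.median(X_mag[t0:t1, k])  # median over time
            X_v[m, k] = np.median(X_mag[m, f0:f1])  # median over frequency
    return X_h, X_v
```

A steady tone (a line at constant frequency across all time frames) survives the time-direction median but is suppressed by the frequency-direction one, which is exactly the behavior the separation relies on.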
Matrices $X_{\textrm{h}}$ and $X_{\textrm{v}}$ are then used to extract the
tonalness $R_{\textrm{s}}$ and transientness $R_{\textrm{t}}$ matrices with
the following elements [18]:
$R_{\textrm{s}}(m,k)=\frac{X_{\textrm{h}}(m,k)}{X_{\textrm{h}}(m,k)+X_{\textrm{v}}(m,k)}$
(3)
and
$R_{\textrm{t}}(m,k)=1-R_{\textrm{s}}(m,k)=\frac{X_{\textrm{v}}(m,k)}{X_{\textrm{h}}(m,k)+X_{\textrm{v}}(m,k)},$
(4)
respectively. Finally, the soft masks are obtained as follows:
$S(m,k)=f\left(R_{\textrm{s}}(m,k)\right),$ (5)
$T(m,k)=f\left(R_{\textrm{t}}(m,k)\right),$ (6)
$N(m,k)=1-S(m,k)-T(m,k),$ (7)
where
$f(a)=\begin{cases}1,&\mbox{if }a\geq\beta_{\textrm{U}}\\
\sin^{2}\Big{(}\dfrac{\pi}{2}\dfrac{a-\beta_{\textrm{L}}}{\beta_{\textrm{U}}-\beta_{\textrm{L}}}\Big{)},&\mbox{if }\beta_{\textrm{L}}\leq a<\beta_{\textrm{U}}\\
0,&\mbox{otherwise},\end{cases}$ (8)
which are consequently imposed onto $X(m,k)$ via element-wise multiplication
to obtain the separated components. A group of functions to determine soft STN
masks for $\beta_{\textrm{U}}$ = 0.8 and $\beta_{\textrm{L}}$ = 0.7 is
visualized in Fig. 1.
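The smooth thresholding of Eq. (8) and the mask construction of Eqs. (3)-(7) can be sketched as follows, using the $\beta_{\textrm{U}}$ = 0.8 and $\beta_{\textrm{L}}$ = 0.7 values from the text (the function names are ours):

```python
import numpy as np

def soft_mask(a, beta_L=0.7, beta_U=0.8):
    """Smooth thresholding function f(a) of Eq. (8)."""
    a = np.asarray(a, dtype=float)
    out = np.zeros_like(a)
    mid = (a >= beta_L) & (a < beta_U)
    out[mid] = np.sin(0.5 * np.pi * (a[mid] - beta_L) / (beta_U - beta_L)) ** 2
    out[a >= beta_U] = 1.0
    return out

def stn_masks(X_h, X_v):
    """Soft STN masks S, T, N from Eqs. (3)-(7)."""
    R_s = X_h / (X_h + X_v)  # tonalness, Eq. (3)
    R_t = 1.0 - R_s          # transientness, Eq. (4)
    S = soft_mask(R_s)       # Eq. (5)
    T = soft_mask(R_t)       # Eq. (6)
    N = 1.0 - S - T          # noise mask, Eq. (7)
    return S, T, N
```

Because $N = 1 - S - T$, the three masks always sum to one, which is what guarantees the perfect reconstruction mentioned above.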
This decomposition process is repeated for two consecutive stages [15]. The
first implements a large analysis window for better frequency resolution,
separating the sines from the transient and noise residual mixture; the second
uses a short analysis window for better temporal resolution, extracting the
transients from the residual. An example of two-stage STN decomposition for a
violin and castanet sound mixture is shown in Fig. 2.
Fig. 1: Functions for determining soft spectral masks for STN separation, as
described in [15].
(a) Original
(b) Sines
(c) Transients
(d) Noise
Fig. 2: STN decomposition of a castanet and violin mixture.
The STN decomposition enables the application of different TSM algorithms for
each component. While traditional signal processing techniques have been
optimized to deal with either the sines or the transients, the noise component
remains relatively unexplored and hence is suitable for a deep learning
approach.
## 3 Noise Time Stretching via Wavenet
This section introduces a WaveNet architecture to synthesize the time-
stretched noise component.
### 3.1 WaveNet synthesizer
WaveNet is an autoregressive generative model capable of synthesizing raw
audio waveforms [16]. It was initially proposed for speech synthesis, and the
model and variants thereof are still commonly used as part of audio synthesis
pipelines. It consists of a stack of dilated 1D-convolutional layers, with the
dilation factor increasing exponentially at each layer. The input to the
network is also a raw audio waveform, with the model being trained to predict
the subsequent sample in the sequence, given the prior audio samples.
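The receptive field of such a stack grows with the sum of the dilations. The text does not specify the exact stack used here, so the kernel size and dilation pattern below are purely illustrative:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in samples) of stacked dilated 1-D convolutions."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# A common WaveNet-style stack (assumed, not from the text):
# kernel size 2 with dilations 1, 2, 4, ..., 512.
dilations = [2 ** i for i in range(10)]
```

With kernel size 2 and ten exponentially increasing dilations, the stack sees 1024 past samples per prediction, illustrating why exponential dilation is the key to long temporal context at modest depth.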
Additionally, it is possible for the WaveNet model to incorporate a
conditioning signal, allowing control over the audio generated by the model.
There are broadly two types of conditioning information: global and local. The
former describes features that influence the entirety of the generated audio,
e.g. the speaker identity in a speech synthesizer. The latter is used to
represent time-variant features of the desired audio waveform, such as
spectrograms and other time-frequency representations. In this work, we only
consider local conditioning signals.
### 3.2 WaveNet-based TSM
Time stretching via WaveNet is achieved by adjusting the number of audio
samples generated per frame of the local conditioning signal, according to
$\alpha$. For example, if the conditioning signal is the spectrogram extracted
from a target audio signal with a hop size of 256 samples, then the re-
synthesis for $\alpha=2$ will generate 512 audio samples for each frame of
conditioning signal.
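The frame-rate adjustment described above reduces to a one-line computation; the rounding behavior for non-integer products is our assumption:

```python
def samples_per_frame(hop_size, alpha):
    """Audio samples generated per conditioning frame: with analysis hop
    `hop_size` and TSM factor `alpha`, each frame of the conditioning
    signal covers alpha * hop_size output samples."""
    return int(round(alpha * hop_size))
```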
In the proposed approach, the WaveNet synthesizer is used to re-synthesize the
noise component of the target signal, extracted using the two-stage STN
decomposition, according to the desired TSM factor $\alpha$. Three different
time-frequency representations were tested as local conditioning signals for
the network: spectrogram, mel-spectrogram, and Constant-Q Transform (CQT)
spectrogram. Different models were trained using each of these
representations, and it was found that using the CQT-spectrogram produced the
highest quality results. Note that the choice of time-frequency representation
is fixed and is part of the model architecture, so it cannot be changed during
or after training.
The use of the CQT spectrogram as a conditioning signal for a WaveNet
synthesizer was first proposed in [17]. The CQT [19] is a time-frequency
analysis method in which the frequency bins are logarithmically spaced.
Previous work has shown that the CQT is a good choice for the synthesis of
musical sounds [17] and also for environmental sound classification [20]. In this
work, to extract the CQT we use a 5.8 ms hop size (256 samples at the 44.1-kHz
sample rate), a minimum frequency of 32.7 Hz, a maximum frequency of Nyquist
(22.05 kHz), and 48 CQT bins per octave. This results in a total of 451
frequency bins.
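The 451-bin figure follows directly from the chosen frequency range and resolution; a quick check (variable names ours):

```python
import math

bins_per_octave = 48
f_min = 32.7        # Hz
f_max = 22050.0     # Hz, the Nyquist limit at 44.1 kHz

# Number of whole CQT bins spanning f_min to f_max at 48 bins per octave.
n_octaves = math.log2(f_max / f_min)   # about 9.4 octaves
n_bins = math.floor(bins_per_octave * n_octaves)
print(n_bins)  # 451
```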
The network was trained at a sample rate of 44.1 kHz, and a 10-component
Mixture-of-Logistic distributions (MoL) sampling method was used to generate
raw 16-bit audio [21]. Training was run for a total of 1,900,000 iterations,
and took approximately 200 hours on a GPU. At inference time, a beam-search
algorithm [17] was used to remove spurious impulse events generated by the
probabilistic sampling method.
### 3.3 Dataset
To include a diverse range of sounds, a dataset was constructed from three
source datasets. The datasets used were the ESC-50 dataset [22], a labeled
collection of environmental audio recordings, the Freesound Loops dataset
[23], a collection of short musical clips including electronic and acoustics
instrument sounds, as well as speech sounds from the LJ-speech dataset [24].
Samples were randomly removed from the LJ-speech and Freesound loops datasets,
so the different classes of sounds were evenly represented in the combined
dataset. In total, the dataset used for training contained approximately 4
hours of audio.
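One simple way to realize the balancing step described above is to subsample each source dataset down to the size of the smallest one (a sketch; the paper does not specify the exact subsampling procedure, so the function below is our assumption):

```python
import random

def balance(datasets, seed=0):
    """Randomly subsample each source dataset to the size of the
    smallest one, so all classes of sounds are evenly represented."""
    rng = random.Random(seed)
    n = min(len(d) for d in datasets)
    return [rng.sample(d, n) for d in datasets]

subsets = balance([list(range(100)), list(range(40)), list(range(70))])
print([len(s) for s in subsets])  # [40, 40, 40]
```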
## 4 Hybrid TSM Method
The proposed model combines traditional digital signal processing techniques
with the WaveNet synthesizer to improve time-scale modification performance
for extreme time-stretching factors, and it is designed and trained to provide
high-quality stretching for environmental sounds. The TSM pipeline of the
proposed method, described in this section, is visualized in Fig. 3.
Fig. 3: Block diagram of the TSM pipeline, for a single-channel signal. A
thicker line indicates a time-stretched signal.
### 4.1 Processing the individual STN components
The input audio signal is decomposed as described in Sec. 2. Transition
regions for the STN masks are defined by $\beta_{\textrm{U}}$ = [0.8 0.85] and
$\beta_{\textrm{L}}$ = [0.7 0.75], where the two elements refer to the
transition regions in the two stages of the decomposition [15].
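The transition regions define soft masks between 0 and 1. A plausible raised-sine interpolation between the two thresholds is sketched below (the exact mask shape used in the decomposition is specified in [15]; this is an illustration, not the paper's implementation):

```python
import math

def soft_mask(r: float, lo: float, hi: float) -> float:
    """Soft threshold on a measure r in [0, 1]: 0 below lo, 1 above hi,
    raised-sine interpolation inside the transition region [lo, hi]."""
    if r <= lo:
        return 0.0
    if r >= hi:
        return 1.0
    return math.sin(0.5 * math.pi * (r - lo) / (hi - lo)) ** 2

# Upper-stage thresholds from the text: beta_U = [0.8, 0.85].
print(soft_mask(0.825, 0.8, 0.85))  # ~0.5, midpoint of the transition
```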
The sines are processed via phase vocoder with identity phase locking (IPL)
[25], as used in [9]. Transients are preserved and relocated onto the new time
axis [26] according to detected peaks, which are isolated due to the nature of
the STN decomposition. The noise component is neurally resynthesized at the
desired TSM factor, as described in Sec. 3.
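Relocating the preserved transients amounts to mapping their detected sample positions onto the stretched time axis and copying the waveforms unmodified around the new positions (a minimal sketch; peak detection itself follows [26] and the function name is ours):

```python
def relocate_transients(positions, alpha):
    """Map transient onset positions (in samples) onto the
    time-stretched axis for TSM factor alpha."""
    return [round(p * alpha) for p in positions]

print(relocate_transients([1000, 44100], 4))  # [4000, 176400]
```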
### 4.2 Post-processing
Before mixing the three components, the time-stretched sines and noise are
processed through an envelope (see Fig. 3), which reshapes them according to
the envelope of the original signal to compensate for the pre-echo effect that
is typical of spectrogram-based TSM methods [3]. Finally, the three components
are added and the time-stretched output is obtained, as also illustrated in
Fig. 3.
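The envelope correction can be sketched as follows: an amplitude envelope is measured on the original signal, resampled to the stretched length, and imposed on the stretched component. The simple frame-wise peak follower and the frame size below are our assumptions, not the paper's exact design:

```python
import numpy as np

def impose_envelope(stretched, original, frame=1024):
    """Reshape `stretched` by the frame-wise peak envelope of `original`,
    linearly interpolated to the stretched length."""
    env = np.array([np.abs(original[i:i + frame]).max()
                    for i in range(0, len(original), frame)])
    grid = np.linspace(0.0, len(env) - 1.0, num=len(stretched))
    env_interp = np.interp(grid, np.arange(len(env)), env)
    peak = np.abs(stretched).max()
    return stretched / peak * env_interp if peak > 0 else stretched
```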
A time-stretching output example is shown in Fig. 4 for the Soda sound
detailed in Table 1 and a TSM factor $\alpha$ = 4. The “hiss” of the can
opening is correctly resynthesized by the network, together with the noisy
part of the later clicks which are imposed over the preserved transients. The
STN decomposition is particularly helpful in this situation, as the merging of
unaltered transients and the neurally-synthesized noise helps generate
realistic sounds that retain the punch of percussive events while avoiding
common artifacts, such as loss of presence, metallic tones, and phasiness.
### 4.3 Stereo compatibility
The process described so far can be individually applied to the two channels
of a stereo sound, as STN decomposition is transparent with respect to the
stereo field. When a sound is decomposed into its individual components and
then recomposed, the stereo balance and the phase coherence between the left
and right channels, computed according to [27], are preserved. This allows
each stereo channel to be processed independently.
Fig. 4: (Left) Soda sound and (right) its four-times stretched version processed via the proposed model.

Table 1: Audio samples used in the listening test.

Name | Description
---|---
Fireworks | Two fireworks exploding in the air
Soda | Hiss and click sound from a can opening
PingPong | A clip from an amateur ping pong game
Saw | Handsaw sawing through wood
Sneeze | A person sneezing two times
Rooster | A rooster crowing in the morning
Fig. 5: Bar plots of the results of the subjective listening test, showing the
mean opinion scores and 95% confidence intervals of the data for TSM factors
(a) $\alpha$ = 4 and (b) $\alpha$ = 8. The proposed method outperforms the
competing algorithms in both cases for most of the samples under test.
## 5 Evaluation
The proposed method has been validated and compared with previous TSM methods
in a listening test, which is reported in this section.
### 5.1 Listening test design
A formal blind listening test was conducted with 14 experienced listeners, all
of whom reported previous experience in test design and no hearing impairment
or other relevant medical condition. The test software, a customized version
of webMUSHRA [28], was run on a desktop machine running macOS 10.14.6, using a
single pair of Sennheiser HD 650 headphones, inside a sound-proof listening
booth at the Aalto Acoustics Lab, Espoo, Finland. A set of six audio samples,
listed in Table 1, was selected. As the test involves extreme stretching
factors of up to 8 times, a very short duration for each sample (approximately
2 s) was necessary to ensure that even the longest time-stretched sounds
remained shorter than 18 s.
The proposed method (PROP) was evaluated against the fuzzy phase vocoder (FPV)
[9], harmonic-percussive TSM (HP) [29], the two-step phase vocoder with
identity phase locking [25] (IPL), and WSOLA [4], which was provided as the
low-quality anchor (ANC). The last three algorithms are included in the TSM
Toolbox [30]. All the processed sounds were loudness normalized according to
ITU-BS.1770 recommendation [31] to prevent loudness differences from affecting
the grades. In each trial of the test, subjects were presented with one of the
audio excerpts, referred to as reference, and were asked to blindly rate the
quality of the time-scaling operation. The original reference was not included
among the samples to be rated. Subjects were instructed to rate each sample on
a scale from 0 to 100, with no obligation to use the full scale, since the
concept of perfect TSM is undefined and it was anticipated that none of the
samples would be perceived as ideally processed.
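Once the integrated loudness of a processed clip has been measured, the loudness normalization reduces to a linear gain (a sketch; the BS.1770 measurement itself involves K-weighting and gating and is not reproduced here, and the target value below is illustrative):

```python
def loudness_gain(measured_lufs: float, target_lufs: float) -> float:
    """Linear gain that moves a signal from its measured integrated
    loudness to the target loudness (both in LUFS)."""
    return 10.0 ** ((target_lufs - measured_lufs) / 20.0)

# A clip measured at -17 LUFS and normalized to -23 LUFS is attenuated
# by 6 dB, i.e. its amplitude is roughly halved:
print(round(loudness_gain(-17.0, -23.0), 3))  # 0.501
```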
The test was divided into two parts of twelve trials (six trials repeated twice
for consistency) each, the first presenting a TSM factor of 4 and the second a
TSM factor of 8, for a total of 24 trials and 72 audio samples under
investigation. Listeners were allowed a short training session before starting
the actual test to get acquainted with the interface, the keyboard shortcuts,
and the task itself. The results of the training session were not included in
the statistical analysis. Prior to the test, familiarity of the subjects with
the concepts of time-scale modification and transient smearing was assessed.
### 5.2 Results
Mean Opinion Scores (MOS) were computed from the ratings given by the subjects
to estimate the quality of the time-stretching for the methods under test. Bar
plots displaying the means and 95% confidence intervals of the data are shown
in Fig. 5. The proposed method clearly outperforms the other algorithms under
test in both time-stretching conditions for all samples but Rooster.
The majority of the test participants commented on this audio excerpt that the
noisy nature and the lack of sharp transients of the reference made it hard to
generate an expectation for a time-stretched version.
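The statistics in Fig. 5 can be computed from the per-subject ratings in the usual way (a sketch using a normal approximation for the interval; the paper does not state which interval estimator was used, and the ratings below are made up):

```python
import math

def mos_ci95(ratings):
    """Mean opinion score and 95% CI half-width (normal approximation)."""
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return mean, 1.96 * math.sqrt(var / n)

mean, half = mos_ci95([70, 80, 75, 85, 90, 65, 78, 82])
print(round(mean, 2))  # 78.12
```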
## 6 Conclusion
In this paper, we present a novel algorithm for extreme TSM, which takes
advantage of the decomposition of sound into sines, transients, and noise to
individually resynthesize the noise component using a deep neural network.
Sines and transients are time-stretched using established signal processing
techniques which also benefit from the decomposition. The results of a
subjective listening test suggest that the proposed algorithm performs
significantly better than previous TSM methods for real-world environmental
recordings when a large time-stretching factor is used. Future work includes
redesigning the neural synthesis for standard time-stretching factors, using a
larger dataset of more relevant sounds.
## References
* [1] E. Moulines and J. Laroche, “Non-parametric techniques for pitch-scale and time-scale modification of speech,” Speech Commun., vol. 16, no. 2, pp. 175–205, 1995.
* [2] P Dutilleux, G De Poli, A von dem Knesebeck, and U Zölzer, “Time-segment processing (chapter 6),” DAFX: Digital Audio Effects, Second Edition; Zölzer, U., Ed, pp. 185–217, 2011.
* [3] J. Driedger and M. Müller, “A review of time-scale modification of music signals,” Appl. Sci., vol. 6, no. 2, pp. 57, 2016.
* [4] W. Verhelst and M. Roelands, “An overlap-add technique based on waveform similarity (WSOLA) for high quality time-scale modification of speech,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), 1993, vol. 2, pp. 554–557.
* [5] O. Donnellan, E. Jung, and E. Coyle, “Speech-adaptive time-scale modification for computer assisted language-learning,” in Proc. 3rd IEEE Int. Conf. Advanced Learning Technologies, Athens, Greece, 2003, pp. 165–169.
* [6] E. Cohen, F. Kreuk, and J. Keshet, “Speech time-scale modification with GANs,” IEEE Signal Process. Lett., vol. 29, pp. 1067–1071, 2022.
* [7] D. Cliff, “Hang the DJ: Automatic sequencing and seamless mixing of dance-music tracks,” HP Laboratories Technical Report, vol. 104, 2000.
* [8] J. Nam, K. Choi, J. Lee, et al., “Deep learning for audio-based music classification and tagging: Teaching computers to distinguish rock from Bach,” IEEE Signal Process. Mag., vol. 36, no. 1, pp. 41–51, Jan. 2019.
* [9] E-P. Damskägg and V. Välimäki, “Audio time stretching using fuzzy classification of spectral bins,” Appl. Sci., vol. 7, no. 12, pp. 1293, Dec. 2017.
* [10] T. Roberts, A. Nicolson, and K. K. Paliwal, “Deep learning-based single-ended quality prediction for time-scale modified audio,” J. Audio Eng. Soc., vol. 69, no. 9, pp. 644–655, Sept. 2021.
* [11] J. Laroche and M. Dolson, “Phase-vocoder: About this phasiness business,” in Proc. IEEE Workshop Appl. Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, Oct. 1997.
* [12] A. Moinet, Slowdio: Audio Time-Scaling for Slow Motion Sports Videos, Ph.D. thesis, University of Mons, Mons, Belgium, 2013.
* [13] V. Välimäki, J. Rämö, and F. Esqueda, “Creating endless sounds,” in Proc. 21st Int. Conf. Digital Audio Effects (DAFx), Aveiro, Portugal, Sep. 2018, pp. 32–39.
* [14] C. Malloy, “Timbral effects: The Paulstretch audio time-stretching algorithm,” J. Acous. Soc. Am., vol. 151, no. 4, pp. A158–A158, 2022.
* [15] L. Fierro and V. Välimäki, “Enhanced fuzzy decomposition of sound into sines, transients, and noise,” arXiv preprint arXiv:2210.14041, Oct. 2022.
* [16] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, et al., “WaveNet: A generative model for raw audio,” arXiv preprint arXiv:1609.03499, Sept. 2016.
* [17] S. Huang, Q. Li, C. Anil, et al., “TimbreTron: A WaveNet(CycleGAN(CQT(audio))) pipeline for musical timbre transfer,” arXiv preprint arXiv:1811.09620, May 2019.
* [18] D. Fitzgerald, “Harmonic/percussive separation using median filtering,” in Proc. Int. Conf. Digital Audio Effects (DAFx), Graz, Austria, 2010.
* [19] J. C. Brown, “Calculation of a constant Q spectral transform,” J. Acoust. Soc. Am., vol. 89, no. 1, pp. 425–434, 1991.
* [20] M. Huzaifah, “Comparison of time-frequency representations for environmental sound classification using convolutional neural networks,” arXiv preprint arXiv:1706.07156, 2017.
* [21] A. van den Oord, Y. Li, I. Babuschkin, K. Simonyan, et al., “Parallel WaveNet: Fast high-fidelity speech synthesis,” in Proc. Int. Conf. Machine Learning, July 2018, pp. 3918–3926.
* [22] K. J. Piczak, “ESC: Dataset for environmental sound classification,” in Proc. 23rd Annual ACM Conf. Multimedia, Oct. 2015, pp. 1015–1018.
* [23] A. Ramires, F. Font, D. Bogdanov, et al., “The Freesound loop dataset and annotation tool,” in Proc. 21st Int. Conf. Music Information Retrieval (ISMIR), 2020.
* [24] K. Ito and L. Johnson, “The LJ speech dataset,” https://keithito.com/LJ-Speech-Dataset/, 2017.
* [25] J. Laroche and M. Dolson, “Improved phase vocoder time-scale modification of audio,” IEEE Trans. Speech and Audio Process., vol. 7, no. 3, pp. 323–332, 1999.
* [26] F. Nagel and A. Walther, “A novel transient handling scheme for time stretching algorithms,” in Proc. Audio Eng. Soc. 127th Conv., 2009.
* [27] T. Roberts and K. K. Paliwal, “Stereo time-scale modification using sum and difference transformation,” in Proc. 12th Int. Conf. Signal Process. Comm. Syst. (ICSPCS), Dec. 2018, pp. 1–5.
* [28] M. Schoeffler, S. Bartoschek, F-R. Stöter, et al., “WebMUSHRA—A comprehensive framework for web-based listening tests,” J. Open Research Software, vol. 6, no. 1, 2018.
* [29] J. Driedger, M. Müller, and S. Ewert, “Improving time-scale modification of music signals using harmonic-percussive separation,” IEEE Signal Process. Lett., vol. 21, no. 1, pp. 105–109, 2013.
* [30] J. Driedger and M. Müller, “TSM Toolbox: MATLAB implementations of time-scale modification algorithms,” in Proc. Int. Conf. Digital Audio Effects (DAFx), 2014, pp. 249–256.
* [31] ITU-R BS.1770-4, “Algorithms to measure audio programme loudness and true-peak audio level,” standard, ITU/Radiocommunications, Oct. 2015.
# Base Change for coherent cohomology in Berkovich geometry
Mathieu Daylies
We prove some base change theorems for coherent cohomology in the setting of
Berkovich spaces. In this setting, we obtain a flat base change theorem and
some proper base change theorems analogous to those from scheme theory.
## 1\. Introduction
### 1.1. Motivation
When considering a cartesian square of spaces, base change theorems relate how
the direct and the inverse image of sheaves fit together. To be more precise,
let
$\begin{array}{ccc}X^{\prime} & \xrightarrow{\ f^{\prime}\ } & S^{\prime}\\ \scriptstyle{p^{\prime}}\downarrow & & \downarrow\scriptstyle{p}\\ X & \xrightarrow{\ f\ } & S\end{array}$
be a cartesian square in some category of spaces with sheaves on it (schemes,
topological spaces, analytic spaces,…) and let $\mathcal{F}$ be a sheaf on
$X$. Then base change theorems are statements concerning the natural
transformation of sheaves $G:p^{*}R^{i}(f_{*}\mathcal{F})\to
R^{i}f^{\prime}_{*}(p^{\prime*}\mathcal{F})$.
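For the reader's convenience, recall how $G$ is built: applying $R^{i}f_{*}$ to the unit $\mathcal{F}\to p^{\prime}_{*}p^{\prime*}\mathcal{F}$ and using $f\circ p^{\prime}=p\circ f^{\prime}$ together with the two Leray edge maps gives a composite

```latex
R^{i}f_{*}\mathcal{F}
  \longrightarrow R^{i}f_{*}\bigl(p^{\prime}_{*}p^{\prime*}\mathcal{F}\bigr)
  \longrightarrow R^{i}(f\circ p^{\prime})_{*}\bigl(p^{\prime*}\mathcal{F}\bigr)
  = R^{i}(p\circ f^{\prime})_{*}\bigl(p^{\prime*}\mathcal{F}\bigr)
  \longrightarrow p_{*}R^{i}f^{\prime}_{*}\bigl(p^{\prime*}\mathcal{F}\bigr),
```

whose adjoint under $p^{*}\dashv p_{*}$ is exactly $G$.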
Suppose that all the spaces involved in the previous diagram are schemes, and
$\mathcal{F}$ is a quasi-coherent sheaf. Then the flat base change theorem
from algebraic geometry states that if $p$ is flat, and $f$ is quasi-compact
and quasi-separated, then $G$ is an isomorphism (see [12, Tag 02KH]). If the
base $S$ is noetherian, $f$ is proper, and $\mathcal{F}$ is flat over $S$, we
also have proper base change theorems for coherent cohomology that are a bit
weaker (see [12, Tag 07VL] or chapter 2, Section 5 of [10]).
Proper base change theorems are especially useful when we try to understand
how cohomology behaves in families: we can often relate the cohomology on the
fiber to the cohomology on some neighborhood of a point (cf. corollaries 2 and
3 of section 5 of [10]). Motivated by the study of the relative
ampleness locus of a morphism between Berkovich analytic spaces, we are
interested in those base change theorems for coherent cohomology in the non-
archimedean analytic setting. Berkovich stated some of these theorems at the
end of section 3.3 of [1]. In his work, these theorems are deduced from
theorem 3.3.9 of loc.cit., whose proof is supposed to be completely analogous
to the proofs of the corresponding facts in algebraic geometry. However, in
analytic geometry, the tensor products that appear are replaced by completed
tensor products which are not really compatible with tools from homological
algebra used in the previous proof. This fact leads us to think that the proof
in loc. cit. is not correct.
Note that in Berkovich geometry, we also have an étale cohomology and some
base change theorems for étale cohomology (see [5] for a reader friendly
introduction to étale cohomology of Berkovich analytic spaces, and section 7.7
and 7.8 of [2] for the precise theorems), but we only deal with coherent
cohomology here.
### 1.2. Overview of our results
#### 1.2.1. Flat base change for coherent cohomology
In Berkovich geometry, we have at our disposal a flatness theory, due to
Ducros in [6]. In algebraic geometry, the easiest base change theorem is the
flat base change theorem, so it is natural to try to prove such a theorem in
the analytic setting. We obtain the following theorem 2.1:
###### A Theorem.
Suppose
$\begin{array}{ccc}X^{\prime} & \xrightarrow{\ f^{\prime}\ } & S^{\prime}\\ \scriptstyle{p^{\prime}}\downarrow & & \downarrow\scriptstyle{p}\\ X & \xrightarrow{\ f\ } & S\end{array}$
is a cartesian diagram of $k$-analytic spaces with $f$ proper and $p$ flat.
Let $\mathcal{F}$ be a coherent sheaf on $X$. Then for all $i\geq 0$ the
natural morphism of coherent $\mathcal{O}_{S^{\prime}}$-modules
$p^{*}(R^{i}f_{*}\mathcal{F})\to R^{i}f^{\prime}_{*}(p^{\prime*}\mathcal{F})$
is an isomorphism.
We can reduce this statement to the case where $S=\mathcal{M(A)}$ and
$S^{\prime}=\mathcal{M(B)}$ are both affinoid, and then it becomes a theorem
on the Čech complex $C^{*}$ of $\mathcal{F}$ associated to some affinoid
covering of $X$. The Čech complex of $X_{\mathcal{B}}$ is now
$C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}$ and we just need to show that
the natural map $H^{i}(C^{*})\otimes_{\mathcal{A}}\mathcal{B}\to
H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$ is an isomorphism. In
scheme theory, this would be straightforward because tensor product by a flat
algebra commutes by definition with cohomology of a complex. On the contrary,
in analytic geometry, even if $p:S^{\prime}=\mathcal{M(B)}\to
S=\mathcal{M(A)}$ is flat, the functor
$-\hat{\otimes}_{\mathcal{A}}\mathcal{B}$ need not be exact in any sense.
We introduce in 2.5 the property $\mathcal{P_{A}}$: an $\mathcal{A}$-Banach
module $M$ satisfies the property $\mathcal{P_{A}}$ if the natural arrow
$H^{i}(C^{*})\otimes_{\mathcal{A}}M\to
H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}M)$ is an isomorphism. In
order to prove the theorem, it is sufficient to show that $\mathcal{B}$
satisfies the property $\mathcal{P_{A}}$.
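For contrast, the scheme-theoretic computation mentioned above is immediate: exactness of $-\otimes_{\mathcal{A}}\mathcal{B}$ for flat $\mathcal{B}$ means the functor commutes with kernels and images of the differentials of $C^{*}$, so

```latex
H^{i}(C^{*})\otimes_{\mathcal{A}}\mathcal{B}
  \;=\;\bigl(\ker d^{i}/\operatorname{im}d^{i-1}\bigr)\otimes_{\mathcal{A}}\mathcal{B}
  \;\cong\;\ker(d^{i}\otimes 1)/\operatorname{im}(d^{i-1}\otimes 1)
  \;=\;H^{i}(C^{*}\otimes_{\mathcal{A}}\mathcal{B}).
```

It is precisely this argument that breaks down once the ordinary tensor product is replaced by the completed one.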
The first step is to show, by direct computation involving explicit polydisks,
that any quasi-smooth algebra over $\mathcal{A}$ satisfies $\mathcal{P_{A}}$.
We then show that the $\mathcal{P_{A}}$ property behaves well with respect to
composition, and that this property is local for the $G$-topology on $S$. The
last and crucial step is now to use the analytic version of Raynaud-Gruson’s
theory of dévissages, due to Ducros in section 8.2 of [6], to handle the
general case of an affinoid algebra $\mathcal{B}$, analytically flat over
$\mathcal{A}$.
#### 1.2.2. Proper base change for coherent cohomology
In 3.3.9 of [1], Berkovich claimed that we could approach these problems in
the non-archimedean analytic setting with the strategy used by Mumford in
chapter 5 of [10] for schemes. The idea is to show that if we have a proper
scheme $X\to\operatorname{Spec}A$ over $A$, $\mathcal{F}$ a coherent sheaf
over $X$, flat over $A$, then there exists a complex of finite and flat
$A$-modules $K^{*}$ that compute universally the cohomology of $\mathcal{F}$
in the following sense: for all $A$-algebra $B$, we have an isomorphism
$H^{i}(K^{*}\otimes_{A}B)\to H^{i}(X_{B},\mathcal{F}_{B})$. To prove this, we
use the Čech complex of $\mathcal{F}$ relative to some open cover, and we use
tools from homological algebra. The finiteness of the $K^{i}$’s is the crucial
point of this theorem, because the Čech complex does not usually contain any
finite module. Classic proper base change theorems are then a consequence of
the existence of this complex.
If we try to adapt this proof to the non-archimedean analytic setting, it
seems to fail. Indeed, if we consider a proper analytic space
$X\to\mathcal{M(A)}$ over an affinoid algebra $\mathcal{A}$ and $\mathcal{F}$
a coherent sheaf over $X$, flat over $\mathcal{A}$, the Čech complex of
$\mathcal{F}$ does not consist of finite-type modules over $\mathcal{A}$, so
if $p:\mathcal{M(B)}\to\mathcal{M(A)}$ is a morphism of affinoid spaces, the
tensor products appearing in the Čech complex of $\mathcal{F}_{\mathcal{B}}$
are completed tensor products, which prevent us from using the tools from
homological algebra.
We however get the following theorem 3.1:
###### B Theorem.
Let $\mathcal{A}$ be a $k$-affinoid algebra, $X$ a proper
$\mathcal{A}$-analytic space, and $\mathcal{F}$ a coherent sheaf on $X$, that
is flat over $\mathcal{M(A)}$. Then there exists a finite complex $K^{*}$ of
finite and projective $\mathcal{A}$-modules such that for any non-archimedean
complete field extension $L$ of $k$, and any $L$-affinoid algebra
$\mathcal{B}$ with a morphism of analytic spaces
$p:\mathcal{M(B)}\to\mathcal{M(A)}$, we have, for all $n\in\mathbb{N}$, an
isomorphism $H^{n}(K^{*}\otimes_{\mathcal{A}}\mathcal{B})\to
H^{n}(X_{\mathcal{B}},\mathcal{F}_{\mathcal{B}})$.
The idea of the proof is to work with the Čech complex of $\mathcal{F}$, and
to note that the theorem is true in the following cases:
1. (1)
if $\mathcal{B}=\mathcal{A}_{L}$ with $L$ a non-archimedean complete extension
of $k$ because $\hat{\otimes}_{k}L$ transforms exact admissible sequences into
exact admissible sequences by Gruson’s theory in [8],
2. (2)
if $\mathcal{B}$ is $k$-affinoid, and $p$ is analytically flat by theorem A,
3. (3)
if $p$ is a finite morphism between $k$-affinoid spaces by the algebraic
theory and the construction by Mumford in Section 5 of [10], because in this
case, completed tensor product and usual tensor product are the same.
Then, we can use Ducros’s work on relative dimension in [7], and in
particular corollary 4.7, which gives a nice $G$-local factorization of some
morphisms and allows us to conclude.
Theorem B allows us to derive classical versions of proper base change
theorems in Berkovich geometry. In particular, we obtain the upper semi-
continuity, for the Zariski topology, of the dimension of the cohomology
groups of the fibers, and the fact that the Euler characteristic is locally
constant.
We also recover a classical statement as a corollary in theorem 3.2:
###### C Theorem.
Let $f:X\to S$ be a proper morphism of $k$-analytic spaces with $S$ connected
and reduced, and $\mathcal{F}$ a coherent sheaf on $X$ that is flat on $S$.
Then, for all $p\in\mathbb{N}$, there is an equivalence between:
1. (1)
the function $s\in S\mapsto\dim_{\mathcal{H}(s)}H^{p}(X_{s},\mathcal{F}_{s})$
is constant;
2. (2)
the sheaf $R^{p}f_{*}(\mathcal{F})$ is locally free on $S$, and for all $s\in
S$, the natural map
$R^{p}f_{*}(\mathcal{F})\otimes_{\mathcal{O}_{S}}\mathcal{H}(s)\to
H^{p}(X_{s},\mathcal{F}_{s})$ is an isomorphism.
If these equivalent conditions are fulfilled, then the natural map
$R^{p-1}f_{*}(\mathcal{F})\otimes_{\mathcal{O}_{S}}\mathcal{H}(s)\to
H^{p-1}(X_{s},\mathcal{F}_{s})$ is an isomorphism for all $s\in S$.
### 1.3. Acknowledgments
I would like to express my sincere gratitude to Antoine Ducros, without whom
this work would not have been possible.
## 2\. Flat base change
The main goal of this section is to show a flat base change theorem for
coherent modules in Berkovich geometry. Our strategy will be to use Ducros’s
theory of dévissages in Berkovich geometry (see chapter 8 of [6]) to reduce
the theorem to two elementary cases, the case of a quasi-smooth base change,
and the case of a $G$-covering. The case of a $G$-covering follows from
Kiehl’s theorem on direct images, and the case of quasi-smooth morphisms will
be handled by direct computation involving explicit polydisks.
We first recall Ducros’s theory of dévissages and state the explicit version
that we will use here. The following proposition is just a version of theorem
8.3.4 of [6] in which every quasi-smooth space has been shrunk.
###### 2.1 Proposition (Ducros).
Let $p:Y\to X$ be a morphism between good $k$-analytic spaces, let
$\mathcal{F}$ be a flat coherent sheaf on $Y$, let $y$ be a point of
$\mathrm{supp}\leavevmode\nobreak\ Y$, let $x$ be its image in $X$ and let $n$
be the relative dimension of $p$ at $x$. Then there exist $r\leq n$, a
decreasing sequence of non-negative integers $n_{1}>n_{2}>...>n_{r}$ and a
list of data $(V,\\{T_{i},\pi_{i},t_{i},L_{i},P_{i}\\}_{i\in\\{1,...r\\}})$,
where :
1. (1)
$V$ is an affinoid neighborhood of $y$ in $Y$;
2. (2)
$T_{i}=\mathcal{M}(C_{i})$ is a $k$-affinoid domain of a smooth $X$-space of
pure relative dimension $n_{i}$ and $t_{i}$ is a point of $T_{i}$ lying over
$x$;
3. (3)
for every $i$, $L_{i}$ and $P_{i}$ are finite $C_{i}$-modules, and $L_{i}$ is
free over $C_{i}$;
4. (4)
$t_{i}\in\mathrm{supp}\,(P_{i})$ if $i<r$, and $P_{r}=0$;
5. (5)
$\pi_{1}$ is a finite $X$-map from $\mathrm{supp}\,(\mathcal{F}_{V})$ to
$T_{1}$ such that $\pi_{1}^{-1}(t_{1})=\\{y\\}$ set-theoretically;
6. (6)
$\pi_{i}$ is a finite $X$-map from $\mathrm{supp}\,(P_{i-1})$ to $T_{i}$ such
that $\pi_{i}^{-1}(t_{i})=\\{t_{i-1}\\}$ set-theoretically;
7. (7)
there exists an exact sequence of finite $C_{1}$-modules $0\to L_{1}\to
H^{0}(V,\mathcal{F})\to P_{1}\to 0$;
8. (8)
for any $i\in\\{2,...,r\\}$, there is an exact sequence of finite
$C_{i}$-modules $0\to L_{i}\to P_{i-1}\to P_{i}\to 0$.
###### Proof.
Let
$(V^{\prime},\\{T_{i},\pi_{i},t_{i},\mathcal{L}_{i},\mathcal{P}_{i}\\}_{i\in\\{1,...,r\\}})$
be an $X$-dévissage of $\mathcal{F}$ at $y$ (such a dévissage exists by theorem
8.2.3 of [6]). Then the arrow $\mathcal{L}_{1}\to{\pi_{1}}_{*}\mathcal{F}_{V}$
is injective at $t_{1}$ and the arrow
$\mathcal{L}_{i}\to{\pi_{i}}_{*}\mathcal{P}_{i-1}$ is injective at $t_{i}$ for
every $i\geq 2$ by theorem 8.3.4 of [6] and flatness of $\mathcal{F}$.
By definition, and by theorem 2.4.9 of [6], we can shrink $V^{\prime}$ and all
the $T_{i}$ to reduce to the case where if $L_{i}$ (resp. $P_{i}$) is the
$O_{T_{i}}(T_{i})$-module associated with $\mathcal{L}_{i}$ (resp.
$\mathcal{P}_{i}$) then the arrows $L_{i}\to P_{i-1}$ and $L_{1}\to
H^{0}(V,\mathcal{F})$ are injective and the sequences $0\to L_{i}\to
P_{i-1}\to P_{i}\to 0$ for all $i>1$ and $0\to L_{1}\to
H^{0}(V,\mathcal{F})\to P_{1}\to 0$ are exact. ∎
###### 2.2 Remark.
By the reverse implication in 8.3.4 of [6], the coherent module associated to
$P_{i}$ over $T_{i}$ is flat over $\mathcal{M(A)}$, because we have a dévissage
of it, $(V,\\{T_{i},\pi_{i},t_{i},L_{i},P_{i}\\}_{i\in\\{2,...r\\}})$, which is
just the restriction of the dévissage of $\mathcal{F}$.
The main theorem of this section is the following:
###### 2.1 Theorem.
Suppose
$\begin{array}{ccc}X^{\prime} & \xrightarrow{\ f^{\prime}\ } & S^{\prime}\\ \scriptstyle{p^{\prime}}\downarrow & & \downarrow\scriptstyle{p}\\ X & \xrightarrow{\ f\ } & S\end{array}$
is a cartesian diagram of $k$-analytic spaces with $f$ proper and $p$ flat.
Let $\mathcal{F}$ be a coherent sheaf on $X$. Then for all $i\geq 0$ the
natural morphism of coherent $\mathcal{O}_{S^{\prime}}$-modules
$p^{*}(R^{i}f_{*}\mathcal{F})\to R^{i}f^{\prime}_{*}(p^{\prime*}\mathcal{F})$
is an isomorphism.
The rest of this section is devoted to the proof of this theorem.
###### 2.3 Remark.
By Kiehl’s theorem on the coherence of direct images, theorem 3.3 in [9], the
theorem holds for every embedding of an analytic domain $p:S^{\prime}\to S$.
Since this assertion is local on $S^{\prime}$ for the $G$-topology, we can
assume that $S^{\prime}$ is affinoid, and the theorem is just a consequence of
the following theorem:
###### 2.2 Theorem.
Suppose
$\begin{array}{ccc}X^{\prime} & \xrightarrow{\ f^{\prime}\ } & S^{\prime}\\ \scriptstyle{p^{\prime}}\downarrow & & \downarrow\scriptstyle{p}\\ X & \xrightarrow{\ f\ } & S\end{array}$
is a cartesian diagram of $k$-analytic spaces with $f$ proper and
$p:S^{\prime}:=\mathcal{M(B)}\to S:=\mathcal{M(A)}$ a flat morphism between
affinoid spaces. Let $\mathcal{F}$ be a coherent sheaf on $X$. Then for all
$i\geq 0$ the natural morphism of $B$-modules
$H^{i}(X,\mathcal{F})\otimes_{\mathcal{A}}\mathcal{B}\to
H^{i}(X_{\mathcal{B}},\mathcal{F}_{B})$ is an isomorphism.
###### 2.4 Remark.
The space $X$ is compact so if we take a $G$-covering of $X$ by a finite
number of affinoid domains, the $i$-th cohomology module
$H^{i}(X,\mathcal{F})$ is given by $H^{i}(C^{*})$ the $i$-th cohomology module
of the Čech complex associated to $\mathcal{F}$, where the latter is denoted
by $(C^{*})$.
###### 2.5 Definition.
Let $\mathcal{A}$ be a $k$-affinoid algebra, and let $M$ be a Banach module
over $\mathcal{A}$. We say that $M$ satisfies the property
$\mathcal{P}_{\mathcal{A}}$ if, for every $i\geq 0$ and every coherent sheaf
$\mathcal{F}$ on a proper $\mathcal{A}$-space $X$ provided with a finite
$G$-covering, the natural arrow $H^{i}(C^{*})\hat{\otimes}_{\mathcal{A}}M\to
H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}M)$ is an isomorphism, where $(C^{*})$
is the Čech complex of $\mathcal{F}$.
###### 2.6 Remark.
By properness, the $H^{i}(C^{*})$ are finite $\mathcal{A}$-modules, so we have
the equality
$H^{i}(C^{*})\hat{\otimes}_{\mathcal{A}}M=H^{i}(C^{*}){\otimes}_{\mathcal{A}}M$.
###### 2.7 Definition.
In the setting of theorem 2.1, we say that a morphism $p:S^{\prime}\to S$
satisfies property $\mathcal{Q}_{S}$ if, for every proper morphism $f:X\to S$
and every coherent sheaf $\mathcal{F}$ on $X$, the conclusion of theorem 2.1
is satisfied.
###### 2.8 Remark.
For a morphism of $k$-affinoid spaces $p:S^{\prime}\to S$, the following
properties are equivalent: $\mathcal{O}(S^{\prime})$ satisfies
$\mathcal{P}_{\mathcal{O}(S)}$, and the morphism $p$ satisfies
$\mathcal{Q}_{S}$. Nevertheless, the property $\mathcal{Q}$ also makes sense
for non-affinoid analytic spaces.
###### 2.9 Example.
Let $\mathcal{M(A)}=\bigcup_{j\in J}\mathcal{M}(\mathcal{A}_{j})$ be a
covering of an affinoid space by a finite number of affinoid domains. Let
$M:=\prod_{j\in J}\mathcal{A}_{j}$. Then, as in 2.3, by Kiehl’s coherence
theorem, $M$ satisfies the property $\mathcal{P_{A}}$.
###### 2.10 Remark.
In the setting of theorem 2.2, let $X=\bigcup_{j\in J}X_{j}$ be a covering of
$X$ by a finite number of affinoid domains. Then we have a finite $G$-covering
of $X_{\mathcal{B}}$ by affinoid domains, namely
$X_{\mathcal{B}}=\bigcup_{j\in J}X_{j}\times_{\mathcal{A}}\mathcal{B}$, and by
properness we can compute its cohomology groups using Čech cohomology, so we
have the equality
$H^{i}(X_{\mathcal{B}},\mathcal{F}_{B})=H^{i}(C_{\mathcal{B}}^{*})$, where
$C_{\mathcal{B}}^{i}=(C^{i})\hat{\otimes}_{\mathcal{A}}\mathcal{B}$.
In order to show theorem 2.2, it is sufficient to show that
$H^{i}(C_{\mathcal{B}}^{*})=H^{i}(C^{*})\otimes_{\mathcal{A}}\mathcal{B}$, that is
to say that the Banach $\mathcal{A}$-module $\mathcal{B}$ satisfies the
property $\mathcal{P}_{\mathcal{A}}$. This will be our main goal in the rest
of this section.
The following lemma shows in particular that the property $\mathcal{P}$ is
$G$-local on the source.
###### 2.11 Lemma.
Let $p:\mathcal{M(B)}\to\mathcal{M(A)}$ and $q:\mathcal{M(C)}\to\mathcal{M(B)}$
be morphisms of $k$-affinoid spaces.
1. (1)
Assume that the induced $k$-algebra morphism $\mathcal{B}\to\mathcal{C}$ is
faithfully flat and that $\mathcal{C}$ satisfies $\mathcal{P}_{\mathcal{B}}$
(e.g. the morphism $q$ is a finite affinoid $G$-covering of $\mathcal{M(B)}$).
If $\mathcal{C}$ satisfies the property $\mathcal{P}_{\mathcal{A}}$, then
$\mathcal{B}$ satisfies the property $\mathcal{P}_{\mathcal{A}}$.
2. (2)
Assume that $p$ satisfies $\mathcal{P_{A}}$ and $q$ satisfies
$\mathcal{P}_{\mathcal{B}}$. Then $p\circ q$ satisfies $\mathcal{P_{A}}$.
3. (3)
Suppose that there exists a $G$-covering $\mathcal{M(B)}=\bigcup_{i\in
I}\mathcal{M}(\mathcal{B}_{V_{i}})$ of $\mathcal{M(B)}$ such that for all
$i\in I$, the $\mathcal{A}$-module $\mathcal{B}_{V_{i}}$ satisfies the
property $\mathcal{P_{A}}$. Then $\mathcal{B}$ satisfies the property
$\mathcal{P_{A}}$.
###### Proof.
Let $X$ be a proper $\mathcal{A}$-space provided with a covering by a finite
number of affinoid domains, and let $\mathcal{F}$ be a coherent sheaf on $X$.
Denote by $(C^{*})$ the associated Čech complex.
We start with the first point. For all $i\geq 0$, we have a natural arrow
$H^{i}(C^{*})\otimes_{\mathcal{A}}\mathcal{B}\to
H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$. By faithful flatness,
this arrow is an isomorphism if and only if its base change
$H^{i}(C^{*})\otimes_{\mathcal{A}}\mathcal{B}\otimes_{\mathcal{B}}\mathcal{C}\to
H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\otimes_{\mathcal{B}}\mathcal{C}$
is an isomorphism, but because $\mathcal{C}$ satisfies $\mathcal{P_{A}}$, the
first term is isomorphic to
$H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{C})$ and because $q$ satisfies
$\mathcal{P}_{\mathcal{B}}$, we have an isomorphism
$H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\otimes_{\mathcal{B}}\mathcal{C}\to
H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}\hat{\otimes}_{\mathcal{B}}\mathcal{C})=H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{C})$.
For the second point, we have an isomorphism
$H^{i}(C^{*})\otimes_{\mathcal{A}}\mathcal{C}=H^{i}(C^{*})\otimes_{\mathcal{A}}\mathcal{B}\otimes_{\mathcal{B}}\mathcal{C}\to
H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\otimes_{\mathcal{B}}\mathcal{C}$
because $p$ satisfies $\mathcal{P_{A}}$ and this last term is isomorphic to
$H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}\hat{\otimes}_{\mathcal{B}}\mathcal{C})$
because $q$ satisfies $\mathcal{P_{B}}$.
The last point is just a consequence of the first point: by compactness, we
can choose $I$ to be finite, and then the natural map $\coprod_{i\in
I}\mathcal{M}(\mathcal{B}_{V_{i}})\to\mathcal{M(A)}$ satisfies
$\mathcal{P_{A}}$ and the arrow $\mathcal{B}\to\prod_{i\in
I}\mathcal{B}_{V_{i}}$ is faithfully flat. ∎
We have the same kind of good behaviour for the property $\mathcal{Q}$. Let’s
first introduce the following definition:
###### 2.12 Definition.
Let $f:Y\to X$ be a morphism of $k$-analytic spaces. Then $f$ is said to be
properly surjective if there exists a $G$-covering of $X$ by quasi-compact
analytic domains, each of which is the image of a quasi-compact analytic
domain of $Y$.
###### 2.13 Lemma.
Let $p:S\to T$ and $q:R\to S$ be morphisms of $k$-analytic spaces.
1. (1)
Assume that $q$ is flat and properly surjective and satisfies property
$\mathcal{Q}_{S}$ (e.g. the morphism $q$ is a $G$-covering). If
$p\circ q$ satisfies $\mathcal{Q}_{T}$, then $p$ satisfies $\mathcal{Q}_{T}$.
2. (2)
Assume that $p$ satisfies $\mathcal{Q}_{T}$ and $q$ satisfies
$\mathcal{Q}_{S}$. Then $p\circ q$ satisfies $\mathcal{Q}_{T}$.
3. (3)
Assume that there exists a $G$-covering $S=\bigcup_{i\in I}S_{i}$ by
quasi-compact analytic domains such that for all $i\in I$, the arrow
$S_{i}\to T$ satisfies $\mathcal{Q}_{T}$. Then $p$ satisfies $\mathcal{Q}_{T}$.
###### Proof.
Let $f:X\to T$ be a proper $T$-space and $\mathcal{F}$ be a coherent sheaf on
$X$. Let $X_{S}:=X\times_{T}S$ and $X_{R}:=X\times_{T}R$, and denote by
$p^{\prime}:X_{S}\to X$, $q^{\prime}:X_{R}\to X_{S}$, $f_{S}:X_{S}\to S$ and
$f_{R}:X_{R}\to R$ the canonical morphisms. We have the following double
cartesian commutative square:
$\begin{array}{ccccc}X_{R}&\xrightarrow{\ q^{\prime}\ }&X_{S}&\xrightarrow{\ p^{\prime}\ }&X\\ {\scriptstyle f_{R}}\downarrow&&{\scriptstyle f_{S}}\downarrow&&\downarrow{\scriptstyle f}\\ R&\xrightarrow{\ q\ }&S&\xrightarrow{\ p\ }&T\end{array}$
Assume that $p\circ q$ satisfies $\mathcal{Q}_{T}$ and $q$ is flat and
properly surjective and satisfies $\mathcal{Q}_{S}$, and let
$h:p^{*}(R^{i}f_{*}\mathcal{F})\to R^{i}(f_{S})_{*}((p^{\prime})^{*}\mathcal{F})$
be the canonical morphism. Then we can pullback $h$ by $q^{*}$ and because $q$
satisfies $\mathcal{Q}_{S}$, $q^{*}h$ is the canonical morphism $(p\circ
q)^{*}(R^{i}f_{*}\mathcal{F})\to(R^{i}(f_{R})_{*}(p^{\prime}\circ
q^{\prime})^{*}\mathcal{F})$ and the latter is an isomorphism because $p\circ
q$ satisfies $\mathcal{Q}_{T}$. By descent of flat properly surjective
morphisms (theorem 3.13 of [4]), $h$ is an isomorphism, so $p$ satisfies
$\mathcal{Q_{T}}$.
Assume now that $p$ satisfies $\mathcal{Q}_{T}$ and $q$ satisfies
$\mathcal{Q}_{S}$. Then we have an isomorphism
$h:p^{*}R^{i}f_{*}\mathcal{F}\to
R^{i}(f_{S})_{*}((p^{\prime})^{*}\mathcal{F})$. We can now pull back this
isomorphism by $q^{*}$ to get another isomorphism $q^{*}h$ and because $q$
satisfies $\mathcal{Q}_{S}$, $q^{*}h$ is the canonical morphism $(p\circ
q)^{*}(R^{i}f_{*}\mathcal{F})\to(R^{i}(f_{R})_{*}((p^{\prime}\circ
q^{\prime}))^{*}\mathcal{F})$, so $p\circ q$ satisfies $\mathcal{Q}_{T}$.
Now, the last point is just a consequence of the first point, because the
morphism $\coprod_{i\in I}S_{i}\to S$ is properly surjective, and
$\coprod_{i\in I}S_{i}\to T$ satisfies $\mathcal{Q}_{T}$. ∎
###### 2.14 Lemma.
Let $p:S^{\prime}:=\mathcal{M(B)}\to S=\mathcal{M(A)}$ be a morphism of
$k$-affinoid spaces such that $\mathcal{B}$ satisfies $\mathcal{P_{A}}$, and
let $L$ be a finite free Banach $\mathcal{B}$-module. Then $L$ satisfies the
property $\mathcal{P}_{\mathcal{A}}$.
###### Proof.
Write $L=\bigoplus_{j\in J}\mathcal{B}$ with $J$ finite. Let $X$ be a proper
$\mathcal{A}$-space provided with a covering by a finite number of affinoid
domains and $\mathcal{F}$ a coherent sheaf on $X$. Denote by $(C^{*})$ the
associated Čech complex. Then by proposition 2.1.7.6 of [3], we have
$C^{*}\hat{\otimes}_{\mathcal{A}}L=\bigoplus_{j\in
J}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$, and because the sum is
finite, for all $i\geq 0$ we have the equality
$H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}L)=\bigoplus_{j\in
J}H^{i}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$. Because the usual
tensor product commutes with direct sums and $\mathcal{B}$ satisfies
$\mathcal{P_{A}}$, we see that $L$ satisfies $\mathcal{P_{A}}$. ∎
###### 2.15 Lemma.
Let $p:Y:=\mathcal{M(B)}\to X:=\mathcal{M(A)}$ be a quasi-smooth morphism of
$k$-affinoid spaces. Then $\mathcal{B}$ satisfies the property
$\mathcal{P_{A}}$.
###### Proof.
Let $y\in\mathcal{M(B)}$ and denote by $d$ the relative dimension of $p$ at
$y$. Then by corollary 5.3.7 of [6], there exists an affinoid neighborhood
$V_{y}\subset\mathcal{M(B)}$ such that $p_{|V_{y}}$ is quasi-smooth of
relative dimension $d$. By part 3 of 2.11, we can replace $\mathcal{M(B)}$ by
$V_{y}$ and $p$ by $p_{|V_{y}}$, i.e. we can assume that $p$ is quasi-smooth
of constant relative dimension $d$, shrinking $Y$ again if needed.
By corollary 5.3.2 of [6], we can shrink $Y$ to assume that the coherent
$\mathcal{B}$-module $\Omega_{Y/X}$ is free of rank $d$. Now let
$f_{1},\dots,f_{d}$ be analytic functions on $\mathcal{M(B)}$ such that the
differentials $((df_{j})(y))_{j}$ form a basis of
$(\Omega_{\mathcal{B}/\mathcal{A}})_{\mathcal{H}(y)}$. Then by lemma 5.4.5 of
[6], the map $\varphi=(f_{1},\dots,f_{d}):Y\to\mathbb{A}^{d}_{X}$ is
quasi-étale at $y$. Now, the quasi-étale (quasi-smooth of relative dimension
zero) locus of $\varphi$ on $Y$ is an open subset (it is even Zariski-open) of
$Y$ by theorem 10.7.2 of [6]. We can therefore find an affinoid neighborhood
$U$ of $y$ in $Y$ such that the restriction $\varphi_{|U}$ remains
quasi-étale, and by an application of part 3 of lemma 2.11, up to passing to a
$G$-covering of $Y$, we can assume that $\varphi:Y\to\mathbb{A}^{d}_{X}$ is
quasi-étale at each point.
By definition of $\mathbb{A}^{d}_{X}$ and compactness of $Y$, there exists a
compact polydisk $D$ over $\mathcal{A}$ such that $\varphi(Y)\subset D$. Then
the induced morphism $Y\to D$ is quasi-étale, and by part 2 of lemma 2.11, it
is sufficient to show that $D\to X$ (resp. $Y\to D$) satisfies
$\mathcal{P_{\mathcal{A}}}$ (resp. $\mathcal{P}_{\mathcal{O}(D)}$). The
morphism $D\to X$ satisfies $\mathcal{P_{A}}$ for the following reason. Write
$D=\mathcal{M}(\mathcal{A}\\{\underline{R}^{-1}\underline{T}\\})$ with
$\underline{R}\in(\mathbb{R}_{+}^{*})^{d}$ a polyradius, let $Z$ be a proper
$\mathcal{A}$-space provided with a covering by a finite number of affinoid
domains, let $\mathcal{F}$ be a coherent sheaf on $Z$ and let $(C^{*})$ be the
associated Čech complex. Then for all $i\geq 0$ we have
$H^{i}(C^{*}\\{\underline{R}^{-1}\underline{T}\\})=H^{i}(C^{*})\otimes_{\mathcal{A}}\mathcal{A}\\{\underline{R}^{-1}\underline{T}\\}$,
because for every exact admissible sequence of Banach $\mathcal{A}$-modules
$0\to M^{\prime}\to M\to M^{\prime\prime}\to 0$, the sequence $0\to
M^{\prime}\hat{\otimes}_{\mathcal{A}}\mathcal{A}\\{\underline{R}^{-1}\underline{T}\\}\to
M\hat{\otimes}_{\mathcal{A}}\mathcal{A}\\{\underline{R}^{-1}\underline{T}\\}\to
M^{\prime\prime}\hat{\otimes}_{\mathcal{A}}\mathcal{A}\\{\underline{R}^{-1}\underline{T}\\}\to
0$ is also exact admissible.
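The preceding exactness claim can be made concrete through the classical description of the completed tensor product with a Tate algebra (a standard fact, recalled here and not spelled out in the text): for every Banach $\mathcal{A}$-module $M$ one has
$M\hat{\otimes}_{\mathcal{A}}\mathcal{A}\\{\underline{R}^{-1}\underline{T}\\}\cong M\\{\underline{R}^{-1}\underline{T}\\}=\left\\{\sum_{\nu\in\mathbb{N}^{d}}m_{\nu}\underline{T}^{\nu}\ :\ m_{\nu}\in M,\ \|m_{\nu}\|\,\underline{R}^{\nu}\to 0\right\\},$
and the functor $M\mapsto M\\{\underline{R}^{-1}\underline{T}\\}$ acts coefficient-wise on the terms of an admissible exact sequence, which is why admissible exactness is preserved.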
It remains to show that if $q:Y\to D$ is a quasi-étale morphism between
affinoid spaces, then $\mathcal{O}(Y)$ satisfies $\mathcal{P}_{\mathcal{O}(D)}$.
By part 3 of lemma 2.11, we can argue $G$-locally on $Y$ and assume that $Y$
is an affinoid domain of an étale space $T$ over $D$. Let $p:T\to D$ be the
structural morphism and $j:Y\to T$ the inclusion that identifies $Y$ with an
affinoid domain of $T$. To show that $\mathcal{O}(Y)$ satisfies
$\mathcal{P}_{\mathcal{O}(D)}$, it is sufficient to show that $q$ satisfies
$\mathcal{Q}_{D}$. Suppose now that étale morphisms satisfy the property
$\mathcal{Q}$. Then $q$ also satisfies $\mathcal{Q}$ by part 2 of lemma 2.13,
because affinoid domain embeddings satisfy property $\mathcal{Q}$.
We can now assume that $q_{1}:Y\to D$ is an étale morphism between not
necessarily affinoid spaces, and it is sufficient to show that $q_{1}$
satisfies the property $\mathcal{Q}$. By definition, there exist a
$G$-covering $Y=\bigcup_{j\in J}Y_{j}$ of $Y$ by affinoid domains and affinoid
domains $D_{j}$ of $D$ such that the restriction $q_{1|Y_{j}}:Y_{j}\to D_{j}$
is finite and étale. Let $q_{2}:\coprod_{j\in J}Y_{j}\to\coprod_{j\in J}D_{j}$
be the induced finite morphism, and let $r_{2}:\coprod_{j\in J}Y_{j}\to Y$ and
$r_{1}:\coprod_{j\in J}D_{j}\to D$ be the induced $G$-coverings. We now have
the following commutative square:
$\begin{array}{ccc}\coprod_{j\in J}Y_{j}&\xrightarrow{\ q_{2}\ }&\coprod_{j\in J}D_{j}\\ {\scriptstyle r_{2}}\downarrow&&\downarrow{\scriptstyle r_{1}}\\ Y&\xrightarrow{\ q_{1}\ }&D\end{array}$
By part 3 of lemma 2.13, $q_{1}$ satisfies the property $\mathcal{Q}_{D}$ if
the morphism $\coprod_{j\in J}Y_{j}\to D$ satisfies property
$\mathcal{Q}_{D}$. Now, because the morphism $\coprod_{j\in J}D_{j}\to D$
satisfies property $\mathcal{Q}_{D}$, again by part 2 of lemma 2.13, it is
sufficient to show that the morphism $\coprod_{j\in J}Y_{j}\to\coprod_{j\in
J}D_{j}$ satisfies the property $\mathcal{Q}$.
We can now assume that the morphism $q:Y\to D$ is a finite étale morphism of
affinoid spaces, and we want to show that it possesses the property
$\mathcal{Q}$. Let $f:Z\to D$ be a proper $D$-space provided with a covering
by a finite number of affinoid domains $Z=\bigcup Z_{r}$. Let $\mathcal{F}$ be
a coherent sheaf on $Z$. Denote by $(C^{*})$ the associated Čech complex. Let
$f^{\prime}:Z\times_{D}Y\to Y$ and $q^{\prime}:Z\times_{D}Y\to Z$ be the
canonical projections (the base changes of $f$ and $q$). Then, by properness,
we have
$H^{i}(Z\times_{D}Y,q^{\prime*}\mathcal{F})=H^{i}(C^{*}\hat{\otimes}_{\mathcal{O}(D)}\mathcal{O}(Y))$
because the inverse images $(q^{\prime-1}(Z_{r}))$ form an affinoid covering
of $Z\times_{D}Y$. Because the arrow $\mathcal{O}(D)\to\mathcal{O}(Y)$ is
finite, we have
$C^{i}\hat{\otimes}_{\mathcal{O}(D)}\mathcal{O}(Y)=C^{i}{\otimes}_{\mathcal{O}(D)}\mathcal{O}(Y)$,
and because the same arrow is flat by proposition 4.3.1 of [6], the cohomology
of the complex $(C^{*})$ commutes with the (ordinary) tensor product, so we have
$H^{i}(C^{*}{\otimes}_{\mathcal{O}(D)}\mathcal{O}(Y))=H^{i}(C^{*})\otimes_{\mathcal{O}(D)}\mathcal{O}(Y)$.
We eventually obtain the equality
$H^{i}(Z\times_{D}Y,q^{\prime*}\mathcal{F})=H^{i}(C^{*})\otimes_{\mathcal{O}(D)}\mathcal{O}(Y)$,
and $q$ satisfies property $\mathcal{Q}_{D}$. ∎
###### 2.16 Remark.
Combining lemmas 2.14 and 2.15, we see that if $\mathcal{A}$ is a $k$-affinoid
algebra and $M$ is any finite free module over a quasi-smooth
$\mathcal{A}$-algebra, then $M$ satisfies the property
$\mathcal{P}_{\mathcal{A}}$.
We now give a last lemma, which will allow us to prove theorem 2.2 by
induction on the relative dimension of the morphism.
###### 2.17 Lemma.
Let $p:R\to S$ be a morphism of affinoid spaces with $\mathcal{M(A)}=S$ and
$\mathcal{F}$ be a coherent sheaf on $R$ that is flat over $S$. Assume that
there exists a quasi-smooth affinoid $S$-space $T=\mathcal{M(C)}$, and
$\mathcal{L}$, $\mathcal{N}$ some coherent sheaves on $T$ such that
$L:=\mathcal{L}(T)$ is free over $\mathcal{C}$, $\mathcal{N}$ is flat over $S$
and such that there exists a finite $S$-map
$\pi:\mathrm{supp}\leavevmode\nobreak\ \mathcal{F}\to T$ and an exact sequence
of finite $\mathcal{C}$-modules $0\to L\to H^{0}(R,\mathcal{F})\to N\to 0$,
where $N$ is equal to $\mathcal{N}(T)$.
Assume that $N$ satisfies property $\mathcal{P}_{\mathcal{A}}$. Then
$F:=H^{0}(R,\mathcal{F})$ also satisfies $\mathcal{P}_{\mathcal{A}}$.
###### Proof.
Let $X$ be a proper $\mathcal{A}$-space provided with a covering by a finite
number of affinoid domains $X=\bigcup X_{i}$ and a coherent sheaf
$\mathcal{G}$ on it. Denote by $(C^{*})$ the Čech complex associated to
$\mathcal{G}$. Let $q_{i}:Y_{i}:=\coprod_{i_{0}<\cdots<i_{i}}X_{i_{0}}\cap\cdots\cap
X_{i_{i}}\to X$ be the morphism of $k$-affinoid spaces given by the inclusions.
Then we have the equality $H^{0}(Y_{i},q_{i}^{*}\mathcal{G})=C^{i}$. We also
have an exact sequence of coherent $\mathcal{O}_{T}$-modules
$0\to\mathcal{L}\to\pi_{*}\mathcal{F}\to\mathcal{N}\to 0$. We can summarize
the situation by the following commutative cartesian square, where
$Z:=T\times_{S}Y_{i}$:
$\begin{array}{ccc}Z&\longrightarrow&Y_{i}\\ \downarrow&&\downarrow\\ T&\longrightarrow&S\end{array}$
By flatness of $\mathcal{N}$, we can apply theorem 4.5.7 of [6] to this
square, the previous exact sequence and the coherent sheaf
$q_{i}^{*}\mathcal{G}$ on $Y_{i}$, and we obtain the following exact sequence
for all $i\geq 0$: $0\to C^{i}\hat{\otimes}_{\mathcal{A}}L\to
C^{i}\hat{\otimes}_{\mathcal{A}}F\to C^{i}\hat{\otimes}_{\mathcal{A}}N\to 0$.
This short exact sequence of complexes of modules now induces a long exact
sequence of modules, and this shows that the second row of the following
commutative diagram is exact at the middle:
$\begin{array}{ccccccccc}0&\to&H^{n}(C^{*})\otimes_{\mathcal{A}}L&\to&H^{n}(C^{*})\otimes_{\mathcal{A}}F&\to&H^{n}(C^{*})\otimes_{\mathcal{A}}N&\to&0\\ &&\downarrow&&\downarrow&&\downarrow&&\\ 0&\to&H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}L)&\to&H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}F)&\to&H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}N)&\to&0\end{array}$
The first row of this commutative diagram is exact at each of its terms:
since $\mathcal{N}$ is flat over $S$, the $\mathcal{A}$-module $N$ is
$\mathcal{A}$-flat, so we have for all $n\geq 0$ the equality
$\mathrm{Tor}^{\mathcal{A}}_{1}\leavevmode\nobreak\ (C^{n},N)=0$. Now, by
diagram-chasing, since $N$ satisfies $\mathcal{P_{A}}$, the arrow
$H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}F)\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}N)$ is surjective, and this shows that
the connecting morphism $H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}N)\to
H^{n+1}(C^{*}\hat{\otimes}_{\mathcal{A}}L)$ from the long exact sequence is
injective, so the arrow $H^{n+1}(C^{*}\hat{\otimes}_{\mathcal{A}}L)\to
H^{n+1}(C^{*}\hat{\otimes}_{\mathcal{A}}F)$ is injective, so the second row is
also exact on its left for $n\geq 1$, applying the above with $n$ instead
of $n+1$. Exactness on the left also holds for $n=0$, since we have a
commutative diagram of modules with exact rows:
$\begin{array}{ccccc}0&\to&C^{0}\hat{\otimes}_{\mathcal{A}}L&\to&C^{0}\hat{\otimes}_{\mathcal{A}}F\\ &&\downarrow&&\downarrow\\ 0&\to&C^{1}\hat{\otimes}_{\mathcal{A}}L&\to&C^{1}\hat{\otimes}_{\mathcal{A}}F\end{array}$
And by diagram-chasing, the canonical arrow
$H^{0}(C^{*}\hat{\otimes}_{\mathcal{A}}L)\to
H^{0}(C^{*}\hat{\otimes}_{\mathcal{A}}F)$ is also injective. Now, since $N$
satisfies $\mathcal{P}_{\mathcal{A}}$, and $L$ does as well by 2.16 (being
free over a quasi-smooth algebra over $\mathcal{A}$), two of the three
vertical morphisms of the diagram are isomorphisms, and by a diagram chase we
deduce that $H^{n}(C^{*})\otimes_{\mathcal{A}}F\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}F)$ is also an isomorphism, which is
what we aimed to show. ∎
We will now use the previous lemma repeatedly to show:
###### 2.18 Lemma.
Let $\mathcal{A}$ be an affinoid algebra, $d\geq 0$ an integer,
$T\to\mathcal{M(A)}$ a quasi-smooth space purely of relative dimension $d$,
and $\mathcal{F}$ a coherent sheaf on $T$ that is flat over $\mathcal{A}$.
Then $H^{0}(T,\mathcal{F})$ satisfies the property
$\mathcal{P}_{\mathcal{A}}$.
###### Proof.
We will show this by induction on the relative dimension $d\geq 0$.
Assume $d=0$. Then by 2.1, there exist a quasi-smooth space $T_{1}$ over
$\mathcal{A}$ of pure relative dimension zero, a finite morphism of
$\mathcal{A}$-analytic spaces $\pi_{1}:\mathrm{supp}\leavevmode\nobreak\
\mathcal{F}\to T_{1}$, and a coherent sheaf $\mathcal{L}_{1}$ on $T_{1}$
whose module of global sections $L_{1}:=H^{0}(T_{1},\mathcal{L}_{1})$ is free
over $\mathcal{A}$, together with an exact sequence $0\to L_{1}\to
H^{0}(T,\mathcal{F})\to 0$, i.e. an isomorphism $L_{1}\cong
H^{0}(T,\mathcal{F})$. From this, we deduce that $H^{0}(T,\mathcal{F})$ is
free over a quasi-smooth algebra over $\mathcal{A}$, so by remark 2.16, the
$\mathcal{A}$-module $H^{0}(T,\mathcal{F})$ satisfies the property
$\mathcal{P_{A}}$.
Assume now that the proposition holds for all $d^{\prime}\leq d$, and let
$T\to\mathcal{M(A)}$ be a quasi-smooth space of pure relative dimension $d+1$
with $\mathcal{F}$ a coherent sheaf on $T$. Then, using the same notation as
proposition 2.1, we have an affinoid quasi-smooth $\mathcal{A}$-space
$T_{1}=\mathcal{M(C_{1})}$ of pure relative dimension $n_{1}\leq d+1$, an
affinoid quasi-smooth $\mathcal{A}$-space $T_{2}=\mathcal{M(C_{2})}$ of pure
relative dimension $n_{2}\leq d$, coherent $\mathcal{C}_{i}$-modules $L_{i}$
and $P_{i}$ for $i\in\\{1,2\\}$ such that $L_{i}$ is free over
$\mathcal{C}_{i}$ and $P_{i}$ is (analytically) flat over $\mathcal{M(A)}$,
and finite morphisms $\pi_{1}:\mathrm{supp}\leavevmode\nobreak\
\mathcal{F}\to T_{1}$ and $\pi_{2}:\mathrm{supp}\leavevmode\nobreak\
P_{1}\to T_{2}$. Now, we have an exact sequence $0\to L_{2}\to
H^{0}(T_{1},P_{1})\to P_{2}\to 0$; by the induction hypothesis, $P_{2}$
satisfies $\mathcal{P_{A}}$, and $L_{2}$ is free over $\mathcal{C}_{2}$, so by
the previous lemma 2.17, $P_{1}$ satisfies $\mathcal{P_{A}}$. Applying lemma
2.17 again to the exact sequence of $\mathcal{C}_{1}$-modules $0\to L_{1}\to
H^{0}(T,\mathcal{F})\to P_{1}\to 0$, we get that $H^{0}(T,\mathcal{F})$
satisfies $\mathcal{P}_{\mathcal{A}}$. ∎
Now, theorem 2.2 is just an easy consequence of the previous lemmas.
###### Proof.
Let $p:\mathcal{M(B)}\to\mathcal{M(A)}$ be a flat morphism between affinoid
spaces. We want to show that the $\mathcal{A}$-module $\mathcal{B}$ satisfies
the property $\mathcal{P}_{\mathcal{A}}$. By lemma 2.11, the property is
$G$-local on $\mathcal{M(B)}$ and it is sufficient to show it on an affinoid
neighborhood of any point.
So let $y\in\mathcal{M(B)}$ and let $n$ be the relative dimension of $p$ at
$y$. Using the notation of proposition 2.1, there exist an affinoid
neighborhood $V=\mathcal{M}(\mathcal{B}_{V})$ of $y$ in $Y$, a quasi-smooth
$\mathcal{A}$-space $T_{1}=\mathcal{M(C_{1})}$ of pure relative dimension
$d\leq n$, finite $\mathcal{C}_{1}$-modules $P_{1}$ and $L_{1}$ such that
$L_{1}$ is free over $\mathcal{C}_{1}$, and a finite morphism $\pi_{1}:V\to
T_{1}$ such that there exists an exact sequence of $\mathcal{C}_{1}$-modules
$0\to L_{1}\to\mathcal{B}_{V}\to P_{1}\to 0$.
Now by lemma 2.18, $P_{1}$ satisfies $\mathcal{P_{A}}$, and $L_{1}$ is free
over a quasi-smooth $\mathcal{A}$-algebra, so by lemma 2.17, the
$\mathcal{A}$-module $\mathcal{B}_{V}$ satisfies $\mathcal{P_{A}}$. This shows
that the property holds for $\mathcal{B}$, and theorem 2.2 is now proved. ∎
## 3\. Proper base change
We now want to show a proper base change theorem for coherent modules. We
recall the two following propositions, which are stated in section 5 of
chapter 2 of [10] for the cohomology of proper schemes.
###### 3.1 Proposition.
Let $A$ be a noetherian ring and $C^{*}$ be a complex of $A$-modules such that
its cohomology groups $H^{i}(C^{*})$ are finitely generated $A$-modules and
$C^{p}\neq\\{0\\}$ if and only if $0\leq p\leq n$. Then there exist a complex
$K^{*}$ of finitely generated $A$-modules such that $K^{p}\neq\\{0\\}$ if and
only if $0\leq p\leq n$ and $K^{p}$ is free for $1\leq p\leq n$, and a
quasi-isomorphism of complexes of $A$-modules $\varphi:K^{*}\to C^{*}$. Moreover, if
the $C^{p}$ are $A$-flat, then $K^{0}$ can be taken to be $A$-flat.
###### 3.2 Proposition.
Let $A$ be a noetherian ring, and $C^{*}$, $K^{*}$ be any finite complexes of
flat $A$-modules and let $\varphi:K^{*}\to C^{*}$ be a quasi-isomorphism of
complexes of $A$-modules. Then for every $A$-algebra $B$, the maps
$H^{p}(K^{*}\otimes_{A}B)\to H^{p}(C^{*}\otimes_{A}B)$ are isomorphisms for
all $p\in\mathbb{Z}$ i.e. the natural morphism $\varphi\otimes_{A}B$ is a
quasi-isomorphism.
If we have a quasi-coherent sheaf $\mathcal{F}$ on a proper space $X$ over the
spectrum of a noetherian ring $A$, we can apply these two propositions to the
Čech complex of $\mathcal{F}$, and we obtain a complex $K^{*}$ of $A$-modules
that computes the cohomology of the space $X$ universally, i.e. such that for
every $A$-algebra $B$ and every integer $n$ we have the equality
$H^{n}(X_{B},\mathcal{F}_{B})=H^{n}(K^{*}\otimes_{A}B)$.
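As an illustration, here is a classical computation (not taken from [10]): for $X=\mathbb{P}^{1}_{A}$ with $\mathcal{F}=\mathcal{O}_{X}$ and the standard covering by the two affine charts, the Čech complex is $C^{0}=A[t]\oplus A[t^{-1}]\to C^{1}=A[t,t^{-1}]$, $(f,g)\mapsto f-g$, so $H^{0}=A$ and $H^{1}=0$. One may then take for $K^{*}$ the complex with $K^{0}=A$ and $K^{p}=0$ for $p\neq 0$, the quasi-isomorphism $K^{*}\to C^{*}$ being $a\mapsto(a,a)$ in degree zero; for every $A$-algebra $B$ one indeed recovers $H^{0}(X_{B},\mathcal{F}_{B})=B=H^{0}(K^{*}\otimes_{A}B)$ and $H^{1}(X_{B},\mathcal{F}_{B})=0$.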
###### 3.3 Definition.
Let $\mathcal{A}$ be an affinoid algebra, and $X$ a proper
$\mathcal{A}$-analytic space. Let $\mathcal{F}$ be a coherent sheaf on $X$
that is flat over $\mathcal{A}$, and let $X=\bigcup_{i\in I}X_{i}$ be a
$G$-covering of $X$ by a finite number of affinoid domains. Denote by $C^{*}$
the Čech complex associated to $\mathcal{F}$ relatively to this covering.
Given these data, we will say that a morphism of affinoid spaces
$p:\mathcal{M(B)}\to\mathcal{M(A)}$ satisfies the property
${\mathcal{R}_{\mathcal{A}}}$ if for every finite complex $K^{*}$ of finitely
generated flat $\mathcal{A}$-modules and every quasi-isomorphism
$\varphi:K^{*}\to C^{*}$ of complexes of $\mathcal{A}$-modules, $\varphi$
induces an isomorphism $H^{n}(K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$ for all $n$, i.e.
$\varphi\hat{\otimes}_{\mathcal{A}}\mathcal{B}$ is a quasi-isomorphism.
From now on, all the data involved in the previous definition will be fixed.
###### 3.4 Remark.
Let $p:\mathcal{M(B)}\to\mathcal{M(A)}$ be a finite morphism of $k$-affinoid
spaces. Then $p$ satisfies the property ${\mathcal{R}_{\mathcal{A}}}$. In
fact, the complex $C^{*}$ satisfies the hypotheses of proposition 3.2, so for
every finite complex $K^{*}$ of finitely generated flat $\mathcal{A}$-modules
and every quasi-isomorphism $\varphi:K^{*}\to C^{*}$ of complexes of
$\mathcal{A}$-modules, we have an isomorphism
$H^{n}(K^{*}{\otimes}_{\mathcal{A}}\mathcal{B})\to
H^{n}(C^{*}{\otimes}_{\mathcal{A}}\mathcal{B})$; since the $C^{i}$ are
noetherian Banach $\mathcal{A}$-algebras, the $K^{i}$ are finite
$\mathcal{A}$-modules and $p$ is finite, we have the equalities
$K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}=K^{*}{\otimes}_{\mathcal{A}}\mathcal{B}$
and
$C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}=C^{*}{\otimes}_{\mathcal{A}}\mathcal{B}$,
so $\varphi\hat{\otimes}_{\mathcal{A}}\mathcal{B}$ is a quasi-isomorphism.
###### 3.5 Remark.
Let $p:\mathcal{M(B)}\to\mathcal{M(A)}$ be a flat morphism of affinoid
spaces. Then $p$ satisfies ${\mathcal{R}_{\mathcal{A}}}$. Indeed, let
$K^{*}$ be a finite complex of finitely generated flat $\mathcal{A}$-modules
and $\varphi:K^{*}\to C^{*}$ a quasi-isomorphism of complexes of
$\mathcal{A}$-modules. Because each $K^{i}$ is finite and the
$\mathcal{A}$-algebra $\mathcal{B}$ is flat, using theorem 2.2, the arrow
$H^{n}(K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$ is identified with the
arrow $H^{n}(K^{*})\otimes_{\mathcal{A}}\mathcal{B}\to
H^{n}(C^{*})\otimes_{\mathcal{A}}\mathcal{B}$, and the latter is an
isomorphism by definition of the complex $K^{*}$.
###### 3.6 Lemma.
Let $L$ be a non-archimedean field extension of $k$ and $\mathcal{A}$ be a
$k$-affinoid algebra. Then the natural morphism of analytic spaces
$p:\mathcal{M}(\mathcal{A}_{L})\to\mathcal{M(A)}$ satisfies
$\mathcal{R}_{\mathcal{A}}$.
###### Proof.
Let $K^{*}$ be a finite complex of finitely generated flat
$\mathcal{A}$-modules and $\varphi:K^{*}\to C^{*}$ a quasi-isomorphism of
complexes of $\mathcal{A}$-modules. The differentials of the Čech complex
$C^{*}$ are admissible, and by theorem 1 of part 3 of [8], the functor
$\hat{\otimes}_{k}L$ from $k$-Banach modules to $L$-Banach modules transforms
admissible exact sequences into admissible exact sequences, so for all
$n\in\mathbb{Z}$, the morphism $H^{n}(K^{*}\hat{\otimes}_{k}L)\to
H^{n}(C^{*}\hat{\otimes}_{k}L)$ is identified with the morphism
$H^{n}(K^{*})\hat{\otimes}_{k}L\to H^{n}(C^{*})\hat{\otimes}_{k}L$, and the
latter is an isomorphism by hypothesis, so $p$ satisfies
$\mathcal{R}_{\mathcal{A}}$. ∎
###### 3.7 Remark.
Note that the previous lemma and the remark just before it did not use the
flatness of $K^{*}$.
###### 3.8 Lemma.
Let $q:\mathcal{M(D)}\to\mathcal{M(B)}$ and
$p:\mathcal{M(B)}\to\mathcal{M(A)}$ be morphisms of $k$-affinoid spaces.
1. (1)
Assume that $p$ satisfies ${\mathcal{R}_{\mathcal{A}}}$ (e.g. $p$ is flat)
and $q$ is finite. Then $p\circ q$ satisfies ${\mathcal{R}_{\mathcal{A}}}$.
2. (2)
Assume that there exists a $G$-covering $\mathcal{M(B)}=\bigcup_{j\in
J}\mathcal{M}(\mathcal{B}_{j})$ of $\mathcal{M(B)}$ by a finite number of
affinoid domains such that for all $j\in J$, the induced arrow
$\mathcal{M}(\mathcal{B}_{j})\to\mathcal{M(A)}$ satisfies
${\mathcal{R}_{\mathcal{A}}}$. Then $p$ satisfies
${\mathcal{R}_{\mathcal{A}}}$.
3. (3)
Assume that $p$ satisfies ${\mathcal{R}_{\mathcal{A}}}$ and $q$ satisfies
$\mathcal{R}_{\mathcal{B}}$. Then $p\circ q$ satisfies $\mathcal{R}_{\mathcal{A}}$.
###### Proof.
Let $K^{*}$ be a finite complex of finitely generated flat $A$-modules and
$\varphi$ a quasi-isomorphism $\varphi:K^{*}\to C^{*}$ of complexes of
$\mathcal{A}$-modules.
For the first point, we have a homomorphism
$\varphi_{\mathcal{B}}:K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}\to
C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}$, and since $p$ satisfies
$\mathcal{R}_{\mathcal{A}}$, this arrow induces an isomorphism
$H^{n}(K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$ for all $n\in\mathbb{Z}$.
Now we can apply 3.2 to the complexes
$K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}$ and
$C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}$ over $\mathcal{B}$ to obtain an
isomorphism
$H^{n}((K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\otimes_{\mathcal{B}}\mathcal{D})\to
H^{n}((C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\otimes_{\mathcal{B}}\mathcal{D})$,
and since $K^{i}$ is a finite $\mathcal{A}$-module for all $i\in\mathbb{Z}$,
$C^{i}\hat{\otimes}_{\mathcal{A}}\mathcal{B}$ is a noetherian Banach algebra
and $q$ is finite, we obtain an isomorphism
$H^{n}((K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\hat{\otimes}_{\mathcal{B}}\mathcal{D})\to
H^{n}((C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\hat{\otimes}_{\mathcal{B}}\mathcal{D})$,
and this shows that $p\circ q$ satisfies ${\mathcal{R}_{\mathcal{A}}}$.
For the second point, we have an isomorphism
$H^{n}(K^{*}\hat{\otimes}_{\mathcal{A}}{\bigoplus_{j\in J}\mathcal{B}_{j}})\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}{\bigoplus_{j\in J}\mathcal{B}_{j}})$.
Since cohomology and the completed tensor product commute with finite direct
sums and the affinoid domain inclusions
$\mathcal{M}(\mathcal{B}_{j})\to\mathcal{M(B)}$ are flat, we can use 2.2 to
get an isomorphism $H^{n}(K^{*})\hat{\otimes}_{\mathcal{A}}\bigoplus_{j\in
J}\mathcal{B}_{j}\to H^{n}(C^{*})\hat{\otimes}_{\mathcal{A}}\bigoplus_{j\in
J}\mathcal{B}_{j}$. By flatness of the morphism $\mathcal{M}(\bigoplus_{j\in
J}\mathcal{B}_{j})\to\mathcal{M(B)}$ and the descent proposition 3.11 of [4],
we deduce that the map $H^{n}(K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$ is an isomorphism, and $p$
satisfies ${\mathcal{R}_{\mathcal{A}}}$.
For the last point, since $p$ satisfies $\mathcal{R}_{\mathcal{A}}$, we have
an isomorphism $H^{n}(K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B})$, and this shows that the
morphism of complexes of $\mathcal{B}$-modules $\varphi_{\mathcal{B}}$ induces
an isomorphism on cohomology; since $q$ satisfies
$\mathcal{R}_{\mathcal{B}}$, we have an isomorphism
$H^{n}(K^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}\hat{\otimes}_{\mathcal{B}}\mathcal{D})\to
H^{n}(C^{*}\hat{\otimes}_{\mathcal{A}}\mathcal{B}\hat{\otimes}_{\mathcal{B}}\mathcal{D})$
so $p\circ q$ satisfies $\mathcal{R}_{\mathcal{A}}$. ∎
###### 3.1 Theorem.
Let $L$ be a non-archimedean field extension of $k$, and let $\mathcal{B}$ be
an $L$-affinoid algebra. Then for every $k$-affinoid algebra $\mathcal{A}$ and every
morphism $p:\mathcal{M(B)}\to\mathcal{M(A)}$ of analytic spaces, $p$ satisfies
the property ${\mathcal{R}_{\mathcal{A}}}$. In particular, for every proper
$\mathcal{A}$-space $X$ and every coherent sheaf $\mathcal{F}$ on it that is
flat over $\mathcal{M(A)}$, there exists a finite complex of finite and
projective $\mathcal{A}$-modules $K^{*}$ such that for every
$\mathcal{A}$-algebra $\mathcal{B}$ as above, we have an isomorphism
$H^{n}(K^{*}\otimes_{\mathcal{A}}\mathcal{B})\to
H^{n}(X_{\mathcal{B}},\mathcal{F}_{\mathcal{B}})$.
###### Proof.
The morphism $p$ can be written as the composition
$\mathcal{M(B)}\to\mathcal{M}(\mathcal{A}_{L})\to\mathcal{M(A)}$, and
$\mathcal{M}(\mathcal{A}_{L})\to\mathcal{M(A)}$ satisfies
$\mathcal{R}_{\mathcal{A}}$ by 3.6, so using 3.8 we can assume that $L=k$,
i.e. that $\mathcal{B}$ is $k$-affinoid.
Let $p:\mathcal{M(B)}\to\mathcal{M(A)}$ be a morphism of $k$-affinoid spaces,
and let $y\in\mathcal{M(B)}$. By corollary 4.7 of [7], there exists an affinoid
neighborhood $V_{y}$ of $y$ in $\mathcal{M(B)}$, a relatively smooth
$\mathcal{A}$-space $T_{y}$, an affinoid domain $W_{y}$ of $T_{y}$ and a
finite morphism $V_{y}\to W_{y}$ such that the restriction $p_{|V_{y}}$ can be
written as the composition $V_{y}\to W_{y}\to T_{y}\to\mathcal{M(A)}$.
We can extract a finite $G$-covering $\mathcal{M(B)}=\bigcup_{i\in I}V_{i}$ of
$\mathcal{M(B)}$ from the $G$-covering
$\mathcal{M(B)}=\bigcup_{y\in\mathcal{M(B)}}V_{y}$ and by part 2 of lemma 3.8,
it is sufficient to show that each restriction $p_{|V_{i}}\to\mathcal{M(A)}$
satisfies the property $\mathcal{R}_{\mathcal{A}}$. Since $V_{i}\to W_{i}$ is
finite, by remark 3.4, it satisfies $\mathcal{R}_{\mathcal{O}(W_{i})}$ and
since $W_{i}\to\mathcal{M(A)}$ is flat, by remark 3.5, it satisfies
$\mathcal{R}_{\mathcal{A}}$. By the third point of lemma 3.8, we have that
$V_{i}\to\mathcal{M(A)}$ satisfies $\mathcal{R}_{\mathcal{A}}$. ∎
###### 3.9 Remark.
Let $\mathcal{M(A)}$ be a $k$-affinoid space and $s\in\mathcal{M(A)}$. Then
we can identify $\mathcal{M(H}(s))$ with a rigid point of
$\mathcal{M(A}_{\mathcal{H}(s)})$, and write the inclusion
$\mathcal{M(H}(s))\to\mathcal{M(A)}$ as a closed immersion
$\mathcal{M(H}(s))\to\mathcal{M(A}_{\mathcal{H}(s)})$ followed by a base field
extension morphism $\mathcal{M(A}_{\mathcal{H}(s)})\to\mathcal{M(A)}$. These
two morphisms satisfy the property
$\mathcal{R}_{\mathcal{A}_{\mathcal{H}(s)}}$ (resp. the property
$\mathcal{R}_{\mathcal{A}}$), so by lemma 3.8, the composition
$\mathcal{M}(\mathcal{H}(s))\to\mathcal{M(A)}$ satisfies the property
$\mathcal{R}_{\mathcal{A}}$.
In particular, for any proper space $X$ over $\mathcal{M(A)}$, any flat
coherent sheaf $\mathcal{F}$ over $X$ and any $s\in\mathcal{M(A)}$, there
exists a finite complex $K^{*}$ of finite and projective $\mathcal{A}$-modules
such that we have an isomorphism
$H^{n}(K^{*}\otimes_{\mathcal{A}}\mathcal{H}(s))\to
H^{n}(X_{s},\mathcal{F}_{s})$. This remark leads us to the following corollary
that was stated without proof in corollary 3.3.11 of [1].
###### 3.10 Corollary.
Let $f:X\to S$ be a proper morphism of $k$-analytic spaces, and let
$\mathcal{F}$ be a coherent sheaf on $X$ which is flat over $S$. Then:
1. (1)
for all $p\geq 0$ and $n\geq 0$, the set $\\{s\in
S|\dim_{\mathcal{H}(s)}H^{p}(X_{s},\mathcal{F}_{s})\geq n\\}$ is a Zariski-
closed subset of $S$.
2. (2)
the Euler characteristic $\chi:S\to\mathbb{Z}$ defined by
(3.0.0.1)
$s\mapsto\chi(\mathcal{F}_{s}):=\sum_{p=0}^{\infty}(-1)^{p}\dim_{\mathcal{H}(s)}H^{p}(X_{s},\mathcal{F}_{s})$
is $G$-locally (and locally) constant on $S$.
###### Proof.
Both results are $G$-local on $S$, so we can assume that $S=\mathcal{M(A)}$ is
$k$-affinoid and $X$ is a proper $\mathcal{A}$-space. By the previous remark,
there exists a finite complex $K^{*}$ of finite and projective
$\mathcal{A}$-modules such that we have an isomorphism
$H^{n}(K^{*}\otimes_{\mathcal{A}}\mathcal{H}(s))\to
H^{n}(X_{s},\mathcal{F}_{s})$. For $p\in\mathbb{Z}$, denote by
$\delta^{p}:K^{p}\to K^{p+1}$ the $p$-th differential of the complex $K^{*}$.
Then for all $s\in S$ we have the equality:
(3.0.0.2)
$\begin{split}\dim_{\mathcal{H}(s)}H^{p}(X_{s},\mathcal{F}_{s})=\dim_{\mathcal{H}(s)}\mathrm{Ker}(\delta^{p}\hat{\otimes}_{\mathcal{A}}\mathcal{H}(s))-\dim_{\mathcal{H}(s)}\mathrm{Im}(\delta^{p-1}\hat{\otimes}_{\mathcal{A}}\mathcal{H}(s))\\\
=\dim_{\mathcal{H}(s)}(K^{p}\hat{\otimes}_{\mathcal{A}}\mathcal{H}(s))-\dim_{\mathcal{H}(s)}\mathrm{Im}(\delta^{p}\hat{\otimes}_{\mathcal{A}}\mathcal{H}(s))-\dim_{\mathcal{H}(s)}\mathrm{Im}(\delta^{p-1}\hat{\otimes}_{\mathcal{A}}\mathcal{H}(s))\end{split}$
Now
$s\mapsto\dim_{\mathcal{H}(s)}\mathrm{Im}(\delta^{p}\hat{\otimes}_{\mathcal{A}}\mathcal{H}(s))$
is a lower semicontinuous function for the Zariski topology on $S$ because the
locus where this dimension is less than a number $N$ is the Zariski-closed
subset defined by the vanishing of all the $N\times N$ minors of the matrix of
$\delta^{p}:K^{p}\to K^{p+1}$, so
$s\mapsto\dim_{\mathcal{H}(s)}H^{p}(X_{s},\mathcal{F}_{s})$ is upper
semicontinuous for the Zariski topology and this shows the first point.
For the second point, using 3.0.0.2, we have for all $s\in S$ the equality
$\chi(\mathcal{F}_{s})=\sum_{p=0}^{\infty}(-1)^{p}\dim_{\mathcal{H}(s)}(K^{p}\hat{\otimes}_{\mathcal{A}}\mathcal{H}(s))$
which is $G$-locally constant on $S$ since $K^{p}$ is flat over $\mathcal{A}$
using lemma 4.1.14 of [6]. ∎
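As a purely numerical illustration of the minor argument above (our own toy sketch, not part of the proof), one can watch the rank of a one-parameter family of $2\times 2$ differentials drop exactly on the closed locus where a minor vanishes, so that the parameter $\mapsto\dim\mathrm{Im}$ is lower semicontinuous:

```python
import numpy as np

# Toy family of differentials delta(t): K^p -> K^{p+1} (our own example).
# The 1x1 minor det([[t]]) vanishes exactly at t = 0, where the rank drops.
def delta(t):
    return np.array([[t, 0.0],
                     [0.0, 1.0]])

# rank(delta(t)) = dim Im(delta(t)): full rank 2 away from t = 0, rank 1 at t = 0
ranks = {t: int(np.linalg.matrix_rank(delta(t))) for t in [0.0, 0.5, 1.0]}
print(ranks)  # {0.0: 1, 0.5: 2, 1.0: 2}
```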
The rest of this section follows chapter 2, section 5 of Mumford's book [10].
The proofs are quite similar, and we will only give the arguments that differ
from scheme theory.
###### 3.11 Lemma.
Let $Y$ be a reduced analytic space, $\mathcal{F}$ a coherent sheaf on it such
that for all $y\in Y$, we have the equality
$\dim_{\mathcal{H}(y)}[\mathcal{F}\otimes_{\mathcal{O}_{Y}}\mathcal{H}(y)]=r$.
Then $\mathcal{F}$ is locally free of rank $r$ on $Y$.
###### Proof.
The proof is exactly the same as that of lemma 1 of section 5 of the book
[10], combined with 2.5.2 of [6], which is specific to the analytic case.
∎
The next lemma is the analogue of lemma 2 of section 5 of [10].
###### 3.12 Lemma.
Let $S=\mathcal{M(A)}$ be a reduced $k$-affinoid space and let
$\varphi:\mathcal{F}\to\mathcal{D}$ be a morphism of coherent locally free
$\mathcal{O}_{S}$-sheaves. If
$\dim_{\mathcal{H}(s)}[\textrm{Im}(\varphi\otimes\mathcal{H}(s))]$ is locally
constant, then there are splittings:
$\mathcal{F}\simeq\mathcal{F}_{1}\oplus\mathcal{F}_{2}$
$\mathcal{D}\simeq\mathcal{D}_{1}\oplus\mathcal{D}_{2}$
such that $\varphi_{|\mathcal{F}_{1}}=0$,
$\textrm{Im}(\varphi)\subset\mathcal{D}_{1}$, and
$\varphi_{|\mathcal{F}_{2}}:\mathcal{F}_{2}\to\mathcal{D}_{1}$ is an isomorphism.
###### Proof.
The proof is the same as in section 5 of [10] once we know that if
$\mathcal{F}$ is a coherent locally free $\mathcal{O}_{S}$-module, the
associated $\mathcal{A}$-module $M:=\mathcal{H}^{0}(S,\mathcal{F})$ is
projective.
So let $\mathcal{F}$ be a coherent locally free $\mathcal{O}_{S}$-module,
$S^{\textrm{alg}}:=\operatorname{Spec}\mathcal{A}$ and let $s\in S$ be a rigid
point. It is sufficient to show that if $x$ is the image of $s$ in
$S^{\textrm{alg}}$, then $M_{x}$ is free. We know that
$M\otimes_{\mathcal{A}}\mathcal{O}_{S,s}$ is a free
$\mathcal{O}_{S,s}$-module, so by faithful flatness of $\mathcal{O}_{S,s}$
over $\mathcal{O}_{S^{\textrm{alg}},x}$, we know that
$M\otimes_{\mathcal{A}}\mathcal{O}_{S^{\textrm{alg}},x}$ is projective by [11]
so it is free over $\mathcal{O}_{S^{\textrm{alg}},x}$, and $M$ is a projective
$\mathcal{A}$-module. ∎
From these two lemmas, we deduce the following theorem, whose proof is the same
as in [10]: since the complex $K^{*}$ and the cohomology groups are
finite modules, no completion is needed in the tensor products.
###### 3.2 Theorem.
Let $f:X\to S$ be a proper morphism of $k$-analytic spaces with $S$ connected
and reduced, and $\mathcal{F}$ a coherent sheaf on $X$ that is flat on $S$.
Then, for all $p\in\mathbb{N}$, there is an equivalence between:
1. (1)
the function $s\in S\mapsto\dim_{\mathcal{H}(s)}H^{p}(X_{s},\mathcal{F}_{s})$
is constant;
2. (2)
the sheaf $R^{p}f_{*}(\mathcal{F})$ is locally free on $S$, and for all $s\in
S$, the natural map
$R^{p}f_{*}(\mathcal{F})\otimes_{\mathcal{O}_{S}}\mathcal{H}(s)\to
H^{p}(X_{s},\mathcal{F}_{s})$ is an isomorphism.
If these equivalent conditions are fulfilled, then the natural map
$R^{p-1}f_{*}(\mathcal{F})\otimes_{\mathcal{O}_{S}}\mathcal{H}(s)\to
H^{p-1}(X_{s},\mathcal{F}_{s})$ is an isomorphism for all $s\in S$.
## References
* [1] Vladimir G Berkovich “Spectral theory and analytic geometry over non-Archimedean fields” American Mathematical Soc., 2012
* [2] Vladimir G. Berkovich “Étale cohomology for non-Archimedean analytic spaces” In _Publications Mathématiques de l’IHÉS_ 78 Institut des Hautes Études Scientifiques, 1993, pp. 5–161
* [3] Siegfried Bosch, Ulrich Güntzer and Reinhold Remmert “Non-Archimedean analysis, volume 261 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]” Springer-Verlag, Berlin, 1984
* [4] Mathieu Daylies “Descente fidèlement plate et algébrisation en géométrie de Berkovich” In _arXiv preprint arXiv:2103.10490_ , 2021
* [5] Antoine Ducros “Étale cohomology of schemes and analytic spaces” arXiv, 2011 DOI: 10.48550/ARXIV.1101.0683
* [6] Antoine Ducros “Families of Berkovich spaces” In _Astérisque_ , 2018, pp. vii+262
* [7] Antoine Ducros “Variation de la dimension relative en géométrie analytique p-adique” In _Compositio Mathematica_ 143.6 London Mathematical Society, 2007, pp. 1511–1532 DOI: 10.1112/S0010437X07003193
* [8] Laurent Gruson “Théorie de Fredholm $p$-adique” In _Bulletin de la Société Mathématique de France_ 94, 1966, pp. 67–95
* [9] Reinhardt Kiehl “Der Endlichkeitssatz für eigentliche Abbildungen in der nichtarchimedischen Funktionentheorie” In _Inventiones mathematicae_ 2.3 Springer, 1967, pp. 191–214
* [10] David Mumford, Chidambaran Padmanabhan Ramanujam and Jurij Ivanovič Manin “Abelian varieties” Oxford university press Oxford, 1974
* [11] Alexander Perry “Faithfully flat descent for projectivity of modules” arXiv, 2010 DOI: 10.48550/ARXIV.1011.0038
* [12] The Stacks project authors “The Stacks project”, https://stacks.math.columbia.edu, 2021
Entanglement Islands from Hilbert Space Reduction
Debarshi Basu1, Qiang Wen2, Shangjie Zhou2,3
1 Indian Institute of Technology, Kanpur 208016, India
2 Shing-Tung Yau Center and School of Physics, Southeast University, Nanjing
210096, China
3 School of Physics and Technology, Wuhan University, Wuhan, Hubei 430072,
China
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
## Abstract
In this paper we try to understand the island formula from a purely quantum
information perspective. We propose that the island phase is a property of the
quantum state and of the Hilbert space in which the state is embedded. More
explicitly, we show that when the state of a subset of a quantum system is
totally encoded in the state of another subset, the Hilbert space of the
system reduces, and the way we compute the reduced density matrix and
related entropy quantities changes essentially. Such reductions of
the Hilbert space result in a new island formula in quantum systems, which we
conjecture to coincide with the island formula in gravity recently proposed to
rescue unitarity in the process of black hole evaporation. In this context, we
give a simple resolution of the Mathur/AMPS paradox. Furthermore,
we propose a non-gravitational field theory configuration where entanglement
islands emerge, give a description of the entanglement structure of the
island phase, and propose how to realize the island phase in the lab.
###### Contents
1. 1 Introduction
2. 2 Reduction in two-spins system
1. 2.1 Self-encoded quantum systems
2. 2.2 The simplest case of two spins
3. 3 Islands and Hilbert space reduction
1. 3.1 Islands from more generic reductions
2. 3.2 Requirements for the Hilbert space reductions
3. 3.3 Islands beyond spatial regions
4. 3.4 The resolution of the Mathur/AMPS paradox in a toy model
4. 4 A non-gravitational field theory with entanglement islands
1. 4.1 Islands in Weyl transformed CFT2
2. 4.2 Weyl transformed CFT vs AdS/BCFT
3. 4.3 Islands in the lab
5. 5 Discussion
6. A The cutoff spheres and their common tangent
## 1 Introduction
The information paradox for evaporating black holes [1, 2, 3, 4, 5] is one of
the most important mysteries in our understanding of nature. It was expected
that finding a solution to the information paradox could lead us to a window
to understand the quantum theory of gravity. The AdS/CFT correspondence [6, 7,
8], which relates asymptotically AdS gravity theories with certain conformal
field theories at large central charge and strong coupling, provides us with a
framework to study the quantum aspects of gravity through its field theory
dual. This is called the holographic property of gravity and is believed to be
a general property for gravity theories. The holographic property of gravity
strongly indicates that the quantum theory of gravity should be manifestly
unitary. Nevertheless, a concrete understanding of how the information is
preserved during the black hole evaporation is not obvious at all. A major
breakthrough along this line is the study of quantum entanglement structure of
the field theories and their corresponding dual gravity counterparts. In the
context of AdS/CFT, the Ryu-Takayanagi (RT) formula [9, 10, 11] relates the
entanglement entropy of any subregion in the boundary CFT to certain co-
dimension two minimal (extremal) surfaces in the bulk which are homologous to
the corresponding boundary subregion. This formula was further refined to the
quantum extremal surface (QES) formula which included the quantum correction
from bulk fields [12, 13] (see [14, 15, 16, 17] for similar refinements of
the covariant holographic entanglement entropy proposal in [18]).
The authors of Ref. [19, 20, 21, 22] applied the QES formula to compute the
entanglement entropy of the radiation of an evaporating black hole after the
Page time. Remarkably, they found that the result deviates from Hawking’s
and is consistent with unitary evolution. These computations further
inspired the proposal of the so-called “island formula” [23, 24, 25, 26, 27],
which is claimed to be the formula to compute the entanglement entropy in
quantum systems with gravitation (or coupled to a gravitational universe). The
systems where the island formula applies usually consist of a system on a
fixed spacetime background (or non-gravitational system) and a system with
dynamical gravity. The two systems are glued together at some surface with
transparent boundary conditions, and hence the radiation can enter the non-
gravitational system freely. In this step the non-gravitational system plays
the role of a reservoir that absorbs the Hawking radiation.
When we calculate the entanglement entropy for certain subregion $\mathcal{R}$
in the non-gravitational region, for example the radiation that is stored in
the non-gravitational reservoir, the standard way is to calculate the von
Neumann entropy for the quantum fields in $\mathcal{R}$, as was done by
Hawking [1],
$\displaystyle\text{Hawking's calculation:}\qquad
S(\rho_{\mathcal{R}})=S(\tilde{\rho}_{\mathcal{R}})\,.$ (1)
Here $\rho_{\mathcal{R}}$ is the density matrix for any region $\mathcal{R}$
in the full quantum theory (including quantum gravity which we do not have the
exact description), while $\tilde{\rho}_{\mathcal{R}}$ is the density matrix
for the quantum fields on the region $\mathcal{R}$, obtained by tracing out all the degrees of
freedom in the complement $\bar{\mathcal{R}}$ in the semi-classical
description. In ordinary quantum systems without gravity, these two density
matrices are exactly the same by definition. Nevertheless, for systems
involving gravity this equality may be questioned,
$\displaystyle\rho_{\mathcal{R}}=\tilde{\rho}_{\mathcal{R}}\qquad??$ (2)
The island formula claims that, we should not only consider the quantum fields
in the non-gravitational region $\mathcal{R}$, but also certain disconnected
region in the gravitational region which is called an entanglement island,
whose boundary is called the quantum extremal surface $X$. For these
configurations the entanglement entropy for $\mathcal{R}$ should be calculated
by the island formula in the semi-classical description, which is given by [23,
20, 24, 25, 26]:
$\displaystyle\text{Island formula I:}\qquad
S(\rho_{\mathcal{R}})=\text{min}\left\\{\text{ext}_{X}\left[\frac{\text{Area}(X)}{4G}+S(\tilde{\rho}_{I\cup\mathcal{R}})\right]\right\\}\,,$
(3)
where $X$ is the co-dimension two surface that extremizes the
$S(\rho_{\mathcal{R}})$ calculated by (3), and $I$ denotes the region
bounded by $X$, called an “island” since it is usually
disconnected from the region $\mathcal{R}$. When $I=\emptyset$ extremizes the
$S({\rho_{\mathcal{R}}})$, the island formula (3) coincides with Hawking’s
calculation (1). Remarkably, there are configurations where (3) gives a
smaller entropy with $I\neq\emptyset$, and the configuration then enters the
island phase. See Fig.1 for a simple configuration of the island phase.
Figure 1: Schematics of the quantum extremal surface $X$ for a subregion
$\mathcal{R}$ in a quantum field theory coupled to semiclassical gravity. The
region $I$ in the gravitational region bounded by $X$ is the island
corresponding to $\mathcal{R}$.
The doubly holographic set-up discussed in [23] provides a special framework
to explain the origin of the island formula. In this scenario, the quantum
field theory describing the Hawking radiation is assumed to be holographic.
The corresponding bulk dual gravitational theory (Fig.2(a)) has a lower
dimensional effective description (Fig.2(b)) in terms of the radiation bath
coupled to semi-classical gravity where the island prescription applies.
Furthermore, when the AdS2 gravity part is again holographically dual to a
(0+1)-dimensional quantum dot, we arrive at the third picture of the same
configuration, which is called the fundamental description (Fig.2(c)). In the
doubly holographic framework, the entanglement entropy of a subsystem in the
radiation bath is computed through the usual (H)RT formula [9, 18] which is
equivalent to the island prescription in the lower dimensional effective
description (Fig.2). The double holographic setup naturally encapsulates the
idea of the island in the black hole interior being encoded in the
entanglement wedge of the radiation. Moreover, the island formula has been
derived via gravitational path integrals where wormholes are allowed to exist
as the new saddle when calculating the partition function in the replica
manifold [26, 27]. For a subset of relevant works that may be related to this
paper, see [28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42]. For
more references, see [43, 44] and the references therein for a detailed review
on this topic.
(a) Double holography description: AdS3 spacetime truncated by an end-of-the-
world Planck brane
(b) Effective theory description: CFT2 on a half line coupled to semi-
classical gravity on the AdS2 brane.
(c) Fundamental description: CFT2 on a half line coupled to a quantum dot.
Figure 2: Three different descriptions for the same configuration.
In both of the above configurations gravitation plays a crucial role to
explain the emergence of islands. So far, since all the configurations where
islands are necessary to rescue unitarity involve gravitation, it is tempting
to believe that gravitation is responsible for the appearance of the islands.
However, the possibility that entanglement islands can emerge in pure quantum
systems is rarely studied. Although the semi-classical calculation of
$S({\rho_{\mathcal{R}}})$ reproduces the Page curve and is hence consistent with
unitarity, the exact density matrix $\rho_{\mathcal{R}}$ in quantum gravity is
not clear to us. Furthermore, the island formula in the semi-classical
effective theory is quite surprising and counter-intuitive to our previous
understanding of fundamental quantum information. Is it possible to understand
the island formula from a purely quantum information point of view? And how do
we incorporate the island formula and quantum information in the semi-classical
description? These questions are fundamental and crucial for a
deeper understanding of entanglement islands. Furthermore, the answers to
these questions may lead us to a new research field of quantum information,
where we may ask: can entanglement islands be created in the lab, and how can
we use them?
In this paper we propose a mechanism for the emergence of the entanglement
islands in quantum systems without gravity, and argue that the island phase is
a fundamental property of quantum information, while those island
configurations involving gravitation are just special cases. We argue that,
for a generic quantum system when the Hilbert space of the total system is
properly reduced following certain constraints while keeping the Hilbert space
of the region $\mathcal{R}$ fixed, the island for $\mathcal{R}$ could emerge
in a natural way. The constraints that induce islands for the system could be
described as follows:
* •
for all the states in the Hilbert space of a system, there exists a subset $I$
of the system whose state can be totally determined or reconstructed from the
state of another non-overlapping subregion $\mathcal{R}$, which may be
disconnected from $I$.
We call systems that satisfy the above constraints the self-encoded systems.
This type of constraints is inspired by the island formula in the double
holography model, which indicates that the degrees of freedom in the island
could be included as part of the entanglement wedge of $\mathcal{R}$.
According to the “bulk reconstruction” program [45, 46, 14, 47, 48, 49, 50],
the bulk degrees of freedom inside the entanglement wedge of a boundary region
$A$ can be reconstructed from the operators inside $A$ in a quantum error
correction way [48]. Here we extend this indication in quantum gravity to
ordinary quantum systems and show that, the entanglement islands can also
emerge without gravitation. The reconstruction of $I$ from $\mathcal{R}$
should follow certain codes, which are applied to all the states in the
reduced Hilbert space. This encoding reduces the dimension of the Hilbert
space, which essentially changes the way we calculate the reduced density
matrix and entanglement entropy. See [51, 52, 53, 54] for examples which, in
some sense, also attempted to study entanglement islands in quantum
information without gravitation.
In section 2, we begin with the case of two spins with one of the spins
determined by the other, thus the Hilbert space reduces. Applying the replica
trick [55, 56] to the system, we explicitly show that the spin which is
determined by the other can be naturally understood as an island. Following
this, in section 3, we consider more generic theories with more generic
constraints, and show how the islands emerge. Eventually we will arrive at a
new Island formula to calculate entanglement entropy in such systems, which is
quite similar to (3). Based on the new island formula we also provide a simple
resolution to the Mathur/AMPS (Almheiri-Marolf-Polchinski-Sully)
paradox [4, 5] in the three-spin self-encoded system (though this puzzle is
usually referred to as the AMPS paradox after the names of the authors of [5],
it was originally proposed by Mathur in [4]; in this paper we call it the
Mathur/AMPS puzzle). In section 4, we construct a non-gravitational field
theory configuration with islands by applying certain Weyl transformation to a
2-dimensional CFT. Furthermore, we give a proposal to create entanglement
islands in many-body systems that could be synthesized in the lab. In the last
section we summarize our main results, discuss the important implications or
lessons we can learn from them, and their relation to other related works.
## 2 Reduction in two-spins system
### 2.1 Self-encoded quantum systems
Let us first give a general description on the self-encoded systems, where a
subset $I$ can be encoded in or reconstructed from another region
$\mathcal{R}$ in the system. We denote the complement of $\mathcal{R}\cup I$
by $B$. The notation is chosen to match the black hole configurations,
where $\mathcal{R}$ matches the black hole radiation in the non-gravitational
reservoir, $I$ matches the island in black hole interior and $B$ matches the
black hole degrees of freedom. Nevertheless, we stress that our discussion
goes beyond the black hole configurations. For brevity we only consider static
systems in two-dimensional spacetime with time reflection symmetry.
Firstly, let us review the computation of the reduced density matrix of
$\mathcal{R}$ when the total system $I\cup B\cup\mathcal{R}$ is in a pure state
$\rho=\ket{\Psi}\bra{\Psi}$, where
$\displaystyle\ket{\Psi}=\sum_{i,j,k}C_{ijk}\ket{i}_{I}\ket{j}_{B}\ket{k}_{\mathcal{R}},\qquad\sum_{i,j,k}C_{ijk}C_{ijk}^{*}=1\,.$
(4)
Here $\\{\ket{i}\\},\\{\ket{j}\\},\\{\ket{k}\\}$ are the orthonormal bases of
the Hilbert spaces $\mathcal{H}_{I}$, $\mathcal{H}_{B}$ and $\mathcal{H}_{R}$.
In ordinary quantum systems, the degrees of freedom in different subsystems
are independent, and the Hilbert space of the total system is assumed to be
factorized,
$\displaystyle\mathcal{H}=\mathcal{H}_{I}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{R}\,.$
(5)
The reduced density matrix of the subsystem $\mathcal{R}$ is then given by
tracing out the degrees of freedom of the complement $I\cup B$ while setting
boundary conditions for $\mathcal{R}$ with $\bra{k_{\mathcal{R}}}$ and
$\ket{k^{\prime}_{\mathcal{R}}}$,
$\displaystyle\rho_{\mathcal{R}}{}_{kk^{\prime}}$
$\displaystyle=\sum_{i,j}\bra{k}_{\mathcal{R}}\bra{i}_{I}\bra{j}_{B}\rho\ket{j}_{B}\ket{i}_{I}\ket{k^{\prime}}_{\mathcal{R}}=\sum_{i,j}\rho_{(ijk)(ijk^{\prime})}$
(6) $\displaystyle=\sum_{i,j}C_{i,j,k}C_{i,j,k^{\prime}}^{*}\,.$ (7)
As we can see, any matrix element of the reduced density matrix
$\rho_{\mathcal{R}}{}_{kk^{\prime}}$ is a summation of a certain class of matrix
elements of the density matrix $\rho$ of the total system, which is computed
within the Hilbert space $\mathcal{H}$. This implies a summation of all
possibilities outside $\mathcal{R}$ for a given boundary condition on
$\mathcal{R}$. For a local observer who can only measure the observables
inside $\mathcal{R}$, the state of $\mathcal{R}$ is exactly given by the
reduced density matrix $\rho_{\mathcal{R}}$. The entanglement entropy of
$\mathcal{R}$ is then calculated by
$\displaystyle S_{\mathcal{R}}\equiv
S(\rho_{\mathcal{R}})=-\text{Tr}\left(\rho_{\mathcal{R}}\log\rho_{\mathcal{R}}\right)\,.$
(8)
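For a concrete toy illustration of (6)-(8), one can compute $\rho_{\mathcal{R}}$ and its von Neumann entropy for a random pure state $C_{ijk}$; the factor dimensions and the random state below are our own choices, not taken from the paper:

```python
import numpy as np

# Random pure state C_{ijk} on H_I x H_B x H_R, each factor of dimension 2
rng = np.random.default_rng(0)
C = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
C /= np.linalg.norm(C)                     # enforce sum_{ijk} |C_{ijk}|^2 = 1

# Eq. (7): rho_R[k, k'] = sum_{i,j} C_{ijk} * conj(C_{ijk'})
rho_R = np.einsum('ijk,ijl->kl', C, C.conj())

# Eq. (8): S = -Tr(rho_R log rho_R), via the eigenvalues of rho_R
lam = np.linalg.eigvalsh(rho_R)
lam = lam[lam > 1e-12]                     # drop numerically zero eigenvalues
S = -np.sum(lam * np.log(lam))
print(S)                                   # between 0 and log 2 for a 2x2 rho_R
```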
In quantum field theories, we use the path integral representation to compute
the reduced density matrix [55, 56]. More explicitly, for scenarios with time
reflection symmetry, $\rho_{\mathcal{R}ij}$ for $\mathcal{R}$ can be computed
by cutting $\mathcal{R}$ open and setting different boundary conditions on the
upper and lower edges, see Fig.3. Then $\rho_{\mathcal{R}}^{n}$ is calculated
by the replica trick via considering $n$ copies of the manifold and gluing
them cyclically along the cuts present at $\mathcal{R}$. Upon taking the limit
$n\to 1$ we get the entanglement entropy,
$\displaystyle S(\rho_{\mathcal{R}})=S(\tilde{\rho}_{\mathcal{R}})=-\lim_{n\to
1}\partial_{n}\log\text{tr}(\rho_{\mathcal{R}}^{n}).$ (9)
Figure 3: $\rho_{\mathcal{R}ij}$ in ordinary quantum systems
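As a numerical sanity check of (9) (ours, with an arbitrary sample spectrum): since $\text{tr}(\rho_{\mathcal{R}}^{n})=\sum_{i}\lambda_{i}^{n}$, the $n$-derivative of $\log\text{tr}(\rho_{\mathcal{R}}^{n})$ at $n=1$ reproduces the von Neumann entropy $-\sum_{i}\lambda_{i}\log\lambda_{i}$:

```python
import numpy as np

# Hypothetical eigenvalues of some reduced density matrix rho_R (trace one)
lam = np.array([0.7, 0.2, 0.1])

def log_tr(n):
    # log tr(rho_R^n) = log sum_i lambda_i^n
    return np.log(np.sum(lam ** n))

# Replica formula (9) via a central finite difference at n = 1
eps = 1e-6
S_replica = -(log_tr(1 + eps) - log_tr(1 - eps)) / (2 * eps)

# Direct von Neumann entropy
S_vn = -np.sum(lam * np.log(lam))
print(S_replica, S_vn)                     # the two agree
```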
The above paragraph reviewed the standard way to compute the reduced density
matrix and entanglement entropy in an ordinary quantum system where the
Hilbert space factorizes following (5). Now we consider the self-encoded
systems where the factorization no longer holds. We consider again a pure
state of the system $I\cup B\cup\mathcal{R}$, but with an extra requirement
that, for all the states in the Hilbert space, the state of the region $I$ is
encoded in the state of $\mathcal{R}$. More explicitly, for all the states in
the Hilbert space, the states in $\mathcal{R}$ and $I$ should satisfy certain
mapping representing the encoding,
$\displaystyle\ket{i}_{\mathcal{R}}\Rightarrow\ket{f(i)}_{I}\,,$ (10)
with two further requirements,
* •
1) all the states in $\mathcal{H}_{\mathcal{R}}$ should be mapped to a unique
state in $\mathcal{H}_{I}$,
* •
2) all the states in $\mathcal{H}_{I}$ should have at least one image in
$\mathcal{H}_{\mathcal{R}}$.
One of the direct and crucial consequences of the constraint (10) is that, the
dimension of the Hilbert space $\mathcal{H}$ reduces and the degrees of
freedom in different subregions are no longer independent of each other,
$\displaystyle\mathcal{H}=\mathcal{H}_{B}\otimes\mathcal{H}_{R}\subset\mathcal{H}_{I}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{R}\,.$
(11)
In this case, the state of $I$ is determined by the state of $\mathcal{R}$ and
hence $I$ does not add any independent degrees of freedom to the total system.
The factorization (5) no longer holds. Later we will see that, this kind of
constraint will essentially change the way we compute the reduced density
matrix.
### 2.2 The simplest case of two spins
Let us consider the simplest configuration with two spins, where we can
realize our previous statements. We denote one of the spins as $I$ and the
other as $\mathcal{R}$, and denote the spin-up (spin-down) state as
$\ket{0}$ ($\ket{1}$). At the beginning, we assume the system is in the pure
state
$\displaystyle\rho=\ket{\Psi}\bra{\Psi}\,,\qquad\ket{\Psi}=\frac{\sqrt{2}}{2}\left(\ket{0_{I}0_{\mathcal{R}}}+\ket{1_{I}1_{\mathcal{R}}}\right)\,.$
(12)
Firstly, let us consider the familiar scenario where the spin $I$ is
independent of $\mathcal{R}$. The Hilbert space
$\mathcal{H}_{I\cup\mathcal{R}}$ is four-dimensional with the following
orthonormal basis
$\displaystyle\mathcal{H}_{I\cup\mathcal{R}}=\left\\{\ket{0_{I}0_{\mathcal{R}}},\ket{0_{I}1_{\mathcal{R}}},\ket{1_{I}0_{\mathcal{R}}},\ket{1_{I}1_{\mathcal{R}}}\right\\}\,.$
(13)
The reduced density matrix can be calculated by, for example
$\displaystyle\rho_{\mathcal{R}}{}_{00}=$
$\displaystyle\bra{0_{\mathcal{R}}0_{I}}\rho\ket{0_{I}0_{\mathcal{R}}}+\bra{0_{\mathcal{R}}1_{I}}\rho\ket{1_{I}0_{\mathcal{R}}}=\frac{1}{2}\,,$
(14) $\displaystyle\rho_{\mathcal{R}}{}_{01}=$
$\displaystyle\bra{0_{\mathcal{R}}0_{I}}\rho\ket{0_{I}1_{\mathcal{R}}}+\bra{0_{\mathcal{R}}1_{I}}\rho\ket{1_{I}1_{\mathcal{R}}}=0\,,$
(15)
and hence we find
$\displaystyle\rho_{\mathcal{R}}=\begin{pmatrix}\frac{1}{2}&0\\\
0&\frac{1}{2}\end{pmatrix}\,.$ (16)
The von Neumann entropy for $\rho_{\mathcal{R}}$ is then given by
$\displaystyle
S(\rho_{\mathcal{R}})=-\text{Tr}\left(\rho_{\mathcal{R}}\log\rho_{\mathcal{R}}\right)=\log
2\,,$ (17)
which indicates that the two spins are maximally entangled with each other.
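The computation (14)-(17) can be checked directly; the following sketch traces out the spin $I$ from the Bell state (12) numerically:

```python
import numpy as np

# psi[i, k] = amplitude of |i_I k_R> in the state (12)
psi = np.zeros((2, 2))
psi[0, 0] = psi[1, 1] = 1 / np.sqrt(2)

# Trace out the spin I: rho_R[k, k'] = sum_i psi[i, k] * psi[i, k']
rho_R = np.einsum('ik,il->kl', psi, psi)
print(rho_R)                               # ~ [[0.5, 0], [0, 0.5]], as in (16)

# Von Neumann entropy (17): the spins are maximally entangled
lam = np.linalg.eigvalsh(rho_R)
S = -np.sum(lam * np.log(lam))
print(np.isclose(S, np.log(2)))            # True
```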
Then, we consider the new configuration with constraints on the Hilbert
space such that the state of the spin $I$ is somehow totally determined by
$\mathcal{R}$. For example, we impose that the spin of $I$ must be the same as
$\mathcal{R}$, i.e.
$\displaystyle
\text{Constraint:}\qquad\ket{0_{\mathcal{R}}}\Rightarrow\ket{0_{I}}\,,\qquad\ket{1_{\mathcal{R}}}\Rightarrow\ket{1_{I}}\,.$
(18)
Again, we consider the system to be in the state (12), which is consistent
with the constraint. The dimension of the Hilbert space then reduces to
$2$, with the basis given by
$\displaystyle
Reduced:\qquad\mathcal{H}_{I\cup\mathcal{R}}=\left\\{\ket{0_{I}0_{\mathcal{R}}},\ket{1_{I}1_{\mathcal{R}}}\right\\}\,.$
(19)
Although the state we consider is again (12), it is embedded in a
different Hilbert space: it is a vector in the Hilbert space (19) rather than
in the four-dimensional one given in (13). Note that the two states
$\ket{0_{I}1_{\mathcal{R}}},\ket{1_{I}0_{\mathcal{R}}}$ are no longer basis states of
$\mathcal{H}_{I\cup\mathcal{R}}$, and the density matrix of the total system
becomes $2\times 2$ dimensional. When computing the reduced density matrix,
the elements of $\rho$ appearing on the right-hand side of (15) are not
well-defined.
Then how do we trace out the degrees of freedom for $I$ in the reduced Hilbert
space? It turns out that, due to the constraint, there is no room to perform
the trace operation. More explicitly, when we set boundary conditions for
$\mathcal{R}$, we fix the state of $\mathcal{R}$. Since the state of
$I$ is totally determined by $\mathcal{R}$, we simultaneously set boundary
conditions on $I$. The reduced density matrix $\rho_{\mathcal{R}}$ is then
calculated by, for example,
$\displaystyle\rho_{\mathcal{R}}{}_{00}=\bra{0_{\mathcal{R}}0_{I}}\rho\ket{0_{I}0_{\mathcal{R}}}=\frac{1}{2}\,,$
(20)
$\displaystyle\rho_{\mathcal{R}}{}_{01}=\bra{0_{I}0_{\mathcal{R}}}\rho\ket{1_{I}1_{\mathcal{R}}}=\frac{1}{2}\,,$
(21)
and eventually we get (see Fig.4)
$\displaystyle\rho_{\mathcal{R}}=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\\\
\frac{1}{2}&\frac{1}{2}\end{pmatrix}=\rho_{I\cup\mathcal{R}}\,.$ (22)
One can further check that the von Neumann entropy for $\rho_{\mathcal{R}}$ is
zero and hence $\rho_{\mathcal{R}}$ is a pure state. This is expected, as we
have mentioned that the additional spin $I$ does not add any independent
degrees of freedom to the system.
Figure 4: Illustration of the density matrix $\rho$ for the two-spin system
before (LHS) and after (RHS) the reduction of the Hilbert space.
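For contrast, the computation (20)–(22) in the reduced Hilbert space can also be checked in a few lines. This is a numpy sketch in which the two-component vector represents the state (12) in the basis (19):

```python
import numpy as np

# In the reduced Hilbert space (19) the basis is {|0_I 0_R>, |1_I 1_R>},
# so the state (12) is a 2-component vector. Since the I label is fully
# determined by R, there is nothing left to trace out, and the reduced
# density matrix for R equals rho itself, Eq. (22).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho_R = np.outer(psi, psi)
print(rho_R)  # [[0.5, 0.5], [0.5, 0.5]]

# The entropy vanishes: rho_R is a pure state, as claimed in the text.
evals = np.linalg.eigvalsh(rho_R)
S = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(abs(S) < 1e-12)  # True
```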
One may be confused by the way we compute $\rho_{\mathcal{R}}$ and ask
why we have not traced out the degrees of freedom of $I$ in (21) as we did in
(15). Firstly, the Hilbert space (19) is reduced such that the terms in (15)
are no longer matrix elements of the density matrix $\rho$. If we insist on
computing following (15), then the state $\rho$ is again embedded in the four-
dimensional Hilbert space, which is incorrect in this case. Secondly, if we
perform the tracing, the coding relation (18) will be broken. Though the two-
spin system is extremely simple, we learn the following important lesson from
it.
* •
The reduced density matrix and the related entropy quantities depend not only
on the state in which the system is settled, but also on the Hilbert space
in which the state is embedded.
It is worth mentioning that this idea can be used to clarify the ambiguity of
the entanglement entropy in gauge theories (see [57, 58] and especially
[59]; at the final stage of this paper, Professor Ling-yan Hung pointed out
to us that a similar discussion of the two-spin system is already given in
[59], from the perspective of what can be measured in an experiment). In
gauge theories, where the Hilbert space is usually redundantly labelled, the
ambiguity of the entanglement entropy can naturally be understood as arising
from the different choices of the Hilbert space in which the state is
embedded.
Before we move on to reductions in more general systems, we give some
other reductions of the Hilbert space of the two-spin system, together with
the corresponding constraints. Here we reduce the Hilbert space using another
set of basis states,
$\displaystyle\ket{\Psi}_{1}=\frac{\sqrt{2}}{2}\left(\ket{0_{I}0_{\mathcal{R}}}+\ket{1_{I}1_{\mathcal{R}}}\right)\,,$
(23)
$\displaystyle\ket{\Psi}_{2}=\frac{\sqrt{2}}{2}\left(\ket{0_{I}0_{\mathcal{R}}}-\ket{1_{I}1_{\mathcal{R}}}\right)\,,$
(24)
$\displaystyle\ket{\Psi}_{3}=\frac{\sqrt{2}}{2}\left(\ket{0_{I}1_{\mathcal{R}}}+\ket{1_{I}0_{\mathcal{R}}}\right)\,,$
(25)
$\displaystyle\ket{\Psi}_{4}=\frac{\sqrt{2}}{2}\left(\ket{0_{I}1_{\mathcal{R}}}-\ket{1_{I}0_{\mathcal{R}}}\right)\,.$
(26)
The sub-space $\\{\ket{\Psi}_{1},\ket{\Psi}_{2}\\}$ is the same as
$\\{\ket{0_{I}0_{\mathcal{R}}},\ket{1_{I}1_{\mathcal{R}}}\\}$, which can be
arrived at by requiring the two spins to be the same. Similarly, the sub-space
$\\{\ket{\Psi}_{3},\ket{\Psi}_{4}\\}$ can be arrived at by requiring the two
spins to be opposite. In the sub-space
$\\{\ket{\Psi}_{1},\ket{\Psi}_{3}\\}$, the state of $\mathcal{R}$ is fixed to
be $\left(\ket{0_{\mathcal{R}}}+\ket{1_{\mathcal{R}}}\right)/\sqrt{2}$, while
$I$ can be either the same as or opposite to $\mathcal{R}$. In the sub-space
$\\{\ket{\Psi}_{1},\ket{\Psi}_{4}\\}$, $I$ is required to be the same as
$\mathcal{R}$ when $\mathcal{R}$ is in the state
$\left(\ket{0_{\mathcal{R}}}+\ket{1_{\mathcal{R}}}\right)/\sqrt{2}$, and
opposite to $\mathcal{R}$ when $\mathcal{R}$ is in the state
$\left(\ket{0_{\mathcal{R}}}-\ket{1_{\mathcal{R}}}\right)/\sqrt{2}$. All in
all, the reductions of the Hilbert space to two dimensions can be understood
as the self-encoding of one spin, independent of the basis we choose.
## 3 Islands and Hilbert space reduction
### 3.1 Islands from more generic reductions
Now we generalize the above picture to a generic system with a subregion $I$
encoded in a disconnected region $\mathcal{R}$. Unlike the two-spin system, in
general the total system also includes a region $B$ where the degrees of
freedom are independent. Since $I$ is totally determined by $\mathcal{R}$, it
does not contribute any independent degrees of freedom. In other words, the
dimension of the Hilbert space $\mathcal{H}$ of the total system is the same
as $\mathcal{H}_{B\cup\mathcal{R}}$,
$\displaystyle\text{dim}\mathcal{H}=\text{dim}\mathcal{H}_{B\cup\mathcal{R}}\,.$
(27)
To compute the reduced density matrix for $\mathcal{R}$, again, in the path
integral description we cut $\mathcal{R}$ open and set boundary conditions on
the upper and lower edges at $\mathcal{R}$. Since the state of $I$ is totally
determined by $\mathcal{R}$, when we set boundary conditions for
$\mathcal{R}$ we simultaneously set boundary conditions at $I$. More
explicitly, we should simultaneously cut $I$ open and impose certain boundary
conditions on the upper and lower edges at $I$ following the codes (10). See
Fig.5 for an illustration of the reduced density matrix
$\rho_{\mathcal{R}}{}_{ij}$.
Then we calculate the entanglement entropy via the replica trick, which glues
$n$ copies of the density matrix cyclically. Because the constraint forces
boundary conditions on $I$ whenever we set boundary conditions on
$\mathcal{R}$, cyclically gluing different copies of the system at
$\mathcal{R}$ implies, through the codes (10), that we simultaneously glue
different copies at $I$. In other words, the cyclic gluing performed on
$\mathcal{R}$ induces a cyclic gluing on $I$. This results in an additional
twist operator inserted at $X$, which is the boundary of $I$. In this
scenario the region $I$ is nothing but the so-called entanglement “island” in
the literature. As in the two-spin system, if we insist on tracing out the
degrees of freedom on $I$, then the calculation will involve states that are
not in the reduced Hilbert space $\mathcal{H}$, and the encoding relation
(10) will break down. In other words, the notations
$\tilde{\rho}_{\mathcal{R}}$ and $S(\tilde{\rho}_{\mathcal{R}})$ are
ill-defined.
Figure 5: $\rho_{\mathcal{R}ij}$, $\rho_{\mathcal{R}ji}$ and
$\text{Tr}\rho_{\mathcal{R}}^{2}$ in self-encoded quantum systems with $I$
encoded in $\mathcal{R}$.
Let us denote
$\displaystyle\tilde{S}_{\mathcal{R}}\equiv S(\tilde{\rho}_{\mathcal{R}})\,,$
(28)
as the von Neumann entropy calculated by cyclically gluing only the region
$\mathcal{R}$ in the replica trick. In ordinary systems, where the Hilbert
space is not reduced, we have the trivial relation
$\displaystyle S_{\mathcal{R}}=\tilde{S}_{\mathcal{R}}.$ (29)
Nevertheless, in the self-encoded configurations we have considered, this
relation no longer holds. Based on the above discussions, we arrive at the
following crucial relation for self-encoded systems
$\displaystyle S_{\mathcal{R}}=\tilde{S}_{\mathcal{R}\cup I}\,.$ (30)
This looks quite similar to the island formula (3) proposed in gravitational
systems, with one important difference: here we do not have the area term.
Note that in the above discussion, the system is a quantum system that does
not involve gravitation. If $I$ lies in a gravitational system while
$\mathcal{R}$ lies in a non-gravitational region, then according to [11,
26, 27] we will receive an additional contribution proportional to the area
of the fixed points of the replica symmetry in gravity, which is just the QES
$X$ where the additional twist operator is inserted. After gravitation is
included, we arrive at the following formula
$\displaystyle Island~{}formula~{}II:\quad
S_{\mathcal{R}}=\tilde{S}_{\mathcal{R}\cup
I}+\frac{Area(X)}{4G}\,,\quad\ket{i}_{\mathcal{R}}\Rightarrow\ket{f(i)}_{I}\,,$
(31)
which looks the same as the quantity within the parentheses in the “island”
formula (3).
Nevertheless, there are important differences between the two formulas (3) and
(31). In (3), the island region $I$ is determined by extremizing the entropy
among all the choices of $X$, and we enter the island phase when the island
formula (3) gives a smaller entanglement entropy than
$\tilde{S}_{\mathcal{R}}$. The existence of gravitation in the system is
essential for the derivation of the island formula (3). On the other hand, in
(31) the island $I$ appears naturally from the reduction of the Hilbert space
due to the mapping (10), and gravitation is not necessary. When the coding
relation is determined for the system, the island for $\mathcal{R}$ should be
the maximal region that can be reconstructed from $\mathcal{R}$.
Despite the important differences, it is very tempting to conjecture that the
two island formulas (3) and (31) describe the same configuration,
$\displaystyle\text{Conjecture}:\quad Island~{}formula~{}I\equiv
Island~{}formula~{}II\,.$ (32)
If this is true, we will arrive at the following strong statements:
* •
On the one hand, we can use (3) to judge whether a system is self-encoded or
not. For a given $\mathcal{R}$, if there is an island phase transition, then
the system is self-encoded, and the region $I$ satisfying the extremal
condition is encoded in $\mathcal{R}$.
* •
On the other hand, according to (31), the region $I$ in the island formula (3)
is encoded in $\mathcal{R}$.
We make this conjecture because Island formula $II$ seems to be
the only way we can incorporate Island formula $I$ into the quantum
information theory of the effective description. This conjecture also
indicates that configurations involving gravity with entanglement islands
are only special cases in which entanglement islands emerge.
### 3.2 Requirements for the Hilbert space reductions
The self-encoding constraint (10) is only one particular way to reduce the
Hilbert space of the system. There are certainly other ways to reduce the
Hilbert space, one of which will be introduced in later sections, where the
eliminated degrees of freedom are not localized in a definite spatial region.
Nevertheless, not all reductions essentially change the reduced density
matrix, and some of them are not even well-defined. Here we present the
following four requirements for the type of Hilbert space reductions that are
interesting to us:
* •
first of all, the state $\ket{\Psi}$ under consideration should remain in the
reduced Hilbert space;
* •
secondly, when we impose boundary conditions for $\mathcal{R}$, we should have
a square matrix block in $\rho$ from which the corresponding element of the
reduced density matrix can be computed by tracing out the matrix block;
* •
thirdly, since we would like to study the reduced density matrix of the region
$\mathcal{R}$ without reducing the degrees of freedom of $\mathcal{R}$
itself, we require that the reduction only reduces the degrees of freedom of
the complement $\bar{\mathcal{R}}$, so that the dimension
of $\rho_{\mathcal{R}}$ is still $\text{dim}\mathcal{H}_{\mathcal{R}}$;
* •
at last, the reduction is expected to change the von Neumann entropy of the
reduced density matrix $\rho_{\mathcal{R}}$.
The second requirement implies that, after the reduction of the Hilbert space,
the degrees of freedom of the complement $\bar{\mathcal{R}}$ should be
preserved, regardless of the state in which the subsystem $\mathcal{R}$ is settled.
For example, we can reduce the Hilbert space of the two-spin system to be
$\displaystyle\mathcal{H}_{I\cup\mathcal{R}}=\\{\ket{0_{I}0_{\mathcal{R}}},\ket{1_{I}0_{\mathcal{R}}},\ket{1_{I}1_{\mathcal{R}}}\\}\,,$
(33)
in which $I$ is fixed to be the same as $\mathcal{R}$ when $\mathcal{R}$ is in
the state $\ket{1_{\mathcal{R}}}$, while when $\mathcal{R}$ is in
the state $\ket{0_{\mathcal{R}}}$ there is no constraint on the spin $I$. In
other words, the degrees of freedom of $I$ differ depending on the state of
$\mathcal{R}$, hence our second requirement is not satisfied. This is
problematic: when setting boundary conditions for $\mathcal{R}$ and
computing the elements of $\rho_{\mathcal{R}}$, we find that the matrix
block is no longer square, which means that the degrees of freedom of
$I$ can no longer be described in the usual sense of a density matrix. One
should further study and define the physical meaning of density matrix blocks
that are not square; nevertheless, this is beyond the scope of this paper,
and we will simply consider such reductions to be unphysical. One necessary
condition for the second requirement is that the dimension of the reduced
Hilbert space should be an integer times $\dim\mathcal{H}_{\mathcal{R}}$. As
an explicit example, we consider a vector $\ket{a}$ in the Hilbert space of
the two-spin system before reduction
$\displaystyle\ket{a}=\frac{1}{\sqrt{3}}\left(\ket{0_{I}0_{\mathcal{R}}}+\ket{1_{I}0_{\mathcal{R}}}+\ket{1_{I}1_{\mathcal{R}}}\right)\,,$
(34)
and the corresponding density matrix in the unreduced Hilbert space can be
worked out as depicted in the left panel of Fig.6. After reducing the Hilbert
space to $\mathcal{H}_{I\cup\mathcal{R}}$ in (33), in which the degrees of
freedom of $I$ depend on the state of $\mathcal{R}$, the density matrix
takes the form shown in the right panel of Fig.6. The matrix elements
in the orange area all vanish, since the state $\ket{a}$ does not contain any
$\ket{0_{I}1_{\mathcal{R}}}$ component; this is the constraint from our first
requirement. The matrix blocks enclosed by the purple areas are not square,
unlike those of density matrices before the reduction of the Hilbert
space. Thus, the reduced density matrix after the reduction of the Hilbert
space is not well-defined in the usual way, which is exactly the point of
our second requirement.
Figure 6: Illustration of the density matrices of $\ket{a}$ before (LHS) and
after (RHS) the reduction of the Hilbert space.
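The failure of the second requirement can also be seen by simple counting, as in the following illustrative sketch (plain Python, with ad-hoc string labels for the basis (33)): fixing the state of $\mathcal{R}$ on bra and ket selects a block of $\rho$ whose sides count the surviving $I$-states, and those counts differ between the two $\mathcal{R}$ sectors.

```python
# Enumerate the reduced basis (33) and group it by the state of R.
# The number of surviving I-states differs between the two R sectors,
# so the blocks of rho picked out by fixing R cannot all be square.
basis = [('0I', '0R'), ('1I', '0R'), ('1I', '1R')]  # Eq. (33)

blocks = {}
for i_state, r_state in basis:
    blocks.setdefault(r_state, []).append(i_state)

print(blocks)  # {'0R': ['0I', '1I'], '1R': ['1I']}

# Fixing R = 0 on the bra and R = 1 on the ket selects a 2x1 block of rho:
# it has no trace, so the element rho_R_{01} cannot be obtained by tracing out I.
sizes = {r: len(states) for r, states in blocks.items()}
print(sizes)   # {'0R': 2, '1R': 1} -- unequal, hence non-square blocks
```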
The third requirement implies that, in the reduced Hilbert space of the total
system, the state of $\mathcal{R}$ can be any state in
$\mathcal{H}_{\mathcal{R}}$. Under such a requirement we are tempted to choose
reductions that still contain a factor of $\mathcal{H}_{\mathcal{R}}$ in the
reduced Hilbert space, i.e.
$\displaystyle\mathcal{H}=\mathcal{H}_{\bar{\mathcal{R}}}\otimes\mathcal{H}_{\mathcal{R}}\quad{Reduction}:\quad\mathcal{H}=\mathcal{H}_{B}\otimes\mathcal{H}_{\mathcal{R}}\,,\qquad\mathcal{H}_{B}\subset\mathcal{H}_{\bar{\mathcal{R}}}\,.$
(35)
These reductions automatically satisfy the second requirement. Then we
consider a state
$\ket{\Psi}\in\mathcal{H}_{B}\otimes\mathcal{H}_{\mathcal{R}}$, which is now
embedded in the original Hilbert space
$\mathcal{H}_{\bar{\mathcal{R}}}\otimes\mathcal{H}_{\mathcal{R}}$. When we set
boundary conditions for $\mathcal{R}$, the block matrix in $\rho$ becomes a
$\text{dim}\mathcal{H}_{\bar{\mathcal{R}}}$ dimensional square matrix, which
contains the $\dim\mathcal{H}_{B}$ dimensional block matrix of $\rho$ in the
reduced case as a sub-block, while all the other elements of the block outside
the sub-block are zero. The reason is that in the state $\ket{\Psi}$ we
consider, the sub-system $\bar{\mathcal{R}}$ is confined entirely to the
subspace $\mathcal{H}_{B}$. The von Neumann entropy of $\rho_{\mathcal{R}}$
does not change after the reduction, hence the last requirement is not
satisfied. One can also check that the reduced Hilbert space (19) for the
two-spin system, which leads to a different entanglement entropy, does not
factorize following (35).
It will be interesting to explore other ways of reduction that satisfy our
four requirements beyond the self-encoded systems.
### 3.3 Islands beyond spatial regions
Now we introduce a class of reductions where the reduced degrees of freedom in
$\bar{\mathcal{R}}$ are not localized in a spatial subregion $I$. These
reductions also satisfy the above four requirements. Let us use two sets of
parameters $\\{a_{i}\\}$ and $\\{b_{j}\\}$ to denote the states in the Hilbert
space $\mathcal{H}_{\bar{\mathcal{R}}}$ as follows
$\displaystyle\ket{a_{1},a_{2}\cdots a_{n},b_{1},b_{2}\cdots
b_{m}}_{\bar{\mathcal{R}}}\in\mathcal{V}_{a}\otimes\mathcal{V}_{b}=\mathcal{H}_{\bar{\mathcal{R}}}\,,$
(36)
and a generic state for the entire system can be expressed by
$\displaystyle\ket{\Psi}=\sum_{a_{1},\cdots a_{n},b_{1},\cdots
b_{m},i}C_{a_{1},\cdots a_{n},b_{1},\cdots b_{m},i}\ket{a_{1},\cdots
a_{n},b_{1},\cdots b_{m}}_{\bar{\mathcal{R}}}\ket{i}_{\mathcal{R}}\,.$ (37)
Here, for example, $\mathcal{V}_{a}$ represents a vector space in which a
generic vector can be specified by the set of parameters $\\{a_{i}\\}$. It is
not the Hilbert space of $a$, since $a$ is not a subregion of
$\bar{\mathcal{R}}$.
Now we reduce the Hilbert space following certain constraints. We assume that
the state of $\mathcal{R}$ determines the parameters $\\{a_{i}\\}$ in the
following way
$\displaystyle\ket{i}_{\mathcal{R}}\quad\Rightarrow\quad\ket{\vec{f}(i),b_{1},b_{2}\cdots
b_{m}}_{\bar{\mathcal{R}}}\,,$ (38)
where $\vec{f}(i)$ is a vector in the space $\mathcal{V}_{a}$. This means that
for any state $\ket{\Psi}$ in the reduced Hilbert space, if the state of the
subregion $\mathcal{R}$ is $\ket{i}_{\mathcal{R}}$, then the corresponding
state of $\bar{\mathcal{R}}$ is partially determined, with the parameters in
the subspace $\mathcal{V}_{a}$ fixed to be $\vec{f}(i)$. Hence the dimension
of the Hilbert space reduces to
$\text{dim}(\mathcal{V}_{b})\times\text{dim}(\mathcal{H}_{\mathcal{R}})$. In
the reduced Hilbert space, a generic state $\ket{\Psi}$ can be expressed as
$\displaystyle\ket{\Psi}=\sum_{b_{1},b_{2}\cdots b_{m},i}C_{b_{1},b_{2}\cdots
b_{m},i}\ket{\vec{f}(i),b_{1},b_{2}\cdots
b_{m}}_{\bar{\mathcal{R}}}\ket{i}_{\mathcal{R}}\,.$ (39)
Note that, in the reduced Hilbert space, the independent degrees of freedom
are confined to the subspace $\mathcal{V}_{b}$; they are usually parameters
characterizing the state of $\bar{\mathcal{R}}$ rather than a subset $I$
inside $\bar{\mathcal{R}}$. Then the reduced density matrix is calculated by
$\displaystyle\rho_{\mathcal{R}ij}=\sum_{b_{1},b_{2}\cdots
b_{m}}C_{b_{1},b_{2}\cdots b_{m},i}C^{*}_{b_{1},b_{2}\cdots b_{m},j}\,,$ (40)
where we have only traced out the independent degrees of freedom in
$\mathcal{V}_{b}$.
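A toy instance of this trace can be sketched as follows (numpy assumed; the specific choice of one qubit for $\mathcal{R}$, one frozen $a$-label with $f(i)=i$, and a single free $b$-label is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the reduction (38): R is a qubit (i = 0, 1), the a-label
# of Rbar is frozen to f(i) = i, and one free b-label (b = 0, 1) remains.
# A state in the reduced Hilbert space is fixed by coefficients C[b, i].
C = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
C /= np.linalg.norm(C)  # normalize the state

# Only the independent b degrees of freedom are traced out:
# rho_R[i, j] = sum_b C[b, i] * conj(C[b, j]), cf. Eq. (40).
rho_R = np.einsum('bi,bj->ij', C, C.conj())

print(np.isclose(np.trace(rho_R).real, 1.0))  # True: unit trace
print(np.allclose(rho_R, rho_R.conj().T))     # True: Hermitian
```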
In this type of reduction (38), the reduced degrees of freedom in
$\mathcal{V}_{a}$ are not required to be mapped to degrees of freedom
localized in any spatial region $I$ inside $\bar{\mathcal{R}}$, hence there
may be no spatial region $I$ that plays the role of an island. Rather, the
island is a sub-space of the parameter space that characterizes the Hilbert
space.
### 3.4 The resolution of the Mathur/AMPS paradox in a toy model
The authors of [4, 5] pointed out that the Mathur/AMPS puzzle in quantum
information for the process of black hole evaporation already appears after
the Page time, if black hole evaporation is assumed to be unitary. The
Mathur/AMPS paradox points out that an impossible quantum state emerges after
the Page time. Let us assume unitarity and consider a black hole that is
collapsed from a quantum system in a pure state. After the Page time, the late
radiation should be maximally entangled with the early radiation, and hence
purifies the early radiation. On the other hand, this quantum of late
radiation should also be maximally entangled with its partner quantum inside
the black hole if the horizon is smooth. However, according to the monogamy of
entanglement, a given quantum cannot be maximally entangled with two separate
systems, otherwise the strong subadditivity of the entropy will be violated. A
resolution was provided in [5], which discards the entanglement between the
late radiation and its partner in the interior by forming a "firewall" at the
horizon.
Now we try to understand this paradox based on our previous discussion. Again,
let us denote the late radiation quantum and its partner in the interior as
$B$ and $I$, and denote the early radiation quanta purified by $B$ as
$\mathcal{R}$. Note that the statement that $B$ cannot be maximally entangled
with $\mathcal{R}$ and $I$ simultaneously is built on the pre-condition that
all three qubits are independent degrees of freedom, such that the Hilbert
space $\mathcal{H}$ factorizes in the following way
$\displaystyle\mathcal{H}=\mathcal{H}_{I}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{\mathcal{R}}\,.$
(41)
Although this pre-condition is usually taken for granted, it is invalid when
we set constraints on the whole system. Furthermore, as we have shown
previously, under a proper reduction of the Hilbert space the computation of
the reduced density matrix, as well as the related entropy quantities,
changes essentially. For example, let us consider the following state for a
three-spin system,
$\displaystyle\ket{\Psi}=\frac{\sqrt{2}}{2}\left(\ket{1_{I}0_{B}1_{\mathcal{R}}}+\ket{0_{I}1_{B}0_{\mathcal{R}}}\right)\,.$
(42)
Then we consider the Hilbert space to be reduced by imposing the requirement
that the spin of $I$ should be the same as that of $\mathcal{R}$, hence one of
them is not an independent degree of freedom. The observer outside the black
hole will take the qubit $\mathcal{R}$ as the independent one, and conclude
that the two qubits $\mathcal{R}$ and $B$ are maximally entangled. An
infalling observer would instead take $I$ as the independent one and claim
that the two qubits $B$ and $I$ are maximally entangled. However, for the
outside observer, who takes $I$ to be dependent, the entanglement between $I$
and $B$ is not well-defined, or simply vanishes. Hence $B$ is not maximally
entangled with $\mathcal{R}$ and $I$ simultaneously, and the monogamy of
entanglement is not violated.
Provided that $I$ belongs to the entanglement island and is thus not
independent, the Mathur/AMPS puzzle is resolved in the reduced Hilbert space.
If we do not embed this system into the black hole evaporation process, then
we face the following quantum information problem. We have a constraint that
the quantum at the horizon is maximally entangled with the early radiation
$\mathcal{R}$. Then, after the horizon quantum decays into $I$ and $B$, how
does this constraint reduce the Hilbert space such that the interior quantum
$I$ becomes non-independent while the outside quantum $B$ inherits its
entanglement structure with $\mathcal{R}$? Or do we need more constraints?
Solving this problem may help us understand how exactly the information is
transferred from the black hole to the radiation.
## 4 A non-gravitational field theory with entanglement islands
Previously we proposed that the two island formulas are indeed the same, which
indicates that, on the one hand, the island formula for gravity also applies
to non-gravitational systems, while on the other hand, the existence of
entanglement islands represents the self-encoded property of the quantum
system. In this section, based on this conjecture, we explore the possibility
of the island phase in non-gravitational systems, i.e. we apply Island
formula $I$ to quantum systems by searching for smaller entanglement entropy
in configurations with entanglement islands.
In quantum field theories with a UV fixed point, the inclusion of islands will
introduce an extra boundary. For a generic theory that satisfies the area law,
the entanglement entropy is infinite, and the leading contribution is
proportional to the area of the boundary. The reason is that the entanglement
entropy collects the entanglement that crosses the boundary, from the scale of
the region down to the scale of the infinitesimal UV cutoff. Since the
entanglement at each scale is almost the same when the scale is small enough,
the entanglement entropy diverges as the UV cutoff scale goes to zero. The
extra boundary usually increases the entanglement entropy by an infinitely
large amount, since it increases the area of the boundary. In such scenarios,
it is impossible to obtain a smaller entanglement entropy by including
islands.
The only way to incorporate entanglement islands into field theories is to
consider the existence of certain special subregions where the UV cutoff is
finite and bounded from below. Then we may consider putting an island inside
these regions. Since the cutoff scale is bounded from below, the entanglement
entropy ceases to accumulate at a certain scale, hence the boundary of the
island only gives a finite contribution to the entanglement entropy. In fact,
such configurations can easily be achieved by performing certain Weyl
transformations, which adjust the cutoff scale at any point, such that we get
a cutoff function that is bounded from below and depends on the spatial
coordinate $x$. This is inspired by the double holography (or AdS/BCFT) setup
[23] and several papers studying mixed-state entanglement via Weyl transformed
CFT [60, 61, 62].
### 4.1 Islands in Weyl transformed CFT2
Let us consider the vacuum state of a holographic CFT2 on a Euclidean flat
space. Here the overall factor $\frac{1}{\delta^{2}}$ in (44) below is
inspired by AdS/CFT: (44) acts as the boundary metric corresponding to the
dual AdS3 geometry
$\displaystyle
ds^{2}=\frac{\ell^{2}}{z^{2}}(-dt^{2}+dx^{2}+dz^{2})\,,$ (43)
with the cutoff set at $z=\delta$. The flat metric reads
$\displaystyle ds^{2}=\frac{1}{\delta^{2}}\left(d\tau^{2}+dx^{2}\right)\,.$
(44)
Here $\delta$ is an infinitesimal constant representing the UV cutoff of the
boundary CFT. The theory is invariant under the Weyl transformation of the
metric
$\displaystyle
ds^{2}=e^{2\varphi(x)}\left(\frac{d\tau^{2}+dx^{2}}{\delta^{2}}\right)\,.$
(45)
This effectively changes the cutoff scale in the following way
$\displaystyle\delta\Rightarrow e^{-\varphi(x)}\delta\,.$ (46)
The entanglement entropy of a generic interval $\mathcal{R}=[a,b]$ in the CFT
after the Weyl transformation picks up additional contributions from the
scalar field ${\varphi}(x)$ as follows [60, 61, 62]
$\displaystyle
S_{\mathcal{R}}=\frac{c}{3}\log\left(\frac{b-a}{\delta}\right)+\frac{c}{6}{\varphi}(a)+\frac{c}{6}{\varphi}(b)\,.$
(47)
This formula can be obtained by performing the Weyl transformation on the
two-point function of the twist operators.
Before we go ahead, we give a physical interpretation of the Weyl
transformation, as well as of the entropy formula (47). Before the Weyl
transformation, the UV cutoff of the CFT is a uniform infinitesimal constant
$\delta$. The Weyl transformation is indeed a scale transformation that
changes the cutoff scale of the system at each point. Such a non-uniform
cutoff definitely affects the multi-scale entanglement structure of the
CFT. After the Weyl transformation, the formula (47) tells us that the
entanglement entropy is just the original one minus two constants, which
are totally determined by the scalar field at the two endpoints. More
importantly, the subtracted constant at one endpoint is independent of the
position of the other endpoint, as long as the two endpoints are not so close
that (47) gives a negative entanglement entropy. Hence we conclude that the
Weyl transformation at any point effectively excludes all the short-distance
entanglement across this point below the (constant) cutoff scale, while
keeping the long-distance entanglement unaffected.
Figure 7: Two configurations in a Weyl-transformed CFT2 where the island
formula may be realized: (left) the entanglement entropy of
$\mathcal{R}=[a,\infty)$ in the region $x>0$ admits a minimal saddle by
including the island $I=(-\infty,-a^{\prime}]$; (right) $\mathcal{R}=[a,b]$
acquires the island $I=[-b^{\prime},-a^{\prime}]$.
Now we show that the island formula (3) applies to the CFT with certain types
of Weyl transformations. For example, we can perform a specific Weyl
transformation on the CFT2, which is dual to Poincaré AdS3, such that for
$x<0$ the metric is AdS2 while for $x>0$ the metric is flat. Such a Weyl
transformation can easily be found and is given by
$\displaystyle\varphi(x)=\begin{cases}0\,,&\text{if}\quad x>0\,,\\\
-\log\left(\frac{2|x|}{\delta}\right)+\kappa\,,&\text{if}\quad
x<0\,,\end{cases}$ (48)
where $\kappa$ is an undetermined constant. The corresponding metric at $x<0$
after the Weyl transformation becomes
$\displaystyle
ds^{2}=\frac{e^{2\kappa}}{4}\left(\frac{d\tau^{2}+dx^{2}}{x^{2}}\right)\,.$
(49)
As expected, the above metric is $AdS_{2}$ up to an overall coefficient. Let
us consider $\mathcal{R}$ to be the semi-infinite region $x>a$ (see the left
figure in Fig.7) and apply the island formula to calculate the entanglement
entropy. The entanglement entropy calculated by the island formula is given by
$\displaystyle
S_{\mathcal{R}}=\text{min}\left\\{\frac{c}{3}\log\frac{a+a^{\prime}}{\delta}-\frac{c}{6}\log\left(\frac{2a^{\prime}}{\delta}\right)+\frac{c}{6}\kappa\right\\}\,,$
(50)
where we have taken the region $I=(-\infty,-a^{\prime}]$ as the island
corresponding to $\mathcal{R}$. The entropy (50) has a minimal saddle
point
$\displaystyle
S_{\mathcal{R}}=\frac{c}{6}\log\left(\frac{2a}{\delta}\right)+\frac{c}{6}\kappa\,,\qquad
a^{\prime}=a\,.$ (51)
This result is smaller than the entanglement entropy in the non-island phase,
hence the island emerges. Note that, the area term $\frac{Area(X)}{4G}$ does
not appear since the Weyl transformed CFT we consider is a non-gravitational
field theory that lives on a fixed geometric background.
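The minimization in (50) is easy to verify numerically. The sketch below sets $c=6$, $\delta=1$ and an arbitrary $\kappa$ purely for illustration (only the location of the minimum matters for checking the saddle $a^{\prime}=a$):

```python
import numpy as np

# Saddle point of Eq. (50): with c = 6 and delta = 1,
# S(a') = 2 log(a + a') - log(2 a') + kappa is minimized at a' = a.
c, kappa, a = 6.0, 0.3, 2.0

def S(ap):
    return (c / 3) * np.log(a + ap) - (c / 6) * np.log(2 * ap) + (c / 6) * kappa

ap_grid = np.linspace(0.01, 10.0, 100000)
ap_min = ap_grid[np.argmin(S(ap_grid))]

print(np.isclose(ap_min, a, atol=1e-3))  # True: a' = a, cf. Eq. (51)
# The minimal value matches the closed form (c/6) log(2a) + (c/6) kappa.
print(np.isclose(S(ap_min), (c / 6) * np.log(2 * a) + (c / 6) * kappa,
                 atol=1e-6))             # True
```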
It is important to stress that, when $a\to 0$, we have $-a^{\prime}\to 0$,
thus the region $x<0$ is the island of the region $x>0$. In other words,
according to Island formula $II$, the independent degrees of freedom are
distributed only in the region $x\geq 0$, so only the entanglement entropy
for subregions in $x\geq 0$ is well-defined. Also note that the scalar field
diverges as $\delta\to 0$, which implies that the cutoff scale becomes finite
and bounded from below, as expected.
Similarly when we consider $\mathcal{R}$ to be an interval $[a,b]$ inside the
region $x>0$ and include the corresponding island
$I=[-b^{\prime},-a^{\prime}]$ (see the right figure in Fig.7), the island
formula will give
$\displaystyle
S_{\mathcal{R}}=\text{min}\left\\{\frac{c}{3}\log\frac{a+a^{\prime}}{\delta}+\frac{c}{3}\log\frac{b+b^{\prime}}{\delta}-\frac{c}{6}\log\left(\frac{2a^{\prime}}{\delta}\right)-\frac{c}{6}\log\left(\frac{2b^{\prime}}{\delta}\right)+\frac{c}{3}\kappa\right\\}\,,$
(52)
which has the saddle point
$\displaystyle
S_{\mathcal{R}}=\frac{c}{6}\log\left(4\frac{ab}{\delta^{2}}\right)+\frac{c}{3}\kappa\,,\qquad
a^{\prime}=a\,,\quad b^{\prime}=b\,.$ (53)
Then we compare the entanglement entropy (53) in the island phase with the one
in the non-island phase, which is given by
$\displaystyle
S_{\mathcal{R}}=\frac{c}{3}\log\left(\frac{b-a}{\delta}\right)\,.$ (54)
We find that when
$\displaystyle
0<a<b\left(1-2\sqrt{e^{2\kappa}+e^{4\kappa}}+2e^{2\kappa}\right)\,,$ (55)
the entanglement entropy (53) calculated by the island formula is smaller,
hence the configuration enters the island phase. In particular when
$\kappa=0$, we enter the island phase for
$\displaystyle 0<a<(3-2\sqrt{2})b\,,\qquad\kappa=0\,.$ (56)
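The window (55) can be checked numerically by comparing the two saddles directly. The sketch below uses arbitrarily chosen values for $b$ and $\delta$ (not from the paper) and sets $\kappa=0$ as in eq. (56):

```python
import math

c, delta, kappa, b = 3.0, 1e-3, 0.0, 1.0   # illustrative values; kappa = 0 as in eq. (56)

def S_island(a):
    # eq. (53): (c/6) log(4ab/delta^2) + (c/3) kappa
    return c/6*math.log(4*a*b/delta**2) + c/3*kappa

def S_no_island(a):
    # eq. (54): (c/3) log((b-a)/delta)
    return c/3*math.log((b - a)/delta)

# threshold from eq. (55); reduces to a = (3 - 2 sqrt(2)) b at kappa = 0, eq. (56)
a_crit = b*(1 - 2*math.sqrt(math.exp(2*kappa) + math.exp(4*kappa)) + 2*math.exp(2*kappa))
assert abs(a_crit - (3 - 2*math.sqrt(2))*b) < 1e-12
# the two saddles coincide at the threshold ...
assert abs(S_island(a_crit) - S_no_island(a_crit)) < 1e-9
# ... and the island saddle dominates (is smaller) inside the window
assert S_island(0.5*a_crit) < S_no_island(0.5*a_crit)
```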
### 4.2 Weyl transformed CFT vs AdS/BCFT
Now we consider the Weyl transformed CFT to be holographic and compare our
previous discussion with the version of the island formula in the AdS/BCFT or
doubly holographic setup. The physical interpretation of the Weyl
transformation is more intuitive if the CFT is holographic, since the
entanglement structure then has a geometric interpretation. Before we perform
the Weyl transformation, the
vacuum state of the CFT is dual to Poincaré AdS3 (43), and according to the RT
formula the entanglement entropy for any interval is given by the length of
the minimal bulk geodesic homologous to this interval. The integral computing
the length of the RT surface collects the entanglement at all scales [63]. In
this context, the Weyl transformation adjusts the cutoff scale by moving the
point on the RT curve at which we stop integrating its length. According to
formula (47), for any RT surface anchored at the point $x_{0}$ on the
boundary, we need to push the cutoff point on the RT surface from $z=\delta$
to a certain position in the bulk, such that the length of the RT surface is
reduced by the constant $|{\varphi}(x_{0})|$. In other words, the cutoff
points for all the RT curves anchored at $(\delta,x_{0})$ form a sphere in the
bulk. Interestingly, for static configurations, this cutoff sphere in the
AdS3 background is a circle in the flat $(x,z)$ half-plane,
$\displaystyle\left(x-x_{0}\right){}^{2}+\left(z-|x_{0}|e^{-\kappa}\right)^{2}=|x_{0}|^{2}e^{-2\kappa}\,,$
(57)
with the center at $(x_{0},|x_{0}|e^{-\kappa})$ and the radius
$r=|x_{0}|e^{-\kappa}$; see Appendix A for the derivation. Formula (47) can
then be understood as follows: when we integrate the length of the RT surface,
we only integrate up to the cutoff sphere (see Fig.8).
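Two elementary properties of the cutoff circle (57) can be read off directly and checked numerically (the values of $\kappa$ and $x_{0}$ below are arbitrary illustrative choices): the circle is tangent to the boundary $z=0$ exactly at the anchoring point $x_{0}$, and its top sits at bulk depth $2|x_{0}|e^{-\kappa}$.

```python
import math

kappa, x0 = 0.4, -1.5                  # illustrative Weyl parameter and boundary anchor
r = abs(x0)*math.exp(-kappa)           # radius of the cutoff sphere, eq. (57)
cx, cz = x0, abs(x0)*math.exp(-kappa)  # center (x0, |x0| e^{-kappa})

def on_circle(x, z):
    # does (x, z) satisfy eq. (57)?
    return abs((x - cx)**2 + (z - cz)**2 - r**2) < 1e-12

# tangent to the boundary z = 0 at the anchoring point (x0, 0):
assert on_circle(x0, 0.0)
# top of the circle sits at depth z = 2 |x0| e^{-kappa} in the bulk:
assert on_circle(x0, 2*r)
```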
Figure 8: Cutoff sphere at $x=x_{0}$ in the bulk dual geometry for a Weyl
transformed CFT. For a minimal surface anchored at $x_{0}$, the portion inside
the cutoff sphere is excluded.
One immediately realizes that the entanglement entropy obtained by applying
the island formula (3) to the Weyl transformed CFT2 in subsection 4.1 looks
quite similar to the entanglement entropy of a (holographic) BCFT [64], as was
also recently observed in [65]. It is therefore interesting to compare the
Weyl transformed CFT with the holographic BCFT, which may provide a different
perspective on the so-called AdS/BCFT correspondence. In the AdS/BCFT setup it
is more convenient to use another set
of coordinates $(t,\rho,y)$ to describe the $3d$ bulk geometry,
$\displaystyle x=y\tanh\rho~{}~{},~{}~{}z=-y\text{sech}\rho\,.$ (58)
The bulk metric in these coordinates is given by the standard Poincaré
slicing, as follows
$\displaystyle ds^{2}$
$\displaystyle=d\rho^{2}+\cosh^{2}\rho\frac{-dt^{2}+dy^{2}}{y^{2}}=\frac{-dt^{2}+dx^{2}+dz^{2}}{z^{2}}\,,$
where the AdS3 radius is set to unity. In the Poincaré slicing described by
the $(t,\rho,y)$ coordinate chart (a convenient polar coordinate is
$\theta=\arccos\left(\text{sech}\,\rho\right)$, which determines the angular
position of the brane from the vertical), the EoW brane is situated at a
constant $\rho=\rho_{0}$ slice [66]. In the AdS/BCFT setup, the RT surface
$\mathcal{E}_{\mathcal{R}}$
corresponding to a subsystem $\mathcal{R}$ is also allowed to be anchored on
the EoW brane $\mathbb{Q}$ [64, 66] (see Fig.9), which was confirmed in [67]
via a direct computation of the correlation functions of twist operators in
BCFTs with large central charge. This new formula for holographic entanglement
entropy is indeed equivalent to the island formula in the double holography
setup [23, 21].
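The coordinate change (58) can be verified directly: pushing the differentials of $x=y\tanh\rho$, $z=-y\,\text{sech}\,\rho$ through the flat-slicing metric reproduces the $(\rho,y)$ line element. A numerical sketch with arbitrary sample values:

```python
import math

def ds2_poincare(rho, y, dt, drho, dy):
    # (-dt^2 + dx^2 + dz^2)/z^2 with x = y tanh(rho), z = -y sech(rho)
    dx = (y/math.cosh(rho)**2)*drho + math.tanh(rho)*dy
    dz = (y*math.tanh(rho)/math.cosh(rho))*drho - dy/math.cosh(rho)
    z = -y/math.cosh(rho)
    return (-dt**2 + dx**2 + dz**2)/z**2

def ds2_sliced(rho, y, dt, drho, dy):
    # drho^2 + cosh^2(rho) (-dt^2 + dy^2)/y^2, the sliced form of eq. (58)
    return drho**2 + math.cosh(rho)**2*(-dt**2 + dy**2)/y**2

vals = (0.7, -1.3, 0.2, 0.5, -0.4)   # arbitrary (rho, y, dt, drho, dy)
assert abs(ds2_poincare(*vals) - ds2_sliced(*vals)) < 1e-9
```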
Figure 9: Holographic entanglement entropy in the AdS/BCFT setup. (Left) For
an interval $\mathcal{R}=[a,\infty)$ in the BCFT, the RT surface has the
disconnected topology and lands on the EoW brane at $a^{\prime}=a$. (Right)
For $\mathcal{R}=[a,b]$, the connected (disconnected) RT surfaces are shown by
the blue dashed (solid) curves. In both panels, the portion of the brane lying
in the entanglement wedge of $\mathcal{R}$ may be interpreted as the island
region from a double holographic point of view.
In this context, the entanglement entropy of the region
$\mathcal{R}:=[a,\infty)$ may be computed through the length of the RT curve
$\mathcal{E}_{\mathcal{R}}$ homologous to $\mathcal{R}$ as follows
$\displaystyle
S_{\mathcal{R}}=\frac{Area(\mathcal{E}_{\mathcal{R}})}{4G}=\frac{c}{6}\log\left(\frac{2a}{\delta}\right)+\frac{c}{6}\rho_{0}\,,$
(59)
see the left panel in Fig.9. Here we again ignore the area term
$\frac{\text{Area}(X)}{4G}$ by suppressing the gravity theory living on the
EoW brane, which is necessary to compare with the configuration we discussed
in the last subsection. For the choice $\mathcal{R}:=[a,b]$ there are two
possible saddles for the RT surface: one is a connected geodesic anchored on
the two endpoints of $\mathcal{R}$, while the other consists of two
disconnected geodesics which also anchor on the EoW brane $\mathbb{Q}$ (see
the right panel in Fig.9). The holographic entanglement entropy is simply
given by
$\displaystyle
S_{\mathcal{R}}=\begin{cases}\frac{c}{3}\log\left(\frac{b-a}{\delta}\right)\,,&\text{if}\quad a>b\left(1-2\sqrt{e^{2\rho_{0}}+e^{4\rho_{0}}}+2e^{2\rho_{0}}\right)\,,\\ \frac{c}{6}\log\left(\frac{2a}{\delta}\right)+\frac{c}{6}\log\left(\frac{2b}{\delta}\right)+\frac{c}{3}\rho_{0}\,,&\text{if}\quad 0<a<b\left(1-2\sqrt{e^{2\rho_{0}}+e^{4\rho_{0}}}+2e^{2\rho_{0}}\right)\,.\end{cases}$
(60)
Remarkably, the above results for $S_{\mathcal{R}}$ are exactly those we
obtained for the Weyl transformed CFT via the island formula if we set
$\displaystyle\rho_{0}=\kappa\,.$ (61)
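The identification (61) can be made concrete numerically: the two branches of eq. (60) exchange dominance exactly at the stated threshold in $a$, matching the island-phase window (55) with $\rho_{0}=\kappa$. A sketch with arbitrarily chosen illustrative values:

```python
import math

c, delta, rho0, b = 3.0, 1e-3, 0.3, 1.0   # illustrative values (not from the paper)

def S_connected(a):      # first branch of eq. (60)
    return c/3*math.log((b - a)/delta)

def S_disconnected(a):   # second branch of eq. (60)
    return c/6*math.log(2*a/delta) + c/6*math.log(2*b/delta) + c/3*rho0

# threshold appearing in eq. (60); same form as eq. (55) with kappa -> rho0
a_crit = b*(1 - 2*math.sqrt(math.exp(2*rho0) + math.exp(4*rho0)) + 2*math.exp(2*rho0))
assert 0 < a_crit < b
# the two branches meet at the threshold ...
assert abs(S_connected(a_crit) - S_disconnected(a_crit)) < 1e-9
# ... and the disconnected (island-like) saddle dominates for small a
assert S_disconnected(0.5*a_crit) < S_connected(0.5*a_crit)
```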
This coincidence implies that the entanglement structures of the Weyl
transformed CFT and the BCFT share common properties. Before the Weyl
transformation, the vacuum of the CFT2 is dual to Poincaré AdS3. The Weyl
transformation then adjusts the cutoff scale at the points with $x<0$, which
in some sense pushes the cutoff slice from $z=\delta$ into the bulk. We find
that the common tangent of all the cutoff spheres is just the straight line
that overlaps with the EoW brane at $\rho_{0}=\kappa$; see Fig.10 and appendix
A for details. This common tangent line indeed plays the role of the EoW brane
$\mathbb{Q}$ in AdS/BCFT. First, the RT surfaces are allowed to anchor on the
tangent line, in the sense that the RT surfaces are cut off at this line.
Second, one may claim that there are no degrees of freedom behind the tangent
line, as this region represents the physics below the cutoff scale. Therefore,
we may conclude that the CFT2 after the Weyl transformation has the “same”
entanglement structure as a holographic BCFT. This could be another reason for
the emergence of entanglement islands in AdS/BCFT, and it supports our claim
that the island phase is a property of the quantum state.
Figure 10: The common tangent surface to all the cutoff spheres can be
compared to the EoW brane in the AdS/BCFT scenario.
Our comparison looks quite similar to the comparison between the holographic
BCFT and a CFT2 coupled to gravity on AdS2 considered in [65]. The authors of
[65] argued that the BCFT is equivalent to the CFT coupled to AdS2 gravity and
called this equivalence the Island/BCFT correspondence. Nevertheless, there
are essential differences between our setup and the one in [65], which we
present in the following.
* •
In our setup, the Weyl transformed CFT on the left-hand side $x<0$ is a CFT
living on a fixed AdS2 background with no gravity, while in [65] the left-hand
side is gravity on AdS2. Although the entanglement entropies are calculated in
the same way, this is an essential difference.
* •
In our comparison, the Weyl transformed CFT2 always lives on the boundary, and
the common tangent of the cutoff spheres plays the role of the EoW brane. In
[65], by contrast, the AdS2 gravity coupled to the CFT plays the role of the
EoW brane.
* •
In [65], besides the UV cutoff $\delta$, an additional infinitesimal parameter
$\epsilon$ is introduced by performing the Weyl transformation only at
$x<-\epsilon$. When comparing the EoW brane to the AdS2 gravity, they needed
an identification between these two parameters, based on which they could
match the entanglement entropy on both sides. In our comparison the position
of the cutoff brane is determined by the parameter $\kappa$ in the Weyl
transformation. Moreover, unlike the matching in our case, the matching in
[65] is not exact.
### 4.3 Islands in the lab
In the previous subsections, we assumed the Weyl transformed CFT to be
holographic, so that we have a clear geometric description of the entanglement
structure. Nevertheless, we argue that holography is not necessary for the
emergence of entanglement islands. For non-holographic CFTs, the entanglement
structure may be described by a MERA tensor network [68, 69]. In such
scenarios, the tensor network is an analogue of the dual AdS space, and the
entanglement entropy is captured by the number of cuts a path makes through
the tensor network. This path is an analogue of the RT curve [70, 71] and
satisfies similar key properties: it is homologous to the region in the CFT,
and the number of cuts is minimized. Due to these key properties, provided we
can conduct an operation that adjusts the cutoff scale in a certain region,
like the Weyl transformation, the logic supporting the emergence of islands
for the Weyl transformed CFT should also work for non-holographic quantum
systems. Such an operation should be understood as eliminating the
short-distance entanglement inside a region while keeping the long-range
entanglement between this region and its complement unaffected.
Figure 11: A rough modified MERA tensor network description of the
entanglement structure of the Weyl transformed CFT, where islands emerge. The
red path clearly has a smaller number of cuts than the black one, and hence
should be used to evaluate the entanglement entropy of $\mathcal{R}$. The
island naturally emerges under this choice of path.
We start from a lattice chain hosting an interacting quantum theory and take
the state to be the ground state of the theory. The lattice spacing is set to
some small constant $\delta$. Then we perform an analogue of the Weyl
transformation by adjusting the lattice spacing in the $x<0$ region. The
position-dependent lattice spacing $\delta(x)$ controls the cutoff scale.
If the lattice spacing $\delta(x)$ increases rapidly enough with $|x|$, then
the density of degrees of freedom is vastly reduced and the local short
distance entanglement below the scale $\delta(x)$ is eliminated, while the
long distance entanglement is almost unaffected. Under such an operation the
entanglement structure of the system would roughly be described by the tensor
network shown in Fig.11, where a new path homologous to $\mathcal{R}$ emerges
with a smaller number of cuts and an island on the left hand side. The new
path induces an additional twist operator in the complement of $\mathcal{R}$,
hence the entanglement island $I$ emerges. It will be very interesting to
construct tensor network models and synthesize lattice models where the
special entanglement structure can be (roughly) realized, then explicitly
study how the state of $I$ can be encoded in the state of $\mathcal{R}$.
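To make the proposal concrete, here is a minimal sketch (our own construction, not from the paper): take the spacing in $x<0$ proportional to the distance from the origin, $\delta(x)=2|x|e^{-\kappa}$, mimicking the position-dependent cutoff induced by the Weyl transformation. The number of sites needed to cover an interval $(-L,0)$ then grows only logarithmically in $L$, echoing the logarithmic entropies found above:

```python
import math

delta0, kappa, L = 1e-3, 0.0, 1.0   # illustrative bare spacing, Weyl parameter, system size

def lattice_sites():
    """Sites in (-L, 0) with position-dependent spacing delta(x) = 2|x| e^{-kappa}."""
    sites, x = [], -delta0
    while x > -L:
        sites.append(x)
        x -= 2*abs(x)*math.exp(-kappa)   # step grows with distance from the origin
    return sites

n = len(lattice_sites())
# a uniform lattice would need ~ L/delta0 = 1000 sites; here the count is ~ log(L/delta0)
assert n < 50
assert n >= math.floor(math.log(L/delta0)/math.log(1 + 2*math.exp(-kappa)))
```

The vast reduction in the number of sites is the lattice analogue of the reduction of the Hilbert space discussed above.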
## 5 Discussion
In this paper we provide a framework for the emergence of entanglement islands
from a purely quantum information perspective. In this framework the emergence
of islands results from certain constraints on the system that reduce the
Hilbert space of the entire system. For some specific reductions, the degrees
of freedom in a certain region $I$ are encoded in the state of another
subregion $\mathcal{R}$, such that the system becomes self-encoded. We show
explicitly how the reduction changes the way we compute the reduced density
matrix and how the island formula $II$ arises when we compute the entanglement
entropy of $\mathcal{R}$. This can also be understood in probability theory in
the following way: the constraints applied to all the states in the reduced
Hilbert space are the information we know before we study the probability
distribution over all possible configurations; hence the entropy we calculate
under the constraints can be understood as a conditional entropy (see also
[51] for related discussion), which differs from the entropy under no
constraints. This seems to be the only way to incorporate the island formula
$I$ into quantum information theory in the effective description of black hole
evaporation. We therefore conjecture that the island formulas $I$ and $II$ are
essentially the same formula.
An important implication of this conjecture is that the island phase is a
property of the quantum state and the Hilbert space in which the state is
embedded, rather than a special property of gravity. The island formula $I$
also applies to non-gravitational quantum systems, and a system in the island
phase is also self-encoded. Inspired by this conjecture, we apply the island
formula $I$ to the Weyl transformed CFT2 to simulate the AdS/BCFT (or double
holography) setup and give a new quantum information perspective for
understanding the entanglement structure of the AdS/BCFT setup. Furthermore,
we propose an experimental test for the existence of islands in lattice
models, based on adjusting the lattice spacing. When we apply the Weyl
transformation to adjust the cutoff scale or lattice spacing, we
simultaneously adjust the density of the degrees of freedom. For the specific
Weyl transformations which induce islands, the cutoff scale goes from
infinitesimal to finite, hence the density of degrees of freedom goes from
infinitely large to finite. This of course vastly reduces the Hilbert space of
the system. However, how exactly the reduction of the Hilbert space induces
the self-encoding property of the system is not clear. This may be studied
explicitly in certain tensor network models.
In the Weyl transformed CFT2, the AdS2 metric on the left-hand side results
from the Weyl transformation which adjusts the cutoff scale. In this
configuration, we should not consider the left-hand side to be a quantum field
theory living on a fixed curved background with an infinitesimal UV cutoff;
rather, the cutoff scale of the AdS2 background is finite and controlled by
the AdS2 metric. This observation could answer another confusion about
gravity: the degrees of freedom in the black hole interior suggested by the
effective theory vastly exceed the entropy in the fundamental description,
which is the Bekenstein-Hawking entropy captured by the area of the black hole
horizon. If the effective theory also takes the finite cutoff scale
proportional to the metric, the degrees of freedom in the effective theory
will be vastly reduced and could be matched to the Bekenstein-Hawking entropy.
We think the claim that the cutoff scale for gravity should be proportional to
the metric is valid for generic gravity theories. This further implies that
the short-distance entanglement is minimized in the near-horizon region; hence
the near-horizon region is the most likely place for the boundary of the
island to sit.
In [53], inspired by this confusion, the authors introduced a non-isometric
mapping between the effective theory and the fundamental description. Under
this non-isometric mapping the dimension of the Hilbert space of the effective
theory is tremendously reduced to match the dimension of the Hilbert space of
the fundamental description. They claim that in the effective theory there is
a large number of null states which vanish under this mapping. They performed
a direct calculation of the reduced density matrix and entanglement entropy
for the “reservoir” $\mathcal{R}$ and found that the entropy has two saddles,
which is just what is expected from the QES formula or island formula. We
believe that their non-isometric mapping plays a similar role to our Weyl
transformation, which reduces the Hilbert space by excluding the
short-distance entanglement while keeping the long-range entanglement between
the AdS2 region and the reservoir unaffected.
Although the island formula has been widely accepted by the high energy theory
community since its discovery, there are still important criticisms [36, 37,
38, 39, 40, 41, 42] which remain to be properly addressed. For example, it was
shown in [36] that the setup where gravity stops at the common boundary of the
gravity region and the reservoir results in massive gravitons. Furthermore, in
[41] the authors considered a situation in which gravity enters the reservoir
while the island does not emerge to rescue unitarity. Also, in [42] an
inconsistency between the Gauss law and the island formula was pointed out in
a theory with long-range gravity. First of all, our island formula $II$ in
non-gravitational systems is definitely free from these criticisms, since
gravitation is not involved and the factorization of the Hilbert space is well
defined. Secondly, our conclusion that the special entanglement structure of
the gravity system should be responsible for the emergence of islands
indicates that neither the sharp boundary between gravity and a
non-gravitational reservoir nor a non-gravitational reservoir itself is
crucial for the emergence of islands. Our new perspective on the island
formula may shed light on explicitly addressing the above criticisms.
## Acknowledgements
We would like to thank Huajia Wang, and especially Hao Geng and Ling-yan Hung
very much for very insightful discussions. We also thank Hao Geng and Ling-yan
Hung for valuable comments and suggestions on the manuscript. Qiang Wen and
Shangjie Zhou are supported by the “Zhishan” scholars program of Southeast
University.
## Appendix A The cutoff spheres and their common tangent
The AdS3 metric in Poincaré coordinates $(t,x,z)$ is given by
$\displaystyle ds^{2}=\frac{-dt^{2}+dx^{2}+dz^{2}}{z^{2}}\,.$
(62)
In the light-cone coordinates
$\displaystyle U=\frac{x+t}{2},\quad
V=\frac{x-t}{2},\quad\rho=\frac{2}{z^{2}},$ (63)
the length of the geodesic connecting two spacelike-separated points is [72]
$\displaystyle L_{\mathrm{AdS}}$
$\displaystyle(U_{1},V_{1},\rho_{1},U_{2},V_{2},\rho_{2})$
$\displaystyle=\frac{1}{2}\log\left[\frac{\rho_{2}(\rho_{2}+X)+\rho_{1}(\rho_{2}Y(2\rho_{2}+X)+X)+(\rho_{1}+\rho_{2}\rho_{1}Y)^{2}}{2\rho_{1}\rho_{2}}\right]\,,$
(64)
where
$\displaystyle\begin{aligned} Y=&\,2(U_{1}-U_{2})(V_{1}-V_{2})\,,\\
X=&\,\sqrt{\rho_{1}^{2}+2\rho_{2}\rho_{1}(\rho_{1}Y-1)+(\rho_{2}+\rho_{1}\rho_{2}Y)^{2}}\,.\end{aligned}$
(65)
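The length formula (64)-(65) can be cross-checked against the standard chordal expression for geodesic lengths in Poincaré AdS3, $\cosh L = 1 + \left(\Delta x^{2}+(z_{1}-z_{2})^{2}\right)/(2z_{1}z_{2})$, evaluated at equal times. A numerical sketch with arbitrarily chosen bulk points:

```python
import math

def L_lightcone(x1, z1, x2, z2):
    # eqs. (64)-(65) at equal time: U = V = x/2, rho = 2/z^2
    r1, r2 = 2/z1**2, 2/z2**2
    Y = 2*((x1 - x2)/2)*((x1 - x2)/2)
    X = math.sqrt(r1**2 + 2*r2*r1*(r1*Y - 1) + (r2 + r1*r2*Y)**2)
    N = r2*(r2 + X) + r1*(r2*Y*(2*r2 + X) + X) + (r1 + r2*r1*Y)**2
    return 0.5*math.log(N/(2*r1*r2))

def L_chordal(x1, z1, x2, z2):
    # standard geodesic length between two bulk points of Poincare AdS3
    return math.acosh(1 + ((x1 - x2)**2 + (z1 - z2)**2)/(2*z1*z2))

for p1, p2 in [((0.0, 1.0), (1.4142, 0.5)), ((-0.7, 0.2), (2.1, 1.3))]:
    assert abs(L_lightcone(*p1, *p2) - L_chordal(*p1, *p2)) < 1e-9
```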
We look for the set of points $(0,x,z)$ whose geodesic distance from a fixed
point $(0,x_{0},\delta)$ is the constant
$|\phi(x_{0})|=\log\left(\frac{2|x_{0}|}{\delta}\right)-\kappa$.
The equation that $x$ and $z$ must satisfy is obtained straightforwardly
by applying eq. 64:
$\displaystyle\frac{1}{2}\log\left[\frac{\rho_{2}(\rho_{2}+X)+\rho_{1}(\rho_{2}Y(2\rho_{2}+X)+X)+(\rho_{1}+\rho_{2}\rho_{1}Y)^{2}}{2\rho_{1}\rho_{2}}\right]=\log\left(\frac{2|x_{0}|}{\delta}\right)-\kappa\,,$
(66)
which, after taking the limit $\delta\to 0$, simplifies to
$\displaystyle\left(x-x_{0}\right)^{2}+\left(z-|x_{0}|e^{-\kappa}\right)^{2}=|x_{0}|^{2}e^{-2\kappa}\,,$
(67)
which is a circle centered at $(x_{0},|x_{0}|e^{-\kappa})$ with radius
$r=|x_{0}|e^{-\kappa}$.
Figure 12: A generic cutoff sphere anchored at the boundary point $-x$ (with
$x>0$) and radius $r=xe^{-\kappa}$ is centered in the bulk at the point
$(-x,xe^{-\kappa})$. The tangent from $x=0$, shown by the red line, acts as a
cutoff brane equivalent to the EoW brane in the AdS/BCFT scenario.
In fig. 12, a generic cutoff sphere anchored at the point $-x$ ($x>0$) with
radius $r=xe^{-\kappa}$ is depicted. The tangent to the cutoff sphere from
$x=0$ is shown by the red line. We may obtain the angle $\theta_{0}$ of the
tangent line from the vertical as follows:
$\displaystyle\tan\left(\frac{\pi}{4}-\frac{\theta_{0}}{2}\right)=\frac{r}{|x|}=e^{-\kappa}\,.$
(68)
Hence, the hyperbolic angle $\rho$ for the tangent line is obtained as
$\displaystyle\rho=\text{arccosh}\left(\frac{1}{\cos\theta_{0}}\right)=\kappa\,,$
(69)
which confirms our claim that the cutoff brane obtained from the common
tangent line of all the cut-off spheres is equivalent to the end-of-the-world
brane in the AdS/BCFT setup.
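The chain (68)-(69) rests on the hyperbolic identity: if $\tan\left(\frac{\pi}{4}-\frac{\theta_{0}}{2}\right)=e^{-\kappa}$ then $\cos\theta_{0}=\text{sech}\,\kappa$, so $\rho=\text{arccosh}(1/\cos\theta_{0})=\kappa$. A quick numerical check over a few arbitrary values of $\kappa$:

```python
import math

for kappa in (0.1, 0.7, 2.3):   # arbitrary sample values
    # solve eq. (68) for the angle of the tangent line from the vertical
    theta0 = math.pi/2 - 2*math.atan(math.exp(-kappa))
    # eq. (69): the corresponding hyperbolic angle equals the Weyl parameter
    rho = math.acosh(1/math.cos(theta0))
    assert abs(rho - kappa) < 1e-9
```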
## References
* [1] S. W. Hawking, _Breakdown of predictability in gravitational collapse_ , Phys. Rev. D 14, 2460 (1976), 10.1103/PhysRevD.14.2460.
* [2] D. N. Page, _Information in black hole radiation_ , Phys. Rev. Lett. 71, 3743 (1993), 10.1103/PhysRevLett.71.3743.
* [3] D. N. Page, _Time Dependence of Hawking Radiation Entropy_ , JCAP 09, 028 (2013), 10.1088/1475-7516/2013/09/028, 1301.4995.
* [4] S. D. Mathur, _The Information paradox: A Pedagogical introduction_ , Class. Quant. Grav. 26, 224001 (2009), 10.1088/0264-9381/26/22/224001, 0909.1038.
* [5] A. Almheiri, D. Marolf, J. Polchinski and J. Sully, _Black Holes: Complementarity or Firewalls?_ , JHEP 02, 062 (2013), 10.1007/JHEP02(2013)062, 1207.3123.
* [6] J. M. Maldacena, _The Large N limit of superconformal field theories and supergravity_ , Adv. Theor. Math. Phys. 2, 231 (1998), 10.1023/A:1026654312961, hep-th/9711200.
* [7] S. S. Gubser, I. R. Klebanov and A. M. Polyakov, _Gauge theory correlators from noncritical string theory_ , Phys. Lett. B 428, 105 (1998), 10.1016/S0370-2693(98)00377-3, hep-th/9802109.
* [8] E. Witten, _Anti-de Sitter space and holography_ , Adv. Theor. Math. Phys. 2, 253 (1998), 10.4310/ATMP.1998.v2.n2.a2, hep-th/9802150.
* [9] S. Ryu and T. Takayanagi, _Holographic derivation of entanglement entropy from AdS/CFT_ , Phys. Rev. Lett. 96, 181602 (2006), 10.1103/PhysRevLett.96.181602, hep-th/0603001.
* [10] S. Ryu and T. Takayanagi, _Aspects of Holographic Entanglement Entropy_ , JHEP 08, 045 (2006), 10.1088/1126-6708/2006/08/045, hep-th/0605073.
* [11] A. Lewkowycz and J. Maldacena, _Generalized gravitational entropy_ , JHEP 08, 090 (2013), 10.1007/JHEP08(2013)090, 1304.4926.
* [12] T. Faulkner, A. Lewkowycz and J. Maldacena, _Quantum corrections to holographic entanglement entropy_ , JHEP 11, 074 (2013), 10.1007/JHEP11(2013)074, 1307.2892.
* [13] N. Engelhardt and A. C. Wall, _Quantum Extremal Surfaces: Holographic Entanglement Entropy beyond the Classical Regime_ , JHEP 01, 073 (2015), 10.1007/JHEP01(2015)073, 1408.3203.
* [14] A. C. Wall, _Maximin Surfaces, and the Strong Subadditivity of the Covariant Holographic Entanglement Entropy_ , Class. Quant. Grav. 31(22), 225007 (2014), 10.1088/0264-9381/31/22/225007, 1211.3494.
* [15] X. Dong, A. Lewkowycz and M. Rangamani, _Deriving covariant holographic entanglement_ , JHEP 11, 028 (2016), 10.1007/JHEP11(2016)028, 1607.07506.
* [16] C. Akers, N. Engelhardt, G. Penington and M. Usatyuk, _Quantum Maximin Surfaces_ , JHEP 08, 140 (2020), 10.1007/JHEP08(2020)140, 1912.02799.
* [17] X. Dong and A. Lewkowycz, _Entropy, Extremality, Euclidean Variations, and the Equations of Motion_ , JHEP 01, 081 (2018), 10.1007/JHEP01(2018)081, 1705.08453.
* [18] V. E. Hubeny, M. Rangamani and T. Takayanagi, _A Covariant holographic entanglement entropy proposal_ , JHEP 07, 062 (2007), 10.1088/1126-6708/2007/07/062, 0705.0016.
* [19] G. Penington, _Entanglement Wedge Reconstruction and the Information Paradox_ , JHEP 09, 002 (2020), 10.1007/JHEP09(2020)002, 1905.08255.
* [20] A. Almheiri, N. Engelhardt, D. Marolf and H. Maxfield, _The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole_ , JHEP 12, 063 (2019), 10.1007/JHEP12(2019)063, 1905.08762.
* [21] M. Rozali, J. Sully, M. Van Raamsdonk, C. Waddell and D. Wakeham, _Information radiation in BCFT models of black holes_ , JHEP 05, 004 (2020), 10.1007/JHEP05(2020)004, 1910.12836.
* [22] H. Z. Chen, Z. Fisher, J. Hernandez, R. C. Myers and S.-M. Ruan, _Information Flow in Black Hole Evaporation_ , JHEP 03, 152 (2020), 10.1007/JHEP03(2020)152, 1911.03402.
* [23] A. Almheiri, R. Mahajan, J. Maldacena and Y. Zhao, _The Page curve of Hawking radiation from semiclassical geometry_ , JHEP 03, 149 (2020), 10.1007/JHEP03(2020)149, 1908.10996.
* [24] A. Almheiri, R. Mahajan and J. E. Santos, _Entanglement islands in higher dimensions_ , SciPost Phys. 9(1), 001 (2020), 10.21468/SciPostPhys.9.1.001, 1911.09666.
* [25] A. Almheiri, R. Mahajan and J. Maldacena, _Islands outside the horizon_ (2019), 1910.11077.
* [26] A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian and A. Tajdini, _Replica Wormholes and the Entropy of Hawking Radiation_ , JHEP 05, 013 (2020), 10.1007/JHEP05(2020)013, 1911.12333.
* [27] G. Penington, S. H. Shenker, D. Stanford and Z. Yang, _Replica wormholes and the black hole interior_ , JHEP 03, 205 (2022), 10.1007/JHEP03(2022)205, 1911.11977.
* [28] Y. Chen, _Pulling Out the Island with Modular Flow_ , JHEP 03, 033 (2020), 10.1007/JHEP03(2020)033, 1912.02210.
* [29] Y. Chen, X.-L. Qi and P. Zhang, _Replica wormhole and information retrieval in the SYK model coupled to Majorana chains_ , JHEP 06, 121 (2020), 10.1007/JHEP06(2020)121, 2003.13147.
* [30] H. Z. Chen, R. C. Myers, D. Neuenfeld, I. A. Reyes and J. Sandor, _Quantum Extremal Islands Made Easy, Part II: Black Holes on the Brane_ , JHEP 12, 025 (2020), 10.1007/JHEP12(2020)025, 2010.00018.
* [31] J. Hernandez, R. C. Myers and S.-M. Ruan, _Quantum extremal islands made easy. Part III. Complexity on the brane_ , JHEP 02, 173 (2021), 10.1007/JHEP02(2021)173, 2010.16398.
* [32] G. Grimaldi, J. Hernandez and R. C. Myers, _Quantum extremal islands made easy. Part IV. Massive black holes on the brane_ , JHEP 03, 136 (2022), 10.1007/JHEP03(2022)136, 2202.00679.
* [33] I. Akal, Y. Kusuki, N. Shiba, T. Takayanagi and Z. Wei, _Entanglement Entropy in a Holographic Moving Mirror and the Page Curve_ , Phys. Rev. Lett. 126(6), 061604 (2021), 10.1103/PhysRevLett.126.061604, 2011.12005.
* [34] F. Deng, J. Chu and Y. Zhou, _Defect extremal surface as the holographic counterpart of Island formula_ , JHEP 03, 008 (2021), 10.1007/JHEP03(2021)008, 2012.07612.
* [35] T. Anous, M. Meineri, P. Pelliconi and J. Sonner, _Sailing past the End of the World and discovering the Island_ , SciPost Phys. 13(3), 075 (2022), 10.21468/SciPostPhys.13.3.075, 2202.11718.
* [36] H. Geng and A. Karch, _Massive islands_ , JHEP 09, 121 (2020), 10.1007/JHEP09(2020)121, 2006.02438.
* [37] A. Karlsson, _Replica wormhole and island incompatibility with monogamy of entanglement_ (2020), 2007.10523.
* [38] S. Raju, _Lessons from the information paradox_ , Phys. Rept. 943, 2187 (2022), 10.1016/j.physrep.2021.10.001, 2012.05770.
* [39] S. Raju, _Failure of the split property in gravity and the information paradox_ , Class. Quant. Grav. 39(6), 064002 (2022), 10.1088/1361-6382/ac482b, 2110.05470.
* [40] A. Laddha, S. G. Prabhu, S. Raju and P. Shrivastava, _The Holographic Nature of Null Infinity_ , SciPost Phys. 10(2), 041 (2021), 10.21468/SciPostPhys.10.2.041, 2002.02448.
* [41] H. Geng, A. Karch, C. Perez-Pardavila, S. Raju, L. Randall, M. Riojas and S. Shashi, _Information Transfer with a Gravitating Bath_ , SciPost Phys. 10(5), 103 (2021), 10.21468/SciPostPhys.10.5.103, 2012.04671.
* [42] H. Geng, A. Karch, C. Perez-Pardavila, S. Raju, L. Randall, M. Riojas and S. Shashi, _Inconsistency of islands in theories with long-range gravity_ , JHEP 01, 182 (2022), 10.1007/JHEP01(2022)182, 2107.03390.
* [43] A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian and A. Tajdini, _The entropy of Hawking radiation_ , Rev. Mod. Phys. 93(3), 035002 (2021), 10.1103/RevModPhys.93.035002, 2006.06872.
* [44] R. Bousso, X. Dong, N. Engelhardt, T. Faulkner, T. Hartman, S. H. Shenker and D. Stanford, _Snowmass White Paper: Quantum Aspects of Black Holes and the Emergence of Spacetime_ (2022), 2201.03096.
* [45] R. Bousso, B. Freivogel, S. Leichenauer, V. Rosenhaus and C. Zukowski, _Null Geodesics, Local CFT Operators and AdS/CFT for Subregions_ , Phys. Rev. D 88, 064057 (2013), 10.1103/PhysRevD.88.064057, 1209.4641.
* [46] B. Czech, J. L. Karczmarek, F. Nogueira and M. Van Raamsdonk, _The Gravity Dual of a Density Matrix_ , Class. Quant. Grav. 29, 155009 (2012), 10.1088/0264-9381/29/15/155009, 1204.1330.
* [47] M. Headrick, V. E. Hubeny, A. Lawrence and M. Rangamani, _Causality & holographic entanglement entropy_, JHEP 12, 162 (2014), 10.1007/JHEP12(2014)162, 1408.6300.
* [48] A. Almheiri, X. Dong and D. Harlow, _Bulk Locality and Quantum Error Correction in AdS/CFT_ , JHEP 04, 163 (2015), 10.1007/JHEP04(2015)163, 1411.7041.
* [49] X. Dong, D. Harlow and A. C. Wall, _Reconstruction of Bulk Operators within the Entanglement Wedge in Gauge-Gravity Duality_ , Phys. Rev. Lett. 117(2), 021601 (2016), 10.1103/PhysRevLett.117.021601, 1601.05416.
* [50] D. Harlow, _The Ryu–Takayanagi Formula from Quantum Error Correction_ , Commun. Math. Phys. 354(3), 865 (2017), 10.1007/s00220-017-2904-z, 1607.03901.
* [51] R. Renner and J. Wang, _The black hole information puzzle and the quantum de Finetti theorem_ (2021), 2110.14653.
# Electron Dynamics at High-Energy Densities in Nickel from Non-linear
Resonant X-ray Absorption Spectra
Robin Y. Engel Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607
Hamburg, Germany Department of Physics, Universität Hamburg, Luruper Chaussee
149, 22761 Hamburg, Germany Oliver Alexander Imperial College London,
Department of Physics, Exhibition Rd, London SW7 2BX, United Kingdom Kaan
Atak Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg,
Germany Uwe Bovensiepen Faculty of Physics and Center for Nanointegration
Duisburg-Essen (CENIDE), University of Duisburg-Essen, Lotharstr. 1, 47057
Duisburg, Germany Institute for Solid State Physics, University of Tokyo,
Kashiwa, Chiba, 277-8581, Japan Jens Buck Deutsches Elektronen-Synchrotron
DESY, Notkestr. 85, 22607 Hamburg, Germany Christian-Albrechts-Universität zu
Kiel, Institut für Experimentelle und Angewandte Physik, Leibnizstraße 11-19,
24118 Kiel, Germany Robert Carley European XFEL, Holzkoppel 4, 22869
Schenefeld, Germany Michele Cascella MAX IV Laboratory, Lund University, PO
Box 118, SE-221 00 Lund, Sweden Valentin Chardonnet Sorbonne Université,
CNRS, Laboratoire de Chimie Physique-Matière et Rayonnement, 4 Pl. Jussieu
Barre 43-44, 75005 Paris, France Gheorghe Sorin Chiuzbaian Sorbonne
Université, CNRS, Laboratoire de Chimie Physique-Matière et Rayonnement, 4 Pl.
Jussieu Barre 43-44, 75005 Paris, France Christian David Paul Scherrer
Institut, Forschungsstrasse 111, 5232 Villigen, Switzerland Florian Döring
Paul Scherrer Institut, Forschungsstrasse 111, 5232 Villigen, Switzerland
Andrea Eschenlohr Faculty of Physics and Center for Nanointegration Duisburg-
Essen (CENIDE), University of Duisburg-Essen, Lotharstr. 1, 47057 Duisburg,
Germany Natalia Gerasimova European XFEL, Holzkoppel 4, 22869 Schenefeld,
Germany Frank de Groot Utrecht University, Debye Institute for Nanomaterials
Science, Inorganic Chemistry and Catalysis, Princetonplein 1, Universiteitsweg
99, 3584 CC Utrecht, Netherlands Loïc Le Guyader European XFEL, Holzkoppel
4, 22869 Schenefeld, Germany Oliver S. Humphries European XFEL, Holzkoppel
4, 22869 Schenefeld, Germany Manuel Izquierdo European XFEL, Holzkoppel 4,
22869 Schenefeld, Germany Emmanuelle Jal Sorbonne Université, CNRS,
Laboratoire de Chimie Physique-Matière et Rayonnement, 4 Pl. Jussieu Barre
43-44, 75005 Paris, France Adam Kubec Paul Scherrer Institut,
Forschungsstrasse 111, 5232 Villigen, Switzerland Tim Laarmann Deutsches
Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany The Hamburg
Centre for Ultrafast Imaging CUI, Luruper Chaussee 149, 22761 Hamburg, Germany
Charles-Henri Lambert Department of Materials, ETH Zurich, 8093 Zurich,
Switzerland Jan Lüning Helmholtz-Zentrum Berlin für Materialien und Energie
GmbH, Hahn-Meitner-Platz 1, 14109 Berlin, Germany Jonathan P. Marangos
Imperial College London, Department of Physics, Exhibition Rd, London SW7 2BX,
United Kingdom Laurent Mercadier European XFEL, Holzkoppel 4, 22869
Schenefeld, Germany Giuseppe Mercurio European XFEL, Holzkoppel 4, 22869
Schenefeld, Germany Piter S. Miedema Deutsches Elektronen-Synchrotron DESY,
Notkestr. 85, 22607 Hamburg, Germany Katharina Ollefs Faculty of Physics and
Center for Nanointegration Duisburg-Essen (CENIDE), University of Duisburg-
Essen, Lotharstr. 1, 47057 Duisburg, Germany Bastian Pfau Max Born Institute
for Nonlinear Optics and Short Pulse Spectroscopy, Max-Born-Str. 2A, 12489
Berlin, Germany Benedikt Rösner Paul Scherrer Institut, Forschungsstrasse
111, 5232 Villigen, Switzerland Kai Rossnagel Deutsches Elektronen-
Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany Christian-Albrechts-
Universität zu Kiel, Institut für Experimentelle und Angewandte Physik,
Leibnizstraße 11-19, 24118 Kiel, Germany Nico Rothenbach Faculty of Physics
and Center for Nanointegration Duisburg-Essen (CENIDE), University of
Duisburg-Essen, Lotharstr. 1, 47057 Duisburg, Germany Andreas Scherz
European XFEL, Holzkoppel 4, 22869 Schenefeld, Germany Justine Schlappa
European XFEL, Holzkoppel 4, 22869 Schenefeld, Germany Markus Scholz
Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany
European XFEL, Holzkoppel 4, 22869 Schenefeld, Germany Jan O. Schunck
Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany
Department of Physics, Universität Hamburg, Luruper Chaussee 149, 22761
Hamburg, Germany Kiana Setoodehnia European XFEL, Holzkoppel 4, 22869
Schenefeld, Germany Christian Stamm Department of Materials, ETH Zurich,
8093 Zurich, Switzerland Institute for Electric Power Systems, University of
Applied Sciences and Arts Northwestern Switzerland, 5210 Windisch, Switzerland
Simone Techert Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607
Hamburg, Germany Institute for X-ray Physics, Goettingen University,
Friedrich Hund Platz 1, 37077 Goettingen, Germany Sam M. Vinko Department of
Physics, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1
3PU, United Kingdom Central Laser Facility, STFC Rutherford Appleton
Laboratory, Didcot OX11 0QX, United Kingdom Heiko Wende Faculty of Physics
and Center for Nanointegration Duisburg-Essen (CENIDE), University of
Duisburg-Essen, Lotharstr. 1, 47057 Duisburg, Germany Alexander A.
Yaroslavtsev Uppsala University, Department of Physics and Astronomy,
Regementsvägen 1 Uppsala, Sweden Zhong Yin International Center for
Synchrotron Radiation Innovation Smart, Tohoku University, 2-1-1 Katahira,
Aoba-ku, Sendai, Miyagi 980-8577, Japan ETH Zürich, Laboratorium für
Physikalische Chemie, Vladimir-Prelog-Weg 1-5, 8093 Zürich, Switzerland
Martin Beye Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607
Hamburg, Germany Department of Physics, Universität Hamburg, Luruper Chaussee
149, 22761 Hamburg, Germany
###### Abstract
The pulse intensity from X-ray free-electron lasers (FELs) can create extreme
excitation densities in solids, entering the regime of non-linear X-ray-matter
interactions. We show $L_{3}$-edge absorption spectra of metallic nickel thin
films with fluences entering a regime where several X-ray photons are incident
per absorption cross-section. Main features of the observed non-linear
spectral changes are described with a predictive rate model for electron
population dynamics during the pulse, utilizing a fixed density of states and
tabulated ground-state properties.
The modern understanding of complex solid materials relies on appropriate
approximations to the unabridged quantum mechanical description of the full,
correlated many-body problem. To assess the predictive power of theoretical
models and the selected approximations, detailed experimental studies far away
from known territory are especially insightful. Absorbing the high power
densities available from an X-ray free-electron laser (FEL) in a solid metal
generates a very unusual state of warm dense matter far from equilibrium:
Individual electronic excitations reach up to hundreds of eV and excitation
levels average out to many eV per atom [1, 2, 3, 4, 5, 6, 7, 8, 9]. As the
absorption of an intense X-ray pulse depends on the changes it drives in the
electronic system [10, 11, 12, 13, 14, 15, 16], a single-pulse non-linear
absorption measurement can be used to investigate its evolution on the
timescale of the pulse duration.
We present fluence-dependent X-ray absorption spectra recorded with
monochromatic X-rays on metallic nickel thin films around the nickel
2$p_{3/2}$ ($L_{3}$) edge, revealing a changing valence electron system around
the Fermi level as a consequence of the high excitation densities from
fluences up to 60 $\mathrm{J/cm^{2}}$ (corresponding to $2\times 10^{15}$ $\mathrm{W/cm^{2}}$).
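As a quick consistency check (simple arithmetic, not taken from the paper's analysis), the quoted peak intensity follows from dividing the fluence by the pulse duration, and the photon areal density times an assumed resonant cross-section of order 10 Mb puts several photons within one cross-section:

```python
# Back-of-the-envelope check; cross-section value is an illustrative assumption.
fluence = 60.0            # J/cm^2, maximum fluence quoted in the text
pulse_fwhm = 30e-15       # s, FEL pulse duration (FWHM)
intensity = fluence / pulse_fwhm                     # W/cm^2
print(f"peak intensity ~ {intensity:.0e} W/cm^2")    # -> 2e+15 W/cm^2

# Photon areal density at the Ni L3 resonance (~853 eV per photon) times an
# assumed cross-section of order 10 Mb (1 Mb = 1e-18 cm^2):
e_photon = 853.0 * 1.602e-19    # J per photon
sigma = 1e-17                   # cm^2, illustrative order of magnitude
photons_per_sigma = (fluence / e_photon) * sigma
print(f"~{photons_per_sigma:.1f} photons per cross-section")
```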
The electronic processes that ensue after the absorption of photons at core
levels trigger a complex dynamical process that is challenging to treat in
purely ab-initio simulations [17, 18, 19, 20, 21]. Here, we take an
alternative approach and develop a simple rate equation model that provides an
intuitive understanding of the relevant processes [22]. The resulting picture
of the evolution of electronic populations within a fixed ground-state density
of states successfully describes the largest part of the non-linear changes in
the spectra. This corroborates the dominant impact of electron redistribution
from the strong non-equilibrium state towards a thermalized electronic system.
Some of the observed changes, especially in the close vicinity of the
resonance, deviate from the predictions of the rate model and call for more
evolved theories. Here, our work provides a benchmark to identify observations
of advanced physical processes and effects. While this letter discusses the
experiment and resulting insights, we lay out the framework of the model in
detail in a separate publication [22].
Additionally, our straightforward picture of intense core-resonant X-ray pulse
interaction with the valence system of a 3$d$ metal lays a solid knowledge-
based foundation for the planning and interpretation of non-linear X-ray
spectroscopy experiments at FELs; in particular, the relevance of electronic
scattering processes observed here is expected to affect methods relying on
stimulated emission from core excitations and X-ray or X-ray/optical wave-
mixing [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38].
Figure 1: (a&b) Sketch of absorption at different fluences. The unoccupied
states determine the XAS spectrum as they are probed by core-resonant photons.
(a) In the low-fluence case (blue unoccupied states and resulting spectrum),
the electronic system mostly remains in the ground state. (b) In the high-
fluence case (yellow unoccupied states and spectrum), later parts of the X-ray
pulse probe a hot electronic system and experience spectral bleaching at the
probed photon energy.
(c) Setup for non-linear XAS: the split-beam-normalization scheme uses a
special zone plate [39], which generates two adjacent beam foci for
transmission through the sample and a reference membrane before the beams
impinge on the detector.
X-ray absorption spectra of the nickel 2$p_{3/2}$ ($L_{3}$) edge were recorded
at the Spectroscopy and Coherent Scattering Instrument (SCS) of the European
XFEL [40].
The XAS spectra were measured by continuously scanning the SASE3 monochromator
[41] (synchronized with the undulator gap) back-and-forth many times in the
range 846-856 eV. The photon bandwidth was about 420 meV and the FEL pulse
duration on the sample was about 30 fs FWHM. The overall beam intensity was
controlled using a gas attenuator filled with nitrogen and monitored using an
X-ray gas-monitor (XGM) downstream of the monochromator [42, 43].
For X-ray absorption measurements at FELs based on Self-Amplified Spontaneous
Emission (SASE), beam-splitting schemes can deliver optimal normalization of
SASE-fluctuations [44, 45, 46]. Here, a focusing and beam-splitting zone plate
also creates the required tight focusing to achieve extreme fluences. Figure 1
shows the schematic experimental layout.
The zone plate combines an off-axis Fresnel structure for focusing and a line
grating for beam-splitting in a single optical element [39]. It thus produces
two $\mu$m-sized, identical foci in the sample plane, 1.9 mm apart,
originating from the first-order diffraction of the zone plate, as well as the
positive and negative first orders of the line grating. The sample has a
square support of 25 mm size, containing Si$_3$N$_4$ membrane windows (orange in
Figure 1) of 0.5 mm size and 200 nm thickness with a distance of 2 mm between
adjacent windows. Every second pair of rows (blue in Figure 1) was
additionally coated with a 20 $\mathrm{nm}$ sample layer of polycrystalline
metallic Ni by sputter deposition, on top of a 2 $\mathrm{nm}$ bonding layer
of Ta; a 2 $\mathrm{nm}$ Pt capping layer prevents oxidation during sample-
handling.
The sample frame was positioned such that one zone plate focus impinged on a
nickel-coated membrane, while the other hit a bare silicon-nitride membrane.
Thus, the difference in transmission of both beams can be attributed solely to
the nickel film.
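The split-beam normalization can be written compactly: the film's optical density follows from the ratio of the two foci's transmitted intensities. The sketch below is illustrative (variable names and the static-imbalance correction are assumptions, not the authors' code):

```python
import numpy as np

def absorbance_mOD(i_sample, i_reference, i0_sample=1.0, i0_reference=1.0):
    """Optical density (in mOD) of the Ni film from the two split-beam foci.

    i_sample / i_reference: detector intensities behind the Ni-coated and the
    bare membrane; the i0 arguments correct for any static imbalance between
    the two foci, e.g. measured with both beams on bare membranes.
    """
    transmission = (i_sample / i_reference) * (i0_reference / i0_sample)
    return -1000.0 * np.log10(transmission)

# 40% transmission through the film corresponds to roughly 398 mOD:
print(absorbance_mOD(0.4, 1.0))
```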
The detector was a fast readout-speed charge-coupled device (FastCCD) with
high dynamic range, enabling 10 Hz read-out and increasing the fluence range
available to the experiment [47, 48, 49]. Due to the unstable detector
temperature, significant retroactive calibration of the detector was necessary
(see supplement). To prevent detector saturation, an additional aluminum
filter of about 13 $\mathrm{\mu m}$ thickness was used between sample and
detector for measurements with the unattenuated beam.
During these high-intensity measurements, sample and reference films were
locally damaged by intense individual FEL shots. Thus, the FEL was operated in
single-shot mode at 10 Hz repetition rate, and the sample was scanned through
the beam continuously at 0.5 $\mathrm{mm\cdot s^{-1}}$, resulting in 10 shots
per membrane window.
The shot craters in the reference membranes were later analyzed with scanning
electron microscopy (SEM) to determine the effective focal size at specific
photon energies. The resulting spot sizes were used to calibrate ray-tracing
calculations which delivered the photon-energy-dependent spot size, ranging
from 0.4 $\mathrm{\mu m^{2}}$ to about 3 $\mathrm{\mu m^{2}}$ (see supplement
for details on the spot size determination).
Figure 2: Fluence-dependent Ni $L_{3}$-edge spectra, measured (top) and
simulated (bottom). The fluence of events contributing to each spectrum is
given in the legend in terms of mean and standard deviation. Dashed simulated
spectra do not have a corresponding measurement. The regions of interest from
which absorbance changes shown in panels b), d), and e) of Figure 3 were
quantified are shaded and labeled (I), (II), and (III), respectively. The
error bars are shown for the measured spectra and represent the 95% confidence
intervals for each bin of 102 meV width; solid lines of the measured spectra
are smoothed using a Savitzky-Golay filter with windows of 21 bins and
4th-order polynomials. The experimental spectra are vertically offset by 100 mOD.
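The smoothing described in the caption maps directly onto SciPy's `savgol_filter`; the 21-bin window and 4th-order polynomial are the caption's values, while the synthetic spectrum below is a stand-in for the measured data:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic stand-in for a measured spectrum: a resonance peak plus noise.
rng = np.random.default_rng(0)
energy = np.linspace(846, 856, 500)                 # eV, scan range from the text
peak = np.exp(-(energy - 853.0) ** 2 / 0.5)
spectrum = peak + 0.02 * rng.standard_normal(energy.size)

# 21-bin window and 4th-order polynomial, as stated in the figure caption.
smoothed = savgol_filter(spectrum, window_length=21, polyorder=4)
```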
Figure 2 shows the spectra for the nickel $L_{3}$-edge next to simulated
spectra for increasing X-ray fluence over more than three orders of magnitude,
from 0.03 to 60 $\mathrm{J/cm^{2}}$. Each measured point represents an average of several
FEL shots, sorted by X-ray fluence and photon energy. The varying statistical
uncertainty is a result of the pulse intensity fluctuations of monochromatized
SASE radiation [50] in combination with photon energy-dependent spot sizes
(see supplement for details on the shot sorting).
We observe four main fluence-dependent effects, which we quantify and compare
to the simulated results in Figure 3: a) a red-shift of the absorption peak of
up to 0.9 $\pm$0.1 eV in the rising flank; b) an increase of the pre-edge
absorbance, as the rising edge of the absorption peak shifts and broadens; c)
a reduced peak absorbance and d), e) a reduced post-edge absorbance. The
integration regions from which the effects b), d) and e) are derived, are
highlighted in Figure 2 as (I), (II) and (III), respectively. The shift of the
absorption edge is quantified by the photon energy at which the absorbance
reaches half of the peak value; its uncertainty is propagated from the
statistical uncertainty of the absorption peak measurement.
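The half-maximum edge metric described above can be implemented with a simple linear interpolation on the rising flank. This is an illustrative helper on a model sigmoid edge, not the authors' analysis code:

```python
import numpy as np

def edge_position(energy, absorbance):
    """Photon energy where the absorbance first reaches half of its peak,
    found by linear interpolation (assumes the scan starts below half-max)."""
    half = 0.5 * absorbance.max()
    idx = np.argmax(absorbance >= half)      # first bin at/above half-maximum
    e0, e1 = energy[idx - 1], energy[idx]
    a0, a1 = absorbance[idx - 1], absorbance[idx]
    return e0 + (half - a0) * (e1 - e0) / (a1 - a0)

e = np.linspace(846, 856, 1001)
spec = 1.0 / (1.0 + np.exp(-(e - 852.8) / 0.3))   # model sigmoid edge
print(edge_position(e, spec))                     # close to 852.8 eV
```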
Before we analyze these observations in detail, let us quickly paraphrase our
modeling approach [22]: In contrast to earlier rate models [51, 12], we
describe the evolution of the electronic system with an energy-resolved
population of the valence band. Tracking the full non-thermal population
history proved crucial to describe the non-linear absorption changes near and
around the Fermi level. As coupling between electrons and phonons in metals is
typically not yet important on the timescale of 30 fs [52, 53, 54] and we do
not account for collective electron correlation effects, we test the
approximation that the Density of States (DoS) remains unchanged within the
pulse duration.
Transition rates between electronic states are determined by scaling ground
state rates with the populations of initial and final states. The relevant
process rates are compiled into differentials of electronic populations and
photon density in space and time and implemented in a finite-element
simulation to derive the electron population history and ultimately the X-ray
transmission of a three-dimensional sample.
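The population scaling described here can be stated in one line: a ground-state transition rate between two states is weighted by the occupation of the initial state and the Pauli-blocking vacancy of the final state (a minimal illustration, not the full model of Ref. [22]):

```python
def scaled_rate(rate_ground, f_initial, f_final):
    """Transition rate between two electronic states: the ground-state rate
    weighted by the initial-state occupation f_initial and the final-state
    vacancy (1 - f_final)."""
    return rate_ground * f_initial * (1.0 - f_final)

# A full initial and empty final state recover the ground-state rate;
# a filled final state blocks the transition entirely.
print(scaled_rate(0.5, 1.0, 0.0))   # -> 0.5
print(scaled_rate(0.5, 1.0, 1.0))   # -> 0.0
```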
The model implements the processes of resonant absorption from the 2$p_{3/2}$
core level and non-resonant absorption from other (mostly valence) electrons.
Stimulated emission is described as a time-inverted resonant absorption
process. Electronic thermalization is modeled with a bulk timescale
$\tau_{\mathrm{th}}$ (essentially quantifying electron-electron scattering)
that moves the non-thermal valence electron distribution towards a target
Fermi-Dirac distribution that corresponds to the momentary internal energy and
population of the valence band. Finally, scattering cascades initiated by fast
Auger-electrons and photo-electrons from non-resonant absorption are
parameterized by another scattering time $\tau_{\mathrm{scatt}}$.
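The processes above can be condensed into a toy time-stepping loop for the energy-resolved valence occupation. Apart from $\tau_{\mathrm{th}}$, all parameters (flat DoS, pump rate, fixed 2 eV target distribution) are invented for illustration; the real model is spatially resolved, uses the calculated Ni DoS, and recomputes the target Fermi-Dirac distribution from the momentary energy and particle content:

```python
import numpy as np

E = np.linspace(-10, 15, 251)       # eV relative to the Fermi level
f = (E < 0).astype(float)           # ground-state occupation at T ~ 0

tau_th = 6.0                        # fs, valence thermalization time
dt = 0.1                            # fs, time step
pump_rate = 0.02                    # 1/fs, resonant pumping (illustrative)
E_res = 7.0                         # eV, resonant final-state energy

for step in range(300):             # 30 fs pulse window
    # Resonant absorption fills states near E_res, Pauli-blocked by (1 - f);
    # stimulated emission would enter as the time-inverted term.
    window = np.exp(-(E - E_res) ** 2 / 0.2)
    df_pump = pump_rate * window * (1.0 - f)
    # Relaxation-time approximation toward a hot Fermi-Dirac target
    # (held fixed at 2 eV here for simplicity).
    f_target = 1.0 / (1.0 + np.exp(E / 2.0))
    df_th = (f_target - f) / tau_th
    f = np.clip(f + dt * (df_pump + df_th), 0.0, 1.0)
```

After the loop, a non-thermal bump sits at the resonant energy on top of the heated distribution, mirroring the bleaching feature highlighted in Figure 4 b).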
With this simple description of the underlying processes, we provide a
microscopic picture of the electronic system and its interaction with resonant
X-rays as a complementary approach to more complex calculations [20, 21].
Solely considering the population dynamics of the electronic system, the
simulation already achieves good agreement with the experimental data across
more than three orders of magnitude in fluence. This is particularly
remarkable since nearly all input parameters are experimental parameters or
well-known ground-state properties of the material, such as density,
electronic configuration, and ground-state spectrum. Only the valence
thermalization time $\tau_{\mathrm{th}}$ and electron scattering time
$\tau_{\mathrm{scatt}}$ were varied to achieve the best match to the experimental
results. The found value $\tau_{\mathrm{th}}=6$ fs compares well to recent
estimates for excitations on this energy scale [33, 38, 55, 56].
The time constant $\tau_{\mathrm{scatt}}=1.5$ fs characterizes the secondary
electron scattering cascade which transfers energy (and population) from fast
electrons to (unoccupied) valence states. The constant summarizes many
individual electron scattering events and compares to the tabulated time
between individual collisions in ground-state nickel of roughly 100
attoseconds [57].
Figure 3: Comparison of spectral effects between simulation (blue lines) and
experiment (orange lines with error bars). The shift of absorption edge in
panel a) represents the photon energy at which the half-maximum of the
absorption peak is reached. The absorbance changes in panels b), d) and e) are
integrated from the gray shaded regions in Figure 2, while panel c) shows the
global maximum of the spectrum. Figure 4: Evolution of electronic populations
(simulation) in a single voxel at the sample surface for a pulse of 858.3 eV,
with a fluence of 30 $\mathrm{J/cm^{2}}$. Panel a) shows the total DoS
used as an input for the simulation. Panel b) shows the energy-resolved
(relative to the Fermi energy) occupation of the valence band over time. The
population (in electrons/atom/eV) is the product of the DoS and the
occupation. The thermalized valence occupation lags a few femtoseconds behind
the current chemical potential $\mu$; the temperature $T$ of the valence
system rises rapidly, ultimately reaching up to 25 eV. The bleaching of
valence states (highlighted with a blue dotted ellipse) is visible as a high
non-thermal population at the resonant photon energy around 7 eV above the
Fermi level. Panel c) shows the number of core holes and free electrons over
time, as well as the number of electrons in the valence system below and above
the Fermi energy.
Figure 4 shows an example of a simulated valence band population history,
specifically from the uppermost 4 Å thick layer of the sample, excited with a
Gaussian pulse profile centered around $t=0$ with 30 fs FWHM duration and 30
J/cm2 fluence. While panel a) shows the calculated DoS as used by the
simulation and published in [58, 59], the colormap in b) shows the occupation
of these states over time. Panel c) shows the number of electrons per atom in
the valence band below and above the Fermi level (blue solid and dashed
curves, respectively) as well as the average number of core holes and the
number of free electrons over time. Even though the direct interaction with
the photons creates core holes via resonant absorption and free electrons via
non-resonant absorption, the excitation energy of both processes is so quickly
transferred to the valence electrons that only the valence electron
distribution ever deviates strongly from the ground state. By the end of the
pulse in this example, more than half of the $3d$ valence electrons are
excited to valence states above the Fermi level, while the highest
instantaneous number of core holes was only about one per 100 atoms, as shown
in Figure 4 c). Due to the small-bandwidth excitation, the core- and resonant
valence states operate like a two-level system. Since the number of resonant
valence states is small in comparison to the number of core electrons, the
resonant absorption process saturates due to occupied valence states long
before the core level is depleted. A heated Fermi-Dirac distribution further
contributes to the occupation of states above the Fermi level.
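The saturation argument can be illustrated with a two-level toy model: resonant absorption scales with the core occupation times the valence vacancy, and the scarce resonant valence states fill long before the core level is noticeably depleted. All numbers below are invented for illustration, not fitted to the experiment:

```python
n_core = 1.0                 # normalized 2p3/2 occupation
f_res = 0.0                  # occupation of the resonant valence states
dt, rate = 0.1, 0.05         # fs, 1/fs (illustrative)
degeneracy_ratio = 5.0       # assumed: core states outnumber resonant
                             # valence states by this factor

for _ in range(300):         # 30 fs pulse
    absorbed = rate * n_core * (1.0 - f_res) * dt
    f_res = min(1.0, f_res + degeneracy_ratio * absorbed)
    n_core -= absorbed

# Valence states nearly full, core level only mildly depleted:
print(f_res, 1.0 - n_core)
```

The loop shows absorption shutting off through valence filling rather than core depletion, the same hierarchy the text infers from the simulation.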
Since in our experiment, the same monochromatic pulse excites and probes the
sample, the situation is different for energies below the edge: absorption
only rises after non-resonant absorption has led to sufficient electronic
heating until the tail of the hot hole distribution reaches the probed energy.
Only then, additional resonant absorption begins to occur and accelerates
further electronic heating and in turn additional pre-edge absorption. Since
this process occurs exponentially faster near the absorption edge, it
contributes significantly to the observed spectral red-shift (see Figure 3 a)
and b)).
Another cause of the observed edge shift is the shift of the chemical
potential $\mu$, which strongly depends on the exact shape of the DoS and is
shown in Figure 4 b) as a green line. Initially, $\mu$ increases with absorbed
fluence, as thermally excited electrons from the $3d$ states must spread out
in energy to the lower DoS above the Fermi level. With rising electronic
temperature, the high DoS of the $3d$ states becomes less relevant and the
chemical potential drops again as expected in regular metals. A similar
evolution of the chemical potential and electronic temperature was predicted
for optically excited nickel by previous experiments and calculations [60, 61,
62, 4].
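The non-monotonic behavior of $\mu$ can be reproduced with a fixed model DoS and a particle-number constraint. The DoS shape below (a narrow 3$d$-like peak just below $E_F$ on a free-electron-like background) and all numbers are invented for illustration:

```python
import numpy as np

def mu_of_T(E, dos, N, T):
    """Chemical potential that keeps the electron count N at temperature T
    (both in eV) for a fixed DoS, found by bisection (illustrative)."""
    dE = E[1] - E[0]
    lo, hi = -500.0, 500.0
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        fd = 1.0 / (1.0 + np.exp(np.clip((E - mu) / T, -500, 500)))
        if np.sum(dos * fd) * dE < N:
            lo = mu
        else:
            hi = mu
    return mu

# Narrow, high 3d-like peak just below E_F plus a sqrt(E) background.
E = np.linspace(-8, 300, 3081)
dos = 0.2 * np.sqrt(np.maximum(E + 8.0, 0.0)) + 4.0 * np.exp(-(E + 1.0) ** 2 / 2.0)
N = np.sum(dos * (E < 0)) * (E[1] - E[0])     # ground-state electron count

mu_cold, mu_warm, mu_hot = (mu_of_T(E, dos, N, T) for T in (0.5, 3.0, 25.0))
# mu first rises (3d electrons must spread into the lower DoS above E_F),
# then drops at high temperature, as in regular metals.
```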
A significant deviation between model and experiment can be observed at the
resonance peak itself, where the simulated electron dynamics lead us to expect
a much stronger saturation effect than observed experimentally (Figure 3 c)).
This underestimation may be related to a fluence-dependent decrease of the
excited state lifetime and consequent energetic broadening of the resonant
core-valence transition, which is not considered in our model. While additional
resonant effects at the resonance peak itself are perhaps to be expected,
the lack of any significant saturation around 852 eV (Figure 3 d)) is
genuinely surprising. Both disagreements point to additional physical effects
and call for more sophisticated models.
We speculatively propose mechanisms which could contribute to these
disagreements: The transition matrix elements could get modified at higher
excitation densities, especially around the resonance, while we model the
absorption only based on the ground-state spectrum. An energy dependence of
the electron-electron scattering cross-section could allow for particularly
fast scattering of electrons with certain energies, counteracting the
saturation. Furthermore, a collective, correlated response of the electronic
system could modify the DoS or the transitions even on the fast time scale of
the FEL pulse duration [63]. Despite these remaining discrepancies, the main
aspects of the spectral changes are covered in our very simple population
dynamics model.
We want to point out that substantially smaller spectral red-shifts were
observed before in nickel after excitation with optical lasers, albeit at
three orders of magnitude lower excitation fluence. These required
qualitatively different interpretations [64, 65, 56, 63], where the
explanation for time-dependent changes included a variable DoS, calculated
using (Time-Dependent) Density Functional Theory (TD)DFT; this dependency is
overshadowed in our high-fluence study by the effects of electron population
dynamics.
To summarize, we interpret the fluence-dependent near-edge X-ray absorption
spectra of the nickel 2$p_{3/2}$ core level at X-ray fluences of up to 60
$\mathrm{J/cm^{2}}$. We propose a rate-equation model, describing the various
excitation and decay processes that connect core- and valence electronic
states using differential equations based on scaling of known ground-state
properties with the evolving electron populations. For the measured spectra of
metallic nickel, the model successfully predicts the increase of absorption
before and its decrease beyond the resonance, as well as the observed shift of
the absorption peak over more than three orders of magnitude in fluence.
It therefore allows us to identify the most important processes responsible
for spectral changes: Heating of valence electrons due to secondary electron
cascades from Auger electrons, as well as electrons emitted from the valence
band due to non-resonant absorption, appears particularly relevant.
Furthermore, saturation appears to be dominated by the heated valence states
rather than by the core holes.
This study provides the fingerprints of how strong X-ray fluences may alter
the electronic system and thus the spectra in studies where the X-ray pulses
were originally assumed to be non-disturbing. It becomes clear that a complete
modeling of high-fluence spectra needs to build upon dominant population
dynamics and requires special treatment around resonances. This provides an
excellent benchmark for sophisticated theories. Our results also apply to the
resonant regime which is particularly interesting for pioneering non-linear
X-ray studies.
###### Acknowledgements.
We acknowledge European XFEL in Schenefeld, Germany, for provision of X-ray
free-electron laser beamtime at the SCS instrument and would like to thank the
staff for their assistance. Funding by the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) - Project-ID 278162697 - SFB 1242 and the
Helmholtz Association (grant VH-NG-1105) is gratefully acknowledged. Access to
Synchrotron SOLEIL through proposal ID 20160880 for characterization of static
properties of the Ni films is acknowledged. Parts of this work were funded by
the Swiss National Science Foundation (Grant No. PZ00P2-179944).
## Author Contributions
M.B., C.D., F.D., L.L.G., J.L., J.P.M., B.R. and S.T. conceptualized and
planned the experiment; M.C., C.D., F.D., N.G., L.L.G., M.I., E.J., A.K.,
C.-H.L., L.M., B.P., B.R., A.S., K.S., C.S., H.W. and A.Y. prepared the
measurement apparatus and samples; O.A., K.A., M.B., J.B., R.C., M.C., V.C.,
G.S.C., C.D., F.D., R.Y.E., A.E., N.G., L.L.G., O.S.H., M.I., L.M., G.M.,
P.S.M., B.R., N.R., A.S., J.S., S.T., A.Y. and Z.Y. performed the experiment;
O.A., R.Y.E, L.L.G., O.S.H., B.R. and N.R. analyzed and visualized the
results; M.B. and R.Y.E. wrote the manuscript; M.B., U.B., C.D., R.Y.E., A.E.,
L.L.G., O.S.H., M.I., E.J., T.L., P.S.M., K.O., K.R., M.S., J.O.S., S.M.V.,
H.W. and Z.Y. reviewed and edited the manuscript; M.B. and J.P.M. supervised
or administered the project.
## References
* Zastrau _et al._ [2008] U. Zastrau, C. Fortmann, R. R. Fäustlin, L. F. Cao, T. Döppner, S. Düsterer, S. H. Glenzer, G. Gregori, T. Laarmann, H. J. Lee, A. Przystawik, P. Radcliffe, H. Reinholz, G. Röpke, R. Thiele, J. Tiggesbäumker, N. X. Truong, S. Toleikis, I. Uschmann, A. Wierling, T. Tschentscher, E. Förster, and R. Redmer, Bremsstrahlung and line spectroscopy of warm dense aluminum plasma heated by xuv free-electron-laser radiation, Phys. Rev. E 78, 066406 (2008).
# Real-Time QKD Post-Processing Based on Reconfigurable Hardware Acceleration
Foram P Shingala1, *Natarajan Venkatachalam1, Selvagangai C1, Hema Priya S1,
Dillibabu S1, Pooja Chandravanshi2, Ravindra P. Singh 2
1 Society For Electronic Transactions and Security, India
2 Physical Research Laboratory, Ahmedabad
## Abstract
Key distillation is an essential component of every Quantum Key Distribution (QKD) system because it compensates for the inherent transmission errors of the quantum channel. However, the throughput and interoperability aspects of post-processing engine design are often neglected, and existing solutions provide no performance guarantees. In this paper, we propose a high-throughput key distillation framework with support for multiple protocols, implemented on a Field Programmable Gate Array (FPGA) using High-Level Synthesis (HLS). The proposed design uses a Hadoop framework with the MapReduce programming model to efficiently process large chunks of raw data across the limited computing resources of an FPGA. We present a novel hardware-efficient integrated post-processing architecture that offers dynamic error correction, a side-channel-resistant authentication scheme, and an inbuilt high-speed encryption application that uses the distilled key for secure communication. We develop a semi-automated high-level synthesis framework capable of handling different QKD protocols with promising speedup. Overall, the experimental results show a significant performance improvement and compatibility with any discrete-variable QKD system.
Keywords— Hadoop, Key Distillation Engine, HLS, FPGA, classical post-
processing, QKD
## 1 Introduction
Secure data communication is a vital challenge in today’s high-speed networks.
The basic and most critical element of a cryptographic solution is the encryption key, as articulated by Auguste Kerckhoffs in 1883 [14]. Classical cryptographic methods that exploit the hardness of mathematical problems may soon be defeated by the advent of quantum computers. Nevertheless, the same quantum
principles that could empower an adversary with enormous computing power can
be used to achieve unconditional security in establishing a secret key between
two communicating parties via quantum key distribution (QKD). There are
various implementations of QKD protocols that differ in the way the
information is encoded/decoded in quantum states [3, 13, 11, 28]. In general,
it comprises two channels, a quantum communication channel, for transmitting
quantum information, and an authenticated classical channel. A quantum
communication channel essentially consists of the quantum state of a photon in
a particular degree of freedom such as polarization, time bin, phase,
frequency, etc., on which the information can be encoded and decoded. The
authenticated classical communication channel is used for secret key
reconciliation. It is also required to synchronize the transmitter and
receiver, separated by large distances. Due to the inherently noisy nature of
quantum channels, measurement device imperfections, and improper
encoding/decoding, the raw key bits extracted from the quantum channel contain
bit-flip errors. On account of these errors and the channel losses, post-processing techniques are required to construct the final secret key at both ends. A robust implementation of QKD that delivers sufficient throughput, in terms of the key rate, in a real-time environment is challenging. To keep pace with the speed of quantum communication, a fast key distillation (data processing) layer with efficient control hardware is crucial.
The QKD post-processing can be broken down into submodules namely, a)
Synchronization, b) Sifting for erasure, c) Sifting for basis reconciliation,
d) Random sampling, e) Parameter estimation, f) Information reconciliation
(IR) and verification, g) Privacy amplification (PA), and h) Key management. These classical components require massive computing and memory resources and were therefore traditionally implemented on server systems. However, owing to the complex infrastructure involved and the security assumption of device isolation (trusted-node architecture), a stand-alone, compact solution that provides reconfigurability, massive data parallelization, and processing ability at low power consumption is the need of the hour. Hence the focus is on FPGA accelerators.
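As a concrete illustration of the first of these stages, basis sifting and parameter estimation can be sketched in a few lines of Python. The function names, block sizes, and sample fraction below are illustrative only, not the interface of the actual engine:

```python
import random

def sift(alice_bits, alice_bases, bob_bits, bob_bases):
    """Basis reconciliation: keep positions where both parties used the same basis."""
    kept = [i for i in range(len(alice_bits)) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in kept], [bob_bits[i] for i in kept]

def estimate_qber(a_key, b_key, sample_frac=0.2, rng=random):
    """Parameter estimation: publicly disclose a random sample to estimate the
    quantum bit error rate (QBER); disclosed positions are then discarded."""
    n = max(1, int(len(a_key) * sample_frac))
    idx = set(rng.sample(range(len(a_key)), n))
    errors = sum(a_key[i] != b_key[i] for i in idx)
    keep = [i for i in range(len(a_key)) if i not in idx]
    return errors / n, [a_key[i] for i in keep], [b_key[i] for i in keep]
```

On a noiseless channel the two sifted keys agree and the estimated QBER is zero; in practice the estimate decides whether the block is aborted or proceeds to reconciliation.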
A Hardware Description Language (HDL) is a specialized language used to describe the structure and behavior of electronic circuits, and FPGAs are traditionally programmed in HDL. Every HDL design maps directly to resource consumption in the FPGA. As designs grow more complex, the need to accelerate the design flow pushes FPGA developers toward software-based productivity tools that automate the Register-Transfer Level (RTL) design flow. HLS is one such tool, and its adaptation to the QKD Key Distillation Engine (KDE) is described in Section III. One of the engineering problems that must be solved in real-time implementations of QKD is the continuous storage and processing of large amounts of data, as quantum encoding and modulation occur at gigahertz-scale repetition rates. An FPGA has limited memory storage and management capabilities; external memory units such as SRAMs and DRAMs can be used alongside the FPGA to overcome this drawback. Efficient management and utilization of these additional memory devices can be achieved by incorporating a big-data framework such as Hadoop. This is further analyzed and discussed in Section III.
In high-performance QKD networks, processing becomes a major bottleneck due to
the massive amounts of data collected in the quantum networks, resulting in
overhead for the memory and computational capabilities of the targeted
systems. Quantum information reconciliation problems exhibit excessive dynamic
computations and memory accesses. Therefore, the overall processing time is
dominated by complex computations and unstructured memory management.
Throughput and efficiency are the main performance metrics in large-scale quantum key distribution systems. Recently, there has been increased interest in accelerating post-processing using FPGAs; however, performance optimization and secure processing have not been fully explored. A technological gap still exists in the practical implementation of a high-throughput, efficient hardware-software co-design for large-scale quantum key post-processing. In this paper, we propose a Hadoop-framework-based FPGA design for high-volume data processing that optimizes memory utilization while remaining secure. We report
comprehensive experimental study to evaluate the performance and efficiency
using different QKD protocols. The quantum information reconciliation
algorithm is implemented using the MapReduce paradigm, in which the error
correction process is carried out in a parallel fashion in the FPGA. With any
given error correction algorithm and respective constraints, as inputs, the
proposed system determines the performance parameters through simulation and
then generates the optimized design of the FPGA accelerator. Therefore, any
error correction algorithm can easily be implemented using our proposed
framework. We summarize the main contributions of this paper below:
* •
A hardware-based key distillation engine with the capability to support multi-protocol discrete-variable QKD systems.
* •
A hardware-based Hadoop accelerator to speed up the computationally intensive tasks of information reconciliation and privacy amplification.
* •
A reconfigurable architecture, enabled by the framework developed using high-level synthesis technology.
* •
A side-channel-attack-resistant device authentication algorithm implementation.
* •
Rate-adaptive error reconciliation codes to optimize classical channel
throughput, with higher error correction capacity.
* •
An on-device, high-speed encryptor with throughput up to 10 Gbps.
* •
A detailed experimental field trial of three different protocols, namely coherent one-way (COW), BB84, and BBM92.
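The MapReduce decomposition used for information reconciliation can be illustrated with a toy Python sketch: the map stage computes per-block parities independently (the part that fans out across parallel FPGA compute units), and the reduce stage collects the block indices whose parities disagree, which alone enter the costlier error-location phase. The block size and parity function here are illustrative, not the production configuration:

```python
def map_parities(key_bits, block_size):
    """Map stage: split the sifted key into fixed-size blocks and emit
    (block_id, parity) pairs; blocks are independent, so this stage
    parallelizes trivially."""
    blocks = [key_bits[i:i + block_size] for i in range(0, len(key_bits), block_size)]
    return {bid: sum(blk) % 2 for bid, blk in enumerate(blocks)}

def reduce_mismatches(alice_parities, bob_parities):
    """Reduce stage: collect the ids of blocks whose parities disagree;
    only these proceed to the expensive error-location phase."""
    return sorted(b for b in alice_parities if alice_parities[b] != bob_parities[b])

alice = [1, 0, 1, 1, 0, 1, 0, 0]
bob   = [1, 0, 1, 1, 1, 1, 0, 0]   # one bit flipped in the second block
suspect = reduce_mismatches(map_parities(alice, 4), map_parities(bob, 4))  # [1]
```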
The rest of the paper is organized as follows: Section II reviews related work; Section III covers the proposed system design, architecture, and the hardware implementation of the KDE; Section IV describes the experimental setup of the QKD protocols; Section V presents the implementation results and the performance analysis of the design; and Section VI concludes and outlines future work.
## 2 Related Work
One of the first attempts, in 2012, at designing a complete compact QKD system by integrating optics, control hardware, and the data processing system into a single chassis was made by Zhang et al. [32].
The protocol implemented was the decoy-state BB84 protocol. The key
distillation software was designed to handle quantum operations at much lower
rates as a result of inefficient devices. The Winnow protocol was chosen as an
error correction algorithm. The KDE was implemented as a software stack on a
computer as part of the integrated design. There have been further
advancements in protocol and technology, since. Publishing around the same
time, Tanaka, Akihiro, et al [26], tried to achieve a high-speed phase encoded
BB84 QKD system, covering a distance of 50 km, by transmitting with a
repetition frequency of tens of GHz using parallel transmission of photons
(parallelly connected LED sources) with wavelength Division multiplexing
(WDM). Their work also highlights the requirement of large computing and
memory resources to be able to derive 1Mb of secure key, using eight XFPs, in
Small Form Factor Pluggable (SFP) format, for communication, and multiple FPGA
as computing resources for post-processing. Such a mammoth and complicated
system architecture would only suffice as a proof of concept.
Further, to speed up the processing of measured qubits, the individual modules that make up the post-processing must perform efficiently. The QKD post-processing can be broken down into multiple submodules, each of which serves a specific cryptographic function. Multiple teams around the world have worked on these individual aspects; in particular, Cui et al. [8] aimed at an efficient implementation of the error reconciliation module by exploiting FPGA parallelism and splitting the module so that pipelined execution can be performed between read, write, and compute stages. That team implemented the Winnow error-correcting codes. Later studies ascertained that LDPC codes achieve efficiency closest to the Shannon limit, and they have hence been prescribed as part of the information reconciliation stack for QKD.
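Winnow's central step, exchanging a Hamming syndrome for each parity-mismatched block so that a single bit-flip can be located and corrected, can be sketched as follows. This is a simplified illustration: the real protocol also discards bits to account for the information leaked by the disclosed parities and syndromes:

```python
def hamming_syndrome(block):
    """Syndrome of a 7-bit block under the (7,4) Hamming parity checks;
    for a single bit-flip, the syndrome equals the 1-based error position."""
    s = 0
    for k in range(3):                 # three parity checks
        parity = 0
        for pos in range(1, 8):        # 1-based bit positions
            if pos & (1 << k):         # check covers positions with bit k set
                parity ^= block[pos - 1]
        s |= parity << k
    return s

def winnow_correct(alice_block, bob_block):
    """One simplified Winnow step: if the two blocks differ by a single bit,
    the XOR of their syndromes locates it, and Bob flips that bit."""
    diff = hamming_syndrome(alice_block) ^ hamming_syndrome(bob_block)
    if diff:
        bob_block[diff - 1] ^= 1
    return bob_block
```

Because the syndrome is linear over GF(2), the XOR of the two syndromes equals the syndrome of the error pattern itself, which is why a single flip is pinpointed exactly.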
Early methodical works were carried out for a large-scale project by six
research teams in Switzerland [27]. This was a significant step towards a
field deployable QKD system. They concentrated on building a QKD system with
integrated control hardware and data processing units. The modules for key
distillation are described in detail by Constantin et al. [7], and we have used this work as a reference. The complete firmware was built on a single
Xilinx Virtex 6 FPGA. The engine had a relatively small post-processing block size. They used a commercial quantum random number generator (QRNG) from ID Quantique and fed the generated random sequence as a seed to a pseudo-random number generator. The COW QKD protocol was implemented for distances of 50 km.
The alternative to an FPGA-accelerated post-processing engine is a pipelined software implementation. Zhou et al. [33] proposed a multi-threaded pipelined approach exploiting multiple CPU cores to optimize the major performance parameters. The performance of the pipelined execution is optimized by allowing all stages in the pipeline
to have identical processing times. We understand the necessity of
parallelization to optimize throughput. Therefore, in our work, we propose to
incorporate a parallel data storage and processing framework implemented over
the accelerator. In more recent work by Yuan et al. [31], a configured host server system with two FPGA-based accelerators, one for the QKD control hardware plus sifting and the other for error reconciliation, serves as the post-processing engine of the QKD system. PA with a large block size (100 Mb) is implemented on the server system using a GPU-based parallel architecture, and a collective throughput of 10 Mbps is achieved. The error correction module implemented gives a maximum correction capacity of 10%. However, by adding computational resources such as a server to the architecture, their implementation violated the main security assumption of an isolated device. This effort was followed by another attempt at improving the throughput of the processing engine, by Yang et al. [30], through parallelization of the IR phase with larger block sizes of 250 to 350 kb. That post-processing engine was developed for the continuous-variable QKD protocol.
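Regardless of the platform, privacy amplification is commonly realized as a universal hash built from a Toeplitz matrix over GF(2), because the matrix-vector product parallelizes naturally on FPGAs and GPUs. A toy sketch follows; the dimensions are illustrative, while production block sizes reach the megabit range discussed above:

```python
def toeplitz_hash(key_bits, seed_bits, out_len):
    """Privacy amplification: compress the reconciled key with a Toeplitz
    matrix over GF(2). The matrix is fully determined by a public random
    seed of length out_len + len(key_bits) - 1; entry (i, j) is
    seed_bits[i - j + n - 1], so each output bit is an XOR of key bits."""
    n = len(key_bits)
    assert len(seed_bits) == out_len + n - 1
    out = []
    for i in range(out_len):
        acc = 0
        for j in range(n):
            acc ^= seed_bits[i - j + n - 1] & key_bits[j]
        out.append(acc)
    return out
```

Because every output bit is an independent XOR reduction over the input, rows can be evaluated in parallel; for megabit blocks, FFT-based evaluation of the Toeplitz product further reduces the cost.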
A review article by Li et al. [15] observes that the FPGA is an almost mandatory choice for this application and that it also offers an advantage in terms of power consumption [22], which can be a key feature for critical applications such as satellite quantum communication (CubeSat missions) [20]. This is the first review paper accounting for the
work done so far using FPGA accelerators in QKD. The authors also highlight
the design productivity gained by using High-Level Synthesis (HLS) to design
and configure the accelerator, with features like arbitrary precision
arithmetic and parallelization pragmas. Finally, moving from FPGA to a
system-on-chip (SoC) architecture [24], this recent work presents the hardware
and software-integrated architecture that can be used in systems that
implement practical QKD and QRNG schemes. This architecture fully exploits the
capability of an SoC by assigning the time-related tasks to the FPGA and the
management to the CPU.
All the designs up until now have focused on a classical post-processing
engine for a specific QKD protocol, implemented either on programmable
hardware, software, or GPU and not for a complete, stand-alone, and isolated
solution. The focus in the past has been on the algorithmic aspects of the
post-processing protocols. In this work, we try to overcome these limitations
and give a detailed design and flow of the implementation of the proposed
architecture, in the following sections.
## 3 System Architecture and Design Methodology
We propose a flexible, high-throughput, multi-protocol key distillation engine
for FPGA-based QKD systems. The generalized quantum key
distillation framework is designed to provide flexibility to adapt to
different kinds of QKD protocols. The Key Distillation Engine does all the
post-processing and provides the final key to the encryption application. For
effective implementation, the entire post-processing flow can be executed in
two different phases.
The preparatory phase, described further in section 3.2, includes gathering,
aligning, and transforming raw data prior to the reconciliation phase. The
software components of the preparatory phase are specific to the QKD protocol.
In this work, an HLS framework is adopted to perform tasks that require
modifications with respect to QKD protocol and implementation style. In
particular, the methods like clock synchronization, measurement alignment, and
data sifting are executed in this phase. These strictly depend on the protocol
and implementation specifications. The proposed work utilized the HLS
framework to create a unified integration platform for QKD post-processing.
This platform is depicted in the complete development process flow of the key
distillation system in Figure 2.
The reconciliation phase, described in section 3.3, includes the error
correction, verification, and privacy amplification modules independent of the
QKD protocols. Techniques used in error correction and privacy amplification
are computationally intensive. To accommodate this, our hardware design
effectively utilizes the Hadoop map-reduce programming model to further
optimize the throughput. This helps to attain a trade-off between hardware
utilization and fast data processing. The Hadoop data storage and processing
framework also aids in handling the distribution of complex data processing
tasks across the limited computing resources of the FPGA.

FPGA-Based Processor Design
The FPGA fabric is designed as a soft-core processor-based System on Chip
(SoC) architecture. Process control, task scheduling, and the interfaces for the
data processing modules are defined as a Software Development Kit (SDK) API-based
software application. Baklouti, Mouna, et al. [16] highlight the
advantages of an FPGA-based SoC architecture. Figure 1 shows the high-level
architecture, which consists of the control unit, data processing unit, and
memory management unit. The soft-core processor along with the data processing
and control modules are implemented on the PL fabric. 1GB DDR3 SODIMM memory
is available on board and is used to store data required while processing. The
I/O ports used to connect to the host device are USB UART or high-speed serial
I/O transceivers (GTH) over Gigabit Ethernet protocol. MicroBlaze is a soft
processor core designed for Xilinx FPGAs. MicroBlaze is implemented with AXI
interconnect peripheral bus for system-memory mapped transactions with master-
slave capability. Each data processing unit module is interfaced with the AXI
data port of MicroBlaze for communication. The DDR3 is interfaced with
MicroBlaze over AXI instruction cache (IC) and data cache (DC) ports. The
framework includes interfaces to and from the optical setup using digital-to-
analog converters (DAC) and analog-to-digital converters (ADC), respectively.
The ADC and DAC are used to control the optical components; e.g., the ADC is used to
control the synchronous optical laser, and DAC is used for amplitude and phase
modulation. The ADC and DAC communicate to the MicroBlaze through data
acquisition and optical hardware control units over the AXI- Interconnect. The
data acquired from the quantum channel can be buffered in Block Ram (BRAM)
units defined as FIFOs. The implementation details of the individual data
processing modules and synchronization module are further described in
sections 3.2, 3.3, and 3.4. The control and data processing units are designed
as custom-developed hardware IP blocks, each with a specific function. The
software application defines how the control scripts run each custom hardware
accelerator.
Figure 1: Architecture of the proposed system.
### 3.1 Multi-protocol QKD Support Workflow
Our design approach is directed towards identifying an efficient,
configurable, control and process framework that can be incorporated as part
of the design solution for any QKD protocol implementation, irrespective of
the technology used. This generic framework, for programmable hardware, is
designed using high-level synthesis (HLS) technology. This technology provides
reconfigurability and easy transformation of a software description of the
algorithm from high-level code into an RTL model. The major benefit of HLS is
that it provides a platform for individuals who are not experienced in HDL
development to program FPGAs [12]. Vivado HLS is the most popular HLS tool in
FPGA design.
Our framework uses the concept of modularity by including independent
IP cores for each control and data processing task. Control modules can be
created by adding software drivers of the optoelectronic components (used to
perform QKD experiments). These are generally available as open-source
libraries in high-level languages (C/C$++$), and hence can be directly
incorporated into our design through HLS. The optoelectronic components are
configured to define the parameters (variable attenuation, interference
visibility, modulator bias, state of polarization, etc.) that help establish a
secure quantum channel. Libraries for math and computational utility functions
can also be added as part of our design. Figure 2 shows the design flow
adopted by us to develop our data processing modules. The phase I modules or
preparatory modules described in section 3.2 are designed after being tested
with a QKD simulator. These have to be refined to comply with the
synthesizable subset, and a test bench has to be written to ensure that the
functionality is still intact. A test bench created to validate the algorithm
can be used at both the C++ and RTL levels. Optimization directives and
datatypes can be added to improve performance. The refined implementation is
synthesized using the HLS tool to generate Functional RTL (FRTL). C functions
synthesize into RTL blocks, and function arguments synthesize to RTL I/O while
arrays synthesize to memory: RAM, ROM, or FIFO.
The benefit of developing in a high-level language is that the control path is
implicitly represented, whereas RTL requires the user to explicitly define the
data and control paths. For hardware resource optimization, HLS provides user
directives. To meet timing constraints, pipelining directives can be
incorporated [34]. Arbitrary precision data types are also provided by Xilinx
for Vivado HLS [1]. Vivado HLS can easily export an RTL IP using the “export RTL”
option available as part of the tool.
Figure 2: HLS Based design flow for a reconfigurable framework.[23]
### 3.2 Preparatory Phase
Operation of the control unit follows the QKD protocol configuration and
consists mostly of software drivers of the optoelectronic components. Steps to
integrate this functionality have been described in section 3.1. The remaining
components of the present key distillation engine can be easily re-configured
to any kind of QKD protocol without much effort. Each sub-module present
in the system is designed as a separate proprietary library (IP core) that
can be easily reused for different kinds of QKD setups. Thus the switching
time from one protocol to another is decreased, making it
easier to build interoperable systems.
#### 3.2.1 Synchronization and Classical Channel Communication
The timing information of the launch of a quantum state from the transmitter’s
end and its arrival at the receiver’s end is crucial for key sifting and
in the establishment of the secret key. We have used the commercial White
Rabbit Lite Embedded node (WR-LEN) to achieve time synchronization between the
two FPGA boards. WR-LEN is a versatile synchronization solution compatible with
any type of classical communication protocol [18]. It uses the Synchronous
Ethernet (SyncE) and enhanced Precision Time Protocol (PTP) for frequency
matching and offset adjustments of the clock. A single channel is
time-multiplexed and used for both synchronization and the exchange of
information for classical post-processing.
The Aurora 8B/10B protocol is a link layer communications protocol for use on
point-to-point serial links. Developed by Xilinx, it is used as the classical
high-speed (gigabit-per-second) communication protocol. The protocol is an open
standard and is available for implementation by anyone. Our proposed system
supports integration with any standard communication protocol.
#### 3.2.2 Sifting
After quantum transmission and measurement, Alice and Bob use the classical
channel to communicate and derive a secret key. The alignment step comprises
all the procedures necessary to align the detection times (timestamps) in
Bob’s reference frame to Alice’s key bits in her reference frame. The effort
and the amount of transmitted information can be quite extensive. The protocol
used to align is the sliding window protocol and the measurement that confirms
the alignment is the correlation coefficient.
The basis sifting procedure filters out any inconclusive or incompatible
measurement results at Bob’s side compared to Alice’s preparation. Depending
on the implemented QKD protocol the required information transfer (say
measurement basis) might need to be bi-directional as in standard BB84, or
uni-directional, as in the distributed-phase-reference protocol. As the output
from the sifting procedure, Alice and Bob each hold a set with $n_{sift}$
elements.
In the BB84 protocol, Alice stores two bits to describe the prepared state
containing key bit value and basis choice. Bob also stores two-bit information
on the measurement choice and measurement outcome along with the time-stamp of
the detection. For each detection, after the alignment step, Bob announces one
bit describing the measurement choice, and Alice responds with one bit
containing the XOR value between Bob’s and her measurement choice. Sifting can
be achieved in multiple ways, but this method is chosen to reduce the
communication overhead in the classical channel. Hence, the communication rate
for basis sifting is $m^{BB84}_{BS}=2n_{Q}$ bits, where $n_{Q}$ is the number
of measurement outcomes Bob has recorded.
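The BB84 sifting logic described above can be illustrated with a short sketch. This is a toy model, not the hardware implementation; the function and variable names are our own:

```python
def bb84_sift(alice_bases, alice_bits, bob_bases, bob_bits):
    """Keep only positions where Bob's measurement basis matches
    Alice's preparation basis.

    Bob announces his measurement choice (one bit per detection);
    Alice replies with the XOR of her basis choice and his, so the
    total classical traffic is 2 * n_Q bits, matching the rate above.
    """
    sifted_alice, sifted_bob = [], []
    for a_basis, a_bit, b_basis, b_bit in zip(
            alice_bases, alice_bits, bob_bases, bob_bits):
        if a_basis ^ b_basis == 0:  # Alice's announced XOR is 0: bases agree
            sifted_alice.append(a_bit)
            sifted_bob.append(b_bit)
    return sifted_alice, sifted_bob
```

On a noiseless channel the two sifted strings are identical; on a real channel the residual disagreements are what the QBER estimate and the information reconciliation stage address.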
In the COW protocol, Alice encodes the key information in two consecutive time
bins. Alice’s $n_{sift}$ elements contain two bits corresponding to each
element, to describe the prepared state and an indicator if the state is
signal or decoy. Bob’s $n_{sift}$ elements contain two-bit information, one
for the key bit decoded and the other bit corresponds to which detector
clicked, i.e., the data-line or the monitoring detector [25]. Another outcome of the
measurement Bob stores is the time-stamp of the detection. For each detection,
Bob only announces the $\lfloor timestamp/2\rfloor$ information (in the
alignment phase) and one bit describing the detector click (in the basis
sifting phase), while no response is required from Alice to sift out
incompatible detections. Hence, the communication rate for basis sifting in the COW
protocol is $m^{COW}_{BS}=n_{Q}$ bits, where $n_{Q}$ is the number of
measurement outcomes Bob has recorded.
### 3.3 Reconciliation Phase
Information reconciliation is an essential phase in a QKD system for arriving at
an identical key between the sender and receiver. Correcting errors in the
key material is an important task and useful for a wide range of QKD protocols,
as below an error threshold the erroneous key can still be used to derive a
secure secret key. In this section, we present a novel method for the
effective implementation of a computationally expensive part of the QKD post-
processing stack. We adapt error correction, verification, and privacy
amplification to the Hadoop map-reduce framework and show the
computational benefits. Furthermore, we discuss hardware realization
possibilities which could help to speed up processing and optimize efficiency.
Additionally, we present case studies for applying the proposed framework for
different types of QKD protocols in order to study the practical relevance of
the proposed techniques.
#### 3.3.1 Hadoop Based Accelerator for Reconciliation
The proposed design framework incorporates the Hadoop Map-Reduce programming
model to efficiently store and process large chunks of sifted key bits
obtained from the quantum setup, at a rate faster than the FPGA’s clock. To
overcome a possible bottleneck at this point, we use a mechanism to buffer the
incoming data and process multiple blocks of sifted key bits in parallel. The
map-reduce programming model comprises splitting, mapping, combining, and
reducing stages. The flow of data across these stages is
depicted in Figure 3, and Algorithm 1 captures the implementation aspects
of the framework.
Input data is stored in an onboard memory SDRAM (DDR3) and then accessed by
the scheduler as blocks of fixed size. The scheduler runs a $C++$ SDK-based
application that controls and interconnects all the data processing and
control modules of the KDE. The input data is split into blocks by the
scheduler with respect to user-defined block size and fed to the Mapper.
Mapper here is implemented as the IR and verification module. Due to finite
key security effects, the input block size for PA is chosen to be large
($10^{6}$ bits). Being limited by the resource-intensive IR step for larger
block sizes, multiple iterations of IR with smaller block sizes are the chosen
design approach. Multiple instances of the mapper run simultaneously. The
output of the mappers is temporarily stored in the Block Ram and is then
combined and sent to the PA module, which here acts as the reducer.
Incorporating a map-reduce framework in KDE has helped achieve high throughput
by parallelizing the KDE modules with respect to the block size. Raw key
obtained by quantum transmission is first sifted. The error introduced while
transmission of single photons (channel transmissivity) is then estimated.
This information drives the selection of the IR code parameters, which is
explained in section 3.3.3. After this, the scheduler (MicroBlaze) invokes the
map-reduce framework. After the PA step, the final key is written back to the
DDR3 memory unit. The inspiration to incorporate the Hadoop framework into our
design model was derived from the implementation by Neshatpour, Katayoun, et
al [19].
Figure 3: Hadoop-based framework for Information Reconciliation.
Algorithm 1 Combined error correction and privacy amplification process using
the map-reduce framework
Input: Sifted bits, bit error rate (BER), security parameter
Output: Final key
Initialisation : Matrix $[H]_{mxn}$, $BER=0$, security parameter $\rho$
class Mapper
Method Map (Sifted_bits, BER, LDPC Hadoop Instance)
for All blocks do
Error_correction (Sifted_bits, Blocksize)
if $BER=0$ then
Emit (Error_Corrected_Key)
end if
end for
for All blocks do
Error_verification
if (Result=success) then
Emit (Error_Corrected_Key, Result)
end if
end for
class Combiner
Method Combine (Error_Corrected_Key, LDPC Hadoop Instance)
Emit (Result)
return Combined_Corrected_Key
class Reducer
Method Reduce (Combined_Corrected_Key, security parameter)
Privacyamplification()
Emit (Result)
return FinalKey
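The split/map/combine/reduce flow of Algorithm 1 can be sketched in software as follows. This is a toy model under our own assumptions: the mapper and reducer bodies stand in for the LDPC and privacy amplification modules, the helper names are ours, and the block size is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

BLOCK_SIZE = 4  # illustrative; the PA input block in the design is ~10^6 bits

def split(bits, size):
    # Scheduler: cut the sifted key into fixed-size blocks.
    return [bits[i:i + size] for i in range(0, len(bits), size)]

def mapper(block):
    # Stand-in for the IR + verification module; a real mapper would
    # run LDPC decoding and hash-based verification on the block.
    return block

def combiner(blocks):
    # Merge the corrected blocks before privacy amplification.
    return [bit for block in blocks for bit in block]

def reducer(merged):
    # Stand-in for privacy amplification: XOR-fold to half length.
    half = len(merged) // 2
    return [merged[i] ^ merged[half + i] for i in range(half)]

def distill(sifted_bits):
    blocks = split(sifted_bits, BLOCK_SIZE)
    with ThreadPoolExecutor() as pool:  # parallel mapper instances
        corrected = list(pool.map(mapper, blocks))
    return reducer(combiner(corrected))
```

The parallelism lives entirely in the mapper stage, mirroring the design choice above: many small IR blocks run concurrently, and a single reducer consumes their combined output.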
#### 3.3.2 Parameter Estimation
Once Alice and Bob have exchanged timestamps and sifted the key bits, the
transmitter tries to estimate the approximate lower limit of the error
introduced during transmission of the quantum states. This is done by checking
with reference to a random subset of the sifted key bits (extracted by random
sampling) shared by the receiver.
Error rate of sampled subset $\approx$ Error rate of remaining bits + $\Delta$
The above relation is proved using Chernoff–Hoeffding-type bounds and
depends on the sample size. Random sampling is done using an FPGA-based True
Random Number Generator (TRNG). After randomly sampling a subset, these bits
along with their time stamps are exposed over the classical channel and hence
are discarded by both parties. The approximate error rate is estimated as
$\sum_{n=1}^{r}\frac{1}{N}=\frac{r}{N}$, where $r$ is the number of erroneous
bits received and $N$ is the total number of exposed bits. This value is captured in the
QBER of the quantum channel. In the proposed design, the random sampling of
exposed bits and the QBER calculation is implemented on the MicroBlaze
processing system of the FPGA as it requires modulo 2 additions and a single
32-bit division operation. The QBER estimate plays a major role in determining
if a secure secret key can be extracted by further processing. If the QBER is
above a given threshold (defined for each protocol, depending on the errors
due to device and measurement imperfections, and on the amount of information
that can be leaked to an all-powerful quantum adversary through the devices or
the channel) the iteration is aborted and the key derived is discarded.
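The sampling-based estimate above can be sketched as follows. This is a software toy model with names of our own; Python's PRNG stands in for the FPGA TRNG used in the actual design:

```python
import random

def estimate_qber(alice_key, bob_key, sample_fraction=0.1, rng=None):
    """Estimate the QBER from a randomly sampled, publicly disclosed
    subset of the sifted key (r / N from the text), then discard the
    exposed positions on both sides."""
    rng = rng or random.Random()
    n = len(alice_key)
    exposed = set(rng.sample(range(n), max(1, int(sample_fraction * n))))
    errors = sum(alice_key[i] != bob_key[i] for i in exposed)
    qber = errors / len(exposed)
    keep = [i for i in range(n) if i not in exposed]
    return qber, [alice_key[i] for i in keep], [bob_key[i] for i in keep]
```

If the returned estimate exceeds the protocol-specific threshold, the iteration is aborted and the key material is discarded, as described above.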
#### 3.3.3 LDPC Error Correction
Recently, LDPC codes have been researched extensively, from among the family
of Forward Error Correction Codes for QKD [17, 10, 9]. LDPC codes are
implemented as an IP core using the HLS design flow described in section 3.1.
The technique used to construct the parity check matrix is protograph code
construction. Protograph codes are constructed by expansion of a base
protograph. The resulting LDPC parity check matrix is a combination of sub-
matrices. The proposed system implements an irregular parity check matrix
populated with elements from the GF(2) field and a soft-decision message-
passing decoder. Index positions of the elements of the matrix containing a
one are stored in the local memory of MicroBlaze. The row and column indexes
are used to construct a tanner graph at the decoder to iteratively decode the
syndrome and a strong belief is derived for the received key bits. The
encoding and decoding technique used in the proposed design is a standard
technique described in literature [10, 9]. The QBER also helps us in selecting
an efficient LDPC code from the ensemble of codes, with the dimension of the
given code defined such that the limit derived through Shannon’s channel
capacity theorem is achieved to the maximum possible extent. This is
established by determining the threshold of the LDPC code using the measured
QBER. Andrew Thangaraj et al. [21] elaborate on this, and we refer to their
work, as the LDPC codes for the proposed design were derived from it.
Figure 4 describes the algorithmic flow of the implementation. Based on the
QBER one can quantify the amount of extra information which is to be added or
extracted to reconcile the errors. Refer to the work by Elkouss, David et al.
[10] for more details. These are termed rate-adaptive LDPC codes and are
required to withstand the variation in QBER. Based on Monte Carlo simulations,
the codes chosen for the proposed work have the capacity to correct 25% block
errors with a maximum tested input size of $10^{9}$ bits (1 Gb) and the
maximum decoder iterations set to 50. The technique used to optimize code rate
and achieve maximum channel capacity is known as shortening of message bits
and puncturing of parity bits, which is covered in detail in [10]. In our
proposed work we have used the above techniques and reduced the dimension of
the parity check matrix to $(7680-f)\times(8192-f)$, where $f$ is the reduction
factor: a full-capacity LDPC code is taken and reduced based on the QBER. The
Monte Carlo simulations were run for error rates in the range of 15% to 25%
to identify this reduction factor.
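To make the syndrome-based decoding concrete, the check that drives the iterative message-passing decoder can be sketched over a toy GF(2) code. The matrix here is purely illustrative; the proposed design uses the $(7680-f)\times(8192-f)$ protograph-constructed matrices described above:

```python
import numpy as np

def syndrome(H, x):
    # Syndrome s = H·x (mod 2) over GF(2). Alice transmits the syndrome
    # of her key; Bob's iterative decoder adjusts his candidate key
    # until it reproduces the same syndrome.
    return (H.astype(np.int64) @ np.asarray(x, dtype=np.int64)) % 2

# Toy 3 x 6 irregular parity-check matrix populated over GF(2).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
```

Two identical keys necessarily produce identical syndromes, so a matching syndrome at Bob's side is the decoder's stopping condition (up to the configured maximum of 50 iterations).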
Figure 4: The flow diagram of the LDPC codes
#### 3.3.4 Error Verification
Error verification is an essential step after error correction, for the
integrity check of the decoded key bits. There is a finite chance that the
sifted key bits shared by Alice and Bob still have some error, and an
additional step of error verification is necessary. To implement this
integrity check, the same Universal Hash Function [6] technique, which is used
for authentication, described in section 3.4, can be reused. The error-
corrected key will be the input to the hash function along with a pre-shared
key, which can be generated from the TRNG module or as part of the QKD-
generated key from the previous iteration. Based on our implementation, the
block size of the input is approximately $10^{6}$ bits. To further ensure
information-theoretic security, a pre-processing procedure is followed as
shown in Algorithm 2, to extract the 128 bits required by the architecture of
the Poly1305 algorithm from a large chunk of input. The error-corrected key
(N bits) is divided into 16 chunks labeled (C1, C2, …, C16), each of size N/16
bits. Further, each chunk is equally divided into two sub-blocks labeled
(D1, D2), …, (D31, D32), of size N/32 bits, the XOR of the two sub-blocks is
evaluated, and the result is again divided into two sub-blocks; the
procedure is repeated until we arrive at an output of 128 bits. This output
associated with each chunk (C1, C2, …, C16), along with the 256-bit pre-shared
key, is given as input to Poly1305, which generates a 128-bit tag. The tags
generated by both Alice and Bob are compared at Bob’s side. Bob then shares the
error verification flag with Alice. If any of the tags are erroneous, the
specific chunk is discarded.
Algorithm 2 Error Verification Algorithm
Input: Error corrected key of size $N$.
Output: 16 tags, each $128$ bits.
$t\ \leftarrow\ 16$
$k\ \leftarrow\ 128$
$Eck\ \leftarrow\ Error\\_Corrected\\_Key[b_{1},\cdots,b_{N}]$
$[C_{1},C_{2},\cdots,C_{t}]\leftarrow Split(Eck)$
$Size\leftarrow\frac{N}{t}$
while $Size>k$ bits do
for $i\leftarrow 1,t$ do$\triangleright$ Iterate over each chunk
$D_{2i-1},D_{2i}\leftarrow Split(C_{i})$ $\triangleright$ each $D$ has size $Size/2$
$C_{i}\leftarrow XOR(D_{2i-1},D_{2i})$
end for
$Size\leftarrow\frac{Size}{2}$
end while
$Tag_{i}\leftarrow\textbf{Poly1305}(C_{i},random\ seed)$
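The XOR-folding pre-processing of Algorithm 2 can be expressed compactly as below. This is a sketch with illustrative parameters and our own function name; the real input block is on the order of $10^{6}$ bits with t = 16 and k = 128:

```python
def fold_chunks(error_corrected_key, t=16, k=128):
    """Split the N-bit error-corrected key into t chunks, then
    repeatedly XOR each chunk's two halves until every chunk is
    k bits long; the k-bit results feed Poly1305 tag generation."""
    n = len(error_corrected_key)
    size = n // t
    chunks = [error_corrected_key[i * size:(i + 1) * size] for i in range(t)]
    folded = []
    for chunk in chunks:
        while len(chunk) > k:
            half = len(chunk) // 2
            chunk = [chunk[i] ^ chunk[half + i] for i in range(half)]
        folded.append(chunk)
    return folded
```

Each folding pass halves the chunk, so the loop runs $\log_2(N/(t\,k))$ times per chunk regardless of the input size.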
#### 3.3.5 Privacy Amplification
Privacy amplification (PA) is one of the most vital post-processing
procedures of Quantum Key Distribution (QKD). Within the post-processing phase,
the transmission of key bits from Alice to Bob requires mandatory
communication on the service channel for sifting, error correction, and
detection, leading to information leakage, which can also occur due to
eavesdropping attacks. Hence, to ensure an exact level of security, it is
necessary to remove this amount of leaked information through a privacy
amplification step. In privacy amplification, the partially secure string is
transformed into a highly secure key by public discussion. It has been shown
that this can be done by computing the output of a random, but publicly
chosen, two-universal hash function applied to the input string, resulting in
a secret and secure key. For this, we use Toeplitz hashing to compress the error-
corrected bit stream by an adjustable compression ratio, guaranteeing the
safety of the remaining secret key bits. The simplest implementation idea of a
large-scaled PA scheme is directly performing multiplication operations
between secure key W and Toeplitz matrix T(A), resulting in the computational
complexity of $O(n^{2})$. We reduce the complexity to $O(n\log n)$ by performing
fast Fourier transform (FFT) instead of normal multiplication [7].
A weakly secure key $W$ with a length of $n$ is obtained from the basis sifting
and error correction procedures. Then, Alice and Bob decide on the final secure
key length $r$ through rigorous statistical analysis. Further, Alice and Bob publicly
discuss a random seed with a length of $(n-1)$ bits to construct the universal
hash function. The random seed is generated using an FPGA-based TRNG on
Alice’s side.
Our HLS-based PA scheme for KDE mainly consists of three steps: splitting and
shuffling, sub-PA, and secure-key merging.
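A direct $O(n\cdot r)$ sketch of the Toeplitz hashing underlying the PA step is shown below. The names are ours, and the FFT acceleration and the splitting/shuffling/merging steps are omitted for brevity:

```python
import numpy as np

def toeplitz_hash(key_bits, seed_bits, r):
    """Compress an n-bit weakly secure key to r secret bits by
    multiplying (mod 2) with an r x n binary Toeplitz matrix built
    from a public random seed of n + r - 1 bits."""
    n = len(key_bits)
    assert len(seed_bits) == n + r - 1
    T = np.empty((r, n), dtype=np.int64)
    for i in range(r):
        for j in range(n):
            # Each entry depends only on i - j: the Toeplitz property.
            T[i, j] = seed_bits[i - j + n - 1]
    return (T @ np.asarray(key_bits, dtype=np.int64)) % 2
```

Because $T$ has constant diagonals, the product can be embedded in a circulant matrix and evaluated with an FFT, which is how the $O(n\log n)$ cost quoted above is obtained.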
### 3.4 Authentication
We make use of a Wegman-Carter-framework-based authentication scheme for message
integrity and user identity verification over the classical channel. Wegman-
Carter framework [6] based authentication is proven to be information-
theoretically secure. To achieve practical authentication we need a
universal hash family which will then have to be combined with a pseudo-random
function to get a MAC. Universal Polynomial hashing is said to be highly
collision resistant. Bernstein found a special prime number of the form
${2^{130}-5}$ for working with 128-bit coefficients and proposed a new MAC
defined in equation 2, Poly1305-AES [4]. The original proposal of Carter and
Wegman for a universal family was to pick a prime $p\ \geq\ m$, m be an
integer, and then define
$h_{a,b}(x)\ =\ ((ax+b)mod\ p)mod\ m$ (1)
Commercial QKD systems use a universal family based on polynomial evaluation.
We use an implementation-friendly adaptation of their proposal following the
work of Bernstein. The basic idea is to parse the message into 16-byte chunks
which form coefficients of a polynomial and evaluate it at r(key) modulo a
suitable prime number.
$tag(t):=Poly1305\oplus PRF:=h_{k_{1}}(m)\oplus k_{2}$ (2)
Horner’s method is the most efficient and hence recommended implementation for
polynomial evaluation due to the repeated multiplication with a fixed (secret)
multiplicand. A polynomial of the form $p(x)$ is evaluated at $k$ using
Horner’s method as given in equation 3
$p(k)=a_{0}+k(a_{1}+k(a_{2}+\ldots k(a_{n-1}+a_{n}k))).$ (3)
Hence Horner’s method requires only $n$ multiplications instead of $n(n+1)/2$
multiplications needed by the naive method.
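Equation 3 translates directly into code; a minimal sketch using the Poly1305 prime from the text (the function name is ours):

```python
P1305 = 2**130 - 5  # Bernstein's prime for working with 128-bit coefficients

def horner_eval(coeffs, k, p=P1305):
    """Evaluate p(k) = a_0 + a_1*k + ... + a_n*k^n (mod p) with
    Horner's rule: n multiplications, each against the fixed (secret)
    multiplicand k."""
    acc = 0
    for a in reversed(coeffs):  # start from a_n and fold inward
        acc = (acc * k + a) % p
    return acc
```

Reducing modulo $p$ at every step keeps the accumulator bounded, which is what makes the repeated multiplication by a fixed multiplicand hardware-friendly.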
### 3.5 AES Encryption
An efficient hardware architecture design of Advanced Encryption Standard
(AES-128) is implemented. The AES algorithm as defined by the National
Institute of Standards and Technology (NIST) of the United States, has been
widely accepted. The throughput of our implementation is beyond 10Gbps for the
encryption and decryption process with device XC7VX485T of the Xilinx Virtex
Family. The hardware design approach is entirely based on pre-calculated
look-up tables (LUTs) and parallel executable instances, which results in a less
complex architecture, thereby providing high throughput and low latency. The
speedup achieved is by running the key expansion module independent of the AES
rounds. AES with a larger key size is considered resistant to attack by a
powerful quantum adversary and hence is chosen as the application that uses
the QKD-generated key, thus carrying out secure image, video, or message
encryption.
## 4 Experimental Setup
### 4.1 Description
To validate and analyze the performance of the key distillation engine
designed and implemented, data is collected from quantum experiments performed
at Physics Research Laboratory, Ahmedabad, implementing polarization-based
BB84 QKD protocol [5] and BBM92 QKD protocol. The details of the experimental
setups are shown in Figure 5. In the BB84 setup, Figure 5(a), weak coherent
pulses are generated by using a variable optical attenuator at the output of a
pulsed laser with a repetition rate of 80 MHz. The encoded state is then
propagated in a free space lossy medium with channel transmissivity estimated
at 70%. At Bob’s end, there is a polarization-based detection setup consisting
of a balanced beam splitter (passive random basis selector) with a polarizing
beam splitter (PBS) on the reflected arm (measurement in ${H,V}$) and a
combination of the half-wave plate with PBS (measurement in ${D,A})$ at the
transmitted arm. Photons at the output ports of the PBS are detected by fiber-
coupled avalanche photodiodes (Excelitas SPCM AQRH-14-FC). The BBM92 protocol is
the entangled version of the BB84 protocol and involves pairs
of entangled photons. In this protocol, a common entangled photon source (EPS)
prepares photon pairs and sends them to Alice and Bob through the
quantum channel. In Figure 5(b), a polarization Sagnac interferometer is
used to prepare entangled photons. In this interferometer, a diagonally
polarized $405nm$ continuous-wave laser with an output power of $\sim 5mW$ is
used to pump a 30 mm long Type-0 PPKTP crystal of period $3.425\mu m$. A lens
L1 of focal length $400mm$ is used to focus the pump beam on the crystal to
generate entangled photons. The horizontally polarized pump beam is
transmitted through the DPBS in a clockwise direction, and vertically polarized
light is reflected through the DPBS in a counter-clockwise direction. Since
both the clockwise and counter-clockwise pump beams follow the same path but
in opposite directions inside the Sagnac interferometer and the Type-0 PPKTP
crystal is placed symmetric to the DPBS, the implemented scheme is robust
against any optical path changes to produce SPDC photons in orthogonal
polarizations with ultra-stable phase. At the output of the Sagnac
interferometer, a filter (F) is used to block the pump beam while transmitting
the entangled photons. A prism mirror (PM) is used to separate the entangled
photon pairs. One photon is sent to Alice, and another photon is sent to Bob
through launching optics. The detection setup is the same as BB84. The output
from the SPD is fed into electronics for recording the counts per integration
time and this data is then used to derive the sifted key bits. The sifted key
bits are the input to the key distillation engine.
Figure 5: Experimental scheme of $(a)$ BB84 protocol over 200 meters and of
$(b)$BBM92 protocol, that includes both optical and electronic arrangements.
EPS: entangled photon source, FM: flip mirror, PM: prism mirror, M: mirror, F:
filter, FC: fiber coupler, BS: beam splitter, PBS: polarization beam splitter,
DPBS: dual-wavelength PBS, HWP: half-wave plate, SMF: single-mode fiber, MMF:
multi-mode fiber, SPCM: single-photon counting modules, PPKTP: Periodically
poled potassium titanyl phosphate. LD: laser driver, BD: beam dumper.
## 5 Implementation and Analysis
### 5.1 Security Analysis of the Implementation
Instead of trying to break the theoretical foundations of a given
cryptographic system (whose security is proven to be unconditional), another “attack
philosophy” is to attack its implementation via loopholes, in order to gain
some secret information via unconventional channels. A practical
implementation of QKD protocol is never perfect and the performance of the
protocol depends on the applicability of the security proofs and assumptions
to the real devices [2, 29], as well as on numerous parameters, including
post-processing efficiency and the level of noise added to the signal at each
stage (including the noise added due to attenuation). These assumptions
include the existence of an authenticated channel between Alice and Bob, the
isolation of the trusted devices (i.e., that Eve cannot access Alice and Bob’s
devices), and that the devices perform in the way that they are expected to.
Exploitable imperfections in the trusted parties that allow quantum hacking
are called side channels. We identified one such side channel through
side-channel analysis (SCA) and propose a countermeasure. The authentication
module, implemented as part of the QKD key distillation engine, is assessed to
be a side-channel-resistant hardware implementation. Side-channel analysis of
these classical components contributes significantly to the security model of
the QKD system: due to the composability of QKD security, the security of the
whole system depends on the security of all of its constituent components.
#### 5.1.1 Side-Channel Attack on Implementation
Correlation power analysis of the FPGA implementation is used to reveal the
first 13-bit key part $k_{1}$, which requires a 13-bit key hypothesis. The
attack complexity required to recover $k_{1}$ is therefore $2^{13}$. We
recorded one lakh (100,000) power traces, as shown in Figure 6, to derive the
13-bit key part.
Figure 6: Correlation Power analysis of Poly1305
Attack complexity: The total attack complexity amounts to $5\cdot 2^{13} +
2^{12} + 2^{8} + 3\cdot 2^{7} \approx 6\cdot 2^{13} \approx 2^{15}$;
therefore, the overall complexity to retrieve the entire key is of the order
of $2^{15}$.
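As a quick arithmetic sanity check, the sum can be evaluated exactly (the per-part hypothesis counts are taken from the text; the exact total is 45696, i.e. roughly $2^{15.5}$, consistent with the order-of-$2^{15}$ estimate above):

```python
import math

# Per-key-part hypothesis counts quoted in the text: five 13-bit parts,
# one 12-bit part, one 8-bit part, and three 7-bit parts.
total = 5 * 2**13 + 1 * 2**12 + 1 * 2**8 + 3 * 2**7

print(total)             # 45696, the exact number of key hypotheses
print(math.log2(total))  # ~15.5, i.e. on the order of 2^15
```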
Implication: $t:=h_{k_{1}}(m)\oplus k_{2}$, where $h$ is a universal hash,
$k_{1}$ is fixed and $k_{2}$ is a one-time key. For encrypting each tag, the
one-time-pad key is replaced with a part of the key produced in the previous
QKD session. Note that the described attack yields $k_{1}$, while $(m,t)$ is
public. Thus one can compute $k_{2}=h_{k_{1}}(m)\oplus t$ and hence mount a
forgery or man-in-the-middle (MITM) attack by computing
$t^{\prime}=h_{k_{1}}(m^{\prime})\oplus k_{2}$ for any desired message
$m^{\prime}$. In fact, commercial systems from ID Quantique use
polynomial-evaluation-based universal hashing for authentication and thus
appear to be susceptible to this attack.
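The forgery can be illustrated with a toy polynomial-evaluation hash; the modulus and key values below are illustrative stand-ins, not the actual Poly1305 parameters:

```python
# Toy polynomial-evaluation universal hash over a small prime field
# (a stand-in for Poly1305; all parameters here are illustrative).
P = 2**13 - 1  # small Mersenne prime modulus, chosen for the example

def h(k1, msg_blocks):
    acc = 0
    for m in msg_blocks:
        acc = (acc + m) * k1 % P
    return acc

k1, k2 = 1234, 4321   # k1 leaked via power analysis; k2 is the one-time key
m = [17, 42, 99]      # public message blocks
t = h(k1, m) ^ k2     # tag published by the legitimate parties

# Eve knows k1, m, and t, so she recovers the one-time key ...
k2_recovered = h(k1, m) ^ t
# ... and forges a valid tag for any message of her choosing.
m_forged = [1, 2, 3]
t_forged = h(k1, m_forged) ^ k2_recovered

print(k2_recovered == k2)                # True: one-time key recovered
print(t_forged == h(k1, m_forged) ^ k2)  # True: forged tag verifies
```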
Countermeasure: The proposed MAC construction is as follows. Using the
universal hash function, a new key $k_{1}^{new}$ is generated for each
authentication, as shown in Eq. 4: $k_{1}^{new}:=T_{r}\cdot k_{2}$.
By selecting a random bit string $r=(r_{1},r_{2},\ldots,r_{N+L-1})$ of
$N+L-1$ bits, a Toeplitz matrix $T_{r}$ is constructed, which is multiplied by
the one-time key $k_{2}$, viewed as a vector, to obtain $k_{1}^{new}$. The
random bit string is generated using a true random number generator (TRNG).
$k_{1}^{new}=\begin{bmatrix}r_{L}&r_{L+1}&r_{L+2}&\dots&r_{N+L-1}\\
r_{L-1}&r_{L}&r_{L+1}&\dots&r_{N+L-2}\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
r_{2}&r_{3}&r_{4}&\dots&r_{N+1}\\
r_{1}&r_{2}&r_{3}&\dots&r_{N}\\
\end{bmatrix}\begin{bmatrix}k_{2}^{1}\\ k_{2}^{2}\\ \vdots\\
k_{2}^{N}\\ \end{bmatrix}$ (4)
The protected tag can then be generated by Eq. 5:
$t:=h_{{k_{1}}^{new}}(m)\oplus k_{2}.$ (5)
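A minimal software sketch of this key-refresh step follows. It assumes one row-indexing convention for $T_{r}$ (Eq. 4 lists the rows in the reverse order, which is equivalent up to relabeling) and uses the OS random-number source in place of the hardware TRNG:

```python
import secrets

def toeplitz_refresh(k2_bits, L):
    """Derive a fresh hash key k1_new = T_r * k2 over GF(2), as in Eq. 4.

    k2_bits: the one-time key as a list of N bits; L: length of the new key.
    The random string r (N + L - 1 bits) would come from a TRNG in hardware;
    here the OS RNG stands in for it.
    """
    N = len(k2_bits)
    r = [secrets.randbits(1) for _ in range(N + L - 1)]
    # Row i of the Toeplitz matrix is the sliding window r[i : i + N]
    # (one convention; the paper's reverse row ordering is equivalent).
    k1_new = []
    for i in range(L):
        row = r[i:i + N]
        k1_new.append(sum(a & b for a, b in zip(row, k2_bits)) % 2)
    return k1_new

k2 = [1, 0, 1, 1, 0, 0, 1, 0]
k1_new = toeplitz_refresh(k2, L=4)
print(len(k1_new), all(b in (0, 1) for b in k1_new))  # 4 True
```

Because the matrix-vector product is over GF(2), each bit of $k_{1}^{new}$ is a parity of a random subset of the bits of $k_{2}$, so the refreshed key is uniformly random for a uniformly random $r$.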
#### 5.1.2 Implementation Result
The developed Hadoop-based reconfigurable KDE hardware design is tested with
quantum bits (raw key bits) obtained from experimental setups of the BB84,
BBM92, and COW protocols at the Physical Research Laboratory (PRL), Ahmedabad.
The results are recorded in the tables below: Table 1 records the area and
utilization parameters for the hardware implementation of our design, and
Table 2 records the performance of the implemented design in various
experimental settings. The designs are implemented on a Virtex-7 VX485T Xilinx
FPGA.
Design | LUT (%) | FF (Flip-flop, %) | BRAM (%) | Total Power (W)
---|---|---|---|---
Without Hadoop framework (transmitter) | 9.14 | 4.50 | 23.6 | 3.286
With Hadoop framework (transmitter) | 9.71 | 4.91 | 38.7 | 3.525
Without Hadoop framework (receiver) | 11.99 | 5.56 | 32.2 | 3.837
With Hadoop framework (receiver) | 18.46 | 8.26 | 61.2 | 5.131
Table 1: Utilization report of the integrated key distillation engine design, with and without the Hadoop framework, on the VC707 development board for both transmitter and receiver.
Protocol | Input Block Size (bits) | QBER | No. of Parallel Instances | Key Rate (kbps) | Time (seconds)
---|---|---|---|---|---
 | 1,007,616 | 25% | 1 | 39 | 25.7
 | 1,007,616 | | 3 | 116.7 | 8.6
 | 983,040 | | 4 | 157.9 | 6.2
BB84 | 1,007,616 | 2.63% | 1 | 96.8 | 10.37
 | 1,007,616 | | 3 | 285.1 | 3.52
 | 983,040 | | 4 | 368.1 | 2.66
COW | 1,007,616 | 21.40% | 1 | 45.4 | 22.13
 | 1,007,616 | | 3 | 128.7 | 7.8
 | 983,040 | | 4 | 175.8 | 5.57
BBM92 | 1,007,616 | 9.03% | 1 | 70.6 | 14.22
 | 1,007,616 | | 3 | 207.4 | 4.84
 | 983,040 | | 4 | 273.5 | 3.58
Table 2: Implementation results for the Hadoop framework for multiple
instances of the mapper.
In terms of area, Table 1 shows an increase in the utilization of logic cells
and block RAM units, mainly on the receiver with the Hadoop framework, due to
the computationally intensive LDPC decoder. This can be reduced by adopting
optimized decoder implementations. The experiment is run for each protocol,
and $10\,$Mb of raw key bits are collected and processed. The variation in
QBER across the experiments validates the rate-adaptive LDPC codes, which have
a threshold error-correction capacity of 25% at 90% efficiency. In terms of
performance, the graph in Figure 7 shows that as the QBER increases, the
execution time (latency) also increases, which in turn reduces the rate at
which the secret key can be extracted. If we increase the number of parallel
Hadoop instances in the design, however, the latency can be reduced, thereby
increasing the key rate. The graph thus asserts the relations
$\text{No. of parallel instances}\ \propto\ \text{QBER}$
and
$\text{No. of parallel instances}\ \propto\ \frac{1}{\text{Latency}}.$
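As a rough consistency check on Table 2, the reported key rates for the single-instance runs are reproduced to within a couple of percent by dividing the processed block size by the execution time (an assumption about how the key rate was computed, not stated explicitly in the text):

```python
# (block size in bits, execution time in s, reported key rate in kbps),
# taken from the single-instance rows of Table 2.
rows = [
    (1_007_616, 25.7, 39.0),    # 25% QBER run
    (1_007_616, 10.37, 96.8),   # BB84
    (1_007_616, 22.13, 45.4),   # COW
    (1_007_616, 14.22, 70.6),   # BBM92
]

for bits, t, reported in rows:
    rate_kbps = bits / t / 1000
    assert abs(rate_kbps - reported) / reported < 0.02
    print(f"{rate_kbps:6.1f} kbps vs reported {reported} kbps")
```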
Figure 7: Plot of (a) Number of parallel instances of mapper in the Hadoop
framework vs execution time in seconds, and (b) Execution time, in seconds, vs
Key rate in Kbps, for COW, BB84, and BBM92 QKD protocols at varying QBER
(measured during the quantum experiment).
## 6 Conclusion
In this paper, a design has been presented that strengthens the security and
enables faster key reconciliation for QKD systems using the concept of an
FPGA-based Hadoop MapReduce architecture. A new reconfigurable architecture
that optimizes resources through reusable blocks has been proposed. The
results of our experiments show that the hardware design balances resource
utilization and throughput. Therefore, implementing the Hadoop-based QKD
post-processing functionality directly in hardware is a preferred technique to
meet the computational and critical security needs of commercial QKD systems.
An additional advantage of the FPGA-based key distillation process is improved
performance and compatibility with different types of QKD experimental setups.
Typically, a QKD key distillation engine uses a combination of individual IP
core blocks to build complete post-processing protocols, including a data
encryption module. This makes it possible to build secure quantum key
distillation hardware supporting multiple QKD protocols on volatile FPGA
systems.
## 7 Acknowledgement
This research was carried out with support from the grant provided by the
Department of Science and Technology (DST), within the Ministry of Science and
Technology, Government of India, through the “Quantum enabled Information
Science and Technology (QueST)” program. The authors wish to acknowledge and
thank Dr. Jothi Ramalingam and Mrs. Sarika Menon for their invaluable
assistance and insightful discussions.
## 8 Author Contributions
The theoretical aspect and ideation of the work were done by NV and FS.
Processing of quantum keys, design of the Hadoop framework, and hardware
implementation of all the key distillation algorithms were done by FS, SG, and
HP. The quantum experiment of all the listed QKD protocols was carried out by
PC and RP. The side-channel-resistant authentication scheme was implemented by
DB. The paper was written by FS and NV, and the other authors proofread it.
All authors discussed the results and commented on the manuscript.
## References
* [1] Abdulrahman Alhamali, Nibal Salha, Raghid Morcel, Mazen Ezzeddine, Omar Hamdan, Haitham Akkary, and Hazem Hajj. Fpga-accelerated hadoop cluster for deep learning computations. In 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pages 565–574. IEEE, 2015.
* [2] Davide Bacco, Matteo Canale, Nicola Laurenti, Giuseppe Vallone, and Paolo Villoresi. Experimental quantum key distribution with finite-key security analysis for noisy channels. Nature communications, 4(1):1–8, 2013.
* [3] Charles H Bennett, François Bessette, Gilles Brassard, Louis Salvail, and John Smolin. Experimental quantum cryptography. Journal of cryptology, 5(1):3–28, 1992.
* [4] Daniel J Bernstein. The poly1305-aes message-authentication code. In International workshop on fast software encryption, pages 32–49. Springer, 2005.
* [5] Ayan Biswas, Anindya Banerji, Nijil Lal, Pooja Chandravanshi, Rupesh Kumar, and Ravindra P Singh. Quantum key distribution with multiphoton pulses: an advantage. Optics Continuum, 1(1):68–79, 2022.
* [6] J Lawrence Carter and Mark N Wegman. Universal classes of hash functions. Journal of computer and system sciences, 18(2):143–154, 1979.
* [7] Jeremy Constantin, Raphael Houlmann, Nicholas Preyss, Nino Walenta, Hugo Zbinden, Pascal Junod, and Andreas Burg. An fpga-based 4 mbps secret key distillation engine for quantum key distribution systems. Journal of Signal Processing Systems, 86(1):1–15, 2017.
* [8] Ke Cui, Jian Wang, Hong-Fei Zhang, Chun-Li Luo, Ge Jin, and Teng-Yun Chen. A real-time design based on fpga for expeditious error reconciliation in qkd system. IEEE transactions on information forensics and security, 8(1):184–190, 2012.
* [9] AR Dixon and H Sato. High speed and adaptable error correction for megabit/s rate quantum key distribution. Scientific reports, 4(1):1–6, 2014.
* [10] David Elkouss, Jesus Martinez-Mateo, and Vicente Martin. Information reconciliation for quantum key distribution. arXiv preprint arXiv:1007.1616, 2010.
* [11] Bernd Fröhlich, Marco Lucamarini, James F Dynes, Lucian C Comandar, Winci W-S Tam, Alan Plews, Andrew W Sharpe, Zhiliang Yuan, and Andrew J Shields. Long-distance quantum key distribution secure against coherent attacks. Optica, 4(1):163–167, 2017.
* [12] Muhsin Gurel. A comparative study between rtl and hls for image processing applications with fpgas. University of California, San Diego, 2016.
* [13] K Inoue, E Waks, and Y Yamamoto. Differential-phase-shift quantum key distribution using coherent light. Physical Review A, 68(2):022317, 2003.
* [14] Auguste Kerckhoffs. La cryptographie militaire. Journal des sciences militaires, 9:5–38, 161–191, 1883.
* [15] He Li and Yaru Pang. Fpga-accelerated quantum computing emulation and quantum key distillation. IEEE Micro, 41(4):49–57, 2021.
* [16] Mariem Makni, Mouna Baklouti, Smail Niar, and Mohamed Abid. Hardware resource estimation for heterogeneous fpga-based socs. In Proceedings of the Symposium on Applied Computing, pages 1481–1487, 2017.
* [17] Jesus Martinez-Mateo, Christoph Pacher, Momtchil Peev, Alex Ciurana, and Vicente Martin. Demystifying the information reconciliation protocol cascade. arXiv preprint arXiv:1407.3257, 2014.
* [18] Pedro Moreira, Javier Serrano, Tomasz Wlostowski, Patrick Loschmidt, and Georg Gaderer. White rabbit: Sub-nanosecond timing distribution over ethernet. In 2009 International Symposium on Precision Clock Synchronization for Measurement, Control and Communication, pages 1–5. IEEE, 2009.
* [19] Katayoun Neshatpour, Maria Malik, Avesta Sasan, Setareh Rafatirad, Tinoush Mohsenin, Hassan Ghasemzadeh, and Houman Homayoun. Energy-efficient acceleration of mapreduce applications using fpgas. Journal of Parallel and Distributed Computing, 119:1–17, 2018.
* [20] Daniel KL Oi, Alex Ling, Giuseppe Vallone, Paolo Villoresi, Steve Greenland, Emma Kerr, Malcolm Macdonald, Harald Weinfurter, Hans Kuiper, Edoardo Charbon, et al. Cubesat quantum communications mission. EPJ Quantum Technology, 4:1–20, 2017.
* [21] Asit Kumar Pradhan, Andrew Thangaraj, and Arunkumar Subramanian. Construction of near-capacity protograph ldpc code sequences with block-error thresholds. IEEE Transactions on Communications, 64(1):27–37, 2015.
* [22] Murad Qasaimeh, Kristof Denolf, Jack Lo, Kees Vissers, Joseph Zambreno, and Phillip H Jones. Comparing energy efficiency of cpu, gpu and fpga implementations for vision kernels. In 2019 IEEE international conference on embedded software and systems (ICESS), pages 1–8. IEEE, 2019.
* [23] Umesh Sisodia. Using high-level synthesis to migrate open source software algorithms to semiconductor chip designs. In System Level Flows for SoC Architecture Analysis and Design. CircuitSutra Technologies, 2020.
* [24] Andrea Stanco, Francesco BL Santagiustina, Luca Calderaro, Marco Avesani, Tommaso Bertapelle, Daniele Dequal, Giuseppe Vallone, and Paolo Villoresi. Versatile and concurrent fpga-based architecture for practical quantum communication systems. IEEE Transactions on Quantum Engineering, 3:1–8, 2022.
* [25] Damien Stucki, Nicolas Brunner, Nicolas Gisin, Valerio Scarani, and Hugo Zbinden. Fast and simple one-way quantum key distribution. Applied Physics Letters, 87(19):194108, 2005.
* [26] Akihiro Tanaka, Mikio Fujiwara, Ken-ichiro Yoshino, Seigo Takahashi, Yoshihiro Nambu, Akihisa Tomita, Shigehito Miki, Taro Yamashita, Zhen Wang, Masahide Sasaki, et al. High-speed quantum key distribution system for 1-mbps real-time key generation. IEEE Journal of Quantum Electronics, 48(4):542–550, 2012.
* [27] Nino Walenta, Andreas Burg, Dario Caselunghe, Jeremy Constantin, Nicolas Gisin, Olivier Guinnard, Raphaël Houlmann, Pascal Junod, Boris Korzh, Natalia Kulesza, et al. A fast and versatile quantum key distribution system with hardware key distillation and wavelength multiplexing. New Journal of Physics, 16(1):013047, 2014.
* [28] Weilong Wang, Kiyoshi Tamaki, and Marcos Curty. Measurement-device-independent quantum key distribution with leaky sources. Scientific reports, 11(1):1–11, 2021.
* [29] Feihu Xu, Xiongfeng Ma, Qiang Zhang, Hoi-Kwong Lo, and Jian-Wei Pan. Secure quantum key distribution with realistic devices. Reviews of Modern Physics, 92(2):025002, 2020.
* [30] Shen-Shen Yang, Zhen-Guo Lu, and Yong-Min Li. High-speed post-processing in continuous-variable quantum key distribution based on fpga implementation. Journal of Lightwave Technology, 38(15):3935–3941, 2020.
* [31] Zhiliang Yuan, Alan Plews, Ririka Takahashi, Kazuaki Doi, Winci Tam, Andrew W Sharpe, Alexander R Dixon, Evan Lavelle, James F Dynes, Akira Murakami, et al. 10-mb/s quantum key distribution. Journal of Lightwave Technology, 36(16):3427–3433, 2018.
* [32] Hong-Fei Zhang, Jian Wang, Ke Cui, Chun-Li Luo, Sheng-Zhao Lin, Lei Zhou, Hao Liang, Teng-Yun Chen, Kai Chen, and Jian-Wei Pan. A real-time qkd system based on fpga. Journal of Lightwave Technology, 30(20):3226–3234, 2012.
* [33] Jianyi Zhou, Bo Liu, and Baokang Zhao. A pipeline optimization model for qkd post-processing system. In Information and Communication Technology-EurAsia Conference, pages 472–481. Springer, 2014.
* [34] Michael D Zwagerman. High level synthesis, a use case comparison with hardware description language. Master thesis, 2015.
# Elastic Moduli of the Vertex Model
Michael F. Staddon Center for Systems Biology Dresden, Dresden, Germany Max
Planck Institute for the Physics of Complex Systems, Dresden, Germany Max
Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
Arthur Hernandez Department of Physics, University of California Santa
Barbara, Santa Barbara, CA 93106 Mark J. Bowick<EMAIL_ADDRESS>Kavli
Institute for Theoretical Physics, University of California Santa Barbara,
Santa Barbara, CA 93106 Michael Moshe<EMAIL_ADDRESS>Racah
Institute of Physics, The Hebrew University of Jerusalem, Jerusalem, Israel
91904 M. Cristina Marchetti<EMAIL_ADDRESS>Department of Physics,
University of California Santa Barbara, Santa Barbara, CA 93106
###### Abstract
The vertex model of confluent epithelia describes the apical surface of a
tissue as a tiling of polygonal cells, with a mechanical energy governed by
deviations in cell shape from preferred, or target, area, $A_{0}$, and
perimeter, $P_{0}$. Even in the absence of topological rearrangements, the
model exhibits a solid-solid rigidity transition driven by geometric
incompatibility and tuned by the target shape index of the cell,
$p_{0}=P_{0}/\sqrt{A_{0}}$. For $p_{0}>p_{*}(6)=\sqrt{8\sqrt{3}}\approx 3.72$,
with $p_{*}(6)$ the perimeter of a regular hexagon of unit area, a cell can
simultaneously attain both the preferred area and preferred perimeter. As a
result, the tissue is in a mechanically soft compatible state, with a manifold
of degenerate zero-energy states and zero shear and Young’s moduli. For
$p_{0}<p_{*}(6)$, it is geometrically impossible for any cell to realize the
preferred area and perimeter simultaneously, and the tissue is in an
incompatible rigid solid state with finite prestress. Using a mean-field
approach, we present a complete analytical calculation of both the ground
states and the linear elastic moduli of an ordered vertex model. In addition
to the standard affine transformations, we analyze a relaxation step that
allows for non-affine deformations, leading to a softening of the system. The
origin of the vanishing shear and Young’s moduli in the compatible state is
the presence of zero-energy deformations of cell shape within the manifold of
degenerate ground states. The bulk modulus exhibits a jump discontinuity at
the transition and, for some parameters, can be lower in the rigid state than
in the fluid-like state. The Poisson ratio becomes negative at small $p_{0}$
and intermediate contractility, which lowers the bulk and Young’s moduli. Non-
affine relaxation results in a softer response of the rigid state than
previously reported. Our work provides a unified treatment of linear
elasticity for the vertex model and demonstrates that this linear response is
protocol-dependent.
## I Introduction
Many biological processes, such as morphogenesis [1, 2, 3, 4], wound healing
[5, 6, 7, 8], and cancer metastasis [9, 10], require coordinated motion and
shape changes of many cells. An important open question in biology is how the
large scale mechanics of biological tissue emerges from the properties of
individual cells, which are in turn governed by force-generating proteins
within the cytoskeleton and adhesion molecules between cells [11, 12, 13].
Many theoretical models have been proposed to describe dense epithelia, single
layers of very tightly packed cells [14, 15, 16, 17, 18]. Among these, vertex
models, originally developed from models of soap films [19], have proven a
powerful starting point for capturing the mechanical properties of epithelia.
The vertex model describes the apical surface of a confluent tissue as a
polygonal tiling of the plane (Fig. 1a) [18, 19, 20, 21, 22, 23]. Each polygon
represents a cell, each edge a cell-cell junction, and each vertex a
multicellular junction. Each cell’s mechanics are controlled by multiple
biomechanical processes, which have been proposed to be effectively described
by a mechanical energy determined by deviations of the cell's area and
perimeter from preferred values. These preferred values encode biomechanical
properties such as cadherin concentration and apical ring contractility. Force
balance via energy minimization then determines the position of the vertices
and thus the shape of cells in the tissue. Topological rearrangements
resulting in cell intercalation, cell division and motility have also been
incorporated, and the model has been highly successful in capturing a range of
biological processes, such as tissue growth [24], wound healing [7], and
tissue organization [21].
Figure 1: The vertex model for epithelia. (a) The apical surface of an
epithelium is modeled by a polygonal tiling, with each polygon representing a
cell. (b) Vertex model phase diagram in the $p_{0}$, $r$ plane. Within the
blue region cells are unstable and collapse. Within the green region the
tissue is in an incompatible state, with neither preferred perimeter nor area
achieved, and the ground state is a regular hexagonal lattice; the tissue
responds to shear like a solid. Within the yellow region both preferred
perimeter and area are achieved, cells have a degenerate ground state, and the
tissue has zero shear modulus. The cell shapes show example energy minima,
where a cell may elongate, increase its pointiness via the angle $\phi$, or
increase its shear tilt angle $\theta$ as described in Ref. [25], in order to
increase its perimeter while maintaining unit area.
It has been shown that vertex models exhibit a transition between a fluid-like
state and a solid-like state where cells are jammed and unable to rearrange
(Fig. 1b). This rigidity transition occurs at constant cell density and is
driven by both active processes, such as fluctuations in cell-edge tension and
cell motility [26, 23, 27, 28, 29, 30, 21, 22], as well as by geometric
constraints [31, 25]. Recent work by us and others has shown that even in the
absence of fluctuations and topological rearrangements, vertex models exhibit
a rigidity transition associated with geometrical frustration [31, 32, 25]. In
the rigid or incompatible state cells are unable to achieve the target values
of area and perimeter and the system is under finite tension, with a unique
gapped ground state. In the soft or compatible state, cells achieve both
target area and perimeter and the ground state has zero energy. Due to the
underconstrained nature of the vertex model, however, the liquid ground state
is degenerate as for a given $n$-sided polygon there are many shapes that
preserve area and perimeter. This allows the system to accommodate small shear
deformations by finding a new zero energy shape, resulting in vanishing shear
modulus.
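The critical shape index at which this transition occurs, $p_{*}(6)=\sqrt{8\sqrt{3}}\approx 3.72$ (the perimeter of a regular hexagon of unit area, as quoted in the abstract), can be checked numerically; a minimal Python sketch using the standard closed-form perimeter of a regular $n$-gon of unit area:

```python
import math

# Perimeter of a regular n-gon of unit area: p*(n) = sqrt(4 n tan(pi/n)),
# from the n-gon area formula A = P^2 / (4 n tan(pi/n)).
def shape_index(n):
    return math.sqrt(4 * n * math.tan(math.pi / n))

p_star_6 = shape_index(6)
print(p_star_6)                                             # ≈ 3.722
print(math.isclose(p_star_6, math.sqrt(8 * math.sqrt(3))))  # True
```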
The linear elastic response of an ordered hexagonal vertex model to external
deformations has been examined through calculations of the shear and bulk
moduli [22, 33, 25]. Staple _et al._ [22, 25] evaluated the elastic moduli and
first demonstrated the vanishing of the shear modulus in the compatible state.
More recently, we showed that the vanishing of both the shear and Young's
moduli in the soft regime stems from the degeneracy of the compatible ground
states,
which allows the deformed tissue to spontaneously shear to a new compatible
ground state to accommodate the external deformation [25]. We additionally
discovered that the response is highly singular at the critical point, with
breakdown of linear elasticity and anomalous coupling between compression and
shear, as quantified by the development of a new elastic constant [25].
The above studies only allow for affine deformations of the cells. This
approximation can be viewed as appropriate for determining the short-time
response of the vertex model to strain. The vertex model has, however,
additional degrees of freedom and can relax stress by moving vertices in a
non-affine way. Murisic _et al._ [33] incorporated these effects by
considering the hexagonal lattice as the union of two sub-lattices with a
microscopic shift between them and found that the shear modulus is $2/3$
softer than previously reported. Tong _et al._ [28] used simulations to
measure the shear storage modulus and viscosity in both ordered and disordered
model tissues.
In this paper, we expand upon previous work by incorporating simple non-affine
deformations. Using a mean field model for a hexagonal lattice, we derive
analytic expressions for all the linear elastic moduli of the tissue, and
verify these results using simulations. We show that, away from the critical
point, the elastic constants of a regular vertex model (VM) satisfy the standard relations of
two-dimensional elasticity of isotropic solids. Despite this Hookean
relationship, the mechanical linear response exhibits robust non-affine
contributions that can significantly reduce the elastic constants. For
instance, the bulk modulus can be softer in the rigid state than in the soft
fluid-like state and jumps discontinuously across the solid to fluid
transition. We highlight several novel behaviors of vertex model elasticity,
such as negative Poisson’s ratio and a softening of the tissue as the ratio of
area to perimeter stiffness increases. We verify our analytical results using
numerical vertex model simulations of a regular tissue.
The remainder of the paper is organized as follows. In Section II we state the
vertex model simulation and deformation protocol to extract various elastic
constants. In Section III we introduce the VM and its mean-field
implementation used in the present work, and present a new derivation of the
ground states that allows us to quantify the degeneracy of the compatible
regime. In Section IV, after highlighting the distinction between the affine
and non-affine deformations allowed in our model, we present results for all
the elastic constants. We conclude in Section V with a brief discussion.
## II Vertex model: simulation and deformation protocol
### II.1 The vertex model of epithelia
The vertex model describes cells in a confluent tissue as polygons of area
$A_{\alpha}$ and perimeter $P_{\alpha}$ (Fig. 1a). The tissue energy is
written as
$E_{\text{tissue}}=\frac{1}{2}\sum_{\alpha}K(A_{\alpha}-A_{\alpha
0})^{2}+\frac{1}{2}\sum_{\alpha}\Gamma P_{\alpha}^{2}+\sum_{\langle
ij\rangle}\Lambda_{ij}L_{ij},$ (1)
where $\alpha$ labels individual cells and $\langle ij\rangle$ indexes edges
connecting vertices $i$ and $j$. The first term embodies the energy cost of
cell area deformations, with $K$ the area elasticity and $A_{\alpha 0}$ the
preferred or target area. The second term represents active contractility and
elasticity of the cytoskeleton, with $\Gamma$ the contractility. The third
term represents interfacial energy between neighboring cells, with $L_{ij}$
the length of edge $ij$ and $\Lambda_{ij}$ the associated tension controlled
by the interplay of cell-cell adhesion and cortex contractility. The tension
can become negative when adhesion overcomes contractile surface forces.
The mechanical force on vertex $i$ with position $\mathbf{x}_{i}$ is given by
$\mathbf{F}_{i}=-\frac{\partial E_{\text{tissue}}}{\partial\mathbf{x}_{i}}$.
The tissue rearranges vertices to locally minimize the energy. This can be
described quasi-statically by requiring force balance at each time-step, or
dynamically by assuming that vertices relax according to overdamped dynamics
where viscous drag balances the mechanical forces:
$\gamma\frac{\partial\mathbf{x}_{i}}{\partial t}=\mathbf{F}_{i}$, with
$\gamma$ a friction coefficient.
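The overdamped relaxation can be sketched with a simple explicit-Euler integrator; the force function and parameters below are illustrative stand-ins, not the actual vertex-model implementation:

```python
import numpy as np

def relax(x0, force, gamma=1.0, dt=0.01, steps=2000):
    """Explicit-Euler integration of gamma dx/dt = F(x) (overdamped dynamics).

    A generic sketch: in the vertex model, x would collect all vertex
    positions and F = -dE_tissue/dx; here it is used on a toy energy.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + (dt / gamma) * force(x)
    return x

# Toy energy E = 0.5 * |x - x_min|^2, so F = -(x - x_min): relaxes to x_min.
x_min = np.array([1.0, -2.0])
x_relaxed = relax([0.0, 0.0], lambda x: -(x - x_min))
print(np.allclose(x_relaxed, x_min, atol=1e-6))  # True
```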
As the network relaxes, edges may shorten and cells may shrink, resulting in
topological rearrangements that reconfigure the network. In T1 transitions,
also known as cell-cell intercalations, a junction between two cells shrinks
to a point and a new edge is formed, causing two originally neighboring cells
to lose contact and two previously unconnected cells to form a new interface.
T1 transitions allow the tissue to relax shear stresses through cell
rearrangements rather than cell elongation. A T2 transition, also known as
cell extrusion, occurs as a cell shrinks to zero area and is replaced by a
single vertex. The mechanical state of the tissue is controlled by both
topological rearrangements driven by active processes and geometric
frustration. Both types of processes can drive transitions between rigid and
fluid states. Here we neglect topological rearrangements to focus on the role
of geometry.
We further simplify the model by assuming that all cells have the same
preferred area $A_{\alpha 0}=A_{0}$ and all edges have the same tension
$\Lambda_{ij}=\Lambda$. The interfacial energy can then be written in terms of
the cell perimeter, $\sum_{\langle ij\rangle}\Lambda
L_{ij}=\frac{1}{2}\sum_{\alpha}\Lambda P_{\alpha}$, where the factor of
$\frac{1}{2}$ arises because the interfacial energy of each edge is shared by
two cells. The tissue energy can then be recast in the form
$E_{\text{tissue}}=\frac{1}{2}\sum_{\alpha}K(A_{\alpha}-A_{0})^{2}+\frac{1}{2}\sum_{\alpha}\Gamma(P_{\alpha}-P_{0})^{2}+E_{0}\;,$
(2)
where $P_{0}=-\frac{\Lambda}{2\Gamma}$ is the preferred perimeter, and $E_{0}$
is a constant term obtained from completing the square. Since we care about
the gradient of energy and not the absolute value, we discard $E_{0}$ in the
following.
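For completeness, the completion of the square behind Eq. 2 reads, per cell,

$\frac{1}{2}\Gamma P_{\alpha}^{2}+\frac{1}{2}\Lambda P_{\alpha}=\frac{1}{2}\Gamma\left(P_{\alpha}+\frac{\Lambda}{2\Gamma}\right)^{2}-\frac{\Lambda^{2}}{8\Gamma}=\frac{1}{2}\Gamma\left(P_{\alpha}-P_{0}\right)^{2}-\frac{\Lambda^{2}}{8\Gamma},$

so that $P_{0}=-\frac{\Lambda}{2\Gamma}$ and $E_{0}=-\sum_{\alpha}\frac{\Lambda^{2}}{8\Gamma}$.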
### II.2 Deformation Protocol
To numerically obtain the elastic moduli, we simulate the mechanical response
of the vertex model under different deformations, using a tissue of 4
hexagonal cells in a periodic box with side lengths $L_{x}(0)$ and $L_{y}(0)$
and area $A(0)=L_{x}(0)L_{y}(0)$ determined by energy minimisation,
implemented in the Surface Evolver software [34]. First, we use an
intermediate rigidity
ratio of $r=0.1$, and test the response across a range of preferred values of
the shape index, from $p_{0}=0$ to $p_{0}=4.2$, covering both the compatible
and incompatible regimes.
Figure 2: Strain protocols for measuring elastic moduli of the vertex model.
(a - c) From the ground state, the periodic box lengths and vertex positions
are transformed and constrained according to an affine transformation, shown
by the arrows. From the constrained state, the system is relaxed subject to
tissue-scale or box constraints. (a) The shear modulus is calculated by
applying a shear transformation to the box. In the constrained state, every
edge has the same tension, producing a net force on the vertices, hence this
is not a force-balanced state. After relaxation, forces are balanced through a
non-affine transformation on the vertices. During relaxation the box size is
fixed. (b) The bulk modulus is calculated by applying an isotropic expansion
to the box and vertices. During relaxation the box size is fixed. (c) The
Young’s modulus and Poisson’s ratio are calculated by applying a uniaxial
strain to the box and vertices. During relaxation the height of the box may
change and vertices may move.
To calculate the shear modulus, we deform the ground state (Fig. 2a, left) by
applying an initially affine deformation to vertices and the boundaries:
$x_{i}(\epsilon)=(1+\epsilon/2)x_{i}(0)$,
$y_{i}(\epsilon)=(1+\epsilon/2)^{-1}y_{i}(0)$, and
$L_{x}(\epsilon)=(1+\epsilon/2)L_{x}(0)$ and
$L_{y}(\epsilon)=(1+\epsilon/2)^{-1}L_{y}(0)$, where $\epsilon=0.001$ (Fig.
2a, middle). We then allow the vertex positions to relax to an energy minimum
(Fig. 2a, right), and record the change in tissue energy $\delta E$ before and
after the deformation. The shear modulus is then numerically estimated by
$G=\frac{1}{A(0)}\frac{2\delta E}{\epsilon^{2}}$.
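The finite-difference estimator above can be sketched as a small helper; the toy energy used to verify it below is an assumption for illustration, chosen so that the expected answer is known in advance:

```python
def shear_modulus(delta_E, A0, eps=0.001):
    """Finite-strain estimate G = (1/A0) * 2 dE / eps^2 used in the text."""
    return 2.0 * delta_E / (A0 * eps**2)

# Sanity check on a toy solid with known modulus G0: for the pure-shear
# strain used here, linear elasticity gives dE = 0.5 * G0 * A0 * eps^2.
G0, A0, eps = 2.5, 10.0, 0.001
delta_E = 0.5 * G0 * A0 * eps**2
print(shear_modulus(delta_E, A0, eps))  # 2.5, recovering G0
```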
Figure 3: Non-affine deformations allow for a softer mechanical response. (a)
Shear modulus $G$, (b) bulk modulus $K$, (c) Young’s modulus $Y$, and (d)
Poisson’s ratio $\nu$ against target shape index $p_{0}$ for various rigidity
ratios $r$. The constrained values represent elastic moduli where vertices are
constrained by the given deformation. The relaxed values represent the
moduli allowing for non-affine deformations, where vertices may relax, subject
to the boundary conditions. Dots represent simulated values. Lines represent
analytic values.
In the ground state of the incompatible regime, cell edges are under tension
and meet at 120$\degree$ angles. After the initial affine deformation, the
angles change and the tissue is no longer in a force-balanced configuration
(Fig. 2a, middle). As we allow the tissue to relax, it responds with a non-
affine deformation: vertices with the same y-coordinate alternate
between moving left and moving right during relaxation, returning the angles
between edges to a stable 120$\degree$ configuration (Fig. 2a, right). Such a
deformation cannot be described by a single affine transformation, but by two
affine transformations applied to different subsets of vertices [33].
To demonstrate the importance of this relaxation step, we report the response
to two types of deformation protocols: (i) “constrained” deformations, in
which, after deformation of the bounding box, the cell vertices are not allowed
to move to minimise the energy of the tissue, and (ii) “relaxed”
deformations where the vertices are allowed to adjust their position to
achieve force balance and the global tissue shape remains controlled by the
geometry of the deformed box. Note that in the compatible regime the relaxed
state can also be achieved by allowing the tissue to change its shape [25],
and the resulting linear elastic constants are the same.
For an intermediate rigidity ratio $r=0.1$, we find that the shear modulus
decreases as $p_{0}$ increases and becomes zero at the transition to the
compatible regime. In particular, the relaxation step allows cells to decrease
their perimeter, and thus system energy after strain, resulting in a relaxed
shear modulus that is softer than in the constrained case (Fig. 3a). In the
compatible regime, the tissue is initially under no tension since the
preferred perimeter is achieved. Upon straining the tissue, the perimeter
increases and during the subsequent relaxation step the vertices move to
reduce the perimeter until the preferred perimeter is achieved again, allowing
for the net energy to remain constant, leading to a zero shear modulus.
To calculate the bulk modulus, we apply the isotropic transformation
$x_{i}(\epsilon)=(1+\epsilon)^{\frac{1}{2}}x_{i}(0)$,
$y_{i}(\epsilon)=(1+\epsilon)^{\frac{1}{2}}y_{i}(0)$, and
$L_{x}(\epsilon)=(1+\epsilon)^{\frac{1}{2}}L_{x}(0)$ and
$L_{y}(\epsilon)=(1+\epsilon)^{\frac{1}{2}}L_{y}(0)$, where $\epsilon=0.001$,
such that $A(\epsilon)=(1+\epsilon)A(0)$. During the relaxation step, we allow
both the vertices and box lengths to change, subject to the constraint
$L_{x}(\epsilon)L_{y}(\epsilon)=A(\epsilon)$ (Fig. 2b). The bulk modulus is
then given by $K=\frac{1}{A(0)}\frac{2\delta E}{\epsilon^{2}}$.
In the incompatible regime, force balance requires a constant 120$\degree$
angle between edges, thus the tissue expands isotropically. We find that the
bulk modulus increases as the target shape index $p_{0}$ increases, and is
equal between the relaxed and constrained cases (Fig. 3b).
In the compatible regime, the deformation initially increases the perimeter.
During the relaxation step, the tissue responds in a non-affine way to restore
its perimeter to its preferred value and so energy change only arises from the
area term and we have a bulk modulus $K=1$. Interestingly, this is lower than
the bulk modulus in the incompatible regime just before the transition and
thus, there is a discontinuity in the bulk modulus as $p_{0}$ changes. In
contrast, the constrained case is unable to relax the cells’ perimeters and so
has a higher bulk modulus and does not exhibit the discontinuity (Fig. 3b).
Next, we apply a uniaxial deformation to calculate the Young’s modulus and
Poisson’s ratio: $x_{i}(\epsilon)=(1+\epsilon)x_{i}(0)$ and
$L_{x}(\epsilon)=(1+\epsilon)L_{x}(0)$ (Fig. 2c). We then allow the vertex
positions and box height $L_{y}(\epsilon)$ to relax to minimise energy. The
Young’s modulus is given by $Y=\frac{1}{A(0)}\frac{2\delta E}{\epsilon^{2}}$
and the Poisson’s ratio by
$\nu=-\frac{(L_{y}(\epsilon)-L_{y}(0))/L_{y}(0)}{(L_{x}(\epsilon)-L_{x}(0))/L_{x}(0)}$.
Note that this definition of the Poisson’s ratio is equivalent to that in 2D
elasticity and therefore its values are limited to $-1<\nu<1$. The extreme
case $\nu=1$ corresponds to an incompressible solid, analogous to
$\nu_{3d}=0.5$ for incompressible 3D solids.
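The uniaxial step can be sketched numerically. As a stand-in for the full tissue energy, the hedged sketch below uses the single-cell mean-field energy introduced in Sec. III, with the angle held at its relaxed incompatible-regime value $2\pi/3$ and the cell height playing the role of the box height; the closed forms of Eqs. (36)-(37) serve as reference values, and the parameters and helper names are illustrative choices of ours.

```python
import math

SQ3 = math.sqrt(3.0)
PSTAR2 = 8.0 * SQ3
PSTAR = math.sqrt(PSTAR2)

def perimeter_root(r, p0, iters=200):
    """Ground-state perimeter from the cubic (16), bisected on (p0, p_*)."""
    B = 0.5 * (r * PSTAR2**2 - 2.0 * PSTAR2)
    C = -0.5 * r * PSTAR2**2 * p0
    lo, hi = p0, PSTAR
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid**3 + B * mid + C < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def energy(h, w, r, p0):
    """Single-cell energy, Eq. (10), at the relaxed angle phi = 2*pi/3."""
    return 0.5 * (h * w - 1.0)**2 + 0.5 * r * (2.0 * w + SQ3 * h - p0)**2

def relaxed_height(w, r, p0, lo, hi, iters=300):
    """Minimise the (convex-in-h) energy over cell height by ternary search."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if energy(m1, w, r, p0) < energy(m2, w, r, p0):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

r, p0, eps = 0.1, 3.0, 1e-3
p = perimeter_root(r, p0)
h0, w0, a0 = p / (2.0 * SQ3), p / 4.0, p**2 / PSTAR2

def relaxed(e):
    w = w0 * (1.0 + e)                       # imposed uniaxial stretch
    h = relaxed_height(w, r, p0, 0.5 * h0, 1.5 * h0)
    return h, energy(h, w, r, p0)

h_plus, E_plus = relaxed(eps)
h_minus, E_minus = relaxed(-eps)
_, E_zero = relaxed(0.0)

Y_num = (E_plus + E_minus - 2.0 * E_zero) / (eps**2 * a0)
nu_num = -(h_plus - h_minus) / (2.0 * h0 * eps)

# closed forms, Eqs. (36)-(37)
T = p - p0
Y_formula = T * r * p * (4.0 * a0**2 + r * p * p0) / (a0 * (4.0 * a0**2 + r * p**2))
nu_formula = 1.0 - 2.0 * r * p * T / (4.0 * a0**2 + r * p**2)
```

For these parameters the relaxed cell height decreases under stretching, giving a positive Poisson's ratio below $1$.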
Again, the tissue undergoes a similar non-affine relaxation as under shear
strain, reducing the Young’s modulus compared to the constrained case (Fig. 3c).
In this case, though, we find that the Young’s modulus is non-monotonic. For
$p_{0}$ close to zero, the Young’s modulus increases as $p_{0}$ increases. For
higher $p_{0}$, increasing $p_{0}$ further decreases the Young’s modulus
towards zero at the transition point, after which the Young’s modulus is zero.
Interestingly, the Poisson’s ratio begins negative for small $p_{0}$ and
increases towards a value of 1 as $p_{0}$ increases, before remaining 1 in the
compatible regime (Fig. 3d). In the constrained case, the Poisson’s ratio is
actually lower than in the relaxed case for small $p_{0}$.
This phenomenon highlights the counter-intuitive nature of VM mechanics. In
classical elasticity $\nu=1$ corresponds to incompressible solids, commonly
considered very stiff. Here we find that the tissue approaches $\nu=1$ for
higher values of $p_{0}$ corresponding to compatible tissue with floppy
response. This seeming contradiction is resolved by noting that in this limit
cells can accommodate rest area and perimeter simultaneously and therefore
upon deformation their area remains intact, just as in incompressible solids.
The simulations highlight the complex mechanical behaviour of the vertex model
to applied tissue-level strains, both in its elastic moduli and the vertex-
level non-affine deformations while relaxing the energy. The non-affine
relaxation step allows the tissue to reduce its shear modulus and Young’s
modulus in the incompatible regime, and the bulk modulus in the compatible
regime. However, the simulations do not give an intuitive understanding for
why the bulk modulus is discontinuous, or why we can get a negative Poisson’s
ratio. Thus, in the remainder of the paper, we develop a mean-field theory of
the vertex model that can account for non-affine relaxation of the tissue
under strain to derive analytic expressions for the elastic moduli and
understand the source of the complex phenomena mentioned above.
## III Vertex model: mean-field theory and ground states
### III.1 Mean-field theory of vertex model
To understand the numerical results, we construct a mean-field theory by
assuming that all cells respond identically. In this case the tissue energy is
just $E_{\text{tissue}}=NE$ and one can simply consider the energy $E$ of a
single cell. We work in dimensionless units by normalizing the energy with
$KA_{0}^{2}$ and lengths with $\sqrt{A_{0}}$. The dimensionless single-cell
energy is then given by
$E=\frac{1}{2}(a-1)^{2}+\frac{1}{2}r(p-p_{0})^{2},$ (3)
where $a=A/A_{0}$, $p=P/\sqrt{A_{0}}$, $r=\Gamma/KA_{0}$ is the rigidity
ratio, and $p_{0}=P_{0}/\sqrt{A_{0}}$ is the target shape index of the cell.
Figure 4: Shape parameterization of the vertex model and ground states. (a)
Schematic of the vertex model and cell shape parametrization. Cells are
defined by the lattice height $h$, width $w$, and angle between edges $\phi$.
(b) The ground state in the solid state is a regular hexagonal lattice, with
$\phi=2\pi/3$. (c) The ground state used in the soft state with $\phi>2\pi/3$.
(d - f) Cell area (d), cell perimeter (e), and edge tension (f) vs target
shape index $p_{0}$ for various values of the rigidity ratio $r$. Dots
represent simulated values, lines are the analytical results.
Each cell consists of horizontal edges of length $l_{1}$ and diagonal edges of
length $l_{2}$, with $\phi$ the angle between horizontal and diagonal edges
(Fig. 4a). This parameterization captures the behavior of the tissue observed
in our numerical simulations, where it is the angle between edges that changes
during relaxation. Although cells have additional degrees of freedom, the
description in terms of these three degrees of freedom is sufficient to
capture the ground states of the tissue VM, and the response of the tissue
under shear and bulk deformations in simulations (Fig. 2). To both examine the
ground states and the response to deformation, it is convenient to parametrize
each cell in terms of the height $h$ and width $w$, as shown in Fig. 4a, given
by
$h=2l_{2}\sin\phi\;,\hskip 14.45377ptw=l_{1}-l_{2}\cos\phi\;.$ (4)
Each cell then contributes an area $a=wh$ to the tissue. We stress that the
angle $\phi$ is distinct from the shear tilt angle $\theta$ introduced
previously in Ref. [25], where cells may tilt or untilt in order to change
their perimeter. While the shapes obtained from an initial regular hexagon by
varying the shear tilt angle $\theta$ correspond to affine deformations of the
hexagon, those parametrized by $\phi$ generally correspond to non-affine
deformations of the regular hexagon. Inverting Eqs. (4), we obtain
$\displaystyle l_{1}=w+\frac{1}{2}h\cot\phi\;,$ (5) $\displaystyle
l_{2}=\frac{1}{2}h\csc\phi\;.$ (6)
Cell area and perimeter can then be written as
$\displaystyle a=hw\;,$ (7) $\displaystyle p=2w+hf(\phi)\;,$ (8)
where
$f(\phi)=\frac{2+\cos\phi}{\sin\phi}\;,$ (9)
resulting in an energy
$E=\frac{1}{2}(hw-1)^{2}+\frac{1}{2}r(2w+hf(\phi)-p_{0})^{2}\;.$ (10)
This form makes it evident that the VM energy is underconstrained as area and
perimeter do not uniquely determine cell shape.
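This underconstraint is easy to exhibit: the two correlated-sign roots of Eqs. (14)-(15) are distinct shapes sharing the same area and perimeter. A minimal Python check, with illustrative values $p_{0}=4$ and $\phi=2\pi/3$:

```python
import math

def f(phi):
    """f(phi) = (2 + cos(phi)) / sin(phi), Eq. (9)."""
    return (2.0 + math.cos(phi)) / math.sin(phi)

def area_perimeter(h, w, phi):
    """Cell area a = h w and perimeter p = 2 w + h f(phi), Eqs. (7)-(8)."""
    return h * w, 2.0 * w + h * f(phi)

p0, phi = 4.0, 2.0 * math.pi / 3.0
disc = math.sqrt(p0**2 - 8.0 * f(phi))     # requires p0^2 >= 8 f(phi)

# the two correlated-sign roots of Eqs. (14)-(15): tall-thin vs short-wide
h1, w1 = (p0 + disc) / (2.0 * f(phi)), (p0 - disc) / 4.0
h2, w2 = (p0 - disc) / (2.0 * f(phi)), (p0 + disc) / 4.0

a1, per1 = area_perimeter(h1, w1, phi)
a2, per2 = area_perimeter(h2, w2, phi)
# both shapes realize a = 1 and p = p0, yet (h1, w1) != (h2, w2)
```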
### III.2 Ground states
The ground state configurations are obtained by minimizing the energy with
respect to the cell width $w$, height $h$, and angle $\phi$ and are solutions
of the three coupled equations
$\displaystyle\frac{\partial E}{\partial\phi}$ $\displaystyle=$ $\displaystyle
hrf^{\prime}(\phi)(2w+hf(\phi)-p_{0})=0\;,$ (11) $\displaystyle\frac{\partial
E}{\partial h}$ $\displaystyle=$ $\displaystyle
w(hw-1)+rf(\phi)(2w+hf(\phi)-p_{0})=0\;,$ (12) $\displaystyle\frac{\partial
E}{\partial w}$ $\displaystyle=$ $\displaystyle
h(hw-1)+2r(2w+hf(\phi)-p_{0})=0\;.$ (13)
As shown in previous work, we find a transition at $p_{0}=p_{*}$ between two
distinct states. For a regular lattice of $n$-sided polygons $p_{*}$ is given
by the isoperimetric value $p_{*}(n)=\sqrt{4n\tan(\pi/n)}$, with
$p_{*}(6)=\sqrt{8\sqrt{3}}\approx 3.72$. The isoperimetric inequality $p\geq
p_{*}(n)$ provides a lower bound on the perimeter of an $n$-sided polygon of
given area, attained by the regular polygon [35]. For $p_{0}>p_{*}(n)$ the cell is in a
geometrically compatible regime, where both preferred area and perimeter may
be achieved, and the tissue has zero shear modulus [22, 33, 31, 36] (Fig. 1b).
For $p_{0}<p_{*}(n)$ the cell is in an incompatible regime, where both
preferred area and perimeter cannot be simultaneously satisfied, and the
tissue behaves like a solid by resisting shear deformation [21, 31, 32, 25].
The corresponding ground state of the tissue is a lattice of identical
hexagonal cells (Fig. 1b). As $p_{0}$ is further lowered the cell may become
unstable and collapse to zero area and perimeter (Fig. 1b). Additionally, in a
small range of parameters near the collapsing region more exotic ground states
exist, with mixed lattices of square and octagonal, or dodecagonal and
triangular cells providing lower energy than hexagonal cells [22].
#### III.2.1 Compatible State, $p_{0}>p_{*}(6)$
For $p_{0}>p_{*}(6)$ Eqs. (11)–(13) are identically satisfied by $hw=1$ and
$p=2w+hf(\phi)=p_{0}$, and the ground state energy vanishes (Fig. 4c-e).
We refer to this situation as the compatible state. The ground state
configuration is a family of 6-sided polygons parametrized by the angle
$\phi$, with
$h=\frac{p_{0}\pm\sqrt{p_{0}^{2}-8f(\phi)}}{2f(\phi)}\;,$ (14)
$w=\frac{p_{0}\mp\sqrt{p_{0}^{2}-8f(\phi)}}{4}\;,$ (15)
where both roots are acceptable solutions for a given value of $\phi$,
corresponding to either tall and thin or short and wide cells. It is evident
from Eq. (14) and Eq. (15) that such a solution exists provided $p_{0}^{2}\geq
8f(\phi)$. The function $f(\phi)$ has a minimum at $\phi=\frac{2\pi}{3}$, with
$f(\frac{2\pi}{3})=\sqrt{3}$ corresponding to
$p_{0}=2^{\frac{3}{2}}3^{\frac{1}{4}}=p_{*}(6)$. At this value of $p_{0}$
there is a single zero energy solution that corresponds to a hexagon of unit
area. For $p_{0}>p_{*}$ there is a degenerate continuum of zero energy solutions
corresponding to deformed hexagons of unit area, perimeter $p_{0}$ and
$\phi\in\left[\frac{2\pi}{3},\phi_{m}(p_{0})\right]$, with $\phi_{m}$
determined by $p_{0}^{2}=8f(\phi_{m})$ (Fig. 4c). There exist many other
parameterizations that can give ground state shapes in the compatible regime,
for example, cells becoming tall and thin, cells decreasing the angle $\phi$
to increase their perimeter, or cells tilting as in Ref. [25] (Fig. 4b).
#### III.2.2 Incompatible State, $p_{0}<p_{*}(6)$
For $p_{0}<p_{*}$ the cell cannot simultaneously realize the target area and
perimeter. We refer to this situation as the incompatible state. Eq. (11)
requires $f^{\prime}(\phi)=0$, with solution $\phi=\frac{2\pi}{3}$, such that
$f(\phi)=\sqrt{3}$. An intuitive explanation for this fixed angle is that all
edges are under identical tension, and so by force balance a junction of three
edges must have equally spaced angles. Eqs. (12) and (13) then imply that
$\sqrt{3}h=2w$. This gives a perimeter $p=4w$ and area
$a=\frac{2}{\sqrt{3}}w^{2}=p^{2}/8\sqrt{3}$, which means that the cells are
regular hexagons in the incompatible state (Fig. 4b).
We can combine Eqs. (12) and (13) to obtain a cubic equation for the perimeter
$p^{3}+\left(\frac{rp_{*}^{4}-2p_{*}^{2}}{2}\right)p-\frac{rp_{*}^{4}}{2}p_{0}=0\;,$
(16)
with $p_{*}^{2}=8\sqrt{3}$ and $a=p^{2}/p_{*}^{2}$. The cubic equation can be
solved perturbatively in the limit of low and high rigidity ratio $r$.
At low rigidity ratio, i.e., $r\ll 1/p_{*}^{2}$, we find
$\displaystyle p$
$\displaystyle=p_{*}-p_{*}^{2}\left(\frac{p_{*}-p_{0}}{4}\right)r+\mathcal{O}(r^{2})\;,$
(17) $\displaystyle a$
$\displaystyle=1-p_{*}\left(\frac{p_{*}-p_{0}}{2}\right)r+\mathcal{O}(r^{2})\;.$
(18)
The cell remains close in shape to a hexagon of unit area, with a reduction of
the perimeter relative to the value $p_{*}$ (Fig. 4d-e). The tension, given by
$2r(p-p_{0})=2r\left(p_{*}-p_{0}\right)+\mathcal{O}(r^{2})$, decreases
monotonically as $p_{0}$ increases and vanishes at $p_{0}=p_{*}$, where the
cell reaches the compatible state and is under no tension (Fig. 4f). If
$p_{0}$ becomes too small, the stable configuration collapses to a point with
zero area and perimeter (Fig. 1b).
For high rigidity ratio, i.e., $r\gg 1/p_{*}^{2}$, the perimeter and area can
be expanded in inverse powers of $r$, with the result
$\displaystyle p$
$\displaystyle=p_{0}+\frac{2p_{0}}{p_{*}^{2}}\left(1-\frac{p_{0}^{2}}{p_{*}^{2}}\right)\frac{1}{r}+\mathcal{O}\left(\frac{1}{r^{2}}\right)\;,$
(19) $\displaystyle a$
$\displaystyle=\frac{p_{0}^{2}}{p_{*}^{2}}+\frac{4p_{0}^{2}}{p_{*}^{4}}\left(1-\frac{p_{0}^{2}}{p_{*}^{2}}\right)\frac{1}{r}+\mathcal{O}\left(\frac{1}{r^{2}}\right)\;.$
(20)
In this limit the cell shape index is close to the target shape index (Fig.
4d-e). The tension
$2r(p-p_{0})=\frac{4p_{0}}{p_{*}^{2}}\left(1-\frac{p_{0}^{2}}{p_{*}^{2}}\right)+\mathcal{O}\left(\frac{1}{r}\right)$
is nonmonotonic in $p_{0}$, increasing from almost no tension at $p_{0}=0$
before decreasing as $p_{0}$ approaches $p_{*}$ (Fig. 4f). As the rigidity
ratio increases the tension saturates to the value
$\frac{4p_{0}}{p_{*}^{2}}\left(1-\frac{p_{0}^{2}}{p_{*}^{2}}\right)$, which
has a maximum value of $2^{\frac{3}{2}}3^{-\frac{7}{4}}\approx 0.414$ at
$p_{0}=p_{*}/\sqrt{3}=2^{\frac{3}{2}}3^{-\frac{1}{4}}\approx 2.149$ (Fig. 4f). For $p_{0}\leq 0$, the cell
collapses to a point with zero area and perimeter (Fig. 1b).
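The exact root of Eq. (16) and the two expansions can be compared numerically. In the hedged sketch below (illustrative parameters; helper names ours) the cubic is bisected on the bracket $(p_{0},p_{*})$, which is valid because the cubic evaluates to $p_{0}(p_{0}^{2}-p_{*}^{2})<0$ at $p_{0}$ and to $(rp_{*}^{4}/2)(p_{*}-p_{0})>0$ at $p_{*}$ for $0<p_{0}<p_{*}$:

```python
import math

SQ3 = math.sqrt(3.0)
PSTAR2 = 8.0 * SQ3
PSTAR = math.sqrt(PSTAR2)

def perimeter_root(r, p0, iters=200):
    """Bisect the cubic (16) for the ground-state perimeter on (p0, p_*)."""
    B = 0.5 * (r * PSTAR2**2 - 2.0 * PSTAR2)
    C = -0.5 * r * PSTAR2**2 * p0
    lo, hi = p0, PSTAR
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid**3 + B * mid + C < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p0 = 3.0

# low rigidity ratio: exact root vs Eqs. (17)-(18)
r = 0.01
p = perimeter_root(r, p0)
a = p**2 / PSTAR2
p_low = PSTAR - PSTAR2 * (PSTAR - p0) / 4.0 * r
a_low = 1.0 - PSTAR * (PSTAR - p0) / 2.0 * r

# high rigidity ratio: exact root vs Eqs. (19)-(20)
R = 100.0
P = perimeter_root(R, p0)
A = P**2 / PSTAR2
p_high = p0 + 2.0 * p0 / PSTAR2 * (1.0 - p0**2 / PSTAR2) / R
a_high = p0**2 / PSTAR2 + 4.0 * p0**2 / PSTAR2**2 * (1.0 - p0**2 / PSTAR2) / R
```

At $r=0.01$ the perturbative perimeter agrees with the exact root to a few parts in $10^{4}$, and at $r=100$ to a few parts in $10^{6}$, consistent with the neglected $\mathcal{O}(r^{2})$ and $\mathcal{O}(1/r^{2})$ terms.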
Finally, we note that when topological transitions are allowed, tissues may
also unjam and undergo a solid-to-liquid phase transition when cell
rearrangements cost zero energy. In disordered realizations of the VM, the
unjamming transition occurs at $p_{0}\approx 3.81$, a value close to, but
slightly larger than $p_{*}(5)$ [26]. Intuitively, for the tissue to rearrange
in a T1 transition with zero energy barrier, two hexagonal cells must
momentarily lose an edge and become pentagons while still maintaining their
preferred perimeter and area. Cell motility can further promote fluidity and
lower the transition point [23].
## IV Mechanical response of the vertex model
### IV.1 Deformations protocol
It is evident from Eq. (10) that area and perimeter (or equivalently height
and width of the cell shown in Fig. 4a) do not uniquely specify a polygonal
shape. In the compatible regime there is a family of zero energy shapes
corresponding to either tilted polygonal shapes obtained by affine
deformations or non-affinely deformed polygons parametrized by the angle
$\phi$. In the incompatible regime, if only affine deformations are allowed,
both the ground state and each deformed state are unique for fixed area and
perimeter. Allowing non-affine deformations introduces, however, additional
degrees of freedom that can lower the energy for a given set of parameters.
In the following we examine the response of a tissue initially in a ground
state to an externally imposed strain. The deformation is imposed globally on
the tissue by changing the shape of the bounding box. Such a deformation
uniformly changes the shape of the cells and generally results in a state
where individual vertices are no longer force balanced (Fig. 2a, middle). We
will refer to this state as the “constrained” deformed state. Due to the
presence of hidden degrees of freedom the system can, however, lower its
energy and relax to a state of local force balance. In the incompatible regime
this relaxation can occur via motion of the vertices that corresponds to
non-affine deformations (Fig. 2a, right). In the compatible regime the relaxation can
occur either via non-affine deformations with fixed box shape or through a
global tilting of the tissue, which entails affine cell deformations, as in
Hernandez et al. [25]. The elastic constants measured in the “relaxed” state
of the compatible regime are the same for the two relaxation protocols.
Operationally, constrained deformations are achieved by first fixing either
the cell height, width, or both, and then transforming the vertices according
to the given deformation, as done in Staple _et al._ [22]. We prevent
spontaneous tilting of the tissue, which can be used to soften the mechanical
response using only affine deformations in the compatible regime [25].
We next evaluate the various elastic moduli of the vertex model. As we will
see below, a new result of our work is that in the incompatible regime cells
can find new deformed states by relaxing through non-affine deformations (Fig.
2), resulting in a softer response than obtained in previous studies [22]
(Fig. 3).
### IV.2 Shear Modulus
To calculate the shear modulus of the tissue, we apply an area-preserving pure
shear deformation, corresponding to $w\rightarrow w(1+\epsilon/2)$ and
$h\rightarrow h(1+\epsilon/2)^{-1}$, with $\epsilon$ the strain. We allow for
a non-affine deformation to relax the tissue by minimising energy with respect
to the angle $\phi$ for each value of strain (Fig. 4). The shear modulus is
defined as
$G=\frac{1}{a}\left.\frac{\partial^{2}}{\partial\epsilon^{2}}\left(\min_{\phi}E\right)\right|_{\epsilon=0}\;,$
(21)
where the factors of $\epsilon/2$ in the deformation reflect that we are
considering pure shear rather than simple shear deformations. As area is preserved under pure
shear, we only need to consider the energy cost due to changes in perimeter.
#### IV.2.1 Compatible State, $p_{0}>p_{*}$
In the compatible case, cells can accommodate shear and maintain their area
and perimeter at the target values $a=1$ and $p=p_{0}$ by changing shape,
i.e., by adjusting the angle $\phi$ to a value other than $2\pi/3$. The
perimeter of the deformed cell is given by
$p(\epsilon,\phi)=2w(1+\epsilon/2)+hf(\phi)/(1+\epsilon/2)$. The cell can
maintain $p=p_{0}$ by deforming to a new compatible ground state corresponding
to an angle $\phi^{*}$ given by the solution of $p(\epsilon,\phi^{*})=p_{0}$.
Clearly the energy remains zero, demonstrating that the shear deformation costs
no energy and
$G=0$ (22)
for all rigidity ratios and $p_{0}>p_{*}$ (Fig. 5a). This of course only holds
up to a maximum value of strain determined by the angle $\phi_{m}(p_{0})$.
Beyond this value the fluid-like compatible tissue stiffens and acquires a
finite shear modulus, as noted in Ref. [37] and, more recently, in Ref. [36].
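This zero-energy relaxation can be demonstrated numerically. The hedged sketch below (illustrative values; helper names ours) starts from an interior member of the degenerate ground-state family, $\phi_{0}=2.3$ with $h$ and $w$ taken from Eqs. (14)-(15), so that a small pure shear of either sign can be absorbed by the angle; the relaxed energy stays at zero and the measured shear modulus vanishes:

```python
import math

def f(phi):
    """f(phi) = (2 + cos(phi)) / sin(phi), Eq. (9)."""
    return (2.0 + math.cos(phi)) / math.sin(phi)

def energy(h, w, phi, r, p0):
    """Single-cell energy, Eq. (10)."""
    a = h * w
    p = 2.0 * w + h * f(phi)
    return 0.5 * (a - 1.0)**2 + 0.5 * r * (p - p0)**2

def min_over_phi(h, w, r, p0, lo=2.2, hi=2.4, iters=200):
    """Relax the angle by ternary search; f(phi) is monotonic on this
    bracket (phi > 2*pi/3), so the energy is unimodal for these parameters."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if energy(h, w, m1, r, p0) < energy(h, w, m2, r, p0):
            hi = m2
        else:
            lo = m1
    phi = 0.5 * (lo + hi)
    return energy(h, w, phi, r, p0)

r, p0, eps = 0.1, 4.0, 1e-3           # compatible: p0 > p_* ~ 3.722
phi0 = 2.3                            # interior member of the ground-state family
disc = math.sqrt(p0**2 - 8.0 * f(phi0))
h0 = (p0 + disc) / (2.0 * f(phi0))    # Eq. (14)
w0 = (p0 - disc) / 4.0                # Eq. (15)
E0 = energy(h0, w0, phi0, r, p0)      # zero: a = 1 and p = p0

# pure shear preserves the unit area; relaxing phi restores p = p0
E_plus = min_over_phi(h0 / (1.0 + 0.5 * eps), w0 * (1.0 + 0.5 * eps), r, p0)
E_minus = min_over_phi(h0 / (1.0 - 0.5 * eps), w0 * (1.0 - 0.5 * eps), r, p0)
G_num = (E_plus + E_minus - 2.0 * E0) / eps**2
```

Starting from the symmetric hexagon $\phi_{0}=2\pi/3$ instead, the angle can only absorb one sign of the shear (it sits at the minimum of $f$), which is why an interior ground state is used here.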
Figure 5: Elastic moduli of the vertex model. (a) Shear modulus, (b) bulk
modulus, (c) Young’s modulus, and (d) Poisson’s ratio against target shape
index $p_{0}$ and rigidity ratio $r$.
#### IV.2.2 Incompatible Case, $p_{0}<p_{*}$
In the incompatible case cell edges are under uniform tension. By force
balance, this implies that the angle between edges remains
$\phi=\frac{2\pi}{3}$ even under small deformations at the tissue scale. The
ground state configuration is a regular hexagon with perimeter
$p=2w+\sqrt{3}h$. Using Eq. (21) and the relations for height and width in
terms of the perimeter, $h=\frac{1}{2\sqrt{3}}p$ and $w=\frac{1}{4}p$, we
obtain
$G=\frac{r(p-p_{0})p}{4a}\;.$ (23)
In the rigid, incompatible state the shear modulus is a monotonically
increasing function of $r$ and vanishes at the transition $p_{0}=p_{*}$ (Fig.
3a, Fig. 5a), in agreement with earlier results [33]. The non-affine
deformations of the relaxed tissue allow for a softer response: the constrained
shear modulus, obtained when the vertices are fixed by the affine shear strain,
is a factor of $3/2$ stiffer [37, 22], as confirmed by
simulations (Fig. 3a).
For small rigidity ratio ($r\ll 1/p_{*}^{2}$), we can expand $G$ in powers of
$r$, with the result
$G=\frac{1}{4}p_{*}(p_{*}-p_{0})r+\mathcal{O}(r^{2})\;.$ (24)
The opposite limit of large rigidity ratio ($r\gg 1/p_{*}^{2}$) yields
$G=\frac{1}{2}\left(1-\frac{p_{0}^{2}}{p_{*}^{2}}\right)-\frac{2p_{0}^{2}}{p_{*}^{6}}\left(p_{*}^{2}-p_{0}^{2}\right)\frac{1}{r}+\mathcal{O}\left(\frac{1}{r^{2}}\right)\;.$
(25)
### IV.3 Bulk Modulus
To calculate the bulk modulus, we change the area $a\rightarrow a(1+\epsilon)$
by rescaling the height $h\rightarrow h(1+\epsilon)^{\frac{1}{2}}$ and width
$w\rightarrow w(1+\epsilon)^{\frac{1}{2}}$, and allow the angle $\phi$ to vary
to minimize the deformation energy. The bulk modulus is then given by
$K=\frac{1}{a}\left.\frac{\partial^{2}}{\partial\epsilon^{2}}\left(\min_{\phi}E\right)\right|_{\epsilon=0}\;,$
(26)
with
$\frac{\partial^{2}E}{\partial\epsilon^{2}}=a^{2}+r(p-p_{0})\frac{\partial^{2}p}{\partial\epsilon^{2}}+r\left(\frac{\partial
p}{\partial\epsilon}\right)^{2}\;.$ (27)
To evaluate this expression we need to consider separately the compatible and
incompatible states.
#### IV.3.1 Compatible state, $p_{0}>p_{*}$
We have previously shown that in the compatible case the angle $\phi$ can
adjust to maintain a fixed cell perimeter under small deformations. Thus
$\left.\frac{\partial p}{\partial\epsilon}\right|_{\epsilon=0}=0$ and
$\left.\frac{\partial^{2}p}{\partial\epsilon^{2}}\right|_{\epsilon=0}=0$, and
the bulk modulus is simply
$K=1$ (28)
for all $r$ and $p_{0}>p_{*}$ (Fig. 3b, Fig. 5b).
By contrast, if we allow for only affine deformations then the perimeter
expands isotropically $p(\epsilon)=(1+\epsilon)^{\frac{1}{2}}p(0)$. In this
case $\left.\frac{\partial
p}{\partial\epsilon}\right|_{\epsilon=0}=\frac{1}{2}p(0)$ and
$\left.\frac{\partial^{2}p}{\partial\epsilon^{2}}\right|_{\epsilon=0}=-\frac{1}{4}p(0)$,
giving a bulk modulus equal to
$K_{\text{affine}}=1+\frac{1}{4}rp_{0}^{2},$ (29)
which can be significantly higher than the non-affine result for high $r$,
emphasising the need to consider non-affine displacements (Fig. 2b).
#### IV.3.2 Incompatible State, $p_{0}<p_{*}$
In the incompatible state for $p_{0}<p_{*}$ the angle that minimizes energy
remains $\phi=\frac{2\pi}{3}$ for small perturbations to cell height and width
to ensure tension balance at the cell vertices. The cell then expands
isotropically, such that $p(\epsilon)=(1+\epsilon)^{\frac{1}{2}}p(0)$,
resulting in a bulk modulus
$K=a+\frac{1}{4a}rpp_{0}\>$ (30)
shown in Fig. 3b and Fig. 5b. We note that as $p_{0}$ approaches the critical
value $p_{*}(6)$ from below, the bulk modulus has the value
$\lim_{p_{0}\rightarrow p_{*}^{-}}K=1+\frac{1}{4}rp_{*}^{2}$. On the other
hand, in the compatible regime $K=1$. Thus the bulk modulus exhibits a jump
discontinuity at the critical point separating compatible and incompatible
states. In contrast, if vertex positions are fixed by uniform dilation without
relaxation, the bulk modulus is continuous and higher in the compatible region
(Fig. 3b).
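The closed form (30) and the jump at the transition can be checked with a short hedged sketch, again using the single-cell energy of Sec. III with the angle fixed at $2\pi/3$; the parameter values and helper names are illustrative choices of ours:

```python
import math

SQ3 = math.sqrt(3.0)
PSTAR2 = 8.0 * SQ3
PSTAR = math.sqrt(PSTAR2)

def perimeter_root(r, p0, iters=200):
    """Ground-state perimeter from the cubic (16), bisected on (p0, p_*)."""
    B = 0.5 * (r * PSTAR2**2 - 2.0 * PSTAR2)
    C = -0.5 * r * PSTAR2**2 * p0
    lo, hi = p0, PSTAR
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid**3 + B * mid + C < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bulk_modulus(r, p0, eps=1e-3):
    """Finite-difference K in the incompatible state (phi stays 2*pi/3)."""
    p = perimeter_root(r, p0)
    h0, w0, a0 = p / (2.0 * SQ3), p / 4.0, p**2 / PSTAR2

    def E(e):
        s = math.sqrt(1.0 + e)           # h, w -> h s, w s dilates a by (1+e)
        h, w = h0 * s, w0 * s
        return 0.5 * (h * w - 1.0)**2 + 0.5 * r * (2.0 * w + SQ3 * h - p0)**2

    K_num = (E(eps) + E(-eps) - 2.0 * E(0.0)) / (eps**2 * a0)
    K_formula = a0 + r * p * p0 / (4.0 * a0)   # Eq. (30)
    return K_num, K_formula

K_num, K_formula = bulk_modulus(0.1, 3.0)

# the jump at the transition: K -> 1 + r p_*^2 / 4 from below, but K = 1 above
K_below, _ = bulk_modulus(0.1, 3.72)   # just below p_* ~ 3.7224
K_limit = 1.0 + 0.25 * 0.1 * PSTAR2
```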
In the limit of low rigidity ratio ($r\ll 1/p_{*}^{2}$), we find
$K=1+\frac{1}{4}p_{*}(3p_{0}-2p_{*})r+\mathcal{O}(r^{2})\;.$ (31)
The bulk modulus increases with $p_{0}$ up to the critical value, at which
point it discontinuously jumps to $1$ for all $p_{0}>p_{*}$, independent of
$r$. Interestingly, for $p_{0}<\frac{2}{3}p_{*}$ the bulk modulus of the
incompatible solid is lower than that of the compatible fluid, suggesting that
contractility can actually reduce the bulk stiffness of the tissue.
Additionally, increasing the rigidity ratio further reduces the bulk modulus
for low $p_{0}$.
In the limit of high rigidity ratio ($r\gg 1/p_{*}^{2}$), we find
$K=\frac{1}{4}p_{*}^{2}r+\left(\frac{3}{2}\frac{p_{0}^{2}}{p_{*}^{2}}-\frac{1}{2}\right)+\mathcal{O}\left(\frac{1}{r}\right)\;.$
(32)
Thus the bulk modulus increases with the rigidity ratio.
We find that for low $p_{0}$ the bulk modulus can be significantly lower in
the rigid than in the fluid state, which can affect the rate of spreading
monolayers [38]. This is due to the fact that at small $p_{0}$ cells have a
smaller area while the energy required to deform the cell is proportional to
the square of the area change. Therefore it costs more energy to strain a
single cell than to strain two cells with half the area, similar to the
reduction in effective stiffness obtained when springs are placed in series.
The bulk modulus is also a non-monotonic function of the rigidity ratio. For
small $r$ the bulk modulus decreases with $r$ as the size of the cell
decreases, reaching a minimum near $r=2/p_{*}^{2}\approx 0.144$ before
increasing linearly in $r$ for high $r$, due to the growing contribution from
the perimeter elasticity.
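The non-monotonic dependence on $r$ follows directly from Eq. (30) together with the ground-state perimeter from Eq. (16). The hedged sketch below (illustrative value $p_{0}=1$; helper names ours) confirms that $K$ is larger at both small and large $r$ than near $r=2/p_{*}^{2}$, and is below the compatible value $K=1$ in the solid state at this small $p_{0}$:

```python
import math

SQ3 = math.sqrt(3.0)
PSTAR2 = 8.0 * SQ3
PSTAR = math.sqrt(PSTAR2)

def perimeter_root(r, p0, iters=200):
    """Ground-state perimeter from the cubic (16), bisected on (p0, p_*)."""
    B = 0.5 * (r * PSTAR2**2 - 2.0 * PSTAR2)
    C = -0.5 * r * PSTAR2**2 * p0
    lo, hi = p0, PSTAR
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid**3 + B * mid + C < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def bulk_modulus(r, p0):
    """Closed form K = a + r p p0 / (4 a), Eq. (30), in the solid state."""
    p = perimeter_root(r, p0)
    a = p**2 / PSTAR2
    return a + r * p * p0 / (4.0 * a)

p0 = 1.0
r_mid = 2.0 / PSTAR2                   # ~ 0.144, near the reported minimum
K_low, K_mid, K_high = (bulk_modulus(r, p0) for r in (0.03, r_mid, 0.5))
```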
### IV.4 Young’s Modulus and Poisson’s Ratio
Next we calculate the Young’s modulus and Poisson’s ratio (Fig. 2c) by
stretching the width of the cell $w\rightarrow w(1+\epsilon)$ while leaving
the cell height $h$ and angle $\phi$ free to minimize the energy. The Young’s
modulus is defined as
$Y=\frac{1}{a}\left.\frac{\partial^{2}}{\partial\epsilon^{2}}\left(\min_{h,\phi}E\right)\right|_{\epsilon=0}$
(33)
and the Poisson’s ratio as
$\nu=-\frac{1}{h}\frac{\partial
h}{\partial\epsilon}=-\frac{w}{h}\frac{\partial h}{\partial w}.$ (34)
To evaluate $Y$ and $\nu$ we use the relationship between the linear elastic
constants, $Y=\frac{4KG}{K+G}$ and $\nu=\frac{K-G}{K+G}$, which have been
shown to hold away from the critical point.
#### IV.4.1 Compatible state, $p_{0}>p_{*}$
In the compatible state, the ground state degeneracy allows the cell to
achieve the target shape index and area for small strain by reducing cell
height and finding values of the angle $\phi$ different from $2\pi/3$, i.e.,
by changing its shape, with no energetic cost. As a result for $p_{0}>p_{*}$
we find
$Y=0\;,~{}~{}~{}~{}~{}\nu=1\;,$ (35)
for all $r$ (Fig. 3c-d, Fig. 5c-d).
#### IV.4.2 Incompatible case, $p_{0}<p_{*}$
It has been shown that away from the critical point the elastic constants of
the VM satisfy the familiar relation of linear elasticity of isotropic solids
[25]. We can therefore use the relations $Y=\frac{4KG}{K+G}$ and
$\nu=\frac{K-G}{K+G}$ to evaluate $Y$ and $\nu$ in the incompatible regime,
with the result
$\displaystyle Y$
$\displaystyle=(p-p_{0})\frac{rp\left(4a^{2}+rpp_{0}\right)}{a(4a^{2}+rp^{2})}\;,$
(36) $\displaystyle\nu$
$\displaystyle=1-\frac{2rp(p-p_{0})}{4a^{2}+rp^{2}}\;.$ (37)
We find that the Young’s modulus is a non-monotonic function of both target
shape index and rigidity ratio (Fig. 3c, Fig. 5c). The nonmonotonicity with
$p_{0}$ is most pronounced at intermediate values of $r$, where at small
$p_{0}$ the Young’s modulus increases, rather than decreases, with increasing
$p_{0}$.
At both high and low rigidity ratio, the Poisson’s ratio remains close to $1$
for all $p_{0}$, indicating that the cell preserves its area under
deformations (Fig. 3d, Fig. 5d). At intermediate values of the rigidity ratio
and small $p_{0}$, the nonmonotonicity of the Young’s modulus results in a
negative Poisson’s ratio, which indicates that a tissue stretched in the
$x$-direction also expands in the $y$-direction.
In comparison to the constrained response, we find that the relaxed response
gives a softer Young’s modulus for all rigidity ratio and target shape index
values, since the shear modulus is also softer by a factor of $\frac{2}{3}$
(Fig. 3c). Similarly, the Poisson’s ratio is higher in the relaxed case
for all $p_{0}<p_{*}$ (Fig. 3d).
In the limit of low rigidity ratio ($r\ll 1/p_{*}^{2}$), the Young’s modulus
and Poisson’s ratio are given by
$\displaystyle Y$ $\displaystyle=p_{*}(p_{*}-p_{0})r+\mathcal{O}(r^{2})\;,$
(38) $\displaystyle\nu$
$\displaystyle=1+\frac{1}{2}p_{*}(p_{0}-p_{*})r+\mathcal{O}(r^{2})\;,$ (39)
showing that when $p_{0}$ is increased towards the critical point from the
solid side ($p_{0}\rightarrow p_{*}^{-}$) the Young’s modulus vanishes and the
Poisson’s ratio increases towards $1$.
In the limit of high rigidity ratio ($r\gg 1/p_{*}^{2}$), we obtain
approximate expressions by expanding in $1/r$ as
$\displaystyle Y$
$\displaystyle=2\left(\frac{p_{*}^{2}-p_{0}^{2}}{p_{*}^{2}}\right)+\mathcal{O}\left(\frac{1}{r}\right)\;,$
(40) $\displaystyle\nu$
$\displaystyle=1-\frac{4(p_{*}^{2}-p_{0}^{2})}{p_{*}^{4}}\frac{1}{r}+\mathcal{O}\left(\frac{1}{r^{2}}\right)\;.$
(41)
The Young’s modulus also has a maximum value of $2$ at $p_{0}=0$.
### IV.5 Origin of negative Poisson’s ratio
We can understand why certain cell parameters give a positive or negative
Poisson’s ratio by looking at how the energy gradient with respect to cell
height changes as we change the cell width. The gradient $\frac{\partial
E}{\partial h}$ can be thought of as the effective force acting on the height
of the cell, given by
$\frac{\partial E}{\partial h}=w(hw-1)+\sqrt{3}r(2w+\sqrt{3}h-p_{0}).$ (42)
In the ground state, this will be zero. Then, if we vary the cell width but
keep the height fixed we can measure the change in force as
$\frac{\partial^{2}E}{\partial w\partial h}=2hw-1+2\sqrt{3}r.$ (43)
When this value is positive, then as cell width is increased, the effective
force acting on cell height increases and so the cell height will decrease as
it relaxes to the energy minimum. Consequently, the sign of this value is the
same sign as the Poisson’s ratio.
The second term $2\sqrt{3}r$ comes from the perimeter contribution in the VM
and accounts for energy changes due to perimeter elasticity. Since $p\geq
p_{0}$, the cell is under tension and the perimeter term aims to shrink the
cell. Increasing the width further increases the perimeter and so tension
increases, providing more force to shrink the cell. Thus, the perimeter
elasticity always acts to shrink the cell and contributes to a positive
Poisson’s ratio.
The first term, $2hw-1=2a-1$, represents the energy change due to area
elasticity. We can write this as $2w\left(h-1/(2w)\right)$, which we can think of as a
spring-like force with stiffness $2w$ and target height $1/(2w)$. As width
increases, the height becomes closer to the target height, reducing the
strain. At the same time, the effective stiffness $2w$ increases, increasing
the pressure. Thus there is a trade-off between less restoring force on the
cell area versus increased effectiveness of changes in cell height. Whether the
net effect increases or decreases the cell height depends on the
size of the cell area: when cell area $a<\frac{1}{2}$ the area term acts to
increase cell height when width is increased, and for $a>\frac{1}{2}$ the area
term reduces the cell height.
For high rigidity ratio, the perimeter term dominates and so an increase in
cell width leads to a reduction in cell height. For low rigidity ratio, the
area term dominates and the cell area is close to $1$ (Fig. 4d), thus an
increase in cell width reduces the area pressure and cell height decreases.
However, in the intermediate regime at low $p_{0}$ the cell area is small,
meaning the area term acts to increase cell height, and the area contribution
and perimeter contributions are of comparable size, resulting in negative
Poisson’s ratio.
We can calculate the transition to a negative Poisson’s ratio exactly at
$p_{0}=0$, which corresponds to the situation where the contribution to cell
edge tension from cortical contractility and cell-cell adhesion precisely
balance. In this limit the equation for the ground state perimeter, Eq. 16,
becomes
$p\left(p^{2}+\frac{rp_{*}^{4}-2p_{*}^{2}}{2}\right)=0$ (44)
with solution
$p = p_{*}\sqrt{1-\frac{1}{2}rp_{*}^{2}}$ (45)
and
$a = 1-\frac{1}{2}rp_{*}^{2}$ (46)
for $r<2/p_{*}^{2}\approx 0.144$. For $r>2/p_{*}^{2}$ the cell is unstable and
collapses to zero area. We can calculate whether cell height increases or
decreases when width is increased by calculating how the effective force on
the height, $\frac{\partial E}{\partial h}$, changes with width. Substituting
our formula for area we find
$\frac{\partial^{2}E}{\partial h\partial
w}=1-\frac{3}{4}rp_{*}^{2}=1-6\sqrt{3}r$ (47)
where we have used $p_{*}^{2}=8\sqrt{3}$. Thus for
$r>\frac{4}{3p_{*}^{2}}\approx 0.096$ and $p_{0}=0$ the tissue has a negative
Poisson’s ratio.
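The thresholds quoted above follow directly from $p_*^2 = 8\sqrt{3}$, the squared shape index of a unit-area regular hexagon, and can be verified with a few lines of arithmetic (our own check, not part of the paper):

```python
import math

# p_star^2 = 8*sqrt(3): squared shape index of a regular hexagon with unit area.
p_star_sq = 8 * math.sqrt(3)
assert abs(math.sqrt(p_star_sq) - 3.722) < 1e-3        # p_star ~ 3.722

# Collapse threshold at p0 = 0: the ground state exists for r < 2 / p_star^2.
r_collapse = 2 / p_star_sq
assert abs(r_collapse - 0.144) < 1e-3

# Sign change of d^2E/(dh dw) = 1 - 6*sqrt(3)*r = 1 - (3/4)*p_star^2*r:
r_auxetic = 1 / (6 * math.sqrt(3))                     # = 4 / (3 * p_star^2)
assert abs(r_auxetic - 4 / (3 * p_star_sq)) < 1e-12
assert abs(r_auxetic - 0.096) < 1e-3

# Negative Poisson ratio at p0 = 0 in the window 0.096 < r < 0.144:
assert r_auxetic < r_collapse
```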
## V Conclusions
In this paper we studied the linear response of the 2D vertex model by
calculating the shear modulus, bulk modulus, Young’s modulus and Poisson’s
ratio, using a mean-field approach that allows for a class of non-affine
deformations, which agree well with numerical simulations. We also provide
approximate expressions in the limit of high and low rigidity ratio. Our
calculations match previous results showing a rigidity transition controlled
by purely geometric effects and tuned by the target shape index $p_{0}$.
For $p_{0}<p_{*}=3.722$, the ground state configuration of the tissue is a
regular hexagonal lattice of cells with area and perimeter different from
their target value. In this regime, referred to as incompatible, the
tissue is rigid, with a gapped or prestressed ground state and finite shear
modulus.
For $p_{0}>p_{*}$, the cells can attain their target area and shape index and
there is a continuum manifold of degenerate zero energy states corresponding
to deformed hexagons and parametrized by the angle $\phi$ between adjacent
sides of the $6$-sided polygon (Fig. 4a), with
$\phi\in[2\pi/3,\phi_{m}(p_{0})]$. The tissue is soft because it can
accommodate shear deformations with no energy cost by simply finding a new
(deformed) zero energy state, i.e., by changing its shape. For
$\phi>\phi_{m}(p_{0})$ the tissue stiffens and acquires a finite shear
modulus. The angle $\phi_{m}$ can therefore be interpreted as the threshold of
shear-induced stiffening of the liquid state, as first mentioned by [37], and
recently shown by [36].
The linear response of the tissue to a uniaxial strain reveals a subtle
behavior. In the compatible regime the tissue has vanishing Young’s modulus
and Poisson’s ratio is $\nu=1$, indicating that the tissue area is conserved.
It is important to note, however, that the linear elastic response depends on
the protocol of the deformation: the Young’s modulus indeed vanishes if the
tissue is allowed to change its shape in response to a uniaxial strain, but if
the tissue is constrained to retain its shape when stretched uniaxially, then
one would measure a finite Young’s modulus.
In the incompatible regime, the Poisson’s ratio can become negative at small
$p_{0}$ and intermediate values of the rigidity ratio. This arises because,
when the cell area is small, a cell stretched in one direction must increase
its height in the perpendicular direction to keep its area close to the target
value.
The bulk modulus of the tissue has a jump discontinuity at the transition
point $p_{0}=p_{*}$, where it changes from a larger value in the solid state
to a smaller value in the soft state. This discontinuity arises again because
cells in the compatible regime can change shape to maintain their perimeter
under small changes in area. In contrast, in the incompatible regime cells
must increase their perimeter away from the preferred value, resulting in an
increase in energy. We also find that in the incompatible regime, the bulk
modulus can decrease below $1$ for small rigidity ratio. This indicates that
cell contractility can reduce the stiffness of the tissue, resulting in a
larger bulk modulus in the “soft” phase than in the “solid” phase.
Our results highlight the complex linear elastic behaviour that can arise from
the simplest version of the vertex model due to its underconstrained nature.
It would be interesting to extend our calculations to the case of disordered
cell networks. Additionally, we highlight the importance of allowing for
unconstrained degrees of freedom, in this case non-affine deformations, to
relax the system and give a softer mechanical response to strain.
In conclusion, our work demonstrates that the vertex model, thought of as a
collection of geometric constraints rather than a reference ground state
structure, can engender interesting linear mechanical responses. The linear
response exhibits a strong non-affine contribution under uniaxial compression
and shear, as well as a negative Poisson’s ratio. Typically, these two
phenomena in crystalline solids require special lattice constructions, whereas
in the vertex model exotic mechanical response can be achieved by tuning the
relative competition between area and perimeter constraints via $r$ and
geometric compatibility via $p_{0}$.
## VI Acknowledgements
M.C.M. and A.H. were supported by the National Science Foundation Grant No.
DMR-2041459. M.M. was supported by the Israel Science Foundation grant No.
1441/19. This research was supported in part by the National Science
Foundation under Grant No. NSF PHY-1748958.
## References
* Martin _et al._ [2009] A. C. Martin, M. Kaschube, and E. F. Wieschaus, Pulsed contractions of an actin–myosin network drive apical constriction, Nature 457, 495 (2009).
* Etournay _et al._ [2015] R. Etournay, M. Popović, M. Merkel, A. Nandi, C. Blasse, B. Aigouy, H. Brandl, G. Myers, G. Salbreux, F. Jülicher, _et al._ , Interplay of cell dynamics and epithelial tension during morphogenesis of the drosophila pupal wing, Elife 4, e07090 (2015).
* Streichan _et al._ [2018] S. J. Streichan, M. F. Lefebvre, N. Noll, E. F. Wieschaus, and B. I. Shraiman, Global morphogenetic flow is accurately predicted by the spatial distribution of myosin motors, Elife 7, e27454 (2018).
* Maniou _et al._ [2021] E. Maniou, M. F. Staddon, A. R. Marshall, N. D. Greene, A. J. Copp, S. Banerjee, and G. L. Galea, Hindbrain neuropore tissue geometry determines asymmetric cell-mediated closure dynamics in mouse embryos, Proceedings of the National Academy of Sciences 118, e2023163118 (2021).
* Poujade _et al._ [2007] M. Poujade, E. Grasland-Mongrain, A. Hertzog, J. Jouanneau, P. Chavrier, B. Ladoux, A. Buguin, and P. Silberzan, Collective migration of an epithelial monolayer in response to a model wound, Proceedings of the National Academy of Sciences 104, 15988 (2007).
* Brugués _et al._ [2014] A. Brugués, E. Anon, V. Conte, J. H. Veldhuis, M. Gupta, J. Colombelli, J. J. Muñoz, G. W. Brodland, B. Ladoux, and X. Trepat, Forces driving epithelial wound healing, Nature physics 10, 683 (2014).
* Tetley _et al._ [2019] R. J. Tetley, M. F. Staddon, D. Heller, A. Hoppe, S. Banerjee, and Y. Mao, Tissue fluidity promotes epithelial wound healing, Nature physics 15, 1195 (2019).
* Ajeti _et al._ [2019] V. Ajeti, A. P. Tabatabai, A. J. Fleszar, M. F. Staddon, D. S. Seara, C. Suarez, M. S. Yousafzai, D. Bi, D. R. Kovar, S. Banerjee, _et al._ , Wound healing coordinates actin architectures to regulate mechanical work, Nature physics 15, 696 (2019).
* Friedl and Gilmour [2009] P. Friedl and D. Gilmour, Collective cell migration in morphogenesis, regeneration and cancer, Nature reviews Molecular cell biology 10, 445 (2009).
* Arwert _et al._ [2012] E. N. Arwert, E. Hoste, and F. M. Watt, Epithelial stem cells, wound healing and cancer, Nature Reviews Cancer 12, 170 (2012).
* Salbreux _et al._ [2012] G. Salbreux, G. Charras, and E. Paluch, Actin cortex mechanics and cellular morphogenesis, Trends in cell biology 22, 536 (2012).
* Murrell _et al._ [2015] M. Murrell, P. W. Oakes, M. Lenz, and M. L. Gardel, Forcing cells into shape: the mechanics of actomyosin contractility, Nature reviews Molecular cell biology 16, 486 (2015).
* Ladoux and Mège [2017] B. Ladoux and R.-M. Mège, Mechanobiology of collective cell behaviours, Nature reviews Molecular cell biology 18, 743 (2017).
* Graner and Glazier [1992] F. Graner and J. A. Glazier, Simulation of biological cell sorting using a two-dimensional extended potts model, Physical review letters 69, 2013 (1992).
* Marchetti _et al._ [2013] M. C. Marchetti, J.-F. Joanny, S. Ramaswamy, T. B. Liverpool, J. Prost, M. Rao, and R. A. Simha, Hydrodynamics of soft active matter, Reviews of modern physics 85, 1143 (2013).
* Banerjee and Marchetti [2019] S. Banerjee and M. C. Marchetti, Continuum models of collective cell migration, Cell Migrations: Causes and Functions , 45 (2019).
* Alert and Trepat [2020] R. Alert and X. Trepat, Physical models of collective cell migration, Annual Review of Condensed Matter Physics 11, 77 (2020).
* Alt _et al._ [2017] S. Alt, P. Ganguly, and G. Salbreux, Vertex models: from cell mechanics to tissue morphogenesis, Philosophical Transactions of the Royal Society B: Biological Sciences 372, 20150520 (2017).
* Nagai and Honda [2001] T. Nagai and H. Honda, A dynamic cell model for the formation of epithelial tissues, Philosophical Magazine B 81, 699 (2001).
* Fletcher _et al._ [2014] A. G. Fletcher, M. Osterfield, R. E. Baker, and S. Y. Shvartsman, Vertex models of epithelial morphogenesis, Biophysical journal 106, 2291 (2014).
* Farhadifar _et al._ [2007] R. Farhadifar, J.-C. Röper, B. Aigouy, S. Eaton, and F. Jülicher, The influence of cell mechanics, cell-cell interactions, and proliferation on epithelial packing, Current Biology 17, 2095 (2007).
* Staple _et al._ [2010] D. B. Staple, R. Farhadifar, J.-C. Röper, B. Aigouy, S. Eaton, and F. Jülicher, Mechanics and remodelling of cell packings in epithelia, The European Physical Journal E 33, 117 (2010).
* Bi _et al._ [2016] D. Bi, X. Yang, M. C. Marchetti, and M. L. Manning, Motility-driven glass and jamming transitions in biological tissues, Physical Review X 6, 021011 (2016).
* Hufnagel _et al._ [2007] L. Hufnagel, A. A. Teleman, H. Rouault, S. M. Cohen, and B. I. Shraiman, On the mechanism of wing size determination in fly development, Proceedings of the National Academy of Sciences 104, 3835 (2007).
* Hernandez _et al._ [2022] A. Hernandez, M. F. Staddon, M. J. Bowick, M. C. Marchetti, and M. Moshe, Anomalous elasticity of a cellular tissue vertex model, Physical Review E 105, 064611 (2022).
* Bi _et al._ [2015] D. Bi, J. Lopez, J. M. Schwarz, and M. L. Manning, A density-independent rigidity transition in biological tissues, Nature Physics 11, 1074 (2015).
* Sussman _et al._ [2018] D. M. Sussman, M. Paoluzzi, M. C. Marchetti, and M. L. Manning, Anomalous glassy dynamics in simple models of dense biological tissue, EPL (Europhysics Letters) 121, 36001 (2018).
* Tong _et al._ [2021] S. Tong, N. K. Singh, R. Sknepnek, and A. Kosmrlj, Linear viscoelastic properties of the vertex model for epithelial tissues, arXiv preprint arXiv:2102.11181 (2021).
* Duclut _et al._ [2021] C. Duclut, J. Paijmans, M. M. Inamdar, C. D. Modes, and F. Jülicher, Nonlinear rheology of cellular networks, Cells & development , 203746 (2021).
* Duclut _et al._ [2022] C. Duclut, J. Paijmans, M. M. Inamdar, C. D. Modes, and F. Jülicher, Active t1 transitions in cellular networks, The European Physical Journal E 45, 1 (2022).
* Moshe _et al._ [2018] M. Moshe, M. J. Bowick, and M. C. Marchetti, Geometric frustration and solid-solid transitions in model 2d tissue, Physical review letters 120, 268105 (2018).
* Merkel _et al._ [2019] M. Merkel, K. Baumgarten, B. P. Tighe, and M. L. Manning, A minimal-length approach unifies rigidity in underconstrained materials, Proceedings of the National Academy of Sciences 116, 6560 (2019).
* Murisic _et al._ [2015] N. Murisic, V. Hakim, I. G. Kevrekidis, S. Y. Shvartsman, and B. Audoly, From discrete to continuum models of three-dimensional deformations in epithelial sheets, Biophysical journal 109, 154 (2015).
* Brakke [1992] K. A. Brakke, The surface evolver, Experimental mathematics 1, 141 (1992).
* Osserman [1978] R. Osserman, The isoperimetric inequality, Bulletin of the American Mathematical Society 84, 1182 (1978).
* Huang _et al._ [2021] J. Huang, J. O. Cochran, S. M. Fielding, M. C. Marchetti, and D. Bi, Shear-driven solidification and nonlinear elasticity in epithelial tissues, arXiv preprint arXiv:2109.10374 (2021).
* Farhadifar [2009] R. Farhadifar, Dynamics of cell packing and polar order in developing epithelia (2009).
* Staddon _et al._ [2022] M. F. Staddon, M. P. Murrell, and S. Banerjee, Interplay between substrate rigidity and tissue fluidity regulates cell monolayer spreading, Soft Matter 18, 7877 (2022).
Multiscale Finite Element Methods (MsFEMs) are now well-established finite element type approaches dedicated to multiscale problems. They first compute local, oscillatory, problem-dependent basis functions that generate a suitable discretization space, and next perform a Galerkin approximation of the problem on that space. We investigate here how these approaches can be implemented in a non-intrusive way, in order to facilitate their dissemination within industrial codes or non-academic environments.
We develop an abstract framework that covers a wide variety of MsFEMs for linear second-order partial differential equations. Non-intrusive MsFEM approaches are developed within the full generality of this framework, which may moreover be beneficial to steering software development and improving the theoretical understanding and analysis of MsFEMs.
§ INTRODUCTION
§ THE (INTRUSIVE) MULTISCALE FINITE ELEMENT METHOD
§.§ Discrete variational formulation
Let $d \geq 1$ denote the space dimension of interest and let $\Omega \subset \bbR^d$ be a bounded polytope.
Convexity can be assumed for elliptic regularity results to hold, for which we refer to <cit.>. This technical assumption is not necessary for the algorithmic aspects of the MsFEM that are the main focus of this article.
By way of example, we consider first the diffusion equation with homogeneous Dirichlet boundary conditions. In a second step, from Sec. <ref> onwards, we will also consider more general problems, and we will mention other types of boundary conditions in Sec. <ref>.
More precisely, we focus here on the boundary value problem
\begin{equation}
\left\{
\begin{aligned}
-\operatorname{div}(A^\varepsilon \nabla u^\varepsilon) &= f
& &\text{in } \Omega,
\\
u^\varepsilon &= 0
& &\text{on } \partial \Omega,
\end{aligned}
\right.
\label{eq:diffusion-pde}
\end{equation}
where the diffusion tensor $A^\varepsilon \in L^\infty(\Omega,\ \bbR^{d \times d})$ satisfies the uniform bounds
\begin{gather}
\begin{aligned}
&\forall \, \xi\in\bbR^d,
\quad
m |\xi|^2 \leq \xi \cdot A^\varepsilon(x) \, \xi
\quad \text{a.e.~in } \Omega, \\
\text{and}
\qquad
&\forall \, \xi,\eta \in \bbR^d,
\quad
| \eta \cdot A^\varepsilon(x) \xi | \leq M \, |\xi| \, |\eta|
\quad \text{a.e.~in } \Omega,
\end{aligned}
\label{ass:bounds}
\end{gather}
for some $M \geq m > 0$ independent of $\varepsilon$. The right-hand side $f$ does not vary on the microscopic scale $\varepsilon$. We denote the diffusion tensor with a superscript $\varepsilon$ to keep in mind that $A^\varepsilon$ might be highly oscillatory on a typical length scale of size $\varepsilon$ much smaller than the diameter of $\Omega$ (assumed to be of order 1).
No further structural assumptions on $A^\varepsilon$ are made. In particular, $A^\varepsilon$ need not be the rescaling of a fixed periodic matrix of the form $A^\varepsilon(x) = A(x/\varepsilon)$. We will specialize to this periodic setting in Sec. <ref> only to obtain convergence results, but this assumption is of no relevance for the practical implementation of the MsFEM. Let us also mention that none of the considerations in this article require symmetry of the diffusion tensor. Our development of non-intrusive MsFEMs also generalizes to linear systems of PDEs. The analysis we provide is also expected to extend to e.g. the system of linear elasticity up to some technicalities that we do not consider here.
For simplicity of exposition, we assume that $f\in L^2(\Omega)$ (rather than $f \in H^{-1}(\Omega)$, for which the problem (<ref>) is in fact also well-posed). We do so to avoid unnecessary technicalities; our proposed non-intrusive MsFEM carries over to the more general case. For some convergence results the condition $f \in L^2(\Omega)$ cannot be relaxed, and this is then stated explicitly.
Problem (<ref>) admits a unique solution in the space $H^1_0(\Omega)$. This solution is also characterized by the variational formulation
\begin{equation}
\left\{
\begin{aligned}
&\text{Find } u^\varepsilon \in H^1_0(\Omega) \text{ such that}
\\
&a^{\varepsilon,\dif}(u^\varepsilon,\, v) = F(v)
\quad \text{for all } v \in H^1_0(\Omega),
\end{aligned}
\right.
\label{eq:diffusion-vf}
\end{equation}
where the bilinear form $a^{\varepsilon,\dif}$ and the linear form $F$ are defined, for any $u,\, v \in H^1_0(\Omega)$, by
\begin{equation}
a^{\varepsilon, \dif}(u,\, v) =
\int_\Omega \nabla v \cdot A^\varepsilon \nabla u,
\quad
F(v) = \int_\Omega f v.
\label{eq:diffusion-bilin-form}
\end{equation}
The coercivity hypothesis in (<ref>) ensures that the bilinear form $a^{\varepsilon,\dif}$ is coercive on the space $H^1_0(\Omega)$. Then the Lax-Milgram Theorem <cit.> shows that (<ref>) is indeed well-posed.
The numerical approximation of (<ref>) with a finite element method starts by the introduction of a mesh $\mathcal{T}_H$ for $\Omega$. The subscript $H$ denotes the typical size of the mesh elements.
We assume $\mathcal{T}_H$ to be a simplicial, conformal mesh. For some convergence results, we shall assume quasi-uniformity. These assumptions are standard in finite element analysis. We refer, e.g., to <cit.> for a general exposition and various examples. Again, these regularity properties of the mesh do not have any impact on the implementation of the MsFEM on a given mesh. The regularity plays a role only to obtain convergence results.
A finite element method for (<ref>) is obtained by restricting the equivalent formulation (<ref>) to a finite-dimensional subspace of $H^1_0(\Omega)$, typically consisting of functions that are piecewise polynomial on the mesh $\mathcal{T}_H$. We suppose that we are in the regime where $H$ is larger than or comparable to the microscale $\varepsilon$ and cannot be taken smaller due to computational limitations. In this case, it is well known that a Galerkin approximation of (<ref>) on, say, the standard (conforming) Lagrange $\Pone$ space on $\mathcal{T}_H$ provides only a poor, not to say an incorrect, approximation of $u^\varepsilon$. See <cit.>, for instance, for an explicit example where the $\Pone$ approximation on a coarse mesh fails. At the same time, the use of a finite element method on a fine mesh of size $h \ll \varepsilon$ might be infeasible because of its prohibitive computational cost. To remedy this issue, we next introduce the multiscale finite element method (MsFEM) <cit.>.
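This failure is easy to reproduce in one dimension. The self-contained sketch below is our own toy example (the coefficient $a_\varepsilon(x)=2+\cos(2\pi x/\varepsilon)$ and all parameters are assumptions, not taken from this paper): a coarse $\Pone$ Galerkin solve of $-(a_\varepsilon u')' = 1$ effectively sees the arithmetic mean of the coefficient on each element, whereas the true solution is governed by its harmonic mean, so the nodal error stays of order one no matter the coarse mesh.

```python
import numpy as np

eps, H, h = 0.005, 0.1, 1e-5
xf = np.arange(0.0, 1.0 + h / 2, h)             # fine grid for the reference
a = 2.0 + np.cos(2 * np.pi * xf / eps)          # oscillatory coefficient

def trap(y, x):                                  # trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Reference solution by direct integration: a_eps u' = c - x, u(0) = u(1) = 0.
inv_a = 1.0 / a
c = trap(xf * inv_a, xf) / trap(inv_a, xf)
g = (c - xf) * inv_a
u_ref = np.concatenate(([0.0], np.cumsum(0.5 * h * (g[1:] + g[:-1]))))

# Coarse P1 Galerkin solution; in 1D the exact element stiffness involves the
# *arithmetic* average of a_eps over each coarse element.
N = round(1 / H)
xc = np.linspace(0.0, 1.0, N + 1)
K = np.zeros((N - 1, N - 1))
F = np.full(N - 1, H)                            # exact load of each hat, f = 1
for k in range(N):                               # element k joins nodes k, k+1
    mask = (xf >= xc[k]) & (xf <= xc[k + 1])
    ke = np.mean(a[mask]) / H
    if k >= 1:
        K[k - 1, k - 1] += ke
    if k <= N - 2:
        K[k, k] += ke
    if 1 <= k <= N - 2:
        K[k - 1, k] -= ke
        K[k, k - 1] -= ke
U = np.linalg.solve(K, F)

err = np.max(np.abs(U - np.interp(xc[1:N], xf, u_ref))) / np.max(np.abs(u_ref))
assert 0.08 < err < 0.2   # O(1) relative error, roughly the 2 vs sqrt(3) gap
```

Refining $H$ (while keeping $H \gg \varepsilon$) does not reduce this error, which motivates the multiscale approximation space introduced next.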
§.§ A simple multiscale finite element method
The MsFEM is a Galerkin approximation of (<ref>) for which the approximation space is adapted in order to achieve satisfactory accuracy even on a coarse mesh. The correct choice of approximation space yields a numerical approximation that is much closer to $u^\varepsilon$ than a standard $\Pone$-approximation when $\varepsilon$ is smaller than $H$, and especially when $\varepsilon$ becomes asymptotically small. To begin with, we introduce here the simplest variant of the MsFEM, which originally appeared in <cit.>, before moving on to other MsFEM variants.
Let $x_1,\dots,\, x_{N_0}$ be an enumeration of the interior vertices of the mesh $\mathcal{T}_H$, i.e., the vertices that do not lie on $\partial \Omega$.
We denote by $\phiPone{i}$ the unique piecewise $\Pone$ function such that $\phiPone{i}(x_j) = \delta_{i,j}$ for all $1 \leq j \leq N_0$. (These are the basis functions for the standard $\Pone$ Lagrange finite element.) We define the multiscale basis functions $\phiEps{i}$ (for $1 \leq i \leq N_0$) by
\begin{equation}
\forall \ K \in \mathcal{T}_H,
\quad
\left\{
\begin{aligned}
-\operatorname{div}(A^\varepsilon \nabla \phiEps{i}) &= 0
& &\text{in } K,
\\
\phiEps{i} &= \phiPone{i}
& &\text{on } \partial K.
\end{aligned}
\right.
\label{eq:MsFEM-basis}
\end{equation}
All these problems, on each mesh element $K$, are again well-posed by coercivity of $A^\varepsilon$ and the Lax-Milgram Theorem. The functions $\phiEps{i}$ so defined belong to the global space $H^1_0(\Omega)$ because the local boundary conditions on $\partial K$ imply continuity across all mesh elements $K$. It is also immediately seen that $\phiEps{i}$ is supported by exactly the same mesh elements as $\phiPone{i}$.
On each mesh element $K$, problem (<ref>) defines at most $d+1$ non-trivial basis functions. Let $i_1,\dots,i_{d+1}$ be the indices of the vertices of $K$. It is easily inferred from (<ref>) that
\begin{equation*}
\left. \phiEps{i_{d+1}} \right\vert_K
=
1 - \sum_{j=1}^d \left. \phiEps{i_j}\right\vert_K .
\end{equation*}
Thus, one only has to compute $d$ basis functions by the resolution of a PDE on $K$.
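This identity can be checked numerically in 1D. The sketch below is our own toy setup (coefficient and discretization are assumptions, not the paper's code): we solve $(a_\varepsilon\,\varphi')' = 0$ on an element $K=(0,H)$ with the two possible sets of nodal boundary values, using a fine $\Pone$ discretization, and verify that the two basis functions sum to one.

```python
import numpy as np

H, M, eps = 0.1, 600, 0.004
hf = H / M
# piecewise-constant oscillatory coefficient on the fine cells of K
a = 2.0 + np.cos(2 * np.pi * ((np.arange(M) + 0.5) * hf) / eps)

# Fine P1 stiffness matrix for the interior nodes 1..M-1 of K.
Kmat = np.zeros((M - 1, M - 1))
for j in range(1, M):
    Kmat[j - 1, j - 1] = (a[j - 1] + a[j]) / hf
    if j < M - 1:
        Kmat[j - 1, j] = Kmat[j, j - 1] = -a[j] / hf

# Boundary data enter the right-hand side after eliminating the end nodes.
rhs_right = np.zeros(M - 1); rhs_right[-1] = a[M - 1] / hf   # phi = 1 at x = H
rhs_left = np.zeros(M - 1);  rhs_left[0] = a[0] / hf         # phi = 1 at x = 0
phi_right = np.concatenate(([0.0], np.linalg.solve(Kmat, rhs_right), [1.0]))
phi_left = np.concatenate(([1.0], np.linalg.solve(Kmat, rhs_left), [0.0]))

# Partition of unity on K: the basis functions sum to one at every fine node.
assert np.max(np.abs(phi_left + phi_right - 1.0)) < 1e-8
```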
The multiscale approximation space is defined as $V_{H,0}^\varepsilon = \operatorname{span}\{ \phiEps{i} \mid 1 \leq i \leq N_0 \}$. This is a finite-dimensional space of the same dimension as the one used for a $\Pone$ Lagrange finite element approximation on the mesh $\mathcal{T}_H$. The MsFEM consists in computing the approximation $u^\varepsilon_H \in V_{H,0}^\varepsilon$ defined by the problem
\begin{equation}
\forall \, v_H^\varepsilon \in V_{H,0}^\varepsilon,
\qquad
a^{\varepsilon,\dif} \left(u^\varepsilon_H,\, v_H^\varepsilon \right) = F \left(v_H^\varepsilon \right).
\label{eq:diffusion-MsFEM}
\end{equation}
Since $V_{H,0}^\varepsilon$ is a subspace of $H^1_0(\Omega)$, the bilinear form $a^{\varepsilon,\dif}$ is coercive on $V_{H,0}^\varepsilon$ and the discrete problem (<ref>) is again well-posed by virtue of the Lax-Milgram Theorem.
The computation of the multiscale basis functions $\phiEps{i}$ is called the offline stage of the MsFEM and only has to be carried out once if (<ref>) has to be solved multiple times for different right-hand sides. Also note that all problems (<ref>) are independent of each other, and can thus be solved in parallel. In practice, the $\phiEps{i}$ are approximated numerically on a fine mesh of each $K\in\mathcal{T}_H$, of mesh size $h \leq \varepsilon$, that resolves the oscillations of $A^\varepsilon$. We omit these details here because they are of no importance for the non-intrusive strategy that we shall propose later in this article.
The resolution of the global problem (<ref>) is called the online stage. The computational cost for this problem is the same as for a standard $\Pone$ approximation on the same mesh.
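The offline/online split can be sketched in 1D, where the local problems are solvable in closed form: on each coarse element the multiscale basis function is an antiderivative of $1/a_\varepsilon$, so the element stiffness reduces to the harmonic mean of $a_\varepsilon$ over the element, divided by $H$. The example below is our own illustration (the coefficient $a_\varepsilon(x)=2+\cos(2\pi x/\varepsilon)$ and all parameters are assumptions, not from the paper); its harmonic mean is $\sqrt{3}$, and the MsFEM nodal values match the homogenized solution.

```python
import numpy as np

eps, H, h = 0.005, 0.1, 1e-5
N = round(1 / H)
xc = np.linspace(0.0, 1.0, N + 1)

def a_eps(x):
    return 2.0 + np.cos(2 * np.pi * x / eps)

# --- offline stage: one independent local solve per coarse element ---
ke = np.empty(N)
for k in range(N):
    xf = np.arange(xc[k], xc[k + 1] + h / 2, h)          # fine grid on K
    inv_a = 1.0 / a_eps(xf)
    I = np.sum(0.5 * (inv_a[1:] + inv_a[:-1]) * np.diff(xf))  # int_K 1/a_eps
    ke[k] = 1.0 / I            # element stiffness = (harmonic mean over K) / H

# --- online stage: coarse linear system, same cost as a P1 method ---
K = np.zeros((N - 1, N - 1))
F = np.full(N - 1, H)          # load for f = 1; int of phi_i^eps is close to H
for k in range(N):             # element k joins nodes k and k+1
    if k >= 1:
        K[k - 1, k - 1] += ke[k]
    if k <= N - 2:
        K[k, k] += ke[k]
    if 1 <= k <= N - 2:
        K[k - 1, k] -= ke[k]
        K[k, k - 1] -= ke[k]
U = np.linalg.solve(K, F)

# Nodal values agree with the homogenized solution x(1-x)/(2*sqrt(3)).
u_star = xc[1:N] * (1.0 - xc[1:N]) / (2.0 * np.sqrt(3.0))
assert np.max(np.abs(U - u_star)) < 1e-3
```

Unlike the coarse $\Pone$ approximation, the MsFEM recovers the correct coarse-scale behavior on the same mesh, and the offline loop over elements is trivially parallel.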
A further discussion of the practical implementation of the MsFEM is provided in Sec. <ref>. This discussion partially reproduces some elements of <cit.>. We include it here to clarify and motivate the developments in the sequel.
§.§ Intrusive workflow
The practical resolution of the global problem (<ref>) consists in the construction and resolution of the following linear system:
\begin{equation}
\mathds{A}^\varepsilon U^\varepsilon = \mathds{F}^\varepsilon,
\label{eq:linear-system-msfem}
\end{equation}
where
\begin{equation}
\forall \ 1 \leq i,\,j \leq N_0,
\quad
\mathds{A}^\varepsilon_{j,i} = a^{\varepsilon,\dif} \left( \phiEps{i},\, \phiEps{j} \right),
\quad
\mathds{F}^\varepsilon_j = F \left( \phiEps{j} \right),
\label{eq:linear-system-msbasis}
\end{equation}
and we recall that $N_0$ denotes the number of interior vertices of $\mathcal{T}_H$.
The MsFEM approximation $u^\varepsilon_H$ is then given by
\begin{equation*}
u^\varepsilon_H = \sum_{i=1}^{N_0} U_i^\varepsilon \phiEps{i}.
\end{equation*}
The MsFEM can then be written (as it is traditionally presented) as in Algorithm <ref>. We use the notation $\displaystyle a^{\varepsilon,\dif}_K(u,v) = \int_K \nabla v \cdot A^\varepsilon \nabla u$ for all $u,v \in H^1(K)$ and we write $\displaystyle F_K(v) = \int_K fv$ for any $v \in L^2(K)$.
MsFEM approach for problem (<ref>) (see comments in the text)
1: Construct a mesh $\mathcal{T}_H$ of $\Omega$; denote by $N_0$ the number of internal vertices and by $\mathcal{N}(n,K)$ the global index of the vertex of $K \in \mathcal{T}_H$ that has local index $1 \leq n \leq d+1$ in $K$
2: Set $\mathds{A}^\varepsilon \coloneqq 0$ and $\mathds{F}^\varepsilon \coloneqq 0$
3: for all $K \in \mathcal{T}_H$ do
4:   for $1 \leq n \leq d+1$ do
5:     Set $i \coloneqq \mathcal{N}(n,K)$
6:     Solve for $\left. \phiEps{i} \right\vert_K$ in (<ref>)
7:   for $1 \leq l \leq d+1$ do
8:     Set $j \coloneqq \mathcal{N}(l,K)$
9:     for $1 \leq n \leq d+1$ do
10:      Set $i \coloneqq \mathcal{N}(n,K)$ and $\mathds{A}^\varepsilon_{j,i} \pluseq a_K^{\varepsilon,\dif}(\phiEps{i},\, \phiEps{j})$
11:      Set $\mathds{F}^\varepsilon_{j} \pluseq F_K(\phiEps{j})$
12: Solve the linear system $\mathds{A}^\varepsilon U^\varepsilon = \mathds{F}^\varepsilon$
13: Obtain the MsFEM approximation $u^\varepsilon_H = \sum\limits_{i=1}^{N_0} U^\varepsilon_i \phiEps{i}$
Lines <ref>-<ref> of Algorithm <ref> (resp. <ref>-<ref>) constitute the offline (resp. online) stage of the MsFEM. Note that the computation of the stiffness matrix in line <ref> only depends on the multiscale basis functions (and not on the right-hand side $f$) and can therefore be carried out once and for all in the offline stage. Also note that, for an efficient computation of the $\phiEps{i}$ in line <ref>, one should apply Rem. <ref>. Only the online stage is to be repeated when problem (<ref>) is to be solved multiple times for various right-hand sides $f$.
Implementing Algorithm <ref> in an industrial code is challenging. Indeed, the practical implementation of any finite element method relies on (i) the construction of a mesh, (ii) the construction of the linear system associated to the discrete variational formulation and (iii) the resolution of the linear system. An efficient implementation of the second step heavily relies on the choice of the discretization space.
Regarding the construction of the linear system (performed in line <ref> of Algorithm <ref>), it is by no means obvious to adapt existing finite element codes based on generic approximation spaces (such as the spaces $V_H^L$ and $V_H^{CR}$ in Def. <ref>) to a different, problem-dependent choice of space such as $V_H^\varepsilon$. No analytic expressions for the basis functions $\phiEps{i}$ are available (and thus a fine mesh should be used to approximate them), the computation of $a_K^\varepsilon(\phiEps{i},\phiEps{j})$ should be performed by quadrature rules on the fine mesh because the integrands are highly oscillatory, one should have at hand the correspondence between element and vertex indices of the coarse mesh ($\mathcal{N}(n,K)$ in Algorithm <ref>), the assembly of the global stiffness matrix $\{ \mathds{A}^\varepsilon_{j,i} \}_{1 \leq i,j \leq N_0}$ should be executed by a dedicated new piece of software, etc. To alleviate these obstacles, we introduce below a way of implementing the MsFEM that capitalizes on an existing code for solving (<ref>) by a $\Pone$ approximation on $\mathcal{T}_H$ in the case of slowly varying diffusion coefficients. The three central identities for our approach that we aim to generalize to other MsFEMs in this article are framed in distinctive boxes.
§.§ Effective problem on the macroscopic scale
Let us consider the construction of the stiffness matrix of the MsFEM in more detail. The stiffness matrix defined in (<ref>) requires the computation of the quantities
\begin{equation}
\mathds{A}^\varepsilon_{j,i}
\sum_{K\in\mathcal{T}_H} \int_K \nabla \phiEps{j} \cdot A^\varepsilon \nabla \phiEps{i},
\label{eq:stifness-msfem-msbasis}
\end{equation}
for all $1 \leq i,j \leq N_0$.
Following <cit.>, we rewrite the multiscale basis functions as
\begin{equation}
\tcboxmath{
\forall \, K \in \mathcal{T}_H,
\qquad
\phiEps{i}
=
\phiPone{i} + \sum_{\alpha=1}^d
\left.\left(\partial_\alpha \phiPone{i}\right) \right\vert_K \VK{\alpha}
\quad
\text{in } K,
}
\label{eq:diffusion-MsFEM-Vxy-grad}
\end{equation}
for all $1 \leq i \leq N_0$, where, for each mesh element $K$, we define the numerical corrector $\VK{\alpha} \in H^1_0(\Omega)$ ($1\leq\alpha\leq d$) as the function supported by $K$ that is the unique solution to the local problem
\begin{equation}
\tcboxmath{
\left\{
\begin{aligned}
-\operatorname{div}(A^\varepsilon \nabla \VK{\alpha}) &= \operatorname{div}(A^\varepsilon e_\alpha)
& &\text{in } K,
\\
\VK{\alpha} &= 0
& &\text{on } \partial K.
\end{aligned}
\right.
}
\label{eq:diffusion-MsFEM-correctors}
\end{equation}
Here, $e_\alpha$ denotes the $\alpha$-th canonical unit vector of $\bbR^d$.
The expansion (<ref>) is obtained upon rewriting (<ref>) as a PDE for $\phiEps{i}-\phiPone{i}$, and then using linearity of the PDE and the fact that $\nabla\phiPone{i}$ is constant in $K$ to show that
$ \displaystyle
\sum_{\alpha=1}^d \left.\left(\partial_\alpha \phiPone{i}\right) \right\vert_K \VK{\alpha}
$ is indeed the unique solution to this PDE.
Inserting (<ref>) for the trial and test functions in (<ref>) and again exploiting the fact that all $\phiPone{i}$ have piecewise constant gradients, we obtain
\begin{align*}
\mathds{A}^\varepsilon_{j,i}
&=
\sum_{K \in \mathcal{T}_H}
\sum_{\alpha,\beta=1}^d
\left.\left(\partial_\beta \phiPone{j}\right)\right\vert_K \left(
\int_K \left( e_\beta + \nabla \VK{\beta} \right) \cdot A^\varepsilon \left( e_\alpha + \nabla \VK{\alpha} \right)
\right)
\left.\left(\partial_\alpha \phiPone{i}\right)\right\vert_K
\\
&=
\sum_{K \in \mathcal{T}_H}
\sum_{\alpha,\beta=1}^d
\left.\left(\partial_\beta \phiPone{j}\right)\right\vert_K
\, a_K^{\varepsilon,\dif} \left(x^\alpha + \VK{\alpha}, x^\beta + \VK{\beta}\right) \,
\left.\left(\partial_\alpha \phiPone{i}\right)\right\vert_K.
\end{align*}
Next we define the piecewise constant effective diffusion tensor $\overline{A} \in \mathbb{P}_0(\mathcal{T}_H,\,\bbR^{d\times d})$ by
\begin{equation}
\left. \overline{A}_{\beta,\alpha} \right\vert_K
=
\frac{1}{|K|} a^{\varepsilon,\dif}_K \left(x^\alpha + \VK{\alpha},\, x^\beta + \VK{\beta} \right)
\quad \text{for each } K \in \mathcal{T}_H \text{ and each } 1 \leq \alpha,\beta \leq d,
\label{eq:diffusion-eff-msfem-gal}
\end{equation}
where $|K|$ denotes the measure of the mesh element $K$.
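In 1D this definition can be checked against the classical homogenization result: the effective coefficient on an element equals the harmonic mean of $A^\varepsilon$ over that element. The sketch below is our own toy computation (coefficient and discretization are assumptions, not the paper's code): we discretize the corrector problem with fine $\Pone$ elements, taking $a_\varepsilon$ piecewise constant on the fine cells, and evaluate the effective coefficient from the formula above.

```python
import numpy as np

H, M = 0.1, 800
hf = H / M
xm = (np.arange(M) + 0.5) * hf                  # fine-cell midpoints on K
a = 2.0 + np.cos(2 * np.pi * xm / 0.005)        # oscillatory coefficient on K

# Fine P1 system for the corrector: -(a (1 + V'))' = 0 on K, V = 0 at the
# endpoints; weak form int_K a V' w' = -int_K a w' for all interior hats w.
Kmat = np.zeros((M - 1, M - 1))
rhs = np.empty(M - 1)
for j in range(1, M):
    Kmat[j - 1, j - 1] = (a[j - 1] + a[j]) / hf
    if j < M - 1:
        Kmat[j - 1, j] = Kmat[j, j - 1] = -a[j] / hf
    rhs[j - 1] = a[j] - a[j - 1]
V = np.concatenate(([0.0], np.linalg.solve(Kmat, rhs), [0.0]))
dV = np.diff(V) / hf                            # V' on each fine cell

# Effective coefficient: (1/|K|) int_K (1 + V') a (1 + V'), compared with the
# harmonic mean of a over K.
A_bar = np.sum(hf * a * (1.0 + dV) ** 2) / H
harmonic = 1.0 / np.mean(1.0 / a)
assert abs(A_bar - harmonic) < 1e-8 * harmonic
```

With piecewise-constant data the discrete flux $a(1+V')$ is constant, so the agreement holds to solver precision; in higher dimensions the analogous computation yields the full effective tensor, element by element.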
Then (<ref>) can be written as
\begin{equation}
\tcboxmath{
\mathds{A}^\varepsilon_{j,i} = \int_\Omega \nabla \phiPone{j} \cdot \overline{A} \, \nabla \phiPone{i}.
}
\label{eq:stiffness-msfem-p1basis}
\end{equation}
Motivated by (<ref>), we introduce the coarse-scale problem
\begin{equation}
\left\{
\begin{aligned}
-\operatorname{div}\left(\,\overline{A}\, \nabla u\right) &= f
& &\text{in } \Omega,
\\
u &= 0
& &\text{on } \partial \Omega,
\end{aligned}
\right.
\label{eq:diffusion-pde-effective}
\end{equation}
and its Galerkin discretization with $\Pone$ Lagrange elements: with $V_{H,0} = \operatorname{span} \left\{\phiPone{i} \mid 1 \leq i \leq N_0 \right\}$ (note that the definition of $V_{H,0}$ will be generalized in Def. <ref>), find $u_H \in V_{H,0}$ such that
\begin{equation}
\forall \, v_H \in V_{H,0}, \quad \overline{a}^{\dif}(u_H,v_H) = F(v_H),
\label{eq:diffusion-FEM-effective}
\end{equation}
where the linear form $F$ is defined in (<ref>) and the bilinear form $\overline{a}^{\dif}$ is defined as
\begin{equation*}
\forall \, u, v \in H^1_0(\Omega), \quad \overline{a}^{\dif}(u,v) = \int_\Omega \nabla v \cdot \overline{A} \, \nabla u.
\end{equation*}
Problem (<ref>) can equivalently be written as
\begin{equation}
\mathds{A}^{\Pone} \, U^{\Pone} = \mathds{F}^{\Pone},
\label{eq:linear-system-p1}
\end{equation}
where
\begin{equation}
\forall \, 1 \leq i, j \leq N_0,
\quad
\mathds{A}^{\Pone}_{j,i} = \overline{a}^{\dif} \left(\phiPone{i},\phiPone{j} \right),
\quad
\mathds{F}^{\Pone}_j = F \left( \phiPone{j} \right).
\label{eq:linear-system-effective}
\end{equation}
Comparing the expressions (<ref>) and (<ref>), we deduce that $\mathds{A}^\varepsilon = \mathds{A}^\Pone$. In other words:
The stiffness matrix of the MsFEM problem (<ref>) is identical to the stiffness matrix of the $\Pone$ problem (<ref>).
This lemma immediately implies that the $\Pone$ problem (<ref>) is well-posed, since the MsFEM (<ref>) itself is well-posed.
Let us point out that problems (<ref>) and (<ref>) are defined entirely in terms of quantities that vary only on the macroscopic scale $H$. The finite element problem (<ref>) can thus be solved using a legacy code that is designed for standard FEMs.
Lemma <ref> then suggests including the $\Pone$ approximation (<ref>) of the effective, coarse-scale problem (<ref>) as an integral part of the MsFEM approach. We do so in Algorithm <ref> below.
The right-hand side vector $\mathds{F}^\varepsilon$ in (<ref>) is, in general, different from $\mathds{F}^\Pone$ in (<ref>). Indeed, we integrate the product of $f$ with highly oscillatory basis functions in the former problem and with $\Pone$ basis functions in the latter. The solutions $U^\varepsilon$ and $U^\Pone$ to (<ref>) and (<ref>), respectively, are thus different a priori.
§.§ Non-intrusive workflow
We propose the following non-intrusive MsFEM variant:
\begin{equation}
\text{Set } u_H^\varepsilon = u_H + \sum_{K\in\mathcal{T}_H} \sum_{\alpha=1}^d (\partial_\alpha u_H)\vert_K \, \VK{\alpha} \in V_{H,0}^\varepsilon \text{ where } u_H \in V_{H,0} \text{ is the unique solution to~\eqref{eq:diffusion-FEM-effective}}.
\label{eq:diffusion-MsFEM-noni}
\end{equation}
The MsFEM approximation $u_H^\varepsilon$ is well-defined, since we have seen above that problem (<ref>) is well-posed.
Note that, to alleviate notation, the symbol $u^\varepsilon_H$ will be used here and in the sequel for the solution to various MsFEM variants. The exact MsFEM will be specified by the context, and we will use distinct notation for different MsFEM variants when required for clarity.
Computing $u^\varepsilon_H$ from $u_H$ as written above is, however, not the most efficient approach. The evaluation of $u_H(x)$ may require the determination of the degrees of freedom associated to the simplex $K$ to which $x$ belongs. This demands the use of the internal mechanisms of the legacy code that is used to compute $u_H$. The use of the legacy code can be avoided by expanding $u_H$ as follows. For any affine function $\varphi$ on $K$, we have
\begin{equation}
\varphi(x) = \varphi \left(x_{c,K}\right) + \sum_{\alpha=1}^d \partial_\alpha \varphi \cdot \left( x^\alpha - x^\alpha_{c,K} \right),
\label{eq:P1-expansion}
\end{equation}
where $x^\alpha$ denotes the function that to a point $x\in\Omega$ associates its $\alpha$-th coordinate, and $x_{c,K}=(x^1_{c,K},\dots,x^d_{c,K})$ is the centroid of $K$.
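The expansion around the centroid can be sanity-checked numerically for an arbitrary affine function; the triangle and the coefficients below are of course arbitrary, illustrative choices.

```python
import numpy as np

# An arbitrary affine function phi(x) = g . x + c on R^2
g, c = np.array([1.5, -0.5]), 2.0
phi = lambda x: float(g @ np.asarray(x) + c)

# An arbitrary triangle K and its centroid x_{c,K}
K = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x_c = K.mean(axis=0)

# phi(x) = phi(x_c) + sum_alpha (d_alpha phi) (x^alpha - x_c^alpha);
# for an affine function, the gradient is the constant vector g
def expansion(x):
    return phi(x_c) + float(g @ (np.asarray(x) - x_c))
```

The identity holds exactly (up to roundoff) at any point, not only inside $K$, since both sides are the same affine function.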
If one uses the legacy code to store the values of $u_H(x_{c,K})$ and $\partial_\alpha u_H$ element by element at the end of the online stage, then $u^\varepsilon_H$ defined in (<ref>) can be computed element by element according to
\begin{equation}
\forall \, K \in \mathcal{T}_H,
\quad
u^\varepsilon_H = u_H(x_{c,K}) + \sum\limits_{\alpha=1}^d \left.\left(\partial_\alpha u_H\right)\right\vert_K \left( x^\alpha - x^\alpha_{c,K} + \VK{\alpha}(x) \right)
\quad
\text{on } K,
\label{eq:msfem-diff-noni-post}
\end{equation}
without using the legacy code.
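The element-wise post-processing step can be sketched in a few lines of Python. The function below evaluates the reconstruction at one point of a single coarse element; all names are illustrative, and the correctors are assumed to be available as a callable.

```python
import numpy as np

def u_eps_on_K(x, u_at_centroid, grad_u_K, x_cK, V_K):
    """Evaluate the reconstructed MsFEM approximation at a point x of element K.

    u_at_centroid : value u_H(x_{c,K}) stored by the legacy code
    grad_u_K      : (d,) constant gradient of u_H on K, also stored
    x_cK          : (d,) centroid of K
    V_K           : callable x -> (d,) corrector values (V_K^1(x), ..., V_K^d(x))
    """
    x = np.asarray(x, dtype=float)
    return u_at_centroid + grad_u_K @ (x - x_cK + V_K(x))
```

With vanishing correctors the formula reduces, as it should, to the affine function $u_H$ itself.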
The above observations culminate in the computational approach presented in Algorithm <ref>. We can distinguish
* the offline stage consisting of lines <ref>-<ref>,
* the online stage being executed entirely in line <ref>,
* a post-processing step in line <ref>.
Non-intrusive MsFEM approach for problem (<ref>)
Let $\mathcal{T}_H$ be the mesh used by the legacy code
For each $K \in \mathcal{T}_H$:
For each $1 \leq \alpha \leq d$:
Solve for $\VK{\alpha}$ defined by (<ref>)
Compute $\overline{A}\vert_K$ defined by (<ref>)
Use the legacy code to solve for $u_H$ defined by (<ref>) and to save $\left\{u_H(x_{c,K})\right\}_{K\in\mathcal{T}_H}$ and $\left\{ (\partial_\alpha u_H) \vert_K \right\}_{K\in\mathcal{T}_H, \, 1 \leq \alpha \leq d}$
Obtain the MsFEM approximation $u^\varepsilon_H$ by (<ref>)
The advantage of Algorithm <ref> over the classical MsFEM Algorithm <ref> is that the global problem of the online stage can be constructed and solved entirely by a pre-existing $\Pone$ PDE solver. The only requirements on the legacy code are the ability to pass piecewise constant diffusion coefficients to the solver and the existence of a procedure to store the value of the $\Pone$ solution and its gradient at the centroids of the mesh.
An additional advantage in the online stage is that the construction of the right-hand side $\mathds{F}^\Pone$ (see (<ref>)) for the global problem only requires a numerical quadrature on the coarse mesh. It is therefore cheaper than the construction of $\mathds{F}^\varepsilon$ (see (<ref>)), which involves the multiscale basis functions and requires numerical quadratures at the microscale.
The part of the offline stage that manipulates fine meshes (lines <ref>-<ref>) and the post-processing step can be developed independently of the legacy code used in line <ref>.
The requirement for these fine-scale solvers is that they have access to the coarse mesh $\mathcal{T}_H$ used by the global solver.
Note also that the local problem (<ref>) is only indexed by the coarse mesh element $K$, in contrast to the local problem (<ref>) that is indexed both by the coarse mesh element $K$ and the vertex index $i$. For the latter problems, one has to know, for each element $K$, the global index that corresponds to the vertices of $K$, a piece of information that may be difficult to access in a legacy code. For the problems (<ref>), this correspondence is not needed to compute $\overline{A}$, nor for the computation of the fine-scale solution $u^\varepsilon_H$ in (<ref>), both of which are entirely defined element-wise.
§.§ Interpretation of the non-intrusive MsFEM
We emphasized above that the right-hand sides of the linear system for the MsFEM in (<ref>) and the linear system solved for the non-intrusive MsFEM in (<ref>), are different in general. This motivates the comparison of the non-intrusive MsFEM approach (<ref>) to the following Petrov-Galerkin MsFEM:
\begin{equation}
\text{Find } u^\varepsilon_H \in V_{H,0}^\varepsilon \text{ such that }
a^{\varepsilon,\dif} \left(u_H^\varepsilon, \phiPone{j} \right)
=
\label{eq:diffusion-MsFEM-testP1}
\end{equation}
based on the trial space $V_{H,0}^\varepsilon$ and the test space $V_{H,0}$ for both the bilinear and the linear form. The following result was shown in <cit.>.
The non-intrusive MsFEM variant (<ref>) coincides with the Petrov-Galerkin MsFEM (<ref>).
The non-intrusive MsFEM approach is generalized in Sec. <ref> after the development of a general framework to define a wide variety of MsFEMs in Sec. <ref>. Lemma <ref> does not generalize to the full framework. We will see the conditions under which the non-intrusive approach leads to a Petrov-Galerkin MsFEM in Lemma <ref>.
§.§ Relation to homogenization theory
We highlight in this section the fact that many ingredients of our non-intrusive MsFEM approach are reminiscent of standard quantities of homogenization theory, or the theory of $H$-convergence, which studies the limit of a sequence of solutions $u^\varepsilon$ to a PDE as $\varepsilon$ tends to $0$. This relation to $H$-convergence provides an interesting interpretation of the effective tensor $\overline{A}$ introduced in (<ref>).
Let us suppose in this section (and in this section only, except for Sec. <ref>) that $A^\varepsilon(x) = A^\mathsf{per}(x/\varepsilon)$ for some bounded, $\bbZ^d$-periodic matrix $A^{\mathsf{per}}$ satisfying the coercivity property in (<ref>). In this case, the sequence of matrices $A^\varepsilon$ has a homogenized limit that is explicitly known. (An explicit characterization of the limit is not available for $H$-convergence in general.) We summarize the main results below. See, for instance, <cit.> or <cit.> for details on periodic homogenization.
Due to the $H$-convergence of $A^\varepsilon$, the solutions $u^\varepsilon$ to (<ref>) converge to a limit function $u^\star$ (weakly in $H^1(\Omega)$, strongly in $L^2(\Omega)$) as $\varepsilon \to 0$. The homogenized limit $u^\star$ is the solution to the homogenized equation (<ref>) below.
Let $Q$ denote the unit cube of $\bbR^d$.
We introduce the corrector functions $w_1,\dots,\,w_d \in H^1_{per}(Q)$, solutions to
\begin{equation}
\left\{
\begin{IEEEeqnarraybox}[][c]{uts?s}
\IEEEstrut
$-{\operatorname{div}(A^\mathsf{per} \nabla w_\alpha)} = \operatorname{div}\left(A^\mathsf{per} e_\alpha\right)$
&in $\bbR^d$,
\\
$w_\alpha$ is $\bbZ^d$-periodic,
\IEEEstrut
\IEEEeqnarraynumspace %preserve the same alignment as obtained in an equation environment
\end{IEEEeqnarraybox}
\right.
\label{eq:diffusion-correctors}
\end{equation}
which uniquely defines $w_\alpha$ up to an irrelevant additive constant. The entries of the (constant) homogenized diffusion tensor $A^\star \in \bbR^{d\times d}$ are given by
\begin{equation}
A^\star_{\beta,\alpha} = \int_Q (e_\beta + \nabla w_\beta) \cdot A^\mathsf{per}(e_\alpha + \nabla w_\alpha),
\qquad
1 \leq \alpha, \, \beta \leq d.
\label{eq:diffusion-hom-coef}
\end{equation}
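In dimension one these formulas are explicit, which gives a convenient consistency check: the corrector equation forces the flux $A^\mathsf{per}(y)\,(1 + w'(y))$ to be a constant, which turns out to be the harmonic mean of $A^\mathsf{per}$, and the cell formula above then reduces $A^\star$ to that harmonic mean. The sketch below verifies this for the illustrative choice $A^\mathsf{per}(y) = 2 + \cos(2\pi y)$, whose harmonic mean is $\sqrt{3}$.

```python
import numpy as np

# Illustrative 1-periodic coefficient on the unit cell Q = (0, 1)
A_per = lambda y: 2.0 + np.cos(2.0 * np.pi * y)

# In 1D, -(A_per (1 + w'))' = 0 forces A_per(y) (1 + w'(y)) = const = Astar,
# and the cell formula collapses to the harmonic mean of A_per.
y = (np.arange(4096) + 0.5) / 4096.0      # midpoint rule on the unit cell
Astar = 1.0 / np.mean(1.0 / A_per(y))

# The corrector derivative, for reference: w'(y) = Astar / A_per(y) - 1
w_prime = Astar / A_per(y) - 1.0
```

The midpoint rule converges spectrally for smooth periodic integrands, so the computed value agrees with $\sqrt{3}$ to machine precision here.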
The homogenized limit $u^\star$ of $u^\varepsilon$ is the unique solution in $H^1_0(\Omega)$ to the boundary value problem
\begin{equation}
\left\{
\begin{IEEEeqnarraybox}[][c]{uts?s}
\IEEEstrut
$-{\operatorname{div}(A^\star \nabla u^\star)} = f$
&in $\Omega$,
\\
$u^\star = 0$
&on $\partial \Omega$.
\IEEEstrut
\IEEEeqnarraynumspace %preserve the same alignment as obtained in an equation environment
\end{IEEEeqnarraybox}
\right.
\label{eq:diffusion-hom-pde}
\end{equation}
The first-order two-scale expansion, a truncated reconstruction of $u^\varepsilon$ in terms of $u^\star$, takes the form
\begin{equation}
u^{\varepsilon,1}(x) =
u^\star(x) +
\varepsilon \sum_{\alpha=1}^d \partial_\alpha u^\star(x) \, w_\alpha \left(\frac{x}{\varepsilon}\right).
\label{eq:expansion-1-hom}
\end{equation}
Under suitable regularity assumptions, the difference $u^\varepsilon - u^{\varepsilon,1}$ converges to $0$ strongly in $H^1(\Omega)$ as $\varepsilon\to0$. This property will be used for the convergence results in Sec. <ref>.
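These convergence statements are easy to illustrate in dimension one, where $u^\varepsilon$ is available in closed form by two quadratures. The sketch below solves $-(A^\mathsf{per}(x/\varepsilon)\,(u^\varepsilon)')' = 1$ on $(0,1)$ with homogeneous Dirichlet conditions and compares $u^\varepsilon$ with the homogenized solution $u^\star(x) = x(1-x)/(2A^\star)$, using the fact that in 1D the homogenized coefficient of $2+\cos(2\pi y)$ is its harmonic mean $\sqrt{3}$. The coefficient, right-hand side and tolerance are illustrative choices.

```python
import numpy as np

def cumtrapz(f, x):
    """Cumulative trapezoidal integral of samples f over the grid x."""
    dx = np.diff(x)
    return np.concatenate(([0.0], np.cumsum(0.5 * dx * (f[:-1] + f[1:]))))

eps = 0.02                                   # 50 exact periods in (0, 1)
x = np.linspace(0.0, 1.0, 200001)
a = 2.0 + np.cos(2.0 * np.pi * x / eps)      # A_per(x / eps)

# Integrating -(a u')' = 1 twice: a u' = C - x, hence
# u_eps(x) = C * I1(x) - I2(x) with I1 = int 1/a, I2 = int t/a,
# and C fixed by the boundary condition u_eps(1) = 0.
I1 = cumtrapz(1.0 / a, x)
I2 = cumtrapz(x / a, x)
C = I2[-1] / I1[-1]
u_eps = C * I1 - I2

# Homogenized solution with Astar = sqrt(3) (harmonic mean of 2 + cos)
Astar = np.sqrt(3.0)
u_star = x * (1.0 - x) / (2.0 * Astar)

rel_err = np.max(np.abs(u_eps - u_star)) / np.max(u_star)
```

With $\varepsilon = 0.02$ the relative sup-norm distance between $u^\varepsilon$ and $u^\star$ is on the order of a percent, consistent with the $O(\varepsilon)$ behaviour expected in this smooth periodic setting.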
In the periodic setting, the expansion (<ref>) can be used to construct a numerical approximation of $u^\varepsilon$ without the need for any computations at the fine scale. This approximation is presumably valid only in the regime of very small $\varepsilon$ and deteriorates as $\varepsilon$ grows. Moreover, in more general settings, the corrector functions are neither local nor explicit: their definition involves a PDE posed on the whole domain $\Omega$ that depends on an effective tensor which is itself defined in terms of the corrector functions. Details can be found, e.g., in <cit.>. This prevents $H$-convergence theory from being directly applicable to the numerical approximation of $u^\varepsilon$.
Numerical homogenization techniques, which draw their inspiration from the various elements above, offer an alternative approximation of $u^\varepsilon$ that can be applied in much more general contexts. We can see the similarities between the corrector functions $w_\alpha$ in (<ref>) and the numerical correctors $\VK{\alpha}$ in (<ref>). Note that the $\VK{\alpha}$ solve problems similar to (<ref>), but posed at the microscale on each mesh element $K$. Similarly, we note the resemblance between the reconstruction (<ref>) and the definition of $u_H^\varepsilon$ in (<ref>), and between the homogenized coefficient $A^\star$ defined in (<ref>) and the effective macroscopic coefficient $\overline{A}$ from (<ref>). However, contrary to $A^\star$, the MsFEM quantity $\overline{A}$ has to be computed on an element-by-element basis, and it is not necessarily constant throughout $\Omega$.
Finally, the MsFEM analogue of the homogenized problem (<ref>) is the resolution of the effective macroscale problem (<ref>).
A very particular setting, although academic in nature and only useful for pedagogical purposes, actually leads to an MsFEM approximation that is exactly equivalent to a discretization of the periodic homogenization setting. Consider (<ref>) in 2D posed on the unit square. Let us consider a mesh consisting of squares that are perfectly aligned with the periodicity of $A^\varepsilon$. We solve the corrector problems (<ref>) on all square mesh elements with periodic boundary conditions and subsequently compute the effective diffusion tensor $\overline{A}$ according to (<ref>).
In this case, the problems for the numerical correctors all reduce to (<ref>) and $\overline{A}$ is constant and equal to the homogenized coefficient $A^\star$ as defined by (<ref>). A $\mathbb{Q}_1$ discretization of the effective problem (<ref>) thus constitutes a non-intrusive MsFEM that is equivalent to the $\mathbb{Q}_1$ approximation of the homogenized equation (<ref>).
§ WHY DEVELOP A GENERAL FRAMEWORK?
In the sequel we develop a general framework for a wide variety of MsFEMs in an abstract setting. We motivate here why this general framework for MsFEMs is useful.
§.§ Local boundary conditions
First, let us explain why various different MsFEMs have been proposed in the literature. One reason is that equations other than (<ref>) (e.g., advection-diffusion equations) give rise to different choices of the local problem (<ref>), depending on which terms of the global PDE are included (see, for instance, <cit.>).
The other reason is that, even for the pure diffusion problem (<ref>), the choice of the basis functions defined in (<ref>) has an important drawback. The definition of the multiscale basis functions requires a choice of arbitrary boundary conditions on the mesh element boundary $\partial K$, since the exact boundary condition satisfied by $u^\varepsilon$ is unknown. In (<ref>), affine boundary conditions are imposed. In view of this choice, we shall refer to the MsFEM defined above as the `MsFEM-lin'.
The MsFEM-lin cannot yield an accurate representation of $u^\varepsilon$ near $\partial K$ if $A^\varepsilon$ is highly oscillatory and the mesh $\mathcal{T}_H$ is coarse. Variations on the definition of the functions $\phiEps{i}$ have been proposed to improve the MsFEM. Here we summarize the ideas of oversampling and of MsFEM à la Crouzeix-Raviart, which together inspire the formulation of a general MsFEM framework in Sec. <ref>.
The oversampling variant of the MsFEM was introduced along with the variant based on (<ref>) at the time of its first appearance in <cit.>. For this method, an oversampling domain $S_K$ is associated to each mesh element $K$ (details are provided in Sec. <ref>). The problems (<ref>) are solved on the larger domain $S_K$ rather than $K$, so the inadequate boundary conditions are pushed away from the actual mesh elements. To construct the multiscale basis functions, the resulting functions on $S_K$ are restricted to the actual mesh elements $K$ and suitably combined around each vertex $x_i$. The new multiscale basis functions oscillate on $\partial K$ if the oversampling patch is taken large enough. We note that, in general, this strategy leads to discontinuous basis functions. Hence, the finite element space obtained is no longer conforming.
The MsFEM with Crouzeix-Raviart type boundary conditions for the local problems (which we shall abbreviate as `MsFEM-CR') was introduced in <cit.>.
It uses basis functions associated to the edges of the mesh (in contrast to the MsFEM-lin presented above, and its oversampling variant, where basis functions are associated to the vertices of the mesh). A typical basis function satisfies the following on $\partial K$: the flux through each face of $K$ is constant, and the constants are determined by the condition that the average of the basis function be 1 over one particular face and 0 over all other faces.
Again, this is a way to avoid imposing any conditions on the trace of the basis function directly. The multiscale functions can thus be oscillatory on the faces of the mesh.
As is the case for oversampling methods, the resulting finite element space is nonconforming.
All of these variations, applied to any MsFEM for linear second-order PDEs, are covered by the general MsFEM framework that we develop in Sec. <ref>.
§.§ The non-intrusive approach
The specific MsFEM-lin variant introduced in Sec. <ref> is intrusive, and so are all the MsFEMs described in Sec. <ref>. It turns out that the non-intrusive MsFEM approach introduced in <cit.> and recalled in Sec. <ref> can also be generalized to all these MsFEM variants. We summarize the key ingredients that allow for the formulation of the non-intrusive MsFEM approach of Algorithm <ref> (corresponding to the identities in boxes in Sec. <ref>).
The non-intrusive MsFEM follows from the expansion (<ref>), namely the expression of each multiscale basis function as the sum of a $\Pone$ basis function and a linear combination of fully localized numerical correctors. We note that
* the full localization of the numerical correctors defined in (<ref>) allows the preprocessing of the microstructure independently of the global approximation indices related to the finite element method;
* the expansion (<ref>) follows from the fact that $\nabla \phiPone{i}$ is piecewise constant combined with linearity of the local problems (<ref>);
* the stiffness matrix can be formulated in terms of a piecewise constant effective diffusion tensor in (<ref>) thanks to full localization of the corrector functions, the piecewise constant gradient of $\phiPone{i}$ in the expansion (<ref>) and bilinearity of the global problem (<ref>).
These observations provide the main structure of the general framework. First, we choose an underlying, low-dimensional space of piecewise affine functions to which the MsFEM is associated (Def. <ref>). This will be the standard conforming Lagrange space of order 1 (for the MsFEM-lin), or the Crouzeix-Raviart space of order 1 (for the MsFEM-CR). Second, we need to formulate the local problems for the numerical correctors (Def. <ref> and Def. <ref>). This involves the definition of oversampling patches (for MsFEMs with oversampling, Def. <ref>), and an extension of the notion of degrees of freedom to define the boundary conditions for the numerical correctors (Def. <ref>, <ref> and <ref>) on oversampling patches. It is then possible to define the multiscale basis functions as a generalization of (<ref>) (see Def. <ref>) and finally to define the MsFEM for our general framework in Def. <ref>.
We note that our development of non-intrusive MsFEM approaches relies to a great extent on the fact that (<ref>), and its generalization (<ref>) in the general framework developed below, provide a description of the multiscale basis functions in terms of $\Pone$ basis functions, without the need of higher-order functions. Higher-order MsFEMs can be found in <cit.>. Possible analogues of (<ref>) for such MsFEMs and the subsequent techniques to design a non-intrusive MsFEM variant are more involved and may be the topic of future work. See <cit.>.
§.§ Other motivations for the general framework
Besides a unified formulation of our non-intrusive MsFEM approach, our general framework can also be beneficial to concrete code development for the MsFEM. Common features among various multiscale methods have previously been used to design flexible and efficient software for the implementation of such methods on the DUNE platform <cit.> within the Exa-Dune project <cit.>. For example, the distribution of local problems over multiple processors and subsequent coupling in a global problem are handled by designated software components <cit.>. Our work may contribute to the efficient implementation of all MsFEMs covered by our general framework in such a project and similar endeavours yet to come.
When formulating the general framework, we also clarify a few practical matters that are often left pending in the various research articles we are aware of. In particular, we give a rigorous definition of the oversampling procedure near the boundary $\partial \Omega$ of the global domain.
As we explore the general framework, we will also propose an MsFEM variant that has not yet appeared in the literature: the MsFEM-CR combined with the oversampling technique (see Example <ref>). We hope that our framework may also further the development of new MsFEM variants in an attempt to improve on the shortcomings of the methods known today.
Finally, the present study may also uncover a deeper understanding of MsFEMs by paving the way to a unified convergence analysis of different variants. This work is currently in preparation.
§ ABSTRACT DEFINITION OF THE MSFEM
We develop here a general framework for multiscale finite element methods. The ultimate aim is to generalize the key identities of Sec. <ref>. This is done in Def. <ref> and <ref> for the numerical correctors introduced in (<ref>), and in Def. <ref> for the expansion (<ref>) of the multiscale basis functions. This allows the reformulation of the linear system of the MsFEM as the linear system of an effective problem in (<ref>) (for a Petrov-Galerkin MsFEM) and (<ref>) (for a Galerkin MsFEM) in Sec. <ref>. The other notions introduced in this section, although rather technical and abstract, are necessary tools to capture a wide variety of MsFEMs in our general framework.
§.§ The continuous problem
The abstract variational problem for our general MsFEM framework is as follows.
Let $a^\varepsilon$ be a continuous bilinear form on $H^1(\Omega) \times H^1(\Omega)$.
We are interested in the solution to the problem
\begin{equation}
\text{Find } u^\varepsilon \in H^1_0(\Omega) \text{ such that }
a^\varepsilon(u^\varepsilon, v) = F(v)
\text{ for any } v \in H^1_0(\Omega),
\label{eq:gen-pb}
\end{equation}
where $F$ is defined as in (<ref>) for any $f\in L^2(\Omega)$. To ensure well-posedness of (<ref>), we suppose that the bilinear form $a^\varepsilon$ is coercive on $H^1_0(\Omega)$.
The bilinear form $a^\varepsilon$ may contain coefficients that oscillate on a microscopically small scale.
The oversampling and Crouzeix-Raviart variants of the MsFEM introduced in Sec. <ref> show that we need to accommodate approximation spaces with discontinuities at the interfaces. This requires some additional assumptions on the formulation of the abstract problem.
We suppose that the bilinear form $a^\varepsilon$ is in fact defined on the broken Sobolev space $H^1(\mathcal{T}_H) \times H^1(\mathcal{T}_H)$. More precisely, we assume that we can represent it as $a^\varepsilon = \sum\limits_{K\in\mathcal{T}_H} a^\varepsilon_K$, where, for each $K \in \mathcal{T}_H$, $a^\varepsilon_K$ is a continuous bilinear form defined on $H^1(K) \times H^1(K)$.
To ensure well-posedness of MsFEMs, which may use nonconforming approximation spaces, coercivity on $H^1_0(\Omega)$ may be insufficient. Therefore, we add the following coercivity hypothesis for the bilinear forms $a_K^\varepsilon$:
\begin{equation}
\begin{IEEEeqnarraybox}[][c]{s}
\IEEEstrut
for all $K \in \mathcal{T}_H$, there exists $\alpha_K > 0$ such that
\\
\quad
\forall \, u \in H^1(K),
\quad
a^\varepsilon_K(u,u)
\geq
\alpha_K \, \lVert \nabla u \rVert_{L^2(K)}^2.
\IEEEstrut
\IEEEeqnarraynumspace %preserve the same alignment as obtained in an equation environment
\end{IEEEeqnarraybox}
\label{eq:gen-pb-coer}
\end{equation}
In order to perform a convergence analysis, one also has to assume that the $\alpha_K$ are bounded from below by some $\tilde{\alpha}>0$ that does not depend on $H$. We provide convergence results in Sec. <ref> for the pure diffusion problem (<ref>), in which case we have $\alpha_K=m$ from (<ref>).
As an example, the introductory problem (<ref>) with the associated bilinear form $a^{\varepsilon,\dif}$ is covered by this framework as is made explicit in Example <ref> below. Other second-order PDEs that fit in our abstract variational formulation are given in Example <ref>.
The diffusion problem (<ref>) is covered by the abstract variational formulation above. Indeed, we can set $a^\varepsilon = a^{\varepsilon,\dif}$, where $a^{\varepsilon,\dif}$ is the bilinear form defined in (<ref>). Further, we define
\begin{equation*}
a^{\varepsilon,\dif}_K(u,v)
\coloneqq
\int_K \nabla v \cdot A^\varepsilon \nabla u,
\end{equation*}
for all $u,v \in H^1(\mathcal{T}_H)$, so that we indeed have $a^{\varepsilon,\dif} = \sum\limits_{K\in\mathcal{T}_H} a^{\varepsilon,\dif}_K$. Clearly, each $a^{\varepsilon,\dif}_K$ satisfies (<ref>) with $\alpha_K=m$, the coercivity constant from (<ref>).
The reaction-advection-diffusion equation,
\begin{equation*}
-{\operatorname{div}(A^\varepsilon \nabla u^\varepsilon)} + b\cdot\nabla u^\varepsilon + \sigma u^\varepsilon= f,
\end{equation*}
with a divergence-free advection field $b:\Omega \to \bbR^d$ and a positive reaction coefficient $\sigma : \Omega \to \bbR$,
can be modelled (under some regularity hypotheses that we do not state here) with the bilinear forms
\begin{equation*}
a^\varepsilon_K(u,v) =
\int_K \nabla v \cdot A^\varepsilon \nabla u
+ v \, b \cdot \nabla u + \sigma u v.
\end{equation*}
However, these bilinear forms $a_K^\varepsilon$ do not satisfy (<ref>) even though the bilinear form $a^\varepsilon$ is coercive on $H^1(\Omega)$. To remedy this, a skew-symmetrized formulation of the transport term can be used.
The skew-symmetrized formulation uses the bilinear form
\begin{equation*}
a^\varepsilon_K(u,v) =
\int_K \nabla v \cdot A^\varepsilon \nabla u
+ \frac{1}{2} v \, b \cdot \nabla u
- \frac{1}{2} u \, b \cdot \nabla v
+ \sigma u v,
\end{equation*}
which does satisfy (<ref>). The assumption (<ref>) is used to prove well-posedness of the MsFEM in Lemma <ref>, but both choices for $a_K^{\varepsilon}$ mentioned here can be used in practice.
We refer e.g. to <cit.> for more details.
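The effect of skew-symmetrization can be seen already on a simple one-dimensional finite-difference discretization: the quadratic form of the skew-symmetric part of a transport matrix vanishes identically, so the transport term cannot destroy coercivity. The discretization below is a generic illustration with a hypothetical variable field $b$, not the MsFEM discretization itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
h = 1.0 / (n + 1)
x = h * np.arange(1, n + 1)          # interior nodes of (0, 1)

# Centered discretization of the transport term b(x) u' with a variable b
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * h)
B = np.diag(1.0 + 0.5 * np.sin(2.0 * np.pi * x)) @ D

# Skew-symmetric part, mirroring (1/2) v b.grad u - (1/2) u b.grad v
B_skew = 0.5 * (B - B.T)

u = rng.standard_normal(n)
transport_energy = u @ (B_skew @ u)  # exactly zero in exact arithmetic
```

This mirrors the continuous computation: with $u = v$, the two skew-symmetrized transport terms cancel, leaving only the diffusion and reaction contributions in $a^\varepsilon_K(u,u)$.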
Within the general MsFEM framework, $b$ and $\sigma$ are allowed to be highly oscillatory, and this may impact the specific MsFEM strategy to be preferred.
§.§ Piecewise affine structure
In Sec. <ref>, we have seen that the relation between multiscale basis functions and piecewise affine functions is essential for the development of our non-intrusive MsFEM. For the MsFEM definition in the general framework, we start by choosing such a structure in the following definition.
Let a mesh $\mathcal{T}_H$ be given. The underlying $\Pone$ space for the MsFEM, denoted $V_H$, is one of the following two spaces: the Lagrange approximation space
\begin{equation*}
V_H^{L} =
\{ v \in \Pone(\mathcal{T}_H) \mid v \text{ is continuous on } \Omega \},
\end{equation*}
in which case we shall refer to the associated MsFEM as the MsFEM-lin, or the Crouzeix-Raviart approximation space
\begin{equation*}
V_H^{CR} =
\left\{ v \in \Pone(\mathcal{T}_H) \mid \forall\ K \in \mathcal{T}_H, \ \forall e \in \mathcal{F}(K) \text{ such that } e \subset \Omega : \ \int_e \llbracket v \rrbracket = 0 \right\},
\end{equation*}
in which case the associated MsFEM shall be called the MsFEM-CR. We use the notation $\mathcal{F}(K)$ for the set of faces of $K$ and $\llbracket v \rrbracket$ denotes the jump of $v$ over the face $e$. The space $V_H^L$ is a subspace of $H^1(\Omega)$, but $V_H^{CR}$ is not. Note that no restrictions apply on faces lying on $\partial\Omega$.
We note that the underlying $\Pone$ space has the following property: if $v \in V_H$ is piecewise constant on the mesh $\mathcal{T}_H$, then $v$ is constant in $\Omega$. Contrary to the space $V_H^L$, functions in the Crouzeix-Raviart space $V_H^{CR}$ are discontinuous in general. They are continuous, however, at the centroids of all faces of the mesh.
For standard finite elements, the notion of degrees of freedom allows one to characterize any finite element function. The idea of the MsFEM is to preserve this notion of degrees of freedom (in a suitable way made precise below) in the definition of a multiscale approximation space, while adapting the piecewise affine structure to the microstructure of the PDE.
We formalize this notion for the two underlying $\Pone$ spaces that we introduced in Def. <ref>. The definition involves an arbitrary simplex $K$, which is typically an element of the mesh $\mathcal{T}_H$, or an associated oversampling patch (for the oversampling technique of the MsFEM) that we shall define in Def. <ref>. The latter is not always a simplex, and we extend Def. <ref> to such oversampling patches in Def. <ref> and <ref>.
A degree of freedom operator (dof operator) $\Gamma$ associates to any simplex $K \subset \bbR^d$ and $v \in \Pone(K)$ a vector $\Gamma(K,v) \in \bbR^{d+1}$, whose components are called the degrees of freedom of $v$ on $K$, in such a way that the application $v \mapsto \Gamma(K,v)$ is a linear bijection from $\Pone(K)$ to $\bbR^{d+1}$. More precisely, $\Gamma(K,\cdot)$ will denote in the sequel one of the following two operators:
* (dof operator for the MsFEM-lin.) Let $x_{0},\dots,x_{d}$ denote the vertices of $K$. We set
\begin{equation*}
\forall \, v \in \Pone(K),
\qquad
\Gamma^L(K,v) = \left( v(x_0),\dots,v(x_d) \right).
\end{equation*}
For $K \in \mathcal{T}_H$, the degree of freedom $[\Gamma^{L}(K,\cdot)]_j$ is said to be associated to the boundary if, for all $v \in \Pone(K)$, $[\Gamma^{L}(K,v)]_j = v(x)$ for a vertex $x$ of the mesh that lies on $\partial \Omega$.
* (dof operator for the MsFEM-CR.) Let $e_0,\dots,e_d$ denote the faces of $K$. We set
\begin{equation*}
\forall \, v \in \Pone(K),
\qquad
\Gamma^{CR}(K,v) = \left( \frac{1}{|e_0|} \int_{e_0} v,\dots,\frac{1}{|e_d|} \int_{e_d} v \right).
\end{equation*}
For $K \in \mathcal{T}_H$, the degree of freedom $[\Gamma^{CR}(K,\cdot)]_j$ is said to be associated to the boundary if, for all $v \in \Pone(K)$, $\displaystyle [\Gamma^{CR}(K,v)]_j = \frac{1}{|e|} \int_{e} v$ for a face $e$ of the mesh that lies on $\partial \Omega$.
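The two dof operators can be made concrete on a single triangle. For an affine function the face average coincides with the value at the face midpoint, which the sketch below exploits for $\Gamma^{CR}$; the function and the triangle are arbitrary, illustrative choices.

```python
import numpy as np

# An arbitrary affine function v(x) = g . x + c on R^2
g, c = np.array([2.0, -1.0]), 0.5
v = lambda x: float(g @ np.asarray(x) + c)

# A triangle K with vertices x_0, x_1, x_2 and faces e_j opposite x_j
K = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
faces = [(1, 2), (0, 2), (0, 1)]

# Lagrange dofs: values at the vertices
Gamma_L = np.array([v(xv) for xv in K])

# Crouzeix-Raviart dofs: face averages; for an affine function this is
# exactly the value at the face midpoint
Gamma_CR = np.array([v(0.5 * (K[i] + K[j])) for i, j in faces])
```

Both dof vectors determine the affine function uniquely, in accordance with the bijectivity required of $\Gamma(K,\cdot)$.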
The $\Pone$ test space is defined as
\begin{equation*}
V_{H,0} =
\left\{
v \in V_H \ \left\vert \
\begin{aligned}
&\forall \, K \in \mathcal{T}_H, \ \forall \, 1 \leq j \leq d+1, [\Gamma(K,v)]_j=0 \text{ if the degree} \\
&\text{of freedom } [\Gamma(K,\cdot)]_j \text{ is associated to the boundary}, \
\end{aligned}
\right.
\right\}.
\end{equation*}
The $\Pone$ test space is used in practice to approximate the subspace $H^1_0(\Omega)$ of $H^1(\Omega)$.
The degrees of freedom are defined element by element and are thus local.
Global properties of the underlying $\Pone$ space $V_H$ are most easily made explicit through the identification of a basis for $V_H$.
Let $V_H$ be an underlying $\Pone$ space as in Def. <ref>, and let $\Gamma$ be the associated operator. We shall denote by $N$ the dimension of $V_H$. The $\Pone$ basis functions $\phiPone{1},\dots,\phiPone{N}$ are defined as follows:
* For the MsFEM-lin, let $x_1,\dots,x_N$ be an enumeration of the (internal and boundary) vertices of $\mathcal{T}_H$. Then $\phiPone{i}$ is defined by $\phiPone{i}(x_j) = \delta_{i,j}$ for all $1 \leq i,j \leq N$.
* For the MsFEM-CR, let $e_1,\dots,e_N$ be an enumeration of the (internal and boundary) faces of $\mathcal{T}_H$. Then $\phiPone{i}$ is defined by $\displaystyle \frac{1}{|e_j|} \int_{e_j} \phiPone{i} = \delta_{i,j}$ for all $1 \leq i,j \leq N$.
In both cases, these functions form a basis of the corresponding space $V_H$ of Def. <ref>.
§.§ Local problems
§.§.§ Oversampling patches
To replace the (standard) underlying $\Pone$ space by a space of the same (low) dimension, adapted to the microstructure of $a^\varepsilon$, we associate to each mesh element $K \in \mathcal{T}_H$ an oversampling patch. It serves to avoid imposing artificial, non-oscillatory boundary conditions directly on $K$ when computing the numerical correctors that encode the microstructure.
Let $K\in\mathcal{T}_H$ be any mesh element and let $S_K'$ be a simplex obtained from $K$ by homothety around the centroid of $K$ with homothety ratio $\rho\geq1$. The oversampling patch $S_K$ is defined as $S_K = S_K' \cap \Omega$.
See Fig. <ref> for an illustration of the construction of oversampling patches in dimension 2. In this work, we allow for the trivial homothety ratio $\rho=1$. In this case, the patch $S_K$ coincides with $K$.
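The homothety construction can be sketched as follows (names are our own; the intersection with $\Omega$ that defines $S_K$ near the boundary is omitted):

```python
import numpy as np

# Sketch of the homothety construction of S_K' from Def. <ref>. The
# intersection with Omega required near the boundary is not performed here.

def oversampling_patch(vertices, rho=2.0):
    """Dilate the simplex K about its centroid by the ratio rho >= 1."""
    vertices = np.asarray(vertices, dtype=float)
    centroid = vertices.mean(axis=0)
    return centroid + rho * (vertices - centroid)

K = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
S_K = oversampling_patch(K, rho=2.0)      # rho = 1 would return K itself
```

The centroid is preserved, and each vertex of $S_K'$ lies $\rho$ times as far from it as the corresponding vertex of $K$.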
An MsFEM for which all oversampling patches satisfy $S_K=K$ is called an MsFEM without oversampling; otherwise, it is called an MsFEM with oversampling. We speak simply of an MsFEM when no assumption is made on the oversampling patches.
Oversampling patches for MsFEM in 2D. Left: the patch for the mesh element $K$ is obtained from $K$ by homothety. Right: The triangle $S_K'$ partially lies outside the domain $\Omega$ and the oversampling patch $S_K$ is not homothetic to $K$. It is not even a triangle.
For most mesh elements $K$, the patch $S_K$ in Def. <ref> is a simplex. However, for mesh elements close to the boundary $\partial \Omega$, alternative constructions should be considered. We have not found any explicit description of such a construction in the literature. This complicates the reproducibility of the method as well as a rigorous convergence analysis. The precise definitions of this section provide a first step to address these issues. A fully rigorous convergence analysis of the MsFEM with oversampling as described here is the subject of ongoing investigations <cit.>.
§.§.§ Degrees of freedom on oversampling patches
Definition <ref> provides the definition of DOF operators on any simplex. For the MsFEM, we wish to compute multiscale functions on oversampling patches $S_K$, in which case Def. <ref> may be insufficient. This is illustrated in Fig. <ref>: the number of vertices/faces of the oversampling patch may be larger than $d+1$. In order to associate a multiscale basis function to every $\Pone$ basis function, we still need a notion of DOF operator such that $\Gamma(S_K,\cdot)$ is a linear bijection from $\Pone(S_K)$ to $\bbR^{d+1}$. Therefore, we extend the definitions of the degree of freedom operators $\Gamma^L$ and $\Gamma^{CR}$ from Def. <ref> and <ref>.
Let $K\in\mathcal{T}_H$ and let $S_K$ be its associated oversampling patch. Let $x_0,\dots,x_d$ be a selection of $d+1$ distinct vertices of $S_K$. We define the operator $\Gamma^L$ by
\begin{equation*}
\forall \, v \in \Pone(S_K),
\qquad
\Gamma^L(S_K,v) = \left( v(x_0),\dots,v(x_d) \right).
\end{equation*}
We note that any choice of $d+1$ nodal values unequivocally characterizes an affine function on $S_K$. Hence, $\Gamma^L(S_K,\cdot)$ is indeed a bijection. Since $\Gamma(S_K,\cdot)$ will only be used to identify elements of $\Pone(S_K)$ on the boundary of $S_K$, the particular choice of the vertices $x_0,\dots,x_d$ is unimportant in the above definition. Finally, when $S_K$ is a simplex, it has only $d+1$ vertices and Def. <ref> reduces to Def. <ref>.
To generalize the notion of degrees of freedom for the Crouzeix-Raviart space to non-simplicial patches, we need to introduce some additional notation.
On the boundary of a non-simplicial oversampling patch, we can identify some faces that collapse to a single vertex if we shrink $S_K$ to $K$. We call these faces the additional faces and denote the set containing them by $\mathcal{F}_a(S_K)$. The other faces of $S_K$ are referred to as the dilated faces, collected in the set $\mathcal{F}_d(S_K)$.
When the patch $S_K$ does not touch $\partial \Omega$, we have $\mathcal{F}_d(S_K)=\mathcal{F}(S_K)$ and $\mathcal{F}_a(S_K) = \emptyset$.
In Fig. <ref>, for example, the additional faces are exactly those faces that lie on $\partial\Omega$. This is not always the case, as is illustrated by Fig. <ref>.
For the definition of $\Gamma^{CR}(S_K,\cdot)$, we shall rely on the existence of $d+1$ dilated faces, because we need $\Gamma^{CR}(S_K,\cdot)$ to be a bijection between $\Pone(S_K)$ and $\bbR^{d+1}$. This imposes a constraint on the choice of the homothety ratio used to construct $S_K$. For example, in the case of Fig. <ref>, the lower right dilated face falls outside $\Omega$ if the homothety ratio is too large, and the oversampling patch $S_K$ only has two dilated faces (edges here) and two additional faces.
Non-simplicial oversampling patches in 2D. The dilated edges of the patch $S_K$, those that `correspond' to the edges of the original triangle $K$, are dashed.
Let $K\in\mathcal{T}_H$ and let $S_K$ be its associated oversampling patch. We assume that $S_K$ has $d+1$ dilated faces, and we denote them by $e_0,\dots,e_d$. We define the operator $\Gamma^{CR}$ by
\begin{equation*}
\forall \, v \in \Pone(S_K),
\qquad
\Gamma^{CR}(S_K,v) = \left( \frac{1}{|e_0|} \int_{e_0} v,\dots,\frac{1}{|e_d|} \int_{e_d} v \right).
\end{equation*}
When $S_K$ is a simplex, we have $\mathcal{F}_d(S_K) = \mathcal{F}(S_K)$, and Def. <ref> coincides with the respective elements of Def. <ref>.
§.§.§ Numerical correctors: first oversampling strategy
We now provide the precise assumptions under which we will consider local problems, i.e., the analogues of (<ref>) defining the MsFEM-lin basis functions and the definition of the numerical correctors in (<ref>). In fact, since the numerical correctors play an essential role in the construction of non-intrusive MsFEM approaches, we define the numerical correctors first and use them to define the multiscale basis functions in Def. <ref>.
The two possible functional settings for all these constructions are provided by Def. <ref> and <ref>. The two definitions present different implementations of the oversampling technique. Both involve a `sampling space', whose name is inspired by the idea that only a limited number of local problems will be solved to encode the microstructure of the PDE in the numerical model. The choice of sampling space has to accommodate the boundary conditions that one wishes to impose on the numerical correctors and basis functions (e.g. essential or natural; see Examples <ref> and <ref>).
Let $K\in\mathcal{T}_H$, let $S_K$ be its associated oversampling patch and let $\Gamma$ be a DOF operator from Def. <ref>. A subspace $V_K$ of $H^1(S_K)$ and a bilinear form $s^\varepsilon_K : V_K \times V_K \to \bbR$ are called a sampling space and a sampling form, respectively, if they satisfy the following:
* the space $V_K$ contains the space of affine functions $\Pone(S_K)$;
* the DOF operator $\Gamma(S_K,\cdot)$ is well-defined on $V_K$;
* the DOF-extended local problem: find $v \in V_K$ such that
\begin{equation}
s_K^\varepsilon(v, w)
=
\langle g, w\rangle
\quad
\text{for all } w \in V_{K,0},
\label{eq:sampling-space-pde-OS-dofe}
\end{equation}
has a unique solution for any $ g \in (H^1(S_K))'$. Here, $
V_{K,0} = \{ w \in V_K \mid \Gamma(S_K,w) = 0\}
$ is the sampling test space.
Problem (<ref>) is called `DOF-extended' because the degrees of freedom, which control the boundary conditions of the local problem, are imposed on the oversampling patch $S_K$ rather than on the (generally smaller) mesh element $K$.
The sampling form $s_K^\varepsilon$ shall be used to encode the oscillations of the bilinear form $a^\varepsilon$ and thus the microstructure of the problem in the multiscale finite element functions.
There is some flexibility in choosing the sampling form; one may choose to include all the terms of the bilinear form $a^\varepsilon_K$ of the original problem (<ref>), or only some of them.
When the MsFEM was first proposed in <cit.>, it was suggested that $s_K^\varepsilon$ should include the terms corresponding to the highest-order derivatives of the PDE to be solved. In the context of the advection-diffusion equation, one may thus choose to include only the diffusion term, but the advection term can also be included in our MsFEM framework. Both options have been studied, e.g., in <cit.>.
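To illustrate the two options just mentioned, the following sketch assembles, on a fine 1D grid, a diffusion-only sampling form and one that also includes the advection term. The grid, the oscillatory coefficient and the velocity are our own assumptions, chosen only for illustration:

```python
import numpy as np

# Sketch: two candidate sampling forms for a 1D advection-diffusion problem,
# assembled with P1 finite elements on a fine grid over one patch (0, 1).
# All data below (grid, a_eps, b) are assumptions made for this sketch.

n = 200
h = 1.0 / n
xm = (np.arange(n) + 0.5) * h                     # fine cell midpoints
a = 1.0 + 0.5 * np.sin(2 * np.pi * xm / 0.05)     # oscillatory diffusion
b = 1.0                                           # constant advection velocity

# matrices acting on interior nodal values (homogeneous Dirichlet BC)
D = (np.diag((a[:-1] + a[1:]) / h)                # diffusion: int a u' w'
     + np.diag(-a[1:-1] / h, 1)
     + np.diag(-a[1:-1] / h, -1))
C = b * (np.diag(np.full(n - 2, 0.5), 1)          # advection: int b u' w
         + np.diag(np.full(n - 2, -0.5), -1))

s_dif = lambda u, w: w @ D @ u                    # diffusion-only sampling form
s_full = lambda u, w: w @ (D + C) @ u             # diffusion + advection

# the first form is symmetric, the second is not
u = np.sin(np.pi * np.arange(1, n) * h)
w = np.cos(np.pi * np.arange(1, n) * h)
```

The diffusion part yields a symmetric matrix, while the advection part is skew-symmetric, which is one practical difference between the two choices of sampling form.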
In the functional setting of Def. <ref>, the generalization of (<ref>) to define the numerical correctors for the general MsFEM framework is as follows.
We introduce for all $K\in\mathcal{T}_H$, for any $0\leq\alpha\leq d$, the function $\VSK{\alpha}{1} \in V_{K,0}$ as the unique solution to the corrector problem
\begin{equation}
\tcboxmath[colback=white, colframe=black]{
\forall \, w \in V_{K,0},
\quad
s_K^\varepsilon \left( \VSK{\alpha}{1}, w \right) =
\begin{cases}
{-s_K^\varepsilon} \left( 1, w \right)
& \text{if } \alpha=0,
\\
{-s_K^\varepsilon} \left( x^\alpha - x^\alpha_{c,K}, w \right)
& \text{if } 1\leq\alpha\leq d.
\end{cases}
}
\label{eq:MsFEM-gen-correctors-dofe}
\end{equation}
The DOF-extended numerical corrector $\VKOS{\alpha}{1}$ is defined as the restriction of $\VSK{\alpha}{1}$ to $K$, extended by $0$ to all of $\Omega$.
Note that the above definition introduces one more numerical corrector than (<ref>) does. The precise definition of the numerical correctors is chosen such that the analogue of the expansion (<ref>) for the general framework (see (<ref>)) leads to a PDE for the multiscale basis functions analogous to (<ref>); we show this in Lemma <ref> and (for the second oversampling strategy introduced below) in Lemma <ref>. The following example shows that Def. <ref> indeed generalizes the numerical correctors defined by (<ref>) in Sec. <ref>.
[MsFEM-lin for diffusion problems]
We consider $V_H = V_H^{L}$ and $\Gamma = \Gamma^L$ from Def. <ref>.
For the diffusion problem (<ref>), we have $a^\varepsilon = a^{\varepsilon,\dif}$ and we set $s_K^\varepsilon = a_K^{\varepsilon,\dif}$ (see Example <ref>).
The sampling space for the MsFEM-lin is defined as
\begin{equation*}
V_K^{L}
\coloneqq
\left\{
v \in H^1(S_K) \ \mid \ \exists \, w \in \Pone(S_K) \text{ such that } v\vert_{\partial S_K} = w\vert_{\partial S_K}
\right\}.
\end{equation*}
Then the sampling test space $V^{L}_{K,0}$ is the space $H^1_0(S_K)$.
In this case, $a_K^{\varepsilon,\dif}(1,w)=0$ holds for all $w \in V^{L}_{K,0}$.
Consequently, the DOF-extended numerical corrector $\VKOS{0}{1}$ vanishes identically; we indeed obtain exactly $d$ numerical correctors, as in Sec. <ref>.
For the non-trivial numerical correctors, Def. <ref> corresponds to the weak formulation of the following boundary value problem:
\begin{equation*}
-{\operatorname{div}(A^\varepsilon \nabla \VSK{\alpha}{1})} = \operatorname{div}(A^\varepsilon e_\alpha) \
\text{ in } S_K,
\quad
\VSK{\alpha}{1} = 0 \
\text{ on } \partial S_K,
\end{equation*}
which is clearly well-posed.
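As a concrete illustration, the following sketch discretizes the 1D analogue of this corrector problem with P1 finite elements on a fine grid; the solver and the oscillatory coefficient are our own choices, not part of the text:

```python
import numpy as np

# 1D sketch (our own discretization) of the corrector problem above on a
# patch S_K = (0, H): find V with V(0) = V(H) = 0 such that
#   int_0^H a_eps V' w' = - int_0^H a_eps w'   for all P1 hat functions w,
# i.e. the weak form of -(a_eps V')' = (a_eps)' with Dirichlet conditions.

def corrector_1d(a, H=1.0, n=400):
    x = np.linspace(0.0, H, n + 1)
    h = x[1] - x[0]
    am = a(0.5 * (x[:-1] + x[1:]))            # a_eps sampled at cell midpoints
    # tridiagonal P1 stiffness matrix on the interior nodes
    A = (np.diag((am[:-1] + am[1:]) / h)
         + np.diag(-am[1:-1] / h, 1)
         + np.diag(-am[1:-1] / h, -1))
    rhs = am[1:] - am[:-1]                    # -int a_eps w_i' per interior node
    V = np.zeros(n + 1)
    V[1:-1] = np.linalg.solve(A, rhs)
    return x, V

eps = 0.05
x, V = corrector_1d(lambda s: 1.0 + 0.5 * np.sin(2 * np.pi * s / eps))
```

For a constant coefficient the right-hand side vanishes and the corrector is identically zero, consistent with the observation that the correctors are trivial when $A^\varepsilon$ is constant.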
[MsFEM-CR for diffusion problems]
Taking $a^\varepsilon$, $a^\varepsilon_K$ and $s_K^\varepsilon$ as in the previous example, we construct the MsFEM-CR with the sampling space $V_K^{CR} \coloneqq H^1(S_K)$.
With $V_H = V_H^{CR}$ and $\Gamma = \Gamma^{CR}$ from Def. <ref>, the DOF-extended numerical corrector $\VKOS{\alpha}{1}$ is obtained from the boundary value problem:
\begin{equation}
\left\{
\begin{aligned}
-\operatorname{div} \left( A^\varepsilon \nabla \VSK{\alpha}{1} \right) &= \operatorname{div}(A^\varepsilon e_\alpha)
&& \text{in } S_K, \\
\vec{n} \cdot A^\varepsilon \nabla \VSK{\alpha}{1} &= -\vec{n} \cdot A^\varepsilon e_\alpha
&& \text{on each } h \in \mathcal{F}_a(S_K), \\
\vec{n} \cdot A^\varepsilon \nabla \VSK{\alpha}{1} &= c_h - \vec{n} \cdot A^\varepsilon e_\alpha
&& \text{on each } h \in \mathcal{F}_d(S_K), \\
\frac{1}{|h|} \int_h \VSK{\alpha}{1} &= 0
&& \text{for each } h \in \mathcal{F}_d(S_K),
\end{aligned}
\right.
\label{eq:msfem-cr-dofe}
\end{equation}
where $\vec{n}$ denotes the outward unit normal vector on $\partial S_K$ and $c_h$ is a constant whose value is uniquely determined by the above problem.
We note that the condition for the flux on the additional faces of $S_K$ is entirely determined by the right-hand side in (<ref>), whereas the flux on the dilated faces of $S_K$ involves an additional constant, due to the fact that the test functions in $V_{K,0}^{CR}$ cannot take arbitrary values on the dilated faces. Indeed, their mean vanishes on these faces according to Def. <ref>.
When $S_K=K$ and when the faces of $K$ do not lie on $\partial \Omega$, this corresponds to the setting of the original MsFEM-CR defined in <cit.>. The latter work also provides an alternative characterization of the multiscale Crouzeix-Raviart space.
When a face $e$ of $K$ lies on $\partial \Omega$, the basis functions that we will obtain do not satisfy $\phiEps{e}=0$ on $e$, but only a weak boundary condition in the average sense on $e$. This does not correspond to the original definition of the MsFEM-CR in <cit.>. The MsFEM-CR with local boundary conditions as defined here was studied in <cit.>.
When $S_K=K$ (i.e., in the absence of oversampling), the DOF operator allows us to prescribe certain continuity properties on the faces of the mesh elements $K$. More precisely, when the MsFEM-lin with DOF operator $\Gamma^L$ is employed, the numerical correctors $\VKOS{\alpha}{1}$ vanish at the vertices of the mesh and, with the correct choice of sampling space (see Example <ref>), they vanish on all faces of $K$ and are thus continuous on $\Omega$. When the MsFEM-CR with DOF operator $\Gamma^{CR}$ is considered, we obtain weak continuity of the numerical correctors across all faces of the mesh. The definition of the multiscale basis functions that we give below (see Def. <ref>, in the vein of the expansion (<ref>)) shows that the continuity properties of the $\Pone$ basis functions of the underlying $\Pone$ space are thus not perturbed when building the multiscale basis functions.
In the general case, when the oversampling patch $S_K$ is larger than $K$, we cannot preserve any of these continuity properties if we use DOF-extended local problems for our local computations, since the values on $\partial K$ are not controlled by the degrees of freedom $\Gamma(S_K,\cdot)$ on $\partial S_K$. Therefore, we introduce a variant of the local problems and of the DOF-extended numerical correctors in the next section.
§.§.§ Numerical correctors: second oversampling strategy
Let $K \in \mathcal{T}_H$ and let $V_K$ and $s_K^\varepsilon$ be a sampling space and a sampling form, respectively, according to Def. <ref>. Additionally, suppose that the DOF operator $\Gamma(K,\cdot)$ is well-defined on $V_K$. Then a DOF-continuous local problem consists in finding $v\in V_K$ such that
\begin{equation}
s_K^\varepsilon(v, w)
=
\langle g, w\rangle
\quad
\text{for all } w \in V_{K,0},
\label{eq:sampling-space-pde-OS-dofc}
\end{equation}
for some $ g \in (H^1(S_K))'$.
Suppose any DOF-continuous local problem in Def. <ref> is well-posed. Then we introduce, for all $K\in\mathcal{T}_H$ and all $0 \leq \alpha \leq d$, the functions $\VSK{\alpha}{2}$ as the unique functions in $V_K$ with $\Gamma\left(K, \VSK{\alpha}{2}\right)=0$ satisfying the corrector problem (<ref>). We define the DOF-continuous numerical correctors $\VKOS{\alpha}{2}$ as the restrictions of $\VSK{\alpha}{2}$ to $K$, extended by $0$ to all of $\Omega$.
We emphasize that the local problems of Def. <ref> and <ref> use test functions $w$ in the same space $V_{K,0}$. This means that the test functions satisfy $\Gamma(S_K,w)=0$, rather than $\Gamma(K,w)=0$.
Clearly, when $S_K=K$, there is no difference between the DOF-extended and DOF-continuous problems (<ref>) and (<ref>).
In this case, we shall simply refer to (<ref>) (or (<ref>)) as local problems, and we write $\VKOS{\alpha}{0} = \VK{\alpha}$ for the numerical correctors of MsFEMs without oversampling.
[MsFEM-lin for diffusion problems]
Continuing Example <ref>, consider now the DOF-continuous numerical corrector $\VKOS{\alpha}{2}$. Equation (<ref>) then corresponds to the following problem: there exists $w \in \Pone(S_K)$ such that
\begin{equation*}
-{\operatorname{div}(A^\varepsilon \nabla \VSK{\alpha}{2})} = \operatorname{div}(A^\varepsilon e_\alpha) \
\text{ in } S_K,
\quad
\VSK{\alpha}{2} = w \
\text{ on } \partial S_K,
\quad
\VSK{\alpha}{2} = 0 \
\text{ at the vertices of } K.
\end{equation*}
The boundary condition on $\partial S_K$ is `completed' by a condition at the vertices of $K$. Except when $A^\varepsilon$ is constant (and, consequently, $\phiEps{i}$ is affine on $S_K$), it is not evident whether a solution to this problem exists.
[MsFEM-CR for diffusion problems]
For the MsFEM-CR considered in Example <ref>, the DOF-continuous numerical correctors satisfy the same problem (<ref>) as the DOF-extended numerical correctors, but with the average condition replaced by $\displaystyle \frac{1}{|h|} \int_h \VSK{\alpha}{2} = 0$ for each $h \in \mathcal{F}(K)$.
As we saw for the MsFEM-lin, this is not a standard boundary value problem on $S_K$.
Examples <ref> and <ref> show that a DOF-extended local problem is typically equivalent to a boundary value problem on $S_K$. Under reasonable assumptions, these problems have a unique solution, as required by Def. <ref>. This is not the case for DOF-continuous problems, for which one finds some boundary conditions on $\partial S_K$ (because the degrees of freedom of test functions in $V_{K,0}$ are prescribed on $S_K$) and another set of conditions on $\partial K$ that are explicitly imposed through the degrees of freedom on $K$ in (<ref>). Well-posedness is not obvious in general, and cannot always be deduced from the well-posedness of the DOF-extended counterpart (<ref>). See Examples <ref> and <ref> for concrete local problems. We address the well-posedness of DOF-continuous problems in more detail in Sec. <ref>.
§.§.§ Well-posedness of DOF-continuous numerical correctors
We have seen in Examples <ref> and <ref> that DOF-continuous local problems lead to non-standard boundary conditions. This poses not only a theoretical issue, but also a computational challenge. To complete our study of the general MsFEM framework, we now present a computational strategy to obtain the DOF-continuous numerical correctors, and we use this strategy to discuss the well-posedness of the associated local problems.
In Def. <ref> we assume the well-posedness of DOF-extended problems, and we have seen in Examples <ref> and <ref> that this is a natural assumption. It is also natural to assume that we can compute the DOF-extended numerical correctors numerically. We obtain the DOF-continuous numerical correctors from the DOF-extended ones by subtracting a suitable linear combination of functions $W^\beta$. The $W^\beta$ must all satisfy the homogeneous equation
\begin{equation*}
s_K^\varepsilon \left( W^\beta , w \right) = 0
\quad
\text{for all } w \in V_{K,0},
\end{equation*}
in order not to perturb the local problem (<ref>) that is already satisfied by both types of numerical correctors.
We shall use the functions $W^0 \coloneqq 1 + \VSK{0}{1}$ and $W^\beta \coloneqq x^\beta - x^\beta_{c,K} + \VSK{\beta}{1}$ for $1 \leq \beta \leq d$, where $\VSK{\beta}{1}$ is defined in Def. <ref>. The precise strategy is as follows.
Fix $0 \leq \alpha \leq d$. We look for coefficients $c_0^\alpha, \dots, c_d^\alpha$ such that
\begin{equation*}
\VSK{\alpha}{2}
=
\VSK{\alpha}{1}
- \sum_{\beta=0}^d c_\beta^\alpha \, W^\beta
\quad \text{on } K,
\end{equation*}
where we recall that $\VSK{\alpha}{2}$ is defined by Def. <ref>. Note that both sides of the equation clearly satisfy (<ref>). The desired equality thus holds if and only if
\begin{equation*}
\Gamma \left( K,
\VSK{\alpha}{1} - \sum_{\beta=0}^d c_\beta^\alpha \, W^\beta
\right)
= 0.
\end{equation*}
Since the DOF operators are linear, this leads to the linear system
\begin{equation}
\underbrace{
\begin{bmatrix}
\vert & \vert & & \vert \\
\Gamma(K,W^0) & \Gamma(K,W^1) & \hdots & \Gamma(K,W^d) \\
\vert & \vert & & \vert \\
\end{bmatrix}
}_{\eqqcolon \, \mathds{M}}
\begin{bmatrix}
c_0^\alpha \\ c_1^\alpha \\ \vdots \\ c_d^\alpha
\end{bmatrix}
=
\Gamma\left(K,\VSK{\alpha}{1}\right).
\label{eq:glue-system}
\end{equation}
Invertibility of the matrix $\mathds{M}$ is thus a sufficient condition for the existence of all DOF-continuous numerical correctors, and solving the linear system (<ref>) for each $\alpha$ (where all DOF-extended numerical correctors are replaced by their numerical approximations) allows us to compute the DOF-continuous numerical correctors in practice.
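The gluing step can be sketched abstractly as follows; the DOF vectors below are made-up numbers chosen only to exercise the linear algebra, since in practice they come from the local solves:

```python
import numpy as np

# Abstract sketch of the gluing step: the columns of M are the DOF vectors
# Gamma(K, W^beta), and the right-hand side is the DOF vector on K of a
# DOF-extended corrector. All numbers here are invented for illustration.

M = np.array([[1.0, 0.2, -0.1],
              [1.0, 0.5, 0.3],
              [1.0, -0.4, 0.6]])          # [Gamma(K, W^0) | W^1 | W^2]
rhs = np.array([0.3, -0.2, 0.1])          # Gamma(K, V^alpha) on K

c = np.linalg.solve(M, rhs)               # coefficients c_0^alpha, ..., c_d^alpha
# the corrected function V^alpha - sum_beta c_beta^alpha W^beta then has
# vanishing degrees of freedom on K
residual = rhs - M @ c
```

If $\mathds{M}$ is singular, `np.linalg.solve` raises an error, mirroring the fact that invertibility of $\mathds{M}$ is what guarantees existence of the coefficients.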
Before studying the invertibility of the matrix $\mathds{M}$ in a few special cases, let us consider the matrix composed of the degrees of freedom on $S_K$, i.e., the matrix
\begin{equation*}
\widetilde{\mathds{M}}
\coloneqq
\begin{bmatrix}
\vert & \vert & & \vert \\
\Gamma(S_K,W^0) & \Gamma(S_K,W^1) & \hdots & \Gamma(S_K,W^d) \\
\vert & \vert & & \vert \\
\end{bmatrix}.
\end{equation*}
By definition of the functions $\VSK{\beta}{1}$, we have $\Gamma(S_K,W^\beta) = \Gamma(S_K,x^\beta - x_{c,K}^\beta)$ for $1\leq \beta \leq d$, and $\Gamma(S_K, W^{0}) = \Gamma(S_K,1)$.
Note that the constant function together with the coordinate functions $x^\beta-x^\beta_{c,K}$ ($1 \leq \beta \leq d$) span $\Pone(S_K)$. Since $\Gamma(S_K,\cdot)$ is a bijection, the vectors $\Gamma(S_K, W^0),\dots,\Gamma(S_K,W^{d})$ are linearly independent. Hence the matrix $\widetilde{\mathds{M}}$ is invertible.
One may hope that the linear independence of the vectors $\Gamma(S_K,W^\beta)$ is preserved for the degrees of freedom on the interior boundary $\partial K$ instead of $\partial S_K$, yielding invertibility of $\mathds{M}$. We found this to hold for all numerical tests that we performed, involving both the MsFEM-lin and the MsFEM-CR.
We can prove invertibility of $\mathds{M}$ in a few special cases. When $s_K^\varepsilon$ is the sampling form that was used in Example <ref> (corresponding to a diffusion problem; we will consider this case until the end of this section) and if $A^\varepsilon$ is constant, all numerical correctors vanish on $S_K$ and the foregoing argument for the matrix $\widetilde{\mathds{M}}$ shows invertibility of $\mathds{M}$.
In the periodic setting (see Sec. <ref>), even though $A^\varepsilon$ itself is not constant, its homogenized limit $A^*$ is. In this case, the $\VSK{\beta}{1}$ converge to zero weakly in $H^1(S_K)$. (We show this in Lemma <ref> in the absence of oversampling, but the argument can be generalized to DOF-extended oversampling.)
Now consider the MsFEM-CR. The weak convergence of the $\VSK{\beta}{1}$ in $H^1(S_K)$ ensures weak convergence on each face of $K$ in the $H^{1/2}$-norm by continuity of the trace operator. Since the embedding of $H^{1/2}(\partial K)$ in $L^2(\partial K)$ is compact, the $\VSK{\beta}{1}$ converge to $0$ strongly in $L^2$ on each face of $K$. Consequently, the degrees of freedom $\Gamma(K, \VSK{\beta}{1})$ (the averages on the faces of $K$) converge to zero as $\varepsilon\to0$. Thus, $\Gamma(K,W^0) \to \Gamma(K,1)$ and $\Gamma(K,W^\beta) \to \Gamma(K,x^\beta - x_{c,K}^\beta)$ as $\varepsilon\to0$ for all $1 \leq \beta \leq d$ and, by the above argument for the matrix $\widetilde{\mathds{M}}$, the matrix $\mathds{M}$ is invertible in this limit. By continuity of the determinant function, the matrix $\mathds{M}$ is invertible when $\varepsilon$ is small enough, and the DOF-continuous basis functions exist in this regime.
The study of the DOF-continuous numerical correctors for the MsFEM-lin is more delicate, since pointwise operations are involved in evaluating the degrees of freedom, and these are ill-defined on $H^1(S_K)$. One can invoke the De Giorgi-Nash result, which can be found e.g. in <cit.>, to see that the multiscale basis functions, obtained from the numerical correctors in Def. <ref> below, are in fact continuous for any bounded diffusion tensor. (See Example <ref> for a definition of the multiscale basis functions for the MsFEM-lin independent of the numerical correctors.) Pointwise evaluation is then justified. It would therefore be convenient to study the DOF-continuous basis functions directly, without the intermediate step of the numerical correctors. We omit the details here.
§.§ The multiscale basis functions
We can now define the multiscale basis functions for the approximation of the abstract problem (<ref>) in terms of the numerical correctors. We recall that in Sec. <ref>, the numerical correctors were derived from the definition of the basis functions. We give an equivalent definition of the multiscale basis functions, independent of the numerical correctors, in Lemmas <ref> and <ref>. Recall that $\phiPone{1},\dots,\,\phiPone{N}$ is a basis of the space $V_{H}$ (see Def. <ref>). We can suppose that the first $N_{0}$ basis functions form a basis of $V_{H,0}$. The following definition is the generalization of (<ref>) to the general MsFEM framework.
For each $i=1,\dots,N$, the multiscale basis function $\phiEps{i}$ is defined by
\begin{equation}
\tcboxmath[colback=white, colframe=black]{
\forall \, K \in \mathcal{T}_H,
\qquad
\left. \phiEps{i} \right\vert_K
=
\left. \phiPone{i} \right\vert_K + \phiPone{i}(x_{c,K}) \, \VKOS{0}{\bullet} + \sum_{\alpha=1}^d \partial_\alpha \left(\left. \phiPone{i} \right\vert_K \right) \VKOS{\alpha}{\bullet},
}
\label{eq:MsFEM-Vxy}
\end{equation}
where $\bullet = \mathsf{e}$ corresponds to DOF-extended multiscale basis functions and $\bullet = \mathsf{c}$ corresponds to DOF-continuous multiscale basis functions.
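In 1D, the expansion above for the MsFEM-lin (where the corrector associated with $\alpha = 0$ vanishes, cf. Example <ref>) can be sketched as follows; the coarse mesh and the placeholder corrector are our own assumptions:

```python
import numpy as np

# 1D sketch of the expansion defining the multiscale basis functions for the
# MsFEM-lin: on each coarse element K, the multiscale basis function equals
# the P1 hat function plus its (constant) slope on K times the local
# corrector. The mesh and the placeholder corrector are our own choices.

H = 0.25
nodes = np.arange(0.0, 1.0 + H, H)            # coarse nodes on (0, 1)
nfine = 50                                    # fine points per coarse element

def ms_basis(i, corrector):
    """Values of phi_i^eps on a fine grid, assembled element by element."""
    xs, vals = [], []
    for k in range(len(nodes) - 1):
        xK = np.linspace(nodes[k], nodes[k + 1], nfine)
        phi = np.interp(xK, nodes, np.eye(len(nodes))[i])   # P1 hat function
        slope = (phi[-1] - phi[0]) / H                      # d(phi_i)/dx on K
        xs.append(xK)
        vals.append(phi + slope * corrector(xK, k))
    return np.concatenate(xs), np.concatenate(vals)

# placeholder corrector vanishing at the element endpoints (no oversampling)
corr = lambda t, k: 0.01 * np.sin(2 * np.pi * (t - nodes[k]) / H)
x, phi1 = ms_basis(1, corr)
```

Because the correctors vanish at the element endpoints, the multiscale function coincides with the P1 hat function at the coarse nodes, illustrating that the continuity properties of the underlying $\Pone$ basis are preserved.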
On the oversampling patches $S_K$, the DOF-extended multiscale basis functions satisfy the variational formulation given in the following lemma.
Let $K$ be any mesh element and let $1 \leq i \leq N$. Consider an MsFEM with DOF-extended basis functions. Define an extension of $\phiEps{i}$ from $K$ to $S_K$ by
\begin{equation}
\widehat{\phiEps{i}}
=
\widehat{\left.\phiPone{i}\right\vert_K} + \phiPone{i} \left(x_{c,K} \right) \, \VSK{0}{1} + \sum_{\alpha=1}^d \partial_\alpha \left(\left. \phiPone{i} \right\vert_K \right) \VSK{\alpha}{1}
\quad
\text{in } S_K,
\label{eq:MsFEM-VxyOS-dofe}
\end{equation}
where $\widehat{ \left. \phiPone{i} \right\vert_K }$ denotes the affine extension of $\left. \phiPone{i} \right\vert_K$ to $S_K$, and $\VSK{\alpha}{1}$ is as in Def. <ref>.
Then $\widehat{\phiEps{i}}$ is the unique solution in $V_K$ to
\begin{equation}
\left\{
\begin{aligned}
s_K^\varepsilon \left(\widehat{\phiEps{i}}, w \right) &= 0
&& \text{for all } w \in V_{K,0}, \\
\Gamma \left( S_K, \widehat{\phiEps{i}} \right) &= \Gamma \left( S_K , \widehat{\left.\phiPone{i}\right\vert_K} \right).
\end{aligned}
\right.
\label{eq:msbasis-OS-vf-dofe}
\end{equation}
In the case of the MsFEM-lin for the diffusion problem (<ref>), problem (<ref>) with $S_K=K$ coincides with the definition of the multiscale basis functions in (<ref>); see Example <ref>.
Problem (<ref>) has a unique solution in view of Def. <ref>. It thus suffices to show that $\widehat{\phiEps{i}}$ satisfies (<ref>).
Since the numerical correctors $\VSK{\alpha}{1}$ belong to $V_{K,0}$ for all $0 \leq \alpha \leq d$, it is clear from (<ref>) that $\Gamma\left(S_K,\widehat{\phiEps{i}}\right)=\Gamma \left( S_K , \widehat{\left.\phiPone{i}\right\vert_K} \right)$.
Inserting (<ref>) into (<ref>) and applying (<ref>) to all correctors $\VSK{\alpha}{1}$, we find, for any test function $w\in V_{K,0}$,
\begin{align*}
s_K^\varepsilon \left(\widehat{\phiEps{i}}, w \right)
&=
s_K^\varepsilon \left( \widehat{\left.\phiPone{i}\right\vert_K}, w \right)
+ \phiPone{i} \left( x_{c,K} \right) s_K^\varepsilon \left( \VSK{0}{1}, w \right)
+ \sum_{\alpha=1}^d
\left.\left( \partial_\alpha \phiPone{i} \right) \right\vert_K s_K^\varepsilon \left( \VSK{\alpha}{1}, w\right)\\
&=
s_K^\varepsilon \left( \widehat{\left.\phiPone{i}\right\vert_K}, w \right)
- \phiPone{i} \left( x_{c,K} \right) s_K^\varepsilon \left( 1, w \right)
- \sum_{\alpha=1}^d
\left.\left( \partial_\alpha \phiPone{i} \right) \right\vert_K s_K^\varepsilon \left( x^\alpha - x^\alpha_{c,K}, w\right)\\
&=
s_K^\varepsilon \left( \widehat{\left.\phiPone{i}\right\vert_K}, w \right)
-
s_K^\varepsilon \left(
\phiPone{i} \left( x_{c,K} \right)
+ \sum_{\alpha=1}^d \left.\left( \partial_\alpha \phiPone{i} \right) \right\vert_K \left( x^\alpha - x^\alpha_{c,K} \right), \,
w
\right).
\end{align*}
Here we use that $s_K^\varepsilon$ is a bilinear form on $V_K$, that the affine functions on $S_K$ are contained in $V_K$ according to Def. <ref>, and the property that $\nabla \phiPone{i}$ is constant on $K$. Finally, we use (<ref>) for $\varphi=\widehat{\phiPone{i}}$ to conclude that
\begin{equation*}
s_K^\varepsilon \left(\widehat{\phiEps{i}}, w \right)
=
s_K^\varepsilon \left( \widehat{\left.\phiPone{i}\right\vert_K}, w \right)
-
s_K^\varepsilon \left( \widehat{\left.\phiPone{i}\right\vert_K}, w \right)
= 0,
\end{equation*}
which establishes the desired variational formulation satisfied by $\widehat{\phiEps{i}}$.
If the DOF-continuous problems (<ref>) are well-posed, we obtain by the same analysis the following result for DOF-continuous multiscale basis functions.
Let $K$ be any mesh element and let $1 \leq i \leq N$. Assume that any DOF-continuous local problem (<ref>) is well-posed. Consider an MsFEM with DOF-continuous oversampling. Define an extension of $\phiEps{i}$ from $K$ to $S_K$ by
\begin{equation*}
\widehat{\phiEps{i}}
=
\widehat{\left.\phiPone{i}\right\vert_K} + \phiPone{i} \left(x_{c,K} \right) \, \VSK{0}{2} + \sum_{\alpha=1}^d \partial_\alpha \left(\left. \phiPone{i} \right\vert_K \right) \VSK{\alpha}{2}
\quad
\text{in } S_K,
\end{equation*}
where $\VSK{\alpha}{2}$ is as in Def. <ref>.
Then $\widehat{\phiEps{i}}$ is the unique solution in $V_K$ to
\begin{equation}
\left\{
\begin{aligned}
s_K^\varepsilon \left(\widehat{\phiEps{i}}, w \right) &= 0
&& \text{for all } w \in V_{K,0}, \\
\Gamma \left( K,\widehat{\phiEps{i}} \right) &= \Gamma \left( K , \phiPone{i} \right).
\end{aligned}
\right.
\label{eq:msbasis-OS-vf-dofc}
\end{equation}
[MsFEM-lin for diffusion problems]
In the setting of Example <ref>, any DOF-extended multiscale basis function $\phiEps{i}$ for the MsFEM-lin constructed in (<ref>) is obtained, in each mesh element $K$, as the restriction of a function $\widehat{\phiEps{i}}$, which is the unique solution in $H^1(S_K)$ to
\begin{equation*}
-{\operatorname{div}(A^\varepsilon \nabla \widehat{\phiEps{i}})} = 0 \
\text{ in } S_K,
\quad
\widehat{\phiEps{i}} = \widehat{\phiPone{i}} \
\text{ on } \partial S_K.
\end{equation*}
For a DOF-continuous basis function, $\widehat{\phiEps{i}}$ solves the same PDE in $S_K$, is affine on $\partial S_K$, and satisfies $\widehat{\phiEps{i}}(x_j) = \phiPone{i}(x_j)$ at all vertices $x_j$ of $K$.
[MsFEM-CR for diffusion problems]
In the continuation of Example <ref>, the $\mathsf{e}$-extended multiscale basis function $\phiEps{i}$ for the MsFEM-CR is the restriction to $K$ of $\widehat{\phiEps{i}}$, the unique solution in $H^1(S_K)$ to
\begin{equation*}
\left\{
\begin{IEEEeqnarraybox}[][c]{uts?s}
\IEEEstrut
$-{\operatorname{div}(A^\varepsilon \nabla \widehat{\phiEps{i}})}$
&$= 0$
&in $S_K$,
\\
$\vec{n} \cdot A^\varepsilon \nabla \widehat{\phiEps{i}}$
&$= 0$
&on each $h \in \mathcal{F}_a(S_K)$, \\
$\vec{n} \cdot A^\varepsilon \nabla \widehat{\phiEps{i}}$
&$= c_h$
&on each $h \in \mathcal{F}_d(S_K)$, \\
$\displaystyle \frac{1}{|h|} \int_h \widehat{\phiEps{i}}$
&$\displaystyle = \frac{1}{|h|} \int_h \widehat{\phiPone{i}}$
&for each $h \in \mathcal{F}_d(S_K)$,
\IEEEstrut
\IEEEeqnarraynumspace %preserve the same alignment as obtained in an equation environment
\end{IEEEeqnarraybox}
\right.
\end{equation*}
where the constants $c_h$ are uniquely determined by the problem. For $\mathsf{c}$-continuous basis functions, the last condition is applied to the faces $h \in \mathcal{F}(K)$ (and all other conditions remain unchanged).
Our general framework allows two characterizations of the multiscale basis functions, namely (<ref>) and (<ref>) or (<ref>), as was the case for the MsFEM studied in Sec. <ref> (where $\phiEps{i}$ is given by (<ref>) or (<ref>)). The essential advantage of (<ref>) is that the microscale is fully encoded in the numerical correctors $\VKOS{\alpha}{0}$, which can be computed element by element without any global information. In particular, the global index $i$ of the multiscale basis function $\phiEps{i}$ is irrelevant for the computation of the numerical correctors. The expression in (<ref>) is therefore the crucial relationship that we will employ to develop non-intrusive MsFEMs within the general framework in Sec. <ref>.
The second formulation of the multiscale basis functions, as solutions to the local problems (<ref>) or (<ref>), provides a more direct interpretation of the multiscale basis functions in terms of the sampling form chosen. It also gives a relation between the degrees of freedom of the $\Pone$ basis functions and the associated multiscale basis function. This is useful in particular for the well-posedness of the MsFEM, that we study in Lemma <ref>.
Our definition of the multiscale basis functions in (<ref>) is reminiscent of the Variational Multiscale Method, a framework developed in <cit.> to adapt Galerkin approximations on low-dimensional spaces to the presence of multiscale features. In this context, our formulation of the MsFEM also exhibits a link with residual-free bubbles, see e.g. <cit.>.
The first introduction of the MsFEM in <cit.> corresponds to the idea of oversampling with $\mathsf{c}$-continuous basis functions. Although their existence cannot be established in general, they are computed numerically by taking linear combinations of $\mathsf{e}$-extended basis functions (following an analogous strategy to the one we discussed in Sec. <ref>). The MsFEM with $\mathsf{e}$-extended basis functions is studied in the works <cit.> dealing with the convergence analysis of the MsFEM-lin with oversampling.
Let us also note that the combination of Crouzeix-Raviart MsFEM and oversampling has, to the best of our knowledge, not yet been proposed in the literature. This method, for which the basis functions are given explicitly in Example <ref>, is a natural by-product of the identification of the abstract MsFEM framework.
§.§ The global problem
We can now define the multiscale trial and test spaces, respectively $V_H^\varepsilon$ and $V_{H,0}^\varepsilon$, as follows:
\begin{equation*}
V_H^\varepsilon = \operatorname{span} \left\{
\phiEps{i} \ \mid \ 1 \leq i \leq N
\right\},
\qquad
V_{H,0}^\varepsilon = \operatorname{span} \left\{
\phiEps{i} \ \mid \ 1 \leq i \leq N_0
\right\}.
\end{equation*}
Note that we only use $V_{H,0}^\varepsilon$ in the present section, because (<ref>) is posed with homogeneous Dirichlet conditions, but that the larger space $V_H^\varepsilon$ is used for more general boundary conditions (see Sec. <ref>). Applying (<ref>), we have the equivalent characterization in terms of the $\Pone$ space $V_H$,
\begin{equation*}
V_H^\varepsilon = \left\{
\left.
v_H +
\sum_{K\in\mathcal{T}_H} \left(
v_H(x_{c,K}) \, \VKOS{0}{0}
+ \sum_{\alpha=1}^d
\partial_\alpha \left( \left. v_H \right\vert_K \right)
\VKOS{\alpha}{0}
\right)
\,\right\vert\,
v_H \in V_H
\right\}.
\end{equation*}
Let $V_H$ be an underlying $\Pone$ space defined in Def. <ref> with the associated operator $\Gamma$ from Def. <ref> and Def. <ref>-<ref>. Define for each mesh element $K \in \mathcal{T}_H$ an oversampling patch (Def. <ref>), a sampling space and sampling form in accordance with Def. <ref>. Let the multiscale basis functions $\phiEps{i}$ be given as in Def. <ref>. Then a Multiscale Finite Element Method (MsFEM) for problem (<ref>) is: find $u^\varepsilon_H \in V_{H,0}^\varepsilon$ such that
\begin{equation}
\forall \, v_H^\varepsilon \in V_{H,0}^\varepsilon,
\qquad
\sum_{K\in\mathcal{T}_H} a_K^{\varepsilon} \left( u^\varepsilon_H,\, v_H^\varepsilon \right) = F \left( v_H^\varepsilon \right).
\label{eq:gen-MsFEM-OS}
\end{equation}
In the following lemma, we investigate the well-posedness of the MsFEM.
Consider an MsFEM without oversampling, or an MsFEM with oversampling using $\mathsf{c}$-continuous basis functions (assuming the associated basis functions are well-defined). When $a^\varepsilon$ satisfies (<ref>), the MsFEM (<ref>) has a unique solution.
Note that, with $\mathsf{c}$-continuous oversampling, but also without oversampling, the multiscale basis functions satisfy (<ref>). In particular, all degrees of freedom of $u_H^\varepsilon$ related to the boundary vanish. Also note that, the dimension of $V_{H,0}^\varepsilon$ being finite, it suffices to show that $u^\varepsilon_H=0$ is the unique solution to problem (<ref>) with $F=0$.
If $0 = F(u^\varepsilon_H) = a^\varepsilon(u_H^\varepsilon, u_H^\varepsilon)$, it follows from (<ref>) that $u^\varepsilon_H$ is piecewise constant.
Let us write $\displaystyle u_H^\varepsilon = \sum_{i=1}^{N_{0}} \alpha_i \, \phiEps{i}$ for some coefficients $\alpha_i\in\bbR$ and introduce the function $ \displaystyle u_H = \sum_{i=1}^{N_{0}} \alpha_i \, \phiPone{i} \in V_H$. Because of (<ref>), we have $\Gamma(K,u_H^\varepsilon) = \Gamma(K,u_H)$ for all mesh elements $K$. Since $u_H^\varepsilon$ is piecewise constant and $\Gamma(K,\cdot)$ is a bijection from $\Pone(K)$ to $\bbR^{d+1}$ (recall Def. <ref>), it follows that $u_H^\varepsilon=u_H$. In particular, the multiscale function $u_H^\varepsilon$ in fact belongs to the underlying $\Pone$ space $V_H$.
We remarked immediately below Def. <ref> that, for either of the two spaces $V_H=V_H^{L}$ or $V_H^{CR}$, the above implies that $u^\varepsilon_H$ is constant throughout $\Omega$. Since the degrees of freedom of $u^\varepsilon_H$ associated to the boundary vanish, we readily deduce that $u^\varepsilon_H=0$.
We do not know of the existence of a result on the well-posedness of MsFEMs with oversampling using $\mathsf{e}$-extended multiscale basis functions. In <cit.>, the authors establish an inf-sup result for a variant of the MsFEM-lin-OS with $\Pone$ test functions (see also Def. <ref>). This result is obtained for a periodic diffusion coefficient in the limit of sufficiently small $\varepsilon$.
§ NON-INTRUSIVE MSFEM FOR THE GENERAL FRAMEWORK
We show in this section how to develop a non-intrusive approach for the general MsFEM framework of Sec. <ref>. We have seen in Lemma <ref> that, for a particular MsFEM variant, the non-intrusive Galerkin MsFEM approach coincides with a Petrov-Galerkin MsFEM; this does not hold for all MsFEMs in the general framework. We therefore first develop a non-intrusive approach for a Petrov-Galerkin MsFEM in the general framework, and this non-intrusive approach turns out to be equivalent to the Petrov-Galerkin MsFEM itself. In a second step, we introduce a non-intrusive approximation of the Galerkin MsFEM. Before doing so, let us summarize the main steps of Sec. <ref> and <ref> to obtain a non-intrusive MsFEM approach:
* the expansion (<ref>) allows us to recast the matrix $\mathds{A}^\varepsilon$ of the linear system for the MsFEM as the matrix $\mathds{A}^\Pone$ associated to the $\Pone$ discretization of an effective problem;
* we approximate the right-hand side $\mathds{F}^\varepsilon$ of the MsFEM problem by the right-hand side $\mathds{F}^\Pone$ of this $\Pone$ discretization;
* the post-processing step (<ref>) applied to the $\Pone$ approximation of the effective problem yields the MsFEM approximation.
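For orientation, these three steps can be sketched in the simplest setting of a one-dimensional diffusion problem, where the corrector problems admit a closed-form solution and the effective coefficient reduces to the harmonic mean of $A^\varepsilon$ over each element. The following toy illustration (under these simplifying assumptions; the function names are ours, not part of any legacy code) solves $-(A^\varepsilon u')' = f$ on $(0,1)$ with homogeneous Dirichlet conditions:

```python
import numpy as np

def non_intrusive_msfem_1d(A_eps, f, N, n_micro=200):
    """Toy non-intrusive MsFEM for -(A_eps u')' = f on (0,1), u(0)=u(1)=0."""
    x = np.linspace(0.0, 1.0, N + 1)   # coarse mesh with N elements
    h = x[1] - x[0]
    # Step 1 (offline): effective coefficient per element. In 1d the
    # corrector problem has a closed-form solution and the effective
    # coefficient is the harmonic mean of A_eps over the element.
    A_bar = np.empty(N)
    for k in range(N):
        xm = x[k] + (np.arange(n_micro) + 0.5) * h / n_micro  # micro midpoints
        A_bar[k] = 1.0 / np.mean(1.0 / A_eps(xm))
    # Step 2 (online): standard P1 solve of the effective problem,
    # exactly what a legacy single-scale code would perform.
    diag = (A_bar[:-1] + A_bar[1:]) / h
    off = -A_bar[1:-1] / h
    K = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    F = h * f(x[1:-1])                 # lumped load vector
    u_H = np.concatenate(([0.0], np.linalg.solve(K, F), [0.0]))
    # Step 3 (post-processing) would add the corrector contributions to
    # recover the fine-scale approximation; omitted in this sketch.
    return x, u_H, A_bar
```

For a constant coefficient, the scheme reduces to the standard $\Pone$ method, which is nodally exact for this model problem.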
§.§ The Petrov-Galerkin MsFEM
We recall that the abstract continuous problem for which we developed the MsFEM in Sec. <ref> is given by (<ref>) and that it can be rewritten in terms of the bilinear forms $a_K^\varepsilon$ satisfying (<ref>). Petrov-Galerkin variants of the multiscale finite element method with $\Pone$ test functions were previously studied in <cit.>. In our general MsFEM framework, the adaptation of Def. <ref> to a Petrov-Galerkin MsFEM is the following.
Let $V_H$ be an underlying $\Pone$ space defined in Def. <ref> with the associated operator $\Gamma$ from Def. <ref> and Def. <ref>-<ref>. Define for each mesh element $K \in \mathcal{T}_H$ an oversampling patch (Def. <ref>), a sampling space and sampling form in accordance with Def. <ref>. Let the multiscale basis functions $\phiEps{i}$ be given as in Def. <ref>. Then a Petrov-Galerkin Multiscale Finite Element Method (PG-MsFEM) for problem (<ref>) is: find $u^\varepsilon_H \in V_{H,0}^\varepsilon$ such that
\begin{equation}
\forall \, v_H \in V_{H,0},
\qquad
\sum_{K\in\mathcal{T}_H} a_K^{\varepsilon} (u^\varepsilon_H,\, v_H) = F(v_H).
\label{eq:gen-MsFEM-OS-testP1}
\end{equation}
When confusion may arise, we shall refer to the MsFEM defined in Def. <ref> as the Galerkin MsFEM (G-MsFEM). To study well-posedness of the PG-MsFEM, it is most convenient to relate this method to the G-MsFEM. Therefore, we postpone well-posedness of (<ref>) to Lemma <ref>.
We now execute step <ref> of the summary of the non-intrusive MsFEM approach at the beginning of this section. The matrix $\mathds{A}^\varepsilon$ of the linear system associated to (<ref>) is defined by
\begin{equation}
\mathds{A}^\varepsilon_{j,i}
= \sum_{K\in\mathcal{T}_H} a^\varepsilon_K \left(\phiEps{i}, \, \phiPone{j} \right),
\quad
1 \leq i,j \leq N_{0}.
\label{eq:system-msfem-gen-testP1}
\end{equation}
To find an effective $\Pone$ formulation with the same linear system, we will use the definition (<ref>) of the multiscale basis functions in the general framework, but first we combine it with (<ref>) applied to $\varphi=\phiPone{i}$ to rewrite (<ref>) as
\begin{align}
\left. \phiEps{i} \right\vert_K
&= \phiPone{i}(x_{c,K}) + \phiPone{i} \left( x_{c,K} \right) \VKOS{0}{0} + \sum_{\alpha=1}^d \left. \partial_\alpha \phiPone{i} \right\vert_K \left( x^\alpha - x^\alpha_{c,K} + \VKOS{\alpha}{0} \right)
\nonumber \\
&= \phiPone{i} \left( x_{c,K} \right) \, \VVK{0} + \sum_{\alpha=1}^d \left. \partial_\alpha \phiPone{i} \right\vert_K \VVK{\alpha},
\label{eq:MsFEM-VVxy}
\end{align}
where we have set
\[
\VVK{0} \coloneqq 1+\VKOS{0}{0},
\quad
\VVK{\alpha} \coloneqq x^\alpha-x^\alpha_{c,K} + \VKOS{\alpha}{0},
\]
for all $1 \leq \alpha \leq d$ and each $K \in \mathcal{T}_H$. We recall that $\bullet \in \{\mathsf{e},\mathsf{c}\}$ indicates the choice of -extended or -continuous basis functions.
Inserting (<ref>) into (<ref>) for $\phiEps{i}$ and (<ref>) for $\varphi=\phiPone{j}$ yields
\begin{align*}
\mathds{A}^\varepsilon_{j,i}
= \sum_{K\in\mathcal{T}_H} \left(
\phiPone{i}\left(x_{c,K}\right) \, a_K^\varepsilon\left(\VVK{0},1\right) \, \phiPone{j}\left(x_{c,K}\right)
+ \sum_{\alpha=1}^d \left.\left(\partial_\alpha\phiPone{i}\right)\right\vert_K \, a_K^\varepsilon \left(\VVK{\alpha},1\right) \, \phiPone{j}\left(x_{c,K}\right)
\right.
\nonumber\\
+ \sum_{\beta=1}^d \phiPone{i}\left(x_{c,K}\right) \, a_K^\varepsilon\left(\VVK{0},x^\beta-x_{c,K}^\beta\right) \, \left.\left(\partial_\beta\phiPone{j}\right)\right\vert_K
\nonumber\\
\left. + \sum_{\alpha,\beta=1}^d
\left.\left(\partial_\alpha\phiPone{i}\right)\right\vert_K \, a^\varepsilon_K\left(\VVK{\alpha},x^\beta-x_{c,K}^\beta\right) \, \left.\left(\partial_\beta\phiPone{j}\right)\right\vert_K
\right),
\end{align*}
This expression can be rewritten as
\begin{equation}
\mathds{A}^\varepsilon_{j,i}
= \sum_{K\in\mathcal{T}_H} \left(
|K| \left( \overline{M} \, \phiPone{i} \, \phiPone{j} \right) \left(x_{c,K}\right)
+ \int_K \phiPone{j} \, \overline{B}^1 \cdot \nabla \phiPone{i}
+ \phiPone{i} \, \overline{B}^2 \cdot \nabla \phiPone{j}
+ \nabla \phiPone{j} \cdot \overline{A} \, \nabla \phiPone{i}
\right),
\label{eq:gen-MsFEM-matrix-eff}
\end{equation}
where we have defined the effective mass $\overline{M}$, (adjoint) advection vector $\overline{B}^1$ and $\overline{B}^2$, and the effective diffusion tensor $\overline{A}$, for all $1 \leq \alpha, \beta \leq d$ and for each $K\in\mathcal{T}_H$, as
\begin{equation}
\begin{IEEEeqnarraybox}[][c]{us?us}
\IEEEstrut
$\left. \overline{M} \right\vert_K $
&$\displaystyle = \frac{1}{|K|} \, a_K^\varepsilon\left(\VVK{0},1\right)$,
&$\left. \overline{B}^1_\alpha \right\vert_K$
&$\displaystyle = \frac{1}{|K|} \, a_K^\varepsilon\left(\VVK{\alpha},1\right)$,
\\
$\left. \overline{B}^2_\beta \right\vert_K$
&$\displaystyle = \frac{1}{|K|} \, a_K^\varepsilon\left(\VVK{0},x^\beta-x_{c,K}^\beta\right)$,
&$\left. \overline{A}_{\beta,\alpha} \right\vert_K $
&$\displaystyle = \frac{1}{|K|} \, a^\varepsilon_K \left(\VVK{\alpha}, x^\beta-x_{c,K}^\beta\right)$.
\IEEEstrut
\IEEEeqnarraynumspace %preserve the same alignment as obtained in an equation environment
\end{IEEEeqnarraybox}
\label{eq:gen-MsFEM-eff-coef}
\end{equation}
Note that all integrals in (<ref>) can be computed exactly by evaluating the integrand at the centroid. With this quadrature rule, we observe that the term $|K| \left( \overline{M} \phiPone{i} \, \phiPone{j} \right)(x_{c,K})$ also equals the numerical approximation of the integral $\displaystyle \int_K \overline{M} \phiPone{i} \phiPone{j}$.
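This exactness is elementary to check numerically. The snippet below (our own illustration, not taken from the text) compares the one-point centroid rule with the vertex-average rule on a random triangle; both rules are exact for affine integrands, so they must agree to machine precision:

```python
import numpy as np

# The one-point rule |K| * g(x_{c,K}) is exact for affine integrands g,
# which is why the effective coefficients can be integrated exactly at
# the centroid. We compare it with the vertex-average rule, which is
# also exact for affine functions, on a random triangle.
rng = np.random.default_rng(0)
V = rng.random((3, 2))                               # triangle vertices
area = 0.5 * abs(np.linalg.det(V[1:] - V[0]))
a, b, c = 1.3, -0.7, 2.1
g = lambda p: a + b * p[..., 0] + c * p[..., 1]      # affine integrand
centroid_rule = area * g(V.mean(axis=0))
vertex_rule = area * g(V).mean()
assert abs(centroid_rule - vertex_rule) < 1e-12
```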
The new expression (<ref>) for the matrix of the linear system motivates us to introduce the effective bilinear forms $\overline{a}_K$ defined on $H^1(K) \times H^1(K)$ by
\begin{equation}
\label{eq:gen-MsFEM-eff-form}
\overline{a}_K(u,v)
= \int_K \nabla v \cdot \overline{A} \, \nabla u
+ v \left(\overline{B}^1 \cdot \nabla u\right)
+ u \left(\overline{B}^2 \cdot \nabla v\right)
+ \overline{M} \, u \, v,
\quad
\text{for all } u,\,v \in H^1(K),
\end{equation}
and the associated $\Pone$ Galerkin approximation on the space $V_{H,0}$:
\begin{equation}
\text{Find } u_H\in V_{H,0} \text{ such that}
\sum_{K\in\mathcal{T}_H}
\overline{a}_K(u_H,v_H)
= F(v_H)
\quad \text{for all } v_H \in V_{H,0}.
\label{eq:gen-FEM-effective}
\end{equation}
This discrete problem leads to a linear system with the matrix
\begin{equation*}
\mathds{A}^\Pone_{j,i}
= \overline{a} \left( \phiPone{i},\, \phiPone{j} \right)
= \sum_{K\in\mathcal{T}_H} \overline{a}_K(\phiPone{i}, \, \phiPone{j}),
\quad
1 \leq i,j \leq N_0.
\end{equation*}
The identity (<ref>) thus implies the following result, which generalizes Lemma <ref> to the PG-MsFEM in the general framework.
The matrices $\mathds{A}^\varepsilon$ and $\mathds{A}^\Pone$ are identical if the integrals in (<ref>) are evaluated at the centroid of each $K$ for the computation of $\mathds{A}^\Pone$. Then the PG-MsFEM (<ref>) coincides with the resolution of the effective problem (<ref>) combined with the post-processing step
\begin{equation}
\label{eq:gen-MsFEM-post}
\left. u_H^\varepsilon \right\vert_K
= u_H(x_{c,K}) \, \VVK{0} + \sum_{\alpha=1}^d \left. \partial_\alpha u_H \right\vert_K \VVK{\alpha}.
\end{equation}
Note that step <ref> of the summary at the beginning of this section is irrelevant for the PG-MsFEM. The computation of the right-hand side in (<ref>) is clearly part of any standard FEM software.
The computational approach described by Lemma <ref> naturally fits within the non-intrusive workflow of Algorithm <ref>. The numerical correctors on line <ref> are, of course, replaced by those of Def. <ref> or Def. <ref>. Line <ref> is replaced by the computation of all effective quantities in (<ref>). The online phase in line <ref> amounts to solving the $\Pone$ problem (<ref>), where all integrations to construct the matrix of the linear system are to be performed by evaluation at the centroid. (This is not the case for the construction of the right-hand side, however.) Finally, in the post-processing phase we construct $u^\varepsilon_H$ from $u_H$ by virtue of (<ref>).
Next we generalize the above expansions to design a non-intrusive approximation of the G-MsFEM.
§.§ The non-intrusive Galerkin MsFEM
For the G-MsFEM (introduced in Def. <ref>), we need to replace the $\Pone$ test space of the PG-MsFEM by the multiscale test space $V_{H,0}^\varepsilon$.
The matrix of the linear system associated to (<ref>) is given by
\begin{equation*}
\mathds{A}_{j,i}^{\varepsilon,\mathsf{G}}
= \sum_{K\in\mathcal{T}_H} a_K^\varepsilon \left( \phiEps{i},\phiEps{j} \right),
\quad
1 \leq i,j \leq N_{0}.
\end{equation*}
Upon inserting (<ref>) for the test function $\phiEps{j}$, we find, for all $1 \leq i,j \leq N_0$,
\begin{equation*}
\mathds{A}_{j,i}^{\varepsilon,\mathsf{G}}
= \mathds{A}_{j,i}^\varepsilon
+ \sum_{K\in\mathcal{T}_H} \left(
\phiPone{j}\left(x_{c,K}\right) \, a^\varepsilon_K\left(\phiEps{i}, \VKOS{0}{0}\right)
+ \sum_{\beta=1}^d \left.\left(\partial_\beta \phiPone{j}\right) \right\vert_K \, a^\varepsilon_K\left(\phiEps{i},\VKOS{\beta}{0}\right)
\right),
\end{equation*}
where $\mathds{A}^\varepsilon$ is the matrix of the Petrov-Galerkin MsFEM, see (<ref>) and (<ref>).
An effective formulation can again be derived by inserting (<ref>) for the $\phiEps{i}$. We obtain
\begin{equation}
\mathds{A}_{j,i}^{\varepsilon,\mathsf{G}}
= \sum_{K\in\mathcal{T}_H} \left(
|K| \left( \overline{M}^\mathsf{G} \, \phiPone{i} \, \phiPone{j} \right) \left(x_{c,K}\right)
+ \int_K \phiPone{j} \, \overline{B}^{1,\mathsf{G}} \cdot \nabla \phiPone{i}
+ \phiPone{i} \, \overline{B}^{2,\mathsf{G}} \cdot \nabla \phiPone{j}
+ \nabla \phiPone{j} \cdot \overline{A}^\mathsf{G} \, \nabla \phiPone{i}
\right),
\label{eq:gen-MsFEM-matrix-eff-Gal}
\end{equation}
where the effective mass, (adjoint) advection vectors and diffusion tensor are given by (using those defined in (<ref>))
\begin{equation}
\begin{IEEEeqnarraybox}[][c]{us?us}
\IEEEstrut
$\left. \overline{M}^\mathsf{G} \right\vert_K $
&$\displaystyle = \left. \overline{M} \right\vert_K + \frac{1}{|K|} \, a_K^\varepsilon\left(\VVK{0},\VKOS{0}{0}\right)$,
&$\left. \overline{B}^{1,\mathsf{G}}_\alpha \right\vert_K$
&$\displaystyle = \left. \overline{B}^{1}_\alpha \right\vert_K + \frac{1}{|K|} \, a_K^\varepsilon\left(\VVK{\alpha},\VKOS{0}{0}\right)$,
\\
$\left. \overline{B}^{2,\mathsf{G}}_\beta \right\vert_K$
&$\displaystyle = \left. \overline{B}^{2}_\beta \right\vert_K + \frac{1}{|K|} \, a_K^\varepsilon\left(\VVK{0},\VKOS{\beta}{0}\right)$,
&$\left. \overline{A}^\mathsf{G}_{\beta,\alpha} \right\vert_K$
&$\displaystyle = \left. \overline{A}_{\beta,\alpha} \right\vert_K + \frac{1}{|K|} \, a^\varepsilon_K \left(\VVK{\alpha}, \VKOS{\beta}{0}\right)$.
\IEEEstrut
\IEEEeqnarraynumspace %preserve the same alignment as obtained in an equation environment
\end{IEEEeqnarraybox}
\label{eq:gen-MsFEM-eff-coef-Gal}
\end{equation}
The above computations lead to the introduction of the effective bilinear form $\overline{a}^{\mathsf{G}} = \sum\limits_{K\in\mathcal{T}_H} \overline{a}^{\mathsf{G}}_K$ with
\begin{equation}
\label{eq:gen-MsFEM-eff-form-Gal}
\overline{a}_K^\mathsf{G}(u,v)
= \int_K \nabla v \cdot \overline{A}^\mathsf{G} \, \nabla u
+ v \left(\overline{B}^{1,\mathsf{G}} \cdot \nabla u\right)
+ u \left(\overline{B}^{2,\mathsf{G}} \cdot \nabla v\right)
+ \overline{M}^\mathsf{G} \, u \, v.
\end{equation}
We formulate the following effective variational problem:
\begin{equation}
\text{Find } u_H\in V_{H,0} \text{ such that}
\sum_{K\in\mathcal{T}_H}
\overline{a}^{\mathsf{G}}_K(u_H,v_H)
= F(v_H)
\quad \text{for all } v_H \in V_{H,0}.
\label{eq:gen-FEM-effective-gal}
\end{equation}
The associated linear system has coefficients $\mathds{A}_{j,i}^{\Pone,\mathsf{G}} = \overline{a}^{\mathsf{G}} \left( \phiPone{i},\phiPone{j} \right)$. We have the following analogue of Lemma <ref>, which generalizes Lemma <ref> to the G-MsFEM in the general framework.
The matrices $\mathds{A}^{\varepsilon,\mathsf{G}}$ and $\mathds{A}^{\Pone,\mathsf{G}}$ are identical if the integrals in (<ref>) are evaluated at the centroid of each $K$ in the computation of $\mathds{A}^{\Pone,\mathsf{G}}$.
Contrary to the matrices, the right-hand sides of the effective problem (<ref>) and the Galerkin MsFEM (<ref>) are not equal in general. We apply step <ref> formulated at the beginning of this section: the right-hand side of the G-MsFEM is approximated by the right-hand side of the effective problem to obtain an approximate, but non-intrusive, MsFEM. The non-intrusive G-MsFEM becomes:
\begin{equation}
\text{Find } u^\varepsilon_H \in V_{H,0}^\varepsilon \text{ such that }
\sum_{K \in \mathcal{T}_H}
a_K^{\varepsilon}\left(u_H^\varepsilon, \phiEps{j} \right)
= F\left(\phiPone{j}\right) \quad \text{for all } 1 \leq j \leq N_{0}.
\label{eq:gen-MsFEM-noni}
\end{equation}
This problem is no longer a Galerkin approximation of (<ref>), because different test spaces are used for the bilinear and for the linear form. In view of Lemma <ref>, the non-intrusive MsFEM can equivalently be formulated as
\begin{equation*}
\text{compute } u_H \in V_{H,0} \text{ solution to~\eqref{eq:gen-FEM-effective-gal} and compute } u^\varepsilon_H \text{ from } u_H \text{ by~\eqref{eq:gen-MsFEM-post}},
\end{equation*}
provided all integrals in (<ref>) are evaluated at the centroid for the construction of the matrix of the linear system in (<ref>).
The latter formulation immediately suggests how to implement the non-intrusive MsFEM in practice, in a way similar to Algorithm <ref>. For completeness, we provide the algorithm for the non-intrusive G-MsFEM in Algorithm <ref>.
Non-intrusive G-MsFEM for the general framework
[1]
Let $\mathcal{T}_H$ be the mesh used by the legacy code, let $\bullet \in \{\mathsf{e},\mathsf{c}\}$ be the chosen oversampling variant
$K \in \mathcal{T}_H$
$0 \leq \alpha \leq d$
Solve for the applicable $\VKOS{\alpha}{0}$ from Def. <ref> or <ref>
Compute the effective tensors defined in (<ref>)
Use the legacy code to construct the matrix $\mathds{A}^\Pone$ by evaluating (<ref>) at the centroid of each mesh element and to solve for $u_H$ defined by (<ref>)
Save $\left\{u_H(x_{c,K})\right\}_{K\in\mathcal{T}_H}$ and $\left\{ (\partial_\alpha u_H) \vert_K \right\}_{K\in\mathcal{T}_H, \, 1 \leq \alpha \leq d}$
Obtain the MsFEM approximation $u^\varepsilon_H$ from (<ref>)
The discussion surrounding Algorithm <ref> regarding the advantages for the implementation of this non-intrusive MsFEM approach also applies here.
Let us now comment on the well-posedness of the MsFEMs for the general framework introduced above. We recall that the hypotheses of the general framework without oversampling, or with $\mathsf{c}$-continuous oversampling, provide well-posedness of the G-MsFEM (<ref>) by Lemma <ref>. In this case, the non-intrusive approximation (<ref>) is also well-posed, for the matrices associated to both MsFEM variants are the same.
Regarding the PG-MsFEM (<ref>), we can only establish well-posedness if the associated matrix coincides with the matrix of the corresponding Galerkin MsFEM. This is stated in the following lemma, which generalizes Lemma <ref> to the general framework.
Consider a G-MsFEM as defined by Def. <ref> without oversampling and suppose that the sampling form $s^\varepsilon_K$ equals the local bilinear form $a_K^\varepsilon$. Then the matrix associated to this G-MsFEM coincides with the matrix associated to the corresponding PG-MsFEM of Def. <ref>. Consequently, the non-intrusive Galerkin MsFEM (<ref>) coincides with the Petrov-Galerkin MsFEM of Def. <ref> and, in particular, the Petrov-Galerkin MsFEM is well-posed.
To prove the lemma, we show that the matrices corresponding to the linear problems defined in (<ref>) and (<ref>) are equal. Using $s_K^\varepsilon = a_K^{\varepsilon}$, we have for all $1 \leq i, \, j \leq N_{0}$,
\begin{equation}
\mathds{A}^{\varepsilon,\mathsf{G}}_{j,i} - \mathds{A}^\varepsilon_{j,i}
= \sum_{K\in\mathcal{T}_H} a_K^{\varepsilon}\left(\phiEps{i},\phiEps{j} - \phiPone{j}\right)
= \sum_{K\in\mathcal{T}_H} s_K^\varepsilon \left(\phiEps{i}, \phiEps{j}-\phiPone{j} \right)
= 0.
\label{eq:IBP-bubbles}
\end{equation}
Indeed, the multiscale basis functions satisfy $\Gamma(K,\phiEps{i}) = \Gamma(K,\phiPone{i})$ for all $K$, so that $\phiEps{j} - \phiPone{j} \in V_{K,0}$ (see (<ref>) with $S_K=K$ and recall Def. <ref> for the sampling test space $V_{K,0}$), and the variational problem in (<ref>) (with $S_K=K$) shows that the above quantity vanishes.
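The coincidence of the Galerkin and Petrov-Galerkin matrices can also be observed numerically. The snippet below is a one-dimensional check under our own discretization choices (no oversampling, $s_K^\varepsilon = a_K^\varepsilon$): on each element, the multiscale hat function has derivative proportional to $1/A^\varepsilon$, and the entries $a(\phi^\varepsilon, \phi^\varepsilon)$ and $a(\phi^\varepsilon, \phi)$ for the interior hat function on a two-element mesh agree:

```python
import numpy as np

# 1d check, no oversampling, s_K = a_K: the Galerkin entry
# a(phi^eps, phi^eps) equals the Petrov-Galerkin entry a(phi^eps, phi)
# for the interior P1 hat function on the two-element mesh of (0, 1).
A = lambda t: 2.0 + np.sin(2.0 * np.pi * t / 0.05)   # oscillatory coefficient
n = 4000                                             # micro points per element
entry_G = entry_PG = 0.0
for x0, x1, slope in [(0.0, 0.5, 2.0), (0.5, 1.0, -2.0)]:
    h = x1 - x0
    xm = x0 + (np.arange(n) + 0.5) * h / n           # midpoint quadrature
    w = h / n
    c = slope * h / np.sum(w / A(xm))                # flux of phi^eps on K
    d_ms = c / A(xm)                                 # derivative of phi^eps
    entry_G += np.sum(w * A(xm) * d_ms * d_ms)       # test function: phi^eps
    entry_PG += np.sum(w * A(xm) * d_ms * slope)     # test function: P1 hat
assert abs(entry_G - entry_PG) < 1e-10 * abs(entry_G)
```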
§.§ Further extensions of the non-intrusive MsFEM
We sketch some other FEM settings to which we have applied the above strategy to develop non-intrusive MsFEM approaches. For more details, we refer to <cit.>.
Stabilized finite element formulations.
Stabilized finite element formulations add mesh-dependent terms to a discrete variational formulation (such as (<ref>)) to remove numerical instabilities, for example caused by sharp boundary layers of the exact solution. See <cit.> for such a variant of the MsFEM and see <cit.> for the stabilization of single-scale problems. The expansion (<ref>) can also be inserted in these additional terms to find a non-intrusive implementation of the associated MsFEM.
Petrov-Galerkin formulations.
Other test spaces than the $\Pone$ space $V_{H,0}$ can be considered in Petrov-Galerkin formulations. An example would be to use multiscale test functions that locally solve the adjoint problem rather than the direct problem that is used for the multiscale functions in (<ref>). See e.g. <cit.>. An expansion of the kind (<ref>) can still be used to find a non-intrusive formulation, with a suitably adapted definition of the numerical correctors.
Non-homogeneous Dirichlet conditions.
Suppose that a legacy FEM code can provide a solution to an effective problem such as (<ref>) posed on the space $V_{H,0}$ and complemented with non-homogeneous Dirichlet conditions on $\partial\Omega$. This solution can directly be used to construct a multiscale approximation $u^\varepsilon_H \in V_H^\varepsilon$ from (<ref>). The translation of the Dirichlet condition to the MsFEM approximation is as follows: if $\mathsf{c}$-continuous oversampling is applied, the function $u^\varepsilon_H$ satisfies $[\Gamma(K,u_H^\varepsilon)]_j = [\Gamma(K,u_H)]_j$ for all degrees of freedom associated to the boundary. Here, $[\Gamma(K,u_H)]_j$ is determined by the legacy code. When $\mathsf{e}$-extended oversampling is used, the degrees of freedom associated to the boundary are equal to the sum of $[\Gamma(K,u_H)]_j$ and a perturbation due to the fact that the degrees of freedom of the numerical correctors do not vanish.
Neumann conditions.
To apply Neumann conditions on $\partial \Omega$, one solves a Galerkin approximation of the variational formulation in the space $V_H^\varepsilon$. The suitable adaptation of (<ref>) can be approximated by a non-intrusive Galerkin MsFEM following the same methodology as above. The effective $\Pone$ approximation that is obtained corresponds to the resolution of an effective PDE with Neumann conditions, for which a legacy code can be used. In the case of the diffusion problem (<ref>), the Neumann boundary condition in the effective problem is imposed on the effective flux $\vec{n} \cdot \overline{A} \nabla u_H$, where $\overline{A}$ is defined in (<ref>).
Parabolic equations.
When a parabolic equation is discretized in time, problems of the form (<ref>) are typically obtained for each time step, but with a right-hand side that depends on the solution of the previous time step. This term belongs to the space $V_H^\varepsilon$, so it varies on the microscale and cannot be integrated numerically by the legacy code that operates on the coarse mesh. The non-intrusive strategy of the foregoing sections therefore cannot be applied directly to find a non-intrusive MsFEM. In the vein of our non-intrusive approach, one could introduce an additional approximation upon replacing the multiscale solution of the previous time step by its underlying $\Pone$ function in (<ref>). The effect of this approximation is beyond the scope of the present work.
§.§ Intrusiveness of other multiscale methods
Some work on the formulation of effective $\Pone$ problems in multiscale methods, and the related question of non-intrusive approaches, can be found in the literature. We discuss here the case of the HMM and the LOD method.
First, the HMM is less intrusive than the original MsFEM, because its main objective is to approximate $u^\varepsilon$ on the coarse scale. The HMM directly proposes to solve a $\Pone$ problem on the coarse scale, where effective coefficients of the $\Pone$ problem are defined in terms of the solutions to local problems. This workflow corresponds to our non-intrusive MsFEM approach, and when the local problems of the HMM coincide with the computation of the numerical correctors introduced in this work, the HMM and the MsFEM for the pure diffusion problem are identical. For more general problems, there is an important difference between the two methods. In the MsFEM, the form of the effective equation and the definition of the effective coefficients follows directly from the choice of basis functions, and thus from the choice of local problems. For the HMM, the local problems and the effective equation are formulated independently, and the link between the two is only justified heuristically, drawing inspiration from homogenization theory.
The LOD method aims at approximating $u^\varepsilon$ at the coarse and the microscale by the use of multiscale basis functions, like the MsFEM. It is shown in <cit.> that a Petrov-Galerkin LOD method (also see <cit.>) can, with some additional approximations, be recast as the $\Pone$ discretization of an appropriate coarse-scale problem. This opens the way to non-intrusive implementations in the spirit of the present article. The LOD method and the MsFEM notably differ in the fact that the LOD basis functions are defined on a patch around the vertices of the mesh that should generally be taken larger than the support of the associated $\Pone$ functions. In contrast, the MsFEM uses fully localized basis functions (even though they may have been computed using oversampling patches), each of which has the same support as the corresponding $\Pone$ basis functions.
§ COMPARISON OF THE CLASSICAL AND NON-INTRUSIVE MSFEM FOR DIFFUSION PROBLEMS
We study in this section a particular setting within the general MsFEM framework, namely that of MsFEMs for diffusion problems. We set in this section $a^\varepsilon_K = a^{\varepsilon,\dif}_K$ defined in Example <ref>, and we choose the sampling form $s_K^\varepsilon = a^{\varepsilon,\dif}_K$.
§.§ The general framework for diffusion problems
For the convenience of the reader, we first give an explicit description of the simplifications of the general framework in the diffusion setting. In Def. <ref> and <ref> for the numerical correctors, Equation (<ref>) reduces to
\begin{equation}
a_K^{\varepsilon,\dif} \left( \VSK{\alpha}{0}, w \right)
= -a_K^{\varepsilon,\dif} \left( x^\alpha, w \right),
\label{eq:diffusion-gen-correctors}
\end{equation}
for all $w\in V_{K,0}$ (where $V_{K,0}$ is the sampling test space for either the MsFEM-lin or the MsFEM-CR; see Examples <ref> and <ref>)
when $1 \leq \alpha \leq d$, whereas $\VSK{0}{0} = 0$. (The notation $\VK{\alpha}$ will be used in the absence of oversampling, see Rem. <ref>.) This means that $\VVK{0}=1$ in (<ref>). Consequently, regarding the formulation of the effective $\Pone$ problem, only the effective diffusion coefficient does not vanish in (<ref>) and (<ref>). Its definition in (<ref>) is identical to the formula in (<ref>) for the applicable choice of the numerical correctors.
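As a concrete check of this simplification, the corrector problem can be solved numerically on a single one-dimensional element, where the effective diffusion coefficient is known to reduce to the harmonic mean of $A^\varepsilon$ over the element. The snippet below (our own micro-discretization, with an arbitrary oscillatory coefficient) solves the local corrector problem with a $\Pone$ micro mesh and verifies this:

```python
import numpy as np

# Solve the corrector problem a_K(x + V, w) = 0 for all w vanishing on
# the element boundary, on one 1d element K = (0, h), with a P1 micro
# mesh and A_eps piecewise constant at the micro midpoints. The
# effective coefficient then equals the harmonic mean of A_eps over K.
h, n = 0.1, 400
dt = h / n
t_mid = dt * (np.arange(n) + 0.5)
A_mid = 2.0 + np.sin(2.0 * np.pi * t_mid / 0.013)    # sample coefficient
# Micro stiffness matrix (interior nodes) and right-hand side -a_K(x, w).
K_mat = (np.diag((A_mid[:-1] + A_mid[1:]) / dt)
         - np.diag(A_mid[1:-1] / dt, 1)
         - np.diag(A_mid[1:-1] / dt, -1))
rhs = A_mid[1:] - A_mid[:-1]
V = np.concatenate(([0.0], np.linalg.solve(K_mat, rhs), [0.0]))
dV = np.diff(V) / dt
A_bar = np.sum(dt * A_mid * (1.0 + dV)) / h          # effective coefficient
harmonic = h / np.sum(dt / A_mid)
assert abs(A_bar - harmonic) < 1e-8
```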
The definition of the multiscale basis functions by (<ref>) reduces to (<ref>) (again upon replacing the numerical correctors $\VK{\alpha}$ by the relevant ones for the MsFEM under consideration). Hence, we can associate a multiscale counterpart in $V_H^\varepsilon$ to any $v_H \in V_H$, given by
\begin{equation}
v_H^\varepsilon = v_H
+ \sum_{K\in\mathcal{T}_H} \sum_{\alpha=1}^d
(\partial_\alpha v_H)\vert_K \, \VKOS{\alpha}{0}.
\label{eq:diffusion-MsFEM-Vxy}
\end{equation}
The non-intrusive MsFEM (<ref>) becomes
\begin{equation}
\text{Find } u^\varepsilon_H \in V_{H,0}^\varepsilon \text{ such that }
\sum_{K \in \mathcal{T}_H}
a_K^{\varepsilon,\dif}\left(u_H^\varepsilon, \phiEps{j} \right)
= F\left(\phiPone{j}\right) \quad \text{for all } 1 \leq j \leq N_{0}.
\label{eq:diffusion-gen-MsFEM-noni}
\end{equation}
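To fix ideas, the non-intrusive workflow can be sketched in one dimension. The following is a minimal illustration (not taken from the paper's FreeFEM++ scripts): the coefficient $a^\varepsilon(x)=2+\cos(2\pi x/\varepsilon)$ and the right-hand side $f\equiv1$ are assumed for the example. Per coarse element, a fine $\Pone$ solve yields the numerical corrector and the effective coefficient; the coarse problem is then a plain $\Pone$ solve that only sees a piecewise-constant coefficient, which is precisely the non-intrusive step.

```python
import numpy as np

# Hypothetical 1D illustration of the non-intrusive MsFEM-lin (no oversampling):
# solve -(a_eps u')' = f on (0,1), u(0) = u(1) = 0, with an assumed coefficient.
eps = 0.01
a = lambda x: 2.0 + np.cos(2.0 * np.pi * x / eps)   # oscillatory coefficient
f = lambda x: np.ones_like(x)                        # assumed right-hand side

NH, nf = 8, 400                      # coarse elements / fine cells per element
Xc = np.linspace(0.0, 1.0, NH + 1)   # coarse mesh nodes

def effective_coefficient(xl, xr):
    """Solve a_K(x + V, w) = 0 for V in H^1_0(K) on a fine P1 mesh,
    then return Abar|_K = a_K(x + V, x) / |K|."""
    x = np.linspace(xl, xr, nf + 1)
    h = x[1] - x[0]
    am = a(0.5 * (x[:-1] + x[1:]))               # coefficient per fine cell
    main = (am[:-1] + am[1:]) / h                # tridiagonal P1 stiffness
    off = -am[1:-1] / h
    Kf = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    rhs = am[1:] - am[:-1]                       # load -a_K(x, w); the slope of x is 1
    V = np.concatenate(([0.0], np.linalg.solve(Kf, rhs), [0.0]))
    dV = np.diff(V) / h
    return np.sum(am * (1.0 + dV)) * h / (xr - xl)

Abar = np.array([effective_coefficient(Xc[k], Xc[k + 1]) for k in range(NH)])

# Non-intrusive step: a standard coarse P1 solve that only sees Abar.
H = np.diff(Xc)
main = Abar[:-1] / H[:-1] + Abar[1:] / H[1:]
off = -Abar[1:-1] / H[1:-1]
Kc = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
uH = np.linalg.solve(Kc, 0.5 * (H[:-1] + H[1:]) * f(Xc[1:-1]))

# In 1D, Abar|_K is the harmonic mean of a_eps over K; cross-check it.
def harmonic_mean(xl, xr, n=20_000):
    x = np.linspace(xl, xr, n + 1)
    return (xr - xl) / np.sum((x[1] - x[0]) / a(0.5 * (x[:-1] + x[1:])))

hm = np.array([harmonic_mean(Xc[k], Xc[k + 1]) for k in range(NH)])
print(np.max(np.abs(Abar - hm)))   # small
```

In this 1D sketch the computed effective coefficient coincides (up to quadrature error) with the element-wise harmonic mean, and the coarse nodal values approach those of the homogenized solution $u^\star(x)=x(1-x)/(2\sqrt{3})$.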
Lemma <ref> now amounts to the following.
Let `MsFEM' refer to the MsFEM-lin or the MsFEM-CR without oversampling. The non-intrusive Galerkin MsFEM (<ref>) coincides with the following Petrov-Galerkin MsFEM:
\begin{equation}
\text{Find } u^\varepsilon_H \in V_{H,0}^\varepsilon \text{ such that }
\sum_{K \in \mathcal{T}_H}
a_K^{\varepsilon,\dif} \left(u_H^\varepsilon, \phiPone{j} \right)
= F \left( \phiPone{j} \right) \quad \text{for all } 1 \leq j \leq N_{0}.
\label{eq:diffusion-gen-MsFEM-testP1}
\end{equation}
We will specify for all results in this section to which specific MsFEMs they apply among the MsFEM-lin and the MsFEM-CR, with or without oversampling. Lemmas <ref>, <ref> and <ref> are generalizations of results of <cit.>, where the MsFEM-lin without oversampling is treated.
§.§ Convergence results
We estimate here the difference between the solutions to the (intrusive) Galerkin approximation (<ref>) and the non-intrusive MsFEM (<ref>), which coincides with the Petrov-Galerkin MsFEM (<ref>). We first show coercivity of the effective diffusion tensor $\overline{A}$.
Consider the MsFEM-lin or the MsFEM-CR without oversampling, or the MsFEM-CR with -continuous oversampling. The effective tensor $\overline{A}$ defined by (<ref>) with the appropriate numerical correctors satisfies
\begin{equation*}
\forall \, \xi \in \bbR^d,
\quad
m |\xi|^2 \leq \xi \cdot \overline{A} \, \xi.
\end{equation*}
Here, $m$ is the same coercivity constant as in (<ref>).
Let $\xi = (\xi_1,\dots,\,\xi_d) \in \bbR^d$, and let $K$ be any simplex of the mesh $\mathcal{T}_H$. We have
\begin{equation*}
|K| \, \xi \cdot \left. \overline{A} \right\vert_K \xi
= \sum_{\alpha,\beta=1}^d a^{\varepsilon,\dif}_K \left(
\xi_\alpha \left(x^\alpha + \VK{\alpha}\right),\,
\xi_\beta \left(x^\beta + \VK{\beta} \right)
\right)
= \int_K (\xi + \nabla \chi^\xi) \cdot A^\varepsilon (\xi + \nabla \chi^\xi),
\end{equation*}
denoting by $\chi^\xi$ the function $\displaystyle \chi^\xi = \sum_{\alpha=1}^d \xi_\alpha \VK{\alpha}$. Using (<ref>), we obtain
\begin{equation*}
|K| \, \xi \cdot \left. \overline{A} \right\vert_K \xi
\geq
m \int_K \left\lvert \xi + \nabla \chi^\xi \right\rvert^2
\geq
m\,|K| \, |\xi|^2 +
2 m\,\int_K \xi \cdot \nabla \chi^\xi.
\end{equation*}
Using an integration by parts, we see that $\displaystyle \int_K \xi \cdot \nabla \chi^\xi = \int_{\partial K} \chi^\xi \, n\cdot\xi$, where $n$ is the unit outward normal vector on $\partial K$.
In the case of the MsFEM-lin, the function $\chi^\xi$ vanishes on $\partial K$. In the case of the MsFEM-CR with -continuous oversampling, or without oversampling, the function $\chi^\xi$ has average zero on each face of $K$. Since the factor $n\cdot\xi$ is constant on each face, the integral again vanishes. In conclusion, we have $\displaystyle \int_K \xi \cdot \nabla \chi^\xi = 0$.
We thus obtain the inequality
$\displaystyle \xi \cdot \left. \overline{A} \right\vert_K \xi
\geq m |\xi|^2$.
Since $K \in \mathcal{T}_H$ is arbitrary here, this shows coercivity of $\overline{A}$ and completes the proof.
Coercivity of the effective tensor $\overline{A}$ implies coercivity of the bilinear form $\overline{a}^\dif$ on $H^1_0(\Omega)$. By an application of the Lax-Milgram Theorem, we conclude that the (continuous) effective problem (<ref>) is well-posed for the MsFEM-lin and the MsFEM-CR without oversampling, and for the MsFEM-CR with -continuous oversampling.
The proof of the above lemma does not extend to the MsFEM-lin with oversampling, because there is no global information about $\VK{\alpha}$ on the faces of $K$.
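The coercivity bound can be made concrete in a reduced setting. The snippet below is an assumed 1D illustration: in 1D the effective MsFEM coefficient on an element is the harmonic mean of $a^\varepsilon$, so it automatically inherits the bounds $m$ and $M$; random samples stand in for the values of the oscillatory coefficient.

```python
import numpy as np

# Assumed 1D reduction: the effective coefficient is a harmonic mean, hence
# pinched between the coercivity bound m and the continuity bound M, as in
# the lemma above (the samples are a stand-in for values of a_eps on K).
rng = np.random.default_rng(0)
a_vals = rng.uniform(1.0, 5.0, size=10_000)
m, M = a_vals.min(), a_vals.max()
A_bar = a_vals.size / np.sum(1.0 / a_vals)     # harmonic mean
print(m <= A_bar <= M, A_bar <= a_vals.mean())  # True True (harmonic <= arithmetic)
```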
The following lemma provides a variational characterization of the bijection (<ref>).
Consider the MsFEM-lin or the MsFEM-CR without oversampling. Let $v_H^\varepsilon\in V_H^\varepsilon$. The unique $v_H \in V_H$ for which (<ref>) holds, is the unique solution in $V_H$ to the problem
\begin{equation}
\forall \, w_H \in V_H,
\quad
\overline{a}^\dif(v_H,w_H) = a^{\varepsilon,\dif}\left(v_H^\varepsilon, w_H\right).
\label{eq:diffusion-MsFEM-Vxy-var-char}
\end{equation}
In addition, we have, with the constants $m$ and $M$ from (<ref>), the estimate
\begin{equation*}
\lVert \nabla v_H \rVert_{L^2(\mathcal{T}_H)}
\leq
\frac{M}{m} \lVert \nabla v_H^\varepsilon \rVert_{L^2(\mathcal{T}_H)}.
\end{equation*}
Let $v_H \in V_H$ be the unique element of $V_H$ such that $v_H^\varepsilon$ and $v_H$ satisfy (<ref>). Take any $w_H \in V_H$. Using that $\nabla v_H$ and $\nabla w_H$ are piecewise constant, we compute
\begin{equation*}
a^{\varepsilon,\dif}\left(v_H^\varepsilon, w_H\right)
= \sum_{K\in\mathcal{T}_H} \sum_{\alpha,\beta=1}^d
(\partial_\beta w_H)\vert_K \,
a^{\varepsilon,\dif}_K \left( x^\alpha + \VK{\alpha}, x^\beta \right)
(\partial_\alpha v_H)\vert_K.
\end{equation*}
For the MsFEM without oversampling, the numerical correctors belong to the sampling test space $V_{K,0}$. We can thus use (<ref>) to obtain
\begin{equation*}
\forall \, 1 \leq \alpha, \, \beta \leq d,
\quad
a^{\varepsilon,\dif}_K \left( x^\alpha + \VK{\alpha}, x^\beta \right)
= a^{\varepsilon,\dif}_K \left( x^\alpha + \VK{\alpha}, x^\beta + \VK{\beta} \right).
\end{equation*}
Using the definition of $\overline{A}$ in (<ref>) and that of $\overline{a}^{\dif}$ in (<ref>) (we recall that these expressions hold true here upon replacing the numerical correctors by those under consideration), we conclude that
\begin{equation*}
a^{\varepsilon,\dif}\left(v_H^\varepsilon, w_H\right)
= \sum_{K\in\mathcal{T}_H} \sum_{\alpha,\beta=1}^d \int_K
\partial_\beta w_H \,
\overline{A}_{\beta,\alpha} \,
\partial_\alpha v_H
= \overline{a}^{\dif}(w_H,v_H).
\end{equation*}
It follows that $v_H$ satisfies (<ref>). In addition, in view of the coercivity of $\overline{A}$ established in Lemma <ref> and by the Lax-Milgram Theorem, problem (<ref>) uniquely characterizes $v_H$.
The estimate on $v_H$ follows by using the element-wise identity established above with $w_H=v_H$: for any $K\in\mathcal{T}_H$,
\begin{align*}
m \left\lVert \nabla v_H \right\rVert_{L^2(K)}^2
&\leq
\int_K \nabla v_H \cdot \overline{A} \, \nabla v_H
=
a^{\varepsilon,\dif}_K \left( v_H^\varepsilon,v_H \right)
=
\int_K \nabla v_H \cdot A^\varepsilon \, \nabla v_H^\varepsilon
\\
&\leq
M \, \lVert \nabla v_H \rVert_{L^2(K)} \, \lVert \nabla v_H^\varepsilon \rVert_{L^2(K)}.
\end{align*}
The first inequality follows from coercivity of $\overline{A}$ and the second inequality from the upper bound on $A^\varepsilon$ in (<ref>) and the Cauchy-Schwarz inequality. We thus find that $\displaystyle \lVert \nabla v_H \rVert_{L^2(K)} \leq \frac{M}{m} \lVert \nabla v_H^\varepsilon \rVert_{L^2(K)}$.
The desired result is obtained upon squaring the inequality and summing over all $K\in\mathcal{T}_H$.
For the remainder of this section, we will consider MsFEMs without oversampling. Let $u^{\varepsilon,\mathsf{G}}_H$ denote the solution to the MsFEM approximation (<ref>) (we use the superscript $\mathsf{G}$ to stress that this is a Galerkin approximation) and let $u^{\varepsilon,\mathsf{PG}}_H$ denote the solution to the non-intrusive MsFEM (<ref>) (which is equivalent to the Petrov-Galerkin MsFEM (<ref>), since we do not apply the oversampling technique).
We first study the error $u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H$ when $\varepsilon\to0$. In this case, we do not need a rate of convergence in $H$ and we shall relax the condition $f \in L^2(\Omega)$ to the condition $f \in H^{-1}(\Omega)$. Then the linear form $F$ in (<ref>) has to be redefined. Given $f \in H^{-1}(\Omega)$, there exist $f_0,f_1,\dots,f_d \in L^2(\Omega)$ such that
$f = f_0 - \sum_{\beta=1}^d \partial_\beta f_\beta$ in the sense of distributions, and we set
\begin{equation*}
F(v) = \sum_{K\in\mathcal{T}_H} \left( \int_K f_0 \, v +
\sum_{\beta=1}^d \int_K f_\beta \, \partial_\beta v
\right),
\end{equation*}
which is in fact well-defined for any $v \in H^1(\mathcal{T}_H)$ and thus in particular on $V_H$, the underlying affine space for the MsFEM, and the multiscale space $V_H^\varepsilon$.
We consider in Lemma <ref> a sequence of diffusion tensors $A^\varepsilon$ that $H$-converges to a constant diffusion tensor. This means that $u^\varepsilon$ converges weakly in $H^1(\Omega)$ as $\varepsilon\to0$ towards a function $u^\star \in H^1_0(\Omega)$, solution to the homogenized problem (<ref>), and $A^\varepsilon\nabla u^\star \rightharpoonup A^\star \nabla u^\star$ weakly in $L^2(\Omega)$.
Consider the MsFEM-lin or the MsFEM-CR without oversampling. Suppose that $(A^\varepsilon)_{\varepsilon > 0}$ is a sequence of matrices satisfying (<ref>) that $H$-converges to a constant matrix. Let $f \in H^{-1}(\Omega)$. Then $\left\lVert u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H \right\rVert_{H^1(\mathcal{T}_H)} \to 0$ as $\varepsilon\to0$.
A rate of convergence can be obtained under some additional structural assumptions on $A^\varepsilon$; see Lemma <ref>.
We need a few auxiliary results to establish Lemma <ref>. The first result below concerns the convergence of the numerical correctors as $\varepsilon\to0$.
Suppose that $A^\varepsilon$ $H$-converges to a constant homogenized tensor $A^\star$. Consider the MsFEM-lin or the MsFEM-CR without oversampling. Then, for all $K\in\mathcal{T}_H$ and all $1\leq\alpha\leq d$, we have $\VK{\alpha} \rightharpoonup 0$ weakly in $H^1(K)$ as $\varepsilon\to0$.
We introduce for each $\alpha=1,\dots,d$ the function $\tau^{\varepsilon,\alpha} = x^\alpha + \VK{\alpha}$. Then (<ref>) implies the PDE
$-\operatorname{div}(A^\varepsilon \nabla \tau^{\varepsilon,\alpha}) = 0$ in $K$.
The homogenized limit of this problem is
$-\operatorname{div}(A^\star \nabla \tau^{\star,\alpha}) = 0$ in $K$.
For the MsFEM-lin, the boundary conditions of the local problems impose $\tau^{\star,\alpha}=x^\alpha$ on $\partial K$. The boundary conditions associated to the MsFEM-CR are a constant flux $\vec{n}\cdot A^\star \nabla \tau^{\star,\alpha}$ on each face of $K$ and $\displaystyle \int_h \tau^{\star,\alpha} = \int_h x^\alpha$ for all faces $h$ of $K$. In both cases, the homogenized equation has a unique solution, which is easily seen to be $\tau^{\star,\alpha} = x^\alpha$, because $A^\star$ is constant. Therefore, $\tau^{\varepsilon,\alpha} \rightharpoonup x^\alpha$ weakly in $H^1(K)$. Subtracting the function $x^\alpha$, we deduce the desired convergence.
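The weak (but not strong) $H^1$ convergence of the correctors can be observed numerically in one dimension, where the corrector on $K=(0,1)$ satisfies $a(x/\varepsilon)\,(1+V') = c_\varepsilon$ with $V(0)=V(1)=0$; the coefficient $a(y)=2+\cos(2\pi y)$ below is an assumed example:

```python
import numpy as np

# As eps -> 0, ||V||_{L2} vanishes while ||V'||_{L2} does not:
# V -> 0 weakly, but not strongly, in H^1(K).
a = lambda y: 2.0 + np.cos(2.0 * np.pi * y)

def corrector_norms(eps, n=200_000):
    x = np.linspace(0.0, 1.0, n + 1)
    mid = 0.5 * (x[:-1] + x[1:])
    dx = 1.0 / n
    c = 1.0 / np.sum(dx / a(mid / eps))       # discrete harmonic mean c_eps
    dV = c / a(mid / eps) - 1.0               # V' on each fine cell
    V = np.concatenate(([0.0], np.cumsum(dV) * dx))
    return np.sqrt(np.sum(V[:-1] ** 2) * dx), np.sqrt(np.sum(dV ** 2) * dx)

for eps in (0.1, 0.01, 0.001):
    l2, h1 = corrector_norms(eps)
    print(eps, l2, h1)   # l2 shrinks like eps, h1 stays of order one
```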
We will also use the following result, which is a straightforward generalization of the extended Poincaré inequality in <cit.>.
Let $W$ be the subspace of $H^1(\mathcal{T}_H)$ defined by
\begin{equation*}
W = \left\{
v \in H^1(\mathcal{T}_H) \, \left\vert \,
\int_h \llbracket v \rrbracket = 0 \right. \text{ for each face } h \text{ of } \mathcal{T}_H, \,
\int_h v = 0 \text{ for each face } h \subset \partial \Omega
\right\}.
\end{equation*}
There exists a constant $C>0$ depending only on $\Omega$ but not on $H$ such that
\begin{equation*}
\forall \, v \in W,
\qquad
\lVert v \rVert_{L^2(\Omega)} \leq C \, \lVert \nabla v \rVert_{L^2(\mathcal{T}_H)}.
\end{equation*}
Note that the multiscale space $V_{H,0}^\varepsilon$ is contained in $W$ for both the MsFEM-lin and the MsFEM-CR without oversampling. Finally, we provide a number of useful bounds for the difference between $u^{\varepsilon,\mathsf{G}}_H$ and $u^{\varepsilon,\mathsf{PG}}_H$.
Let $f \in H^{-1}(\Omega)$ and consider the MsFEM-lin or the MsFEM-CR without oversampling. Let $e^\varepsilon_H = u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H$. There exist a unique $e_H^\Pone\in V_H$ and a linear combination of the numerical correctors, that we denote by $e_H^{\mathsf{osc}}$, such that $e^\varepsilon_H = e^\Pone_H + e^\mathsf{osc}_H$, and it holds, with the constants $m,M$ from (<ref>) and the constant $C$ from Lemma <ref>,
\begin{align}
a^{\varepsilon,\dif} \left(e^\varepsilon_H, \, e^\varepsilon_H \right)
&= F\left(e^\mathsf{osc}_H\right),
\label{eq:estim-Gal-PG-Cea}\\
\left\lVert \nabla e^\mathsf{osc}_H \right\rVert_{L^2(K)}
&\leq
\frac{M}{m} \left\lVert \nabla e^\Pone_H \right\rVert_{L^2(K)}
\quad
\text{for all } K \in \mathcal{T}_H,
\label{eq:estim-gradient-error3}\\
\left\lVert \nabla e^\mathsf{\Pone}_H \right\rVert_{L^2(\mathcal{T}_H)}
&\leq
\frac{M}{m} \left\lVert \nabla e^\varepsilon_H \right\rVert_{L^2(\mathcal{T}_H)},
\label{eq:estim-gradient-error1}\\
\left\lVert \nabla e^\varepsilon_H \right\rVert_{L^2(\mathcal{T}_H)}
&\leq
\sqrt{1+C^2} \frac{M^2}{m^3} \lVert F \rVert_{\mathcal{L}(H^1(\mathcal{T}_H))},
\label{eq:estim-gradient-error2}
\end{align}
where $\lVert \cdot \rVert_{\mathcal{L}(H^1(\mathcal{T}_H))}$ is the operator norm on $\mathcal{L}(H^1(\mathcal{T}_H))$.
Since the numerical approximations $u^{\varepsilon,\mathsf{G}}_H$ and $u^{\varepsilon,\mathsf{PG}}_H$ both belong to the multiscale approximation space $V_H^\varepsilon$, it follows that $e^\varepsilon_H \in V_H^\varepsilon$, and we are in a position to use (<ref>): there exists a unique $e^\Pone_H \in V_H$ such that
\begin{equation}
e^\varepsilon_H = e^\Pone_H + e^\mathsf{osc}_H, \quad
e^\mathsf{osc}_H = \sum_{K\in\mathcal{T}_H} \sum_{\alpha=1}^d \left.\left(\partial_\alpha e^\Pone_H\right)\right\vert_K \, \VK{\alpha}.
\label{eq:estim-Gal-PG-error}
\end{equation}
Applying Lemma <ref> to $v_H^\varepsilon = e^\varepsilon_H$, we immediately obtain (<ref>).
Now recall that the numerical correctors are defined by (<ref>). Using the fact that $\nabla e_H^\Pone$ is piecewise constant, this implies that $e^{\mathsf{osc}}_H$ satisfies the following variational problem in each $K\in\mathcal{T}_H$:
\begin{equation*}
\forall \, w \in V_{K,0}, \quad
a^{\varepsilon,\dif}_K \left( e^\mathsf{osc}_H, \, w \right)
= -a^{\varepsilon,\dif}_K \left(
e^\Pone_H, \, w
\right).
\end{equation*}
Without oversampling, it holds $\VK{\alpha} \in V_{K,0}$ for each $1\leq\alpha\leq d$, so $e^\mathsf{osc}_H$ can be used as a test function here. With the bounds in (<ref>), implying continuity and coercivity of $a_K^{\varepsilon,\dif}$, we obtain (<ref>).
Next using (<ref>), we can write
\begin{equation*}
a^{\varepsilon,\dif}\left( e^\varepsilon_H, \, e^\varepsilon_H \right)
= a^{\varepsilon,\dif}\left( u^{\varepsilon,\mathsf{G}}_H, \, e^\varepsilon_H \right) -
a^{\varepsilon,\dif}\left( u^{\varepsilon,\mathsf{PG}}_H, \, e^\Pone_H \right) -
a^{\varepsilon,\dif}\left( u^{\varepsilon,\mathsf{PG}}_H, \, e^\mathsf{osc}_H \right).
\end{equation*}
We deduce from (<ref>) that $a^{\varepsilon,\dif} \left(u^{\varepsilon,\mathsf{PG}}_H, \, e^\mathsf{osc}_H \right) = 0$.
Since $e^\varepsilon_H$ can be used as a test function in the discrete problem (<ref>) and $e^\Pone_H$ in (<ref>), we have
\begin{equation*}
a^{\varepsilon,\dif}\left( u^{\varepsilon,\mathsf{G}}_H, \, e^\varepsilon_H \right) -
a^{\varepsilon,\dif}\left( u^{\varepsilon,\mathsf{PG}}_H, \, e^\Pone_H \right)
= F\left( e^\varepsilon_H \right) - F\left( e^\Pone_H \right)
= F\left( e_H^\mathsf{osc} \right),
\end{equation*}
which shows (<ref>).
\begin{equation*}
a^{\varepsilon,\dif} \left( e^\varepsilon_H, \, e^\varepsilon_H \right)
\leq
\lVert F \rVert_{\mathcal{L}(H^1(\mathcal{T}_H))} \, \left\lVert e^\mathsf{osc}_H \right\rVert_{H^1(\mathcal{T}_H)}
\leq
\lVert F \rVert_{\mathcal{L}(H^1(\mathcal{T}_H))} \, \sqrt{1+C^2} \left\lVert \nabla e^\mathsf{osc}_H \right\rVert_{L^2(\mathcal{T}_H)},
\end{equation*}
where $C$ is the Poincaré constant from Lemma <ref>.
Now applying (<ref>) and (<ref>) on the right, and using coercivity of $a^{\varepsilon,\dif}$ on the left, we find
\begin{equation*}
m \left\lVert \nabla e^\varepsilon_H \right\rVert_{L^2(\mathcal{T}_H)}^2
\leq
\sqrt{1+C^2} \left( \frac{M}{m} \right)^2 \,
\lVert F \rVert_{\mathcal{L}(H^1(\mathcal{T}_H))} \, \left\lVert \nabla e^\varepsilon_H \right\rVert_{L^2(\mathcal{T}_H)},
\end{equation*}
from which we deduce (<ref>).
Let $e^\varepsilon_H = u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H$. We will use (<ref>). By Lemma <ref>, we have (<ref>). Combined with (<ref>), it holds
\begin{equation*}
m \left\lVert u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H \right\rVert_{H^1(\mathcal{T}_H)}^2
\leq
a^{\varepsilon,\dif}\left(e^\varepsilon_H, \, e^\varepsilon_H\right)
= \sum_{K\in\mathcal{T}_H} \sum_{\alpha=1}^d \left.\left(\partial_\alpha e^\Pone_H\right)\right\vert_K \, F\left(\VK{\alpha}\right).
\end{equation*}
By Lemma <ref>, we know that $\VK{\alpha}\rightharpoonup0$ as $\varepsilon\to0$ weakly in $H^1(K)$ for each $K$ and for each $\alpha$. Therefore, $F\left(\VK{\alpha}\right) \to0$ as $\varepsilon\to0$. In view of (<ref>) and (<ref>), every derivative $\left.\left(\partial_\alpha e^\Pone_H\right)\right\vert_K$ is bounded independently of $\varepsilon$. It follows that $F\left(e^\mathsf{osc}_H\right)\to0$ as $\varepsilon\to0$. The conclusion now follows from the above inequality.
We next study the convergence of $u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H$ as $H\to0$. To this end, we return to the original hypotheses of Sec. <ref>, i.e., $f\in L^2(\Omega)$. Note that for the next result, the additional convergence hypothesis of Lemma <ref> for $A^\varepsilon$ is not needed.
Consider the MsFEM-lin or the MsFEM-CR without oversampling.
Assume that $f \in L^2(\Omega)$. Then there exists a constant $C$ independent of $\varepsilon$, $H$ and $f$ such that
\begin{equation*}
\left\lVert u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H \right\rVert_{H^1(\mathcal{T}_H)}
\leq
C H \lVert f \rVert_{L^2(\Omega)}.
\end{equation*}
To prove this lemma, we will use some Poincaré-Friedrichs inequalities, for which we refer e.g. to <cit.>, <cit.>.
Let $e^\varepsilon_H = u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H$ and recall the results of Lemma <ref>. We have $e^\varepsilon_H = e^\Pone_H + e^\mathsf{osc}_H$ (see (<ref>)), and (<ref>) provides, for $f\in L^2(\Omega)$, the equality
$a^{\varepsilon,\dif} \left(e^\varepsilon_H, \, e^\varepsilon_H \right)
= \left(f,e^\mathsf{osc}_H \right)_{L^2(\Omega)}$.
Hence, by the Cauchy-Schwarz inequality,
\begin{equation}
a^{\varepsilon,\dif}\left( e^\varepsilon_H, \, e^\varepsilon_H \right)
\leq
\lVert f \rVert_{L^2(\Omega)} \left\lVert e^\mathsf{osc}_H \right\rVert_{L^2(\Omega)}.
\label{eq:estim-Gal-PG-estimate0}
\end{equation}
For the MsFEM-lin, it holds that $\VK{\alpha}=0$ on $\partial K$ for all mesh elements $K$ and all $1 \leq \alpha \leq d$, and it follows that $e^{\mathsf{osc}}_H=0$ on the boundaries of all mesh elements. In the case of the MsFEM-CR, it holds that $\displaystyle \int_h \left\llbracket \VK{\alpha} \right\rrbracket =0$ for all faces $h$ of the mesh and all $\alpha$. Since $\partial_\alpha e_H^\Pone$ is constant on each mesh element $K$, we also have $\displaystyle \int_h \left\llbracket e_H^{\mathsf{osc}} \right\rrbracket =0$. In both cases, an appropriate variant of the Poincaré-Friedrichs inequality yields a constant $C$ independent of $K$ but dependent on the regularity of the mesh, such that
\begin{equation}
\lVert e^\mathsf{osc}_H \rVert_{L^2(K)} \leq
CH \left\lVert \nabla e^\mathsf{osc}_H \right\rVert_{L^2(K)}.
\label{eq:estim-Gal-PG-estimate1}
\end{equation}
Upon inserting the inequalities (<ref>), (<ref>) and (<ref>) into (<ref>), it follows that
\begin{equation*}
a^{\varepsilon,\dif} \left( e^\varepsilon_H,e^\varepsilon_H \right)
\leq
CH \left(\frac{M}{m}\right)^2 \left\lVert \nabla e^\varepsilon_H \right\rVert_{L^2(\mathcal{T}_H)} \,
\lVert f \rVert_{L^2(\Omega)}.
\end{equation*}
Using the lower bound in (<ref>) one more time, we find
\begin{equation*}
\left\lVert \nabla e^\varepsilon_H \right\rVert_{L^2(\mathcal{T}_H)}
\leq
CH \frac{M^2}{m^3} \lVert f \rVert_{L^2(\Omega)}.
\end{equation*}
The proof is concluded by application of Lemma <ref> to $e_H^\varepsilon$.
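The Poincaré-Friedrichs step used in the proof can be checked numerically. The snippet below uses an assumed 1D test function $v(x)=x(H-x)$ vanishing on the boundary of $K=(0,H)$ and verifies that the ratio $\lVert v\rVert_{L^2(K)}/\lVert v'\rVert_{L^2(K)}$ scales linearly with $H$:

```python
import numpy as np

# Sanity check of the scaling ||v||_{L2(K)} <= C H ||v'||_{L2(K)} for v
# vanishing on the element boundary (assumed 1D test function on K = (0, H)).
def norm_ratio(Hval, n=10_000):
    x = np.linspace(0.0, Hval, n + 1)
    v = x * (Hval - x)
    dv = np.gradient(v, x)
    dx = Hval / n
    return np.sqrt(np.sum(v ** 2) * dx) / np.sqrt(np.sum(dv ** 2) * dx)

r1, r2 = norm_ratio(0.1), norm_ratio(0.01)
print(r1 / r2)   # ~10: the ratio is proportional to H
```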
§.§ Convergence results in the periodic setting
We now study the MsFEM-lin applied to the periodic setting introduced in Sec. <ref> in some more detail. To the best of our knowledge, all convergence results known for the MsFEM are obtained in this periodic setting (see e.g. <cit.>). The analysis in these works relies on the explicit description of the microstructure that we summarized in Sec. <ref>. In particular, recall the existence of a homogenized diffusion coefficient given by (<ref>) and the first-order two-scale expansion (<ref>). We emphasize, however, that the application of the MsFEM does not require the periodic setting, nor does it even suppose the PDE under consideration to be embedded in a sequence of PDEs for a family of parameters $\varepsilon$ that tend to 0.
Applying the MsFEM to a sequence of matrices $A^\varepsilon = A^\mathsf{per}(\cdot/\varepsilon)$, we obtain a sequence of effective tensors $\overline{A}(\varepsilon)$. Each $\overline{A}(\varepsilon)$ is defined by (<ref>) for a fixed value of $\varepsilon$. We have the following convergence result.
Let $\overline{A}(\varepsilon)$ be the sequence of effective tensors obtained in (<ref>) by applying the MsFEM-lin without oversampling to $A^\varepsilon = A^{\mathsf{per}}(\cdot/\varepsilon)$. We have $\overline{A}(\varepsilon) \to A^\star$ as $\varepsilon\to0$.
We fix a mesh element $K \in \mathcal{T}_H$.
First observe that $\overline{A}(\varepsilon)$ and $A^\star$ satisfy
\begin{equation}
\left. \overline{A}_{\beta,\alpha}(\varepsilon) \right\vert_K
= \frac{1}{|K|} a^{\varepsilon,\dif}_K \left(x^\alpha + \VK{\alpha},\, x^\beta \right),
\qquad
A^\star_{\beta,\alpha}
= \int_Q e_\beta \cdot A^\mathsf{per}(e_\alpha + \nabla w_\alpha),
\label{eq:diffusion-eff-msfem-pg}
\end{equation}
for each $1 \leq \alpha,\beta \leq d$, in view of the variational formulations satisfied by $\VK{\alpha}$ (solution to the PDE (<ref>)) and $w_\alpha$ (solution to the PDE (<ref>)).
Now let $\tau^{\varepsilon,\alpha} = x^\alpha + \VK{\alpha}$. In view of Lemma <ref>, $\tau^{\varepsilon,\alpha} \rightharpoonup \tau^{\star,\alpha}$ as $\varepsilon\to0$ weakly in $H^1(K)$. Writing the two-scale expansion (<ref>) of $\tau^{\varepsilon,\alpha}$, we thus have, when $\varepsilon$ is small,
\begin{equation*}
\tau^{\varepsilon,\alpha}(x)
\approx
\tau^{\star,\alpha}(x) + \varepsilon \sum_{\gamma=1}^d w_\gamma \left( \frac{x}{\varepsilon} \right) \partial_\gamma \tau^{\star,\alpha}(x)
= x^\alpha + \varepsilon \, w_\alpha \left( \frac{x}{\varepsilon} \right),
\end{equation*}
and the difference tends to zero in $H^1(K)$ as $\varepsilon\to0$. Inserting this convergence in (<ref>), we deduce that
\begin{equation*}
\lim_{\varepsilon\to0} \left. \overline{A}_{\beta,\alpha}(\varepsilon) \right\vert_K
= \lim_{\varepsilon\to0} \frac{1}{|K|} \int_K
e_\beta \cdot A^\mathsf{per} \left(\frac{x}{\varepsilon}\right) \left(
e_\alpha + \nabla w_\alpha \left(\frac{x}{\varepsilon}\right)
\right) \dd x
= \int_Q e_\beta \cdot A^\mathsf{per} \left( e_\alpha + \nabla w_\alpha \right).
\end{equation*}
The convergence to the mean on the unit cube in the last equality follows from the $Q$-periodicity of the function $y \mapsto e_\beta \cdot A^\mathsf{per}(y) \left( e_\alpha + \nabla w_\alpha(y) \right)$. In view of (<ref>), this limit is exactly $A^\star_{\beta,\alpha}$, which completes the proof.
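The convergence $\overline{A}(\varepsilon)\to A^\star$ is easy to observe in one dimension, where the effective MsFEM coefficient on an element $K$ reduces to the harmonic mean of $A^\mathsf{per}(\cdot/\varepsilon)$ over $K$. The coefficient $A^\mathsf{per}(y)=2+\cos(2\pi y)$ below is an assumed example, for which the homogenized coefficient (the harmonic mean over the period) is $A^\star=\sqrt{3}$:

```python
import numpy as np

# 1D illustration of Abar(eps) -> A_star as eps -> 0 on an arbitrary element K.
a = lambda y: 2.0 + np.cos(2.0 * np.pi * y)
A_star = np.sqrt(3.0)                     # exact homogenized coefficient in 1D

def A_bar(eps, xl=0.0, xr=np.sqrt(2.0) / 4.0, n=400_000):
    x = np.linspace(xl, xr, n + 1)
    dx = (xr - xl) / n
    # harmonic mean of a(./eps) over K = (xl, xr)
    return (xr - xl) / np.sum(dx / a(0.5 * (x[:-1] + x[1:]) / eps))

errs = [abs(A_bar(eps) - A_star) for eps in (0.1, 0.01, 0.001)]
print(errs)   # decreasing towards 0
```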
The following lemma studies the convergence of $u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H$ towards $0$ as $\varepsilon\to0$ for the MsFEM-lin. As was stated in Rem. <ref>, thanks to the periodic setting, we now obtain a rate for the convergence stated in Lemma <ref>.
Let $f \in L^2(\Omega)$. Suppose that the family of meshes $(\mathcal{T}_H)_{H>0}$ is quasi-uniform. Consider the MsFEM-lin without oversampling. For $A^\varepsilon = A^\mathsf{per}(\cdot/\varepsilon)$ sufficiently regular, we have
\begin{equation*}
\left\lVert u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H \right\rVert_{H^1(\Omega)}
\leq
C\varepsilon \, \lVert f \rVert_{L^2(\Omega)},
\end{equation*}
where the constant $C$ depends on the dimension $d$ and the constants $m, M$ in (<ref>), but not on $\varepsilon$, $H$ or $f$.
Let $e^\varepsilon_H = u^{\varepsilon,\mathsf{G}}_H - u^{\varepsilon,\mathsf{PG}}_H$. Lemma <ref> applies, so we can use (<ref>) and a Cauchy-Schwarz inequality to find
\begin{equation}
a^{\varepsilon,\dif} \left( e^\varepsilon_H, \, e^\varepsilon_H \right)
\leq
\lVert f \rVert_{L^2(\Omega)} \, \lVert e^{\mathsf{osc}}_H \rVert_{L^2(\Omega)}
\leq
\lVert f \rVert_{L^2(\Omega)} \,
\left\lVert
\sum_{K\in\mathcal{T}_H} \sum_{\alpha=1}^d
\left. \left(\partial_\alpha e^\Pone_H\right) \right\vert_K \VK{\alpha}
\right\rVert_{L^2(\Omega)}.
\label{eq:estim-Gal-PG-per0}
\end{equation}
Next we seek a bound on $\VK{\alpha}$ in $L^2(K)$.
Using (<ref>) and (<ref>), we have
\begin{equation*}
\operatorname{div} \left( A^\mathsf{per}\left(\frac{\cdot}{\varepsilon}\right) \nabla
\left[ \VK{\alpha} - \varepsilon \, w_\alpha \left(\frac{\cdot}{\varepsilon}\right) \right]
\right)
= 0 \quad \text{in } K.
\end{equation*}
Since $\VK{\alpha}$ vanishes on $\partial K$ (recall that we consider the MsFEM-lin without oversampling), the maximum principle <cit.> yields
\begin{equation*}
\left\lVert
\VK{\alpha} - \varepsilon \, w_\alpha \left(\frac{\cdot}{\varepsilon}\right)
\right\rVert_{L^2(K)}
\leq
\sup_{\partial K} \left\lvert \VK{\alpha} - \varepsilon \, w_\alpha \left(\frac{\cdot}{\varepsilon}\right) \right\rvert \sqrt{\int_K 1}
= \varepsilon \, |K|^{1/2}
\sup_{\partial K}
\left\lvert
w_\alpha \left(\frac{\cdot}{\varepsilon}\right)
\right\rvert.
\end{equation*}
When $A^\mathsf{per}$ is sufficiently regular, the corrector functions $w_\alpha$ are uniformly bounded. Then the mesh regularity provides a constant $C$ such that for each $K\in\mathcal{T}_H$ and each $1 \leq \alpha \leq d$, we have
\begin{equation*}
\left\lVert
\VK{\alpha}
\right\rVert_{L^2(K)}
\leq
\left\lVert
\VK{\alpha} - \varepsilon \, w_\alpha \left(\frac{\cdot}{\varepsilon}\right)
\right\rVert_{L^2(K)}
+ \varepsilon \, \left\lVert
w_\alpha \left(\frac{\cdot}{\varepsilon}\right)
\right\rVert_{L^2(K)}
\leq
C \varepsilon H^{d/2}.
\end{equation*}
Since all $\VK{\alpha}$ have disjoint supports, we can use the latter estimate to bound
\begin{align}
\left\lVert
\sum_{K\in\mathcal{T}_H} \sum_{\alpha=1}^d
\left. \left(\partial_\alpha e^\Pone_H\right) \right\vert_K\VK{\alpha}
\right\rVert_{L^2(\Omega)}^2
&= \sum_{K\in\mathcal{T}_H} \left \lVert
\sum_{\alpha=1}^d \left. \left( \partial_\alpha e^\Pone_H \right) \right\vert_K\VK{\alpha}
\right\rVert_{L^2(K)}^2
\nonumber\\
&\leq C \varepsilon^2
\sum_{K\in\mathcal{T}_H}
\sum_{\alpha=1}^d \left( H^{d/2} \left. \left(\partial_\alpha e^\Pone_H\right) \right\vert_K \right)^2
\nonumber\\
&\leq C \varepsilon^2
\sum_{K\in\mathcal{T}_H}
\sum_{\alpha=1}^d \left\lVert \partial_\alpha e^\Pone_H \right\rVert_{L^2(K)}^2
\nonumber\\
&= C\varepsilon^2 \, \left\lVert \nabla e^\Pone_H \right\rVert_{L^2(\Omega)}^2.
\label{eq:estim-Gal-PG-per1}
\end{align}
The last inequality relies on the quasi-uniformity of the mesh.
We insert (<ref>) combined with (<ref>) into (<ref>) to find
\begin{equation*}
a^{\varepsilon,\dif} \left( e^\varepsilon_H, \, e^\varepsilon_H \right)
\leq
C\varepsilon \,
\lVert f \rVert_{L^2(\Omega)} \,
\lVert \nabla e^\varepsilon_H \rVert_{L^2(\Omega)}.
\end{equation*}
Applying the coercivity property in (<ref>) on the left-hand side, we obtain the desired result.
The classical error estimate for the Galerkin MsFEM approach (<ref>) is obtained in the periodic setting and under some regularity assumption on $A^\mathsf{per}$ and on the homogenized limit $u^\star$. The bound obtained in <cit.> reads
\begin{equation*}
\left\lVert u^\varepsilon - u^{\varepsilon,\mathsf{G}}_H \right\rVert_{H^1(\Omega)}
\leq C \left( H + \varepsilon + \sqrt{\frac{\varepsilon}{H}} \right),
\end{equation*}
for some $C$ independent of $\varepsilon$ and $H$.
Lemma <ref> shows that the same estimate holds true for $u^{\varepsilon,\mathsf{PG}}_H$, the Petrov-Galerkin MsFEM approximation, under the correct regularity assumptions. We note that the bound for $u^{\varepsilon,\mathsf{PG}}_H$ can also be inferred from Lemma <ref>. However, since the MsFEM is applied in the regime where $\varepsilon < H$, the result of Lemma <ref> is more precise, thanks to the extra structural assumptions made on the diffusion tensor $A^\varepsilon$.
§ NUMERICAL COMPARISON
We now compare the Galerkin MsFEM (<ref>), its non-intrusive approximation (<ref>) and the Petrov-Galerkin MsFEM (<ref>) on a concrete numerical example in 2D ($d=2$). The numerical approximations obtained for these various MsFEMs shall be denoted $u_H^{\varepsilon,\mathsf{G}}$, $u_H^{\varepsilon,\text{\sf G-ni}}$ and $u_H^{\varepsilon,\mathsf{PG}}$, respectively. We consider the pure diffusion equation (<ref>) on the domain $\Omega = (0,1) \times (0,1)$. Thus, the local bilinear forms are $a_K^\varepsilon = a_K^{\varepsilon,\dif}$ defined in Example <ref>, where we will consider the two diffusion tensors
\begin{align}
A^\varepsilon(x) &= \, \nu^\varepsilon(x) \operatorname{Id},
\quad
\nu^\varepsilon(x) =
1 + 100 \cos{(\pi \, x_1 / \varepsilon)}^2 \sin{(\pi \, x_2 / \varepsilon)}^2,
\label{eq:diffusion-test-per} \\
A^{\varepsilon,\mathsf{np}}(x) &= (1+\cos{(2\pi x_1)}^2) \, A^\varepsilon(x).
\label{eq:diffusion-test-nonper}
\end{align}
We fix $f(x) = \sin{(x_1)}\sin{(x_2)}$. The coefficient $A^\varepsilon$ is $\varepsilon$-periodic with period $\varepsilon=\pi/150 \approx 0.02$. The coefficient $A^{\varepsilon,\mathsf{np}}$ is not periodic and, although a homogenized coefficient exists (see <cit.>), it is not constant. Consequently, a certain number of lemmas established in Sec. <ref> are not known to hold true. We will see nevertheless that the non-intrusive MsFEMs that we introduced above provide good approximations compared to the intrusive G-MsFEM.
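For reference, the two test coefficients can be transcribed and sanity-checked directly (both are scalar multiples of the identity, so only the scalar factors are coded; the checks below on the bounds and the $\varepsilon$-periodicity are illustrative additions, not part of the FreeFEM++ scripts):

```python
import numpy as np

# nu_eps ranges over [1, 101] and is eps-periodic in both directions;
# the non-periodic variant modulates it by a macroscopic factor in [1, 2].
eps = np.pi / 150

def nu_eps(x1, x2):
    return 1.0 + 100.0 * np.cos(np.pi * x1 / eps) ** 2 * np.sin(np.pi * x2 / eps) ** 2

def macro_factor(x1):                 # A_np = macro_factor * A_eps
    return 1.0 + np.cos(2.0 * np.pi * x1) ** 2

rng = np.random.default_rng(1)
x1, x2 = rng.random(1000), rng.random(1000)
v = nu_eps(x1, x2)
print(v.min() >= 1.0 and v.max() <= 101.0)        # True
print(np.allclose(v, nu_eps(x1 + eps, x2)) and
      np.allclose(v, nu_eps(x1, x2 + eps)))       # True: eps-periodicity
print(1.0 <= macro_factor(x1).min() and macro_factor(x1).max() <= 2.0)  # True
```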
A reference solution $u_h^\varepsilon$ is computed on a uniform $1024\times1024$ mesh $\mathcal{T}_h$ by means of a standard $\Pone$ finite element method using FreeFEM++ <cit.>. The FreeFEM++ scripts to perform all different MsFEMs can be found in the GitHub repository at .
We compare the reference solution $u_h^\varepsilon$ to MsFEM solutions obtained on a coarse mesh $\mathcal{T}_H$ for varying $H$. The mesh $\mathcal{T}_H$ is a uniform $1/H \times 1/H$ triangulation of $\Omega$. We test the MsFEM-lin and the MsFEM-CR using the sampling operator $s_K^\varepsilon = a^{\varepsilon,\dif}_K$. All oversampling methods in this section use a homothety ratio of 3 for the construction of the oversampling patches in Def. <ref>, and -continuous basis functions, which ensure certain continuity properties on the boundary of the mesh elements. A precise definition of the associated basis functions can be found in Examples <ref> and <ref>. The mesh $\mathcal{T}_h$ is a refinement of $\mathcal{T}_H$ for all values of $H$. Therefore, for each $K \in \mathcal{T}_H$, we use the corresponding submesh of $\mathcal{T}_h$ (consisting of all triangles included in $K$) for the numerical approximation of the numerical correctors in (<ref>) by $\Pone$ Lagrange finite elements.
Solid lines: difference between the Galerkin MsFEM approximation ($u^{\varepsilon,\mathsf{G}}_H$ defined by (<ref>)) and the non-intrusive Galerkin MsFEM approximation ($u^{\varepsilon,\mathsf{G\text{-}ni}}_H$ defined by (<ref>)) for the diffusion coefficients in (<ref>) as the mesh size $H$ varies.
Dashed lines: error of the Galerkin MsFEM with respect to the reference solution. All values are normalized with respect to the $H^1$ norm of the reference solution.
We first compare the approximations $u_H^{\varepsilon,\mathsf{G}}$ and $u_H^{\varepsilon,\text{\sf G-ni}}$ for varying $H$ in Fig. <ref>. Without oversampling (OS), the approximation $u_H^{\varepsilon,\text{\sf G-ni}}$ equals $u_H^{\varepsilon,\mathsf{PG}}$ due to Lemma <ref>. We also report the error committed by the G-MsFEM. We observe that, without oversampling, the difference $u_H^{\varepsilon,\mathsf{G}} - u_H^{\varepsilon,\text{\sf G-ni}}$ is much smaller than this error. As a result, the errors obtained with the G-MsFEM and its non-intrusive approximation are of the same size. Indeed, the error of the non-intrusive G-MsFEM-lin deviates from the error of the G-MsFEM-lin by at most 0.05% for all tests that we report here. For the MsFEM-CR, this is at most 1.2%. In both cases, the two MsFEM variants thus have practically the same accuracy. This is in agreement with the theoretical result of Lemma <ref>.
The estimates obtained in Sec. <ref> do not apply to MsFEMs with oversampling. From Fig. <ref>, we can see that the difference $u_H^{\varepsilon,\mathsf{G}} - u_H^{\varepsilon,\text{\sf G-ni}}$ is still small with respect to the error committed by the G-MsFEM when oversampling is applied. The approximation errors for the non-intrusive G-MsFEMs with oversampling differ by at most 1.3% from the error of the G-MsFEM. Let us also point out the qualitative and quantitative similarities between the performance of the MsFEM for the periodic and the non-periodic diffusion coefficient. Although the homogenized coefficient of $A^{\varepsilon,\mathsf{np}}$ has a more complicated structure than that of $A^\varepsilon$, in both cases the non-intrusive approximation does not deteriorate the accuracy of the MsFEM.
Comparison of the errors of the (intrusive) Galerkin MsFEM (<ref>) and the (non-intrusive) Petrov-Galerkin MsFEM (<ref>) with the oversampling (OS) strategy for the diffusion coefficients in (<ref>) as the mesh size $H$ varies. The Galerkin MsFEM without OS is included to illustrate the effect of the OS strategy.
We consider in Fig. <ref> the PG-MsFEM variant with oversampling, which is completely equivalent to its non-intrusive implementation by virtue of Lemma <ref>. It does not, however, coincide with the non-intrusive G-MsFEM. With oversampling, the matrices of the linear systems for the G-MsFEM and PG-MsFEM are different; Lemma <ref> does not apply. The result is that the differences $u_H^{\varepsilon,\mathsf{G}} - u_H^{\varepsilon,\mathsf{PG}}$ are larger than the differences $u_H^{\varepsilon,\mathsf{G}} - u_H^{\varepsilon,\text{\sf G-ni}}$. This is reflected in the numerical errors of the methods. We show the errors of the PG-MsFEM and the G-MsFEM with respect to the reference solution $u^\varepsilon_h$ in Fig. <ref>. (The non-intrusive G-MsFEM shown in Fig. <ref> is too close to the G-MsFEM to be distinguishable on the scale of Fig. <ref>.) The G-MsFEM without oversampling is also shown to illustrate the effect of using oversampling. Although clear differences in the performance of the Galerkin and Petrov-Galerkin MsFEMs (with oversampling) can be observed, these differences are small and both MsFEM approaches have a comparable accuracy. There is no systematic disadvantage in choosing the non-intrusive PG-MsFEM over the (intrusive or non-intrusive) G-MsFEM.
Moreover, the non-periodic test case again shows the robustness of the non-intrusive MsFEM when going beyond the setting of periodic homogenization.
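All reported errors are normalized by the $H^1$ norm of the reference solution. As a minimal illustration of this normalization (not the paper's actual computation, which is posed on 2D meshes), the following sketch evaluates a relative $H^1$ error from nodal values of piecewise-linear (P1) functions on a uniform 1D mesh; the function name and the 1D setting are illustrative assumptions.

```python
def rel_h1_error(u, u_ref, h):
    """Relative H1 error ||u - u_ref||_{H1} / ||u_ref||_{H1} for nodal
    values of P1 functions on a uniform 1D mesh of width h.
    Illustrative 1D stand-in for the 2D computations in the paper."""
    def h1_norm_sq(v):
        # lumped L2 part plus the exact H1 seminorm of the P1 interpolant
        l2 = h * sum(vi * vi for vi in v)
        semi = sum((v[i + 1] - v[i]) ** 2 / h for i in range(len(v) - 1))
        return l2 + semi
    e = [a - b for a, b in zip(u, u_ref)]
    return (h1_norm_sq(e) / h1_norm_sq(u_ref)) ** 0.5
```

With such a helper, the solid and dashed curves of the figure correspond to `rel_h1_error(u_G, u_Gni, h)` and `rel_h1_error(u_G, u_ref, h)`, respectively (hypothetical variable names).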
§ ACKNOWLEDGMENTS
The first author acknowledges the support of DIM Math INNOV. The work of the second and third authors is partially supported by ONR under grant N00014-20-1-2691 and by EOARD under grant FA8655-20-1-7043. These two authors acknowledge the continuous support from these two agencies. The fourth author thanks Inria for the financial support enabling his two-year partial leave (2020-2022) that has significantly facilitated the
collaboration on this project.
|
# Predictive Prescription of Unit Commitment Decisions Under Net Load
Uncertainty ††thanks: Mr. Yurdakul gratefully acknowledges the support of the
German Federal Ministry of Education and Research and the Software Campus
program under Grant 01IS17052.
Ogun Yurdakul1, Feng Qiu2, and Sahin Albayrak1 1Department of Electrical
Engineering and Computer Science, Technical University of Berlin, Berlin,
Germany 2Energy Systems Division, Argonne National Laboratory, Lemont, IL
60439, USA
###### Abstract
To take unit commitment (UC) decisions under uncertain net load, most studies
utilize a stochastic UC (SUC) model that adopts a one-size-fits-all
representation of uncertainty. Disregarding contextual information such as
weather forecasts and temporal information, these models are typically plagued
by a poor out-of-sample performance. To effectively exploit contextual
information, in this paper, we formulate a conditional SUC problem that is
solved given a covariate observation. The presented problem relies on the true
conditional distribution of net load and so cannot be solved in practice. To
approximate its solution, we put forward a predictive prescription framework,
which leverages a machine learning model to derive weights that are used in
solving a reweighted sample average approximation problem. In contrast with
existing predictive prescription frameworks, we manipulate the weights that
the learning model delivers based on the specific dataset, present a method to
select pertinent covariates, and tune the hyperparameters of the framework
based on the out-of-sample cost of its policies. We conduct extensive
numerical studies, which lay out the relative merits of the framework vis-à-vis various benchmarks.
###### Index Terms:
contextual stochastic optimization, unit commitment, ensemble learning
## I Introduction
Taking unit commitment (UC) decisions under uncertain net load (i.e., load
minus renewable generation) lies at the heart of ensuring the economical
and reliable operation of systems with deep penetration of renewables. To this
end, grid operators (GOs) typically draw upon contextual information (e.g.,
historical realizations of net load, weather forecasts) as features to train
machine learning (ML) algorithms that generate point predictions for net load,
which are subsequently utilized in solving a deterministic UC problem. Despite
capitalizing on contextual information, such an approach fails to capture the
stochastic nature of net load and suffers from decoupling the ML algorithm
from the downstream optimization problem. In contrast, stochastic
optimization (SO) models represent uncertainty explicitly, but typically do
so by making assumptions on the probability distribution of net load.
Nevertheless, SO models exhibit a poor out-of-sample performance if the
assumed probability distribution is wrong, and they cannot effectively exploit
covariate observations, resorting to a one-size-fits-all representation of
uncertainty.
Recently, a paradigm termed predictive prescriptions emerged in the operations
research literature, which aims to address these shortcomings by jointly
leveraging supervised ML algorithms and a conditional SO model. The approach
put forward in [1] trains an ML model to derive weights for historical
observations of the uncertain parameter and uses the weights in solving a
reweighted sample average approximation (SAA) problem. Predictive prescription
frameworks found applications in power systems as well [2, 3, 4]. The study in
[2] seeks to maximize the profit of a renewable resource trading in the day-ahead market by training trees with a task-based loss, whereas [3] leverages
linear regression models to determine the renewable generation forecasts that
lead to UC decisions with minimal total cost.
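The weighting idea of [1] can be made concrete with a small sketch. The paper itself derives weights from a random forest; as a dependency-free stand-in we use the k-nearest-neighbors variant of the same predictive-prescription idea, where mass $1/k$ is placed on the $k$ historical covariate observations closest to the new observation.

```python
import math

def knn_weights(x_hist, x_new, k):
    """Predictive-prescription weights in the spirit of [1]: put mass 1/k
    on the k historical covariate observations closest to x_new and zero
    elsewhere. (Illustrative k-NN stand-in; the paper's framework derives
    its weights from a random forest instead.)"""
    order = sorted(range(len(x_hist)),
                   key=lambda i: math.dist(x_hist[i], x_new))
    w = [0.0] * len(x_hist)
    for i in order[:k]:
        w[i] = 1.0 / k
    return w
```

The resulting weights are nonnegative and sum to one, and they replace the uniform $1/D$ weights of a plain sample average approximation.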
In this paper, we initially map out in Section II a conditional SO formulation
for taking UC decisions under uncertain net load given a covariate
observation. In Section III, we put forward a predictive prescription
framework, which leverages the random forest (RF) algorithm to derive weights
that are used in solving a reweighted SAA problem. Section III further lays
out three principal contributions of this paper. First, we put forth a method
that manipulates the weights derived from the RF algorithm based on the size
and the information-richness of the dataset. Second, we present an approach to
tuning the hyperparameters of the framework based on the out-of-sample cost of
its prescriptions. Finally, we suggest a method for pinpointing the pertinent
covariates of net load. In Section IV, we demonstrate the application of the
framework using data harvested from the California Independent System Operator
(CAISO) grid and investigate its out-of-sample and computational performance.
Section V concludes the paper.
## II Problem Description
We start out with the analytical description of the problem.
### II-A Analytical underpinnings
We study the UC problem under the uncertainty in net load, solved by the GO at
an hourly granularity for a scheduling horizon of 24 hours. The study period
for each day $d$ is denoted by the set $\mathscr{H}_{d}\coloneqq\\{h\colon
h=1,\ldots,24\\}$, where $h$ is the index of the hourly period. We
denote by ${Y}_{d}\in\mathscr{Y}\subseteq\mathbb{R}^{d_{y}}$ the uncertain net
load across all system buses and all 24 hours in $\mathscr{H}_{d}$, and we
represent its observation by $Y_{d}=y_{d}$. Assume that the GO has at its
disposal historical observations on net load for $D$ days. Define the set
$\mathscr{D}\coloneqq\\{d\colon d=1,\ldots,D\\}$.
Typically, it is not possible to precisely set forth the probability
distribution of net load ${Y}_{d}$ or provide a perfectly accurate forecast
for the materialized net load levels ${Y}_{d}=y_{d}$. Nevertheless, there is a
broad array of contextual information that could prove useful to these ends.
For instance, temperature, solar irradiance, and wind speed measurements,
temporal information such as the month of the year and the day of the week, as
well as lagged observations on net load may have a direct bearing on the net
load realization ${Y}_{d}=y_{d}$. Our framework capitalizes on the
observations of these covariates in assessing the uncertainty in net load.
Note that such covariates are precisely the features that are leveraged in
developing ML models so as to forecast net load. We express by
$X_{d}\in\mathscr{X}\subseteq\mathbb{R}^{d_{x}}$ the random covariate
associated with ${Y}_{d}$ and denote its observation by $X_{d}=x_{d}$. We
expound upon the candidate covariates drawn upon in our framework in Section
IV.
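To make the construction of the training pairs $(x_{d},y_{d})$ concrete, here is a minimal sketch (our own illustration, not the paper's code) that pairs each day's 24-hour net-load vector with a single lagged covariate, the previous day's profile; the actual framework also uses weather and calendar covariates:

```python
import numpy as np

def build_training_pairs(net_load_hourly):
    """Split an hourly net-load series into days and pair each day's
    24-hour label vector y_d with a simple covariate x_d: the previous
    day's 24-hour profile. (Illustrative only; the paper's covariates
    also include weather measurements and calendar indicators.)"""
    days = np.asarray(net_load_hourly, dtype=float).reshape(-1, 24)
    X = days[:-1]  # covariates x_d: yesterday's profile
    Y = days[1:]   # labels y_d: today's profile
    return X, Y
```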
### II-B Conditional stochastic unit commitment problem
The contextual information associated with net load can be effectively
exploited in a conditional stochastic programming framework to take UC
decisions after observing $X=\bar{x}$. If it were
possible to know the true, underlying conditional distribution of the net load
${Y}$ given $X=\bar{x}$, we could formulate the following “gold standard”
conditional stochastic unit commitment ($\mathsf{CSUC}$) problem:
$\displaystyle\underset{z\in\mathscr{Z}}{\text{min}}\ \mathbb{E}\big[\mathcal{C}(z;Y)\,\big|\,X=\bar{x}\big]\coloneqq\eta^{\mathsf{T}}z+\mathbb{E}\big[\mathcal{Q}(z;Y)\,\big|\,X=\bar{x}\big]$ (1)
where
$\displaystyle\mathcal{Q}(z;\bar{y})\coloneqq\underset{\zeta}{\text{min}}\ c^{\mathsf{T}}\zeta$ (2)
subject to $\displaystyle W\zeta\leq b-Tz-M\bar{y}.$ (3)
The $\mathsf{CSUC}$ problem is a two-stage conditional SO problem with a
mixed-integer linear programming formulation. The objective (1) of the first-
stage problem is to minimize the commitment and startup costs plus the
expected dispatch and load curtailment costs. The first-stage decisions
comprise the binary commitment, startup, and shutdown variables and are
represented by $z$. The set $\mathscr{Z}$ denotes the feasible region of the
first-stage decisions, which is defined by the logical constraints that relate
the commitment, startup, and shutdown variables as well as the minimum uptime
and downtime constraints.
For a specific vector of first-stage decisions $z$ and materialized net load
values $Y=\bar{y}$, the value function $\mathcal{Q}({z};\bar{y})$ is evaluated
by solving the second-stage problem (2)–(3) with the objective (2) to minimize
the dispatch costs and the penalty cost due to load curtailment. The second-
stage variables are denoted by $\zeta$ and are composed of the power dispatch
levels of generators and the curtailed load for all hours
$h\in\mathscr{H}_{d}$. We succinctly represent in (3) the power generation and
ramping limits for generators as well as the transmission constraints using
injection shift factors based on the DC power flow model, wherein $W$, $T$,
and $M$ are constant matrices and $b$ is the right-hand-side vector of all
second-stage constraints.
Note that the $\mathsf{CSUC}$ formulation draws upon the true conditional
distribution of $Y|X=\bar{x}$, which cannot be known, thus rendering
$\mathsf{CSUC}$ solely a hypothetical, ideal formulation. Nevertheless, GOs
have data on the historical realizations of random net load and the associated
covariates. The predictive prescription framework introduced in Section III
utilizes these observations to construct the training set
$\mathscr{S}_{D}\coloneqq\\{({x}_{d},{y}_{d})\\}_{d=1}^{D}$, which is used to
solve a surrogate problem for $\mathsf{CSUC}$.
## III Proposed Framework
We next introduce our predictive prescription framework.
### III-A Surrogate problem formulation
The principal objective of the proposed framework is to approximate as closely
as possible the optimal $\mathsf{CSUC}$ solution after observing $X=\bar{x}$.
Motivated by [1], we approximate the $\mathsf{CSUC}$ problem by the reweighted
SAA problem
$\displaystyle\mathsf{w-CSUC}\colon\ \hat{z}_{D}(\bar{x})\in\underset{z\in\mathscr{Z}}{\text{arg\,min}}\ \sum_{d=1}^{D}{\omega}_{D,d}(\bar{x})\,\mathcal{C}({z};{y_{d}}),$ (4)
where ${\omega}_{D,d}(\bar{x})$, $\forall d\in\mathscr{D}$, are weight
functions obtained from the training set that adjust the influence of each
historical observation on the objective function of the reweighted SAA problem
(4). In [1], the authors derive the weights ${\omega}_{D,d}(\bar{x})$ by
directly using ML algorithms and spell out how different ML algorithms can be
leveraged to that end. In the proposed framework, we draw upon the RF model in
conjunction with a nonlinear function to derive the weights.
### III-B Evaluation of the empirical weights
We start off by training the RF model to predict the net load in the next 24
hours, for which we use the covariate observations in $\mathscr{S}_{D}$ as
features and the net load values in $\mathscr{S}_{D}$ as labels. Next, for
each new covariate observation $X=\bar{x}$, we use the trained RF model to
quantify the similarity between the new observation and each historical
observation in $\mathscr{S}_{D}$.
To quantify the similarity between observations, we record the leaf that the
new observation $\bar{x}$ is mapped into in each tree of the RF and
subsequently identify the historical covariate observations that fall into the
same leaf node with $\bar{x}$. Central to our approach is to assign the weight
for observation $x_{d}$, that is, ${\omega}_{D,d}(\bar{x})$, based on the
number of trees in which $x_{d}$ and $\bar{x}$ are assigned to the same leaf
node. To this end, Bertsimas and Kallus [1] propose that the weight for
$x_{d}$ increase linearly with the number of trees in which $\bar{x}$ and
$x_{d}$ fall into the same leaf node, normalized by the total number of
covariate observations assigned to the same leaf node with $\bar{x}$, which
yields the empirical weights
$\displaystyle\hat{\omega}_{D,d}(\bar{x})=\frac{1}{T}\sum_{\tau=1}^{T}\frac{\mathbb{I}\big[x_{d}\in\mathcal{X}^{\tau}_{l(\bar{x})}\big]}{\big|\big\{d^{\prime}\colon x_{d^{\prime}}\in\mathcal{X}^{\tau}_{l(\bar{x})}\big\}\big|},$ (5)
where $T$ denotes the number of trees in the forest, $\mathbb{I}(\cdot)$ the
indicator function, and $\mathcal{X}^{\tau}_{l(\bar{x})}$ the set of covariate
observations assigned to the same leaf with $\bar{x}$.
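With scikit-learn, the leaf co-membership needed for these weights can be read off with the forest's `apply` method. The following is an illustrative sketch of Eq. (5), assuming the RF has already been trained on the covariate observations:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def empirical_weights(rf, X_train, x_new):
    """Empirical RF weights of Eq. (5): in each tree, a training point that
    lands in the same leaf as x_new receives weight 1/|leaf|; the final
    weight averages over all trees. Every leaf reached by x_new contains at
    least one training point, since leaves are grown from the training data."""
    train_leaves = rf.apply(X_train)                 # (D, T) leaf indices
    new_leaves = rf.apply(x_new.reshape(1, -1))[0]   # (T,) leaf indices
    same_leaf = train_leaves == new_leaves           # (D, T) co-membership
    leaf_sizes = same_leaf.sum(axis=0)               # points per shared leaf
    return (same_leaf / leaf_sizes).mean(axis=1)     # average over T trees
```

By construction, the weights are nonnegative and sum to one, so they define a valid reweighting of the historical scenarios.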
### III-C Deriving the final weights
Existing approaches in the literature plug the empirical weights
$\hat{\omega}_{D,d}(\bar{x})$ into the $\mathsf{w-CSUC}$ problem without
taking into account the size or the prescriptive content of the training set.
Nevertheless, the empirical weights obtained with a small training set and/or
a training set with little informative content may fail to afford an accurate
characterization of the similarity between observations. In contrast, we can
utilize a particular historical observation with greater confidence, if it
were deemed to be similar to a new observation under a large training set with
high prescriptive power. As such, we introduce the function
$\varphi\big{(}\hat{\omega}_{D,d}(\bar{x});\xi,D\big{)}\coloneqq$
$\hat{\omega}_{D,d}(\bar{x})^{\frac{D}{\xi}}$, which serves to manipulate the
empirical weights based on the training set size $D$ and the weight
modification parameter $\xi$. The parameter $\xi$ could be viewed as a proxy
for the information richness and the prescriptive power of the training set.
As $\xi$ decreases and $D$ increases, $\varphi(\cdot)$ amplifies the weights
of the data points that are assessed to be strongly similar to a new
observation and brings down the empirical weights of the points that are
markedly dissimilar to a new observation. Further, for a small $D$ and a large
$\xi$, $\varphi(\cdot)$ smoothens significantly high and low weight values
and brings the weights toward a uniform level. Clearly, a key challenge is
to hone in on a judicious value of $\xi$. To this end, we treat
$\xi$ as a hyperparameter of the overall framework and set its value by
assessing its influence on a separate validation set.
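As a sketch, the transformation takes only a few lines; the renormalization step is our own assumption (so that the transformed weights still form a convex combination in the reweighted SAA problem):

```python
import numpy as np

def final_weights(empirical_w, D, xi):
    """Apply phi(w; xi, D) = w**(D/xi) elementwise, then renormalize
    (renormalization is an assumption of this sketch) so the transformed
    weights again sum to one for the reweighted SAA objective."""
    w = np.asarray(empirical_w, dtype=float) ** (D / xi)
    return w / w.sum()
```

For $\xi<D$ the exponent exceeds one, sharpening the weight profile toward the most similar observations; for $\xi\gg D$ it is below one, flattening the weights toward uniform.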
### III-D Task-based hyperparameter tuning
The proper tuning of an ML model’s hyperparameters can play a critical role
in its performance. The classical approach to hyperparameter tuning is to
assess the performance of an ML model under different hyperparameter values based on
a statistical loss function. In our framework, however, a specific selection
of RF hyperparameter values may bring forth a lower prediction error without
leading to UC decisions that drive down the total out-of-sample cost, which
underscores the need to tune the RF model’s hyperparameters based on the
ultimate task for which it is trained. As such, we treat the hyperparameters
of the RF model as those of the overall framework and set their values based
on the total out-of-sample cost of the optimal policy $\hat{z}_{D}(\bar{x})$
obtained with different hyperparameter values.
At the outset, we use grid search to exhaustively generate candidate values
for the hyperparameters reported in Table I. Next, we construct a separate
validation set containing pairs of covariate and net load observations
$\mathscr{V}_{\bar{D}}\coloneqq\\{(\bar{x}_{i},\bar{y}_{i})\\}_{i=1}^{\bar{D}}$.
We use each covariate observation $\bar{x}_{i}$ to compute the optimal policy
$\hat{z}_{D}(\bar{x}_{i})$ and subsequently the out-of-sample cost
$\mathcal{C}(\hat{z}_{D}(\bar{x}_{i});\bar{y}_{i})$ obtained under the actual
net load observation $\bar{y}_{i}$. For each set of candidate hyperparameter
values, we compute the total out-of-sample cost over the validation set, i.e.,
$\sum_{i=1}^{\bar{D}}\mathcal{C}(\hat{z}_{D}(\bar{x}_{i});\bar{y}_{i})$.
Ultimately, we pick the hyperparameter values that deliver the lowest total
out-of-sample cost.
TABLE I: Hyperparameters
hyperparameter | candidate values
---|---
max tree depth | 3, 6, 10
number of features considered for node splitting | $\sqrt{d_{x}}$, $0.3\,d_{x}$, $0.6\,d_{x}$
weight modification parameter $\xi$ | $\frac{D}{10}$, $\frac{D}{4}$, $D$, $4D$, $10D$
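The task-based tuning loop can be sketched as follows, where `validation_cost` is a hypothetical callback that trains the RF with the given hyperparameter values, solves the reweighted SAA problem for each validation day, and returns the total out-of-sample cost over the validation set:

```python
from itertools import product

def task_based_grid_search(candidates, validation_cost):
    """Exhaustively enumerate hyperparameter combinations (grid search) and
    keep the one with the lowest total out-of-sample cost, rather than the
    lowest statistical forecast loss."""
    best, best_cost = None, float("inf")
    for values in product(*candidates.values()):
        theta = dict(zip(candidates.keys(), values))
        cost = validation_cost(theta)  # sum of C(z_hat(x_i); y_i) over the set
        if cost < best_cost:
            best, best_cost = theta, cost
    return best, best_cost
```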
### III-E Selection of the covariates
A key thrust of the framework is to pinpoint information-rich covariates that
aid in effectively grasping the uncertainty in net load. While there is
a plethora of factors that can potentially influence a net load realization,
ruling out the covariates that afford little or no information can help the
trained RF model better assess the similarity between covariate observations,
thereby yielding final weights that reduce the out-of-sample costs. Further,
working with fewer covariates allows for expediting the training and testing
of the RF model.
To identify the covariates, we employ a hybrid approach comprising a filter
and a wrapper feature selection method. We denote the support of the initial
set of candidate covariates by
$\mathscr{X}^{r}\subseteq\mathbb{R}^{d_{x^{r}}}$. We start out by computing
the Pearson correlation coefficient (PCC) for each candidate covariate and the
net load observation, which measures the linear correlation between two
variables. The PCC attains values between $-1$ and $1$, with $1$ (resp. $-1$)
indicating a complete positive (resp. negative) correlation and $0$ signifying
that the correlation is immaterial. Customarily, when the absolute value of
the PCC is greater than or equal to $0.6$, it is interpreted as the variables
being strongly correlated with one another [5]. As such, we rule out all
candidate features that yield a PCC value between $0.6$ and $-0.6$ and obtain
$d_{x^{p}}$ covariates supported on the set
$\mathscr{X}^{p}\subseteq\mathbb{R}^{d_{x^{p}}}$. Note that PCC measures only
the linear correlation between the variables, and it does not assess how the
covariates integrate with the utilized ML model. As a remedy, we additionally
implement recursive feature elimination (RFE), which is a wrapper method that
takes an ML model as a parameter. RFE trains the selected ML model with the
initial set of features, ranks the features on the basis of their importance,
and recursively eliminates the least important features until the desired
number of features is reached. We run RFE with the RF model and ultimately
obtain the final set of covariates with support
$\mathscr{X}^{f}\subseteq\mathbb{R}^{d_{x^{f}}}$.
## IV Numerical Experiments
We next demonstrate the application of the proposed framework in a real-life
setting.
### IV-A Datasets and covariate selection
In our experiments, we draw upon the net load values recorded in the CAISO
grid between June 1, 2018 and August 31, 2019 [6]. We use the measurements
recorded in the first year (i.e., June 1, 2018–May 31, 2019) to construct the
training sets. As set forth below, we vary the size of the training set in
different experiments so as to assess its influence on the performance of the
methods. Nevertheless, to compare their performance on a consistent basis, we
use the same validation set and the same test set in all experiments.
Specifically, we utilize the measurements recorded in June 2019 as the
validation set and the measurements recorded from July 1 to August 31, 2019 as
the test set. We use the IEEE 14-bus system in the experiments, which has 5
generators with an aggregate capacity of 765.31 MW. We scale the net load
values so that the highest net load value is equal to 90% of the aggregate
capacity of the generators.
To select the covariates for PV and wind generation, we assess the spatial
distribution of PV and wind installations with their respective capacities
across California, and we accordingly select locations from which we harvest
data on global horizontal irradiance (GHI) and wind speed (magnitude and
direction). We study the total population and population density of the
counties of California and identify locations from which we use temperature
measurements so as to capture the influence of temperature on system
load.111We provide the data and the source code of the simulations in the
online companion to this paper located in
https://github.com/oyurdakul/isgtna23. We leverage as candidate covariates the
GHI, wind speed, and temperature measurements reported by the National
Renewable Energy Laboratory [7] for the selected locations in the past 24
hours. We further use as candidate covariates 24 lagged realizations of net
load, as well as the 24 lagged realizations of the daily, weekly, and monthly
moving average of net load. Finally, we define categorical variables to
indicate whether a day falls on a weekend and on a public holiday and use one-
hot encoding for their representation. We ultimately obtain $d_{x^{r}}=440$
candidate covariates and follow the covariate selection method presented in
Section III-E to derive $d_{x^{f}}=25$ covariates.
### IV-B Benchmarks
To highlight the relative merits of the proposed framework, we draw upon
different decision-making methods to obtain alternative policies and
investigate their performance. One such method is the naive stochastic unit
commitment ($\mathsf{NSUC}$) model, which treats the net load observations in
the training set as equiprobable scenarios and disregards the covariate
observation $X=\bar{x}$, stated as
$\displaystyle\mathsf{NSUC}\colon\ \underset{z\in\mathscr{Z}}{\text{min}}\ \frac{1}{D}\sum_{d=1}^{D}\mathcal{C}({z};{y_{d}}).$ (6)
We solve the following reweighted SAA problem using the empirical weights as
suggested in [1] so as to investigate the impact of transforming the weights:
$\displaystyle\mathsf{ew-CSUC}\colon\ \underset{z\in\mathscr{Z}}{\text{min}}\ \sum_{d=1}^{D}{\hat{\omega}}_{D,d}(\bar{x})\,\mathcal{C}({z};{y_{d}}).$ (7)
We further use the point forecast of the trained RF model, i.e.,
$\hat{f}_{D}^{RF}(\bar{x})$, in solving the following deterministic UC
problem:
$\displaystyle\mathsf{PFUC}\colon\ \underset{z\in\mathscr{Z}}{\text{min}}\ \mathcal{C}\big({z};\hat{f}_{D}^{RF}(\bar{x})\big).$ (8)
To obtain the minimum out-of-sample cost that could ideally be attained, we
solve the following ideal UC ($\mathsf{IUC}$) problem, which has perfect
foresight of the net load observation $\bar{y}$:
$\displaystyle\mathsf{IUC}\colon\ \underset{z\in\mathscr{Z}}{\text{min}}\ \mathcal{C}({z};\bar{y}).$ (9)
### IV-C Results
We conduct the experiments on a 64 GB-RAM computer containing an Apple M1 Max
chip with 10-core CPU. We build the RF models and select the covariates under
Python using scikit-learn 1.1.2. To model the UC instances, we extend the
`UnitCommitment.jl` package [8] to the two-stage stochastic setting, and we
solve all UC problems under Julia 1.6.1 with Gurobi 9.5.0 as the solver. The
penalty cost for load curtailment is set at $\$10,000/MWh$ in all experiments.
We initially construct the training set with the first 100 observations, i.e.,
$D=100$. We use the validation set to tune the hyperparameters of the proposed
predictive prescription framework (indicated as $\mathsf{w-CSUC}$) as well as
those of the $\mathsf{PFUC}$ and $\mathsf{ew-CSUC}$ methods. To assess how the
methods perform out-of-sample, we use the measurements in the test set to
determine how each method would have committed the generators for the
corresponding days in the test set, then we observe the actual net load levels
that had materialized, and ultimately use the resulting total cost and the
mean unserved energy (MUE) to score the performance of each method. In Table
II, we report for each method the average of the total cost and the MUE
computed over all 62 observations in the test set. We separately tabulate the
MUE results in addition to the total cost as the latter may greatly vary with
the choice of the penalty cost for load curtailment.
TABLE II: Out-of-sample costs and MUE levels
method | total cost ($) | MUE (MWh)
---|---|---
$\mathsf{IUC}$ | 380089.8 | 0.0
$\mathsf{w-CSUC}$ | 401377.3 | 0.0
$\mathsf{ew-CSUC}$ | 414566.0 | 0.0
$\mathsf{NSUC}$ | 416429.5 | 0.0
$\mathsf{PFUC}$ | 453638.1 | 7.2
Figure 1: Out-of-sample performances. In Fig. 1(a), solid lines indicate
the mean values over the 62 observations in the test set, and shaded regions
show only one-tenth of the standard deviations around the mean values in order
to avoid excessive overlap between shaded regions. Fig. 1(b) depicts the
total cost and the computation time for obtaining the weights
${\omega}_{D,d}(\bar{x})$ under each set of features.
The results in Table II make clear the monetary benefits that can be reaped by
implementing $\mathsf{w-CSUC}$, which delivers the lowest total cost among all
methods except the perfect-foresight policy, yielding a total cost that is
only 3.86% higher than that of $\mathsf{IUC}$. We highlight that the lower cost under
$\mathsf{w-CSUC}$ in comparison with that under the runner-up method
$\mathsf{ew-CSUC}$ provides an empirical justification for manipulating the
empirical weights before using them in solving the reweighted SAA problem. The
$\mathsf{NSUC}$ method fails to outperform $\mathsf{w-CSUC}$ and trails
closely behind $\mathsf{ew-CSUC}$, which we ascribe to $\mathsf{NSUC}$ utilizing
equiprobable scenarios without taking the covariate observations into account.
The results further make evident the shortcomings of drawing upon
deterministic forecasts and ignoring the stochastic nature of net load in
solving the UC problem, as the policies under $\mathsf{PFUC}$ deliver the
highest total cost and necessitate involuntary load curtailment.
In certain practical applications, we may fail to collect a large number of
observations that can be used in constructing the training set.
At the same time, working with a larger training set requires solving the
$\mathsf{w-CSUC}$ problem with a greater number of scenarios, which drives up
the computational burden. As such, we assess the performance of each method
under different values of $D$. In doing so, we keep all the hyperparameters
except $\xi$ constant at the values determined for $D=100$ and use grid search
to tune the value of $\xi$ on the validation set. Fig. 1(a) visualizes
the total cost delivered by each method, and it illustrates for the proposed
framework the value of $\xi$ determined through grid search and the time to
solve the $\mathsf{w-CSUC}$ problem so as to obtain the policy
$\hat{z}_{D}(\bar{x})$.
The plots in Fig. 1(a) echo the order of performance in Table II, as
across most values of $D$, the policies of $\mathsf{w-CSUC}$ beat $\mathsf{ew-
CSUC}$, which in turn outperforms $\mathsf{NSUC}$. Note that the policies
under $\mathsf{PFUC}$ exhibit the worst out-of-sample performance for most
investigated training set sizes. We further point out that the total costs
obtained under the proposed framework are tightly clustered around their mean
values and less spread out compared with the benchmark methods.
We remark upon the tight coupling between the training set size, the solution
time, and the out-of-sample cost. Increasing $D$ from $10$ to $100$ markedly
improves (decreases) the out-of-sample performance of $\mathsf{w-CSUC}$,
albeit with diminishing returns as $D$ grows from $50$ to $100$. We also observe
that the out-of-sample performance saturates around $D=100$ and sporadically
deteriorates (increases) as $D$ grows beyond $100$, during which the solution
time precipitously increases.
One can draw from Fig. 1(a) valuable insights into the value of $\xi$
determined via grid search. Most notably, the empirical weights are amplified
and suppressed the most under $D=365$, signifying an information-rich training
set. This observation drives home that training the RF model using a full
year’s data enables an accurate characterization of the similarity between
observations. Note that the empirical weights for $D\in\\{50,100,200,300\\}$
are also boosted and attenuated, though not as much as for $D=365$, whereas
those for $D\in\\{10,20\\}$ are used as is, indicating that amplifying and
suppressing the empirical weights obtained with such small datasets is not
warranted.
We next investigate the influence of the covariate selection method laid out
in Section III-E on the out-of-sample performance and the computation time. To
this end, we repeat the experiments for $D=100$ under the sets of covariates
supported on $\mathscr{X}^{r}$ and $\mathscr{X}^{p}$. We tune the
hyperparameters for each set of covariates using the validation set and
leverage the test set to compute the out-of-sample performances. For each set of
covariates, we measure the time for computing the weights
${\omega}_{D,d}(\bar{x})$ over 30 simulation runs, which is comprised of the
time for training the RF model and that for evaluating the weight for all
observations in the test set. Fig. 1(b) bears out the relative merits
of the proposed covariate selection method, which notches a 3.90% reduction in
the average time for evaluating the weights ${\omega}_{D,d}(\bar{x})$ vis-à-
vis those under the initial set of features without compromising on the out-
of-sample performance.
## V Conclusion
In this paper, we worked out a predictive prescription framework that jointly
leverages the random forest (RF) algorithm with a conditional stochastic
optimization model so as to take unit commitment decisions under uncertain net
load. We put forth a method to manipulate the empirical weights derived from
the RF model based on the size and the prescriptive power of the training set,
and we suggested a hybrid method to select pertinent covariates for net load. By
treating the hyperparameters of the RF model as those of the overall
framework, we tuned them based on the ultimate task for which the framework is
developed, that is, bringing forth a lower out-of-sample cost. The extensive
numerical studies conducted illustrate the capabilities of the framework in
reducing not only the out-of-sample cost and load curtailment, but also the
computation time compared with various benchmarks.
## References
* [1] D. Bertsimas and N. Kallus, “From predictive to prescriptive analytics,” _Management Science_ , vol. 66, no. 3, pp. 1025–1044, 2020.
* [2] A. C. Stratigakos, S. Camal, A. Michiorri, and G. Kariniotakis, “Prescriptive trees for integrated forecasting and optimization applied in trading of renewable energy,” _IEEE Transactions on Power Systems_ , 2022.
* [3] X. Chen, Y. Yang, Y. Liu, and L. Wu, “Feature-driven economic improvement for network-constrained unit commitment: A closed-loop predict-and-optimize framework,” 2021.
* [4] M. A. Muñoz, S. Pineda, and J. M. Morales, “A bilevel framework for decision-making under uncertainty with contextual information,” _Omega_ , vol. 108, p. 102575, 2022.
* [5] C. Wang, Y. Wang, Z. Ding, T. Zheng, J. Hu, and K. Zhang, “A transformer-based method of multi-energy load forecasting in integrated energy system,” _IEEE Transactions on Smart Grid_ , 2022.
* [6] CAISO. (2022) California ISO Open Access Same-time Information System (OASIS). [Online]. Available: http://oasis.caiso.com/
* [7] M. Sengupta, Y. Xie, A. Lopez, A. Habte, G. Maclaurin, and J. Shelby, “The national solar radiation data base (nsrdb),” _Renewable and sustainable energy reviews_ , vol. 89, pp. 51–60, 2018.
* [8] A. S. Xavier, A. M. Kazachkov, O. Yurdakul, and F. Qiu, “UnitCommitment.jl: A Julia/JuMP Optimization Package for Security-Constrained Unit Commitment (Version 0.3),” Zenodo (2022). [Online]. Available: DOI:10.5281/zenodo.4269874
# Fault Simulation for Superconducting Quantum Circuits
Mingyu Huang, Wang Fang, Ji Guan, Mingsheng Ying
State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
Mingsheng Ying is also with the Department of Computer Science and Technology, Tsinghua University, Beijing, China.
###### Abstract
This paper introduces a fast fault simulation algorithm for superconducting
quantum circuits with realistic fault models based on real defect behavior or
control errors of the circuits. The algorithm is developed on a novel tensor
network representation of the fault models with fast tensor network
computation. The effectiveness and utility of the algorithm are demonstrated
by experimenting on a series of practical quantum circuits executed on
Google’s _Sycamore_ quantum superconducting processor. As a result, up to 225
qubits (for the variational quantum circuits) are handled (within about 15
minutes), outperforming state-of-the-art fault simulation algorithms in both
efficiency and scalability.
###### Index Terms:
Quantum computing, quantum circuits, fault simulation, tensor network
## I Introduction
Nowadays, various quantum processors have been manufactured, and quantum
supremacy has been achieved. Among the existing hardware implementations of
quantum processors, superconducting quantum computing is one of the most
promising routes. Specifically, superconducting quantum bits (qubits) have
faster gate time, which means the execution of the quantum circuit will be
faster. Additionally, superconducting implementation is also capable of using
a large number of qubits. The superconducting processor _Sycamore_ of Google
with 53 qubits is the first processor that achieves quantum supremacy (beyond
classical computing) [1]. After that, the University of Science and Technology of
China implemented a 2-dimensional quantum random walk on a 62-qubit
superconducting quantum processor [2]. Thus, the superconducting quantum
computing implementation may bring practical applications to the current NISQ
(Noisy Intermediate-Scale Quantum) era [3], where quantum processors contain
50 to a few hundred noisy qubits.
The core of the processors is the circuits for executing computational tasks.
Before the circuits are physically built, simulation is essential in digital
circuit verification, test development, design debugging, and diagnosis. In
particular, _fault simulation_ is to simulate faulty digital circuits with two
main objectives: one is fault-free (logic) simulation to help the designer
verify that the design of digital circuits conforms to the intended functional
specifications; the other is to determine the efficiency of test patterns in
detecting the modeled faults of interest, with such patterns being usually
generated by automatic test pattern generators (ATPG) [4, 5]. Now fault
simulation can be efficiently applied to large-scale integrated circuits and
has become a standard technique integrated into electronic design automation
(EDA).
Currently, in quantum computing, however, physicists usually experimentally
build the designed quantum circuits and then estimate their performance in the
presence of _quantum faults_. Here, the quantum faults come from the
inaccuracy of the control pulse and the surrounding environment, which is
inevitable in the NISQ era. For example, a circuit with four qubits and four
controlled quantum logic gates was implemented in [6] for an experiment of HHL
algorithm [7], which could exponentially speed up solving linear systems over
classical computers. The performance test for three different input states
shows that compared to the ideal outcomes, the output states have fidelities
of 99.3%, 82.5%, and 83.6%, respectively. Google’s _Sycamore_ has used a
similar way to confirm quantum supremacy on sampling a quantum circuit with
cross-entropy benchmark fidelity 0.2% [1].
Fault simulation of quantum circuits (on classical computers) before
physically building them is helpful and more affordable due to the expensive
resources and strict conditions (e.g., the environmental temperature must be
close to absolute zero) of experimentally implementing quantum circuits and
the uncertainty of reading out outputs on real quantum devices. Therefore,
developing fault simulation algorithms for (superconducting) quantum circuits
is urgent and necessary. However, a direct generalization of the existing
fault simulation method for classical circuits to quantum circuits is expected
to be unsuccessful. One primary reason is that quantum fault simulation is
generally quantitative rather than qualitative as classical fault simulation:
the inputs/outputs of quantum circuits are vectors or matrices of complex
numbers, while those of classical circuits are boolean values — 0 or 1. This
fundamental difference requires quantum fault simulation to be built in its
own way.
Quantum Fault Simulation Algorithms: We first recall the state-of-the-art
simulation algorithms for noiseless and noisy quantum circuits. For both
cases, a direct method is to calculate the transitions of the state vector or
density matrix representation through a sequence of quantum gates modeled by
unitary matrices. This method is simple and usually integrated into software
development kits for quantum computing (e.g., Qiskit, Cirq, etc.). However,
the exponential number of terms in these representations limits the number of
qubits that can be simulated. To overcome this issue, one of the most
popular methods is to use a data structure, called a _tensor network_, to capture
the locality (a quantum gate is applied to only 1 or 2 qubits) and regularity
(the pattern of quantum gates in practical circuits is regular) of quantum
circuits; this method has been successfully used in the simulation of sizeable
noiseless quantum circuits [8, 9, 10, 11, 12, 13]. However, the tensor network
method cannot directly simulate noisy quantum circuits. Another approach is
the Decision Diagram-based (DD-based) method [14, 15], which is generally more
lightweight than the tensor network-based one. It stores all amplitudes of a
state in a compact data structure, using less memory and performing better on
circuits with simpler structures. These methods work well for circuits that
can keep the size of the data structures small for each intermediate state in
the simulation process, and they achieve satisfactory performance in
full-amplitude simulation of such circuits. However, superconducting quantum
circuits are composed of gates with arbitrary parameters, and the performance
of such methods on these circuits is ineffective [14, 16].
Although the DD-based methods can be adapted to support noisy quantum circuit
simulation [17], the extra noise operator leads to a high cost of runtime.
Current superconducting quantum processors have a relatively large number of
qubits, and most circuits include gates with arbitrary rotation angles, this
motivates us to develop a new simulation algorithm based on the tensor network
representation of noisy quantum circuits to perform fault simulation on the
superconducting quantum circuits.
Contributions of This Paper: The tensor network-based techniques cannot be
immediately used to simulate faulty quantum circuits, as quantum faults
affected by quantum noise cannot be directly represented by tensor networks.
We solve this problem by representing a faulty quantum circuit as a double-
size tensor network. An algorithm is developed and implemented with Google
TensorNetwork [18] to simulate all kinds of quantum faults. Our experimental
results show that, with our new method, the simulated size of practical faulty
superconducting quantum circuits can be boosted to 225 qubits for the QAOA
algorithm.
## II Fault Models
As a basis of our work, this section aims to present fault models of
superconducting quantum circuits [19].
We start by recalling some basic concepts of quantum circuits. Ideally, a
quantum computer without noise is a closed system. In this case, quantum data
are mathematically modeled as complex unit vectors in a $2^{n}$-dimensional
Hilbert (linear) space $\mathcal{H}$. Such a quantum datum is usually called a
_pure state_ and written as $|\psi\rangle$ in the Dirac notation, and $n$
represents the number of involved _quantum bits (qubits)_. Specifically, a
qubit is a quantum datum in a $2$-dimensional Hilbert space, denoted by
$|q\rangle=\begin{pmatrix}a\\\ b\end{pmatrix}=a|0\rangle+b|1\rangle$ with
$|0\rangle=\begin{pmatrix}1\\\ 0\end{pmatrix}$ and
$|1\rangle=\begin{pmatrix}0\\\ 1\end{pmatrix}$, where complex numbers $a$ and
$b$ satisfy the normalization condition $|a|^{2}+|b|^{2}=1$. Here, the
orthonormal basis $\\{|0\rangle$, $|1\rangle\\}$ of the Hilbert space
corresponds to the values $\\{0,1\\}$ of a bit in classical computers. A
quantum computing task is implemented by a _quantum circuit_ , which is
mathematically represented by a $2^{n}\times 2^{n}$ unitary matrix $U$, i.e.,
$U^{\dagger}U=UU^{\dagger}=I_{n}$, where $U^{\dagger}$ is the (entry-wise)
conjugate transpose of $U$ and $I_{n}$ is the identity matrix on
$\mathcal{H}$. For an input $n$-qubit datum $|\psi\rangle$, the output of the
circuit is a datum of the same size: $|\psi^{\prime}\rangle=U|\psi\rangle.$
Like its classical counterpart, a quantum circuit $U$ consists of a sequence
(product) of _quantum logic gates_ $U_{i}$, i.e., $U=U_{d}\cdots U_{1}$. Here
$d$ is the depth of the circuit $U$. Each gate $U_{i}$ only non-trivially
operates on one or two qubits. We list commonly used 1-qubit gates in Table I.
Arbitrary 1-qubit gates can be decomposed into 1-qubit rotation gates
$R_{x}(\theta)$, $R_{y}(\theta)$ and $R_{z}(\theta)$ with rotation parameter
$\theta$, and superconducting quantum processors implement an arbitrary 1-qubit
gate by executing a sequence of these rotation gates. In addition, for any
1-qubit logic gate $U$, we can generate a 2-qubit logic gate — the controlled-$U$
(CU) gate, which applies $U$ on the second (target) qubit if and only if the first
(control) qubit is $|1\rangle$. Specifically, the controlled-Z (CZ) gate is
commonly used in superconducting quantum circuits.
TABLE I: 1-Qubit Gates

H | $\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\\ 1&-1\end{pmatrix}$ | X ($\sigma_{x}$) | $\begin{pmatrix}0&1\\\ 1&0\end{pmatrix}$
---|---|---|---
Y ($\sigma_{y}$) | ${\left(\begin{matrix}0&-i\\\ i&0\end{matrix}\right)}$ | Z ($\sigma_{z}$) | $\left(\begin{matrix}1&0\\\ 0&-1\end{matrix}\right)$
T | ${\left(\begin{matrix}1&0\\\ 0&e^{i\pi/4}\end{matrix}\right)}$ | $R_{x}(\theta)$ | ${\left(\begin{matrix}\cos\frac{\theta}{2}&-i\sin\frac{\theta}{2}\\\ -i\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{matrix}\right)}$
$R_{y}(\theta)$ | ${\left(\begin{matrix}\cos\frac{\theta}{2}&-\sin\frac{\theta}{2}\\\ \sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{matrix}\right)}$ | $R_{z}(\theta)$ | ${\left(\begin{matrix}e^{-i\theta/2}&0\\\ 0&e^{i\theta/2}\end{matrix}\right)}$
$\begin{array}[]{c}\text{CU}=\begin{aligned}
\Qcircuit@C=1em@R=1em{&\ctrl{1}&\qw\\\
&\gate{U}&\qw}\end{aligned}\end{array}\begin{array}[]{c}\text{CZ}=\begin{aligned}
\Qcircuit@C=1em@R=1em{&\ctrl{1}&\qw\\\
&\ctrl{-1}&\qw}\end{aligned}=\left(\begin{matrix}1&0&0&0\\\ 0&1&0&0\\\
0&0&1&0\\\ 0&0&0&-1\end{matrix}\right)\\\ \end{array}$
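The gates above can be written down directly as matrices; the following is a minimal NumPy sketch (the function names `Rz` and `controlled` are ours, not from the paper) that also verifies unitarity:

```python
import numpy as np

# 1-qubit gate matrices from Table I
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def Rz(theta):
    # R_z(theta) = diag(e^{-i theta/2}, e^{i theta/2})
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def controlled(U):
    # CU applies U on the target qubit iff the control qubit is |1>
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U
    return CU

CZ = controlled(Z)

# Every gate is unitary: U†U = I
for U in (H, X, Rz(0.3), CZ):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))
```

The same `controlled` helper reproduces the CZ matrix shown above.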
Now, we are ready to introduce fault models for superconducting quantum
circuits. By the principle of quantum mechanics, faults in quantum circuits
fall into two classes — unitary faults and super-operator faults.
Unitary Faults: Elementary 2-qubit gates (e.g., the CZ gate) are implemented by
applying control pulses to effective Hamiltonians in real superconducting
quantum chips. A typical CZ gate can be realized by modulating the frequency
of the control qubit, which is called a $\sigma_{z}$ control. Additional
dynamical phases are accumulated during the pulse. However, the
$\sigma_{z}$ control requires fine-tuning, so errors in the pulse
parameters lead to an undesired $R_{z}(\theta)$, accounting for the fidelity
loss. The effect of this fault is to append a controlled-$R_{z}$ gate after
the ideal CZ gate, as shown in the dashed box of the following circuit for a
quantum approximate optimization algorithm (QAOA) [20], which is a variational
quantum algorithm for solving combinatorial optimization problems. Such a
fault is called a _unitary fault_ , which is represented by a unitary matrix.
$\displaystyle\begin{array}[]{c}\Qcircuit@C=0.5em@R=0.5em@!R{&\gate{\mathrm{R_{Y}}\,(\mathrm{\frac{-\pi}{2}})}&\gate{\mathrm{R_{Z}}\,(\mathrm{\frac{\pi}{2}})}&\ctrl{1}&\ctrl{1}&\gate{\mathrm{R_{X}}\,(\mathrm{\pi})}&\qw\\\
&\gate{\mathrm{R_{Y}}\,(\mathrm{\frac{-\pi}{2}})}&\gate{\mathrm{R_{Z}}\,(\mathrm{\frac{\pi}{2}})}&\ctrl{-1}&\gate{\mathrm{R_{Z}}\,(\theta)}&\gate{\mathrm{R_{X}}\,(\mathrm{\pi})}&\qw\gategroup{1}{4}{2}{5}{.6em}{--}}\\\
\text{A 2-qubit QAOA circuit with a unitary fault on CZ gate}\end{array}$
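A unitary fault of this kind is easy to model numerically: a sketch (the value of $\theta$ and the input state are illustrative choices, not values from the paper) that appends a controlled-$R_z(\theta)$ after the ideal CZ and observes the drop in output overlap:

```python
import numpy as np

def Rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def controlled(U):
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U
    return CU

CZ = controlled(np.diag([1.0, -1.0]))
theta = 0.1                               # hypothetical pulse-parameter error
CZ_faulty = controlled(Rz(theta)) @ CZ    # unitary fault appended after CZ

# Overlap between ideal and faulty outputs on |++> drops below 1
psi = np.ones(4, dtype=complex) / 2.0
overlap = abs(np.vdot(CZ @ psi, CZ_faulty @ psi)) ** 2
assert overlap < 1.0
```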
Super-operator Faults: In the current NISQ era, quantum noise is unavoidable,
so we have to consider its effect on quantum circuits. In this case,
uncertainty is introduced into quantum computing such that quantum states
become mixed states instead of pure states. A _mixed state_ is an
ensemble $\\{(p_{k},|\psi_{k}\rangle)\\}$, meaning the quantum system is in
state $|\psi_{k}\rangle$ with probability $p_{k}$. Mathematically, it can be
described by a $2^{n}\times 2^{n}$ _density matrix_ $\rho$ (a Hermitian positive
semidefinite matrix with unit trace, i.e., ${\rm tr}(\rho)=1$, where the trace
${\rm tr}(\rho)$ of $\rho$ is defined as the sum of the diagonal elements of
$\rho$) on $\mathcal{H}$:
$\rho=\sum_{k}p_{k}|\psi_{k}\rangle\langle\psi_{k}|,$ where $\langle\psi_{k}|$
is the conjugate transpose of $|\psi_{k}\rangle$, i.e.,
$\langle\psi_{k}|=|\psi_{k}\rangle^{\dagger}$. For convenience, we use $\psi$
to denote the density operator $|\psi\rangle\langle\psi|$ of the pure state $|\psi\rangle$.
write $\mathcal{D(H)}$ for the set of all (mixed) quantum states on
$\mathcal{H}$. In this situation, a computational task starting at a mixed
state $\rho$ is finished by a mapping $\mathcal{E}$:
$\rho^{\prime}=\mathcal{E}(\rho),$ where $\mathcal{E}$ is the noisy
implementation of the noiseless quantum circuit $U$. Such a $\mathcal{E}$ is
called a _super-operator_ , and admits a _Kraus matrix form_ [21]: there
exists a finite set $\\{E_{k}\\}_{k\in\mathcal{K}}$ of matrices on
$\mathcal{H}$ such that
$\mathcal{E}(\rho)=\sum_{k\in\mathcal{K}}E_{k}\rho
E_{k}^{\dagger}\quad\textrm{ with
}\sum_{k\in\mathcal{K}}E_{k}^{\dagger}E_{k}=I_{n},$
where $\\{E_{k}\\}_{k\in\mathcal{K}}$ is called _Kraus matrices_ of
$\mathcal{E}$. Briefly, $\mathcal{E}$ is represented as
$\mathcal{E}=\\{E_{k}\\}_{k\in\mathcal{K}}$. Similar to noiseless quantum
circuit $U$, noisy quantum circuit $\mathcal{E}$ also consists of a sequence
(mapping composition) of (noisy) gates $\\{\mathcal{E}_{i}\\}$, i.e.,
$\mathcal{E}=\mathcal{E}_{d}\circ\cdots\circ\mathcal{E}_{1}$, where each
$\mathcal{E}_{i}$ is either a noiseless gate or a noisy one. The noisy one is
called a _super-operator fault_ , which is represented by Kraus matrices.
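Applying a super-operator in its Kraus form is a one-line sum; a minimal sketch, using a bit-flip channel as a toy super-operator fault of our own choosing (not one of the paper's fault models), that also checks the completeness condition:

```python
import numpy as np

def apply_channel(kraus, rho):
    # E(rho) = sum_k E_k rho E_k^dagger
    return sum(E @ rho @ E.conj().T for E in kraus)

# Toy super-operator fault: bit flip with probability p
p = 0.2
X = np.array([[0, 1], [1, 0]], dtype=complex)
kraus = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X]

# Completeness sum_k E_k^dagger E_k = I guarantees trace preservation
assert np.allclose(sum(E.conj().T @ E for E in kraus), np.eye(2))

rho = np.diag([1.0, 0.0]).astype(complex)   # pure state |0><0|
rho_out = apply_channel(kraus, rho)
assert np.isclose(np.trace(rho_out).real, 1.0)
```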
In superconducting quantum hardware, decoherence is a typical noisy fault
resulting from the surrounding environment of the hardware. Decoherence can be
mathematically explained as the decay effect of the off-diagonal element in
the density matrices of mixed states. The decay comes from amplitude damping
and phase damping.
(1) Amplitude damping $\mathcal{E}_{a}$ is a super-operator fault describing
the effect that the energy of an excited state gradually decays; it has the
Kraus matrices:
$E_{a,1}=\begin{pmatrix}1&0\\\ 0&e^{-\Delta t/2T_{1}}\end{pmatrix},\ \
E_{a,2}=\begin{pmatrix}0&\sqrt{1-e^{-\Delta t/T_{1}}}\\\ 0&0\end{pmatrix}$
where $T_{1}$ is the energy relaxation time and $\Delta t$ is the gate time
[19]. (2) Phase damping $\mathcal{E}_{p}$ is also a super-operator fault; it
refers to pure dephasing, in which the off-diagonal elements decay without
affecting the populations of the diagonal elements. Phase damping can be described
by three Kraus matrices:
$E_{p,0}=e^{-\Delta t/2T_{\varphi}}I,\ \
E_{p,1}=\sqrt{1-e^{-\Delta t/T_{\varphi}}}|0\rangle\langle 0|,\ \
E_{p,2}=\sqrt{1-e^{-\Delta t/T_{\varphi}}}|1\rangle\langle 1|$
where $T_{\varphi}$ is defined by
$\frac{1}{T_{\varphi}}=\frac{1}{T_{2}}-\frac{1}{2T_{1}}$, and $T_{2}$ is the total
dephasing time [19]. Decoherence $\mathcal{E}_{D}$ is the composition of these
two super-operators, i.e.,
$\mathcal{E}_{D}=\mathcal{E}_{p}\circ\mathcal{E}_{a}$.
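The two damping channels above translate directly into code; a sketch in which the numerical $T_1$, $T_2$, and $\Delta t$ values are hypothetical hardware parameters, showing that $\mathcal{E}_{D}$ shrinks the coherence of $|+\rangle\langle+|$ while preserving the trace:

```python
import numpy as np

def amplitude_damping(dt, T1):
    # Kraus matrices E_{a,1}, E_{a,2} from the text
    decay = np.exp(-dt / T1)
    E1 = np.diag([1.0, np.sqrt(decay)])
    E2 = np.array([[0.0, np.sqrt(1 - decay)], [0.0, 0.0]])
    return [E1, E2]

def phase_damping(dt, Tphi):
    # Kraus matrices E_{p,0}, E_{p,1}, E_{p,2} from the text
    keep = np.exp(-dt / Tphi)
    E0 = np.sqrt(keep) * np.eye(2)
    E1 = np.sqrt(1 - keep) * np.diag([1.0, 0.0])   # |0><0| term
    E2 = np.sqrt(1 - keep) * np.diag([0.0, 1.0])   # |1><1| term
    return [E0, E1, E2]

def apply_channel(kraus, rho):
    return sum(E @ rho @ E.conj().T for E in kraus)

dt, T1, T2 = 50e-9, 30e-6, 20e-6          # hypothetical times (seconds)
Tphi = 1.0 / (1.0 / T2 - 1.0 / (2.0 * T1))

rho = np.full((2, 2), 0.5, dtype=complex)  # |+><+|
# Decoherence E_D = E_p ∘ E_a shrinks the off-diagonal coherence
rho1 = apply_channel(phase_damping(dt, Tphi),
                     apply_channel(amplitude_damping(dt, T1), rho))
assert abs(rho1[0, 1]) < abs(rho[0, 1])
```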
At the end of quantum circuits, we cannot directly observe a qubit as it is
physically a particle at the microscopic scale (near or less than $10^{-9}$
meters). The only way allowed by quantum mechanics to extract classical
information is through a quantum measurement, which is mathematically modeled
by a set $\\{P_{i}\\}_{i\in\mathcal{O}}$ of projection matrices on its state
(Hilbert) space $\mathcal{H}$ with $\mathcal{O}$ being the set of outputs and
$\sum_{i\in\mathcal{O}}P_{i}=I_{n}$. This observing process is probabilistic:
for a given quantum state $\rho$, a measurement outcome $i\in\mathcal{O}$ is
obtained with probability $p_{i}={\rm tr}(P_{i}\rho).$
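For a single qubit measured in the computational basis, the rule $p_{i}={\rm tr}(P_{i}\rho)$ looks as follows (the density matrix here is an arbitrary valid example of our choosing):

```python
import numpy as np

# Projective measurement in the computational basis: P_0 = |0><0|, P_1 = |1><1|
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # valid density matrix

# p_i = tr(P_i rho)
probs = [np.trace(Pi @ rho).real for Pi in P]

# Since P_0 + P_1 = I, the outcome probabilities sum to 1
assert np.isclose(sum(probs), 1.0)
```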
## III Fault Simulation of Quantum Circuits
As illustrated in Fig. 1, the result of fault simulation serves fault
detection and diagnosis. In this section, we first introduce fault detection
and diagnosis for quantum circuits, and then formulate the simulation problem
addressed in our work.
[Flow diagram: Test Circuit → Fault Simulation → Fault Effect → Detection and Diagnosis, with Fault Model and Input and Output State feeding into Fault Simulation.]
Figure 1: Workflow of Fault Detection.
Fault Detection and Diagnosis for Quantum Circuits: Following the strategy of
fault detection for classical digital circuits, for each modeled fault of an
ideal quantum circuit $U$, a carefully designed test state $|\psi_{t}\rangle$
is applied to a faulty circuit $\mathcal{E}_{f}$ and then the output state
$\mathcal{E}(\psi_{t})$ is observed by a measurement
$\\{P_{i}\\}_{i\in\mathcal{O}}$ according to each $|\psi_{t}\rangle$ to get
the probability distribution $\\{{\rm
tr}(P_{i}\mathcal{E}_{f}(\psi_{t}))\\}_{i\in\mathcal{O}}$ of outcomes
$\mathcal{O}$. If the distribution matches the expected one $\\{{\rm
tr}(P_{i}\mathcal{U}(\psi_{t}))\\}_{i\in\mathcal{O}}$ for
$\mathcal{U}=\\{U\\}$, then the fault is deemed not to be present in $U$.
Furthermore, if all tests corresponding to different modeled faults pass, the
circuit $U$ is guaranteed to contain no modeled faults. On the other hand,
such test information (the probability distributions of measurement outcomes)
can be used to find which modeled fault appears or which gate is faulty. This
problem is known as the diagnosis of faulty quantum circuits (for more
details, see [22, 23, 24], including how to generate such test states
$|\psi_{t}\rangle$ and measurement $\\{P_{i}\\}_{i\in\mathcal{O}}$). It is
formally defined as follows.
###### Definition 1
Given a faulty quantum circuit $\mathcal{E}_{f}$, a test state
$|\psi_{t}\rangle$ and a measurement $\\{P_{i}\\}_{i\in\mathcal{O}}$, the
fault simulation for detection and diagnosis is to compute probabilities ${\rm
tr}(P_{i}\mathcal{E}_{f}(\psi_{t}))$ for all $i\in\mathcal{O}$.
Note that $P_{i}$ is the projection onto some subspace $\mathcal{H}_{i}$ of
$\mathcal{H}$: $P_{i}|\psi\rangle=|\psi\rangle$ for all
$|\psi\rangle\in\mathcal{H}_{i}$; $P_{i}|\psi\rangle=0$ for all
$|\psi\rangle\in\mathcal{H}_{i}^{\perp}$, the orthogonal complement of
$\mathcal{H}_{i}$. Thus, if $\\{|\phi_{ik}\rangle\\}$ is an orthonormal basis
of $\mathcal{H}_{i}$, then $P_{i}$ admits the _eigenvalue decomposition_ :
$P_{i}=\sum_{k}|\phi_{ik}\rangle\langle\phi_{ik}|.$ With this decomposition,
we have:
$\displaystyle{\rm tr}(P_{i}\mathcal{E}_{f}(\psi_{t}))$ $\displaystyle={\rm
tr}(\sum_{k}|\phi_{ik}\rangle\langle\phi_{ik}|\mathcal{E}_{f}(\psi_{t}))$
$\displaystyle=\sum_{k}\langle\phi_{ik}|\mathcal{E}_{f}(\psi_{t})|\phi_{ik}\rangle$
Fault Effect of Quantum Circuits: In the field of fault detection and
diagnosis, the projection $P_{i}$ is usually defined by a pure state
$|\phi_{i}\rangle$, i.e., $P_{i}=|\phi_{i}\rangle\langle\phi_{i}|$ [23, 24].
Subsequently, what we need to calculate is just
$\langle\phi_{i}|\mathcal{E}_{f}(\psi_{t})|\phi_{i}\rangle$ for all
${i\in\mathcal{O}}$. Back to the general case of calculating ${\rm
tr}(P_{i}\mathcal{E}_{f}(\psi_{t}))$ for all ${i\in\mathcal{O}}$, we only need
to compute $\langle\phi_{ik}|\mathcal{E}_{f}(\psi_{t})|\phi_{ik}\rangle$ in
parallel for each $i$ and $k$. As a result, we summarize the task of our fault
simulation as the _fault effect_ in the following.
###### Definition 2
Given a faulty quantum circuit $\mathcal{E}_{f}$, a test state
$|\psi_{t}\rangle$ and a measurement defined by a state $|\psi_{e}\rangle$, the
simulation for the fault effect of $\mathcal{E}_{f}$ on $|\psi_{t}\rangle$ is to
compute $\langle\psi_{e}|\mathcal{E}_{f}(\psi_{t})|\psi_{e}\rangle$.
Recall that the fidelity between two quantum states $\rho$ and $\sigma$ is:
$F(\rho,\sigma)=[{\rm tr}(\sqrt{\rho^{1/2}\sigma\rho^{1/2}})]^{2}.$
In particular, for pure state $\psi=|\psi\rangle\langle\psi|$ and mixed state
$\sigma$, $F(\psi,\sigma)=\langle\psi|\sigma|\psi\rangle$. As a result, the
fault effect we defined is
$\langle\psi_{e}|\mathcal{E}_{f}(\psi_{t})|\psi_{e}\rangle=F(\psi_{e},\mathcal{E}_{f}(\psi_{t}))$.
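The fault effect can be computed directly from the density-matrix representation before turning to tensor networks; a small sketch (the helper name `fault_effect` and the Hadamard example are ours):

```python
import numpy as np

def fault_effect(kraus_seq, psi_t, psi_e):
    # F(psi_e, E_f(psi_t)) = <psi_e| E_f(|psi_t><psi_t|) |psi_e>
    rho = np.outer(psi_t, psi_t.conj())
    for kraus in kraus_seq:   # gates applied in order E_1, ..., E_d
        rho = sum(E @ rho @ E.conj().T for E in kraus)
    return np.vdot(psi_e, rho @ psi_e).real

# Sanity check: a noiseless gate with the matching expected output
# U|psi_t> gives fault effect exactly 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi_t = np.array([1.0, 0.0], dtype=complex)
psi_e = H @ psi_t
assert np.isclose(fault_effect([[H]], psi_t, psi_e), 1.0)
```

This direct method is exponential in the number of qubits, which is exactly what the tensor network algorithm of Section IV avoids.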
Since the fidelity measures how close two quantum states (density operators)
are, observe that if we choose $|\psi_{e}\rangle$ as the expected output state
$U|\psi_{t}\rangle$, the fault effect simply measures how close the real output
state is to the expected output state. Thus, in addition to providing essential
information for fault detection, fault simulation can also be used to predict
the performance of quantum circuits under noise, such as that of the HHL
algorithm mentioned in the introduction.
## IV Fast Fault Simulation Algorithm
In this section, we present our tensor network-based fast fault simulation
algorithm for computing fault effect $F(\psi_{e},\mathcal{E}_{f}(\psi_{t}))$.
Tensor Network Contraction: For a better understanding, let us explain tensor
network contraction using tensor diagram notation in the context of quantum
circuits (see [25] for more details). Briefly, a tensor is a multi-dimensional
array of complex numbers, depicted as a labeled box with zero or more open
legs. Each leg, labeled by an index (e.g., $i,j,k$), represents a dimension of
the array, and a complex number is a tensor without legs. For example, the
following notations represent a complex number (e.g., $e^{i\pi/4}$), a vector
(e.g., pure state $|\psi\rangle$), a matrix (e.g., quantum gate $U$) and a set
of matrices (e.g. quantum noise $\mathcal{E}=\\{E_{k}\\}_{k\in\mathcal{K}}$),
respectively.
$e^{i\pi/4}$$\lvert\psi\rangle$$j$$U$$i$$j$$\mathcal{E}$$j$$i$$k$
Furthermore, if multiple tensors exist in a diagram, then we call them a
tensor network. In this case, we can contract them by linking their legs with
the same indices. This operation is known as _tensor network contraction_ ,
which is a generalization of matrix (and vector) multiplication. For example,
vector-matrix-vector product
$\langle\phi|U|\psi\rangle=\sum_{ij}a_{i}U_{ij}b_{j}$ for
$|\phi\rangle=(a_{1},\ldots,a_{2^{n}})^{\dagger}$ and
$|\psi\rangle=(b_{1},\ldots,b_{2^{n}})^{\top}$, and matrix-matrix product
$(AB)_{ik}=\sum_{j}A_{ij}B_{jk}$ are illustrated as the following two tensor
contractions:
[Tensor diagrams: (left) boxes $\langle\phi|$, $U$, $\lvert\psi\rangle$ joined by legs $i$ and $j$ contract to the scalar $\langle\phi|U\lvert\psi\rangle$; (right) boxes $A$ and $B$ joined by leg $j$, with open legs $i$ and $k$, contract to the matrix $AB$.]
The left-hand sides of the two diagrams above are instances of tensor networks. Indeed,
all quantum circuits in the previous sections can be easily represented as
tensor networks, where a gate is a tensor and each leg of the gate is labeled
by an index in $\\{0,1\\}$.
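The two example contractions can be written concisely with `numpy.einsum`, where indices shared between operands are summed over and open indices survive (the numerical values are illustrative):

```python
import numpy as np

phi = np.array([1.0, 0.0])
psi = np.array([0.6, 0.8])
U = np.array([[0.0, 1.0], [1.0, 0.0]])

# <phi|U|psi> = sum_ij a_i U_ij b_j: no open legs remain, so the
# result is a scalar
scalar = np.einsum('i,ij,j->', phi, U, psi)
assert np.isclose(scalar, phi @ U @ psi)

# (AB)_ik = sum_j A_ij B_jk: legs i and k stay open, giving a matrix
A = np.arange(4.0).reshape(2, 2)
B = np.arange(4.0, 8.0).reshape(2, 2)
assert np.allclose(np.einsum('ij,jk->ik', A, B), A @ B)
```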
The benefit of representing quantum circuits by tensor networks is that tensor
networks can exploit the regularity and locality contained in the structure of
quantum circuits (which the plain matrix representation cannot). The complexity
of tensor network contraction for a $q$-local-interacting quantum circuit is
$T^{O(1)}\exp[O(qd)]$ [26], where $T$ is the number of gates (tensors), $d$ is
the depth of the circuit (tensor network), and a circuit is $q$-local-
interacting if, under a linear ordering of its qubits, each gate acts only on
qubits within distance $q$. Even though the worst case is exponential in $d$,
many efficient algorithms exist that implement tensor network contraction for
practical large-size quantum circuits (e.g., [8, 9, 10, 11, 12]).
Now we are ready to present our quantum fault simulation algorithm. The key
idea is to convert the noisy quantum circuits into a double-size tensor
network (Algorithm 1).
Fast Fault Simulation Algorithm: We can compute fault effect
$F(\psi_{e},\mathcal{E}_{f}(\psi_{t}))$ in the following way.
$\displaystyle F(\psi_{e},\mathcal{E}_{f}(\psi_{t}))$
$\displaystyle=\langle\psi_{e}|\mathcal{E}_{f}(\psi_{t})|\psi_{e}\rangle={\rm
tr}(\psi_{e}\mathcal{E}_{f}(\psi_{t}))$
$\displaystyle=\langle\Omega|[|\psi_{e}^{*}\rangle\langle\psi_{e}^{*}|\otimes\mathcal{E}_{f}(\psi_{t})]|\Omega\rangle$
$\displaystyle=\langle\Omega|[|\psi_{e}^{*}\rangle\langle\psi_{e}^{*}|\otimes\mathcal{E}_{d}\circ\cdots\circ\mathcal{E}_{1}(\psi_{t})]|\Omega\rangle$
$\displaystyle=\langle\psi_{e}^{*}|\otimes\langle\psi_{e}|(M_{\mathcal{E}_{d}}\cdots
M_{\mathcal{E}_{1}})|\psi_{t}^{*}\rangle\otimes|\psi_{t}\rangle$
where $|\psi_{t}^{*}\rangle$ is the entry-wise conjugate of
$|\psi_{t}\rangle$, $|\Omega\rangle$ is the (unnormalized) maximally entangled
state, i.e., $|\Omega\rangle=\sum_{j}|j\rangle\otimes|j\rangle$ with
$\\{|j\rangle\\}$ being an orthonormal basis of the Hilbert space $\mathcal{H}$,
and $M_{\mathcal{E}}=\sum_{k}E_{k}\otimes E_{k}^{*}$ for
$\mathcal{E}=\\{E_{k}\\}$ is called the _matrix representation_ of
$\mathcal{E}$ [27].
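The vectorization identity above can be checked numerically. The sketch below is an illustration rather than the paper's implementation (note that the exact placement of complex conjugates depends on the chosen vectorization convention; here $\mathrm{vec}(|i\rangle\langle j|)=|i\rangle\otimes|j\rangle$). It compares the direct computation of the fault effect with the computation via $M_{\mathcal{E}}=\sum_{k}E_{k}\otimes E_{k}^{*}$ for an amplitude-damping channel:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(n):
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

# A single-qubit amplitude-damping channel E = {E0, E1} (illustrative choice).
g = 0.3
E0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
E1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)

psi_t, psi_e = haar_state(2), haar_state(2)

# Direct computation: F = <psi_e| E(|psi_t><psi_t|) |psi_e>.
rho = sum(E @ np.outer(psi_t, psi_t.conj()) @ E.conj().T for E in (E0, E1))
F_direct = (psi_e.conj() @ rho @ psi_e).real

# Vectorized computation with M_E = sum_k E_k ⊗ E_k^*; under this convention
# the kets pair as |psi> ⊗ |psi^*>.
M = sum(np.kron(E, E.conj()) for E in (E0, E1))
F_vec = (np.kron(psi_e.conj(), psi_e) @ M @ np.kron(psi_t, psi_t.conj())).real

print(np.isclose(F_direct, F_vec))  # True
```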
Algorithm 1
FastFaultSimulation($\mathcal{E}_{f},|\psi_{t}\rangle,|\psi_{e}\rangle$)
Input: A faulty quantum circuit
$\mathcal{E}_{f}=\mathcal{E}_{d}\circ\cdots\circ\mathcal{E}_{1}$ with
$\mathcal{E}_{i}=\\{E_{ik}\\}_{k\in\mathcal{K}_{i}}$ for $d\geq i\geq 1$, a
test state $|\psi_{t}\rangle$, and an expected output $|\psi_{e}\rangle$.
Output: The fault effect $F(\psi_{e},\mathcal{E}_{f}(\psi_{t}))$.
1: Compute
$X=\langle\psi_{e}^{*}|\otimes\langle\psi_{e}|(M_{\mathcal{E}_{d}}\cdots
M_{\mathcal{E}_{1}})|\psi_{t}^{*}\rangle\otimes|\psi_{t}\rangle$ by calling a
tool for tensor network contraction.
2: return $X$.
We observe that $M_{\mathcal{E}}=\sum_{k\in\mathcal{K}}E_{k}\otimes E_{k}^{*}$
can be represented by the following left tensor network. In particular, for a
unitary $\mathcal{U}=\\{U\\}$, $M_{\mathcal{U}}$ is depicted as the right
tensor network.
$\displaystyle M_{\mathcal{E}}=\sum_{k\in\mathcal{K}}E_{k}\otimes
E_{k}^{*}=\begin{aligned} \Qcircuit@C=1em@R=1em{&\gate{\mathcal{E}}&\qw\\\
&\gate{\mathcal{E}^{*}}\qwx&\qw}\end{aligned}\qquad M_{\mathcal{U}}=U\otimes
U^{*}=\begin{aligned} \Qcircuit@C=1em@R=1em{&\gate{U}&\qw\\\
&\gate{U^{*}}&\qw}\end{aligned}$
With this observation, we get a serial connection of the two tensor networks
representing the $n$-qubit circuits $\mathcal{E}_{f}$ and $\mathcal{E}_{f}^{*}$.
Subsequently, we can compute
$\langle\psi_{e}^{*}|\otimes\langle\psi_{e}|(M_{\mathcal{E}_{d}}\cdots
M_{\mathcal{E}_{1}})|\psi_{t}^{*}\rangle\otimes|\psi_{t}\rangle$ by
contracting a tensor network of double size ($2n$ qubits). Based on this
idea, an algorithm (Algorithm 1) is developed to compute the fault effect
$F(\psi_{e},\mathcal{E}_{f}(\psi_{t}))$. For calculating fidelity with input
test states $|\psi_{t}\rangle$ whose expected output state
$|\psi_{e}\rangle$ is unknown, since $|\psi_{e}\rangle=U|\psi_{t}\rangle$, we can
append the tensor network computing $U|\psi_{t}\rangle$ to the end of the
tensor network of the faulty circuit. Moreover, we observe that the gates
after the last fault cancel out, reducing the cost of contracting the tensor
network, as shown in Fig. 2.
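This cancellation is easy to verify numerically. The toy check below (our own illustrative example with a random two-gate circuit and a phase-flip fault, not one of the paper's benchmarks) confirms that the ideal gate applied after the last fault drops out of the fidelity:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_unitary(n):
    """Haar-random unitary via QR with phase correction."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# Single-qubit toy circuit U = U2 U1 with a phase-flip noise N between them
# (hypothetical fault model, for illustration only).
U1, U2 = rand_unitary(2), rand_unitary(2)
p = 0.1
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - p) * np.eye(2, dtype=complex), np.sqrt(p) * Z]

psi_t = np.array([1, 0], dtype=complex)
psi_e = U2 @ U1 @ psi_t               # expected output |psi_e> = U|psi_t>

def apply(ch, rho):
    return sum(K @ rho @ K.conj().T for K in ch)

rho0 = np.outer(psi_t, psi_t.conj())

# Full faulty circuit: U1, then noise, then U2.
F_full = (psi_e.conj() @ U2 @ apply(kraus, U1 @ rho0 @ U1.conj().T)
          @ U2.conj().T @ psi_e).real

# U2 acts after the last fault and cancels against psi_e = U2 U1 psi_t.
phi = U1 @ psi_t
F_reduced = (phi.conj() @ apply(kraus, np.outer(phi, phi.conj())) @ phi).real

print(np.isclose(F_full, F_reduced))  # True
```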
[Circuit diagram elided: a four-wire tensor network pairing the 2-qubit QAOA
circuit with its entry-wise conjugate copy. Each wire starts in $\langle 0|$,
carries $\mathrm{R_{Y}}$, $\mathrm{R_{Z}}$, and CZ gates with decoherence
channels $\mathcal{E}_{D}$ and $\mathcal{E}_{D}^{*}$ inserted, followed by
$\mathrm{R_{Z}}(-\pi/2)$ and $\mathrm{R_{Y}}(\pi/2)$ and a final $|0\rangle$
projection.]
Figure 2: Circuit for calculating the fidelity of the 2-qubit QAOA circuit in
Section II with decoherence noise $\mathcal{E}_{D}$, obtained by appending
$|\psi_{e}\rangle=U|\psi_{t}\rangle$ to the end of the circuit and canceling
out redundant gates.
## V Experiments on Superconducting Quantum Circuits
In this section, we demonstrate the utility and effectiveness of our fault
simulation algorithm by computing the fault effects of superconducting quantum
circuits with the realistic fault models introduced before. All of these
circuit types have been implemented on Google's superconducting quantum devices.
Runtime Environment: All our experiments are carried out on a server with
Intel Xeon Platinum 8153 @ $2.00\mathrm{GHz}\times 256$ processors and 2048 GB
of memory, running CentOS 7.7.1908. We use the Google TensorNetwork
Python package [18] for the tensor network computation. The time-out (TO)
limit was set to 3600 seconds and the memory-out (MO) limit to 2048 GB.
Experimental Cases: We test our fault simulation algorithm by computing the
fault effects of three types of practical superconducting quantum circuits
which have already been implemented on Google’s _Sycamore_ quantum
superconducting processor: (i) Quantum Approximate Optimization Algorithm
(QAOA), (ii) Hartree-Fock Variational Quantum Eigensolver (VQE), and (iii)
_inst_ circuits used to show the quantum supremacy of _Sycamore_ [20, 28, 29].
The test states in all experiments are chosen to be
$|0\rangle\otimes\cdots\otimes|0\rangle$. To obtain fidelity results, we append
the tensor network representing $U|0\rangle\otimes\cdots\otimes|0\rangle$, as
explained in Section IV. We generate the faulty circuits by placing a
unitary fault after each CZ gate or by randomly inserting $m$ decoherence
noises into the circuits. Thus, the number of unitary faults equals the number
of CZ gates in the circuit, and the number $m$ of super-operator faults is in
the range $0\leq m\leq 16$.
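The fault-injection step can be sketched as follows; the circuit representation (a list of `(kind, qubits, payload)` tuples) and all helper names here are our own assumptions for illustration, not the paper's code:

```python
import numpy as np

def crz(theta):
    """4x4 matrix of a controlled-Rz(theta) unitary fault."""
    return np.diag([1, 1, np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def inject_faults(circuit, theta=0.1, m=2, seed=0):
    """Insert a controlled-Rz(theta) unitary fault after every CZ gate and
    m decoherence channels at random positions (2-qubit toy example)."""
    rng = np.random.default_rng(seed)
    faulty = []
    for kind, qubits, payload in circuit:
        faulty.append((kind, qubits, payload))
        if kind == "CZ":                          # unitary fault after each CZ
            faulty.append(("unitary_fault", qubits, crz(theta)))
    # Insert from the back so earlier insertions do not shift later positions.
    for pos in sorted(rng.integers(0, len(faulty) + 1, size=m), reverse=True):
        q = int(rng.integers(0, 2))               # qubit carrying the channel
        faulty.insert(pos, ("decoherence", (q,), None))
    return faulty

circ = [("CZ", (0, 1), None), ("RY", (0,), None), ("CZ", (0, 1), None)]
fc = inject_faults(circ, m=2)
assert sum(k == "unitary_fault" for k, _, _ in fc) == 2   # one per CZ gate
assert sum(k == "decoherence" for k, _, _ in fc) == 2
```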
Baselines: We compare the efficiency of our simulator with the simulators
DDSIM [15], Qiskit, and TDD (Tensor Decision Diagram) [30, 31]. DDSIM is a
state-of-the-art DD-based simulator for noiseless circuit simulation; we use
it to benchmark the performance of our simulator in noise-free cases. The
algorithm in Qiskit is widely used in quantum simulation and is based on
matrix-vector techniques, while that of TDD is based on tensor network
techniques. All three simulation algorithms are executed in the same runtime
environment described above.
Optimization Techniques: One standard method for contracting tensor networks
is sequential pairwise contraction: two tensors of the network are selected
and merged into one, and the contraction is completed by repeatedly applying
this pairwise step. A cleverly chosen pairwise contraction order can often
reduce the runtime of tensor network contraction by several orders of
magnitude, and over the years many heuristics have been proposed to find
efficient orders. Here, we call a greedy method (embedded in Google
TensorNetwork) to speed up the tensor network contraction in our algorithm;
other heuristics may further optimize its performance.
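As a small illustration of greedy order search, NumPy's built-in `einsum_path` exposes the same style of heuristic that is invoked here through the Google TensorNetwork package (this sketch uses a generic tensor chain, not one of the benchmark circuits):

```python
import numpy as np

# Chain of small tensors standing in for circuit gates; even in this toy case
# the pairwise contraction order determines the cost.
rng = np.random.default_rng(0)
tensors = [rng.normal(size=(2, 2, 2, 2)) for _ in range(4)]
subscripts = 'abcd,cdef,efgh,ghij->abij'

# einsum_path searches for a pairwise contraction order with a greedy
# heuristic and returns it as a list of index pairs to merge.
path, info = np.einsum_path(subscripts, *tensors, optimize='greedy')
result = np.einsum(subscripts, *tensors, optimize=path)
print(path)          # e.g. ['einsum_path', (0, 1), (0, 1), (0, 1)]
print(result.shape)  # (2, 2, 2, 2)
```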
Experimental Results and Analysis: The utility and effectiveness of our
simulation algorithm are confirmed by the following four numerical experiments
(1)–(4), in terms of runtime, fault number, fidelity, and the physical
parameters (introduced in Section II) appearing in realistic faulty
superconducting quantum circuits. For all cases except (3), we fix the
parameters as follows: the fault rotation angle $\theta$ of
controlled-$R_{z}(\theta)$ is chosen to be 0.1, and the decoherence times are
set to $T_{1}=100\,\mu s$ and $T_{2}=20\,\mu s$.
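One standard way to turn $T_{1}$ and $T_{2}$ into a concrete decoherence channel (an assumption here; the exact fault model $\mathcal{E}_{D}$ is defined in [19]) is to compose amplitude damping and pure dephasing for a given gate duration:

```python
import numpy as np

def decoherence_kraus(t_g, T1, T2):
    """Kraus operators of a single-qubit T1/T2 channel for gate time t_g,
    built (as one common choice) by composing amplitude damping and pure
    dephasing; the paper's exact E_D may differ in detail."""
    gamma = 1 - np.exp(-t_g / T1)                  # relaxation probability
    Tphi = 1 / (1 / T2 - 1 / (2 * T1))             # pure-dephasing time
    lam = 1 - np.exp(-t_g / Tphi)                  # dephasing probability
    A = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
         np.array([[0, np.sqrt(gamma)], [0, 0]])]
    P = [np.array([[1, 0], [0, np.sqrt(1 - lam)]]),
         np.array([[0, 0], [0, np.sqrt(lam)]])]
    return [p @ a for p in P for a in A]           # composed channel P ∘ A

K = decoherence_kraus(t_g=0.025, T1=100.0, T2=20.0)  # times in microseconds
# A valid channel is trace preserving: sum_k K_k^dagger K_k = I.
print(np.allclose(sum(k.conj().T @ k for k in K), np.eye(2)))  # True
```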
The concrete benchmark circuits consist of the $A$-qubit QAOA circuit
(qaoa_$A$), the $B{\times}C$-qubit _inst_ circuit with depth $D$
(inst_$B$x$C$_$D$), and the $E$-qubit Hartree-Fock VQE circuit (hf_$E$), where
$A,B,C,D,E$ are all positive integers.
(1) _Super-operator Fault Number and Runtime:_ We illustrate the performance
of our algorithm for different _super-operator fault numbers_ (i.e., the
number of super-operator faults in a quantum circuit) in Fig. 3. The result
shows that our algorithm works very well when the super-operator fault number
is small ($\leq 6$), while the runtime grows significantly as the fault number
increases. The main reason is that more super-operator faults may increase the
maximum rank of nodes in the contraction of the tensor network.
(2) _Fault Number and Fidelity:_ The relation between fidelity and the
super-operator fault number is shown in Fig. 4. The fault number ranges from 2
to 16. Not surprisingly, the fidelity drops as the fault number increases.
(3) _Fault Parameters and Fidelity:_ We show the fidelity for different
parameters of the fault model, including the rotation angle $\theta$, $T_{1}$,
and $T_{2}$, in Figs. 5 and 6. Larger faulty rotation angles and shorter
decoherence times ($T_{1}$ and $T_{2}$) lead to a more significant fault
effect on superconducting quantum circuits. From Fig. 5, we can see that a
larger rotation angle leads to a higher loss of fidelity, which is a challenge
for hardware technology in the current NISQ era.
(4) _Performance Comparison:_ First, we compare the performance of our
algorithm with the state-of-the-art QMDD-based simulator DDSIM in Table II by
running a noiseless simulation task. The result shows that our method
outperforms DDSIM in almost all test cases, indicating that our algorithm
performs well on the noiseless simulation of these superconducting quantum
circuits. We then compare the performance of TDD and our algorithm on the
three benchmark circuits in Tables III, IV, and V. Qiskit can only simulate
the Hartree-Fock VQE circuits, since their qubit numbers are small. Although
DDSIM can perform faulty circuit simulation [17], its task differs slightly
from ours and its performance is mostly concerned with stochastic simulation,
so it is not compared with our simulator for noisy quantum circuits. The state
$|\psi_{e}\rangle$ in this task is chosen to be
$|0\rangle\otimes\cdots\otimes|0\rangle$; the runtime of our algorithm would
be the same for any other product state. Our method has lower runtime in all
three benchmarks than the TDD method with the super-operator fault number set
to 2, and its advantage is especially pronounced on _inst_ circuits with large
depths. The high efficiency of our method results from the fact that our
tensor network method utilizes the locality of quantum circuits together with
several state-of-the-art contraction methods embedded in the Google
TensorNetwork package. Since the simulator in Qiskit does not consider
locality, it can only handle circuits with few qubits. Although the TDD method
benefits from locality, its representation is unsuitable for states with
arbitrary amplitudes on all computational basis states (e.g., _inst_
circuits). Moreover, the partition strategy used in TDD is straightforward
compared to the strategies embedded in the Google TensorNetwork package, which
may find more efficient contraction orders.
Figure 3: Super-operator Fault Number and Runtime (runtime in seconds vs. fault number, 0–16, for qaoa_100 and inst_6x6_20).
Figure 4: Super-operator Fault Number and Fidelity (fidelity vs. fault number, 0–16, for qaoa_100 and inst_6x6_20).
Figure 5: Unitary Fault Angle and Fidelity (fidelity vs. controlled-$R_{z}$ rotation angle, 0–0.2, for qaoa_64 and inst_6x6_10).
[Two panels for Figure 6: fidelity vs. $T_{1}$ time ($\mu s$, 100–200) and vs. $T_{2}$ time ($\mu s$, 10–20), for qaoa_100 and inst_6x6_20.]
Figure 6: Decoherence Time and Fidelity.

TABLE II: Comparison with DDSIM

Circuit | Qubits | Gates | Depth | Time(s) Our | Time(s) DDSIM
---|---|---|---|---|---
hf_10 | 10 | 461 | 142 | 0.38 | 0.20
hf_12 | 12 | 690 | 194 | 0.88 | 0.67
qaoa_20 | 20 | 188 | 33 | 0.09 | 129.51
qaoa_25 | 25 | 241 | 33 | 0.12 | TO
inst_4x4_60 | 16 | 579 | 61 | 7.18 | 26.70
inst_4x4_70 | 16 | 669 | 71 | 13.48 | 31.18
inst_4x4_80 | 16 | 764 | 81 | 18.89 | 36.69
inst_4x5_10 | 20 | 145 | 11 | 0.19 | 242.05
inst_4x5_20 | 20 | 261 | 21 | 0.27 | 1839.90
inst_4x5_30 | 20 | 378 | 31 | 26.98 | TO
inst_6x6_10 | 36 | 264 | 11 | 0.39 | TO
inst_6x6_20 | 36 | 483 | 21 | 3.75 | TO
inst_7x7_10 | 49 | 364 | 11 | 0.43 | TO
TABLE III: Hartree-Fock result

Circuit | Qubits | Gates | Depth | Time(s) Our | Time(s) TDD | Time(s) Qiskit
---|---|---|---|---|---|---
hf_6 | 6 | 155 | 72 | 0.095 | 1.2 | 0.17
hf_8 | 8 | 308 | 124 | 0.33 | 3.65 | 0.24
hf_10 | 10 | 461 | 142 | 0.37 | 7.59 | 26.91
hf_12 | 12 | 690 | 194 | 0.99 | 18.81 | 206.37
TABLE IV: QAOA result

Circuit | Qubits | Gates | Depth | Time(s) Our | Time(s) TDD
---|---|---|---|---|---
qaoa_64 | 64 | 1696 | 42 | 3.512039 | 58.33
qaoa_72 | 72 | 1922 | 42 | 4.412076 | 77.98
qaoa_80 | 80 | 2148 | 42 | 5.342694 | 95.34
qaoa_100 | 100 | 2720 | 42 | 11.64412 | 149.01
qaoa_121 | 121 | 3322 | 42 | 9.770204 | 225.76
qaoa_169 | 169 | 4706 | 42 | 18.46261 | 463.89
qaoa_196 | 196 | 5488 | 42 | 336.4724 | 617.16
qaoa_225 | 225 | 6330 | 42 | 925.8719 | MO
TABLE V: _inst_ result

Circuit | Qubits | Gates | Depth | Time(s) Our | Time(s) TDD
---|---|---|---|---|---
inst_4x4_10 | 16 | 115 | 11 | 0.07 | 0.88
inst_4x4_20 | 16 | 209 | 21 | 0.20 | 4.17
inst_4x4_30 | 16 | 299 | 31 | 5.76 | TO
inst_4x4_40 | 16 | 394 | 41 | 4.34 | TO
inst_4x4_50 | 16 | 485 | 51 | 4.80 | TO
inst_4x4_60 | 16 | 579 | 61 | 9.93 | TO
inst_4x4_70 | 16 | 669 | 71 | 21.72 | TO
inst_4x4_80 | 16 | 764 | 81 | 11.26 | TO
inst_4x5_10 | 20 | 145 | 11 | 0.10 | 1.52
inst_4x5_20 | 20 | 261 | 21 | 0.30 | TO
inst_4x5_30 | 20 | 378 | 31 | 30.98 | TO
inst_4x5_40 | 20 | 494 | 41 | 62.29 | TO
inst_4x5_50 | 20 | 610 | 51 | 123.83 | TO
inst_4x5_60 | 20 | 726 | 61 | 1587.82 | TO
inst_4x5_70 | 20 | 843 | 71 | 2027.34 | TO
inst_4x5_80 | 20 | 959 | 81 | TO(4132.89) | TO
inst_6x6_10 | 36 | 264 | 11 | 0.22 | 0.77
inst_6x6_20 | 36 | 483 | 21 | 4.85 | 21.41
inst_7x7_10 | 49 | 364 | 11 | 0.45 | 1.66
## VI Conclusion
This paper presents a fast fault simulation algorithm for superconducting
quantum circuits. In particular, we employ tensor network contraction to
calculate the fault effect. The utility and effectiveness of our algorithm
are demonstrated by experiments on a series of realistic faulty
superconducting circuits. The experimental results indicate that the proposed
algorithm (implemented with Google TensorNetwork) significantly improves the
efficiency and scalability of state-of-the-art fault simulation algorithms
based on density matrices or decision diagrams, especially for superconducting
quantum circuits in the current NISQ era. Therefore, our algorithm is expected
to be used as an integrated feature in currently developed ATPG programs
(e.g., [22, 23, 24]) for verifying and detecting design errors, manufacturing
defects, and quantum noise effects in large-size (superconducting) quantum
circuits.
## References
* [1] Frank Arute et al. “Quantum supremacy using a programmable superconducting processor” In _Nature_ 574.7779 Nature Publishing Group, 2019, pp. 505–510
* [2] Ming Gong et al. “Quantum walks on a programmable two-dimensional 62-qubit superconducting processor” In _Science_ 372.6545 American Association for the Advancement of Science, 2021, pp. 948–952
* [3] John Preskill “Quantum computing in the NISQ era and beyond” In _Quantum_ 2 Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften, 2018, pp. 79
* [4] Miron Abramovici, Melvin A Breuer and Arthur D Friedman “Digital systems testing and testable design” Computer science press New York, 1990
* [5] Laung-Terng Wang, Cheng-Wen Wu and Xiaoqing Wen “VLSI test principles and architectures: design for testability” Elsevier, 2006
* [6] X-D Cai et al. “Experimental quantum computing to solve systems of linear equations” In _Physical Review Letters_ 110.23 APS, 2013, pp. 230501
* [7] Aram W Harrow, Avinatan Hassidim and Seth Lloyd “Quantum algorithm for linear systems of equations” In _Physical review letters_ 103.15 APS, 2009, pp. 150502
* [8] Thomas Häner and Damian S Steiger “5 petabyte simulation of a 45-qubit quantum circuit” In _Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis_ , 2017, pp. 1–10
* [9] Cupjin Huang et al. “Classical simulation of quantum supremacy circuits” In _arXiv preprint arXiv:2005.06787_ , 2020
* [10] Riling Li et al. “Quantum supremacy circuit simulation on Sunway TaihuLight” In _IEEE Transactions on Parallel and Distributed Systems_ 31.4 IEEE, 2019, pp. 805–816
* [11] Edwin Pednault et al. “Breaking the 49-qubit barrier in the simulation of quantum circuits” In _arXiv preprint arXiv:1710.05867_ 15, 2017
* [12] Benjamin Villalonga et al. “A flexible high-performance simulator for verifying and benchmarking quantum circuits implemented on real hardware” In _npj Quantum Information_ 5.1 Nature Publishing Group, 2019, pp. 1–16
* [13] Feng Pan and Pan Zhang “Simulation of Quantum Circuits Using the Big-Batch Tensor Network Method” In _Phys. Rev. Lett._ 128 American Physical Society, 2022, pp. 030501 DOI: 10.1103/PhysRevLett.128.030501
* [14] Yuan-Hung Tsai, Jie-Hong R. Jiang and Chiao-Shan Jhang “Bit-Slicing the Hilbert Space: Scaling Up Accurate Quantum Circuit Simulation” In _2021 58th ACM/IEEE Design Automation Conference (DAC)_ San Francisco, CA, USA: IEEE Press, 2021, pp. 439–444 DOI: 10.1109/DAC18074.2021.9586191
* [15] Alwin Zulehner and Robert Wille “Advanced simulation of quantum computations” In _IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems_ 38.5 IEEE, 2018, pp. 848–859
* [16] Stefan Hillmich, Richard Kueng, Igor L Markov and Robert Wille “As accurate as needed, as efficient as possible: Approximations in dd-based quantum circuit simulation” In _2021 Design, Automation & Test in Europe Conference & Exhibition (DATE)_, 2021, pp. 188–193 IEEE
* [17] Thomas Grurl, Jürgen Fuß and Robert Wille “Noise-aware Quantum Circuit Simulation With Decision Diagrams” In _IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems_ , 2022, pp. 1–1 DOI: 10.1109/TCAD.2022.3182628
* [18] Chase Roberts et al. “TensorNetwork: A Library for Physics and Machine Learning”, 2019 arXiv:1905.01330 [physics.comp-ph]
* [19] “Fault models in superconducting quantum circuits” unpublished
* [20] Matthew Harrigan et al. “Quantum Approximate Optimization of Non-Planar Graph Problems on a Planar Superconducting Processor” In _Nature Physics_ , 2021 URL: https://www.nature.com/articles/s41567-020-01105-y
* [21] Michael A Nielsen and Isaac L Chuang “Quantum computation and quantum information” Cambridge university press, 2010
* [22] Xiao Fang-ying, Chen Han-wu, Liu Wen-jie and Li Zhi-giang “Fault detection for single and multiple missing-gate faults in reversible circuits” In _2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)_ , 2008, pp. 131–135 IEEE
* [23] Alexandru Paler, Ilia Polian and John P Hayes “Detection and diagnosis of faulty quantum circuits” In _17th Asia and South Pacific Design Automation Conference_ , 2012, pp. 181–186 IEEE
* [24] Debajyoti Bera “Detection and diagnosis of single faults in quantum circuits” In _IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems_ 37.3 IEEE, 2017, pp. 587–600
* [25] Jacob Biamonte and Ville Bergholm “Tensor networks in a nutshell” In _arXiv preprint arXiv:1708.00006_ , 2017
* [26] Igor L Markov and Yaoyun Shi “Simulating quantum computation by contracting tensor networks” In _SIAM Journal on Computing_ 38.3 SIAM, 2008, pp. 963–981
* [27] Mingsheng Ying “Foundations of Quantum Programming” Morgan Kaufmann, 2016
* [28] Google AI Quantum and Collaborators et al. “Hartree-Fock on a superconducting qubit quantum computer” In _Science_ 369.6507, 2020, pp. 1084–1089 DOI: 10.1126/science.abb9811
* [29] Sergio Boixo et al. “Characterizing Quantum Supremacy in Near-Term Devices” In _Nature Physics_ 14, 2018, pp. 595–600 URL: https://www.nature.com/articles/s41567-018-0124-x
* [30] Xin Hong et al. “A Tensor Network Based Decision Diagram for Representation of Quantum Circuits” In _ACM Trans. Des. Autom. Electron. Syst._ 27.6 New York, NY, USA: Association for Computing Machinery, 2022 DOI: 10.1145/3514355
* [31] Xin Hong et al. “Approximate Equivalence Checking of Noisy Quantum Circuits” In _2021 58th ACM/IEEE Design Automation Conference (DAC)_ , 2021, pp. 637–642 DOI: 10.1109/DAC18074.2021.9586214
# Directed Acyclic Graph Structure Learning from Dynamic Graphs
Shaohua Fan1, Shuyang Zhang1, Xiao Wang1,2, Chuan Shi1,2 Corresponding author.
###### Abstract
Estimating the structure of directed acyclic graphs (DAGs) of features
(variables) plays a vital role in revealing the latent data generation process
and providing causal insights in various applications. Although there have
been many studies on structure learning with various types of data, structure
learning on dynamic graphs has not been explored yet; we therefore study the
problem of learning the node feature generation mechanism on such ubiquitous
dynamic graph data. In a dynamic graph, we propose to simultaneously estimate
contemporaneous relationships and time-lagged interaction relationships
between the node features. These two kinds of relationships form a DAG, which
can effectively characterize the feature generation process in a concise way.
To learn such a DAG, we cast the learning problem as a continuous score-based
optimization problem, which consists of a differentiable score function to
measure the validity of the learned DAGs and a smooth acyclicity constraint to
ensure their acyclicity. These two components are translated into an
unconstrained augmented Lagrangian objective that can be minimized by mature
continuous optimization techniques. The resulting algorithm, named
GraphNOTEARS, outperforms baselines on simulated data across a wide range of
settings that may be encountered in real-world applications. We also apply the
proposed approach to two dynamic graphs constructed from the real-world Yelp
dataset, demonstrating that our method can learn the connections between node
features, which conform with domain knowledge.
## Introduction
A Bayesian network (BN) is a probabilistic graphical model that represents a
set of variables and their conditional dependencies. It has been widely used
in machine learning applications (Pearl 2011; Ott, Imoto, and Miyano 2003;
Friedman, Geiger, and Goldszmidt 1997). The structure of a BN takes the form
of a directed acyclic graph (DAG) and provides a convenient and interpretable
output, which is needed in today's high-stakes applications of artificial
intelligence, such as healthcare, finance, and autonomous driving. The edges in
a DAG represent the directed generation relationships between variables (e.g.,
features) in a system. When these edges are not known from prior knowledge,
one possible solution is to resort to DAG structure learning, namely, learning
the edges of a graphical model from observed data.
Figure 1: A toy example of the feature generation process on a dynamic graph.
Existing approaches for DAG learning mostly focus on tabular data, i.e., each
sample is independently drawn from the same distribution (IID data) (Spirtes
et al. 2000; Neuberg 2003; Spirtes, Meek, and Richardson 2013; Geiger and
Heckerman 1994; Heckerman, Geiger, and Chickering 1995). Nevertheless, in
real-world scenarios, there usually exist associations between samples, so the
generation of the features of certain samples may be influenced by other
samples through the links between them. Several pioneering works have proposed
constraint-based methods to learn DAGs from static (i.e., equilibrium) graph
data (Maier et al. 2010; Lee and Honavar 2016; Maier et al. 2013). They
usually test the conditional independencies of the attributes of entities in
graph data to form DAGs among those variables. However, many real-world graph
data exhibit temporal information. For example, as shown in Fig. 1, Bob's
friends, Lisa and Mary, went to eat seafood and posted their recommendation
and the meal time on social media at timestamp $t-1$. Bob viewed this
information. When he has enough available time at timestamp $t$ (e.g., several
days later), he will choose to eat seafood with high probability. Hence, the
generation of Bob's type value at timestamp $t$ is determined both by his
currently available time and by the recommendation from his social network at
the previous timestamp. Another example is that one's risk of being infected
by COVID-19 may be determined both by the current protection status (e.g.,
wearing a mask or keeping social distance) and by the ratio of neighbors who
were vaccinated or infected at a previous timestamp, e.g., two weeks earlier.
Current methods largely ignore such temporal interactions, so the true data
generation process cannot be revealed accurately.
When learning DAGs from dynamic graphs, we must confront two challenging
problems. First, a dynamic graph contains complex temporal interactions
between samples, so what kind of DAG would reflect the generation process of
features in a dynamic graph? Since the samples at each timestamp are generated
based on interactions from previous timestamps, the learned DAG should model
the generation process of each new sample at each timestamp, taking the
time-lagged interaction information into account. Second, how can we
efficiently learn a DAG from complex evolving graph data? Compared with IID
data, a dynamic graph contains both temporal and interaction information, and
hence a more complicated data generation mechanism. It is non-trivial to
design a DAG learning method for dynamic graphs. Fortunately, owing to
well-developed optimization techniques, it is possible to design a
differentiable score function that measures the validity of candidate DAGs and
resort to black-box solvers to find the optimal DAG efficiently.
To address these two challenges, we propose an effective score-based approach
for learning DAGs, called GraphNOTEARS, which scales gracefully to dynamic
graphs with high-dimensional node features. To solve the first challenge, we
propose to learn an intra-slice matrix that characterizes contemporaneous
relationships between variables, and several inter-slice matrices that
characterize multi-step time-lagged graph influence on the current timestamp.
Meanwhile, an acyclicity constraint is imposed to ensure the acyclicity of the
whole learned graph. As for the second challenge, we cast the problem as a
score-based optimization problem and develop a least-squares score function.
The score function leverages the temporal and interaction information,
together with the two kinds of learnable structural matrices, to reconstruct
the data. With the smooth acyclicity constraint, we translate the original
constrained problem into an unconstrained augmented Lagrangian objective. The
resulting program can be solved efficiently by standard second-order
optimization schemes. The main contributions of this paper are summarized as
follows:
* •
To the best of our knowledge, we are the first to study the DAG learning
problem on dynamic graphs. Given the ubiquity of dynamic graph data in real
applications, learning DAGs on such data can reveal the underlying feature
generation process, provide skeletons for possible Bayesian networks, and
answer causal questions, such as the effect of various interventions. These
capabilities are important for building explainable, robust, and generalizable
algorithms.
* •
We develop a score-based learning method for simultaneously estimating the
structure and parameters of a sparse DAG from a dynamic graph. The resulting
method can be used to learn the relationships between variables of arbitrary
time-lagged order in a dynamic graph, without any implicit assumptions on the
underlying DAG topologies.
* •
We conduct extensive simulation experiments over a broad range of settings
that may be encountered in the real world, validating the effectiveness of our
approach in revealing the feature generation mechanism of dynamic graphs.
Experiments on real-world datasets further demonstrate the plausibility of the
relationships inferred by GraphNOTEARS.
## Background and Related Works
A DAG $G$ is faithful with respect to a joint distribution $\mathcal{P}$ of a
set of variables if all and only the conditional independencies of variables
true in $\mathcal{P}$ are entailed by $G$ (Pearl 2014). The faithfulness
assumption enables one to recover $G$ from $\mathcal{P}$. Given samples $D$
from an unknown distribution corresponding to a faithful but unknown DAG,
structure learning refers to recovering the DAG from $D$.
Existing methods for DAG learning can be classified into constraint-based
methods and score-based methods. Most constraint-based DAG learning methods
(Spirtes et al. 2000; Neuberg 2003; Spirtes, Meek, and Richardson 2013) first
use conditional independence tests to find graph skeleton and then determine
the orientations of the edges up to the Markov equivalence class, which
usually contains DAGs that can be structurally diverse and may still have many
unoriented edges. Score-based methods (Geiger and Heckerman 1994; Huang et al.
2018; Hyvärinen and Smith 2013), on the other hand, define a score function to
find the best DAG that fits the given data. Unlike constraint-based methods
that assume faithfulness and identify only the Markov equivalence class, these
methods are able to distinguish between different DAGs in the same equivalence
class, owing to the additional assumptions on data distribution and/or
functional classes. However, due to the acyclicity constraint and the
superexponential (in the number of nodes) number of DAGs to search over,
score-based methods are computationally expensive. The recent work NOTEARS (Zheng et al.
2018), expresses the acyclicity of a DAG by a smooth equality constraint under
the linear assumption, which makes it possible to formulate structure learning
as a smooth minimization problem subject to this equality constraint.
DYNOTEARS (Pamfil et al. 2020) extends NOTEARS to learn the DAG of time-series
data, incorporating temporal information into the score function.
Most existing methods are designed for tabular samples, which are drawn
independently from the same distribution. However, in many real-world
settings, there may exist links between samples, and these links may influence
the feature generation process. Maier et al. (Maier et al. 2010) first
extended the well-known PC algorithm to the relational setting for learning
causal relationships from relational data, called RPC. Later, Maier et al.
(Maier et al. 2013) demonstrated the lack of completeness of RPC and
introduced a sound and complete algorithm named RCD. All of these relational
DAG learning algorithms are constraint-based methods, and they can only handle
static graphs, ignoring temporal information. In this paper, we propose an
efficient score-based method that models the temporal interaction information.
## DAG Structure Learning on Dynamic Graphs
Figure 2: Illustration of intra-slice (solid lines) and inter-slice (dashed
lines) dependencies in a dynamic graph with $n=3$ samples and $d=3$ variables
at each timestamp and time-lagged order $p=2$. For clarity, we ignore the
edges that do not influence the variables $\mathbf{X}_{11}^{(t)}$.
In this section, we introduce the definition of dynamic graphs, the problem of
DAG structure learning on dynamic graphs as well as the proposed model:
GraphNOTEARS.
### Problem Formulation
###### Definition 1 (Dynamic Graph)
A dynamic graph is
$\mathcal{G}=\\{(\mathbf{X}^{(1)},\mathbf{A}^{(1)}),\cdots,(\mathbf{X}^{(T)},\mathbf{A}^{(T)})\\}$,
where $T$ is the total number of timestamps, the tuple ($\mathbf{X}^{(t)}$,
$\mathbf{A}^{(t)}$) represents the graph at timestamp $t$,
$\mathbf{X}^{(t)}\in\mathbb{R}^{n\times d}$ is the matrix of node features,
$\mathbf{A}^{(t)}\in\mathbb{R}^{n\times n}$ is the adjacency matrix of the
nodes, $n$ is the number of nodes (we assume the number of nodes at each
timestamp remains the same), and $d$ is the number of node features (i.e.,
variables).
At each timestamp $t$, we assume that each node feature is generated based on
contemporaneous variables and time-lagged neighborhood variables. For
instance, as depicted in Fig. 2, the variable $\mathbf{X}_{11}^{(t)}$ of
sample $\mathbf{X}_{1}^{(t)}$ at timestamp $t$ is determined by the
contemporaneous variables $\mathbf{X}_{12}^{(t)}$ and $\mathbf{X}_{13}^{(t)}$
from the same sample (we use node and sample interchangeably) with
coefficients $\mathbf{W}_{21}$ and $\mathbf{W}_{31}$, and by the time-lagged
aggregated neighborhood variables $\hat{\mathbf{X}}_{.1}^{(t-1)}$ and
$\hat{\mathbf{X}}_{.1}^{(t-2)}$ from timestamps $t-1$ and $t-2$ with
coefficients $\mathbf{P}^{(t-1)}_{11}$ and $\mathbf{P}^{(t-2)}_{11}$,
respectively. We call these intra-slice and inter-slice dependencies,
respectively. One may argue that the generation of variables at the current
timestamp $t$ should also depend on neighborhood samples at the same
timestamp. In this paper, we assume that neighborhood behaviour needs a delay
to influence ego nodes. This delayed-influence phenomenon is very common in
real-world applications: for example, a user's preference for a restaurant
category may be influenced by their friends' recommendations on the Yelp
platform, and the user will visit the restaurant a few days later. Moreover,
we assume the interactions between samples are given at each timestamp, which
is very common in dynamic graph data (Rossi et al. 2020; Sankar et al. 2020).
For example, we can easily obtain the friendship relationships between users
on the Yelp platform.
We further assume that at each timestamp, the time-lagged influences from the
neighborhood span at most $p$ timestamps, where $p\leq T$. We also make the
stationarity assumption (Hamilton 2020) that the generation process is fixed
through time and identical at every timestamp, which is a very common
assumption for time-series data. Without loss of generality, we propose to
model the data generation process at timestamp $t$ with the structural vector
autoregressive (SVAR) model (Demiralp and Hoover 2003; Kilian 2013):
$\displaystyle\mathbf{X}^{(t)}$
$\displaystyle=\mathbf{X}^{(t)}\mathbf{W}+\mathbf{\hat{A}}^{(t-1)}\mathbf{X}^{(t-1)}\mathbf{P}^{(t-1)}+\cdots$
(1)
$\displaystyle+\mathbf{\hat{A}}^{(t-p)}\mathbf{X}^{(t-p)}\mathbf{P}^{(t-p)}+\mathbf{Z},$
where $\mathbf{W}\in\mathbb{R}^{d\times d}$ and
$\mathbf{P}^{(t-i)}\in\mathbb{R}^{d\times d}$ $(i\in\\{1,\cdots,p\\})$ are
weighted adjacency matrices whose nonzero entries correspond to the intra-
slice and inter-slice edges, respectively.
$\mathbf{\hat{A}}^{(t-i)}$ $(i\in\\{1,\cdots,p\\})$ are normalized adjacency
matrices, computed as
$\mathbf{D}^{-\frac{1}{2}}(\mathbf{A}+\mathbf{I})\mathbf{D}^{-\frac{1}{2}}$,
where $\mathbf{D}_{ii}=\sum_{j}\mathbf{A}_{ij}$ and $\mathbf{I}$ is the
identity matrix of the same size as $\mathbf{A}$. We add $\mathbf{I}$ to
$\mathbf{A}$ because the time-lagged influence should come not only from the
neighborhood but also from the node itself at previous timestamps.
$\mathbf{Z}$ is a centered error matrix whose rows are independent across
samples; the error variables may be Gaussian, Exponential, or otherwise. The
overall intuition of Eq. (1) is that the value of a new variable $i$ of
$\mathbf{X}^{(t)}$ is generated from two parts: parent variables among the
contemporaneous variables (i.e., $\mathbf{X}^{(t)}\mathbf{W}$) and parent
variables among the time-lagged neighborhood variables (i.e.,
$\mathbf{\hat{A}}^{(t-1)}\mathbf{X}^{(t-1)}\mathbf{P}^{(t-1)}+\cdots+\mathbf{\hat{A}}^{(t-p)}\mathbf{X}^{(t-p)}\mathbf{P}^{(t-p)}$).
$\mathbf{X}^{(t)}\mathbf{W}$ captures the relationships between
contemporaneous variables. According to the SVAR model, contemporaneous
variables usually exhibit a causal order; hence $\mathbf{W}$ represents the
causal order of the contemporaneous variables and is acyclic, where
$\mathbf{W}_{kj}$ is the coefficient of the parent variable $k$ at timestamp
$t$ on variable $j$ at the same timestamp. Similarly,
$\mathbf{P}^{(t-i)}_{kj}$ is the coefficient of the $k$-th aggregated node
variable at timestamp $t-i$ on the $j$-th variable at timestamp $t$. For
detailed pseudocode of Eq. (1), see Appendix A.2 (supplementary material:
https://drive.google.com/file/d/1S1pzEyyC9kNL6s97yQxRt5lOLdO5L8sz/view?usp=sharing).
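As an illustrative sketch (the function names are ours, not from the paper's released code), the normalization and one generation step of Eq. (1) can be written as follows. Since $\mathbf{W}$ is acyclic, $\mathbf{I}-\mathbf{W}$ is invertible and $\mathbf{X}^{(t)}$ can be solved for in closed form:

```python
import numpy as np

def normalize_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2} with D_ii = sum_j A_ij, as in the text.
    (Isolated nodes would need an epsilon in the degree to avoid dividing by zero.)"""
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    A_tilde = A + np.eye(A.shape[0])
    return d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

def generate_timestep(W, P_list, A_list, X_list, noise_scale=1.0, rng=None):
    """One step of Eq. (1): X = X W + sum_i A_hat^{(t-i)} X^{(t-i)} P^{(t-i)} + Z.
    Because W is acyclic, I - W is invertible and X = (lagged + Z)(I - W)^{-1}."""
    rng = rng or np.random.default_rng()
    n, d = X_list[0].shape
    lagged = sum(normalize_adjacency(A) @ X @ P
                 for A, X, P in zip(A_list, X_list, P_list))
    Z = noise_scale * rng.standard_normal((n, d))  # centered Gaussian errors
    return (lagged + Z) @ np.linalg.inv(np.eye(d) - W)
```

In practice one would generate variables in the topological order of $\mathbf{W}$; the matrix-inverse form above is equivalent for the linear model.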
Let $\mathbf{M}=[\mathbf{X}^{(t-1)}|\cdots|\mathbf{X}^{(t-p)}]$ be the
$n\times pd$ matrix of time-lagged node features,
$\mathbf{A}=[\mathbf{\hat{A}}^{(t-1)}|\cdots|\mathbf{\hat{A}}^{(t-p)}]$ be the
$n\times pn$ matrix of time-lagged interaction data, and
$\mathbf{P}=[\mathbf{P}^{(t-1)}|\cdots|\mathbf{P}^{(t-p)}]$ be the $pd\times
d$ matrix of inter-slice weights. We can rewrite Eq. (1) in the following
compact form of a structural equation model (SEM) (Hox and Bechger 1998):
$\mathbf{X}=\mathbf{X}\mathbf{W}+\mathbf{A}\boxtimes\mathbf{M}\mathbf{P}+\mathbf{Z},$
(2)
where
$\mathbf{A}\boxtimes\mathbf{M}=[\mathbf{\hat{A}}^{(t-1)}\mathbf{X}^{(t-1)}|\cdots|\mathbf{\hat{A}}^{(t-p)}\mathbf{X}^{(t-p)}]\in\mathbb{R}^{n\times
pd}$. This general formulation also covers scenarios in which the time-lagged
data matrices $\mathbf{M}$ and $\mathbf{A}$ are not a contiguous sequence of
time slices (i.e., from $t-p$ to $t-1$). For example, if someone usually meets
their friends on weekends, one can include in the lagged matrices $\mathbf{M}$
and $\mathbf{A}$ only those time points that have an impact on the variables
at timestamp $t$. Based on this generation formulation, we formulate our
target problem:
###### Problem 1 (DAG Structure Learning on Dynamic Graph)
Given a dynamic graph $\mathcal{G}$, the goal of DAG structure learning on the
dynamic graph is to estimate the generation process of node features, which
usually takes the form of a DAG, considering both the contemporaneous effects
within each node and the neighborhood effects from time-lagged graphs.
Hence, given the data $\mathbf{X}$, $\mathbf{M}$, and $\mathbf{A}$, the goal
of this paper is to estimate the weighted adjacency matrices $\mathbf{W}$ and
$\mathbf{P}$, which characterize the node feature generation process in a
dynamic graph.
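The $\boxtimes$ operation in Eq. (2) is simply a blockwise product followed by horizontal stacking; a minimal sketch (the helper name is ours):

```python
import numpy as np

def boxtimes(A_hat_list, X_lag_list):
    """A boxtimes M = [A_hat^{(t-1)} X^{(t-1)} | ... | A_hat^{(t-p)} X^{(t-p)}],
    an n x pd matrix built from the p lagged graphs and feature matrices."""
    return np.hstack([A_hat @ X_lag
                      for A_hat, X_lag in zip(A_hat_list, X_lag_list)])
```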
### The Proposed Model: GraphNOTEARS
An SEM can be estimated by minimizing the least-squares (LS) loss (Zheng et
al. 2018). The statistical properties of the LS loss for scoring DAGs have
been extensively studied: its minimizer provably recovers the true DAG with
high probability in finite-sample and high-dimensional regimes (Aragam, Amini,
and Zhou 2015; Loh and Bühlmann 2014). Inspired by this, we propose to
estimate $\mathbf{W}$ and $\mathbf{P}$ by minimizing the following LS loss:
$\mathcal{L}(\mathbf{W},\mathbf{P})=\frac{1}{2n}||\mathbf{X}-\mathbf{X}\mathbf{W}-\mathbf{A}\boxtimes\mathbf{M}\mathbf{P}||^{2}_{F}.$
(3)
Moreover, since the edges in $\mathbf{P}$ only go forward in time, they cannot
create cycles. To ensure that the whole graph is acyclic, it thus suffices to
require that $\mathbf{W}$ is acyclic. In this work, we utilize the acyclicity
constraint proposed by NOTEARS (Zheng et al. 2018), which states that a
directed graph $G$ with binary adjacency matrix $\mathbf{W}$ is acyclic if and
only if:
$h(\mathbf{W}):=\text{trace}(e^{\mathbf{W}\circ\mathbf{W}})-d=0,$ (4)
where $e^{\mathbf{W}\circ\mathbf{W}}$ is the matrix exponential of
${\mathbf{W}\circ\mathbf{W}}$, and $\circ$ denotes the Hadamard product of two
matrices. To enforce the sparsity of $\mathbf{W}$ and $\mathbf{P}$, we also
introduce $\ell_{1}$ penalties in the objective function. The overall
optimization problem is:
$\displaystyle\min_{\mathbf{W},\mathbf{P}}f(\mathbf{W},\mathbf{P})$ (5) s.t.
$\displaystyle
h(\mathbf{W}):=\text{trace}(e^{\mathbf{W}\circ\mathbf{W}})-d=0,$ with
$\displaystyle
f(\mathbf{W},\mathbf{P})=\frac{1}{2n}||\mathbf{X}-\mathbf{X}\mathbf{W}-\mathbf{A}\boxtimes\mathbf{M}\mathbf{P}||^{2}_{F}$
$\displaystyle+\lambda_{\mathbf{W}}||\mathbf{W}||_{1}+\lambda_{\mathbf{P}}||\mathbf{P}||_{1},$
where $||\cdot||_{1}$ denotes the element-wise $\ell_{1}$ norm, and
$\lambda_{\mathbf{W}}$ and $\lambda_{\mathbf{P}}$ are the coefficients of
$||\mathbf{W}||_{1}$ and $||\mathbf{P}||_{1}$, respectively. By minimizing
this objective given $\mathbf{X}$, $\mathbf{M}$, and $\mathbf{A}$, our model
simultaneously recovers the contemporaneous dependencies $\mathbf{W}$ and the
neighborhood dependencies $\mathbf{P}$ from data, while ensuring the
acyclicity of the learned graph.
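A minimal sketch of the score function and the acyclicity measure, assuming NumPy and SciPy (the names are ours, not from the released code):

```python
import numpy as np
from scipy.linalg import expm

def h_acyc(W):
    """NOTEARS acyclicity measure: h(W) = tr(exp(W o W)) - d, zero iff W is acyclic."""
    return np.trace(expm(W * W)) - W.shape[0]

def score(W, P, X, AM, lam_W=0.01, lam_P=0.01):
    """Penalized least-squares score f(W, P) from Eq. (5); AM denotes A boxtimes M."""
    n = X.shape[0]
    resid = X - X @ W - AM @ P
    return ((0.5 / n) * np.linalg.norm(resid, "fro") ** 2
            + lam_W * np.abs(W).sum() + lam_P * np.abs(P).sum())
```

For a strictly upper-triangular (hence acyclic) $\mathbf{W}$, $h(\mathbf{W})$ evaluates to zero up to numerical precision, while any cycle makes it strictly positive.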
### Optimization
The above optimization is a standard equality-constrained program (ECP). We
translate the problem to an unconstrained problem with the following smooth
augmented Lagrangian objective
$\min_{\mathbf{W},\mathbf{P}}f(\mathbf{W},\mathbf{P})+\frac{\rho}{2}h(\mathbf{W})^{2}+\alpha
h(\mathbf{W}).$ (6)
The resulting problem can be solved using efficient solvers such as L-BFGS-B
(Zhu et al. 1997), and the update strategy of $\rho$ and $\alpha$ is the same
as in (Zheng et al. 2018). To reduce false discoveries (Zhou 2009), we
threshold the edge weights of $\mathbf{W}$ and $\mathbf{P}$ with hard
thresholds $\tau_{\mathbf{W}}>0$ and $\tau_{\mathbf{P}}>0$: after obtaining a
stationary point $(\mathbf{W},\mathbf{P})$, we set to zero any weights smaller
in absolute value than $\tau_{\mathbf{W}}$ and $\tau_{\mathbf{P}}$, yielding
$\widetilde{\mathbf{W}}$ and $\widetilde{\mathbf{P}}$, whose binary versions
are denoted $\mathbf{\hat{W}}$ and $\mathbf{\hat{P}}$, respectively.
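The outer loop and the thresholding step can be sketched as follows; the inner L-BFGS-B solve is abstracted as a placeholder, the update schedule shown is the common NOTEARS-style recipe, and all names are illustrative:

```python
import numpy as np

def hard_threshold(M, tau):
    """Zero out entries with |m| < tau, as in the post-processing step."""
    M_t = M.copy()
    M_t[np.abs(M_t) < tau] = 0.0
    return M_t

def augmented_lagrangian(inner_solve, h, max_outer=10, h_tol=1e-8):
    """Schematic outer loop for Eq. (6). `inner_solve(rho, alpha)` stands in for
    an L-BFGS-B minimization of the smooth objective at fixed (rho, alpha)."""
    rho, alpha = 1.0, 0.0
    W = P = None
    for _ in range(max_outer):
        W, P = inner_solve(rho, alpha)
        h_val = h(W)
        if h_val <= h_tol:       # acyclicity (approximately) reached
            break
        alpha += rho * h_val     # dual ascent step
        rho *= 10.0              # tighten the penalty
    return W, P
```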
### Discussion
Here we discuss the identifiability, some limitations and possible extensions
of GraphNOTEARS.
Identifiability Identifiability is a key research problem for SVAR models in
the econometrics literature (Kilian 2013). Identifiability of structure
learning on time-series data has been discussed in (Pamfil et al. 2020) using
the conclusions of the SVAR model. Since a dynamic graph can be viewed as a
special kind of time-series data, where $\mathbf{A}\boxtimes\mathbf{M}$ in Eq.
(2) holds the aggregated time-lagged features, we assume that the
identifiability conditions in (Pamfil et al. 2020) also hold in our model,
i.e., the error $\mathbf{Z}$ is drawn from a non-Gaussian or standard normal
distribution. Thus, $\mathbf{W}$ and $\mathbf{P}$ in our model are
identifiable under reasonable conditions.
Assumptions For simplicity, we have assumed that the structure of the variable
generation process is fixed across time and identical for all timestamps.
Under this assumption, when long time-series data are available, our model can
easily utilize them in the objective function Eq. (5) by extending the data
matrices (i.e., $\mathbf{X}$, $\mathbf{M}$, and $\mathbf{A}$) to tensors while
keeping the parameter matrices $\mathbf{W}$ and $\mathbf{P}$ unchanged; all
our experiments are based on this extension. This stationarity assumption is
very common for time-series data (Hamilton 2020) and can be relaxed in several
ways: we could allow the directed dependency structure of the data to vary
smoothly over time (Song, Kolar, and Xing 2009) or to have discrete change
points (Grzegorczyk and Husmeier 2011).
Nonlinear relationships As an initial work on structure learning for dynamic
graphs, we follow previous works on structure learning (Zheng et al. 2018;
Pamfil et al. 2020) and first consider the linear scenario. Note that the
linear assumption is made purely for simplicity, so that our paper can focus
on the most salient temporal and network aspects of the problem. Inspired by
GNNs (Wu et al. 2020; Fan et al. 2019, 2020, 2021, 2022b, 2022a) and nonlinear
structure learning methods (Zheng et al. 2020; Yu et al. 2019; Lachapelle et
al. 2019), we could model the nonlinear effects of neighbors with GNNs.
An important feature of GraphNOTEARS is its simplicity, both in terms of
formulating an objective function and optimizing it. Nevertheless, the
proposed model is general enough to be extended to more complicated scenarios.
## Experiments
It is notoriously hard to obtain the ground truth of a causal structure,
because the underlying data generation process of real-world problems is
rarely available. To validate the effectiveness of our method, in this section
we follow the setting of (Zheng et al. 2018; Pamfil et al. 2020; Maier et al.
2010, 2013) and conduct extensive experiments on synthetic data with known
generating mechanisms that simulate real-world scenarios. (Code and data:
https://github.com/googlebaba/GraphNOTEARS.)
Datasets. To validate the effectiveness of GraphNOTEARS against existing
approaches, we simulate data according to the SEM in Eq. (2). This involves
three steps: 1) generating the weighted graphs $\mathcal{G}_{\mathbf{W}}$ and
$\mathcal{G}_{\mathbf{P}}$ and the adjacency matrix $\mathbf{A}$; 2)
generating the data matrices $\mathbf{X}$ and $\mathbf{M}$ based on
$\mathcal{G}_{\mathbf{W}}$ and $\mathcal{G}_{\mathbf{P}}$; 3) running each
algorithm on all or part of $\mathbf{X}$, $\mathbf{M}$, and $\mathbf{A}$,
depending on which kinds of information the model considers, and computing the
metrics. Following (Pamfil et al. 2020), we use either the Erdős-Rényi (ER)
model (Newman 2018) or the Barabási-Albert (BA) model (Barabási and Albert
1999) to generate the intra-slice graphs $\mathcal{G}_{\mathbf{W}}$, and the
ER model or the Stochastic Block Model (SBM) (Newman 2018) for the inter-slice
graphs $\mathcal{G}_{\mathbf{P}}$. These graph generation models simulate
real-world variable generation processes such as "rich get richer",
"preferential attachment", and "cluster effects". To obtain the weighted
intra- and inter-slice matrices $\mathbf{W}$ and $\mathbf{P}$, we sample
weights uniformly at random from $[-2,-0.5]\cup[0.5,2]$. To generate
$\mathbf{A}$, we connect each pair of samples with probability 0.1. Given
$\mathbf{W}$, $\mathbf{P}$, and $\mathbf{A}$, we use the SEM in Eq. (2) to
generate a data matrix $\mathbf{X}$ of size $n\times d$. In particular, we
first generate the variables of $\mathbf{X}$ and $\mathbf{M}$ following the
sorted topological order of $\mathbf{W}$, as in (Zheng et al. 2018), and then
generate the current-timestamp observations according to
$\mathbf{X}=\mathbf{X}\mathbf{W}+\mathbf{A}\boxtimes\mathbf{M}\mathbf{P}+\mathbf{Z}$.
For the noise term $\mathbf{Z}$, we use Gaussian or Exponential noise. To
compare GraphNOTEARS against the baselines over a wide range of sample sizes
and numbers of variables, we vary the sample size $n\in\\{100,200,300,500\\}$
and the number of variables $d\in\\{5,10,20,30\\}$ at each timestamp; the
length of the time series $T$ is set to 7 for all experiments. A detailed
description of the data generation process is given in Appendix A.
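The weight and interaction sampling described above can be sketched as follows (hypothetical helpers, not the released generator):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_edge_weights(mask, rng=rng):
    """Draw each edge weight uniformly from [-2, -0.5] U [0.5, 2]:
    a uniform magnitude in [0.5, 2] with a random sign."""
    magnitude = rng.uniform(0.5, 2.0, size=mask.shape)
    sign = rng.choice([-1.0, 1.0], size=mask.shape)
    return mask * sign * magnitude

def sample_interactions(n, p=0.1, rng=rng):
    """Connect each pair of samples independently with probability p
    (symmetric adjacency, no self-loops)."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper | upper.T).astype(float)
```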
Baselines. Since we study a new problem, there is no baseline specifically
designed for it. We compare against the following two alternatives, which
address the problem indirectly.
* •
NOTEARS (Zheng et al. 2018)+LASSO: This is a two-step approach: we use static
NOTEARS to estimate $\mathbf{W}$ and Lasso regression to estimate
$\mathbf{P}$, independently.
NOTEARS:
$\mathcal{L}(\mathbf{W})=\frac{1}{2n}||\mathbf{X}-\mathbf{X}\mathbf{W}||^{2}_{F}+\lambda_{\mathbf{W}}||\mathbf{W}||_{1},\quad\text{s.t. }\mathbf{W}\text{ is acyclic.}$
LASSO:
$\mathcal{L}(\mathbf{P})=\frac{1}{2n}||\mathbf{X}-\mathbf{A}\boxtimes\mathbf{M}\mathbf{P}||^{2}_{F}+\lambda_{\mathbf{P}}||\mathbf{P}||_{1}.$
* •
DYNOTEARS (Pamfil et al. 2020): This is an extension of NOTEARS to time
series. Compared with our method, it ignores the interactions between samples
(i.e., $\mathbf{A}$).
Objective:
$\mathcal{L}(\mathbf{W},\mathbf{P})=\frac{1}{2n}||\mathbf{X}-\mathbf{X}\mathbf{W}-\mathbf{M}\mathbf{P}||^{2}_{F}+\lambda_{\mathbf{W}}||\mathbf{W}||_{1}+\lambda_{\mathbf{P}}||\mathbf{P}||_{1},\quad\text{s.t. }\mathbf{W}\text{ is acyclic.}$
To fully utilize multi-step graph series, both baselines are also extended to
tensor versions.
Figure 3: Example results for data with Gaussian noise, $n=500$ samples, $d=5$
variables at each timestamp, $T=7$ time-series, and $p=2$ time-lagged graph
effect order. Our algorithm recovers weights that are closer to the ground
truth than the baselines do.
Metrics. We evaluate the learned binary intra-slice $\mathbf{\hat{W}}$ and
inter-slice matrices $\mathbf{\hat{P}}$ separately by two common graph
metrics: F1-score and Structural Hamming Distance (SHD) (Zheng et al. 2018).
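For reference, edge-level F1 and a simple SHD variant on binary adjacency matrices can be computed as follows (a sketch; SHD conventions differ slightly across papers, e.g., in how reversed edges are counted):

```python
import numpy as np

def f1_score(W_hat, W_true):
    """Edge-level F1 between binary adjacency matrices."""
    tp = np.sum((W_hat == 1) & (W_true == 1))
    fp = np.sum((W_hat == 1) & (W_true == 0))
    fn = np.sum((W_hat == 0) & (W_true == 1))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def shd(W_hat, W_true):
    """Structural Hamming Distance: extra + missing + reversed edges,
    counting a reversed edge once rather than as one extra plus one missing."""
    diff = np.abs(W_hat - W_true)
    reversed_edges = ((W_hat == 1) & (W_true.T == 1) & (W_true == 0)).sum()
    return int(diff.sum() - reversed_edges)
```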
Experimental setup. For all methods, we set the hyperparameters
$\lambda_{\mathbf{W}}=\lambda_{\mathbf{P}}=0.01$. For the weight thresholds,
following (Zheng et al. 2018), we choose
$\tau_{\mathbf{W}}=\tau_{\mathbf{P}}=0.3$ for all methods; in our observation,
the relative ranking of the three methods is not sensitive to these
thresholds. To utilize multi-step graph series, we use the first $T-1$
timestamps to predict the last $T-p$ timestamps, where a time-lagged influence
order of $p$ is considered at each timestamp. For all experiments, we use 5
different random seeds to generate the datasets and initialize the models, and
report the mean value and 95% confidence interval.
Figure 4: F1 scores (higher is better) for different noise models (Gaussian,
Exponential) and different sample sizes ($n\in\\{100,200,300,500\\}$). The
length of the time series is 7, and we consider 1-step time-lagged neighbor
influence. Each panel contains results for two different choices of
intra-slice graphs (columns) and inter-slice graphs (rows). Each marker
corresponds to the mean performance across 5 algorithm runs, each on a
different simulated dataset; shaded areas indicate the 95% confidence
interval. Solid and dashed lines show F1 scores for intra-slice and
inter-slice edges, respectively.
### Performance Evaluation
We start by comparing the weighted matrices estimated by GraphNOTEARS and the
baselines against the ground truth in Fig. 3. The evaluation data are
generated with Gaussian noise, $n=500$ samples, $d=5$ variables, $T=7$
time-series length, and time-lagged graph effect order $p=2$; the intra-slice
and inter-slice graphs are both generated with the ER model. As Fig. 3 shows,
our estimated weights are much closer to the true weights for $\mathbf{W}$,
$\mathbf{P}^{(1)}$, and $\mathbf{P}^{(2)}$ than those of the baselines: the
intra-slice $\mathbf{W}$ is recovered perfectly (F1 score = 1.0), and the
inter-slice $\mathbf{P}^{(1)}$ and $\mathbf{P}^{(2)}$ are also recovered
substantially better than by the baselines. These results validate that our
model is a general framework that can estimate the DAG of a dynamic graph with
multiple timestamps while simultaneously learning influences of arbitrary
time-lagged order. In contrast, neither NOTEARS+LASSO nor DYNOTEARS achieves
satisfactory results. A likely reason is that NOTEARS+LASSO is a two-stage
method that cannot simultaneously account for the contemporaneous and
time-lagged interaction influences on variable generation, whereas our model
jointly captures both factors in a unified framework. Furthermore, DYNOTEARS
only takes the time-lagged series information into account and ignores the
interaction information. Evidently, when the generation of variables is
determined by the neighborhood, it is necessary to incorporate the graph
information into the model.
We present the F1-score and SHD results for the full setting in Fig. 4 and
Fig. 5, respectively. Note that, for simplicity, we set the time-lagged graph
order $p=1$ here. From the results, we make the following observations: (1)
GraphNOTEARS is the best algorithm. In most cases, for the intra-slice graph
$\mathbf{W}$, our model recovers the graph nearly perfectly (F1
score$\approx$1.0 and SHD$\approx$0). For the inter-slice graph $\mathbf{P}$,
our method also achieves satisfactory results, outperforming the baselines by
a large margin. This demonstrates that GraphNOTEARS can learn DAGs from
dynamic graphs, owing to its ability to comprehensively consider the complex
information involved. (2) Overall, as the number of variables increases,
especially in the insufficient-sample scenario (e.g., $n=100$), all methods
suffer a degradation in performance. However, our model performs stably and
still outperforms the baselines by a large margin, indicating that it can
handle the challenging high-dimensional scenario well. (3) Regardless of the
underlying graph model or noise type, our model achieves promising results,
indicating its potential to deal with the various scenarios that may be
encountered in real-world applications. (4) For SHD, GraphNOTEARS requires
fewer edge modifications to reach the ground truth in all settings, further
validating the effectiveness of the proposed method.
Figure 5: SHD scores. Illustrations are the same as Fig. 4.
### Sensitivity of Threshold
We investigate the effect of the thresholds for $\mathbf{W}$ and $\mathbf{P}$
on the results in Fig. 6, where we plot the estimated edge weights of
$\widetilde{\mathbf{W}}$ and $\widetilde{\mathbf{P}}$ in decreasing order for
the $n=500$ and $n=100$ scenarios. Since the ground-truth graphs are usually
very sparse, most of the edge weights are equal or close to zero, as expected.
The remaining question is how to choose a threshold that separates noise (near
zero) from signal (away from zero) so that we obtain a DAG with the best
performance. With enough samples ($n=500$), one can often notice a sudden
change in the weight distribution. With insufficient samples ($n=100$), the
turning point is less clear, and the optimal choice, which balances recall
against false discoveries, depends on the specific setting. Nonetheless, the
predictive performance is relatively insensitive to the threshold value, as
the weights decrease with a clear slope before approaching zero.
(a) $\widetilde{\mathbf{W}},n=500$
(b) $\widetilde{\mathbf{P}},n=500$
(c) $\widetilde{\mathbf{W}},n=100$
(d) $\widetilde{\mathbf{P}},n=100$
Figure 6: Illustration of the effect of the threshold. In each subfigure, we
plot the sorted weights of $\widetilde{\mathbf{W}}$ and
$\widetilde{\mathbf{P}}$ in decreasing order.
## Application to Real-world Datasets
We apply GraphNOTEARS to two real-world dynamic graphs constructed from the
Yelp dataset (Luca 2016). Anderson and Magruder (2012) introduced a toy SCM
that embeds causal knowledge for the Yelp example: there are three random
variables, namely the restaurant category $C$, the Yelp star rating $S$, and
the customer flow $F$, connected by three directed edges that represent the
causal relationships between them. (1) Restaurant category influences the
Yelp rating; for example, the average rating of fast-food restaurants is
lower than that of high-end seafood restaurants. (2) Restaurant category also
influences customer flow; for example, the average customer flow of high-end
restaurants is lower than that of fast-food restaurants. (3) The Yelp rating
of a restaurant influences its customer flow. Accordingly, we construct two
dynamic graphs, a user graph and a business graph, whose node features are
these three variables. Specifically, we build the user graph from friendship
links on the Yelp platform. Taking a time lag of one month, we compute each
user's node features as the average category, average Yelp star rating, and
average customer flow of the restaurants they visited in that month; we
consider 1-step time-lagged graph information. Since a user's taste may be
influenced by their friends, the generation of a user's features should
account for this influence: for example, if a friend posts a positive review
of a restaurant, the user is more likely to visit it. For the business graph,
since restaurants of the same category may influence each other through an
"effect of agglomeration", we add edges between restaurants of a similar
category and close distance, and compute the restaurant variables in the same
way as for the user graph.
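The 1-step time-lagged, graph-aggregated regressors described above can be sketched as a row-normalized adjacency matrix applied to the previous month's feature matrix. This is a hypothetical minimal version; the paper's construction may differ in normalization details:

```python
import numpy as np

def lagged_neighbor_features(A, X_prev):
    """Average the previous time slice's node features over each node's
    neighbors (friends in the user graph): Z = D^{-1} A X_{t-1}."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0           # isolated nodes keep a zero aggregate
    return (A / deg) @ X_prev

# 3 users; user 0 is friends with users 1 and 2, who are not friends.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X_prev = np.array([[1.0, 4.0, 10.0],   # per-user (category, stars, flow)
                   [2.0, 3.0, 20.0],
                   [4.0, 5.0, 30.0]])
Z = lagged_neighbor_features(A, X_prev)
# user 0's aggregate is the mean of users 1 and 2: [3.0, 4.0, 25.0]
```

These aggregated lagged features, together with each node's own lagged features, form the inputs whose influence is captured by the inter-slice matrix $\mathbf{P}$.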
We apply GraphNOTEARS and DYNOTEARS to the constructed graphs and obtain the
binary DAG via a 0.1 threshold, as shown in Fig. 7. For both dynamic graphs,
the relationships in the intra-slice matrix $\hat{\mathbf{W}}$ estimated by
GraphNOTEARS coincide with our prior knowledge, i.e., the three black
directed edges among variables in Fig. 7(a)(c). DYNOTEARS, by contrast, only
discovers some of these edges, missing $C\rightarrow F$ in Fig. 7(b), and
both $C\rightarrow S$ and $C\rightarrow F$ in Fig. 7(d). We also find strong
correlations between variables of the same type in the inter-slice matrix
$\hat{\mathbf{P}}$ (e.g., the directed edge from the aggregated neighborhood
category (orange $C$) to the self category (blue $C$)), which can be
explained by the homophily of the graph. Overall, our model discovers an
explainable DAG on real-world dynamic graphs.
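The estimated intra-slice matrices must be acyclic; both GraphNOTEARS and DYNOTEARS enforce this through the continuous NOTEARS constraint $h(\mathbf{W})=\operatorname{tr}\!\big(e^{\mathbf{W}\circ\mathbf{W}}\big)-d=0$ (Zheng et al. 2018), which one can also use to sanity-check a recovered graph. A minimal sketch with toy matrices (not estimates from our experiments):

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W):
    """NOTEARS acyclicity measure h(W) = tr(exp(W o W)) - d, where o is
    the Hadamard product; h(W) = 0 iff the weighted graph W is a DAG."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

# A DAG over (C, S, F) matching the three edges discussed above ...
dag = np.array([[0.0, 0.8, 0.4],
                [0.0, 0.0, 0.6],
                [0.0, 0.0, 0.0]])
# ... and a two-node cycle, which violates the constraint.
cyc = np.array([[0.0, 0.8],
                [0.5, 0.0]])
```

For `dag`, $\mathbf{W}\circ\mathbf{W}$ is strictly upper triangular (nilpotent), so the matrix exponential has unit diagonal and $h=0$; for `cyc`, $h>0$.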
Figure 7: Estimated DAGs on the real-world Yelp dataset. (a) GraphNOTEARS on
the user dynamic graph; (b) DYNOTEARS on the user dynamic graph; (c)
GraphNOTEARS on the business dynamic graph; (d) DYNOTEARS on the business
dynamic graph.
## Conclusion
In this paper, we study a new DAG learning paradigm on dynamic graphs, which
plays a vital role in understanding the mechanism that generates node
features. To handle such complex data, we propose a score-based DAG learning
method that learns the intra-slice and inter-slice dependencies between
variables simultaneously, taking both temporal and interaction information
into account. The resulting method handles this complex problem efficiently
and has the potential to extend to more complicated settings. Extensive
experiments on both simulated and real-world datasets demonstrate the
effectiveness of the proposed method.
## Acknowledgments
This work is supported in part by the National Natural Science Foundation of
China (No. U20B2045, 62192784, 62172052, 62002029, 62172052, U1936014).
## References
* Anderson and Magruder (2012) Anderson, M.; and Magruder, J. 2012. Learning from the crowd: Regression discontinuity estimates of the effects of an online review database. _The Economic Journal_ , 122(563): 957–989.
* Aragam, Amini, and Zhou (2015) Aragam, B.; Amini, A. A.; and Zhou, Q. 2015. Learning directed acyclic graphs with penalized neighbourhood regression. _arXiv:1511.08963_.
* Barabási and Albert (1999) Barabási, A.-L.; and Albert, R. 1999. Emergence of scaling in random networks. _science_ , 286(5439): 509–512.
* Demiralp and Hoover (2003) Demiralp, S.; and Hoover, K. D. 2003. Searching for the causal structure of a vector autoregression. _Oxford Bulletin of Economics and statistics_ , 65: 745–767.
* Fan et al. (2022a) Fan, S.; Wang, X.; Mo, Y.; Shi, C.; and Tang, J. 2022a. Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure. _NeurIPS_.
* Fan et al. (2021) Fan, S.; Wang, X.; Shi, C.; Cui, P.; and Wang, B. 2021. Generalizing Graph Neural Networks on Out-Of-Distribution Graphs. In _arXiv preprint arXiv:2111.10657_.
* Fan et al. (2022b) Fan, S.; Wang, X.; Shi, C.; Kuang, K.; Liu, N.; and Wang, B. 2022b. Debiased Graph Neural Networks with Agnostic Label Selection Bias. _IEEE Transactions on Neural Networks and Learning Systems_.
* Fan et al. (2020) Fan, S.; Wang, X.; Shi, C.; Lu, E.; Lin, K.; and Wang, B. 2020. One2multi graph autoencoder for multi-view graph clustering. In _WWW_ , 3070–3076.
* Fan et al. (2019) Fan, S.; Zhu, J.; Han, X.; Shi, C.; Hu, L.; Ma, B.; and Li, Y. 2019. Metapath-guided heterogeneous graph neural network for intent recommendation. In _SIGKDD_ , 2478–2486.
* Friedman, Geiger, and Goldszmidt (1997) Friedman, N.; Geiger, D.; and Goldszmidt, M. 1997. Bayesian network classifiers. _Machine learning_ , 29(2): 131–163.
* Geiger and Heckerman (1994) Geiger, D.; and Heckerman, D. 1994. Learning gaussian networks. In _Uncertainty Proceedings 1994_ , 235–243. Elsevier.
* Grzegorczyk and Husmeier (2011) Grzegorczyk, M.; and Husmeier, D. 2011. Non-homogeneous dynamic Bayesian networks for continuous data. _Machine Learning_ , 83(3): 355–419.
* Hamilton (2020) Hamilton, J. D. 2020. _Time series analysis_. Princeton university press.
* Heckerman, Geiger, and Chickering (1995) Heckerman, D.; Geiger, D.; and Chickering, D. M. 1995. Learning Bayesian networks: The combination of knowledge and statistical data. _Machine learning_ , 20(3): 197–243.
* Hox and Bechger (1998) Hox, J. J.; and Bechger, T. M. 1998. An introduction to structural equation modeling.
* Huang et al. (2018) Huang, B.; Zhang, K.; Lin, Y.; Schölkopf, B.; and Glymour, C. 2018. Generalized score functions for causal discovery. In _SIGKDD_.
* Hyvärinen and Smith (2013) Hyvärinen, A.; and Smith, S. M. 2013. Pairwise likelihood ratios for estimation of non-Gaussian structural equation models. _Journal of Machine Learning Research_ , 14(Jan): 111–152.
* Kilian (2013) Kilian, L. 2013. Structural vector autoregressions. In _Handbook of research methods and applications in empirical macroeconomics_. Edward Elgar Publishing.
* Lachapelle et al. (2019) Lachapelle, S.; Brouillard, P.; Deleu, T.; and Lacoste-Julien, S. 2019. Gradient-based neural dag learning. _arXiv preprint arXiv:1906.02226_.
* Lee and Honavar (2016) Lee, S.; and Honavar, V. 2016. On learning causal models from relational data. In _AAAI_.
* Loh and Bühlmann (2014) Loh, P.-L.; and Bühlmann, P. 2014. High-dimensional learning of linear causal networks via inverse covariance estimation. _The Journal of Machine Learning Research_ , 15(1): 3065–3105.
* Luca (2016) Luca, M. 2016. Reviews, reputation, and revenue: The case of Yelp. com. _Harvard Business School NOM Unit Working Paper_ , (12-016).
* Maier et al. (2013) Maier, M.; Marazopoulou, K.; Arbour, D.; and Jensen, D. 2013. A sound and complete algorithm for learning causal models from relational data. _arXiv preprint arXiv:1309.6843_.
* Maier et al. (2010) Maier, M.; Taylor, B.; Oktay, H.; and Jensen, D. 2010. Learning causal models of relational domains. In _AAAI_ , volume 24.
* Neuberg (2003) Neuberg, L. G. 2003. Causality: models, reasoning, and inference, by judea pearl, cambridge university press, 2000. _Econometric Theory_ , 19(4): 675–685.
* Newman (2018) Newman, M. 2018. _Networks_. Oxford university press.
* Ott, Imoto, and Miyano (2003) Ott, S.; Imoto, S.; and Miyano, S. 2003. Finding optimal models for small gene networks. In _Biocomputing 2004_ , 557–567. World Scientific.
* Pamfil et al. (2020) Pamfil, R.; Sriwattanaworachai, N.; Desai, S.; Pilgerstorfer, P.; Georgatzis, K.; Beaumont, P.; and Aragam, B. 2020. Dynotears: Structure learning from time-series data. In _AISTATS_.
* Pearl (2011) Pearl, J. 2011. Bayesian networks.
* Pearl (2014) Pearl, J. 2014. _Probabilistic reasoning in intelligent systems: networks of plausible inference_. Elsevier.
* Rossi et al. (2020) Rossi, E.; Chamberlain, B.; Frasca, F.; Eynard, D.; Monti, F.; and Bronstein, M. 2020. Temporal graph networks for deep learning on dynamic graphs. _arXiv preprint arXiv:2006.10637_.
* Sankar et al. (2020) Sankar, A.; Wu, Y.; Gou, L.; Zhang, W.; and Yang, H. 2020. Dysat: Deep neural representation learning on dynamic graphs via self-attention networks. In _Proceedings of the 13th international conference on web search and data mining_ , 519–527.
* Song, Kolar, and Xing (2009) Song, L.; Kolar, M.; and Xing, E. 2009. Time-varying dynamic bayesian networks. In _NeurIPS_.
* Spirtes et al. (2000) Spirtes, P.; Glymour, C. N.; Scheines, R.; and Heckerman, D. 2000. _Causation, prediction, and search_. MIT press.
* Spirtes, Meek, and Richardson (2013) Spirtes, P. L.; Meek, C.; and Richardson, T. S. 2013. Causal inference in the presence of latent variables and selection bias.
* Wu et al. (2020) Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; and Philip, S. Y. 2020. A comprehensive survey on graph neural networks. _IEEE transactions on neural networks and learning systems_ , 32(1): 4–24.
* Yu et al. (2019) Yu, Y.; Chen, J.; Gao, T.; and Yu, M. 2019. DAG-GNN: DAG structure learning with graph neural networks. In _ICML_.
* Zheng et al. (2018) Zheng, X.; Aragam, B.; Ravikumar, P.; and Xing, E. P. 2018. Dags with no tears: Continuous optimization for structure learning. In _NeurIPS_.
* Zheng et al. (2020) Zheng, X.; Dan, C.; Aragam, B.; Ravikumar, P.; and Xing, E. 2020. Learning sparse nonparametric dags. In _International Conference on Artificial Intelligence and Statistics_ , 3414–3425. PMLR.
* Zhou (2009) Zhou, S. 2009. Thresholding procedures for high dimensional variable selection and statistical estimation. _NeurIPS_.
* Zhu et al. (1997) Zhu, C.; Byrd, R. H.; Lu, P.; and Nocedal, J. 1997. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization. _ACM Transactions on mathematical software (TOMS)_ , 23(4): 550–560.
# Radial velocity confirmation of a hot super-Neptune discovered by TESS with
a warm Saturn-mass companion
E. Knudstrup,1,2 D. Gandolfi,3 G. Nowak,4,5 C. M. Persson,6 E. Furlan,7 J.
Livingston,8,9,10 E. Matthews,11 M. S. Lundkvist,1 M. L. Winther,1 J. L.
Rørsted,1 S. H. Albrecht,1 E. Goffo,3,12 I. Carleo,4 H. J. Deeg,4,5 K. A.
Collins,13 N. Narita,14,8,4 H. Isaacson,15,7 S. Redfield,16 F. Dai,17,18 T.
Hirano,8,9 J. M. Akana Murphy,19 C. Beard,20 L. A. Buchhave,21 S. Cary,22 A.
Chontos,23 I. Crossfield,24 W. D. Cochran,25 D. Conti,26 P. A. Dalba,19,27 M.
Esposito,12 S. Fajardo-Acosta,7 S. Giacalone,15 S. K. Grunblatt,28 P.
Guerra,29 A. P. Hatzes,12 R. Holcomb,20 F. G. Horta, A. W. Howard,18 D.
Huber,30 J. M. Jenkins,31 P. Kabáth,32 S. Kane,33 J. Korth,6 K. W. F.
Lam,35 K. V. Lester,31 R. Matson,36 K. K. McLeod,22 J. Orell-Miquel,4,5 F.
Murgas,4,5 E. Palle,4,5 A. S. Polanski,24 G. Ricker,37 P. Robertson,20 R.
Rubenzahl,18 J. E. Schlieder,38 S. Seager,39,37,40 A. M. S. Smith,35 P.
Tenenbaum,27,31 E. Turtelboom,15 R. Vanderspek,37 L. Weiss,41 and J. Winn23
1Stellar Astrophysics Centre, Department of Physics and Astronomy, Aarhus
University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark
2Nordic Optical Telescope, Rambla José Ana Fernández Pérez 7, ES-38711 Breña
Baja, Spain
3Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria
1, I-10125, Torino, Italy
4Instituto de Astrofísica de Canarias (IAC), E-38205 La Laguna, Tenerife,
Spain
5Dept. Astrofísica, Universidad de La Laguna (ULL), E-38206 La Laguna,
Tenerife, Spain
6Department of Space, Earth and Environment, Chalmers University of
Technology, Onsala Space Observatory, SE-439 92 Onsala, Sweden
7NASA Exoplanet Science Institute, Caltech/IPAC, Mail Code 100-22, 1200 E.
California Blvd., Pasadena, CA 91125, USA
8Astrobiology Center, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
9National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo
181-8588, Japan
10Department of Astronomy, The Graduate University for Advanced Studies
(SOKENDAI), 2-21-1 Osawa, Mitaka, Tokyo, Japan
11Département d’Astronomie, Université de Genève, Chemin Pegasi 51b, 1290
Versoix, Suisse
12Thüringer Landessternwarte, Tautenburg Sternwarte 5, 07778 Tautenburg,
Germany
13Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
14Komaba Institute for Science, The University of Tokyo, 3-8-1 Komaba, Meguro,
Tokyo 153-8902, Japan
15Department of Astronomy, 501 Campbell Hall, University of California,
Berkeley, CA 94720, USA
16Astronomy Department and Van Vleck Observatory, Wesleyan University,
Middletown, CT 06459, USA
17Division of Geological and Planetary Sciences, 1200 E California Blvd,
Pasadena, CA, 91125, USA
18Department of Astronomy, California Institute of Technology, Pasadena, CA
91125, USA
19Department of Astronomy and Astrophysics, University of California, Santa
Cruz, CA 95064, USA
20Department of Physics & Astronomy, University of California Irvine, Irvine,
CA 92697, USA
21DTU Space, National Space Institute, Technical University of Denmark,
Elektrovej 328, DK-2800 Kgs. Lyngby, Denmark
22Department of Astronomy, Wellesley College, Wellesley, MA 02481, USA
23Department of Astrophysical Sciences, Princeton University, Princeton, NJ
08544, USA
24Department of Physics and Astronomy, University of Kansas, Lawrence, KS
66045, USA
25Center for Planetary Systems Habitability and McDonald Observatory, The
University of Texas at Austin, Austin TX USA 78712
26American Association of Variable Star Observers, 185 Alewife Brook Parkway,
Suite 410, Cambridge, MA 02138, USA
27SETI Institute, Carl Sagan Center, 339 Bernardo Ave, Suite 200, Mountain
View, CA 94043, USA
28Department of Physics and Astronomy, Johns Hopkins University, 3400 N
Charles St, Baltimore, MD 21218, USA
29Observatori Astronòmic Albanyà, Camí de Bassegoda S/N, Albanyà 17733,
Girona, Spain
30Institute for Astronomy, University of Hawai‘i, 2680 Woodlawn Drive,
Honolulu, HI 96822, USA
31NASA Ames Research Center, Moffett Field, CA 94035, USA
32Astronomical Institute of the Czech Academy of Sciences, Fričova 298, 2516,
Ondřejov
33Department of Earth and Planetary Sciences, University of California,
Riverside, CA 92521, USA
34Department of Space, Earth and Environment, Chalmers University of
Technology, Chalmersplatsen 4, 412 96 Gothenburg, Sweden
35Institute of Planetary Research, German Aerospace Center (DLR),
Rutherfordstrasse 2, D-12489 Berlin, Germany
36U.S. Naval Observatory, Washington, D.C. 20392, USA
37Department of Physics and Kavli Institute for Astrophysics and Space
Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
38Exoplanets and Stellar Astrophysics Laboratory, NASA Goddard Space Flight
Center, 8800 Greenbelt Road, Greenbelt, MD, USA
39Department of Earth, Atmospheric, and Planetary Sciences, Massachusetts
Institute of Technology, Cambridge, MA 02139, USA
40Department of Aeronautics and Astronautics, Massachusetts Institute of
Technology, Cambridge, MA 02139, USA
41Department of Physics, University of Notre Dame, Notre Dame, IN 46556, USA
E-mail: <EMAIL_ADDRESS> (EK)
NASA Sagan Fellow
Henry Norris Russell Fellow
Heising-Simons 51 Pegasi b Postdoctoral Fellow
Citizen Scientist
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
We report the discovery and confirmation of the planetary system TOI-1288.
This late G dwarf harbours two planets: TOI-1288 b and TOI-1288 c. We combine
TESS space-borne and ground-based transit photometry with HARPS-N and HIRES
high-precision Doppler measurements, which we use to constrain the masses of
both planets in the system and the radius of planet b. TOI-1288 b has a period
of $2.699835^{+0.000004}_{-0.000003}$ d, a radius of $5.24\pm 0.09$ R⊕, and a
mass of $42\pm 3$ M⊕, making this planet a hot transiting super-Neptune
situated right in the Neptunian desert. This desert refers to a paucity of
Neptune-sized planets on short period orbits. Our 2.4-year-long Doppler
monitoring of TOI-1288 revealed the presence of a Saturn-mass planet on a
moderately eccentric orbit ($0.13^{+0.07}_{-0.09}$) with a minimum mass of
$84\pm 7$ M⊕ and a period of $443^{+11}_{-13}$ d. The five sectors of TESS
data do not cover our expected mid-transit time for TOI-1288 c, and we do not
detect a transit of this planet in these sectors.
###### keywords:
planets and satellites: detection – techniques: photometric – techniques:
radial velocities
## 1 Introduction
As the tally of exoplanets has now surpassed 5,000, we can make more informed
inferences about planet formation and evolution. A wealth of architectures and
different planet types have been discovered, some of which are quite different
from the planets found in the Solar System. We first learned about giant
planets on short-period orbits, the so-called hot Jupiters, which have been
found in abundance owing to the detection biases that favour them. The Kepler space mission
(Borucki et al., 2010) showed us that, while super-Earths appear to be quite
common (Howard et al., 2010a; Mayor et al., 2011), we see a significant dearth
of Neptune mass planets on short period orbits, a paucity referred to as the
Neptunian “desert” (Mazeh et al., 2016).
In addition to this paucity, studies on the planetary initial mass function
(e.g., Mordasini et al., 2009) have found a minimum in the mass range where
super-Neptunes reside, namely from around 30 M⊕ to 70 M⊕. This valley has been
interpreted as the division between planets dominated by solids and gas giants
that have undergone runaway gas accretion (Ida & Lin, 2004). Finding and
characterising planets in this mass range could therefore help shed light on
why some proto-planets undergo runaway accretion while others do not.
Most of the super-Neptunes were detected by Kepler around relatively faint
stars, meaning that precise mass determinations only exist for a few of these
(e.g., Kepler-101b, Bonomo et al., 2014). The Transiting Exoplanet Survey
Satellite (TESS; Ricker et al., 2015) along with ground-based efforts have now
detected more of these super-Neptunes in brighter systems for which precise
radial velocities (RVs) are more viable, enabling both radius and mass
determinations. Therefore, we can also determine the bulk density and make
inferences about the composition. Further insight into the composition and
possible migration history can be gained from atmospheric studies, which have
also been used to rule out certain mechanisms; for instance, Vissapragada et
al. (2022) ruled out photoevaporation as the mechanism responsible for
shaping the upper edge of the Neptunian desert.
Here we report on the discovery and characterisation of the TOI-1288
planetary system, in which we have discovered a hot super-Neptune, TOI-1288
b, with an outer Saturn-mass companion, TOI-1288 c. Both planets are hosted
by a late G dwarf.
The paper is structured as follows. In Section 2 we describe our observations,
which include ground-based photometry as well as that from TESS. We have also
acquired speckle and adaptive optics (AO) imaging to search for blended
companions. In addition we have carried out extensive spectroscopic follow-up
to confirm and characterise this planetary system. In Section 3 we present our
analysis of the data in which we model the photometry and spectroscopy
jointly. The results are presented in Section 4 and we discuss them in Section
5. Finally, we give our conclusions in Section 6.
Table 1: System parameters. Catalog IDs, coordinates, and magnitudes for the
TOI-1288 system.
Parameter | Value | Name
---|---|---
TICa | 365733349 |
Gaia DR3b | 2245652826430109184 |
TYCc | 4255-1629-1 |
$\alpha$ (J2000)b | 20:52:40.09 | Right ascension (R.A.)
$\delta$ (J2000)b | +65:36:31.59 | Declination (Dec.)
$\mu_{\alpha}$ (mas yr-1)b | $43.496\pm 0.017$ | Proper motion R.A.
$\mu_{\delta}$ (mas yr-1)b | $-68.775\pm 0.017$ | Proper motion Dec.
$\varpi$ (mas)b | $8.720\pm 0.013$ | Parallax
RV (km s-1)b | $-68.1\pm 0.6$ | Radial velocity
$G$b | $10.4507\pm 0.0018$ | Gaia $G$ magnitude
$B_{\mathrm{P}}$b | $10.855\pm 0.006$ | Gaia $B_{\mathrm{P}}$ magnitude
$R_{\mathrm{P}}$b | $9.873\pm 0.003$ | Gaia $R_{\mathrm{P}}$ magnitude
$V$c | $10.44\pm 0.04$ | Tycho $V$ magnitude
$B$c | $11.38\pm 0.07$ | Tycho $B$ magnitude
$J$d | $9.19\pm 0.02$ | 2MASS $J$ magnitude
$H$d | $8.84\pm 0.03$ | 2MASS $H$ magnitude
$K$d | $8.78\pm 0.02$ | 2MASS $K$ magnitude
* a
https://exofop.ipac.caltech.edu/tess/.
* b
Gaia Collaboration et al. (2022).
* c
Høg et al. (2000).
* d
Cutri et al. (2003).
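The Gaia parallax in Table 1 directly fixes the distance to the system, $d\,[\mathrm{pc}] = 1000/\varpi\,[\mathrm{mas}]$; at this fractional precision ($\sim$0.15 per cent), first-order error propagation is adequate. A short worked computation:

```python
# Distance from the Gaia DR3 parallax quoted in Table 1.
varpi, sigma_varpi = 8.720, 0.013      # parallax and uncertainty [mas]

d = 1000.0 / varpi                     # distance [pc], ~114.7 pc
sigma_d = d * sigma_varpi / varpi      # propagated uncertainty, ~0.17 pc
```

More careful (Bayesian) distance inference matters mainly for low-significance parallaxes, which is not the case here.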
## 2 Observations
The TOI-1288 system has been observed with different space- and ground-based
facilities, including both photometric and spectroscopic observations, as well
as high-resolution imaging. System parameters for TOI-1288 are summarised in
Table 1.
### 2.1 Photometry
TESS observed TOI-1288 during Sectors 15, 16, 17, 18, and 24 (August 15 to
November 27, 2019, and April 16 to May 13, 2020). This candidate was
identified by the Science Processing Operation Center (SPOC; Jenkins et al.,
2016) team at the NASA Ames Research Center, who searched the light curves,
which are extracted through simple aperture photometry (SAP; Twicken et al.,
2010; Morris et al., 2020) and processed using the Presearch Data Conditioning
(PDC; Smith et al., 2012; Stumpe et al., 2012; Stumpe et al., 2014) algorithm.
The SPOC team searches the PDC-SAP light curves for transit-like signals with
an adaptive, noise-compensating matched filter (Jenkins, 2002; Jenkins et al.,
2010) using a pipeline that iteratively performs multiple transiting planet
searches and stops when it fails to find subsequent transit-like signatures
above the detection threshold of a signal-to-noise ratio (SNR) of 7.1. The
results were published in the Data Validation Report (DVR; Twicken et al.,
2018; Li et al., 2019), and as the light curve shows a $\sim$0.25% dip
occurring every 2.7 d with an SNR of around 62, it was identified as a TESS
Object of Interest (TOI; Guerrero et al., 2021) and given the ID TOI-1288. The
results of the difference image centroiding test were also presented in the
DVR, which located the source of the transit signal to within $1.3\pm
2.6^{\prime\prime}$ in the Sector 14-26 multi-sector transit search.
Figure 1: TESS image of TOI-1288. Cutout of a TESS image of TOI-1288 from
Sector 15. The red dots denote Gaia sources with their sizes scaled to the
difference in $G$ magnitude to TOI-1288. The grey dot denotes the position of
TOI-1288. The hatched area shows the aperture mask we used to create the light
curves, and the black circle illustrates the separation to the brightest
nearby star at $\sim$23 arcsec.
An independent search for transit signals was performed using the Détection
Spécialisée de Transits (DST; Cabrera et al., 2012) pipeline on the PDCSAP
light curves. A transit signal with orbital period of $2.70\pm 0.02$ days and
a transit depth of $\sim$0.25% was detected, consistent with the signal
detected by the SPOC pipeline.
Fig. 1 displays the TESS image in the immediate vicinity of TOI-1288 with
nearby Gaia DR3 sources (Gaia Collaboration et al., 2022). All the TESS
photometry from Sectors 15-18 and Sector 24 is displayed in Fig. 2, where we
show the background corrected light curve at the top. This was done using the
RegressionCorrector implemented in lightkurve (Lightkurve Collaboration et
al., 2018). Overplotted in grey is a model light curve created using batman
(Kreidberg, 2015) with transit parameters stemming from an initial fit. We
used this to remove the transit signal before removing outliers from the light
curve. In the middle light curve the transits have been removed, and we have
applied a Savitzky-Golay filter (Savitzky & Golay, 1964) to temporarily
flatten the light curve. We then removed outliers through sigma clipping at
5$\sigma$; these outliers are highlighted in red. Finally, in the bottom
light curve we have re-injected the transits into the unfiltered light curve,
as we want to account for any trend while fitting, as described in Section 3.
Figure 2: TESS photometry. The light curve at the top shows the background
corrected light curve. The grey line is a transit model created from
parameters stemming from an initial fit. The transit model has been used to
temporarily remove the transit in the light curve shown in the middle. Here
the grey line shows a Savitzky-Golay filter (as implemented in Lightkurve
Collaboration et al., 2018) used to filter and detrend the data for outlier
rejection. The red points are outliers removed through a 5$\sigma$ sigma
clipping. The TESS data with outliers removed and the transits re-injected is
shown in the light curve at the bottom. The white line is the GP we use to
detrend the data (see Section 3).
#### 2.1.1 Light Curve Follow-up
We acquired ground-based time-series follow-up photometry of TOI-1288 as part
of the TESS Follow-up Observing Program (TFOP; Collins, 2019;
https://tess.mit.edu/followup) using various facilities, as listed in Table
6, from October 2019 to September 2021. This is done in an attempt to (1)
rule out or identify nearby eclipsing binaries (NEBs) as potential sources of
the detection in the TESS data, (2) detect the transit-like events on target
to confirm the depth and thus the TESS photometric deblending factor, (3)
refine the TESS ephemeris, and (4) place constraints on transit depth
differences across optical filter bands. We used the TESS Transit Finder,
which is a customized version of the Tapir software package (Jensen, 2013), to
schedule our transit observations. The images were calibrated and the
photometric data were extracted using the AstroImageJ (AIJ) software package
(Collins et al., 2017), except the Las Cumbres Observatory Global Telescope
(LCOGT; Brown et al., 2013) images, which were calibrated by the standard
LCOGT BANZAI pipeline (McCully et al., 2018), and the Multicolor Simultaneous
Camera for studying Atmospheres of Transiting exoplanets (MuSCAT; Narita et
al., 2015) data, which were extracted using the custom pipeline described in
(Fukui et al., 2011).
The individual observations are detailed in Table 6 and the light curves are
shown in Fig. 14. All photometric apertures exclude flux from all known Gaia
DR3 stars near TOI-1288, except the TESS-band 16.4 magnitude neighbor
1.5″ southwest, which is nominally too faint to be capable of causing the
detection in the TESS photometric aperture (individual follow-up photometric
apertures are listed in Table 6). Transit events consistent with the TESS
TOI-1288 b transit signal were detected in each light curve and are included
in the joint model described in Section 3.
### 2.2 Speckle/AO imaging
Nearby sources that are blended in the aperture mask used for the photometry
can contaminate the light curve and alter the measured radius; it is thus
important to vet for close visual companions. Furthermore, a close companion
could be the cause of a false positive if the companion is itself an eclipsing
binary (Ciardi et al., 2015). We therefore collected both adaptive optics and
speckle imaging. The observations are described below and summarised in Table
2.
#### 2.2.1 WIYN/NESSI
On the nights of 2019 November 17 and 2021 October 29, TOI-1288 was observed
with the NESSI speckle imager (Scott, 2019), mounted on the 3.5 m WIYN
telescope at Kitt Peak, AZ, USA. NESSI simultaneously acquires data in two
bands centered at 562 nm and 832 nm using high speed electron-multiplying CCDs
(EMCCDs). We collected and reduced the data following the procedures described
in Howell et al. (2011). The resulting reconstructed image achieved a contrast
of $\Delta\mathrm{mag}\,\approx\,5.75$ at a separation of 1″ in the 832 nm
band (see Fig. 3).
Figure 3: WIYN/NESSI contrast curves from 2019 (top) and 2021 (bottom). Two
filter speckle imaging contrast curves for TOI-1288 from NESSI. The insets
show the reconstructed 562 nm and 832 nm images with 1 arcsec scale bars.
On both nights we detected a companion at a separation of $\sim$1.2″,
although only in the 832 nm filter. Additionally, on the night of 2021
October 29 the pipeline detected a companion at a separation of 0.065″
(position angle of $313^{\circ}$ and $\Delta\mathrm{mag}=2.57$). However,
this companion is close
to the detection limit (Scott, 2019), and the fit that produced it relied on
image elongation (as opposed to being fully separated from the primary), which
is possible to get from a mismatch between the science target and the (single)
comparison star. Furthermore, it was not detected in the 2019 November 17 data
(despite being of higher quality), nor was it detected in any of the other
speckle or AO images (see below). We therefore conclude that the inner
companion is a spurious detection most likely caused by a data artifact.
#### 2.2.2 Gemini/’Alopeke
TOI-1288 was observed with the ’Alopeke speckle instrument on the Gemini North
telescope, HI, USA, (Scott et al., 2021) on 2020 June 9, 2021 June 24, 2021
October 22, and 2022 May 14 (all dates in UT). Observations were obtained
simultaneously in two narrow-band filters centered at 562 nm (width=54 nm) and
at 832 nm (width=40 nm). Between 6 and 7 sets of 1000$\times$0.06 s exposures
were collected and then reduced with the standard reduction pipeline using
Fourier analysis (see, e.g., Howell et al., 2011, for an overview). The
reduced data products include reconstructed images and 5$\sigma$ contrast
curves. TOI-1288 was very faint in most data sets and was not even detected
at 562 nm in one of them. At 832 nm, in addition to the primary star, a faint
($\Delta M\sim 5.9$) companion was detected at a projected separation of
$\sim$1.2′′-1.3′′ in the data from 2020 June 9, 2021 June 24, 2021 October
22, and 2022 May 14. An even fainter ($\Delta M\sim 7$) companion was
detected at a separation of $\sim$1.4′′-1.5′′ in the data from 2021 June 24,
2021 October 22, and 2022 May 14.
#### 2.2.3 Gemini/NIRI
We collected adaptive optics images of TOI-1288 with the Gemini Near-Infrared
Imager (NIRI; Hodapp et al., 2003) on 2019 November 8. We collected 9 science
frames, each with an exposure time of 6.8 s, and dithered the telescope by
$\sim$2″ between each frame, thereby allowing for the science frames
themselves to serve as sky background frames. The target was observed in the
Br-$\gamma$ filter centered at 2.166 $\mu$m. Data processing consisted of bad
pixel removal, flat fielding, and subtraction of the sky background. We then
aligned the frames based on the position of the primary star, and coadded the
images.
The total field of view is around 26′′ square, with optimum sensitivity in the
central $\sim$22′′ square. We again identified two visual candidates in the
field of view. The brighter companion is at a separation of 1.152′′, a
position angle of 289.3 degrees counter-clockwise of north, and is
4.77$\pm$0.03 mag fainter than the host in the Br-$\gamma$ band; the fainter
companion is at a separation of 1.579′′, a position angle of 207.7 degrees
counter-clockwise of north and is 5.88$\pm$0.04 mag fainter than the host.
Figure 4: Gemini/NIRI contrast curve. AO imaging contrast curve for TOI-1288.
The inset shows the reconstructed Br-$\gamma$ image with the two detected
companions highlighted.
We measured the sensitivity of our observations as a function of radius by
injecting fake companions and scaling their brightness such that they could be
detected at 5$\sigma$. The contrast sensitivity is 5.56 mag fainter than the
host at a separation of 250 mas, and 8.1 mag fainter than the host in the
background limited regime, beyond $\sim$1′′ from the target. The contrast
sensitivity as a function of radius and a high resolution image of the star
are shown in Fig. 4; we show the curve for the inner 3′′ only, but note that
the data are sensitive to candidates within 13′′ in all directions. From our
speckle and AO imaging we have thus identified two nearby companions.
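The injection procedure described above can be sketched with an idealised Gaussian PSF; the image, flux level, and noise below are synthetic stand-ins, not the NIRI data:

```python
import numpy as np

def gaussian_psf(shape, x0, y0, fwhm, flux=1.0):
    """Synthetic Gaussian PSF with total flux `flux` centred on (x0, y0)."""
    sigma = fwhm / 2.355
    y, x = np.indices(shape)
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma**2))
    return flux * psf / psf.sum()

def contrast_curve(image, cx, cy, fwhm, host_flux, radii, nsigma=5.0, dr=2.0):
    """5-sigma contrast (mag) vs radius: the faintest injected companion whose
    PSF peak would still stand nsigma above the local annulus noise."""
    y, x = np.indices(image.shape)
    rad = np.hypot(x - cx, y - cy)
    peak_per_flux = gaussian_psf(image.shape, cx, cy, fwhm).max()  # unit-flux peak
    contrasts = []
    for r in radii:
        noise = image[(rad > r - dr) & (rad < r + dr)].std()
        flux_limit = nsigma * noise / peak_per_flux  # faintest recoverable flux
        contrasts.append(-2.5 * np.log10(flux_limit / host_flux))
    return np.array(contrasts)

rng = np.random.default_rng(1)
shape, fwhm, host_flux = (201, 201), 4.0, 1.0e6
image = gaussian_psf(shape, 100, 100, fwhm, host_flux)
image += rng.normal(0.0, 20.0, shape)  # flat background noise (illustrative)
print(contrast_curve(image, 100, 100, fwhm, host_flux, [10, 20, 40]))
```

On real AO data the annulus noise would include speckle residuals that fall off with radius, which is what produces the rising contrast curve of Fig. 4.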
Table 2: Companions detected in Speckle, AO, and Gaia. $\rho$ and $\Delta$ are
the separation and the difference in magnitude from the central target (host),
respectively. $\theta$ is the position angle from the brighter of the targets
to the fainter component, measured from North through East. Star 1 is the
brighter companion and star 2 the fainter one. The uncertainties for $\rho$
and $\theta$ for the ’Alopeke data are estimated to be around 5 mas and 1 deg,
respectively, while the uncertainties for $\Delta$mag come out to around 0.5
mag for the closer companion and 1 mag for the fainter one.
Date | Star | $\rho$ | $\Delta$ | $\theta$ | Type | Filter | Instrument | Telescope
---|---|---|---|---|---|---|---|---
(UT) | | (arcsec) | (mag) | (deg) | | | |
2019-11-08 | 1 | 1.152 | $4.77\pm 0.03$ | 289.3 | AO | Br-$\gamma$ | NIRI | Gemini
2019-11-08 | 2 | 1.579 | $5.88\pm 0.04$ | 207.7 | AO | Br-$\gamma$ | NIRI | Gemini
2019-11-17a | 1 | 1.123 | 5.90 | 289.5 | Speckle | 832 nm | NESSI | WIYN
2020-06-09a | 1 | 1.172 | 5.94 | 289.5 | Speckle | 832 nm | ’Alopeke | Gemini
2021-06-24a | 1 | 1.256 | 6.4 | 292.0 | Speckle | 832 nm | ’Alopeke | Gemini
2021-06-24a | 2 | 1.516 | 6.8 | 211.7 | Speckle | 832 nm | ’Alopeke | Gemini
2021-10-22a | 1 | 1.233 | 5.92 | 293.4 | Speckle | 832 nm | ’Alopeke | Gemini
2021-10-22a | 2 | 1.468 | 7.34 | 212.3 | Speckle | 832 nm | ’Alopeke | Gemini
2021-10-29a | 1 | 1.282 | 5.48 | 293.6 | Speckle | 832 nm | NESSI | WIYN
2022-05-14a | 1 | 1.306 | 5.83 | 293.5 | Speckle | 832 nm | ’Alopeke | Gemini
2022-05-14a | 2 | 1.443 | 7.90 | 214.3 | Speckle | 832 nm | ’Alopeke | Gemini
Epoch=2016.0 | 2 | 1.74 | 6.41 | 198 | Photometry | $G$ | - | Gaia
* a
Observations were also carried out in the 562 nm filter, but the companions
were not detected in this filter.
#### 2.2.4 Gaia
As is also apparent from Fig. 1, one of the two aforementioned companions is
also detected by Gaia DR3. The position of this Gaia companion is in good
agreement with it being the fainter of the two companions seen in the Gemini
’Alopeke and AO observations. This is most likely also the companion seen in
the light curve follow-up in Section 2.1.1. The Gaia detection is summarised
in Table 2 along with the speckle and AO observations.
Figure 5: Sky positions of blended companions. The blue star denotes TOI-1288,
while red stars are the relative positions for the companions detected in the
Speckle/AO images. Their sizes are scaled according to their relative
brightness with the larger stars corresponding to $\Delta$Br-$\gamma=4.77$ and
the smaller one corresponding to $\Delta$Br-$\gamma=5.88$. The transparent
trails show how the companions move relative to TOI-1288 as a function of time
with the opaque symbols marking the most recent positions. The orange star is the
relative position of the companion detected by Gaia, which is most likely the
fainter companion. The blue line shows the proper motion of TOI-1288 over the
course of 3.5 years.
#### 2.2.5 Are the companions bound?
In the following we will be referring to the brighter companion as star 1, and
the fainter companion as star 2. To test whether these companions are bound,
we study the positions of the host and the candidate companion in colour-
magnitude diagrams (CMDs), loosely following the method outlined in Hirsch et
al. (2017). We used the measured photometry in the 832 nm and Br-$\gamma$
filters for star 1, and the Gaia $G$ and Br-$\gamma$ filters for star 2. In
each case, we used the stellar parameters and uncertainties of the host
($\tau_{\star}$, [Fe/H], $\log g$, and $d$ from the SED fit in Table 3) to
generate a set of 1000 randomly sampled isochrones. For each filter pair, we
determined a companion CMD position from the set of isochrones based on the
$\Delta$-magnitude of the companion in each filter, and then calculated a
weighted average of these measurements. This can then be compared to the
observed CMD position of the companion as seen in Fig. 15. For star 1, the
observed and predicted positions agree to within 0.3$\sigma$, which could
indicate that these objects are bound. This is further supported by their
relative proximity on the sky. However, this could also be chance alignment
for a background star with the right colour profile (Hirsch et al., 2017). For
star 2, the observed and predicted CMD positions do not match, with a
disagreement of 3.5$\sigma$. This strongly suggests that star 2 is a
background star, and is not physically bound to the TOI-1288 system.
In Fig. 5 we show the relative positions of the companions detected in
Speckle/AO and the one detected in Gaia. Evidently, the detected companions
seem to be moving over the time span covered by the different observations in
a similar direction, which is more or less opposite to the proper motion of
TOI-1288. This clearly suggests that neither of the two companions is bound; both are likely background stars. Finally, we note that the Gaia position is an
average of different scans taken from July 2014 to May 2017 (for DR3) and
might be less reliable. Furthermore, the reason that only the fainter
companion was detected in the Gaia data could be that at an earlier epoch
TOI-1288 and the brighter companion star were likely closer on the sky, and it
would thus have been more difficult for Gaia to detect this companion.
However, as seen in the speckle/AO observations, due to the proper motion of
TOI-1288, the separation between TOI-1288 and this background star is
increasing, meaning that it might be possible to detect it in future data
releases.
### 2.3 High-resolution spectroscopy
#### 2.3.1 FIES
We performed high-resolution (R = $67\,000$) reconnaissance spectroscopy of
TOI-1288 using the FIber-fed Echelle Spectrograph (FIES; Frandsen & Lindberg,
1999; Telting et al., 2014) mounted at the Nordic Optical Telescope (NOT;
Djupvik & Andersen, 2010) at Roque de los Muchachos Observatory, La Palma,
Spain. The FIES spectra were extracted following Buchhave et al. (2010), and
stellar parameters were derived using the stellar parameter classification
(SPC; Buchhave et al., 2012, 2014) tool. The resulting parameters are
tabulated in Table 3.
#### 2.3.2 HARPS-N
We acquired 57 high-resolution (R = 115 000) spectra of TOI-1288 utilizing the
High Accuracy Radial velocity Planetary Searcher for the Northern hemisphere
(HARPS-N; Cosentino et al., 2012) attached at the 3.58 m Telescopio Nazionale
Galileo (TNG), also located at Roque de los Muchachos observatory. The spectra
were collected between 19 November 2019 and 23 May 2022. We set the exposure
time to 1200-2700 s based on the sky conditions and scheduling constraints,
which led to a median SNR of $\sim$60 per pixel at 550 nm. We used the second
fibre of the instrument to monitor the sky background.
The HARPS-N spectra were reduced and extracted using the dedicated Data
Reduction Software (DRS; Lovis & Pepe, 2007) available at the telescope. The
DRS also provides the full width at half maximum (FWHM) and the bisector
inverse slope (BIS) of the cross-correlation function (CCF), which was
obtained by cross-correlating the observed échelle spectra against a G2
numerical mask. In this work, we used the Template-Enhanced Radial velocity
Re-analysis Application (TERRA; Anglada-Escudé & Butler, 2012) to extract
precise RV measurements, along with additional activity indicators (namely,
the H$\alpha$, S-index, and Na D indices).
#### 2.3.3 HIRES
We also gathered 28 spectra with the High Resolution Echelle Spectrometer (HIRES;
Vogt et al., 1994) mounted on the 10 m Keck-1 at the Keck Observatory,
Hawai’i, USA. Observations were carried out between 10 December 2019 and 11
October 2021 with exposure times varying from 280-1000 s depending on sky
conditions, resulting in a median SNR of $\sim$72 near the spectral center of
the image. The spectra were obtained with the iodine cell in the light path,
and the RV extraction followed the standard HIRES forward-modelling pipeline
(Howard et al., 2010b).
#### 2.3.4 Periodogram Analysis
All the RVs are shown in Fig. 6 and tabulated in Table 7. In Fig. 7 we have
calculated the generalised Lomb-Scargle (GLS; Lomb, 1976; Scargle, 1982)
periodogram. Evidently, the $\sim$2.7 d transiting signal is also detected in
the RVs, where a peak at this frequency clearly exceeds the false-alarm
probabilities (FAPs; at 0.1%, 1%, and 10%). We also see a significant peak at
much lower frequencies with a period of around $443$ d, which we ascribe to
the presence of a further out companion. Seeing the $443$ d-signal we searched
the TESS light curve for additional transits using the box least squares (BLS;
Kovács et al., 2002) algorithm after removing the transits from planet b
($\sim$2.7 d), but found no evidence for additional transiting signals.
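The period search can be illustrated with scipy's classical Lomb-Scargle periodogram (not the floating-mean GLS used here, but adequate as a sketch); the synthetic RVs below only mimic the amplitudes and periods quoted in this section:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
# Irregularly sampled synthetic RVs mimicking the quoted amplitudes/periods
t = np.sort(rng.uniform(0.0, 900.0, 85))  # days
rv = (20.7 * np.sin(2.0 * np.pi * t / 2.7)
      + 7.6 * np.sin(2.0 * np.pi * t / 443.0)
      + rng.normal(0.0, 2.0, t.size))

# Uniform frequency grid; lombscargle expects *angular* frequencies
freq = np.linspace(1.0 / 600.0, 1.0 / 1.2, 20000)  # cycles per day
power = lombscargle(t, rv - rv.mean(), 2.0 * np.pi * freq, normalize=True)

best_period = 1.0 / freq[np.argmax(power)]
print(f"strongest period: {best_period:.3f} d")
```

Subtracting the best-fitting sinusoid and recomputing the periodogram would then reveal the weaker 443 d signal, mirroring the pre-whitening sequence shown in Fig. 7.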
Figure 6: Radial velocities of TOI-1288. Top: The HARPS-N (blue) and HIRES
(orange) radial velocities as a function of time. The grey model shows the
combined signal for planet b and c. Bottom left: The radial velocities phased
to the period of planet b with the signal from planet c and the long-term trend
subtracted with the best-fitting model overplotted. Bottom right: The radial
velocities phased to the period of planet c with the signal from planet b and
the long-term trend subtracted with the best-fitting model overplotted.
We also detected another low frequency/long period peak in the GLS which seems
to be a long-term trend in the RVs. We have furthermore created GLS
periodograms for the activity indicators from the HARPS-N spectra shown in
Fig. 8. Evidently, the star is inactive and the 2.7 d and $443$ d signals do not
coincide with any appreciable peak in these metrics, meaning that they are
unlikely to come from stellar activity.
Figure 7: Generalised Lomb-Scargle diagram. The GLS created from the RVs in
Table 7. Top: The GLS after subtracting the systemic velocities for HARPS-N
and HIRES. The periods from Table 4 for planet b (green) and planet c (orange)
are shown as the vertical lines. The dashed, dashed-dotted, and solid
horizontal lines are the 0.1%, 1%, and 10% FAPs, respectively. Upper middle:
The GLS after subtracting the signal from planet b. The inset shows a close-up
around the period of planet c. Lower middle: The GLS after subtracting both
the signal from planet b and c. Lower: The GLS after subtracting both the
signal from planet b, c, and the long-term trend. Figure 8: Generalised Lomb-
Scargle diagram for activity indicators. The GLS created from the activity
indicators from the HARPS-N spectra. From top to bottom we show the GLS for
the bisector, H$\alpha$, S-index, Na D1, Na D2, and full width at half maximum (FWHM).
Symbols have the same meaning as in Fig. 7.
#### 2.3.5 Stellar modelling using SME and SED
In addition to the FIES reconnaissance spectroscopy we also made use of our
HARPS-N observations to derive stellar properties. We used co-added HARPS-N
spectra with the software SME (Spectroscopy Made Easy; Valenti & Piskunov, 1996; Piskunov & Valenti, 2017; http://www.stsci.edu/~valenti/sme.html),
a tool for fitting observations to synthetic spectra. A detailed description
of the modelling can be found in Fridlund et al. (2017) and Persson et al.
(2018). For this star, we held the micro- and macro-turbulent velocities,
$v_{\rm mic}$ and $v_{\rm mac}$, fixed in the modelling to 0.83 km s-1 (Bruntt
et al., 2010), and 3.0 km s-1 (Doyle et al., 2014), respectively. The
synthetic spectra were computed with the stellar atmosphere grid Atlas12
(Kurucz, 2013), and the atomic and molecular line data were taken from
VALD (http://vald.astro.uu.se; Ryabchikova et al., 2015). Our best model
found an effective temperature of $T_{\rm eff}=5123\pm 62$ K, an iron
abundance of [Fe/H]$=+0.10\pm 0.11$, a surface gravity of $\log
g_{\star}=4.23\pm 0.09$, and a projected rotational velocity of $v\sin
i_{\star}=1.3\pm 1.2$ km s-1. These results were checked with the empirical
code SpecMatch-Emp (Yee et al., 2017) and were found to agree within
$1~{}\sigma$.
Using the SME results as priors, we modelled the stellar radius with
ARIADNE (https://github.com/jvines/astroARIADNE; Vines & Jenkins, 2022)
fitting broadband photometry to the spectral energy distribution (SED). The
fitted bandpasses were the Johnson $B$ and $V$ magnitudes (APASS), the Gaia $G$, $G_{\rm BP}$, and $G_{\rm RP}$ magnitudes (eDR3), the $JHK_{S}$ magnitudes (2MASS), WISE W1-W2, and the Gaia
eDR3 parallax. The final radius was computed with Bayesian Model Averaging over four fitted atmospheric model grids: Phoenix v2 (Husser et al., 2013), BtSettl (Allard et al., 2012), Castelli & Kurucz (2004), and Kurucz (1993). The final stellar radius was found to be
$1.010\pm 0.015$ $R_{\odot}$, and the stellar mass $0.895^{+0.042}_{-0.023}$
$M_{\odot}$ interpolated from the MIST (Choi et al., 2016) isochrones. The
stellar parameters are summarised in Table 3.
#### 2.3.6 Stellar modelling using BASTA
As an independent measure for the stellar parameters we also modelled the star
using the BAyesian STellar
Algorithm (BASTA; https://basta.readthedocs.io/en/latest/index.html; Silva
Aguirre et al., 2015; Aguirre Børsen-Koch et al., 2022). We ran BASTA using
the spectroscopic parameters from the SPC analysis ($T_{\rm eff}$, [Fe/H],
$\log g$) as input along with the Gaia magnitudes ($G$, $B_{\rm P}$, $R_{\rm
P}$) and parallax. BASTA’s approach to fitting the magnitudes and parallax is
described in Section 4.2.2 in Aguirre Børsen-Koch et al. (2022), where
bolometric corrections are applied using the tables by Hidalgo et al. (2018),
and the reddening is calculated through the dust map by Green et al. (2019).
BASTA uses these values as constraints when fitting to a grid of BaSTI (a Bag
of Stellar Tracks and Isochrones; Hidalgo et al., 2018) isochrones, where we
opted for a science case that included diffusion, convective core
overshooting, and mass loss (see Section 3.1 in Aguirre Børsen-Koch et al.,
2022). The resulting values are tabulated in Table 3 and are generally
consistent with the other parameters, although BASTA found a slightly smaller
stellar radius as the fit seemed to prefer a slightly larger value for $\log
g$ compared to the SME and SED fits.
In the following we will be using stellar parameters coming from the SED.
Therefore, derived quantities such as the planetary radius and masses will be
calculated from the SED parameters.
Table 3: Stellar parameters for TOI-1288. The stellar parameters from our
spectral analyses and stellar modelling in Section 2.3, Section 2.3.5, and
Section 2.3.6. We also list the Gaia measurements.
Parameter | Name | SME | SED | Spec-Match | SPC+BASTAe | Gaia DR3
---|---|---|---|---|---|---
$T_{\mathrm{eff}}$ | Effective temperature (K) | $5123\pm 62$ | 5225${}^{+23}_{-27}$ | $5220\pm 110$ | 5367$\pm$ 50 | $5300^{+20}_{-22}$
$\log g$ | Surface gravity | $4.23\pm 0.09$ | $4.24\pm 0.09$ | $4.36\pm 0.12$ | $4.36\pm 0.10$ | $4.447^{+0.010}_{-0.006}$
$\rm[Fe/H]$ | Iron abundance | $0.10\pm 0.11$ | $0.07\pm 0.09$ | $0.30\pm 0.09$ | $0.18\pm 0.08$ | $0.15^{+0.02}_{-0.03}$
$\rm[Ca/H]$ | Calcium abundance | $0.15\pm 0.09$ | - | - | - | -
$\rm[Na/H]$ | Sodium abundance | $0.25\pm 0.12$ | - | - | - | -
$v\sin i_{\star}$ | Projected rotation velocity (km s-1) | $1.3\pm 1.2$ | - | - | <2 | -
$\zeta$ | Macro-turbulence (km s-1) | 3.0a | - | - | - | -
$\xi$ | Micro-turbulence (km s-1) | 0.83b | - | - | - | -
$d$ | Distance (pc) | - | 114.7 $\pm$ 0.7 | - | $112.8^{+1.6}_{-1.4}$ | $114.677\pm 0.013$
$R_{\star}$ | Stellar radius (R⊙) | - | 1.010${}^{+0.015}_{-0.014}$ | $1.09\pm 0.18$ | $0.95_{-0.02}^{+0.03}$ | -
$M_{\star}$c | Stellar mass (M⊙) | - | 0.89${}^{+0.04}_{-0.02}$ | 0.90 $\pm$ 0.08 | $0.91^{+0.04}_{-0.05}$ | -
$M_{\star}$d | Stellar mass (M⊙) | - | 0.65${}^{+0.14}_{-0.13}$ | - | - | -
$L_{\star}$ | Luminosity (L⊙) | - | $0.68\pm 0.02$ | - | $0.65\pm 0.03$ | -
$A_{V}$ | $V$ band extinction | - | 0.014${}^{+0.015}_{-0.009}$ | - | - | -
$\tau_{\star}$ | Age (Gyr) | - | 12.1${}^{+1.4}_{-3.1}$ | $10.05\pm 0.17$ | $9.8^{+4.7}_{-3.8}$ | -
* a
Relation from Doyle et al. (2014).
* b
Relation from Bruntt et al. (2010).
* c
SED estimate is from MIST isochrones.
* d
SED estimate is from $\log g$ and $R_{\star}$.
* e
$T_{\rm eff}$, $\log g$, [Fe/H], and $v\sin i_{\star}$ are from SPC. The rest
have been derived using BASTA.
## 3 Analysis
In our modelling we included both planets; only the parameters for planet b are constrained by the photometry, given that we have not detected any transits of
planet c. We modelled the transits using batman (Kreidberg, 2015), where we accounted for the
correlated noise in the light curve using Gaussian Process (GP) regression as
implemented in celerite (Foreman-Mackey et al., 2017). We made use of the
Matérn-3/2 kernel, which is characterised by two hyperparameters: the
amplitude, $A$, and the time scale, $\tau$. This model is shown at the bottom
of Fig. 2.
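As a sketch of what this GP model amounts to, the following numpy code evaluates the Matérn-3/2 kernel and the Gaussian-process log-likelihood directly (celerite computes the same quantity in O(N) rather than the O(N^3) used here); the amplitude, timescale, and data values below are hypothetical placeholders, not the fitted hyperparameters:

```python
import numpy as np

def matern32(t1, t2, amp, tau):
    """Matern-3/2 covariance: k(r) = amp^2 (1 + sqrt(3) r/tau) exp(-sqrt(3) r/tau)."""
    r = np.abs(t1[:, None] - t2[None, :])
    arg = np.sqrt(3.0) * r / tau
    return amp**2 * (1.0 + arg) * np.exp(-arg)

def gp_loglike(t, y, yerr, amp, tau):
    """Gaussian-process log-likelihood, evaluated directly in O(N^3)."""
    K = matern32(t, t, amp, tau) + np.diag(yerr**2)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * t.size * np.log(2.0 * np.pi))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 27.0, 300)        # days, roughly one TESS sector
y = 1e-4 * rng.normal(size=t.size)     # placeholder flux residuals
print(gp_loglike(t, y, np.full(t.size, 1e-4), amp=5e-4, tau=1.0))
```

In a fit, the two hyperparameters would be stepped in alongside the transit parameters and this term added to the total log-likelihood.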
In addition to the RV signals from planet b and c, we included a first-order
acceleration parameter, $\dot{\gamma}$, to account for the long-term trend.
Instead of stepping in $e$ and $\omega$, our Markov Chain Monte Carlo (MCMC) sampling stepped in $\sqrt{e}\cos\omega$ and $\sqrt{e}\sin\omega$ for both planets. Furthermore, we stepped in the sum of the limb-darkening coefficients, $q_{1}+q_{2}$, while keeping the difference fixed. All stepping
parameters and their priors are listed in Table 4.
As seen in Fig. 1 (see also Fig. 16 for a DSS2 image of the field) there are
multiple companions in the TESS aperture mask. Therefore we added a dilution
term in the MCMC, where we only included the contribution from sources
brighter than $\Delta G=5$, meaning that only the contribution from the south-
eastern companion at a separation of $\sim$23 arcsec (Fig. 1) was included. We
thus did not consider the contribution from the (much closer) companions. The
brighter of the two is found at $\Delta$Br-$\gamma=4.77\pm 0.03$ and from our
measurements in Table 2 it is clear that both companions seem to be redder
than TOI-1288, meaning that the differences in magnitude are even larger in
the TESS passband.
The total flux as a function of time is thus
$F(t)=(F_{1}(t)+F_{2})/(F_{1}+F_{2})$, where $F_{1}(t)$ is the in-transit flux
and $F_{1}$ is the out-of-transit flux for TOI-1288, and $F_{2}$ is the flux
from the contaminating source at $\sim$23 arcsec. The flux from the
contaminating source is then included as a fraction of TOI-1288,
$F_{2}/F_{1}$, which in magnitude translates to
$\Delta\mathrm{mag}=-2.5\log(F_{2}/F_{1})$. As the TESS passband is very close
to the Gaia $R_{\mathrm{P}}$ passband, we used the difference in this passband
as a proxy for the difference between TOI-1288 and the 23 arcsec neighbour in
the TESS passband. Thus we sampled the dilution as a Gaussian prior with
$\Delta R_{\mathrm{P}}=\Delta\mathrm{TESS}=4.41\pm 0.02$. The photometric
apertures from the ground-based facilities are small enough (Table 6) so that
this source does not contaminate those light curves. As such no dilution
factors were included for these.
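The dilution bookkeeping amounts to converting the magnitude difference into a flux ratio; a minimal sketch, where the depth value is illustrative, taken as $(R_{\rm p}/R_\star)^2$ from Table 4:

```python
def dilution_fraction(delta_mag):
    """Contaminant-to-target flux ratio F2/F1 from a magnitude difference."""
    return 10.0 ** (-0.4 * delta_mag)

def undiluted_depth(observed_depth, delta_mag):
    """Correct a transit depth for light from a blended neighbour."""
    return observed_depth * (1.0 + dilution_fraction(delta_mag))

f = dilution_fraction(4.41)  # the ~23 arcsec neighbour, Delta R_P = 4.41
print(f)                     # ~0.017, i.e. a ~1.7% flux contribution
depth = 0.0476**2            # illustrative: (Rp/Rstar)^2 from Table 4
print(undiluted_depth(depth, 4.41))
```

At this contrast the correction to the depth is below one per cent, consistent with treating the much fainter close companions as negligible.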
We sampled the posteriors for the transit and orbital parameters using MCMC
sampling utilising the emcee package (Foreman-Mackey et al., 2013). Our
likelihood function is defined as
$\log\mathcal{L}=-0.5\sum_{i=1}^{N}\left[\frac{(O_{i}-C_{i})^{2}}{\sigma_{i}^{2}}+\log
2\pi\sigma_{i}^{2}\right]\,,$ (1)
where $N$ indicates the total number of data points from photometry and RVs.
$C_{i}$ represents the model corresponding to the observed data point $O_{i}$.
$\sigma_{i}$ represents the uncertainty for the $i$th datum, where we add a
jitter term in quadrature and a penalty in the likelihood for the RVs. To our
likelihood in Eq. (1) we add our priors $\sum_{j=1}^{M}\log\mathcal{P}_{j}$,
$\mathcal{P}_{j}$ being the prior on the $j$th parameter, and this sum
constitutes the total probability.
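Equation (1) with the jitter term added in quadrature can be written compactly; the data below are synthetic, with the HARPS-N jitter value from Table 4 reused purely for illustration:

```python
import numpy as np

def log_likelihood(obs, model, err, jitter):
    """Eq. (1) with a jitter term added in quadrature; the log(2 pi sigma^2)
    term is the penalty that keeps the sampler from inflating the jitter."""
    var = err**2 + jitter**2
    return -0.5 * np.sum((obs - model) ** 2 / var + np.log(2.0 * np.pi * var))

rng = np.random.default_rng(4)
model = np.zeros(57)                           # 57 points, like the HARPS-N set
err = np.full(57, 1.0)                         # quoted errors (illustrative)
obs = rng.normal(0.0, np.hypot(1.0, 1.9), 57)  # 1.9 m/s of unmodelled scatter
# The likelihood prefers a jitter near the true extra scatter over none at all
print(log_likelihood(obs, model, err, 1.9) > log_likelihood(obs, model, err, 0.0))  # True
```

Without the penalty term, the likelihood would increase monotonically with jitter; with it, the fitted jitter settles at the level of the unmodelled scatter.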
Table 4: MCMC results. The median and high posterior density at a confidence
level of 0.68. Subscripts b and c denote parameters for planet b and c,
respectively. $\mathcal{U}$ denotes that a uniform prior was applied during
the run.
Parameter | Name | Prior | Value
---|---|---|---
Stepping parameters
$P_{\mathrm{b}}$ | Period (days) | $\mathcal{U}$ | $2.699835^{+0.000004}_{-0.000003}$
$T_{\mathrm{0,b}}$ | Mid-transit time (BTJD) | $\mathcal{U}$ | $1712.3587\pm 0.0002$
$(R_{\mathrm{p}}/R_{\star})_{\mathrm{b}}$ | Planet-to-star radius ratio | $\mathcal{U}$ | $0.0476\pm 0.0005$
$(a/R_{\star})_{\mathrm{b}}$ | Semi-major axis to star radius ratio | $\mathcal{U}$ | $8.5\pm 0.4$
$K_{\mathrm{b}}$ | Velocity semi-amplitude (m s-1) | $\mathcal{U}$ | $20.7^{+0.4}_{-0.5}$
$\cos i_{\mathrm{b}}$ | Cosine of inclination | $\mathcal{U}$ | $0.030^{+0.012}_{-0.030}$
$(\sqrt{e}\cos\omega)_{\mathrm{b}}$ | | $\mathcal{U}$ | $-0.19^{+0.03}_{-0.04}$
$(\sqrt{e}\sin\omega)_{\mathrm{b}}$ | | $\mathcal{U}$ | $0.16^{+0.07}_{-0.06}$
$P_{\mathrm{c}}$ | Period (days) | $\mathcal{U}$ | $443^{+11}_{-13}$
$T_{\mathrm{0,c}}$ | Mid-transit time (BTJD) | $\mathcal{U}$ | $1883^{+12}_{-14}$
$K_{\mathrm{c}}$ | Velocity semi-amplitude (m s-1) | $\mathcal{U}$ | $7.6^{+0.5}_{-0.6}$
$(\sqrt{e}\cos\omega)_{\mathrm{c}}$ | | $\mathcal{U}$ | $0.15^{+0.19}_{-0.15}$
$(\sqrt{e}\sin\omega)_{\mathrm{c}}$ | | $\mathcal{U}$ | $0.28^{+0.14}_{-0.13}$
$\gamma_{\mathrm{HARPS-N}}$ | Systemic velocity HARPS-N (m s-1) | $\mathcal{U}$ | $7.7^{+0.8}_{-0.7}$
$\sigma_{\mathrm{HARPS-N}}$ | Jitter HARPS-N (m s-1) | $\mathcal{U}$ | $1.9\pm 0.3$
$\gamma_{\mathrm{HIRES}}$ | Systemic velocity HIRES (m s-1) | $\mathcal{U}$ | $6.2^{+0.9}_{-1.0}$
$\sigma_{\mathrm{HIRES}}$ | Jitter HIRES (m s-1) | $\mathcal{U}$ | $3.4\pm 0.6$
$\dot{\gamma}$ | Linear trend (m s-1 d-1) | $\mathcal{U}$ | $-0.0088\pm 0.0017$
Derived parameters
$e_{\mathrm{b}}$ | Eccentricity | - | $0.064^{+0.014}_{-0.015}$
$\omega_{\mathrm{b}}$ | Argument of periastron (∘) | - | $139^{+13}_{-17}$
$i_{\mathrm{b}}$ | Inclination (∘) | - | $88.3^{+1.7}_{-0.7}$
$b_{\mathrm{b}}$ | Impact parameter | - | $0.26^{+0.10}_{-0.24}$
$e_{\mathrm{c}}$ | Eccentricity | - | $0.13^{+0.07}_{-0.09}$
$\omega_{\mathrm{c}}$ | Argument of periastron (∘) | - | $63^{+30}_{-33}$
$T_{\mathrm{14,b}}$ | Transit duration (hours) | - | $2.37_{-0.03}^{+0.05}$
Physical parameters †
$T_{\mathrm{eq,b}}$$\chi$ | Equilibrium temperature (K) | - | $1266\pm 27$
$R_{\mathrm{p,b}}$ | Planet radius (R⊕) | - | $5.24\pm 0.09$
$M_{\mathrm{p,b}}$ | Planet mass (M⊕) | - | $42\pm 3$
$\rho_{\mathrm{p,b}}$ | Planet density (g cm-3) | - | $1.3\pm 0.5$
$(M_{\mathrm{p}}\sin i)_{\mathrm{c}}$ | Lower value for planet mass (M⊕) | - | $84\pm 7$
* *
Barycentric TESS Julian Date (BTJD) is defined as BJD-2457000.0, BJD being the
Barycentric Julian Date
* †
From the SED stellar parameters in Table 3.
* $\chi$
Following Kempton et al. (2018).
## 4 Results
In Fig. 9 we show the TESS light curve phase-folded on the transits of planet
b along with the best-fitting model. Light curves from all photometric
observations can be found in Fig. 14. We find a planet-to-star radius ratio of
$0.0476\pm 0.0005$, which given the stellar radius from the SED analysis in
Table 3 yields a radius of $5.24\pm 0.09$ R⊕. With a period of just
$2.699835^{+0.000004}_{-0.000003}$ d, TOI-1288 b is thus a hot super-Neptune.
Figure 9: TESS light curve of TOI-1288 b. The GP detrended TESS data from Fig.
2 showing the phase-folded transits of planet b. We show the binned data as larger, solid points, while the unbinned data are shown as smaller, more transparent points. The datum with an error bar is not an actual measurement, but illustrates the median of the uncertainties of all data. The grey line is the
best-fitting model.
Shown in Fig. 6 are the best-fitting models for the radial velocities for both
planet b and c. This 2-planet model is heavily favoured over a 1-planet model
according to the Bayesian information criterion (BIC, $\Delta$BIC$=104$). To
get a measure of the mass for both planets we use the relation
$\frac{M_{\rm p}\sin i}{\mathrm{M_{J}}}=\frac{K\sqrt{1-e^{2}}}{28.4~{}\mathrm{m}~{}\mathrm{s}^{-1}}\left(\frac{P}{1~{}\mathrm{yr}}\right)^{1/3}\left(\frac{M_{\star}}{\mathrm{M}_{\odot}}\right)^{2/3}\,,$ (2)
where we can only get a lower limit for the mass of planet c as we do not know
the inclination. For planet b we find a mass of $42\pm 3$ M⊕, which combined
with the radius yields a bulk density of $1.3\pm 0.5$ g cm-3. For planet c we
find a lower limit for the mass of $84\pm 7$ M⊕.
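With $K$ in m s$^{-1}$, the constant 28.4 m s$^{-1}$ in Eq. (2) returns the mass in Jupiter masses; plugging in the Table 4 medians and the SED stellar mass reproduces the quoted masses:

```python
import numpy as np

M_EARTH_PER_M_JUP = 317.8

def msini_mjup(K, e, period_days, mstar_msun):
    """Eq. (2): minimum mass in Jupiter masses, with K in m/s."""
    return (K * np.sqrt(1.0 - e**2) / 28.4
            * (period_days / 365.25) ** (1.0 / 3.0)
            * mstar_msun ** (2.0 / 3.0))

# Posterior medians from Table 4 and the SED stellar mass from Table 3
m_b = msini_mjup(20.7, 0.064, 2.699835, 0.89) * M_EARTH_PER_M_JUP
m_c = msini_mjup(7.6, 0.13, 443.0, 0.89) * M_EARTH_PER_M_JUP
print(round(m_b), round(m_c))  # ~42 and ~83 Earth masses
```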
For the long-term trend we have found a value of $\dot{\gamma}=-0.0088\pm 0.0017$ m s-1 d-1 (Table 4). This first-order acceleration parameter constitutes a lower limit for the semi-amplitude through $(t_{\rm f}-t_{\rm i})\times|\dot{\gamma}|/2$ with $t_{\rm f}$ and $t_{\rm i}$ being the final and
first timestamps. Following the Monte Carlo approach in Kane et al. (2019)
(see also Pepper et al., 2020), we used our measured value for $\dot{\gamma}$
to calculate the lower limit for the companion inducing this long-term trend
as a function of orbital separation. Namely, we solved
$K\leq\sqrt{\frac{G}{a_{\rm B}(1-e_{\rm B}^{2})}}\frac{M_{\rm B}\sin i_{\rm
B}}{\sqrt{M_{\rm B}+M_{\star}}}$ (3)
for $M_{\rm B}$ at each $a_{\rm B}$ with $e_{\rm B}$ being drawn from a
$\beta$-distribution and $\cos i_{\rm B}$ from a uniform distribution.
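A sketch of this Monte Carlo calculation follows, assuming a Kipping (2013)-style beta prior for $e_{\rm B}$ and an approximate $\sim$916 d RV baseline (neither is specified above, so both are labelled assumptions):

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11                  # m^3 kg^-1 s^-2
M_SUN, AU = 1.989e30, 1.496e11

def k_rhs(m_b, a, e, sini, mstar):
    """Right-hand side of Eq. (3) in m/s, for a companion mass m_b in kg."""
    return np.sqrt(G / (a * (1.0 - e**2))) * m_b * sini / np.sqrt(m_b + mstar)

def min_companion_mass(k_min, a_au, mstar_msun, rng, ndraw=2000):
    """Median Monte Carlo lower-limit mass (in Msun) at separation a_au."""
    a, mstar = a_au * AU, mstar_msun * M_SUN
    ecc = rng.beta(0.867, 3.03, ndraw)   # Kipping (2013)-style prior (assumed)
    cosi = rng.uniform(0.0, 1.0, ndraw)  # isotropic orbit orientations
    sini = np.sqrt(1.0 - cosi**2)
    masses = [brentq(lambda m: k_rhs(m, a, e, s, mstar) - k_min, 1e20, 1e33)
              for e, s in zip(ecc, sini)]
    return np.median(masses) / M_SUN

rng = np.random.default_rng(5)
k_min = 916 * 0.0088 / 2.0     # ~4 m/s over an assumed ~916 d RV baseline
for a_au in (10.0, 50.0, 140.0):  # 140 au ~ 1.2 arcsec at ~115 pc
    print(a_au, min_companion_mass(k_min, a_au, 0.89, rng))
```

At the projected separation of star 1, this yields a median lower limit of a few Jupiter masses, i.e. roughly two orders of magnitude below the $\sim$0.2 M⊙ estimated for that star, consistent with the argument below.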
In Fig. 10 we show the resulting distributions for each orbital separation,
here converted to a sky-projected separation. We furthermore show the observed
position of the brightest of the two companions, star 1, detected in speckle
and AO, if it were bound to TOI-1288. From our analysis in Section 2.2.5 and
its position in the CMD in Fig. 15 this companion would most likely have been
an M-dwarf with a mass of around $0.2$ M⊙. While our Monte Carlo result is only a lower limit on the mass, and could in principle be consistent with the mass we have estimated for star 1, the median at the separation of star 1 is around two orders of magnitude lower. In most cases we should thus have detected a much more significant drift, if it
were due to star 1. Therefore, it seems more likely that the drift we are
seeing is coming from another planetary companion.
Finally, we note that the Gaia Renormalised Unit Weight Error (RUWE) statistic
for TOI-1288 is 1.17. For a good single-star fit one would expect it to be
around 1, whereas a value of $\gtrapprox 1.4$ could suggest that the source is
non-single or otherwise problematic for the astrometric solution. The slight
departure could be because the Gaia astrometry is seeing the orbital motion
induced by this long-term RV companion.
Figure 10: Lower mass limit for an additional bound companion. The violin plot shows the resulting distribution of the companion mass at a given
separation (here converted to sky-projected separation using the distance from
Table 1). The solid black curve is the median mass for each separation. The
vertical red band spans the range of the speckle and AO measurements for the
separation of star 1 (Table 2), while the dashed vertical line is the median
of these. The horizontal blue band spans the range from 0.1 M⊙ to 0.3 M⊙ with
0.2 M⊙ shown with the dashed line.
## 5 Discussion
### 5.1 Location in the Neptunian desert
We have found TOI-1288 b to be a hot super-Neptune with an equilibrium
temperature of $1266\pm 27$ K (estimated from Kempton et al., 2018, assuming
zero albedo and full day-night heat redistribution). In Fig. 11 we plot the
radius (left, in Earth radii) and mass (right, in Jupiter masses) of TOI-1288
b as functions of orbital period. Evidently, TOI-1288 b falls right in the hot
Neptunian desert reported by Mazeh et al. (2016). Mazeh et al. (2016) mention
two processes that could account for the upper boundary. Firstly, if the
planet had migrated through the disk and then stopped near the upper boundary of the desert as the disk density decreases inwards, the inner radius of the disk might be related to the disk mass, and consequently to the planetary mass. Therefore, the central hole in the disk would be smaller in a
more massive disk, and hence allow for a more massive planet. Alternatively,
a planet moving horizontally in the diagram, i.e., migrating, might be stripped of its atmosphere by the stellar irradiation, resulting in a smaller, lower-mass planet.
In Vissapragada et al. (2022) the upper boundary of the desert was
investigated by looking at the metastable helium feature in the atmospheres of
the planets, which could be a tracer for any outflows. They found that this
upper boundary is stable against photoevaporation, meaning that a different
mechanism must be responsible for tracing out this upper edge. This is in line
with the findings of Owen & Lai (2018) in which they argue that if
photoevaporation is responsible for the upper boundary, we should see a lot of
sub-Jovian mass planets in the mass-period plane at very short periods, which
we do not. Rather, they argue that the upper boundary is caused by high-
eccentricity migration.
On the other hand, Owen & Lai (2018) do find that the lower boundary could be
explained by photoevaporation. This photoevaporation which leaves behind a
rocky core has furthermore been used to explain the dearth of hot super-Earths
(e.g., Sanchis-Ojeda et al., 2014; Lundkvist et al., 2016). An alternative
explanation for the lower boundary of the desert could be that as the
separation increases, so does the Hill sphere of the planetesimal, the orbital
path, and the dust-to-gas ratio, meaning that the core mass is increased
towards the end of the first stage of formation. This would then result in
more massive planets at larger separations (Mazeh et al., 2016).
Figure 11: The hot Neptunian desert reported in Mazeh et al. (2016) shown as
dashed lines. Planets (as of September 2022) from the TEPCat catalogue of
"well-studied transiting planets" (Southworth, 2011,
https://www.astro.keele.ac.uk/jkt/tepcat/allplanets-noerr.html) with
uncertainties smaller than 30% in radius (left) and mass (right). The points
are colour coded according to the incident flux, which is truncated at $F=1$
F⊕. TOI-1288 b is shown as the large square with a red outline. The large
circles denote the closest eight planets to TOI-1288 b in the radius-period
parameter space, with their position highlighted in the mass-radius diagram as
well.
What is clear from Fig. 11 is that the upper boundary is much better defined than the lower boundary. However, even if the lower boundary were at larger radii, TOI-1288 b would still lie in a very deserted area. In Fig. 11
we have highlighted eight planets that are the closest to TOI-1288 b in the
radius-period (distance here measured in units of
$(\mathrm{R}_{\oplus}^{2}+\mathrm{days}^{2})^{1/2}$) plane; Kepler-101 b
(Bonomo et al., 2014), HATS-7 b (Bakos et al., 2015), TOI-532 b (Kanodia et
al., 2021), TOI-674 b (Murgas et al., 2021), TOI-1728 b (Kanodia et al.,
2020), NGTS-14Ab (Smith et al., 2021), WASP-156 b (Demangeon et al., 2018),
and K2-55 b (Crossfield et al., 2016). Some key parameters (Southworth, 2011,
from https://www.astro.keele.ac.uk/jkt/tepcat/allplanets-noerr.html) for these
systems are summarised in Table 5 along with our parameters for TOI-1288 b.
Obviously, the planets are similar in terms of period and radius, but they
also have quite similar masses, and thus densities. The most striking
difference in Fig. 11 is the insolation, which is dictated by the spectral
type ($T_{\mathrm{eff}}$) of the stellar host. In this context it is worth
noting that the overabundance of highly irradiated large planets in Fig. 11, relative to smaller radii, merely reflects that it is easier to detect a larger planet around a larger (hotter) star. This is apparent from Fig. 17 and
also what is seen in Szabó & Kálmán (2019).
A clustering of Neptune-sized planets with equilibrium temperatures of around
2000 K has been reported in Persson et al. (2022), which raises the question
of whether there could be an island of stability in the desert. However, this
might also be a selection effect, and more planets in this parameter space are
needed to establish it. It is an intriguing idea, and if an island of
stability exists for these slightly smaller planets on more irradiated
orbits, a similar island may exist for TOI-1288 b and its neighbours, which
are slightly bigger and less irradiated. It could also be a strip of pseudo-
stability in the desert, or it might just reflect the aforementioned less
well-defined lower boundary of the desert.
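The radius-period metric used above can be checked directly: taking the $(P, R_{\rm p})$ values quoted in Table 5 and treating them as Cartesian coordinates, as the metric implies, a short sketch recovers the neighbour ranking.

```python
import math

# (name, period [d], radius [R_Earth]) for TOI-1288 b and its neighbours, from Table 5
target = ("TOI-1288 b", 2.6998, 5.6)
neighbours = [
    ("TOI-532 b", 2.327, 5.8),
    ("TOI-674 b", 1.977, 5.3),
    ("Kepler-101 b", 3.488, 5.8),
    ("HATS-7 b", 3.185, 6.3),
    ("TOI-1728 b", 3.492, 5.1),
    ("NGTS-14Ab", 3.536, 5.0),
    ("WASP-156 b", 3.836, 5.7),
    ("K2-55 b", 2.849, 4.4),
]

def distance(planet):
    """Euclidean distance to TOI-1288 b in the (period, radius) plane,
    in units of (R_Earth^2 + days^2)^(1/2)."""
    _, p, r = planet
    return math.hypot(p - target[1], r - target[2])

for name, p, r in sorted(neighbours, key=distance):
    print(f"{name:14s} d = {distance((name, p, r)):.2f}")
```

Sorting by this metric reproduces the order in which the eight planets are listed in Table 5, with TOI-532 b the closest neighbour.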
Table 5: Closest radius-period neighbours. The eight planets closest to
TOI-1288 b in terms of radius and period (with distance in units of
$(\mathrm{R}_{\oplus}^{2}+\mathrm{days}^{2})^{1/2}$).
| $P$ | $F$† | $R_{\rm p}$ | $M_{\rm p}$ | $\rho_{\rm p}$ | SpT
---|---|---|---|---|---|---
| (d) | (F⊕) | (R⊕) | (M⊕) | ($\rho_{\oplus}$) |
TOI-1288 b | 2.6998 | 630 | 5.6 | 41 | 0.24 | G
TOI-532 b | 2.327 | 119 | 5.8 | 61 | 0.31 | M
TOI-674 b | 1.977 | 57 | 5.3 | 24 | 0.17 | M
Kepler-101 b | 3.488 | 1260 | 5.8 | 51 | 0.26 | G
HATS-7 b | 3.185 | 288 | 6.3 | 38 | 0.15 | K
TOI-1728 b | 3.492 | 72 | 5.1 | 27 | 0.21 | M
NGTS-14Ab | 3.536 | 240 | 5.0 | 29 | 0.25 | K
WASP-156 b | 3.836 | 186 | 5.7 | 41 | 0.24 | K
K2-55 b | 2.849 | 130 | 4.4 | 44 | 0.5 | K
* † From
$(\rho_{\star}/\rho_{\odot})^{-2/3}(P/1~{}\mathrm{yr})^{-4/3}(T_{\mathrm{eff}}/5777~{}\mathrm{K})^{4}$.
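The footnote expression for the incident flux can be sketched as follows; the stellar density and effective temperature used here are placeholder values for a G dwarf (the actual values for TOI-1288 are in Table 3, which is not reproduced in this excerpt).

```python
def incident_flux(rho_star, period_days, teff):
    """Incident flux in Earth units:
    F = (rho_star/rho_sun)^(-2/3) * (P / 1 yr)^(-4/3) * (Teff / 5777 K)^4."""
    return (rho_star ** (-2.0 / 3.0)
            * (period_days / 365.25) ** (-4.0 / 3.0)
            * (teff / 5777.0) ** 4)

# Placeholder stellar parameters for a G dwarf (assumed, not from the paper)
f = incident_flux(rho_star=1.1, period_days=2.6998, teff=5700.0)
print(f"F ~ {f:.0f} F_Earth")  # of order the tabulated 630 F_Earth
```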
### 5.2 Internal structure and atmosphere
In the mass-radius diagram in Fig. 12 we compare TOI-1288 b to models with
different compositions. The models are taken from Zeng et al. (2016, 2019).
Evidently, TOI-1288 b can be described as a rocky core with a gaseous envelope
at high irradiation. Probing the atmosphere of the planet through transmission
spectroscopy could naturally help reveal atmospheric features, but can also
provide valuable constraints on the internal structure.
Figure 12: Mass-radius diagram. Planets from the same catalogue as in Fig. 11,
but now for planets with uncertainties on both mass and radius of less (more)
than 30% shown as black (grey) dots. Solid lines are composition models from
Zeng et al. (2016, 2019). TOI-1288 b is again shown as the large (coloured)
square with the similar ($R_{\mathrm{p}},P$) planets shown with large circles.
We observed a transit of planet b on the night of 2020 June 11 using the HARPS-N
spectrograph. The RVs from this night can be seen around orbital phase 0.0 in
the lower left panel of Fig. 6, but due to the slow rotation of the star
($v\sin i_{\star}=1.3\pm 1.2$ km s-1) we do not see the Rossiter-McLaughlin
(RM; Rossiter, 1924; McLaughlin, 1924) effect. For an aligned configuration a
decent approximation for the amplitude is given by
$0.7\sqrt{1-b^{2}}(R_{\rm p}/R_{\star})^{2}v\sin i_{\star}$, which comes out
to just shy of 2 m s-1 for TOI-1288 b.
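As a sketch, the amplitude estimate can be evaluated with assumed transit parameters; the impact parameter and radius ratio below are placeholders standing in for the Table 4 values, which are not reproduced in this excerpt.

```python
import math

def rm_amplitude(b, radius_ratio, vsini_ms):
    """Approximate Rossiter-McLaughlin amplitude for an aligned orbit:
    0.7 * sqrt(1 - b^2) * (Rp/Rstar)^2 * v sin i_star."""
    return 0.7 * math.sqrt(1.0 - b ** 2) * radius_ratio ** 2 * vsini_ms

# Placeholder transit geometry (assumed values, not the paper's Table 4)
amp = rm_amplitude(b=0.4, radius_ratio=0.047, vsini_ms=1300.0)
print(f"RM amplitude ~ {amp:.1f} m/s")  # just shy of 2 m/s
```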
We nonetheless ran an MCMC where we included the RM effect (using the code by
Hirano et al., 2011). We excluded the photometry and instead applied Gaussian
priors to $P_{\rm b}$, $T_{\rm 0,b}$, $(R_{\rm p}/R_{\star})_{\rm b}$,
$(a/R_{\star})_{\rm b}$, and $i_{\rm b}$ using the values in Table 4 and for
$v\sin i_{\star}$ from the SME value in Table 3. We also applied Gaussian
priors to the macro- and micro-turbulence as well as the sum of the limb-
darkening coefficients (values estimated from Doyle et al., 2014; Bruntt et
al., 2010; Claret & Bloemen, 2011, respectively), while applying a uniform
prior to the sky-projected obliquity, $\lambda_{\rm b}$. The rest followed the
same approach as the run in Section 3. The resulting value for the projected
obliquity was $\lambda_{\rm b}=70^{+110}_{-100}$∘, meaning that it is
unconstrained.
Following Kempton et al. (2018) we can calculate the transmission
spectroscopic metric (TSM) to assess the feasibility of transmission
spectroscopy for TOI-1288 b. The TSM is given by
$\mathrm{TSM}=H\times\frac{R_{\rm p}^{3}T_{\rm eq}}{M_{\rm p}R_{\star}^{2}}\times 10^{-m_{J}/5}\,,$ (4)
where $m_{J}$ is the apparent magnitude of the host in the $J$ band and $H$ is
a scale factor related to the size of the planet. For TOI-1288 b $H$ is 1.15,
while the planet, stellar, and system parameters are listed in Table 4, Table
3 (SED), and Table 1, respectively. This yields a TSM of $\sim 87$, which is
just below the suggested cutoff for follow-up efforts in Kempton et al.
(2018).
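Eq. (4) is simple to evaluate; in the sketch below the stellar radius and $J$ magnitude are assumed placeholder values (the actual values appear in Tables 3 and 1, not reproduced here), so the result only roughly reproduces the quoted TSM of $\sim 87$.

```python
def tsm(scale_h, rp_re, teq_k, mp_me, rstar_rsun, j_mag):
    """Transmission spectroscopy metric of Kempton et al. (2018), Eq. (4):
    TSM = H * Rp^3 * Teq / (Mp * Rstar^2) * 10^(-mJ/5),
    with Rp in Earth radii, Mp in Earth masses, Rstar in solar radii."""
    return scale_h * rp_re ** 3 * teq_k / (mp_me * rstar_rsun ** 2) * 10 ** (-j_mag / 5.0)

# Rp, Teq, Mp, and H = 1.15 are from the text; Rstar and mJ are assumed placeholders
value = tsm(scale_h=1.15, rp_re=5.24, teq_k=1266.0, mp_me=42.0,
            rstar_rsun=1.1, j_mag=8.4)
print(f"TSM ~ {value:.0f}")
```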
Figure 13: Simulated JWST observations. A simulated transmission spectrum of
TOI-1288 b in grey using petitRADTRANS. The coloured error bars are simulated
JWST data from PandExo of different instruments with their wavelength coverage
shown by the horizontal coloured lines and with the names of the instrument
shown above. We also show the TESS transmission curve in purple. Some atomic
and molecular species are highlighted in the coloured areas with K, H2O, CH4,
and CO2 shown with yellow, blue, red, and grey, respectively.
While – according to this metric – TOI-1288 b is not a high priority target
for JWST (Gardner et al., 2006), we still investigate what JWST might be able
to detect if it were to do transmission spectroscopy. We simulated the
spectrum of TOI-1288 b using petitRADTRANS (Mollière et al., 2019, 2020)
assuming a cloud-free, isothermal model at 1266 K. We used PandExo (Batalha et
al., 2017) to simulate the JWST data for four different instruments. For each
we assumed a total of 4 transits with a 4 hr baseline each. The resulting
spectrum is shown in Fig. 13. For this most likely quite optimistic scenario,
JWST should be able to detect several molecular species, such as H2O, CH4, and
CO2 (if present).
### 5.3 Outer companions
According to the Web TESS Viewing Tool
(https://heasarc.gsfc.nasa.gov/cgi-bin/tess/webtess/wtv.py), TOI-1288 is (at
the time of writing) being re-observed in Sectors 56-58 (beginning in
September 2022 and ending in November
2022). These additional sectors should help refine the transit parameters of
planet b. While our current estimate for the period and ephemeris of planet c
suggest a transit occurred (of course, depending on the inclination) July
2022, the uncertainties are rather large, so it is worthwhile to be on the
look out for a potential transit of planet c.
Zhu & Wu (2018) and Bryan et al. (2019) found an excess of cold Jupiters in
systems harbouring super-Earths/sub-Neptunes, with the former stating that
stars with super-Earths have roughly three times the cold Jupiter fraction of
field stars. Furthermore, they found that this cold Jupiter
fraction rises to about 60% for stars with $\rm[Fe/H]>0.1$. Given the
metallicity we find for TOI-1288 of $0.07\pm 0.09$ from the SED (median from
all measurements in Table 3 is 0.15), it is perhaps not too surprising that we
are seeing a cold gas giant in this system. This strong correlation between
super-Earths and cold Jupiters suggests that they are not competing for the
same solid material, which Zhu & Wu (2018) argue disfavours theories invoking
large-scale migration.
On the other hand, TOI-1288 b is a bit larger than the planets in the
aforementioned studies and might have a gaseous envelope. In line with the
discussion above, hot Neptunes are in danger of losing their atmospheres,
especially while their stars are young and active (e.g. Lopez et al., 2012).
Kozai-Lidov cycles and high-eccentricity migration can deliver Neptune-sized
planets on short period orbits past this active stage ($\sim 100$ Myr) for the
star (Dawson & Johnson, 2018). Interactions between TOI-1288 b and c could
therefore be responsible for transporting TOI-1288 b to its current position.
Subsequent tidal interactions with the star could then have damped the
eccentricity to the current value ($0.064^{+0.014}_{-0.015}$), which compared
to Earth’s orbit ($\sim 0.016$) is still significant.
To assess whether planet c can influence the dynamics of the inner planetary
system as we see it today, we calculated the planet-star coupling parameter,
$\epsilon_{\star 1}$, given in Lai et al. (2018), which is a measure for
whether an outer companion can cause the orbit of the inner planet to precess.
For this we used the approximation in their Eq. (24), which is made for the
case of a hot Jupiter with a gas giant companion at a separation of around 1
AU. While not exactly the case here, the approximation can still provide us
with some qualitative insights.
As we need to know the stellar rotation period, $P_{\rm rot}$, for this, we
searched the TESS light curve using the autocorrelation method (McQuillan et
al., 2013); however, we did not detect any signs of stellar rotation. Instead
we estimated $P_{\rm rot}$ from the age, $\tau=10.05$ Gyr (Table 3), and
colour, $B-V=0.94$ (Table 1), using the relation in Mamajek & Hillenbrand
(2008), which yields a rotation period of 64 d. From this we get
$\epsilon_{\star 1}\sim 0.5$, suggesting a strong coupling between TOI-1288 b
and the star. However, it is not too far from the resonant regime of
$\epsilon_{\star 1}\sim 1$, meaning that excitation of the spin-orbit angle
(the obliquity) could be possible.
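As a rough cross-check of the adopted rotation period, the Mamajek & Hillenbrand (2008) relation can be evaluated with its commonly quoted coefficients (the paper's exact calibration and adopted age may differ slightly, so this is only a sketch).

```python
def rotation_period(bv, age_myr, a=0.407, b=0.325, c=0.495, n=0.566):
    """Gyrochronology relation P_rot = a * (B-V - c)^b * t^n,
    with t in Myr and P_rot in days, using the Mamajek & Hillenbrand (2008)
    coefficients as commonly quoted."""
    return a * (bv - c) ** b * age_myr ** n

# B-V = 0.94 and age = 10.05 Gyr as quoted in the text
p_rot = rotation_period(bv=0.94, age_myr=10050.0)
print(f"P_rot ~ {p_rot:.0f} d")  # comparable to the 64 d adopted in the text
```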
In addition to TOI-1288 c for which we have constrained the orbit and thus the
mass to some extent, we also see evidence for what could be a companion on an
even wider orbit. However, for the time being we can only make rather crude
inferences about the characteristics of this companion as was done in Section
4, namely Fig. 10. For instance, if this companion were on a 10 yr coplanar
(with respect to TOI-1288 b) orbit it would have a mass of around 0.3 MJ. To
decipher the dynamical influence of this companion on the system architecture
would require continued RV monitoring to trace out the orbit.
## 6 Conclusions
Here we presented the discovery of multiple planets in the TOI-1288 system.
Using photometry from TESS as well as ground-based telescopes, we have
determined that the transiting planet TOI-1288 b is a super Neptune ($5.24\pm
0.09$ R⊕) on a short period orbit ($2.699835^{+0.000004}_{-0.000003}$ d).
TOI-1288 b thus joins the growing population of super Neptunes that, despite
the drought, have settled in the Neptunian desert. We have characterised the
planet in terms of mass through intensive RV monitoring with the HARPS-N and
HIRES spectrographs, where we find a mass of $42\pm 3$ M⊕.
Combining the radius and mass for TOI-1288 b, we find that the planet can be
described as a rocky core with a gaseous envelope at high irradiation. Similar
compositions are found for the planets most similar to TOI-1288 b in terms
of orbital period and radius, meaning that the internal structure and
composition might be a crucial prerequisite for survival in the desert. Atmospheric
studies of occupants in and around the desert could help shed light on the
processes, such as photoevaporation, shaping this region. TOI-1288 b is a
well-suited candidate for such studies.
Furthermore, from our RV monitoring we also found evidence for an additional
companion in the TOI-1288 system with an orbital period of $443^{+11}_{-13}$
d. We find a minimum mass of $84\pm 7$ M⊕, meaning that if this companion is
close to being coplanar with TOI-1288 b, it would be a Saturn-mass planet.
TOI-1288 c might have been responsible for transporting TOI-1288 b from a
more distant orbit to its present-day location, for instance, through high-
eccentricity migration. Finally, we detect hints of a long-term RV trend
possibly caused by another body in the TOI-1288 system.
## Acknowledgements
We thank the anonymous referee for a timely review. This paper includes data
collected by the TESS mission. Funding for the TESS mission is provided by the
NASA Explorer Program. We acknowledge the use of public TESS Alert data from
pipelines at the TESS Science Office and the TESS Science Operations Center.
Funding for the Stellar Astrophysics Centre is provided by The Danish National
Research Foundation (Grant agreement no.: DNRF106). Based on observations
(programme IDs: A40TAC_22, A41TAC_19, A41TAC_49, A43TAC_11, CAT19A_162,
CAT22A_111, and ITP19_1) made with the Italian Telescopio Nazionale Galileo
(TNG) operated on the island of La Palma by the Fundación Galileo Galilei of
the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del
Roque de los Muchachos of the Instituto de Astrofisica de Canarias. Based on
observations made with the Nordic Optical Telescope, owned in collaboration by
the University of Turku and Aarhus University, and operated jointly by Aarhus
University, the University of Turku and the University of Oslo, representing
Denmark, Finland and Norway, the University of Iceland and Stockholm
University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of
the Instituto de Astrofisica de Canarias. We are extremely grateful to the NOT
and TNG staff members for their unique and superb support during the
observations. Resources supporting this work were provided by the NASA High-
End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS)
Division at Ames Research Center for the production of the SPOC data products.
Some of the data presented herein were obtained at the W. M. Keck Observatory,
which is operated as a scientific partnership among the California Institute
of Technology, the University of California and the National Aeronautics and
Space Administration. The Observatory was made possible by the generous
financial support of the W. M. Keck Foundation. The authors wish to recognize
and acknowledge the very significant cultural role and reverence that the
summit of Maunakea has always had within the indigenous Hawaiian community. We
are most fortunate to have the opportunity to conduct observations from this
mountain. This work makes use of observations from the LCOGT network. Part of
the LCOGT telescope time was granted by NOIRLab through the Mid-Scale
Innovations Program (MSIP). MSIP is funded by NSF. This research has made use
of the Exoplanet Follow-up Observation Program (ExoFOP; DOI: 10.26134/ExoFOP5)
website, which is operated by the California Institute of Technology, under
contract with the National Aeronautics and Space Administration under the
Exoplanet Exploration Program. The observations in the paper made use of the
NN-EXPLORE Exoplanet and Stellar Speckle Imager (NESSI). NESSI was funded by
the NASA Exoplanet Exploration Program and the NASA Ames Research Center.
NESSI was built at the Ames Research Center by Steve B. Howell, Nic Scott,
Elliott P. Horch, and Emmett Quigley. The authors are honored to be permitted
to conduct observations on Iolkam Du’ag (Kitt Peak), a mountain within the
Tohono O’odham Nation with particular significance to the Tohono O’odham
people. This work presents results from the European Space Agency (ESA) space
mission Gaia. Gaia data are being processed by the Gaia Data Processing and
Analysis Consortium (DPAC). Funding for the DPAC is provided by national
institutions, in particular the institutions participating in the Gaia
MultiLateral Agreement (MLA). The Gaia mission website is
https://www.cosmos.esa.int/gaia. The Gaia archive website is
https://archives.esac.esa.int/gaia. Some of the observations in the paper made
use of the High-Resolution Imaging instrument ‘Alopeke. ‘Alopeke was funded by
the NASA Exoplanet Exploration Program and built at the NASA Ames Research
Center by Steve B. Howell, Nic Scott, Elliott P. Horch, and Emmett Quigley.
Data were reduced using a software pipeline originally written by Elliott
Horch and Mark Everett. ‘Alopeke was mounted on the Gemini North telescope of
the international Gemini Observatory, a program of NSF’s OIR Lab, which is
managed by the Association of Universities for Research in Astronomy (AURA)
under a cooperative agreement with the National Science Foundation on behalf
of the Gemini partnership: the National Science Foundation (United States),
National Research Council (Canada), Agencia Nacional de Investigación y
Desarrollo (Chile), Ministerio de Ciencia, Tecnología e Innovación
(Argentina), Ministério da Ciência, Tecnologia, Inovações e Comunicações
(Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea).
E.K. and S.A. acknowledge the support from the Danish Council for Independent
Research through a grant, No.2032-00230B. C.M.P. and J.K. gratefully
acknowledge the support of the Swedish National Space Agency (SNSA; DNR
2020-00104). M.S.L would like to acknowledge the support from VILLUM FONDEN
(research grant 42101) and The Independent Research Fund Denmark’s Inge
Lehmann program (grant agreement no.: 1131-00014B). This work is partly
supported by JSPS KAKENHI Grant Number JP17H04574, JP18H05439, JP20K14521,
JP19K14783, JP21H00035, JST CREST Grant Number JPMJCR1761, the Astrobiology
Center of National Institutes of Natural Sciences (NINS) (Grant Number
AB031010). D.H. acknowledges support from the Alfred P. Sloan Foundation and
the National Aeronautics and Space Administration (80NSSC21K0652). K.W.F.L.
was supported by Deutsche Forschungsgemeinschaft grants RA714/14-1 within the
DFG Schwerpunkt SPP 1992, Exploring the Diversity of Extrasolar Planets.
K.K.M. acknowledges support from the New York Community Trust Fund for
Astrophysical Research. Parts of the numerical results presented in this work
were obtained at the Centre for Scientific Computing, Aarhus
https://phys.au.dk/forskning/faciliteter/cscaa/. This research made use of
Astropy (http://www.astropy.org), a community-developed core Python package
for Astronomy (Astropy Collaboration et al., 2013, 2018, 2022). This research
made use of Astroquery (Ginsburg et al., 2019). This research made use of
TESScut (Brasseur et al., 2019). This research has made use of "Aladin sky
atlas" developed at CDS, Strasbourg Observatory, France (Bonnarel et al.,
2000; Boch & Fernique, 2014).
## Data Availability
The radial velocities underlying this article are available in its online
supplementary material.
## References
* Aguirre Børsen-Koch et al. (2022) Aguirre Børsen-Koch V., et al., 2022, MNRAS, 509, 4344
* Allard et al. (2012) Allard F., Homeier D., Freytag B., 2012, Philosophical Transactions of the Royal Society of London Series A, 370, 2765
* Anglada-Escudé & Butler (2012) Anglada-Escudé G., Butler R. P., 2012, ApJS, 200, 15
* Astropy Collaboration et al. (2013) Astropy Collaboration et al., 2013, A&A, 558, A33
* Astropy Collaboration et al. (2018) Astropy Collaboration et al., 2018, AJ, 156, 123
* Astropy Collaboration et al. (2022) Astropy Collaboration et al., 2022, ApJ, 935, 167
* Bakos et al. (2015) Bakos G. Á., et al., 2015, ApJ, 813, 111
* Batalha et al. (2017) Batalha N. E., et al., 2017, PASP, 129, 064501
* Boch & Fernique (2014) Boch T., Fernique P., 2014, in Manset N., Forshay P., eds, Astronomical Society of the Pacific Conference Series Vol. 485, Astronomical Data Analysis Software and Systems XXIII. p. 277
* Bonnarel et al. (2000) Bonnarel F., et al., 2000, A&AS, 143, 33
* Bonomo et al. (2014) Bonomo A. S., et al., 2014, A&A, 572, A2
* Borucki et al. (2010) Borucki W. J., et al., 2010, Science, 327, 977
* Brasseur et al. (2019) Brasseur C. E., Phillip C., Fleming S. W., Mullally S. E., White R. L., 2019, Astrocut: Tools for creating cutouts of TESS images, Astrophysics Source Code Library, record ascl:1905.007 (ascl:1905.007)
* Brown et al. (2013) Brown T. M., et al., 2013, Publications of the Astronomical Society of the Pacific, 125, 1031
* Bruntt et al. (2010) Bruntt H., et al., 2010, MNRAS, 405, 1907
* Bryan et al. (2019) Bryan M. L., Knutson H. A., Lee E. J., Fulton B. J., Batygin K., Ngo H., Meshkat T., 2019, AJ, 157, 52
* Buchhave et al. (2010) Buchhave L. A., et al., 2010, ApJ, 720, 1118
* Buchhave et al. (2012) Buchhave L. A., et al., 2012, Nature, 486, 375
* Buchhave et al. (2014) Buchhave L. A., et al., 2014, Nature, 509, 593
* Cabrera et al. (2012) Cabrera J., Csizmadia S., Erikson A., Rauer H., Kirste S., 2012, A&A, 548, A44
* Castelli & Kurucz (2004) Castelli F., Kurucz R. L., 2004, arXiv:astro-ph/0405087
* Choi et al. (2016) Choi J., Dotter A., Conroy C., Cantiello M., Paxton B., Johnson B. D., 2016, ApJ, 823, 102
* Ciardi et al. (2015) Ciardi D. R., Beichman C. A., Horch E. P., Howell S. B., 2015, ApJ, 805, 16
* Claret (2017) Claret A., 2017, A&A, 600, A30
* Claret & Bloemen (2011) Claret A., Bloemen S., 2011, A&A, 529, A75
* Collins (2019) Collins K., 2019, in American Astronomical Society Meeting Abstracts #233. p. 140.05
* Collins et al. (2017) Collins K. A., Kielkopf J. F., Stassun K. G., Hessman F. V., 2017, AJ, 153, 77
* Cosentino et al. (2012) Cosentino R., et al., 2012, in Ground-based and Airborne Instrumentation for Astronomy IV. p. 84461V, doi:10.1117/12.925738
* Crossfield et al. (2016) Crossfield I. J. M., et al., 2016, ApJS, 226, 7
* Cutri et al. (2003) Cutri R. M., et al., 2003, VizieR Online Data Catalog, p. II/246
* Dawson & Johnson (2018) Dawson R. I., Johnson J. A., 2018, ARA&A, 56, 175
* Demangeon et al. (2018) Demangeon O. D. S., et al., 2018, A&A, 610, A63
* Djupvik & Andersen (2010) Djupvik A. A., Andersen J., 2010, in Highlights of Spanish Astrophysics V. p. 211 (arXiv:0901.4015), doi:10.1007/978-3-642-11250-8_21
* Doyle et al. (2014) Doyle A. P., Davies G. R., Smalley B., Chaplin W. J., Elsworth Y., 2014, MNRAS, 444, 3592
* Foreman-Mackey et al. (2013) Foreman-Mackey D., Hogg D. W., Lang D., Goodman J., 2013, PASP, 125, 306
* Foreman-Mackey et al. (2017) Foreman-Mackey D., Agol E., Angus R., Ambikasaran S., 2017, arXiv e-prints
* Frandsen & Lindberg (1999) Frandsen S., Lindberg B., 1999, in Karttunen H., Piirola V., eds, Astrophysics with the NOT. p. 71
* Fridlund et al. (2017) Fridlund M., et al., 2017, A&A, 604, A16
* Fukui et al. (2011) Fukui A., et al., 2011, PASJ, 63, 287
* Gaia Collaboration et al. (2022) Gaia Collaboration, Vallenari A., Brown A. G. A., Prusti T., et al., 2022, A&A
* Gardner et al. (2006) Gardner J. P., et al., 2006, Space Sci. Rev., 123, 485
* Ginsburg et al. (2019) Ginsburg A., et al., 2019, AJ, 157, 98
* Green et al. (2019) Green G. M., Schlafly E., Zucker C., Speagle J. S., Finkbeiner D., 2019, ApJ, 887, 93
* Guerrero et al. (2021) Guerrero N. M., et al., 2021, ApJS, 254, 39
* Hidalgo et al. (2018) Hidalgo S. L., et al., 2018, ApJ, 856, 125
* Hirano et al. (2011) Hirano T., Suto Y., Winn J. N., Taruya A., Narita N., Albrecht S., Sato B., 2011, ApJ, 742, 69
* Hirsch et al. (2017) Hirsch L. A., et al., 2017, AJ, 153, 117
* Hodapp et al. (2003) Hodapp K. W., et al., 2003, PASP, 115, 1388
* Høg et al. (2000) Høg E., et al., 2000, A&A, 355, L27
* Howard et al. (2010a) Howard A. W., et al., 2010a, Science, 330, 653
* Howard et al. (2010b) Howard A. W., et al., 2010b, ApJ, 721, 1467
* Howell et al. (2011) Howell S. B., Everett M. E., Sherry W., Horch E., Ciardi D. R., 2011, AJ, 142, 19
* Husser et al. (2013) Husser T. O., Wende-von Berg S., Dreizler S., Homeier D., Reiners A., Barman T., Hauschildt P. H., 2013, A&A, 553, A6
* Ida & Lin (2004) Ida S., Lin D. N. C., 2004, ApJ, 604, 388
* Jenkins (2002) Jenkins J. M., 2002, ApJ, 575, 493
* Jenkins et al. (2010) Jenkins J. M., et al., 2010, in Radziwill N. M., Bridger A., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7740, Software and Cyberinfrastructure for Astronomy. p. 77400D, doi:10.1117/12.856764
* Jenkins et al. (2016) Jenkins J. M., et al., 2016, in Software and Cyberinfrastructure for Astronomy IV. p. 99133E, doi:10.1117/12.2233418
* Jensen (2013) Jensen E., 2013, Tapir: A web interface for transit/eclipse observability, Astrophysics Source Code Library (ascl:1306.007)
* Kane et al. (2019) Kane S. R., et al., 2019, AJ, 157, 252
* Kanodia et al. (2020) Kanodia S., et al., 2020, ApJ, 899, 29
* Kanodia et al. (2021) Kanodia S., et al., 2021, AJ, 162, 135
* Kempton et al. (2018) Kempton E. M. R., et al., 2018, PASP, 130, 114401
* Kovács et al. (2002) Kovács G., Zucker S., Mazeh T., 2002, A&A, 391, 369
* Kreidberg (2015) Kreidberg L., 2015, PASP, 127, 1161
* Kurucz (1993) Kurucz R. L., 1993, VizieR Online Data Catalog, p. VI/39
* Kurucz (2013) Kurucz R. L., 2013, ATLAS12: Opacity sampling model atmosphere program, Astrophysics Source Code Library (ascl:1303.024)
* Lai et al. (2018) Lai D., Anderson K. R., Pu B., 2018, MNRAS, 475, 5231
* Li et al. (2019) Li J., Tenenbaum P., Twicken J. D., Burke C. J., Jenkins J. M., Quintana E. V., Rowe J. F., Seader S. E., 2019, PASP, 131, 024506
* Lightkurve Collaboration et al. (2018) Lightkurve Collaboration et al., 2018, Lightkurve: Kepler and TESS time series analysis in Python, Astrophysics Source Code Library (ascl:1812.013)
* Lomb (1976) Lomb N. R., 1976, Ap&SS, 39, 447
* Lopez et al. (2012) Lopez E. D., Fortney J. J., Miller N., 2012, ApJ, 761, 59
* Lovis & Pepe (2007) Lovis C., Pepe F., 2007, A&A, 468, 1115
* Lundkvist et al. (2016) Lundkvist M. S., et al., 2016, Nature Communications, 7, 11201
* Mamajek & Hillenbrand (2008) Mamajek E. E., Hillenbrand L. A., 2008, ApJ, 687, 1264
* Mayor et al. (2011) Mayor M., et al., 2011, arXiv e-prints, p. arXiv:1109.2497
* Mazeh et al. (2016) Mazeh T., Holczer T., Faigler S., 2016, A&A, 589, A75
* McCully et al. (2018) McCully C., Volgenau N. H., Harbeck D.-R., Lister T. A., Saunders E. S., Turner M. L., Siiverd R. J., Bowman M., 2018, in Proc. SPIE. p. 107070K (arXiv:1811.04163), doi:10.1117/12.2314340
* McLaughlin (1924) McLaughlin D. B., 1924, ApJ, 60, 22
* McQuillan et al. (2013) McQuillan A., Aigrain S., Mazeh T., 2013, MNRAS, 432, 1203
* Mollière et al. (2019) Mollière P., Wardenier J. P., van Boekel R., Henning T., Molaverdikhani K., Snellen I. A. G., 2019, A&A, 627, A67
* Mollière et al. (2020) Mollière P., et al., 2020, A&A, 640, A131
* Mordasini et al. (2009) Mordasini C., Alibert Y., Benz W., Naef D., 2009, A&A, 501, 1161
* Morris et al. (2020) Morris R. L., Twicken J. D., Smith J. C., Clarke B. D., Jenkins J. M., Bryson S. T., Girouard F., Klaus T. C., 2020, Kepler Data Processing Handbook: Photometric Analysis, Kepler Science Document KSCI-19081-003
* Murgas et al. (2021) Murgas F., et al., 2021, A&A, 653, A60
* Narita et al. (2015) Narita N., et al., 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 045001
* Owen & Lai (2018) Owen J. E., Lai D., 2018, MNRAS, 479, 5012
* Pepper et al. (2020) Pepper J., et al., 2020, AJ, 159, 243
* Persson et al. (2018) Persson C. M., et al., 2018, A&A, 618, A33
* Persson et al. (2022) Persson C. M., et al., 2022, arXiv e-prints, p. arXiv:2208.05797
* Piskunov & Valenti (2017) Piskunov N., Valenti J. A., 2017, A&A, 597, A16
* Ricker et al. (2015) Ricker G. R., et al., 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Rossiter (1924) Rossiter R. A., 1924, ApJ, 60, 15
* Ryabchikova et al. (2015) Ryabchikova T., Piskunov N., Kurucz R. L., Stempels H. C., Heiter U., Pakhomov Y., Barklem P. S., 2015, Phys. Scr., 90, 054005
* Sanchis-Ojeda et al. (2014) Sanchis-Ojeda R., Rappaport S., Winn J. N., Kotson M. C., Levine A., El Mellah I., 2014, ApJ, 787, 47
* Savitzky & Golay (1964) Savitzky A., Golay M. J. E., 1964, Analytical Chemistry, 36, 1627
* Scargle (1982) Scargle J. D., 1982, ApJ, 263, 835
* Scott (2019) Scott N. J., 2019, in AAS/Division for Extreme Solar Systems Abstracts. p. 330.15
* Scott et al. (2021) Scott N. J., et al., 2021, Frontiers in Astronomy and Space Sciences, 8, 138
* Silva Aguirre et al. (2015) Silva Aguirre V., et al., 2015, MNRAS, 452, 2127
* Smith et al. (2012) Smith J. C., et al., 2012, PASP, 124, 1000
* Smith et al. (2021) Smith A. M. S., et al., 2021, A&A, 646, A183
* Southworth (2011) Southworth J., 2011, MNRAS, 417, 2166
* Stumpe et al. (2012) Stumpe M. C., et al., 2012, PASP, 124, 985
* Stumpe et al. (2014) Stumpe M. C., Smith J. C., Catanzarite J. H., Van Cleve J. E., Jenkins J. M., Twicken J. D., Girouard F. R., 2014, PASP, 126, 100
* Szabó & Kálmán (2019) Szabó G. M., Kálmán S., 2019, MNRAS, 485, L116
* Telting et al. (2014) Telting J. H., et al., 2014, Astronomische Nachrichten, 335, 41
* Twicken et al. (2010) Twicken J. D., Clarke B. D., Bryson S. T., Tenenbaum P., Wu H., Jenkins J. M., Girouard F., Klaus T. C., 2010, in Radziwill N. M., Bridger A., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7740, Software and Cyberinfrastructure for Astronomy. p. 774023, doi:10.1117/12.856790
* Twicken et al. (2018) Twicken J. D., et al., 2018, PASP, 130, 064502
* Valenti & Piskunov (1996) Valenti J. A., Piskunov N., 1996, A&AS, 118, 595
* Vines & Jenkins (2022) Vines J. I., Jenkins J. S., 2022, arXiv e-prints, p. arXiv:2204.03769
* Vissapragada et al. (2022) Vissapragada S., et al., 2022, arXiv e-prints, p. arXiv:2204.11865
* Vogt et al. (1994) Vogt S. S., et al., 1994, in Crawford D. L., Craine E. R., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 2198, Instrumentation in Astronomy VIII. p. 362, doi:10.1117/12.176725
* Yee et al. (2017) Yee S. W., Petigura E. A., von Braun K., 2017, ApJ, 836, 77
* Zeng et al. (2016) Zeng L., Sasselov D. D., Jacobsen S. B., 2016, ApJ, 819, 127
* Zeng et al. (2019) Zeng L., et al., 2019, Proceedings of the National Academy of Science, 116, 9723
* Zhu & Wu (2018) Zhu W., Wu Y., 2018, AJ, 156, 92
## Appendix A Figures
Figure 14: Light curves of TOI-1288 b. The phase-folded transits of planet b
from all the different photometers. The TESS light curve (top left) is the GP
detrended data from Fig. 2. The grey lines are the best-fitting models.
Figure 15: Colour-magnitude diagrams for TOI-1288. Comparison of observed
photometry with predicted photometry for both candidate companions. In each
case, we show the CMD position of the host (black), the predicted position of
a bound companion based on each measured $\Delta$mag (light blue) and the
weighted average of these predictions (dark blue), and finally the observed
CMD position of the companion (red). The clear disagreement for star 2
(right) indicates that this is a background object, while the relative
agreement between the red and dark blue points for star 1 (left) could
indicate a bound companion. As discussed in Section 2.2.5 this is not the
case.
Figure 16: DSS2 image of TOI-1288. The field around TOI-1288 as seen by the
Digitized Sky Survey (DSS2). TOI-1288 is marked with the grey dot, and the
red dots are the stars within the aperture in Fig. 1 using the same
(relative) scaling for the marker sizes.
Figure 17: The hot Neptunian desert as in Fig. 11, but with the colour coding
by host-star effective temperature.
## Appendix B Tables
Table 6: Ground-based photometry. Information on our ground-based photometric
observations.
Observatory | Location | Aperture (m) | Photometric Aperture (arcsec) | UTC Date | Filter | Coverage
---|---|---|---|---|---|---
LCOGT1-TFN | Tenerife, Spain | 1.0 | 5.8 | 2021-09-18 | $z$-short2 | Full
LCOGT-TFN | Tenerife, Spain | 1.0 | 5.8 | 2021-09-18 | $B$ | Full
Whitin | Massachusetts, USA | 0.7 | 8.0 | 2020-11-15 | $z^{\prime}$ | Full
Whitin | Massachusetts, USA | 0.7 | 8.0 | 2020-11-15 | $g^{\prime}$ | Full
Ca l’Ou3 | Catalonia, Spain | 0.4 | 6.7 | 2021-05-14 | $B$ | Full
Albanyà4 | Catalonia, Spain | 0.4 | 12.2 | 2019-12-02 | $I_{\rm c}$ | Full
MuSCAT5 | Okayama, Japan | 1.88 | 5.4 | 2019-10-31 | Sloan $g^{\prime}$ | Ingress
MuSCAT | Okayama, Japan | 1.88 | 5.4 | 2019-10-31 | Sloan $r^{\prime}$ | Ingress
MuSCAT | Okayama, Japan | 1.88 | 5.4 | 2019-10-31 | $z$-short2 | Ingress
* 1
Las Cumbres Observatory Global Telescope (LCOGT; Brown et al., 2013).
* 2
Pan-STARRS $z$-short.
* 3
Observatori de Ca l’Ou, Sant Martí Sesgueioles.
* 4
Albanyà Observatory.
* 5
Multicolor Simultaneous Camera for studying Atmospheres of Transiting
exoplanets (MuSCAT; Narita et al., 2015).
Table 7: Radial velocities. The epochs, RVs, and errors from the HARPS-N and HIRES observations. This table is available in its entirety online.
Epoch | RV | $\sigma_{\mathrm{RV}}$ | Instrument
---|---|---|---
BJDTDB | m s-1 | m s-1 |
2458790.340214 | -68045.93 | 1.21 | HARPS-N
2458790.359751 | -68048.67 | 1.27 | HARPS-N
2458821.330149 | -68073.51 | 1.16 | HARPS-N
⋮ | ⋮ | ⋮ | ⋮
2459702.697487 | -68051.86 | 0.90 | HARPS-N
2458827.739873 | 32.38 | 1.22 | HIRES
2458844.755499 | 4.83 | 1.28 | HIRES
2458852.719975 | 11.01 | 1.28 | HIRES
⋮ | ⋮ | ⋮ | ⋮
2459498.888082 | -14.75 | 1.51 | HIRES
Table 8: Limb-darkening coefficients. The limb-darkening coefficients for our MCMC using a quadratic limb-darkening law. We stepped in the sum of the coefficients with a Gaussian prior ($\mathcal{N}(\mu,\sigma)$), while keeping the difference fixed ($\mathcal{F}(c)$). The initial values were found from the tables in Claret (2017) for the case of TESS and Claret & Bloemen (2011) for the rest of the filters.
Parameter | Name | Prior | Value
---|---|---|---
Stepping parameters
$\rm\delta M$ | Dilution | $\mathcal{N}$(4.41,0.02) | $4.410^{+0.021}_{-0.019}$
$(q_{1}+q_{2})_{1}$ | Sum of limb-darkening coefficients TESS | $\mathcal{N}$(0.6184,0.1) | $0.57\pm 0.05$
$(q_{1}+q_{2})_{2}$ | Sum of limb-darkening coefficients Albany$\rm\grave{a}$ $I$ | $\mathcal{N}$(0.6611,0.1) | $0.67^{+0.10}_{-0.09}$
$(q_{1}+q_{2})_{3}$ | Sum of limb-darkening coefficients Whitin $g^{\prime}$ | $\mathcal{N}$(0.804,0.1) | $0.86^{+0.03}_{-0.07}$
$(q_{1}+q_{2})_{4}$ | Sum of limb-darkening coefficients Whitin $z^{\prime}$ | $\mathcal{N}$(0.5544,0.1) | $0.57\pm 0.10$
$(q_{1}+q_{2})_{5}$ | Sum of limb-darkening coefficients LCO TFN $z$-short | $\mathcal{N}$(0.5544,0.1) | $0.53\pm 0.09$
$(q_{1}+q_{2})_{6}$ | Sum of limb-darkening coefficients LCO TFN $B$ | $\mathcal{N}$(0.8402,0.1) | $0.95^{+0.03}_{-0.05}$
$(q_{1}+q_{2})_{7}$ | Sum of limb-darkening coefficients CALOU $B$ | $\mathcal{N}$(0.8402,0.1) | $0.95^{+0.03}_{-0.05}$
$(q_{1}+q_{2})_{8}$ | Sum of limb-darkening coefficients MuSCAT $g^{\prime}$ | $\mathcal{N}$(0.804,0.1) | $0.87^{+0.03}_{-0.08}$
$(q_{1}+q_{2})_{9}$ | Sum of limb-darkening coefficients MuSCAT $r^{\prime}$ | $\mathcal{N}$(0.712,0.1) | $0.72\pm 0.10$
$(q_{1}+q_{2})_{10}$ | Sum of limb-darkening coefficients MuSCAT $z$-short | $\mathcal{N}$(0.5544,0.1) | $0.56\pm 0.10$
Fixed parameters
$(q_{1}-q_{2})_{1}$ | Difference of limb-darkening coefficients TESS | $\mathcal{F}(0.1598)$ |
$(q_{1}-q_{2})_{2}$ | Difference of limb-darkening coefficients Albany$\rm\grave{a}$ $I$ | $\mathcal{F}(0.2025)$ |
$(q_{1}-q_{2})_{3}$ | Difference of limb-darkening coefficients Whitin $g^{\prime}$ | $\mathcal{F}(0.79)$ |
$(q_{1}-q_{2})_{4}$ | Difference of limb-darkening coefficients Whitin $z^{\prime}$ | $\mathcal{F}(0.209)$ |
$(q_{1}-q_{2})_{5}$ | Difference of limb-darkening coefficients LCO TFN $z$-short | $\mathcal{F}(0.209)$ |
$(q_{1}-q_{2})_{6}$ | Difference of limb-darkening coefficients LCO TFN $B$ | $\mathcal{F}(0.9002)$ |
$(q_{1}-q_{2})_{7}$ | Difference of limb-darkening coefficients CALOU $B$ | $\mathcal{F}(0.9002)$ |
$(q_{1}-q_{2})_{8}$ | Difference of limb-darkening coefficients MuSCAT $g^{\prime}$ | $\mathcal{F}(0.79)$ |
$(q_{1}-q_{2})_{9}$ | Difference of limb-darkening coefficients MuSCAT $r^{\prime}$ | $\mathcal{F}(0.4344)$ |
$(q_{1}-q_{2})_{10}$ | Difference of limb-darkening coefficients MuSCAT $z$-short | $\mathcal{F}(0.209)$ |
Derived parameters
$(q_{1})_{1}$ | Linear limb-darkening coefficient TESS | | $0.36\pm 0.02$
$(q_{2})_{1}$ | Quadratic limb-darkening coefficient TESS | | $0.20\pm 0.02$
$(q_{1})_{2}$ | Linear limb-darkening coefficient Albany$\rm\grave{a}$ $I$ | | $0.44\pm 0.05$
$(q_{2})_{2}$ | Quadratic limb-darkening coefficient Albany$\rm\grave{a}$ $I$ | | $0.23\pm 0.05$
$(q_{1})_{3}$ | Linear limb-darkening coefficient Whitin $g^{\prime}$ | | $0.824^{+0.016}_{-0.034}$
$(q_{2})_{3}$ | Quadratic limb-darkening coefficient Whitin $g^{\prime}$ | | $0.034^{+0.016}_{-0.034}$
$(q_{1})_{4}$ | Linear limb-darkening coefficient Whitin $z^{\prime}$ | | $0.39\pm 0.05$
$(q_{2})_{4}$ | Quadratic limb-darkening coefficient Whitin $z^{\prime}$ | | $0.18\pm 0.05$
$(q_{1})_{5}$ | Linear limb-darkening coefficient LCO TFN $z$-short | | $0.37\pm 0.05$
$(q_{2})_{5}$ | Quadratic limb-darkening coefficient LCO TFN $z$-short | | $0.16\pm 0.05$
$(q_{1})_{6}$ | Linear limb-darkening coefficient LCO TFN $B$ | | $0.926^{+0.014}_{-0.026}$
$(q_{2})_{6}$ | Quadratic limb-darkening coefficient LCO TFN $B$ | | $0.026^{+0.014}_{-0.026}$
$(q_{1})_{7}$ | Linear limb-darkening coefficient CALOU $B$ | | $0.927^{+0.014}_{-0.027}$
$(q_{2})_{7}$ | Quadratic limb-darkening coefficient CALOU $B$ | | $0.027^{+0.014}_{-0.027}$
$(q_{1})_{8}$ | Linear limb-darkening coefficient MuSCAT $g^{\prime}$ | | $0.828^{+0.017}_{-0.038}$
$(q_{2})_{8}$ | Quadratic limb-darkening coefficient MuSCAT $g^{\prime}$ | | $0.038^{+0.017}_{-0.038}$
$(q_{1})_{9}$ | Linear limb-darkening coefficient MuSCAT $r^{\prime}$ | | $0.57\pm 0.05$
$(q_{2})_{9}$ | Quadratic limb-darkening coefficient MuSCAT $r^{\prime}$ | | $0.14\pm 0.05$
$(q_{1})_{10}$ | Linear limb-darkening coefficient MuSCAT $z$-short | | $0.38\pm 0.05$
$(q_{2})_{10}$ | Quadratic limb-darkening coefficient MuSCAT $z$-short | | $0.17\pm 0.05$
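The stepping parametrization above, which samples the sum $q_{1}+q_{2}$ while holding the difference $q_{1}-q_{2}$ fixed, inverts trivially to the individual coefficients. A minimal sketch (the function name is illustrative, not from the analysis pipeline):

```python
# Recover the individual quadratic limb-darkening coefficients from the
# MCMC stepping parametrization: the sum (q1 + q2) is sampled with a
# Gaussian prior while the difference (q1 - q2) is held fixed.
def limb_darkening_coeffs(q_sum, q_diff):
    """Return (q1, q2) given the sampled sum and the fixed difference."""
    q1 = 0.5 * (q_sum + q_diff)
    q2 = 0.5 * (q_sum - q_diff)
    return q1, q2

# Example with the TESS values from Table 8: posterior sum ~0.57,
# fixed difference 0.1598, giving q1 ~ 0.36 and q2 ~ 0.20.
q1, q2 = limb_darkening_coeffs(0.57, 0.1598)
```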
# Neutron detection and application with a novel 3D-projection scintillator
tracker in the future long-baseline neutrino oscillation experiments
S. Gwon Chung-Ang University, Seoul, South Korea P. Granger IRFU, CEA
Saclay, Gif-sur-Yvette, France G. Yang University of California, Berkeley,
CA, USA Lawrence Berkeley National Laboratory, CA, USA S. Bolognesi IRFU,
CEA Saclay, Gif-sur-Yvette, France T. Cai University of Rochester,
Rochester, NY, USA M. Danilov LPI, Lebedev Physics Institute, Moscow, Russia
A. Delbart IRFU, CEA Saclay, Gif-sur-Yvette, France A. De Roeck CERN,
European Organization for Nuclear Research S. Dolan CERN, European
Organization for Nuclear Research G. Eurin IRFU, CEA Saclay, Gif-sur-Yvette,
France R.F. Razakamiandra University of Antananarivo, Madagascar S. Fedotov
INR, Institute for Nuclear Research, Moscow, Russia G. Fiorentini Aguirre
South Dakota School of Mines and Technology, Rapid City, SD, USA R. Flight
University of Rochester, Rochester, NY, USA R. Gran University of Minnesota,
Duluth, MN, USA C. Ha Chung-Ang University, Seoul, South Korea C.K. Jung
Stony Brook University, Stony Brook, NY, USA K.Y. Jung Chung-Ang University,
Seoul, South Korea S. Kettell Brookhaven National Laboratory, Upton, NY, USA
M. Khabibullin INR, Institute for Nuclear Research, Moscow, Russia A.
Khotjantsev INR, Institute for Nuclear Research, Moscow, Russia M. Kordosky
College of William and Mary, Williamsburg, VA, USA Y. Kudenko INR, Institute
for Nuclear Research, Moscow, Russia Moscow Institute of Physics and
Technology (MIPT), Moscow, Russia Moscow Institute of Engineering and Physics
(MEPhI), Moscow, Russia T. Kutter Louisiana State University, Baton Rouge,
LA, USA J. Maneira University of Lisbon, Lisbon, Portugal S. Manly
University of Rochester, Rochester, NY, USA D.A. Martinez Caicedo South
Dakota School of Mines and Technology, Rapid City, SD, USA C. Mauger
University of Pennsylvania, Philadelphia, PA, USA K. McFarland University of
Rochester, Rochester, NY, USA C. McGrew Stony Brook University, Stony Brook,
NY, USA A. Mefodev INR, Institute for Nuclear Research, Moscow, Russia O.
Mineev INR, Institute for Nuclear Research, Moscow, Russia D. Naples
University of Pittsburgh, Pittsburgh, PA, USA A. Olivier University of
Rochester, Rochester, NY, USA V. Paolone University of Pittsburgh,
Pittsburgh, PA, USA S. Prasad Louisiana State University, Baton Rouge, LA,
USA C. Riccio Stony Brook University, Stony Brook, NY, USA J. Rodriguez
Rondon South Dakota School of Mines and Technology, Rapid City, SD, USA D.
Sgalaberna ETH Zurich, Zurich, Switzerland A. Sitraka South Dakota School
of Mines and Technology, Rapid City, SD, USA K. Siyeon Chung-Ang University,
Seoul, South Korea N. Skrobova LPI, Lebedev Physics Institute, Moscow, Russia
H. Su University of Pittsburgh, Pittsburgh, PA, USA S. Suvorov INR,
Institute for Nuclear Research, Moscow, Russia A. Teklu Stony Brook
University, Stony Brook, NY, USA M. Tzanov Louisiana State University, Baton
Rouge, LA, USA E. Valencia College of William and Mary, Williamsburg, VA,
USA K. Wood Stony Brook University, Stony Brook, NY, USA E. Worcester
Brookhaven National Laboratory, Upton, NY, USA N. Yershov INR, Institute for
Nuclear Research, Moscow, Russia
###### Abstract
Neutrino oscillation experiments require a precise measurement of the neutrino
energy. However, the kinematic detection of the final-state neutron in the
neutrino interaction is missing in current neutrino oscillation experiments.
The missing neutron kinematic detection results in a feed-down of the detected
neutrino energy compared to the true neutrino energy. A novel 3D-projection
scintillator tracker, which consists of roughly ten million active cubes
covered with an optical reflector, is capable of measuring the neutron kinetic
energy and direction on an event-by-event basis using the time-of-flight
technique thanks to the fast timing, fine granularity, and high light yield.
The $\bar{\nu}_{\mu}$ interactions tend to produce neutrons in the final
state. By inferring the neutron kinetic energy, the $\bar{\nu}_{\mu}$ energy
can be reconstructed better, allowing a tighter incoming neutrino flux
constraint. This paper shows the detector’s ability to reconstruct neutron
kinetic energy and the $\bar{\nu}_{\mu}$ flux constraint achieved by selecting
the charged-current interactions without mesons or protons in the final state.
## I Introduction
The future long-baseline neutrino oscillation experiments will accomplish a
measurement of the CP-violating phase with unprecedented precision by
measuring the difference in oscillations to electron neutrinos and electron
antineutrinos [1]. The neutrino energies of the future long-baseline neutrino
experiments range from hundreds of MeV up to a few GeV. The neutrino
interaction modes in this range are mainly charged-current quasi-elastic
(CCQE) scattering, CC resonant scattering (RES), and CC deep-inelastic
scattering (DIS). The neutrino cross sections for these scattering modes have
different energy dependences [2]. In order to discern the oscillation
phenomena, the experiments reconstruct the neutrino energy in the detector
from the visible particles produced in CC interactions.
A near detector is needed to measure unoscillated neutrino spectra and
constrain the systematic uncertainties such as neutrino flux, interaction
cross section, and detector acceptance. A stringent constraint on the flux and
neutrino interaction cross section from the near detector is required to
achieve a precise oscillation measurement. While it is relatively
straightforward to reconstruct charged particles, neutrons present a
particular challenge in neutrino event reconstruction. Since neutrons can
carry a significant fraction of the initial neutrino energy, directly
detecting neutron kinematics in particle detectors is highly beneficial.
The 3D-projection scintillator tracker (3DST) is proposed to be a powerful
near detector in future long-baseline experiments [3, 4, 1, 5, 6]. It is
capable of detecting neutron kinematics on an event-by-event basis. In this
manuscript, a flux constraint study is performed that includes the neutron
kinematics as an application to demonstrate the potential of 3DST to constrain
the flux uncertainty in long-baseline neutrino oscillation experiments. The
DUNE flux [7] is taken as an example in this work due to its wide energy
coverage.
The structure of the paper is as follows. Section II presents the key features
of the 3DST detector. Section III describes the neutron kinematic measurement.
Section IV shows a detailed description of the detector simulation setup.
Section V details the neutron detection performance, including neutron and
neutrino energy reconstructions and low-transverse-momentum ($\delta p_{T}$)
event selection. Section VI illustrates a neutrino flux constraint study.
## II The 3D-projection scintillator tracker
The role of the near detector in the long-baseline neutrino experiments is to
constrain the neutrino flux and cross-section systematic uncertainties and
central values that are applied to the far detector. A stringent systematic
constraint requires an accurate measurement of the neutrino interaction at the
near detector. DUNE can operate in forward horn current (FHC) and reverse horn
current (RHC) modes, which mainly produce neutrinos and antineutrinos,
respectively. This study focuses on the RHC mode. Among the final-state
particles, especially in the CCQE channel, the neutron is the most difficult
one to reconstruct. In most cases, the missing neutron energy leads to a
noticeably lower reconstructed neutrino energy than the true neutrino energy.
Fig. 2 shows the ratios of the averaged primary neutron energy to neutrino
(top plot) and antineutrino (bottom plot) energy in different CC interaction
modes from the GENIE generator [8]. The average energy fractions carried by
neutrons are about 3% and 10% in neutrino and antineutrino QE modes with
energy below $1\text{\,}\mathrm{GeV}$, respectively. It is
worth noting that for 10% of the RES channel in the antineutrino mode, the
energy fraction carried by neutrons reaches above 40% at low energy.
Figure 1: The concept of the 3D-projection scintillator tracker. Figure is
taken from [4].
Figure 2: Average energy fraction transferred to the primary neutrons relative
to the neutrino energy (top) and the antineutrino energy (bottom). The average
ratios are calculated for CCQE, CCRES, CCCOH, and CCDIS interaction modes. The
dashed lines show, for each channel, the energy fraction below which $90\%$ of
the distribution lies.
The original conceptual proposal of the 3DST detector can be found in [3]. The
conceptual design of the detector is shown in Fig. 1. The detector consists of
roughly ten million optically isolated plastic scintillator (CH) cubes with a
total dimension of 2.4 m $\times$ 2.16 m $\times$ 1.92 m. The scintillation
light inside each cube is absorbed by three wavelength-shifting fibers
perpendicular to each other passing through the cube and read out by a SiPM at
the end of each fiber. Three 2D-readout images of an event are constructed and
combined to form a pseudo-3D image. The 3DST is characterized by the following
features:
* •
Fine granularity with 1.5 $\times$ 1.5 $\times$ 1.5 cm$^{3}$ cube size and a fully
active target;
* •
4$\pi$ solid angle acceptance, giving momentum reconstruction even for
low-momentum tracks;
* •
Fast timing, 0.9 ns for each fiber and 0.5 ns for each cube (combining three
fibers), suitable for detecting neutrons [9].
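The combination of the three orthogonal 2D readout views into a pseudo-3D image, described above, can be sketched as a coincidence match between projections. The actual reconstruction handles charge sharing and ambiguities, so this is only an illustrative toy:

```python
# Illustrative sketch of matching three orthogonal 2D readout views
# (xy, xz, yz fiber hits) into 3D cube candidates. The real algorithm
# is more elaborate; hit coordinates here are cube indices.
def match_views(xy_hits, xz_hits, yz_hits):
    """Return 3D voxels (x, y, z) consistent with all three 2D views."""
    yz = set(yz_hits)
    voxels = []
    for (x, y) in set(xy_hits):
        for (x2, z) in set(xz_hits):
            if x2 != x:
                continue
            if (y, z) in yz:
                voxels.append((x, y, z))
    return voxels

# A single cube at (3, 5, 7) fires one fiber in each view:
print(match_views([(3, 5)], [(3, 7)], [(5, 7)]))  # [(3, 5, 7)]
```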
Due to the homogeneous and efficient performance of this detector, T2K has
first adopted such a detector (SuperFGD) in its upgrade program. The SuperFGD
is being built and will be a key component of the upgraded off-axis near
detector ND280. In order to better characterize the detector, a SuperFGD
prototype detector has been built. The prototype detector has a dimension of
24 cm $\times$ 8 cm $\times$ 48 cm with a 1 cm $\times$ 1 cm $\times$ 1 cm
cube size. The prototype has been exposed to a charged particle beamline at
CERN, and a significant amount of knowledge about the detector response was
learned [10]. In addition, in light of the importance of neutron kinematic
detection, two neutron beam tests were completed at the Los Alamos National
Laboratory (LANL) in December 2019 and 2020. A large amount of neutron
interaction data with energy ranging from 0 to 800 MeV has been collected with
the prototype detector. With the LANL beam test data, the detector response to
neutrons can be understood in detail. The beam test demonstrated the
prototype's capability to detect individual neutron kinematics by measuring the n-CH total
cross section. The main results have been published in [11]. In addition, the
beam test program is providing the neutron detection efficiency, scattering
angle, secondary scattering rate, and exclusive channel production
information, such as pion or proton production, as functions of the neutron
energy.
Furthermore, the 3DST neutron kinematic detection application has been
explored in previous works. One of the possible measurements is the transverse
kinematic balance of final-state particles in the process of CCQE
$\bar{\nu}_{l}~{}p\rightarrow l^{+}n$. When an antineutrino interacts with a
target proton in the detector, if the proton is not bound, as in hydrogen, the
sum of momenta in the plane perpendicular to the direction of incoming
antineutrino, denoted by $\delta p_{T}$, vanishes. Non-zero $\delta p_{T}$
implies that the final-state particles are not free from Fermi motion, binding
energy, or final-state interactions (FSI) inside the nucleus [12]. Thus, the
$\delta p_{T}$ provides a powerful enhancement for the $\nu$-p sample
selection. A comprehensive study of the transverse kinematics balance in the
context of the SuperFGD detector is presented in [12]. In addition, more
recently, the impact of the SuperFGD neutron detection capability on the
neutrino interaction cross section and flux constraint has been studied
quantitatively [13]. Another possible physics application with 3DST is
neutrino flux constraint with the CC0$\pi$0p1n channel, which has a $\mu^{+}$ and
a neutron in the final state. The CC0$\pi$0p1n channel has a relatively small
model uncertainty compared to other channels due to the simple event topology.
The channel is selected in this work in order to show the possibility of
constraining the incoming neutrino flux uncertainty with neutron kinematic
detection, as illustrated in Section VI.
It is worth noting that the CCQE and the CC0$\pi$0p1n channel are not
identical. The CCQE channel is a true interaction channel, while the
CC0$\pi$0p1n is a topological channel defined at the analysis level. We select
the CC0$\pi$0p1n channel mainly to target the CCQE interaction. Non-CCQE
events can also contribute to the selected CC0$\pi$0p1n sample.
## III Neutron energy measurement
This work mainly explores the impact of the 3DST detector in the DUNE near
detector hall with a distance of 574 m from the proton beam target. For
simplicity, we focus on the CC0$\pi$ interaction in this paper. Topologically,
the CC$0\pi$ channel is dominated by the CCQE and 2p2h events, and the main
target channel is CCQE. The CCRES interactions also contribute to CC0$\pi$ if
the pions are absorbed in the nucleus. The relative fractions of CCQE, 2p2h,
and others are 22.8%, 8.5%, and 68.7%, respectively.
Additionally, we try to select events with no protons and only one neutron in
the final state.
The work presented in this paper strongly depends on the neutrino-interaction,
nuclear and particle-propagation modeling. A few caveats should be clearly
stated here.
* •
The nuclear modeling uncertainties in the CC$0\pi$0p1n channel are covered in
an approximate way by the present studies. This study serves as a
demonstration of the 3DST capability.
* •
Both external (out-of-fiducial) and internal backgrounds are
critical. The background estimate depends on the neutrino interaction modeling
on the fiducial and out-of-fiducial material. The details of handling
these backgrounds, and the robustness of the background modeling, in
particular in the CC0$\pi$0p1n channel study, are discussed in later sections.
* •
Systematic uncertainty due to FSI and secondary interaction (SI) may not be
fully covered by this study. This problem is still under study by the
community [14], and no robust uncertainties are available yet.
In neutrino experiments, neutrino energy reconstruction is performed mainly in
three methods with the presence of neutrons:
1. 1.
Using kinematics of the lepton: the energy and scattering angle of the lepton
in the CCQE interaction are used, assuming momentum conservation for the two-
body interaction. However, other channels can mimic the CCQE signature, thus
causing significant bias.
2. 2.
Summing up all energy deposits inside the fiducial volume: neutrons carry
away more kinetic energy than they deposit. The “feed-down”
of the reconstructed energy can be significant.
3. 3.
Measuring kinetic energy of the neutron: the neutron kinetic energy is
estimated by measuring the neutron-induced isolated hit’s time and distance to
the neutrino interaction vertex. In the CCQE interaction, the neutron kinetic
energy is added to the $\mu^{+}$ energy.
The last method is the so-called Time-of-Flight method (ToF).
Figure 3: Time-of-flight and lever arm. A neutron-induced cluster in the 3DST
is identified by the first cluster after the $\bar{\nu}_{\mu}$ interaction.
The ToF technique for the neutron kinetic energy estimation is illustrated in
Fig. 3. For a $\bar{\nu}_{\mu}$ CCQE event in the 3DST, the start of a
$\mu^{+}$ track at time $t_{1}$ is marked as the $\bar{\nu}_{\mu}$ interaction
point. Then a cluster of signals occurring at a certain distance from the
$\bar{\nu}_{\mu}$ interaction vertex at time $t_{2}$ is marked as the neutron
interaction point. In Fig. 3, the cluster in red represents a proton recoil.
The time difference $t_{2}-t_{1}$ is the neutron time-of-flight, and the
distance between the two interaction points is called the lever arm. For the
CC$0\pi$ channel, we expect primary neutrons to be the main source of the
isolated clusters. Selecting the first cluster in time allows us to pick up
the primary neutron’s first interaction, thus measuring its energy with the
travel time and distance.
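The ToF estimate described above amounts to computing the neutron velocity from the lever arm and the time difference, then converting to a relativistic kinetic energy. A minimal sketch (function name and units are illustrative):

```python
import math

C = 29.9792458  # speed of light in cm/ns

def neutron_kinetic_energy(lever_arm_cm, tof_ns, m_n=939.565):
    """Relativistic neutron kinetic energy (MeV) from time-of-flight.

    beta = L / (c * t); KE = m_n * (gamma - 1).
    """
    beta = lever_arm_cm / (C * tof_ns)
    if not 0.0 < beta < 1.0:
        raise ValueError("unphysical beta; check timing or lever arm")
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return m_n * (gamma - 1.0)

# A neutron crossing a 50 cm lever arm in 4 ns (beta ~ 0.42)
# has a kinetic energy of roughly 94 MeV.
ke = neutron_kinetic_energy(50.0, 4.0)
```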
## IV Simulation setup
All analysis in this paper is based on a fully reconstructed Monte Carlo (MC)
simulation. The expected DUNE flux in antineutrino beam mode is used in the MC
simulation. The GENIE generator v3.00.04 tune G1810a [8] is used to model the
neutrino interaction with the nucleus in the detector. The modeling of the
final-state particle propagation in the detector was completed by the edep-sim
package, which is a wrapper of the GEANT4 software [15]. A realistic detector
geometry is generated by the DUNENDGGD package [16]. The full size of the detector
is 2.4 m $\times$ 2.4 m $\times$ 1.95 m. The simulation for the signal
response of the detector, including the signal readout, DAQ, and calibration,
is completed by the erep-sim package [17].
As a final step of the simulation chain, a full event reconstruction for the
detector is performed by the CubeRecon package [18]. For each event, the
particle trajectories are projected into three 2D views, which contain each
fiber’s energy and time readouts. The three 2D views are converted into 3D
reconstructed objects. There are two object classes: tracks and clusters. A
track is an object longer than three voxels; otherwise, the object is a
cluster. The objects have all the hit information, such as the position,
charge, and time. The following analysis is performed with the fully
reconstructed objects.
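The track/cluster classification described above (an object longer than three voxels is a track) can be sketched as follows; the hit layout is illustrative, and the object length is approximated here by the voxel count:

```python
# Minimal sketch of the track/cluster classification: a reconstructed
# 3D object is a "track" if it spans more than three voxels, otherwise
# it is a "cluster". Hit fields (position, charge, time) are illustrative.
def classify_object(hits):
    """hits: list of (x, y, z, charge, time) tuples for one 3D object."""
    voxels = {(x, y, z) for (x, y, z, _q, _t) in hits}
    return "track" if len(voxels) > 3 else "cluster"
```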
## V Neutron detection performance
In this section, we investigate the neutron detection performance of 3DST as
well as the impact of this performance on the neutrino energy measurement.
This study focuses on neutron detection in the CC0$\pi$ $\bar{\nu}_{\mu}$
interactions. In addition to the full reconstruction, the following
assumptions are made:
* •
The particle identification (PID) of charged particles is assumed to be
perfect, given the excellent reported $dE/dx$ resolution of the 3DST [10].
* •
A muon momentum resolution of 4% is applied. This resolution is realistic
given the typical momentum resolutions reachable by spectrometers that would
be placed around the 3DST [19].
* •
An angular resolution of $1^{\circ}$ is applied
for the azimuthal and polar angles of the muon, given the granularity of the
detector.
In this section, a few timing resolutions are assumed and compared. The
expected timing resolution for the single channel readout is 0.9 ns, dominated
by the scintillation time [20]. For each cube, the signal is read out by three
channels, thus the timing resolution can go down to
$\frac{0.9}{\sqrt{3}}\approx 0.5$ ns. Typically, the neutron-induced cluster
fires more than one cube, hence the timing resolution can go further down
to $\frac{0.5}{\sqrt{N}}$ ns, where $N$ is the number of fired cubes. The
electronics for the detector should be chosen to have a smaller impact on the
timing resolution.
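The scaling of the timing resolution with the number of readout channels assumes independent Gaussian readout errors that average down as $1/\sqrt{N}$; a minimal sketch:

```python
import math

SINGLE_FIBER_SIGMA_NS = 0.9  # single-channel timing resolution

def cube_time_resolution(n_fibers=3):
    """Timing resolution of one cube read out by n independent fibers."""
    return SINGLE_FIBER_SIGMA_NS / math.sqrt(n_fibers)

def cluster_time_resolution(n_cubes, n_fibers=3):
    """Resolution of a neutron-induced cluster of n_cubes fired cubes,
    assuming independent Gaussian errors that average down as 1/sqrt(N)."""
    return cube_time_resolution(n_fibers) / math.sqrt(n_cubes)

print(round(cube_time_resolution(), 2))      # 0.52
print(round(cluster_time_resolution(4), 2))  # 0.26
```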
Following these considerations, for each simulated event, the analysis
strategy is the following:
1. 1.
A single $\mu^{+}$ track with no additional tracks is considered.
2. 2.
The first isolated reconstructed object in time is selected, assuming it
corresponds to the first interaction of the primary neutron inside the
detector.
3. 3.
Topological cuts are applied to remove events not associated with the
interaction of a primary neutron.
4. 4.
The neutron momentum is estimated with the measured lever arm and time-of-
flight.
The muon track starting point is taken as the neutrino interaction vertex. If
there is more than one track within a spherical region centered at the vertex,
the event is rejected. The region is shown as the gray sphere in Fig. 4. The
radius of the sphere cut is set to 5.25 cm, which is the smallest distance
at which the vertex can be isolated from other tracks. CC charged-pion production
events could be mostly rejected by this cut since most $\pi^{\pm}$ tracks in
the final state are close to the neutrino interaction vertex. The events
remaining after the rejection are defined as “single-track” events.
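The vertex-isolation cut can be sketched as counting track start points inside the 5.25 cm sphere. This is a simplification: the text rejects events with any additional track inside the sphere, while here only track start points are tested, and the $\mu^{+}$ start is assumed to sit at the vertex:

```python
import math

SPHERE_RADIUS_CM = 5.25  # vertex isolation radius from the text

def is_single_track_event(vertex, track_starts):
    """Keep the event only if exactly one track (the mu+) starts within
    the isolation sphere around the interaction vertex.

    vertex and track_starts entries are (x, y, z) positions in cm.
    """
    n_inside = sum(
        1 for p in track_starts
        if math.dist(p, vertex) < SPHERE_RADIUS_CM
    )
    return n_inside == 1
```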
Figure 4: An example of a single-track event. If the event has multiple tracks
within the gray sphere, the event is rejected. The blue-colored track is the
$\mu^{+}$ track. The red-colored object likely originated from the muon and is
rejected. Among the remaining objects, the first in time is selected as the
neutron-induced candidate.
For some events, the first isolated object in time is not induced by the
primary neutron. In order to reject them, some additional selections are
applied. For most of the cases where the first cluster in time is not related
to the primary neutron, the energy deposit has been made by either a delta ray
from the $\mu^{+}$ track, a secondary neutron created by the primary neutron
or a primary proton produced by FSI. The secondary neutrons can hardly be
distinguished from primary neutrons as they have similar topologies. It is,
however, possible to remove most of the delta ray electrons and primary
protons by applying a cut on the angle between the $\mu^{+}$ track and the
direction defined by the vertex and the first cluster, as shown in Fig. 5.
Requiring an angle larger than $30^{\circ}$ with
the $\mu^{+}$ track allows us to increase the selection purity from 69% to 81%
with a loss of only 2% of signal. Each category in Fig. 5 is defined as
follows:
* •
Signal: Energy deposited by a primary neutron (by interacting with a proton,
for instance);
* •
Signal induced: Energy deposited by a secondary neutron that acquired kinetic
energy from an interaction with a primary neutron;
* •
$\delta$ electron: Energy deposited by a $\delta$ electron from the muon
track;
* •
Primary proton: Energy deposited by a primary proton;
* •
Background neutron: Energy deposited by a neutron that was neither created in
the primary interaction nor created by a primary neutron;
* •
Background other: Energy deposited by other kinds of particles, such as
mesons.
Figure 5: Distributions of the angular separation between the $\mu^{+}$ and
the first cluster in time for the different kinds of particles.
Furthermore, an additional selection can be made by requiring a minimum
distance between the antineutrino vertex and the earliest cluster (lever arm)
in order to select a subset of events with neutrons that travel a sufficient
distance. A longer lever arm results in a more precise energy reconstruction
by ToF, given that the relative uncertainty on the lever arm decreases,
leading to a better estimation of the neutron $\beta$, as reported in [12].
Fig. 6 shows how the neutron kinetic energy resolution evolves as a function
of the cut applied on the lever arm. Fig. 6 demonstrates that improving the
time resolution of such a detector allows for improving the neutron energy
resolution.
Figure 6: Neutron energy resolution as a function of the lever arm cut for
various time resolutions.
Finally, as reported in [12], an additional kinematic variable, the transverse
momentum imbalance $\delta p_{T}$, allows us to select a subset of events for
which the energy reconstruction is better controlled and, as a consequence, the
antineutrino energy resolution is improved. In the case of a $\bar{\nu}$ CCQE
interaction $\bar{\nu}p\rightarrow l^{+}n$, the transverse momentum imbalance
is simply defined as:
$\delta p_{T}=\left|\vec{p^{l}}_{T}+\vec{p^{n}}_{T}\right|,$ (1)
where $p^{n}$ and $p^{l}$ are the outgoing neutron and lepton momenta,
respectively. The $T$ subscript refers to the projection of the vector onto
the plane transverse to the incoming neutrino direction.
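Equation (1) can be evaluated directly by projecting both momenta onto the plane transverse to the incoming neutrino direction; a minimal sketch (units and variable names are illustrative):

```python
import math

def delta_pt(p_lepton, p_neutron, nu_dir):
    """Transverse momentum imbalance |p_T^l + p_T^n| of Eq. (1).

    p_lepton, p_neutron: 3-momenta (e.g. MeV/c); nu_dir: unit vector
    along the incoming antineutrino direction.
    """
    def transverse(p):
        # Subtract the component parallel to the neutrino direction.
        p_par = sum(a * d for a, d in zip(p, nu_dir))
        return tuple(a - p_par * d for a, d in zip(p, nu_dir))
    lt = transverse(p_lepton)
    nt = transverse(p_neutron)
    return math.sqrt(sum((a + b) ** 2 for a, b in zip(lt, nt)))

# Free-proton (hydrogen-like) kinematics: transverse momenta balance.
nu = (0.0, 0.0, 1.0)
print(delta_pt((100.0, 0.0, 300.0), (-100.0, 0.0, 200.0), nu))  # 0.0
```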
There is no transverse momentum imbalance in the final state for an
interaction on a free nucleon (the hydrogen target). On the other hand,
nuclear targets subject to Fermi motion lead to a non-zero $\delta p_{T}$.
Furthermore, inelastic neutrino interactions on nuclei with no meson in the
final state can occur and are difficult to distinguish from elastic
interactions. For example, 2p2h interactions, or the production of a $\pi$
that is reabsorbed by the nucleus, do not result in mesons in the final state.
Consequently, selecting events with a low $\delta p_{T}$ allows the selection
of a hydrogen-enriched sample. This can be seen in Fig. 7 where the
reconstructed $\delta p_{T}$ distributions for both hydrogen and carbon
interactions are shown. Moreover, the interactions on carbon nuclei with a low
$\delta p_{T}$ value tend to suffer less from the FSI and 2p2h. The unseen
nucleon, in the case of 2p2h or absorbed pion, carries transverse momentum
that is not measured and leads to the measurement of a large $\delta p_{T}$.
In addition, applying a cut on $\delta p_{T}$ allows us to reject those events
for which the primary neutron is misidentified or a meson is not
reconstructed, as demonstrated in Fig. 8. It can be seen that the application
of a loose cut on $\delta p_{T}$, such as $\delta
p_{T}<400\text{\,}\mathrm{MeV}$, can be enough to remove
part of the background and enhance the selection purity from 81% to 88%, while
a more stringent cut of $40\text{\,}\mathrm{MeV}$ results in
a purity of 93%. The efficiency for the neutrino hydrogen interaction with the
$\delta p_{T}$ cut ranges from 10% to 30%, depending on the lever arm requirement
[21].
Figure 7: Reconstructed $\delta p_{T}$ distributions for interactions on
Hydrogen and Carbon. Figure 8: Signal and background distributions in the
selected events as a function of the reconstructed $\delta p_{T}$. The blue
curve gives the integrated signal fraction for events with $\delta
p_{T,\text{reco}}$ below the considered value.
For the CCQE antineutrino interactions, it is possible to rely only on the
$\mu^{+}$ kinematics in order to compute the antineutrino energy:
$E_{\bar{\nu}}^{\text{lep}}=\frac{m_{n}^{2}-m_{p}^{2}-m_{\mu}^{2}+2m_{p}E_{\mu}}{2\left(m_{p}-E_{\mu}+p_{\mu}\cos{\theta_{\mu}}\right)},$
(2)
where $m_{n}$, $m_{p}$, and $m_{\mu}$ are the masses of the neutron, proton,
and muon, respectively, whilst $E_{\mu}$, $p_{\mu}$, and $\theta_{\mu}$ are
the energy, momentum, and angle of the outgoing $\mu^{+}$ with respect to the
incoming antineutrino. This formula is accurate only in the case of an
interaction on a free proton (hydrogen interaction). Fig. 9 shows the energy
resolution with no detector smearing where there is a peak at zero
corresponding to a perfect resolution for hydrogen interactions surrounded by
a wide distribution due to the smearing caused by the Fermi motion. Detecting
the primary neutron of the antineutrino interaction allows us to better
estimate the antineutrino energy by using a calorimetric measure of the total
energy in the final state:
$E_{\bar{\nu}}^{\text{cal}}=E_{\mu}+E_{n}+(m_{p}-m_{n}),$ (3)
where $E_{\mu}$ and $E_{n}$ are the muon energy and neutron energy measured
with the spectrometer and ToF, respectively. As shown in Fig. 9, without
detector smearing, the calorimetric energy reconstruction has a better
resolution than the reconstruction without the neutron kinetic energy
measurement.
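Both estimators can be written down directly from Eqs. (2) and (3). The sketch below is illustrative: the PDG-approximate masses in MeV are assumed, and the functions take already-reconstructed kinematic quantities as inputs.

```python
import math

# Approximate PDG masses in MeV (assumed values).
M_N, M_P, M_MU = 939.565, 938.272, 105.658  # neutron, proton, muon

def e_nu_leptonic(e_mu, p_mu, cos_theta_mu):
    """Antineutrino energy from muon kinematics alone, Eq. (2)."""
    num = M_N**2 - M_P**2 - M_MU**2 + 2.0 * M_P * e_mu
    den = 2.0 * (M_P - e_mu + p_mu * cos_theta_mu)
    return num / den

def e_nu_calorimetric(e_mu, e_n):
    """Antineutrino energy from the total final-state energy, Eq. (3).

    e_n is the total neutron energy (mass + kinetic) measured via ToF.
    """
    return e_mu + e_n + (M_P - M_N)
```

For a true interaction on a free proton at rest the two estimators agree; on carbon, Fermi motion smears the leptonic estimate, which is why the calorimetric version benefits from the neutron measurement.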
Figure 9: Expected neutrino energy resolutions for the two reconstruction methods assuming no detector smearing. The binding energy ($E_{b}$) is accounted for in the reconstruction of C interactions.
For all the selected events from the reconstructed sample, the antineutrino
energy is reconstructed using the two formulas (2) and (3). The result of the
antineutrino resolution after applying a cut on $\delta p_{T}$ is given in
Fig. 10. It can be seen that both reconstruction methods give a very similar
result with an antineutrino energy resolution around 4.5%.
Figure 10: Obtained resolution on the interacting antineutrino energy with the two different formulas, for $\delta p_{T}<40$ MeV and lever arm $>10$ cm.
Furthermore, the impact on the neutrino energy resolution of the $\delta
p_{T}$ cut and of the detector time resolution is presented in Figs. 11-14.
Fig. 11 shows that for a time resolution of $0.5\text{\,}\mathrm{ns}$, the two
estimations of the neutrino energy have similar performance. Moreover, it can
be seen that imposing stricter $\delta p_{T}$ cuts allows for improving the
neutrino energy resolution. In addition, improving the time resolution results
in a better energy reconstruction using the calorimetric measurement as shown
in Figs. 12 and 14. The improvement in energy resolution with time resolution
is mainly noticeable for the calorimetric estimation of the energy, while it
remains limited with the leptonic-only estimation as shown in Fig. 13. The
time resolution directly impacts the uncertainty on the neutron time-of-flight
that is used to estimate its kinetic energy.
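The ToF-to-energy conversion behind this statement can be sketched as follows. The lever-arm and time values, and the simple finite-difference error propagation, are illustrative assumptions.

```python
import math

C = 0.299792458   # speed of light (m/ns)
M_N = 939.565     # neutron mass (MeV), approximate PDG value

def neutron_ke(lever_arm_m, tof_ns):
    """Relativistic neutron kinetic energy (MeV) from time of flight."""
    beta = lever_arm_m / (C * tof_ns)
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return M_N * (gamma - 1.0)

def ke_uncertainty(lever_arm_m, tof_ns, sigma_t_ns):
    """Propagate the timing resolution by a symmetric finite difference."""
    return abs(neutron_ke(lever_arm_m, tof_ns + sigma_t_ns)
               - neutron_ke(lever_arm_m, tof_ns - sigma_t_ns)) / 2.0
```

A shorter lever arm or a worse timing resolution inflates the kinetic-energy uncertainty, which is why the calorimetric estimator benefits most from sub-ns resolution.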
Figure 11: Evolution of the resolution on the reconstructed antineutrino energy as a function of the $\delta p_{T}$ cut for $0.5\text{\,}\mathrm{ns}$ time resolution, lever arm $>10$ cm.
Figure 12: Evolution of the resolution on the reconstructed antineutrino energy as a function of the $\delta p_{T}$ cut for $0.25\text{\,}\mathrm{ns}$ time resolution, lever arm $>10$ cm.
Figure 13: Evolution of the resolution on the reconstructed antineutrino energy as a function of the $\delta p_{T}$ cut for different time resolutions, for $E_{\bar{\nu}}^{\text{lep}}$.
Figure 14: Evolution of the resolution on the reconstructed antineutrino energy as a function of the $\delta p_{T}$ cut for different time resolutions, for $E_{\bar{\nu}}^{\text{cal}}$.
The energy resolution is not the only metric to assess the 3DST performance.
It is also necessary to check that the reconstructed neutrino energy spectrum
is not distorted, given that such a detector would be installed as a near
detector for a long-baseline neutrino oscillation experiment. Fig. 15 shows
that the energy reconstruction presented here does not distort the
reconstructed neutrino energy spectrum, and that selecting either H or C
interactions has no impact on the shape of the reconstructed spectrum, with
$\chi^{2}$-test p-values above 0.2 for both cases
($\chi^{2}/\mathrm{d.o.f.} = 58/50$).
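The shape comparison can be sketched as a Pearson $\chi^{2}$ between the reconstructed and true binned spectra. The shape-only normalization (scaling the true spectrum to the reconstructed integral and subtracting one degree of freedom) is an illustrative assumption; no p-value lookup is included.

```python
def shape_chi2(reco, true):
    """Pearson chi^2 and degrees of freedom between two binned spectra.

    The true spectrum is scaled to the reconstructed integral so that
    only the shape is compared (one d.o.f. subtracted for the
    normalization constraint).
    """
    scale = sum(reco) / sum(true)
    stat, ndf = 0.0, -1  # -1 accounts for the normalization constraint
    for r, t in zip(reco, true):
        t *= scale
        if t > 0:
            stat += (r - t) ** 2 / t
            ndf += 1
    return stat, ndf
```

A p-value is then obtained from the $\chi^{2}$ survival function for the returned number of degrees of freedom.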
Figure 15: Reconstructed neutrino spectra and difference with the true one,
lever arm $>$ 10 cm.
Finally, one can fully measure the benefit of neutron detection on the
neutrino energy resolution by comparing the obtained resolutions with and
without the detection of the neutrons. Without neutron detection, there is no
way to estimate $\delta p_{T}$. This is reflected in Fig. 16, where the
neutrino energy resolution obtained using $E_{\bar{\nu}}^{\text{lep}}$ without
any $\delta p_{T}$ selection is compared to the one obtained using the full
neutron information. It can be seen that the neutron detection capabilities of such a
fully. It can be seen that the neutron detection capabilities of such a
detector allow for a substantial improvement of the neutrino energy
measurement even for the $1\text{\,}\mathrm{n}\mathrm{s}$ conservative time
resolution. A significant part of this improvement can be found in the
suppression of the lower-end tail, corresponding to an underestimation of the
neutrino energy, mostly due to the availability of the $\delta p_{T}$
measurement.
Figure 16: Obtained resolution on the interacting antineutrino energy with and without the neutron information for a $1\text{\,}\mathrm{ns}$ time resolution. The resolution without neutron detection is represented by the leptonic energy reconstruction with no $\delta p_{T}$ cut, while the one with neutron detection uses the calorimetric energy reconstruction and $\delta p_{T}<40$ MeV.
## VI CC0$\pi$0p1n channel analysis with neutron
On top of the CC0$\pi$ selection from the previous section, the
$\bar{\nu}_{\mu}$ CC0$\pi$0p1n channel is further studied to constrain the
flux, given its relatively small cross-section and detection uncertainties.
Due to the neutron detection capability, the 3DST can constrain
$\bar{\nu}_{\mu}$ flux with the neutron, analogous to the current detectors
constraining ${\nu}_{\mu}$ flux with a proton in the final state. This section
describes the selection of the CC0$\pi$0p1n channel and further investigates
the $\bar{\nu}_{\mu}$ flux constraint.
### VI.1 Neutron selection
As mentioned in Section V, a CC0$\pi$0p sample can be selected; we denote it
the single-track sample. From the single-track sample we then select a
neutron sample, as described below. The neutron sample may contain three
kinds of backgrounds:
* •
External background: the first object comes from an external source, such as
the neutrons from a neutrino interaction outside of the detector fiducial
volume.
* •
Internal non-neutron background: the first object comes from the targeted
neutrino interaction, but it is not neutron-induced such as a $\pi^{0}$ from
the neutrino interaction.
* •
Internal neutron background: by design, the kinematics of only one neutron
can be measured. A “multi-neutron event” is defined as an event with more
than one neutron in the final state. In multi-neutron events, the other
neutrons are missed, causing a misreconstruction of the neutrino energy.
The external background can be reduced to 1% with cuts on the time difference
and distance between the neutrino interaction vertex and the neutron-induced
object. The purity as a function of the time difference and lever arm for
excluding the external background can be found in Fig. 137 in [1].
One of the main sources of the internal non-neutron background is delta rays
induced by the primary muon track. In order to reject them, objects inside a
cylindrical region with a radius of 4.25 cm surrounding the muon track are
removed, as shown in Fig. 4.
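The cylindrical veto around the muon track amounts to a point-to-line distance cut. The sketch below assumes a straight muon track described by a point and a unit direction vector; only the 4.25 cm radius is taken from the text.

```python
import math

MUON_VETO_RADIUS_CM = 4.25  # cylinder radius from the text

def distance_to_track(p, track_point, track_dir):
    """Perpendicular distance from point p to the straight line through
    track_point with unit direction track_dir (all coordinates in cm)."""
    d = [a - b for a, b in zip(p, track_point)]
    along = sum(a * b for a, b in zip(d, track_dir))
    perp = [a - along * b for a, b in zip(d, track_dir)]
    return math.sqrt(sum(c * c for c in perp))

def is_delta_ray_candidate(p, track_point, track_dir):
    """Objects inside the cylinder around the muon track are removed."""
    return distance_to_track(p, track_point, track_dir) < MUON_VETO_RADIUS_CM
```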
The other source of the internal non-neutron background is a $\pi^{0}$ from
the neutrino interaction vertex. There is also a small amount of deexcitation
photons in the neutrino interaction. To reduce these, the following cuts are
applied to the first object in time.
* •
ToF: negative ToF events are rejected to reduce misreconstructed events due to
the timing resolution of the detector.
* •
Energy deposit: the total energy deposit of the neutron-induced object tends
to be higher than others.
* •
Branch number: an object can induce small tracks that look like branches
attached to it. For a neutron-induced object, the branch number, defined as
the number of such small tracks, tends to be lower, since a neutron mainly
produces a visible single-track proton.
The distributions for those variables above and the value of the cuts can be
found in Appendix A. At this stage, the selected sample has a 90% purity of
neutron candidates with 49% efficiency. Additional selections are needed to
reduce the internal neutron background. Multi-neutron events can have a large
spread of isolated neutron-induced objects in the plane transverse to the
incident neutrino compared to single-neutron events, as shown in Fig. 17. In
order to reduce multi-neutron events, the angles and the distances between
adjacent objects in time are measured. The largest angle and distance among
the isolated objects are then taken as the “maximum angle” and “maximum
distance”, respectively. Figs. 18 and 19 show the distributions of the
maximum angle and distance. Events with values smaller than the cuts are
selected.
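These two discriminating variables can be sketched as below. The exact angle definition (here taken as the angle subtended at the interaction vertex between consecutive time-ordered objects, in the transverse plane) is an assumption for illustration.

```python
import math

def max_angle_and_distance(vertex, objects):
    """Largest vertex-subtended angle and largest separation between
    adjacent (time-ordered) isolated objects in the transverse plane.

    vertex: (x, y) of the interaction vertex; objects: time-ordered list
    of (x, y) object positions. Returns (max_angle_rad, max_distance).
    """
    max_ang, max_dist = 0.0, 0.0
    for a, b in zip(objects, objects[1:]):
        va = (a[0] - vertex[0], a[1] - vertex[1])
        vb = (b[0] - vertex[0], b[1] - vertex[1])
        cosang = ((va[0] * vb[0] + va[1] * vb[1])
                  / (math.hypot(*va) * math.hypot(*vb)))
        # clamp to the valid acos domain against rounding errors
        max_ang = max(max_ang, math.acos(max(-1.0, min(1.0, cosang))))
        max_dist = max(max_dist, math.hypot(a[0] - b[0], a[1] - b[1]))
    return max_ang, max_dist
```

Multi-neutron events tend to populate large values of both variables, so selecting events below the cuts suppresses them.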
Lastly, if the primary neutron first scatters in the detector without
depositing enough energy to be seen and then interacts again with a high
enough energy deposit, the first scattering is invisible. This is not a major
background, since the invisible scattering is mostly elastic and does not
change the neutron direction significantly.
Figure 17: The maximum angle in a two-neutron event. Each color represents a list of objects induced by one neutron, and the numbers follow the time order. Angles can be obtained between two adjacent objects, and the largest one is defined as the “maximum angle”. The “maximum distance” is defined analogously.
Figure 18: Maximum angle for various cases of neutron multiplicity, showing the separation of single-neutron and multi-neutron cases. The dashed line shows the selection cut.
Figure 19: Maximum distance for various cases of neutron multiplicity, showing the separation of single-neutron and multi-neutron cases. The dashed line shows the selection cut.
### VI.2 Efficiency and purity
With all the selection cuts presented in the previous section, a significant
background reduction is achieved.
Purity and efficiency
---
Cut | purity | efficiency
ToF (including threshold) | 0.48 | 0.70
energy deposit | 0.50 | 0.58
branch number | 0.53 | 0.54
max angle | 0.80 | 0.27
max distance | 0.81 | 0.23
Table 1: Purity and efficiency for each step of selection. The selections are
applied step by step to the sample. The purity is the number of signal samples
divided by the number of samples after the cuts, and the efficiency is the
number of samples after the cuts divided by the number of samples before the
cuts.
Table 1 shows the purity and efficiency of the sample at each selection step.
There is a significant reduction of efficiency by the energy-deposit cut. In
the neutron beam test, we found a non-negligible amount of electronic noise
and cross-talk light through the cube holes; the energy-deposit cut
efficiently removes almost all of it. In the end, the CC0$\pi$0p1n signal
sample has a purity of 81% and an efficiency of 23%.
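The per-step numbers in Table 1 follow from applying the cuts sequentially. The event representation below (a dict with an `is_signal` flag) is an illustrative assumption.

```python
def selection_report(events, cuts):
    """Apply cuts in order and report (name, purity, step efficiency).

    events: list of dicts with a bool 'is_signal' key.
    cuts: ordered list of (name, predicate) pairs.
    Purity = signal / selected after the cut; step efficiency =
    selected after the cut / selected before the cut, as in Table 1.
    """
    report, selected = [], list(events)
    for name, keep in cuts:
        before = len(selected)
        selected = [e for e in selected if keep(e)]
        n_sig = sum(e['is_signal'] for e in selected)
        purity = n_sig / len(selected) if selected else 0.0
        report.append((name, purity, len(selected) / before))
    return report
```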
Figure 20: Efficiency curves for each selection group: the single-track selection and the neutron selection.
Fig. 20 shows the efficiency as a function of $E_{\nu}$ for the CC0$\pi$0p
selection and the neutron selection. Note that each efficiency has a
different denominator. In this analysis, we assume the efficiency can be
measured well across the energy distribution, so its uncertainty is ignored.
### VI.3 CC0$\pi$0p1n fitting
With the selected CC0$\pi$0p1n sample, a sensitivity study is performed to
investigate the capability of constraining DUNE flux uncertainties. There are
256 parameters used to account for the various systematic uncertainties of the
flux, such as hadron production, beam focusing mode, horn alignment, etc [7].
A principal component analysis (PCA) is used to obtain the 1$\sigma$
uncertainty as a function of true Eν [22].
Figure 21: The absolute value of the flux systematic uncertainties. The largest component (hadron production) and the sum of the largest ten components are shown; this sum is used as the pre-fit uncertainty.
The largest ten components, covering 95% of the variance, are used in this
analysis; the largest component and the sum of the ten are shown in Fig. 21.
A $\chi^{2}$ fitting framework is developed for this study, with $\chi^{2}$
defined as
$\begin{split}\chi^{2}=(P-D)^{T}M_{cov}^{-1}(P-D)\\\
+\sum^{10}_{i=1}\frac{\left(f_{i,CV}-f_{i}\right)^{2}}{\sigma_{f_{i}}}+\frac{(f_{B,CV}-f_{B})^{2}}{\sigma_{f_{B}}}\\\
+\frac{(f_{e,CV}-f_{e})^{2}}{\sigma_{f_{e}}},\end{split}$ (4)
where $f$ denotes a pull term, with the subscripts $i$, $B$, and $e$
indicating flux, background, and energy scale, respectively. $D$ is the
CC0$\pi$0p1n fake-data sample, and $M_{cov}$ is a covariance matrix that
includes the statistical and cross-section uncertainties. $CV$ denotes the
central value, which is set to 0, and each $\sigma_{f}$ is set to 1. $P$ is
the predicted energy spectrum, reweightable by $w$ and the energy scale,
defined as
$\begin{split}P=(P_{0}\times w)\times(1+E_{scale}\times
f_{e}),~{}\text{and}\\\ w=\prod^{10}_{i=0}\bigg{(}1+f_{i}\times
syst_{i}\bigg{)}\text{.}\end{split}$ (5)
The $E_{scale}$ is the 1 $\sigma$ shift on the energy spectrum due to the
energy scale uncertainty. The $syst_{i}$ is the 1 $\sigma$ shift on the energy
spectrum for the $i^{\text{th}}$ flux PCA component. The $P_{0}$ is the
nominal predicted energy spectrum.
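Eq. (5) is a per-bin reweighting of the nominal spectrum. The sketch below assumes the flux shifts $syst_{i}$ and the energy-scale shift are supplied as per-bin fractional $1\sigma$ effects; the data layout is illustrative.

```python
def predicted_spectrum(p0, flux_pulls, flux_shifts, f_e, e_scale):
    """Reweighted prediction P of Eq. (5).

    p0:          nominal spectrum, one entry per energy bin
    flux_pulls:  pull value f_i for each flux PCA component
    flux_shifts: per-component list of per-bin fractional 1-sigma shifts
    f_e:         energy-scale pull
    e_scale:     per-bin fractional 1-sigma energy-scale shift
    """
    pred = []
    for b, nominal in enumerate(p0):
        # w = prod_i (1 + f_i * syst_i), evaluated bin by bin
        w = 1.0
        for f_i, shift in zip(flux_pulls, flux_shifts):
            w *= 1.0 + f_i * shift[b]
        pred.append(nominal * w * (1.0 + e_scale[b] * f_e))
    return pred
```

With all pulls at their central values (zero), the prediction reduces to the nominal spectrum, as Eq. (5) requires.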
In summary, the following systematic uncertainties are considered in the
fitting framework:
* •
DUNE flux systematic uncertainty;
* •
Cross section uncertainty: The GENIE Reweight package is used [23]. GENIE has
a list of cross-section parameters. All those parameters are varied
simultaneously 1,000 times to extract the integrated cross-section uncertainty
as a function of the neutrino energy. There is no correlation assumed among
the cross-section parameters. The correlations among neutrino energy bins are
embedded in the parameter variations according to GENIE. In addition, the bias
caused by the generators is considered. The default neutrino interaction
generator is GENIEv3; among the alternatives, GENIEv2 shows the largest
discrepancy from GENIEv3 [13]. The cross-section discrepancy can be up to
10%, depending on the neutrino energy; it is largest below 1 GeV and
decreases as the neutrino energy increases. The difference between the two
generators is taken as an additional cross-section modeling uncertainty;
* •
Background uncertainty: the background uncertainty is assumed to be 100%, and
it acts as an overall normalization shift.
* •
Energy scale: the neutrino energy is varied with smearings of neutron energy
and $\mu^{+}$ energy by 20% and 2%, respectively. The 1$\sigma$ from the
resulting Gaussian distribution of the neutrino energy is taken as the energy
scale uncertainty.
Figure 22: Fitting result for flux uncertainty of $\bar{\nu}_{\mu}$ flux in
RHC mode with CC0$\pi$0p1n sample. The ratio of the quadrature sum of the flux
uncertainty after and before the fit is presented.
The flux uncertainty constraint with the CC0$\pi$0p1n sample is presented by
comparing the post-fit and pre-fit flux uncertainties. The ratio of the
post-fit to pre-fit uncertainties is shown in Fig. 22. Statistics
corresponding to one year and seven years of run time are assumed.
In order to understand the impact of neutron ToF detection, we compare our
nominal result with a mock data set with 100% neutron energy uncertainty.
The overall post-fit to pre-fit ratio is around 0.85 with one year of run
time. The precise neutron energy measurement thus improves the flux
constraint significantly.
An additional test was performed to assess the fitter's ability to recover a
biased flux prediction. The mock data was tweaked by a $1\sigma$ bias on the
largest flux component. Given the prior pull on the tweaked dial, the fitter
recovers $0.6\sigma$ of the bias, with the remaining discrepancy absorbed by
the other systematic pulls. Without a prior pull on the tweaked dial, the
fitter fully recovers the bias.
Furthermore, the MINER$\nu$A experiment demonstrated an in-situ measurement of
the CC cross section in the NuMI beamline with the flux prediction obtained by
the low-$\nu$ method [24]. Following the same idea, the low-$\nu$ method can
be used with 3DST as well. More detail is discussed in Appendix B.
## VII Conclusion
We studied the 10-ton-scale 3D-projection scintillator tracker’s capability of
detecting neutron kinematics on an event-by-event basis with a full
reconstruction and a GeV-scale neutrino beam. The neutron detection precision
was presented in detail. An overall neutron energy resolution below 20% can
be achieved with sub-ns timing resolution in the detector. Furthermore, we
studied how neutron kinematic detection improves the neutrino energy
reconstruction. In particular, the antineutrino energy resolution can go down
to a few percent with a transverse momentum cut. In addition, we performed a
flux constraint study with the individual neutron selection. The neutron
selection purity, efficiency, and potential backgrounds were studied in
detail. With a year of exposure, the CC0$\pi$0p1n channel can reduce the flux
uncertainty by almost a factor of two.
Event-by-event neutron kinematic detection opens a new era of fully utilizing
the final-state particle information in neutrino interactions. The near
detector of the next-generation experiments will play a crucial role in
understanding the neutrino interaction and neutrino flux at an unprecedented
level. Following the neutron detection method in this paper, the next-
generation near detectors can use the transverse plane variables to deeply
study neutrino-nucleus interactions and constrain the flux. The detector
design in this paper is uniquely suited for measuring the transverse momentum
variables due to its fast timing, fine granularity, absence of passive
material, and low threshold. The T2K upgrade includes such a detector, and it will lead
the exploration of the GeV-scale neutrino interaction.
On top of the method in this paper, the target-independent neutrino flux
measurement can be completed with other complementary methods. For example, a
$\nu$-$e$ scattering measurement can provide a solid constraint on the
neutrino flux with various target materials, including carbon and liquid argon
[25]. Combining neutrino flux constraints in multiple ways can also have a
more significant impact. The constraint with the $\nu$-$e$ scattering method
is at a similar level as the CC0$\pi$0p1n method, as indicated in this paper.
The constraint from the CC0$\pi$0p1n sample benefits from a relatively small
modeling uncertainty owing to the simple event topology. In addition,
strictly requiring a neutron effectively selects a relatively low-$\delta
p_{T}$ sample, which is less affected by nuclear effects.
It is worth noting that our estimate of the CC0$\pi$0p1n systematic
uncertainty relies predominantly on the neutrino interaction models,
particularly the GENIE and Geant4 models. Improvements of such models will
improve the cross-section uncertainty estimate and make the result more
accurate. Lastly, the current flux uncertainty is limited by the knowledge of
hadron production. In the future, we expect more precise hadron production
measurements from the NA61/SHINE and EMPHATIC experiments [26, 27]. If we
assume a 50% tighter constraint from the upcoming hadron production
experiments, the post-fit to pre-fit ratio will be 0.8 to 0.9 throughout the
neutrino energy range.
###### Acknowledgements.
This work was supported by NRF grant funded by MSIT of Korea
(NRF-2022R1A2C1009686, NRF-2017R1A2B4004308). This work was supported in part
by the MHES (Russia) grant “Neutrino and astroparticle physics” No.
075-15-2020-778. We further acknowledge the support of the US Department of
Energy, Office of High Energy Physics.
## Appendix A Variable Distributions
The first object in time can be induced by either a neutron or other
particles. Depending on the inducing source, the variable distributions have
distinctive features, and the background can be reduced by a combination of
simple 1D cuts on each variable. There are two types of reconstructed
objects: clusters and tracks. Depending on the type, there are two
distributions of the total energy deposit for the first object. Events with
branch number $>0$, and events with energy deposit $<510$ MeV in the cluster
case or $<3600$ MeV in the track case, are rejected.
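A literal reading of these cuts can be sketched as below. The object fields, the units of the thresholds, and the interpretation that an object must pass all three cuts (ToF, branch number, energy deposit) are assumptions based on the text and on the sequential cuts of Table 1.

```python
# Type-dependent energy-deposit thresholds from the text (assumed MeV).
EDEP_MIN = {'cluster': 510.0, 'track': 3600.0}

def passes_first_object_cuts(obj):
    """obj: dict with 'tof' (ns), 'type' ('cluster' or 'track'),
    'branch_number' (int), and 'edep' of the first object in time.

    Rejects negative-ToF objects, objects with attached branches, and
    objects below the type-dependent energy-deposit threshold.
    """
    if obj['tof'] < 0.0:
        return False
    if obj['branch_number'] > 0:
        return False
    return obj['edep'] >= EDEP_MIN[obj['type']]
```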
Figure 23: Branch number attached to the first object in time. The object can induce particles that look like branches; the signal tends to have a lower branch number since neutrons interact less than other particles.
Figure 24: The total energy deposit of the first object in time in the cluster case. Neutron-induced clusters deposit larger energy.
Figure 25: The total energy deposit of the first object in time in the track case. Neutron-induced tracks deposit larger energy.
## Appendix B Low-$\nu$ Analysis
The low-$\nu$ method was proposed by Mishra [28] and has been used for
neutrino and antineutrino charged-current flux and cross-section measurements
in the MINER$\nu$A experiment [24]. The key feature of the low-$\nu$ method
is that the predicted cross section is approximately flat as a function of
energy for a sufficiently low cut on $\nu$, the energy transfer to the
nuclear system. Assuming perfect knowledge of the detection efficiency and
geometric acceptance, the shape of the low-$\nu$ sample energy spectrum is
then equal to the shape of the incoming neutrino flux. A good low-$\nu$
sample can therefore provide a correction and constraint on the neutrino flux.
On the other hand, the normalization in the low-$\nu$ region is rather
unclear. The usual experimental approach is to take an external measurement
of the high-energy absolute cross section and scale the low-$\nu$
cross-section normalization to it. This study does not take this
normalization into account.
The MINER$\nu$A experiment uses the calorimetric energy for the antineutrino
low-$\nu$ channel study, which may result in an underestimate of the neutrino
energy [24]. The 3DST is capable of obtaining information on each individual
particle in the final state, including the neutron. A different method of
calculating the energy transfer therefore illustrates the usefulness of
detecting individual neutron kinematics.
The CC inclusive cross section can be written as
$\begin{split}\frac{d\sigma}{d\nu}=\frac{G^{2}_{F}M}{\pi}\int^{1}_{0}\bigg{(}F_{2}-\frac{\nu}{E_{\nu}}[F_{2}\mp
xF_{3}]\\\
+\frac{\nu}{2E^{2}_{\nu}}\bigg{[}\frac{Mx(1-R_{L})}{1+R_{L}}F_{2}\bigg{]}\\\
+\frac{\nu^{2}}{2E^{2}_{\nu}}\bigg{[}\frac{F_{2}}{1+R_{L}}\mp
xF_{3}\bigg{]}\bigg{)}dx,\end{split}$ (6)
where $E_{\nu}$ is the neutrino energy, $\nu$ is the energy transfer to the
nuclear system, $M$ is the nucleon mass, and $G_{F}$ is the Fermi constant
[29]. The cross section is approximately constant as a function of $E_{\nu}$
if $\nu$ is small compared to $E_{\nu}$, as shown in Fig. 26.
Figure 26: Cross section shape as a function of $E_{\nu}$ with various $\nu$
selection cuts. The lower $\nu$ results in a flatter cross section.
With proper efficiency and acceptance correction, the utilization of low-$\nu$
events results in a rather stringent antineutrino flux shape constraint since
the measured neutrino spectrum shape directly reflects the flux shape.
The low-$\nu$ sample can be selected from the CC0$\pi$0p1n sample by
requiring a reconstructed $\nu<300$ MeV. The neutron's kinetic energy can be
used as the reconstructed $\nu$ since, in the CC0$\pi$0p1n channel, the
energy transfer to the nuclear system goes to the neutron, assuming that the
binding energy of the nucleus is negligible.
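The selection and its main failure mode can be sketched as follows; the 300 MeV cut is from the text, and using the ToF kinetic energy as the reconstructed $\nu$ is the stated approximation.

```python
LOW_NU_CUT = 300.0  # MeV, from the text

def is_low_nu(neutron_ke):
    """In CC0pi0p1n the reconstructed nu is the neutron kinetic energy."""
    return neutron_ke < LOW_NU_CUT

def is_high_nu_background(e_nu_true, e_mu_true, neutron_ke):
    """Event that passes the reconstructed low-nu cut but has true
    nu = E_nu - E_mu above the cut, e.g. because extra neutrons were
    missed in a multi-neutron final state."""
    return is_low_nu(neutron_ke) and (e_nu_true - e_mu_true) >= LOW_NU_CUT
```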
Figure 27: Some events can have a true $\nu$ larger than the low-$\nu$ cut (300 MeV); such events are defined as the high-$\nu$ background. The true $\nu$ is $E_{\nu}-E_{\mu}$, and the reconstructed $\nu$ is the neutron kinetic energy measured by the ToF technique. The region to the right of the dashed line shows the high-$\nu$ background.
The “high-$\nu$” background shown in Fig. 27 should be rejected, since it
would distort the desired flat cross section. The main source of the
high-$\nu$ background is events with multiple neutrons in the final state:
events with more than one neutron can satisfy the low-$\nu$ cut even though
they have a higher true $\nu$. The high-$\nu$ background can be reduced by
the selection described in Section VI.1. Table 2 shows the purity and
efficiency of the low-$\nu$ sample.
Purity and efficiency
---
Cut | purity | efficiency
ToF (including threshold) | 0.34 | 0.70
energy deposit | 0.35 | 0.57
branch number | 0.39 | 0.56
max angle | 0.64 | 0.30
max distance | 0.66 | 0.26
low-$\nu$ | 0.72 | 0.13
Table 2: Purity and efficiency for each step of selection. The signal is
CC0$\pi$0p1n low-$\nu$ events.
The same $\chi^{2}$ fitting framework in Section VI.3 is used in this
analysis. The $\sigma_{f_{B}}$ can be constrained from 100% to 85% by sideband
fitting. The high-$\nu$ backgrounds can be used as a sideband for the
low-$\nu$ sample. The flux uncertainty constraint with the low-$\nu$ method is
presented by comparing the post-fit and pre-fit flux uncertainties as shown in
Fig. 28.
Figure 28: Low-$\nu$ fitting result for the flux uncertainty with various samples, comparing the quadrature sum of the flux uncertainty before and after the fit. Shown are the results with and without the low-$\nu$ selection for one year and seven years of statistics, as well as the true CC0$\pi$0p1n low-$\nu$ sample with seven years of statistics.
One important note is that, according to Table 2, the low-$\nu$ cut removes
half of the statistics compared to the CC0$\pi$0p1n selection. This trade-off
leads to an insignificant improvement of the flux constraint from the
additional low-$\nu$ cut; Fig. 28 shows this trade-off effect. With the same
running time, the overall constraints from the selected low-$\nu$ sample and
the CC0$\pi$0p1n sample are similar. In the low-energy region, the selected
CC0$\pi$0p1n sample, even without a flat cross section, can also provide a
flux constraint due to its relatively small cross-section uncertainty.
Compared to the CC0$\pi$0p1n sample, the low-$\nu$ selection achieves an
additional constraint on high-energy neutrinos ($>3$ GeV). The overall flux
constraint with the low-$\nu$ selection is shown in Fig. 28. However, due to
the ineffectiveness of the low-$\nu$ method at low energy and the loss of
statistics, the constraint in the low-energy region from the low-$\nu$
sample is less significant than from the CC0$\pi$0p1n sample.
The low-$\nu$ method has a large model dependence, since the low-$\nu$ cross
section strongly depends on the modeling of the neutrino interaction.
Possible models include GiBUU, NEUT, NuWro, and GENIE; GENIEv3 is used to
model the interaction in this analysis. As reported in [13], the shape of the
$\bar{\nu}_{\mu}$-$\mathrm{C}_n\mathrm{H}_n$ cross section varies with the
choice of model; in particular, the GENIEv2 and GENIEv3 10a configurations
show the largest discrepancy at $1\text{ GeV}<E_{\nu}<3\text{ GeV}$. Thus,
the comparison of GENIEv2 and GENIEv3 is used to investigate the robustness
of the low-$\nu$ method for the flux constraint. The model uncertainty is
obtained by comparing the true CC0$\pi$0p $E_{\nu}$ cross section between the
two models, and it is included as a systematic uncertainty in the diagonal
terms of $M_{cov}$. As shown in [13], the low-$\nu$ method is sensitive to
potentially large and poorly known systematic uncertainties from the
neutrino-nucleus interaction model.
However, evaluating such systematics is not the goal of the present paper,
even though it is a crucial point the community must address to demonstrate
whether the low-$\nu$ method can be used reliably. Here we use the low-$\nu$
method only as an example to demonstrate the capability of the proposed
detector design.
## References
* Abed Abud _et al._ [2021] A. Abed Abud _et al._ (DUNE), Instruments 5, 31 (2021).
* Formaggio and Zeller [2012] J. A. Formaggio and G. P. Zeller, Rev. Mod. Phys. 84, 1307 (2012).
* Blondel _et al._ [2018] A. Blondel _et al._ , Journal of Instrumentation 13, P02006 (2018).
* [4] K. Abe _et al._ (T2K), arXiv:1901.03750 [physics.ins-det] .
* [5] S. Berns _et al._ , arXiv:2202.10961 [physics.ins-det] .
* Boyarintsev _et al._ [2021] A. Boyarintsev _et al._ , Journal of Instrumentation 16, P12010 (2021).
* [7] B. Abi _et al._ (DUNE), arXiv:2103.04797 [hep-ex] .
* Andreopoulos _et al._ [2010] C. Andreopoulos _et al._ , Nucl. Instrum. Meth. A614, 87 (2010).
* [9] I. Alekseev _et al._ , arXiv:2206.10507 [physics.ins-det] .
* Blondel _et al._ [2020] A. Blondel _et al._ , Journal of Instrumentation 15, P12003 (2020).
* [11] H. Budd _et al._ , arXiv:2207.02685 [physics.ins-det] .
* Munteanu _et al._ [2020a] L. Munteanu _et al._ , Phys. Rev. D 101, 092003 (2020a).
* [13] C. Wilkinson, S. Dolan, L. Pickering, and C. Wret, arXiv:2203.11821 [hep-ph] .
* [14] J. M. Franco-Patino, R. González-Jiménez, S. Dolan, M. B. Barbaro, J. A. Caballero, G. D. Megias, and J. M. Udias, arXiv:2207.02086 [nucl-th] .
* Agostinelli _et al._ [2003] S. Agostinelli _et al._ (GEANT4), Nucl. Instrum. Meth. A 506, 250 (2003).
* Palomino _et al._ [2019] J. Palomino, G. Yang, and C.-H. Jang (DUNE), PoS ICHEP2018, 869 (2019).
* [17] https://github.com/ClarkMcGrew/erep-sim.
* [18] https://github.com/ClarkMcGrew/CubeRecon.
* Lux _et al._ [2007] T. Lux _et al._ , Journal of Physics: Conference Series 65, 012018 (2007).
* Alekseev _et al._ [2022] I. Alekseev _et al._ , arXiv:2206.10507 [physics.ins-det] .
  * Munteanu _et al._ [2020b] L. Munteanu _et al._ , Phys. Rev. D 101, 092003 (2020b).
* Abi _et al._ [2020] B. Abi _et al._ (DUNE), (2020), arXiv:2002.03005 [hep-ex] .
* [23] C. Andreopoulos, C. Barry, S. Dytman, H. Gallagher, T. Golan, R. Hatcher, G. Perdue, and J. Yarba, arXiv:1510.05494 [hep-ph] .
* Devan _et al._ [2016] J. Devan _et al._ (The MINERvA Collaboration), Phys. Rev. D 94, 112007 (2016).
* Marshall _et al._ [2020] C. M. Marshall, K. S. McFarland, and C. Wilkinson, Phys. Rev. D 101, 032002 (2020).
* [26] S. Ilieva, arXiv:2011.00277 [hep-ex] .
* [27] T. Akaishi _et al._ , arXiv:1912.08841 [hep-ex] .
* [28] S. Mishra, in Proceedings of the Workshop on Hadron Structure Functions and Parton Distributions, edited by Geesaman, D. et al. (World Scientific, Singapore, 1990), p. 84.
* Devan [2015] J. D. Devan, Ph.D. thesis, Coll. William and Mary (2015).
# Spatio-Temporal Crop Aggregation for Video Representation Learning
Sepehr Sameni
Computer Vision Group, University of Bern
Simon Jenni
Adobe Research
Paolo Favaro
Computer Vision Group, University of Bern
###### Abstract
We propose Spatio-temporal Crop Aggregation for video representation LEarning
(SCALE), a novel method that enjoys high scalability at both training and
inference time. Our model builds long-range video features by learning from
sets of video clip-level features extracted with a pre-trained backbone. To
train the model, we propose a self-supervised objective consisting of masked
clip feature prediction. We apply sparsity to both the input, by extracting a
random set of video clips, and to the loss function, by only reconstructing
the sparse inputs. Moreover, we use dimensionality reduction by working in the
latent space of a pre-trained backbone applied to single video clips. The
video representation is then obtained by taking the ensemble of the
concatenation of embeddings of separate video clips with a video clip set
summarization token. These techniques make our method not only extremely
efficient to train, but also highly effective in transfer learning. We
demonstrate that our video representation yields state-of-the-art performance
with linear, non-linear, and $k$-NN probing on common action classification
datasets.
## 1 Introduction
Videos provide rich and detailed information about objects and their
activities. Their analysis is, however, made challenging not only by the
difference in the information provided across space and time but also by the
high dimensionality of the data [45, 29, 68]. While computational resources
are expected to scale over time, so are the demand for higher data resolution
(both in space and time) and the need to process data with even more
dimensions, such as videos of volumetric data [10]. Therefore, it is of
paramount importance to explore methods that drastically reduce the
computational requirements to process videos. Moreover, the annotation of
videos is an extremely costly and time-consuming burden that makes the use of
models pre-trained in a self-supervised manner essential [44].
Self-supervised learning (SSL) is a very popular technique to reduce the need
for annotation, because it can build useful representations from unlabeled
data through an artificial goal, also called a pseudo- or pretext task. These
representations can be evaluated either through $k$-nearest-neighbor or linear
probing [69, 70] or through fine-tuning (i.e., as the initialization
parameters of the trained model) [25, 54, 35] on a downstream task, where only
a small labeled dataset is available. More remarkably, SSL pre-trained models
can outperform models that were pre-trained on an annotated dataset [53, 4].
In the case of videos, SSL methods for video representation learning present
fundamental scalability challenges [54, 45, 29, 63]. A first major challenge
is that training models from scratch for any new pseudo-task is neither
feasible, sustainable, nor scalable. A more viable setting is one where data,
such as videos, is pre-processed only once via some pre-trained general-
purpose model (e.g., trained via self-supervised learning [18, 23, 26, 44, 7,
37, 42]), and the (compressed) representation is stored and used later for
other training purposes or retrieval. A fundamental question is whether we can
improve the performance of video representations by training a model on top of
pre-computed features. A second challenge is that even a single round of
training can be quite demanding. However, a lot of the video content is
redundant and sparsity could be used to reduce the computational cost [54].
Thus, to make the learning of video representations highly scalable, we
propose a method that works on four fronts:
1. 1.
Input Sparsity: Sparsity in the input to the model [54, 25, 3, 19, 1] is an
effective way to drastically reduce the computational load and memory
requirements, while taking advantage of the information redundancy in images
and videos [16]. Inspired by prior work, we extract a sparse set of clips,
instead of image patches or video tubelets, from a video. Each clip is then
fed separately to a neural network to obtain a video clip representation.
2. 2.
Output Sparsity: In recent MAE-based SSL methods for vision, the proposed
pseudo-tasks reconstruct the whole input [25, 54, 19, 16], not just the
(sparse) input of the model (also known as asymmetric decoding). Another
effective way to reduce computation and make training even more scalable is to
apply the reconstruction loss only to the input of the network (which is
already _sparse_). Prior to MAE [25], this was the common technique for
_dense_ inputs [69, 51, 59, 5].
3. 3.
Dimensionality Reduction: Similarly to prior work [69, 42, 13], instead of
directly processing the raw input data, we work in the latent space. This
allows us to further reduce the dimensionality of both input and output data.
4. 4.
Use of a Pre-trained Backbone: To reduce training time and further speed up
the processing per iteration, we exploit SSL pre-trained models. Our method
builds a video representation on top of a set of pre-trained features (one for
each video clip). It should be noted that a single feature captures very
limited information about the whole source video.
To integrate all these components, we propose a novel SSL method that we call
Spatio-temporal Crop Aggregation for video representation LEarning (SCALE).
Given an input video, SCALE extracts a random set of clips and produces an
embedding for each clip through a pre-trained backbone, which is kept frozen.
These initial embeddings are then augmented in two ways: 1) each one is
refined into a more discriminative feature and 2) the set of all embeddings is
summarized in a global feature. These global features can learn long-term
correlations in the whole video by aggregating the short-term information in
each clip embedding. The combination of the initial embeddings with their
refinement and the global feature is then used in an ensemble for applications
on new downstream tasks.
To train SCALE, we introduce two novel pseudo-tasks, which aim to improve the
discriminability of each video clip's embedding as well as to obtain a
global representation of the video. One task, which we call Masked Clip
Modeling (MCM), is the reconstruction of a video clip embedding as in masked
autoencoders [54]. Masked embeddings are combined with positional encodings so
that the model can (spatio-temporally) relate the missing input embeddings to
the other available embeddings (similarly to BERT in Natural Language
Processing [12]). A second task is to train the model to output a global
feature token (which we refer to as CLS, just for consistency with previous
works [7, 12, 14, 55, 9]) for a set of clips via contrastive learning, so that
the global feature is invariant to the chosen set of clips from the same
video, but can discriminate summary features of clips from other videos. Both
tasks are trained via contrastive losses.
To summarize our contributions, we propose SCALE, a novel and highly scalable
video representation method that
* •
is trained via novel pseudo-tasks on sets of video clips (in contrast to
existing methods that work only with pairs of clips [44, 18, 45] at a time);
* •
results in video feature representations with a significant performance
improvement in $k$-NN (retrieval), linear, and non-linear probing across a
wide range of datasets for action classification (UCF [48], HMDB [34], SSv2
[22], Kinetics400 [32]);
* •
achieves consistent transfer-learning performance improvements across different
state-of-the-art pre-trained backbones (in terms of architecture, scale, and
pre-training task);
* •
achieves SotA results in several action classification datasets (surprisingly,
our fine-tuned model even outperforms the fully fine-tuned SVT [44] on HMDB
[34]).
## 2 Related Work
Our approach relates to much prior work on (self-supervised) video
representation learning. In particular, SCALE relates to SSL approaches on
videos, methods that rely on multiple views of the data, and predictive
methods, where part of the data is predicted from another part.
Self-Supervised Video Representation Learning. Early SSL approaches on video
were based on pseudo tasks, _e.g_., the recognition of transformations of
video frame sequences [39, 6, 61]. More recently, popular methods developed on
images have been successfully translated to video, _e.g_., contrastive methods
[18, 23, 26], clustering-based methods [44, 7], or predictive approaches [37,
42]. Often, these approaches are tailored to video by including additional
learning signals, _e.g_., by combining contrastive methods with temporal
constraints [11] and pretext tasks [31], or by incorporating audio [40], or
optical flow [24]. These approaches typically learn representations with
limited temporal extent, which can serve as backbones for our approach.
Learning from Multiple Views. Many methods have explored the use of multiple
views (_e.g_., generated through space-time cropping) to represent and learn
from videos. For example, many SSL approaches rely on multiple views of the
data, _e.g_., in contrastive formulations [18, 44], or predictive learning
[45], where invariance to views is the goal. Other approaches aim to learn
from the relation between two views, _e.g_., by predicting overlap [67], the
relative distance [49], or cross-view feature prediction [66, 52] and
reconstruction [41]. Besides exploiting multiple views for SSL, some works
also propose general multi-view video models, _e.g_., by capturing and fusing
features at different spatiotemporal resolutions [65], by aggregating
information over longer time-spans [62, 47, 63, 57], or by selecting important
frames [21]. These approaches, however, are learned end-to-end, whereas we
propose to learn a global video representation of pre-trained features
extracted over multiple crops using self-supervision.
Predictive Learning. One of our proposed SSL objectives is a prediction of
features of space-time crops given other crop features of the video. This
approach is similar to masked token prediction as in BERT [12] and relates to
several other methods in the literature. Masked input reconstruction methods
have recently become popular on images [25] and successfully translated to
video [54, 16]. Other approaches formulate masked prediction tasks in the
learned feature space [69, 52, 13] or some fixed latent space [51]. Another
line of work considers directional predictions (_e.g_., into the future) often
formulated via contrastive predictive coding [42] applied to video [37, 36,
50, 64, 20]. These masked prediction tasks are typically formulated on a fixed
grid (_e.g_., at the level of patches or frames). In contrast, our formulation
is continuous, predicting features of arbitrarily sampled space-time crops.
## 3 Scalable Video Representation Learning
To describe SCALE, we first introduce some basic notation and functions that
will be used often in the formulation. In particular, we will define a general
notation for the contrastive loss, which we use to define all of our training
losses.
### 3.1 Notation
We use lower-case letters (e.g., $z$) to denote generic vectors and capital
letters (e.g., $Z$) to denote their sets. The expression $a \mathbin{\|} b$
denotes the concatenation of $a$ and $b$. We also skip writing the parameters
of networks (usually denoted by $\theta$) if their presence and role is clear
from the context. Throughout the description of the method we refer to the
training of neural networks, and, therefore, at each iteration of the training
we sample a minibatch of videos. All the equations in the next sections are
written for a single video in the minibatch. Although we do not explicitly
indicate it, all the contrastive losses also use the other videos in the
minibatch as negatives.
### 3.2 Contrastive Loss
InfoNCE is a powerful method for representation learning [42] that can be used
to maximize the mutual information between two variables. Because we use this
loss between different variables throughout our method, we introduce here a
unified notation. Let the paired sets $A$ and $B$ have $N$ elements each,
$A=\\{a^{i}\\}_{i=1}^{N}$ and $B=\\{b^{i}\\}_{i=1}^{N}$, where $a^{i}$ are
vectors of dimension $d_{A}$ and $b^{i}$ are vectors of dimension $d_{B}$. We
also introduce two Multi-Layer Perceptrons (MLPs), parameterized by
$\theta_{A}$ and $\theta_{B}$, to project these vectors onto a common space of
dimension $d$. After feeding the elements $a^{i}$ and $b^{i}$ to the MLPs and
normalizing them, we obtain
$\tilde{a}^{i}=\frac{\mbox{MLP}_{\theta_{A}}(a^{i})}{\lVert\mbox{MLP}_{\theta_{A}}(a^{i})\rVert}\quad\text{and}\quad\tilde{b}^{i}=\frac{\mbox{MLP}_{\theta_{B}}(b^{i})}{\lVert\mbox{MLP}_{\theta_{B}}(b^{i})\rVert},$
(1)
where $\|\cdot\|$ denotes the $L_{2}$ norm. We define the per-element loss
based on the relative similarity of $\tilde{a}^{i}$ and $\tilde{b}^{i}$, and
by using a temperature $\tau$
$\tilde{\ell^{i}}(A,B,\theta_{A},\theta_{B})=-\log\frac{\exp\left(\frac{\tilde{a}^{i}\cdot\tilde{b}^{i}}{\tau}\right)}{\displaystyle\sum_{j=1}^{N}\textstyle\exp\left(\frac{\tilde{a}^{i}\cdot\tilde{b}^{j}}{\tau}\right)}.$
(2)
We then make the loss symmetric [43] by
$\ell^{i}(A,B,\theta_{A},\theta_{B})=\tilde{\ell^{i}}(A,B)+\tilde{\ell^{i}}(B,A).$
(3)
Finally, we define the contrastive loss ${\cal
L}_{\text{cntr}}(A,B,\theta_{A},\theta_{B})$ as the mean of $\ell^{i}$
${\cal
L}_{\text{cntr}}(A,B,\theta_{A},\theta_{B})=\frac{1}{N}\sum_{i=1}^{N}{\ell^{i}}(A,B,\theta_{A},\theta_{B}).$
(4)
As mentioned earlier on, for simplicity, in the rest of the paper we will not
indicate the parameters of the MLPs, and simply write ${\cal
L}_{\text{cntr}}(A,B)$ or $\ell^{i}(A,B)$.
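The symmetric contrastive loss of Eqs. (1)-(4) can be sketched in a few lines of NumPy. For brevity, the two MLP projection heads are replaced here by single stand-in linear maps (`W_A`, `W_B`); this is an assumption of the sketch, not the architecture used in the paper:

```python
import numpy as np

def _normalized_proj(x, W):
    """Project and L2-normalize rows. The paper uses an MLP here; a single
    (stand-in) linear map keeps this sketch short."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(A, B, W_A, W_B, tau=0.1):
    """Mean of the per-element loss of Eq. (2), i.e. Eq. (4) without the
    symmetrization of Eq. (3)."""
    a = _normalized_proj(A, W_A)             # (N, d)
    b = _normalized_proj(B, W_B)             # (N, d)
    sim = a @ b.T / tau                      # (N, N) similarity logits
    log_Z = np.log(np.exp(sim).sum(axis=1))  # row-wise log partition function
    return float(np.mean(log_Z - np.diag(sim)))

def contrastive_loss(A, B, W_A, W_B, tau=0.1):
    """Symmetric contrastive loss of Eqs. (3)-(4)."""
    return info_nce(A, B, W_A, W_B, tau) + info_nce(B, A, W_B, W_A, tau)
```

During training, the negative terms in the denominator would additionally include the other videos of the minibatch, as noted in Sec. 3.1.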
Figure 1: Video Representation Learning with SCALE. For each video, SCALE
extracts two sets of video clips $V_{1}^{1},\dots,V_{K}^{1}$ and
$V_{1}^{2},\dots,V_{K}^{2}$. Each video clip is processed separately through a
frozen backbone $\text{E}_{\text{frozen}}$ and results in
encoded video clips $C_{1}^{1},\dots,C_{K}^{1}$ and
$C_{1}^{2},\dots,C_{K}^{2}$. Then, a random set of encodings in each set is
masked and reconstructed at the output of the predictor network (a transformer
neural network) ($\ell^{i}$). The predictor network is also fed a class token
CLS. The corresponding output token encodes a summary $\text{CLS}^{j}$ of the
$j$-th set of video clips. The objective for these summary tokens is to be
similar only when encoding video clips from the same video
($\mathcal{L}_{\text{SET}}$).
### 3.3 Training SCALE
In our method, we integrate $4$ principles to drastically reduce the
computational complexity of learning a video representation: Input sparsity,
output sparsity, dimensionality reduction, and use of a pre-trained backbone.
Moreover, we introduce $2$ pseudo-tasks to train the model. One task is based
on the (contrastive) reconstruction of a masked video clip given some context
video clips. The second task is to build a global representation that is
(contrastively) invariant to the set of input sampled video clips. The overall
training scheme of SCALE is illustrated in Figure 1.
Input sparsity. As a first step, rather than processing a whole video, we
collect a sparse set of short video clips from the same video. Given a video
$V\in\mathbb{R}^{H\times W\times T}$, where $H$, $W$ and $T$ are the height,
width and duration (in frames) of the video, we sample $2K$ clips. We divide
the clips into two sets randomly. Each clip in the first set
$V^{1}_{i}\in\mathbb{R}^{H_{i}^{1}\times W_{i}^{1}\times T_{i}^{1}}$, with
$i=1,\dots,K$, is obtained at the spatio-temporal location
$X_{i}^{1},Y_{i}^{1},Q_{i}^{1}$ with different data augmentations and
dimensions $H_{i}^{1}$, $W_{i}^{1}$ and $T_{i}^{1}$. Similarly, we denote the
second set of clips $V^{2}_{i}$, for $i=1,\dots,K$. We also normalize their
coordinates relative to the video dimensions and embed them onto a feature
vector $P_{i}^{j}$ by feeding them to a learnable MLP. We denote these
embeddings
$\displaystyle\textstyle
P^{j}_{i}=\mbox{MLP}\left(\left[\frac{X^{j}_{i}}{H},\frac{Y^{j}_{i}}{W},\frac{Q^{j}_{i}}{T},\frac{X^{j}_{i}+H^{j}_{i}}{H},\frac{Y^{j}_{i}+W^{j}_{i}}{W},\frac{Q^{j}_{i}+T^{j}_{i}}{T}\right]^{\top}\right),$
(5)
where $j=1,2$ and $i=1,\dots,K$.
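The coordinate normalization of Eq. (5) can be sketched as follows; the two-layer ReLU MLP and its widths are illustrative assumptions, since the paper only specifies a learnable MLP:

```python
import numpy as np

def crop_position_features(x, y, q, h, w, t, H, W, T):
    """Normalized 6-D crop coordinates of Eq. (5), before the learnable MLP.
    (x, y, q) is the crop's spatio-temporal origin, (h, w, t) its size, and
    (H, W, T) the full video dimensions."""
    return np.array([x / H, y / W, q / T,
                     (x + h) / H, (y + w) / W, (q + t) / T])

def embed_positions(coords, W1, W2):
    """Stand-in two-layer MLP (ReLU) producing the embedding P of Eq. (5);
    depth and widths are assumptions of this sketch."""
    return np.maximum(coords @ W1, 0.0) @ W2
```

For a crop fully contained in the video, all six normalized coordinates lie in $[0,1]$ regardless of the clip's resolution or duration.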
Dimensionality reduction and pre-trained backbone. To reduce the
dimensionality of each video clip, we feed them independently to a frozen
encoder $\text{E}_{\text{frozen}}$, to obtain the encodings
$C^{j}_{i}=\text{E}_{\text{frozen}}(V^{j}_{i})$, where $j=1,2$
and $i=1,\dots,K$. Our framework is encoder agnostic and thus can work with
encoders obtained through different training schemes (supervised, contrastive,
or autoencoder). In addition to reducing the dimensionality, we make the
training even more scalable by using pre-trained and frozen encoders. Note,
however, that if performance is the main goal, it is also possible to train a
sparse backbone end-to-end with multiple clips. Thanks to token dropping, one
can drop up to 95% of the tokens [19] and still build a good representation.
Output sparsity. As a self-supervised signal for our video representation
learning, we use a (contrastive) reconstruction loss. To reduce the
computational cost, instead of predicting the features for the whole video
[59, 19, 54, 16] (asymmetric decoding), we only reconstruct a _sparse_ set of
masked clips. Our reconstruction objective is based on the observation that
video signals carry a lot of redundancy. Hence, we introduce a model, the
_predictor network_ , to predict masked video clip embeddings given the other
video clip embeddings (the context). We follow the general approach of BERT
[12] but implement the predictor network as a masked autoencoder, where the
reconstruction is based on a contrastive loss. The loss is applied only to a
sparse set $M^{1}\subset\\{1,\ldots,K\\}$ of masked video clips. These clips
are replaced by a learned MSK token. All embeddings $C^{1}_{i}$, including the
masked ones, are added to their corresponding position encoding $P^{1}_{i}$
and are then fed to the predictor network. We also include an additional
learnable CLS token as input for the predictor network, which will be used for
tasks with multiple video clips. We denote the outputs of the predictor
network as $\hat{C}^{1}_{i}$ for the tokens corresponding to $C^{1}_{i}$, and
as $\text{CLS}^{1}$ for the token corresponding to CLS. Similarly, we feed as
inputs separately from the previous set all the video clips $C^{2}_{i}$ with
their corresponding positional embeddings $P^{2}_{i}$, and the same CLS token,
and obtain $\hat{C}^{2}_{i}$ and $\text{CLS}^{2}$ respectively (see Figure 1
for a visual depiction of these processing steps).
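The assembly of the predictor network's input for one view (MSK replacement, positional encodings, prepended CLS token) can be sketched as:

```python
import numpy as np

def build_predictor_input(C, P, mask_idx, msk_token, cls_token):
    """Assemble the predictor input for one set of K clips: replace masked
    clip embeddings with the MSK token, add positional encodings, and
    prepend the CLS token. In a real implementation, msk_token and
    cls_token would be learnable parameters, while C and P come from the
    frozen backbone and the positional MLP, respectively."""
    X = C.copy()
    X[mask_idx] = msk_token   # masked clips lose their content...
    X = X + P                 # ...but keep their positional encoding
    return np.concatenate([cls_token[None, :], X], axis=0)  # (K + 1, d)
```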
Contrastive reconstruction. Modeling all the details of a masked clip, even in
the latent space and even given the redundancy in videos, is a demanding task.
Rather than increasing the capacity of our model, since we are aiming for
scalability, we keep our predictor network a (relatively) shallow network and
use contrastive learning [42]. With contrastive learning, the predicted
representation of a masked token only needs to be closer to the original
unmasked clip representation (after an MLP projection) than to all other
clips from the same video and the rest of the minibatch. Note that the rest of
the clips in the same video act as hard negatives in contrastive learning
[46]. Also, since we are using a frozen backbone, we can afford to use large
minibatch sizes, which is known to be beneficial for contrastive learning [9].
We call this contrastive reconstruction loss the Masked Clip Modeling (MCM)
loss
$\mathcal{L}_{\text{MCM}}=\sum_{i\in
M^{1}}\ell^{i}(\hat{C}^{1},C^{1})+\sum_{j\in
M^{2}}\ell^{j}(\hat{C}^{2},C^{2}).$ (6)
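A minimal sketch of the MCM loss of Eq. (6) for a single view; it assumes the predictor outputs and targets have already been MLP-projected and $L_2$-normalized as in Eq. (1):

```python
import numpy as np

def _per_element(a, b, tau):
    """Row-wise loss of Eq. (2) for already-normalized inputs."""
    sim = a @ b.T / tau
    return np.log(np.exp(sim).sum(axis=1)) - np.diag(sim)

def mcm_loss(C_hat, C, masked, tau=0.1):
    """MCM loss of Eq. (6) for one view: the symmetric per-element term of
    Eq. (3) is summed over the masked indices only (output sparsity)."""
    l = _per_element(C_hat, C, tau) + _per_element(C, C_hat, tau)
    return float(l[masked].sum())
```

Restricting the sum to the masked indices is what makes the loss itself sparse, in contrast to asymmetric decoding of the whole input.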
Multiple video clips loss. The predictor network outputs highly discriminative
features for each video clip. This task is similar to that of a masked
autoencoder [25] and yields an enhanced per-clip representation. However, many
video tasks require a global representation of the whole video; to this end,
we introduce an alternative pseudo-task that captures a more global
representation of a set of video clips. Our task takes inspiration from
contrastive learning methods used in SSL, which yield representations that
perform well with linear probing [8]. The loss aims to make the
$\text{CLS}^{1}$ and $\text{CLS}^{2}$ tokens returned from the predictor
network more similar (recall that these two tokens are obtained from two
separate groups of video clips extracted from the same video) than to other
class tokens from other videos within the minibatch
$\mathcal{L}_{\text{SET}}={\cal
L}_{\text{cntr}}\left(\text{CLS}^{1},\text{CLS}^{2}\right).$ (7)
In addition to our contrastive loss (InfoNCE), one could use clustering [7] or
regression [23] losses. We chose InfoNCE for simplicity and for better
compatibility between the losses. As the overall loss, we use the unweighted
sum of both loss terms
$\mathcal{L}=\mathcal{L}_{\text{MCM}}+\mathcal{L}_{\text{SET}}.$ (8)
## 4 Experiments
We evaluate SCALE on several commonly used action classification datasets for
video representation learning. As our performance metrics, we mostly use
linear and non-linear probing [27]; for the smaller datasets we also use a
$k$-NN classifier (which is training-free). We demonstrate that the proposed
method improves upon both unsupervised and supervised backbones.
### 4.1 Experimental Setup and Protocols
Datasets: Following prior work [44, 18, 45] we use Kinetics-400 [32], UCF-101
[48] (split 1), HMDB-51 [34] (split 1), and Something-Something v2 (SSv2) [22]
to train and evaluate our models.
Pretrained backbones: We use the pretrained checkpoints of $\rho$BYOL [18],
SVT [44], and three variants of VideoMAE [54] (base (B), large (L), and
fine-tuned base (FT)). We choose $\rho$BYOL for its excellent linear
performance, SVT for its use of ViT [14], and VMAE to show 1) the
applicability of our proposed method to MAE models, 2) the scalability of our
method to larger models, and 3) the possibility of using supervised fine-tuned
models as our backbone. All the models are pretrained with self-supervision on
Kinetics-400, except the fine-tuned VMAE base, which was additionally
fine-tuned with supervision on Kinetics-400.
Self-supervised Training: For training data, we extract 16 clips of 16 frames
from each video per dataset and save their feature encodings to disk. We use
PySlowFast’s common data augmentations for that [15]. For evaluation, we
follow the 5x3 scheme [17] (uniformly sampling 5 clips from a video along its
temporal axis and then taking 3 spatial crops) and extract 15 clips from each
video (except for SSv2, where we extract 2x3 clips [58]). As the architecture
for the predictor network, we use an encoder-only Transformer [56] and a
three-layer MLP (without batch normalization [30]) for the contrastive heads.
Unless stated otherwise, we train our models for 500 epochs with a batch size
of 512 and use all 16 clips. We use Adam [33] with cosine annealing learning
rate schedule [38] for optimization.
Evaluation: Since our focus is on efficient and scalable video classification,
we always freeze the backbones in our evaluation (as in our self-supervised
pretraining) and either train a linear classifier [44, 18] or fine-tune the
predictor network (the transformer) with an additional linear head. Therefore,
when we refer to fine-tuning (ft), we _only_ adapt the non-linear head
(_e.g_., predictor network) but _not the backbone_. We apply a grid search for
the hyper-parameters of the heads covering learning rate, weight decay, batch
size, and optimizer type. Similar to MAE [25], we found that applying a batch
normalization layer [30] without affine transformations is beneficial for
VideoMAE models. As the linear baseline, we consider the well-established
ensembling approach, _i.e_., we average the softmax predictions of the 15
clips (6 for SSv2) to obtain the final prediction.
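The softmax-ensembling baseline described above can be sketched as:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(clip_logits):
    """Average the per-clip softmax predictions (15 clips, or 6 for SSv2)
    and return the arg-max class."""
    return int(np.argmax(softmax(clip_logits).mean(axis=0)))
```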
For models that process multiple clips at once (like ours), we likewise apply
a linear softmax head on the concatenation of the clip features and the
summary token before averaging to obtain the final prediction. For the smaller
datasets, we also use $k$-NN classification, where, similar to DINO [7], we
always use $k=20$ and work with $L_2$-normalized representations.
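The $k$-NN probe can be sketched as below; note that DINO additionally weights votes by similarity, whereas this sketch uses a plain majority vote for simplicity:

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=20):
    """k-NN probe on L2-normalized features (k = 20 as in DINO and our
    setup): cosine similarity against all training clips, then a majority
    vote among the k nearest neighbors."""
    tf = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    nearest = np.argsort(-(tf @ q))[:k]       # indices of top-k neighbors
    votes = np.bincount(train_labels[nearest])
    return int(np.argmax(votes))
```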
Non-linear baselines: Since SCALE is a non-linear model, we consider an MLP on
top of the frozen backbone as a non-linear baseline (again ensembling
predictions over clips). As a further baseline and to illustrate the effect of
our self-supervised pre-training, we consider a Transformer trained on all the
clip representations. This Transformer uses the exact same architecture as
SCALE, and only differs in the initialization: in the case of SCALE we start
from our proposed SSL pre-trained weights instead of random initialization.
### 4.2 Results
SSv2: Multiple classes in SSv2 share similar backgrounds and only differ in
motion [28], suggesting that high performance on this dataset demonstrates
that the model has captured strong motion-related contextual cues [44].
Results in Table 1 show that we outperform the state of the art. On this
dataset, we see a huge performance gap between models that process single
clips at a time (Linear and MLP) and the models that work with multiple clips
(SCALE and Transformer). We can also see that SCALE ${}_{\text{linear}}$
outperforms the MLP, which shows that the self-supervised task is well aligned
with motion understanding and longer-range temporal understanding of the video.
Table 1: SSv2 Results. Linear and non-linear probing accuracies on SSv2 [22].
We see that both SCALE ${}_{\text{linear}}$ and SCALE ${}_{\text{ft}}$
outperform other methods and improve the classification accuracies by a large
margin. We also see that SCALE ${}_{\text{ft}}$, with its better
initialization, always outperforms the Transformer.
| SVT | $\rho$BYOL | VMAE${}^{\text{B}}$ | VMAE${}^{\text{L}}$ | VMAE${}^{\text{B}}_{\text{ft}}$
---|---|---|---|---|---
Linear | $20.30$ | $25.30$ | $18.31$ | $27.94$ | $28.90$
SCALE ${}_{\text{linear}}$ | $25.26$ | $27.16$ | $21.24$ | $30.18$ | $33.25$
MLP | $21.43$ | $26.46$ | $19.42$ | $27.96$ | $29.83$
Transformer | $29.24$ | $30.99$ | $24.26$ | $34.39$ | $35.60$
SCALE ${}_{\text{ft}}$ | $29.68$ | $31.83$ | $25.25$ | $36.34$ | $37.38$
UCF-101 & HMDB-51: For these smaller datasets, besides linear and non-linear
probing, we also use $k$-NN probing (see Table 2 and Table 3). With SCALE
${}_{k\text{-NN}}$ we see a consistent improvement over the baseline, and,
interestingly, we find that pretrained MAE-based models benefit a lot from our
representation. This is not surprising, since VideoMAE was trained to be
variant (different inputs lead to different outputs), while during SCALE
training it was pushed to become invariant (via the SET loss term). Across
the board, we also see that for linear probing SCALE ${}_{\text{linear}}$ not
only outperforms Linear, but also outperforms the Transformer, which has many
more parameters. In the case of SVT, our SCALE ${}_{\text{linear}}$ also
surpasses the best reported linear accuracy on UCF-101 (92.7% vs. 92.6% [45]).
Finally, our SCALE ${}_{\text{ft}}$ always performs well, achieves better
results than all the other methods, and even outperforms fully fine-tuned SVT
(68.1% vs. 67.2%).
Table 2: UCF Results. Linear and non-linear probing accuracies on UCF-101
[48]. SCALE ${}_{\text{ft}}$ is outperforming all the other models and in the
case of $\rho$BYOL even gets performance close to a fully fine-tuned model.
Also, in most cases SCALE ${}_{\text{linear}}$ outperforms the fine-tuned
Transformer and achieves state-of-the-art results in linear probing (previous
SotA using RGB frames was 92.6 [45]). We also see a large accuracy improvement
in $k$-NN probing, especially for pretrained MAE-based models. As a point of
reference, the current best fully fine-tuned accuracy (which is not comparable
with our setting) is 96.8% [57].
| SVT | $\rho$BYOL | VMAE${}^{\text{B}}$ | VMAE${}^{\text{L}}$ | VMAE${}^{\text{B}}_{\text{ft}}$
---|---|---|---|---|---
$k\text{-NN}$ | $87.20$ | $85.19$ | $35.05$ | $49.14$ | $96.82$
SCALE ${}_{k\text{-NN}}$ | $89.00$ | $83.47$ | $65.63$ | $76.02$ | $97.38$
Linear | $91.27$ | $89.55$ | $66.53$ | $84.53$ | $97.91$
SCALE ${}_{\text{linear}}$ | $92.65$ | $91.43$ | $74.46$ | $86.78$ | $98.14$
MLP | $91.17$ | $93.60$ | $71.97$ | $87.04$ | $98.04$
Transformer | $92.20$ | $94.34$ | $68.22$ | $86.30$ | $98.04$
SCALE ${}_{\text{ft}}$ | $92.94$ | $95.00$ | $76.07$ | $89.92$ | $98.46$
FT${}_{\text{reported}}$ | $93.7$ | $95.4$ | $96.1$ | - | -
Table 3: HMDB Results. Linear and non-linear probing accuracies on HMDB-51
[34]. Despite the small size of the dataset, we see that SCALE
${}_{\text{ft}}$ is outperforming all the other methods, and in the case of
SVT, it even outperforms the fully fine-tuned model (As a point of reference,
the best fully fine-tuned model achieves 75.9% [57]). We also see that SCALE
${}_{\text{linear}}$ is mostly better than the Transformer while having only a
single linear layer (the best linear accuracy in the literature is 66.7%
[45]). Similar to the UCF results, we see a huge boost in the performance of
the $k$-NN classifier for pretrained MAE-based models.
| SVT | $\rho$BYOL | VMAE${}^{\text{B}}$ | VMAE${}^{\text{L}}$ | VMAE${}^{\text{B}}_{\text{ft}}$
---|---|---|---|---|---
$k\text{-NN}$ | $51.83$ | $49.67$ | $21.96$ | $29.21$ | $72.81$
SCALE ${}_{k\text{-NN}}$ | $56.01$ | $51.56$ | $37.18$ | $51.30$ | $71.83$
Linear | $63.07$ | $61.17$ | $45.22$ | $60.26$ | $76.33$
SCALE ${}_{\text{linear}}$ | $66.33$ | $63.92$ | $52.15$ | $62.35$ | $78.36$
MLP | $63.00$ | $64.77$ | $49.01$ | $62.61$ | $77.45$
Transformer | $63.98$ | $66.16$ | $47.32$ | $61.50$ | $76.86$
SCALE ${}_{\text{ft}}$ | $68.10$ | $66.79$ | $51.89$ | $64.83$ | $79.34$
FT${}_{\text{reported}}$ | $67.2$ | $73.6$ | $73.3$ | - | -
Kinetics-400: We present our main results on Kinetics-400 [32] in Table 4. Our
SCALE ${}_{\text{linear}}$ with SVT backbone beats the previous state of the
art (71.8% vs. 71.5% [18]) and SCALE ${}_{\text{ft}}$ can even improve the
accuracy of VMAE${}^{\text{B}}_{\text{ft}}$, which is a strong supervised
model, from 81.5% to 81.84%.
Table 4: Kinetics-400 Results. Linear and non-linear probing accuracies on
Kinetics-400 [32] without any extra data and using RGB frames only. Even
though SCALE ${}_{\text{linear}}$ is only on par with Linear, for the
non-linear models we see a clear boost from SCALE ${}_{\text{ft}}$. Note that the
best linear accuracy on this dataset (without any extra data) is 71.5 [18] and
the best full fine-tuning accuracy is 86.7 [60].
| SVT | $\rho$BYOL | VMAE${}^{\text{B}}$ | VMAE${}^{\text{L}}$ | VMAE${}^{\text{B}}_{\text{ft}}$
---|---|---|---|---|---
Linear | $71.71$ | $68.82$ | $43.50$ | $60.73$ | $81.52$
SCALE ${}_{\text{linear}}$ | $71.78$ | $68.38$ | $43.96$ | $60.66$ | $81.44$
MLP | $71.19$ | $69.42$ | $45.48$ | $61.64$ | $81.27$
Transformer | $72.18$ | $69.28$ | $44.85$ | $62.15$ | $81.70$
SCALE ${}_{\text{ft}}$ | $72.38$ | $69.63$ | $46.15$ | $62.67$ | $81.84$
Following the evaluation setup of self-supervised image representations [8, 7,
69], we also introduce low-shot K400 video classification by sampling 10
percent of the videos (in a class-balanced way) and training the probes only
on those. We still test on the whole evaluation set of K400. This low-shot
setting is more aligned with the actual usage of self-supervised models in
which there is an abundant amount of unlabeled data for training via self-
supervision and a small set of labeled data for fine-tuning. Results in Table
5 show that our method is particularly effective in this low-shot setting.
While most other non-linear probes overfit and perform worse than the linear
probes, our SCALE ${}_{\text{ft}}$ does not overfit and clearly outperforms
the baselines.
Table 5: Kinetics-400 Low-shot Results. Linear and non-linear probing
accuracies on 10% of Kinetics-400 [32]. SCALE is more robust to the size of
the labeled dataset. SCALE ${}_{\text{ft}}$ does not overfit like the other
non-linear probes (MLP and Transformer) and outperforms the baselines.
| SVT | $\rho$BYOL | VMAE${}^{\text{B}}$ | VMAE${}^{\text{L}}$ | VMAE${}^{\text{B}}_{\text{ft}}$
---|---|---|---|---|---
Linear | $66.43$ | $56.43$ | $31.25$ | $48.42$ | $79.79$
SCALE ${}_{\text{linear}}$ | $65.96$ | $57.74$ | $34.03$ | $49.21$ | $79.94$
MLP | $65.44$ | $58.68$ | $30.47$ | $48.27$ | $79.37$
Transformer | $64.97$ | $58.95$ | $29.89$ | $48.27$ | $79.47$
SCALE ${}_{\text{ft}}$ | $67.01$ | $59.52$ | $33.92$ | $50.36$ | $80.47$
### 4.3 Ablations
In this section, we start from a baseline setup consisting of a two-layer
transformer with a hidden size of 256, 20% chance of masking clips, trained
with a batch size of 512 for 200 epochs, and using two sets of 8 views for
representation learning. Using SCALE ${}_{\text{ft}}$, we explore different
loss functions, masking ratios, number of layers, and finally, the number of
views during training and testing. All experiments are performed on UCF and
HMDB.
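The per-clip masking in this baseline (each clip embedding masked with 20% probability) could be sketched as follows; the all-zeros mask vector is our assumption, and a real implementation may use a learned mask token instead:

```python
import random

def mask_clips(clip_embs, mask_prob=0.2, mask_value=0.0, seed=0):
    """Independently replace each clip embedding by a mask vector with
    probability `mask_prob`; return the masked sequence plus the masked
    positions, which a masked-modeling objective would reconstruct."""
    rng = random.Random(seed)
    masked, positions = [], []
    for i, emb in enumerate(clip_embs):
        if rng.random() < mask_prob:
            masked.append([mask_value] * len(emb))
            positions.append(i)
        else:
            masked.append(list(emb))
    return masked, positions
```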
Table 6: Loss Function. SCALE ${}_{\text{ft}}$ accuracy with different loss
function combinations (the masking ratio here is 20%). We can see that having
MCM is always beneficial, and the SET loss is also almost always helpful. We
use both loss terms for our final model.
| | UCF-101 | HMDB-51
---|---|---|---
SET | MCM | SVT | $\rho$BYOL | SVT | $\rho$BYOL
✓ | ✗ | 91.80 | 92.20 | 64.50 | 63.59
✗ | ✓ | 92.01 | 93.81 | 62.81 | 64.05
✓ | ✓ | 93.20 | 92.99 | 64.57 | 65.61
Table 7: Masking Ratio. SCALE ${}_{\text{ft}}$ accuracy with different masking
ratios. We observe best results around 25% similar to NLP models [12] (15%),
and different from low-level video models like VideoMAE [54] (90%).
| UCF-101 | HMDB-51
---|---|---
Masking Ratio | SVT | $\rho$BYOL | SVT | $\rho$BYOL
0.15 | 93.18 | 93.25 | 64.37 | 64.83
0.25 | 93.20 | 93.81 | 65.49 | 65.62
0.35 | 93.15 | 93.06 | 64.18 | 65.22
0.45 | 92.96 | 93.02 | 63.39 | 64.96
Loss Function: As explained in the method section, we have two loss terms,
each of which can be enabled or disabled during pretraining. Table 6 shows
that using both loss terms is better than using either one individually.
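As a sketch of how the two terms combine (our illustration; `info_nce` stands in for the contrastive part, and the exact formulation and weighting in SCALE may differ):

```python
import math

def info_nce(pred, targets, pos_idx, tau=0.1):
    """InfoNCE-style loss: `pred` should match targets[pos_idx] among all
    candidates in `targets` (cosine similarity with temperature `tau`)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(a):
        m = math.sqrt(dot(a, a))
        return [x / m for x in a]
    p = norm(pred)
    sims = [dot(p, norm(t)) / tau for t in targets]
    m = max(sims)  # stabilized log-sum-exp
    log_z = m + math.log(sum(math.exp(s - m) for s in sims))
    return log_z - sims[pos_idx]

def total_loss(set_loss, mcm_loss, use_set=True, use_mcm=True):
    """Enable/disable the two loss terms, as in the Table 6 ablation."""
    return (set_loss if use_set else 0.0) + (mcm_loss if use_mcm else 0.0)
```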
Masking Ratio: The masking ratio is an important hyperparameter that depends
on the data modality: for example, BERT [12] uses 15%, MSN [3] uses 30% (for
ViT-Base), MAE [25] uses 75%, and VideoMAE [54] uses 90 to 95% masking. Since
our clip representations are relatively abstract representations of the video,
we expect the optimal masking ratio to be closer to that of NLP models than of
video MAEs. We observed a steady decrease in the pretraining task’s
performance with higher masking ratios, so we only tested low masking ratios
in Table 7 and found that 25% is optimal.
Table 8: Transformer Capacity. SCALE ${}_{\text{ft}}$ accuracy with different
model capacities. More than one transformer layer and a sufficiently large
hidden size are necessary for good performance.
Hidden | Num | UCF-101 | HMDB-51
---|---|---|---
Dim | Layers | SVT | $\rho$BYOL | SVT | $\rho$BYOL
64 | 1 | - | 92.62 | - | 63.16
128 | 1 | - | 92.83 | - | 63.68
256 | 1 | - | 92.86 | - | 64.39
128 | 2 | 92.78 | 93.52 | 63.26 | 65.55
256 | 2 | 93.20 | 93.81 | 65.49 | 65.62
512 | 2 | 92.57 | 93.66 | 65.68 | 64.77
128 | 3 | 92.33 | 93.25 | 64.83 | 65.55
256 | 3 | 92.75 | 93.52 | 65.49 | 65.16
512 | 3 | 92.86 | 92.93 | 65.49 | 64.84
Transformer Capacity: We also tune the number of transformer layers and the
hidden size in Table 8. As can be seen, more than one transformer layer is
necessary for good results, and too few hidden channels are also not ideal.
There is a trade-off, however: deeper transformers lead to worse performance.
Table 9: Number of Views. SCALE ${}_{\text{ft}}$ accuracy with different
number of clips and batch sizes. More views lead to consistent improvement and
large batch sizes are not necessary because of the hard negative samples.
Num | Batch | UCF-101 | HMDB-51
---|---|---|---
Views | Size | SVT | $\rho$BYOL | SVT | $\rho$BYOL
4 $\times$ 2 | 256 | 92.65 | 92.83 | 64.35 | 64.24
6 $\times$ 2 | 256 | 92.70 | 93.07 | 64.57 | 64.37
8 $\times$ 2 | 256 | 92.80 | 93.49 | 64.64 | 64.63
4 $\times$ 2 | 512 | 92.67 | 93.49 | 64.85 | 64.50
6 $\times$ 2 | 512 | 93.18 | 93.68 | 64.90 | 65.35
8 $\times$ 2 | 512 | 93.20 | 93.81 | 65.49 | 65.62
4 $\times$ 2 | 1024 | 92.75 | 93.36 | 64.77 | 65.15
6 $\times$ 2 | 1024 | 92.96 | 93.57 | 64.96 | 65.48
8 $\times$ 2 | 1024 | 93.07 | OOM | 65.29 | OOM
Number of Views: Finally, we studied model performance as a function of the
number of views and the batch size. As can be seen in Table 9, more views have
a large and consistent positive impact on performance, and since the
contrastive loss draws hard negatives from within each video, we are not
overly reliant on large batch sizes.
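One way to see why large batches matter less here is to count the negatives available to each anchor clip. The accounting below is our sketch, under the assumption that the positive is the corresponding clip in the other view set:

```python
def negatives_per_anchor(views_per_set, num_sets, batch_videos):
    """Count contrastive negatives seen by one anchor clip (illustrative).

    Hard negatives: the remaining clips of the same video.
    Easy negatives: all clips of the other videos in the batch."""
    clips_per_video = views_per_set * num_sets
    hard = clips_per_video - 2  # exclude the anchor itself and its positive
    easy = clips_per_video * (batch_videos - 1)
    return hard, easy
```

With the default 8 $\times$ 2 views, each anchor already sees 14 hard within-video negatives regardless of the batch size.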
## 5 Conclusions
In this paper, we introduced SCALE, a framework for video representation
learning by aggregating the information of multiple clips at the same time. We
combine contrastive learning and masked modeling with intuitions from
predictive coding to obtain improved global and local representations of clips
starting from frozen backbones. We evaluated these features with a wide array
of backbones on different action classification datasets and achieved strong
or state-of-the-art results. The computational efficiency of our method is
especially valuable for video and opens video representation learning to a
wider group of researchers than was previously possible.
We also believe that working with a set of clips (or, more abstractly, object
parts) is an interesting direction for representation learning. Finally, as a
surprising and maybe alarming observation, even contrastive representations
that were trained to be invariant to data augmentations and spatio-temporal
crops can be used for contrastive masked modeling. This might be due to benign
memorization [2], and understanding why this phenomenon happens might lead to
a better understanding of contrastive learning.
## 6 Acknowledgments
This work was supported by grant 200020_188690 of the Swiss National Science
Foundation.
## References
* [1] Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Transformers for multimodal self-supervised learning from raw video, audio and text. ArXiv, abs/2104.11178, 2021.
* [2] Sotiris Anagnostidis, Gregor Bachmann, Lorenzo Noci, and Thomas Hofmann. The curious case of benign memorization. ArXiv, abs/2210.14019, 2022.
* [3] Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Mike Rabbat, and Nicolas Ballas. Masked siamese networks for label-efficient learning. In European Conference on Computer Vision, pages 456–473. Springer, 2022.
* [4] Shekoofeh Azizi, Basil Mustafa, Fiona Ryan, Zach Beaver, Jana von Freyberg, Jonathan Deaton, Aaron Loh, Alan Karthikesalingam, Simon Kornblith, Ting Chen, Vivek Natarajan, and Mohammad Norouzi. Big self-supervised models advance medical image classification. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 3458–3468, 2021.
* [5] Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. ArXiv, abs/2106.08254, 2022.
* [6] Sagie Benaim, Ariel Ephrat, Oran Lang, Inbar Mosseri, William T Freeman, Michael Rubinstein, Michal Irani, and Tali Dekel. Speednet: Learning the speediness in videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9922–9931, 2020.
* [7] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9650–9660, 2021.
* [8] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020.
* [9] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9640–9649, 2021.
* [10] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3075–3084, 2019.
* [11] Ishan Dave, Rohit Gupta, Mamshad Nayeem Rizve, and Mubarak Shah. Tclr: Temporal contrastive learning for video representation. Computer Vision and Image Understanding, 219:103406, 2022.
* [12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* [13] Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, and Nenghai Yu. Bootstrapped masked autoencoders for vision bert pretraining. ArXiv, abs/2207.07116, 2022.
* [14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
* [15] Haoqi Fan, Yanghao Li, Bo Xiong, Wan-Yen Lo, and Christoph Feichtenhofer. Pyslowfast. https://github.com/facebookresearch/slowfast, 2020.
* [16] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, and Kaiming He. Masked autoencoders as spatiotemporal learners. arXiv preprint arXiv:2205.09113, 2022.
* [17] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 6201–6210, 2019.
* [18] Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross Girshick, and Kaiming He. A large-scale study on unsupervised spatiotemporal representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3299–3309, 2021.
* [19] Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra. Omnimae: Single model masked pretraining on images and videos. arXiv preprint arXiv:2206.08356, 2022.
* [20] Rohit Girdhar and Kristen Grauman. Anticipative video transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13505–13515, 2021.
* [21] Shreyank N Gowda, Marcus Rohrbach, and Laura Sevilla-Lara. Smart frame selection for action recognition. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 1451–1459, 2021.
* [22] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The “something something” video database for learning and evaluating visual common sense. In Proceedings of the IEEE international conference on computer vision, pages 5842–5850, 2017.
* [23] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271–21284, 2020.
* [24] Tengda Han, Weidi Xie, and Andrew Zisserman. Self-supervised co-training for video representation learning. Advances in Neural Information Processing Systems, 33:5679–5690, 2020.
* [25] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16000–16009, 2022.
* [26] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729–9738, 2020.
* [27] R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. ArXiv, abs/1808.06670, 2019.
* [28] Kaiqin Hu, Jie Shao, Yuan Liu, Bhiksha Raj, Marios Savvides, and Zhiqiang Shen. Contrast and order representations for video self-supervised learning. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 7919–7929, 2021.
* [29] Deng Huang, Wenhao Wu, Weiwen Hu, Xu Liu, Dongliang He, Zhihua Wu, Xiangmiao Wu, Mingkui Tan, and Errui Ding. Ascnet: Self-supervised video representation learning with appearance-speed consistency. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8096–8105, 2021.
* [30] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, pages 448–456. PMLR, 2015.
* [31] Simon Jenni and Hailin Jin. Time-equivariant contrastive video representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9970–9980, 2021.
* [32] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017.
* [33] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [34] Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. Hmdb: a large video database for human motion recognition. In 2011 International conference on computer vision, pages 2556–2563. IEEE, 2011.
* [35] Xingbin Liu, Jinghao Zhou, Tao Kong, Xianming Lin, and Rongrong Ji. Exploring target representations for masked autoencoders. arXiv preprint arXiv:2209.03917, 2022.
* [36] Yue Liu, Junqi Ma, Yufei Xie, Xuefeng Yang, Xingzhen Tao, Lin Peng, and Wei Gao. Contrastive predictive coding with transformer for video representation learning. Neurocomputing, 482:154–162, 2022.
* [37] Guillaume Lorre, Jaonary Rabarisoa, Astrid Orcesi, Samia Ainouz, and Stephane Canu. Temporal contrastive pretraining for video action recognition. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pages 662–670, 2020.
* [38] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
* [39] Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In European conference on computer vision, pages 527–544. Springer, 2016.
* [40] Pedro Morgado, Nuno Vasconcelos, and Ishan Misra. Audio-visual instance discrimination with cross-modal agreement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12475–12486, 2021.
* [41] Charlie Nash, João Carreira, Jacob Walker, Iain Barr, Andrew Jaegle, Mateusz Malinowski, and Peter Battaglia. Transframer: Arbitrary frame prediction with generative models. arXiv preprint arXiv:2203.09494, 2022.
* [42] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
* [43] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021.
* [44] Kanchana Ranasinghe, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan, and Michael S Ryoo. Self-supervised video transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2874–2884, 2022.
* [45] Adria Recasens, Pauline Luc, Jean-Baptiste Alayrac, Luyu Wang, Florian Strub, Corentin Tallec, Mateusz Malinowski, Viorica Pătrăucean, Florent Altché, Michal Valko, et al. Broaden your views for self-supervised video learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1255–1265, 2021.
* [46] Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. Contrastive learning with hard negative samples. ArXiv, abs/2010.04592, 2021.
* [47] Fadime Sener, Dipika Singhania, and Angela Yao. Temporal aggregate representations for long-range video understanding. In European Conference on Computer Vision, pages 154–171. Springer, 2020.
* [48] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
* [49] Chen Sun, Arsha Nagrani, Yonglong Tian, and Cordelia Schmid. Composable augmentation encoding for video representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8834–8844, 2021.
* [50] Dídac Surís, Ruoshi Liu, and Carl Vondrick. Learning the predictability of the future. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12607–12617, 2021.
* [51] Hao Tan, Jie Lei, Thomas Wolf, and Mohit Bansal. Vimpac: Video pre-training via masked token prediction and contrastive learning. arXiv preprint arXiv:2106.11250, 2021.
* [52] Chenxin Tao, Xizhou Zhu, Gao Huang, Yu Qiao, Xiaogang Wang, and Jifeng Dai. Siamese image modeling for self-supervised vision representation learning. arXiv preprint arXiv:2206.01204, 2022.
* [53] Nenad Tomasev, Ioana Bica, Brian McWilliams, Lars Holger Buesing, Razvan Pascanu, Charles Blundell, and Jovana Mitrovic. Pushing the limits of self-supervised resnets: Can we outperform supervised learning without labels on imagenet? In First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward at ICML 2022, 2022.
* [54] Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. Videomae: Masked autoencoders are data-efficient learners for self-supervised video pre-training. arXiv preprint arXiv:2203.12602, 2022.
* [55] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In ICML, 2021.
* [56] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
* [57] Jue Wang, Gedas Bertasius, Du Tran, and Lorenzo Torresani. Long-short temporal contrastive learning of video transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14010–14020, 2022.
* [58] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks for action recognition in videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41:2740–2755, 2019.
* [59] Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Yu-Gang Jiang, Luowei Zhou, and Lu Yuan. Bevt: Bert pretraining of video transformers. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14713–14723, 2022.
* [60] Chen Wei, Haoqi Fan, Saining Xie, Chaoxia Wu, Alan Loddon Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14648–14658, 2022.
* [61] Donglai Wei, Joseph J Lim, Andrew Zisserman, and William T Freeman. Learning and using the arrow of time. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8052–8060, 2018.
* [62] Chao-Yuan Wu, Christoph Feichtenhofer, Haoqi Fan, Kaiming He, Philipp Krahenbuhl, and Ross Girshick. Long-term feature banks for detailed video understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 284–293, 2019.
* [63] Chao-Yuan Wu, Yanghao Li, Karttikeya Mangalam, Haoqi Fan, Bo Xiong, Jitendra Malik, and Christoph Feichtenhofer. Memvit: Memory-augmented multiscale vision transformer for efficient long-term video recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13587–13597, 2022.
* [64] Yu Wu, Linchao Zhu, Xiaohan Wang, Yi Yang, and Fei Wu. Learning to anticipate egocentric actions by imagination. IEEE Transactions on Image Processing, 30:1143–1152, 2020.
* [65] Shen Yan, Xuehan Xiong, Anurag Arnab, Zhichao Lu, Mi Zhang, Chen Sun, and Cordelia Schmid. Multiview transformers for video recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3333–3343, 2022.
* [66] Liangzhe Yuan, Rui Qian, Yin Cui, Boqing Gong, Florian Schroff, Ming-Hsuan Yang, Hartwig Adam, and Ting Liu. Contextualized spatio-temporal contrastive learning with self-supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13977–13986, 2022.
* [67] Yujia Zhang, Lai-Man Po, Xuyuan Xu, Mengyang Liu, Yexin Wang, Weifeng Ou, Yuzhi Zhao, and Wing-Yin Yu. Contrastive spatio-temporal pretext learning for self-supervised video representation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3380–3389, 2022.
* [68] Zhang Zhang and Dacheng Tao. Slow feature analysis for human action recognition. IEEE transactions on pattern analysis and machine intelligence, 34(3):436–450, 2012.
* [69] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. ibot: Image bert pre-training with online tokenizer. arXiv preprint arXiv:2111.07832, 2021.
* [70] Pan Zhou, Yichen Zhou, Chenyang Si, Weihao Yu, Teck Khim Ng, and Shuicheng Yan. Mugs: A multi-granular self-supervised learning framework. arXiv preprint arXiv:2203.14415, 2022.
# Effective Emission Heights of Various OH Lines From X-shooter and SABER
Observations of a Passing Quasi-2-Day Wave
###### Abstract
Chemiluminescent radiation of the vibrationally and rotationally excited OH
radical, which dominates the nighttime near-infrared emission of the Earth’s
atmosphere in wide wavelength regions, is an important tracer of the chemical
and dynamical state of the mesopause region between 80 and 100 km. As
radiative lifetimes and rate coefficients for collision-related transitions
depend on the OH energy level, line-dependent emission profiles are expected.
However, except for some height differences for whole bands mostly revealed by
satellite-based measurements, there is a lack of data for individual lines. We
succeeded in deriving effective emission heights for 298 OH lines thanks to
the joint observation of a strong quasi-2-day wave (Q2DW) in eight nights in
2017 with the medium-resolution spectrograph X-shooter at the Very Large
Telescope at Cerro Paranal in Chile and the limb-sounding SABER radiometer on
the TIMED satellite. Our fitting procedure revealed the most convincing
results for a single wave with a period of about 44 h and a vertical
wavelength of about 32 km. The line-dependent as well as altitude-resolved
phases of the Q2DW then resulted in effective heights which differ by up to 8
km and tend to increase with increasing vibrational and rotational excitation.
The measured dependence of emission heights and wave amplitudes (which were
strongest after midnight) on the line parameters implies the presence of a
cold thermalized and a hot non-thermalized population for each vibrational
level.
JGR: Atmospheres
Institut für Physik, Universität Augsburg, Augsburg, Germany Deutsches
Fernerkundungsdatenzentrum, Deutsches Zentrum für Luft- und Raumfahrt,
Weßling-Oberpfaffenhofen, Germany Institut für Astro- und Teilchenphysik,
Universität Innsbruck, Innsbruck, Austria Instituto de Astronomía, Universidad
Católica del Norte, Antofagasta, Chile
Stefan<EMAIL_ADDRESS>
Key Points:
* X-shooter-based intensities of 298 OH lines from eight nights show a strong Q2DW in southern summer 2017
* Fits of the Q2DW phase in the X-shooter data and SABER-based OH emission profiles were used to derive effective OH emission heights
* The line-dependent wave amplitudes confirm the presence of cold and hot OH populations for each vibrational level
## Plain Language Summary
Hydroxyl (OH) is an important molecule in the Earth’s atmosphere at altitudes
between 80 and 100 km. It is the main source of atmospheric nighttime
radiation in the near-infrared wavelength range and is therefore a valuable
tracer of the chemistry and dynamics at high altitudes. The emission spectrum
consists of various lines which are related to different levels of vibration
and rotation. Although the vertical emission distribution should depend on the
given line due to differences in the deactivation of the corresponding energy
levels, the line-specific details have been uncertain until now. We have
succeeded in deriving effective time-averaged emission heights for 298 OH
lines based on the combination of ground-based line-resolved and space-based
height-resolved observations of a very strong rising wave with a period close
to 2 days and a relatively short vertical wavelength in eight nights in 2017.
The resulting heights (obtained via the line-dependent wave phases) differ by
up to 8 km and generally increase with higher molecular vibration and
rotation. They are valuable for ground-based studies of other waves and
contribute (combined with conclusions from the wave amplitudes) to a better
understanding of the internal processes in OH molecules.
## 1 Introduction
The nocturnal atmospheric emission of wide wavelength regions in the near-
infrared is dominated by chemiluminescent radiation of various roto-
vibrational bands up to the vibrational level $v=9$ of the electronic ground
state of the hydroxyl (OH) radical (e.g., Meinel, 1950; Noll et al., 2015;
Rousselot et al., 2000). Excited OH is mostly produced by the reaction of
hydrogen with ozone (Bates & Nicolet, 1950). The nightglow emission
essentially originates from altitudes between 80 and 100 km, and is therefore
an important tracer of the chemistry and dynamics in the Earth’s mesopause
region. Rocket flights showed typical peak heights of about 87 km and layer
widths of about 8 km (Baker & Stair, 1988). Moreover, OH emission profiles
have frequently been observed by limb-sounding instruments in space (e.g.,
Baker et al., 2007; Dodd et al., 1994; von Savigny et al., 2012; Wüst et al.,
2020; Yee et al., 1997). Rare ground-based altitude measurements relied on the
identification of the same emission features in imaging instruments at
different sites (e.g., Kubota et al., 1999; Moreels et al., 2008). Yu et al.
(2017) combined wind observations with a meteor radar and a Fabry-Perot
interferometer focusing on OH to estimate peak heights of the radiation.
Finally, rough estimates are also possible by means of the well-established
negative relation between effective emission height and emission intensity
(e.g., García et al., 2017; Liu & Shepherd, 2006; Mulligan et al., 2009; von
Savigny, 2015; Yee et al., 1997), which can be explained by the larger change
of the layer profile at lower altitudes due to the stronger relative
variations of atomic oxygen (which is required for the ozone production). The
published relations had to be calibrated with satellite observations.
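For a concrete notion of "effective emission height": it can be defined as the intensity-weighted centroid of the volume-emission-rate profile. The sketch below (our illustration, not the paper's code) uses a Gaussian layer with the typical rocket-borne values quoted above, reading the 8 km layer width as a full width at half maximum:

```python
import math

def effective_height(heights, emission):
    """Intensity-weighted centroid of a volume-emission-rate profile."""
    total = sum(emission)
    return sum(h * e for h, e in zip(heights, emission)) / total

# Gaussian layer with peak height 87 km and FWHM 8 km, sampled on an
# 80-100 km grid (roughly the OH nightglow altitude range).
sigma = 8.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
heights = [80.0 + 0.1 * i for i in range(201)]
emission = [math.exp(-0.5 * ((h - 87.0) / sigma) ** 2) for h in heights]
```

For a symmetric, untruncated layer the centroid coincides with the peak height; truncating the profile at 80 km shifts the centroid slightly upward.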
The hydrogen–ozone reaction mainly produces OH populations in high $v$ between
7 and 9 (e.g., Adler-Golden, 1997). In contrast, measured populations increase
with decreasing $v$ (e.g., Takahashi & Batista, 1981; Cosby & Slanger, 2007;
Noll et al., 2015). This discrepancy is caused by the step-by-step relaxation
process due to the radiation of photons and collisions with other atmospheric
constituents (e.g., Adler-Golden, 1997; Xu et al., 2012; Noll et al., 2018).
As the radiative lifetimes and rate coefficients for the collisions with
different species depend on the OH level, the resulting emission profiles for
individual lines should differ. Models show an increase of the peak emission
height with increasing $v$ (e.g., Adler-Golden, 1997; Makhlouf et al., 1995;
McDade, 1991; von Savigny et al., 2012; Xu et al., 2012). The spread between
high and low $v$ may amount to several kilometers, with possibly larger height
shifts for low $v$. Collisions of OH with atomic oxygen appear to be
especially crucial for the $v$-dependent discrepancies (von Savigny et al.,
2012). The impact of rotational energy on the altitude distribution has been
modeled quite rarely. Dodd et al. (1994) discussed the differences between
populations with rotational quantum numbers $N$ up to 7 and those with $N\geq
11$. Density profiles for the larger $N$ appear to peak higher, with altitude
differences similar to those obtained for the comparison of low and high $v$.
Moreover, the density peaks seem to show less variation for larger $N$ with
respect to changes in $v$. Noll et al. (2018) presented modeling results for
$v=9$ and found maximum $N$-dependent height differences between 1.5 and 2.8
km, depending on the uncertain rate coefficients for collisions with atomic
oxygen.
Measurements of the impact of the OH energy state on the height distribution
were successfully performed with space-based limb sounding. The studies mostly
focused on the comparison of the emission profiles of a few roto-vibrational
bands (Noll et al., 2016; Sheese et al., 2014; von Savigny & Lednyts’kyy,
2013; von Savigny et al., 2012). The results suggest typical height changes
for $\Delta v=1$ of about 0.4 to 0.5 km. These differences can vary by a
significant fraction of the values. They tend to increase with lower peak
altitude and higher atomic oxygen concentration (von Savigny & Lednyts’kyy,
2013). The former can change by several kilometers, forced by waves on
different time scales and by the general circulation (e.g., Liu & Shepherd,
2006; Nikoukar et al., 2007; Noll et al., 2016; Teiser & von Savigny, 2017;
Yee et al., 1997). The majority of the variability is found below the emission
peak (Nikoukar et al., 2007), where the atomic oxygen density profile is
particularly steep. In contrast to whole roto-vibrational bands, there is a
lack of studies of individual lines with different $N$. The only noteworthy
data known to us are related to observations with the Cryogenic InfraRed
Radiance Instrumentation for Shuttle (CIRRIS 1A) Michelson interferometer
onboard Space Shuttle Discovery for three days in 1991 (Dodd et al., 1994).
The measured limb profiles comprise lines of several $v$ with $N\leq 4$ as
well as purely rotational lines with $N$ around 15 and 30. However, the
interpretation of the data required sophisticated modeling.
There is an alternative approach to estimate effective emission heights of OH
lines. Perturbations passing the mesopause region will produce characteristic
patterns in the OH intensity time series that will be shifted in time
depending on the altitude of the emission (Noll et al., 2015; Schmidt et al.,
2018). In this way, the relative layering of different emissions can be
obtained from ground-based observations of individual lines. Nevertheless, the
derivation of absolute heights will also need information on the vertical
propagation of the perturbation through the OH emission layer, as can be
measured by satellite-based limb-sounding instruments. As the satellites
relevant for nightglow observations have nearly Sun-synchronous orbits (e.g.,
Russell et al., 1999; von Savigny et al., 2012; Yee et al., 1997), the minimum
time scale of the variations needs to be of the order of days for a promising
combination of the profile data with ground-based spectra of OH lines in order
to link wave phases with altitudes in the same region.
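The phase-to-height conversion behind this approach can be written down directly. A minimal sketch (our own illustration, not the paper's fitting code), using the wave parameters quoted in the abstract (period of about 44 h, vertical wavelength of about 32 km): one full wave period corresponds to one vertical wavelength, so a time delay $\Delta t$ between the wave patterns of two lines maps to a height offset of $\lambda_z\,\Delta t/T$.

```python
def relative_height_km(delay_h, period_h=44.0, lambda_z_km=32.0):
    """Map the time delay of one line's wave pattern (relative to a
    reference line, in hours) to a vertical offset of its effective
    emission height (in km). The sign convention depends on the direction
    of vertical phase progression, which must be taken from the
    height-resolved (e.g., satellite-based) phases."""
    return lambda_z_km * delay_h / period_h
```

With these parameters, a delay of 5.5 h corresponds to 4 km, which is of the same order as the up-to-8-km spread of effective heights reported in the abstract.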
The so-called quasi-2-day wave (Q2DW) can achieve very high amplitudes in the
mesopause region at low to middle latitudes. In particular, strong Q2DWs occur
in the Southern Hemisphere for several weeks in the summer months of January
and February (e.g., Ern et al., 2013; Gu et al., 2019; Tunbridge et al., 2011;
Walterscheid et al., 2015). The period is close to 2 days but can vary from 42
to 54 h (Gu et al., 2019). For periods very close to 48 h, there can be phase
locking with tidal modes (Walterscheid & Vincent, 1996). Southern Q2DWs
usually show a dominant westward-moving longitudinal pattern with a zonal
wavenumber of 3 (W3), which can be accompanied by other modes with different
periods, zonal wavenumbers, and propagation directions (e.g., He et al., 2021;
Pedatella et al., 2012; Tunbridge et al., 2011). Q2DWs are regarded as
belonging to the class of Rossby-gravity waves (e.g., Salby, 1981). Their
genesis is probably related to baroclinic/barotropic instabilities in the
summertime easterly zonal wind jet in the lower mesosphere (Plumb, 1983),
which seem to be linked to increased gravity-wave drag (Ern et al., 2013). The
amplitudes maximize at the OH emission heights and can reach more than 10 K in
temperature (Gu et al., 2019; Tunbridge et al., 2011) and 50 m s$^{-1}$ in
wind (Limpasuvan et al., 2005; Wu et al., 1993). Between the Q2DW source
region and these altitudes at low to middle latitudes, the propagation
perpendicular to the zonal component is effectively upward. The situation is
different in the lower thermosphere well above the OH layer, where a northward
meridional flow tends to be dominant in the Southern Hemisphere (Yue et al.,
2012). At OH emission heights, vertical wavelengths are mostly longer than 20
km and are often even beyond 100 km (Huang et al., 2013; Reisin, 2021). The
impact of a Q2DW on OH emission was investigated by Pedatella et al. (2012)
based on space-based OH profile data for January 2006. The peak emission rate
varied by a factor of about 4. This large effect appears to be mostly related
to Q2DW-induced variations of the atomic oxygen concentration at the OH
emission heights. The concentration can also be modulated by tides and gravity
waves.
We have succeeded in deriving effective emission heights for about 300 OH
lines by means of the combination of ground-based spectroscopic and space-
based height-resolved observations of a strong Q2DW in January to February
2017. The OH line intensities originate from near-infrared spectra of the
medium-resolution spectrograph X-shooter (Vernet et al., 2011) of the Very Large
Telescope (VLT) of the European Southern Observatory (ESO) at Cerro Paranal in
Chile (24.6∘ S, 70.4∘ W). Scans of the OH emission profiles in two filter
bands at about 1.6 and 2.1 $\mu$m are related to measurements with the
Sounding of the Atmosphere using Broadband Emission Radiometry (SABER)
instrument onboard the Thermosphere Ionosphere Mesosphere Energetics Dynamics
(TIMED) satellite (Russell et al., 1999). In the following section 2, we will
describe the data sets for both instruments. Then, we will present our
approach to fit the observed wave (section 3). Our results will be shown in
section 4, which also includes our estimates of line-dependent effective
emission heights (section 4.3) and a brief discussion of another Q2DW in 2019
(section 4.4). Finally, we will draw our conclusions (section 5).
## 2 Data
### 2.1 X-shooter
The X-shooter medium-resolution echelle spectrograph (Vernet et al., 2011) mounted
at an 8 m telescope of the VLT can simultaneously observe a very wide
wavelength range from 0.3 to 2.5 $\mu$m covered by three spectroscopic arms
with separate optical components and detectors. Since the strongest OH bands
for all upper vibrational levels $v^{\prime}$ from 2 to 9 are found in the
near-infrared regime (e.g., Rousselot et al., 2000), we focus our analysis on the
correspondingly named NIR arm, which extends from 1.0 to 2.5 $\mu$m. For this
arm and scientific targets, the projected slit width can vary between 0.4 and
1.5′′ on the sky, which corresponds to a spectral resolving power between
12,000 and 3,500. The projected slit length is fixed to 11′′.
The spectra used originate from the ESO Science Archive Facility. This study
only uses a small fraction of the data that were processed by us. The whole
sample comprises about 90,000 spectra in the NIR arm with an exposure time of
at least 10 s that were taken between October 2009 and September 2019. The
basic processing of the raw echelle spectra was performed by means of the
official reduction pipeline (Modigliani et al., 2010) in version v2.6.8 and
calibration data preprocessed by ESO (see also Unterguggenberger et al., 2017). The
resulting two-dimensional (2D) wavelength-calibrated sky spectra were
subjected to post-processing optimized for the retrieval of airglow emission.
First, the 2D spectra were averaged along the spatial direction in order to
obtain one-dimensional (1D) spectra. Then, the separation of sky emission and
astronomical target performed by the pipeline was improved. For this goal, we
extracted residual sky in the 2D object spectrum from the third of the slit
positions with the lowest integrated counts in the whole wavelength range.
After the minimization of noise by smoothing, we added the resulting 1D mean
spectrum to the sky spectrum. This approach was particularly useful for bright
star-like astronomical targets.
Although the pipeline produces flux-calibrated spectra, we performed this
processing step independently in order to minimize the errors by means of a
consistent approach for the entire data set. For the NIR arm, pipeline-
processed spectra of the relatively bright spectrophotometric standard stars
EG 274 and LTT 3218 (Moehler et al., 2014) were corrected for telluric absorption
using a model-based fitting approach (Smette et al., 2015; Kausch et al., 2015) and
then compared to the theoretical spectral energy distributions in order to
derive individual response curves that indicate the wavelength-dependent
instrumental quantum efficiency. As the resulting response curves for both
stars did not exactly match, we lowered the quantum efficiency of the EG 274
curves (by about 3% in a wide wavelength range) for a better consistency.
Thus, remaining systematic uncertainties in the absolute flux calibration for
clear sky conditions mainly depend on the quality of the reference spectrum
for LTT 3218. As the response curves are relatively uncertain at wavelengths
longer than 1.9 $\mu$m (see Noll et al., 2015), we used spectra of so-called telluric
standard stars with well-known spectral energy distributions (Rayleigh–Jeans
law) for atmospheric absorption measurements to derive a general correction
function. In order to further improve the quality, master response curves for
10 time periods with lengths between 9 and 15 months were created including
only the best-quality data and involving a final smoothing procedure. The
splitting of the periods is usually at New Year but also considers a change in
the calibration products in January 2013 and a recoating of the main mirror in
December 2016. An analysis of the variability of the flux-calibrated star
spectra points to relative uncertainties of only 2 to 3% up to 2.1 $\mu$m in
the NIR arm. This is a clear improvement in comparison to a maximum difference
in the response of about 12%. The final set of master response curves was
applied to the 1D sky spectra. A minor fraction of the spectra were taken with
a so-called $K$-blocking filter (Vernet et al., 2011), which reduces the useful
wavelength range to wavelengths shorter than 2.1 $\mu$m. These spectra had to
be calibrated with a 1.3 times higher response in order to correctly consider
differences in the calibration data.
The measurement of the line intensities was performed in two steps. First, the
underlying continuum was determined. After various tests, our procedure used
the first quintile of the pixel intensities (cut at 20%) in wavelength ranges
with a relative width of 0.008 to derive the continuum at the corresponding
central positions. The width was increased by the factors 5.0 and 2.5 around
the dense roto-vibrational O2 emission bands at about 1.27 and 1.58 $\mu$m,
respectively (e.g., Rousselot et al., 2000). It was multiplied by a factor of 0.4 beyond
2.08 $\mu$m in order to avoid issues with the steeply increasing continuum due
to thermal radiation of the telescope. After subtraction of the resulting
continuum, the residual flux was integrated in specific wavelength ranges
depending on the central wavelength of the OH $\Lambda$ doublet, separation of
both components (e.g., Noll et al., 2020), and slit width. As an X-shooter profile of an
unresolved line doublet (which is true in most cases) can be approximated by
the combination of a fixed Gaussian and a slit-dependent boxcar, the
integration limits had been optimized to assure relatively stable distances to
the positions marking half peak intensity. For example, the size of these
margins amounts to about 60% of the full integration range minus the $\Lambda$
doublet separation for the most frequently used slit with a width of 0.9′′.
The separation of the integration limits was sufficiently wide to avoid
significant flux losses by uncertainties in the wavelength calibration
(fraction of a pixel). Positions of suitable lines were taken from Brooke et al. (2016).
Important selection criteria were clear detectability (at least for long
exposures), the lack of blends with other emission lines, a smooth continuum
without strong absorption features, and high atmospheric transmission. In the
end, the selection procedure (which also included a check for outliers in the
scientific analysis) resulted in 298 OH $\Lambda$ doublets from 14 bands,
which cover upper vibrational levels $v^{\prime}$ between 2 and 9 and upper
rotational levels $N^{\prime}$ up to 15.
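The two-step intensity measurement described above can be sketched as follows. This is an illustrative simplification, not the authors' code: `continuum_quintile` and `line_intensity` are our names, and the band-dependent window-width factors of the actual procedure are omitted.

```python
import numpy as np

def continuum_quintile(wave, flux, rel_width=0.008, cut=0.20):
    """Estimate the line-free continuum as the first quintile (20% cut)
    of the pixel intensities in sliding windows with a relative
    wavelength width of rel_width."""
    wave = np.asarray(wave, float)
    flux = np.asarray(flux, float)
    cont = np.empty_like(flux)
    for i, w0 in enumerate(wave):
        half = 0.5 * rel_width * w0
        sel = (wave >= w0 - half) & (wave <= w0 + half)
        cont[i] = np.quantile(flux[sel], cut)
    return cont

def line_intensity(wave, flux, cont, lo, hi):
    """Integrate the continuum-subtracted flux between fixed wavelength
    limits (trapezoidal rule)."""
    sel = (wave >= lo) & (wave <= hi)
    w, f = wave[sel], (flux - cont)[sel]
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w)))
```

Using a low quantile instead of a mean makes the continuum estimate robust against the emission lines themselves, as long as less than about 20% of the pixels in a window are affected by line flux.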
The measured line intensities were corrected for the van Rhijn effect (van
Rhijn, 1921), i.e. the projected layer width, assuming a reference altitude
of 87 km. The correction of this effect is important as there is a strong
variation in the zenith angles of astronomical observations. We also
considered the absorption of line emission by molecules in the lower
atmosphere in a similar way as described by Noll et al. (2015). The approach involves the
calculation of line-specific transmissions based on the zenith angle, Cerro
Paranal atmospheric data (Noll et al., 2012), the Line-By-Line Radiative Transfer
Model (LBLRTM; Clough et al., 2005), and the assumption of purely Doppler-broadened OH
lines for a temperature of 190 K at the Brooke et al. (2016) wavelengths. Transmissions for
entire $\Lambda$ doublets were derived by means of weights for the individual
components that were taken from the branch-specific OH level population fits
of Noll et al. (2020). Reference line transmissions were calculated for zenith and a
typical amount of precipitable water vapour (PWV) of 2.5 mm. Doublets with
reference transmissions lower than 70% were not considered for the final line
set (average of 96%). Differences in the optical depth in the spectroscopic
data set were calculated depending on the most crucial parameters, zenith
angle and PWV. For the latter, accurate measurements with a Low Humidity And
Temperature PROfiler (L-HATPRO) microwave radiometer are available at Cerro
Paranal (Kerber et al., 2012). However, as the L-HATPRO data set starts in 2014
and shows several gaps afterwards, we mostly used these data to calibrate
intensity ratios of lines with low and high transmission in the bands OH(2-0)
and OH(6-4) to estimate PWV values for each spectrum (cf. Xu et al., 2020). Up to very
high L-HATPRO PWVs of about 16 mm, the approach works quite well with a mean
offset of -0.1 mm and a standard deviation of 0.6 mm. With an upper cut at 6
mm (which still represents about 90% of the data), the systematic shift is
negligible and the scatter is only half as large.
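The van Rhijn enhancement for a thin layer follows from simple spherical geometry. A minimal sketch, assuming the standard formula with a mean Earth radius of 6371 km (the function name and constants are ours, not from the paper):

```python
import numpy as np

R_EARTH = 6371.0  # mean Earth radius in km (assumed value)
H_REF = 87.0      # reference emission altitude in km, as in the text

def van_rhijn_factor(zenith_deg, h=H_REF, r=R_EARTH):
    """Brightness enhancement of a thin emission layer at altitude h
    observed at zenith angle z (van Rhijn, 1921); measured intensities
    are divided by this factor to refer them to the zenith."""
    s = (r / (r + h)) * np.sin(np.radians(zenith_deg))
    return 1.0 / np.sqrt(1.0 - s * s)
```

The factor is 1 at zenith and grows towards the horizon (roughly a factor of 4 at a zenith angle of 80°), which is why the correction matters for a data set with strongly varying pointing directions.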
The quality of the measurement of OH line intensities strongly varies from
spectrum to spectrum due to strong changes in spectral resolution, exposure
time, atmospheric water vapour, and especially contamination by (spatially
extended) astronomical targets that can have a different effect on each line.
For this reason, we performed a line-specific selection of suitable
observations. Starting with a preselected set of 88,481 useful spectra, we
carried out an iterative $\sigma$-clipping procedure for each line. This
outlier rejection involved limits with respect to the underlying continuum,
the intensity error, and the intensity itself. The error was estimated from
the deviation of the residual continuum at both margins of the integration
range from the zero line. We ensured that it was always lower than 10%
for the selected measurements. As a result, the line-specific selection rates
varied between 69.4% and 99.8% with a mean value of 94.1% for wavelengths
shorter than 2.1 $\mu$m. In order to handle the wide range of exposure times
between 10 s and 2.5 h and the corresponding impact on the data quality, we
did not use the individual measurements for the analysis. Instead, we divided
the 10 years into bins of 30 min, which are sufficiently short for the
analysis of Q2DWs. Each bin contains the line-specific spectra with matching
central time. The effective intensities were calculated as averages weighted
by the exposure time. In the subsequent analysis, we only considered those bins with
a summed exposure time of at least 10 min. The average numbers of the selected
bins were 19,480 and 17,001 for lines at wavelengths shorter and longer than
2.1 $\mu$m. A lower minimum exposure time of 5 min did not significantly
change the results of the analysis but increased the scatter.
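The exposure-weighted binning can be sketched as follows; a simplified illustration with hypothetical inputs (times in hours since a reference), not the authors' actual implementation:

```python
import numpy as np

def bin_intensities(times_h, intensities, exposures_s,
                    bin_min=30, min_total_s=600):
    """Average line intensities into fixed bins (default 30 min),
    weighting by exposure time; bins whose summed exposure falls below
    min_total_s (10 min in the text) are dropped."""
    width = bin_min / 60.0
    idx = np.floor(np.asarray(times_h) / width).astype(int)
    out = {}
    for b in np.unique(idx):
        sel = idx == b
        w = np.asarray(exposures_s, float)[sel]
        if w.sum() >= min_total_s:
            out[(b + 0.5) * width] = float(np.average(
                np.asarray(intensities, float)[sel], weights=w))
    return out  # bin-center time (h) -> exposure-weighted mean intensity
```

Weighting by exposure time prevents short, noisy exposures from dominating a bin that also contains long integrations.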
The resulting binned time series were inspected with respect to Q2DW features.
The focus was on the months January and February. The X-shooter data set shows
a strongly varying time coverage due to the different astronomical observing
programs and the usual mounting of two to three instruments at the same
telescope, which can cause gaps of the order of weeks. As a consequence, we
clearly identified Q2DW events only in the years 2017 and 2019. Moreover, only
fractions of the lifetime of a wave were well covered by X-shooter
observations. The good intervals are eight nights from 26 January to 3
February 2017 and seven nights from 11 to 18 January 2019. The corresponding
maximum numbers of bins of 30 min are 88 and 89, respectively. As the sample
of selected spectra is different for each line, up to three bins from 2017
were lost for line wavelengths below 2.1 $\mu$m. At longer wavelengths, where
30 OH lines are affected by the optional $K$-blocking filter, the bin numbers
were 78 and 35 for the years 2017 and 2019, respectively. The comparison of
Q2DW results of lines with the same upper levels but different bin numbers did
not show any significant discrepancies for the data from 2017. However, the
lines at long wavelengths had to be excluded from the analysis of the data
from 2019. The drop from 89 to 35 bins was too large.
Our discussion of the results in section 4 will focus on the event in 2017
since only these data were suitable to estimate effective emission heights of
OH lines, i.e. the main goal of this study. The issues related to the data
from 2019 will briefly be described in section 4.4.
### 2.2 SABER
To better understand the Q2DW-related X-shooter data and to add important
altitude-dependent information, we also considered limb-sounding data (version
v2.0) of the SABER radiometer on TIMED (Russell et al., 1999). The observing
channels centered on 1.64 and 2.06 $\mu$m essentially cover emission of the
OH(4-2) and OH(5-3) bands as well as the OH(8-6) and OH(9-7) bands (Baker et
al., 2007). Consequently, the effective upper vibrational levels of both channels
are about 4.6 and 8.3 (Noll et al., 2016). We used the so-called “unfiltered”
products, i.e. the given volume emission rates (VERs) had been corrected for
the missing line emission of the stated OH bands in the wavelength ranges
covered by the channels (Mlynczak et al., 2005). For some checks, we also
considered VER profiles of the channel at about 1.27 $\mu$m, which mostly
comprises the O2(a-X)(0-0) emission band (e.g., Noll et al., 2016; Rousselot et al., 2000).
Moreover, we used the kinetic temperature products, which are based on CO2
observations at 15 $\mu$m combined with modeling (Dawkins et al., 2018; Remsberg
et al., 2008). All profiles were interpolated to match the same vertical grid with a
step size of 0.2 km. The natural vertical resolution of about 2 km for the
relevant altitude range smoothes the profiles, but estimates of the peak
altitude are possible with much higher accuracy. For the two OH channels, we
also calculated vertically integrated VERs, which should correlate with the
summed X-shooter line intensities in the same wavelength range. The
integration was limited to altitudes that did not deviate more than 15 km from
the mean of the two heights with half peak emission, which was often several
hundred meters higher than the emission peak.
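The integration-limit rule can be sketched as follows; `integrate_ver` is an illustrative name and the half-peak crossing search is a simplified reading of the text:

```python
import numpy as np

def integrate_ver(alt_km, ver, max_dist_km=15.0):
    """Vertically integrate a VER profile, restricting the integration to
    altitudes within max_dist_km of the mean of the two half-peak
    heights, as described in the text (simplified crossing search)."""
    alt_km = np.asarray(alt_km, float)
    ver = np.asarray(ver, float)
    i_pk = int(np.argmax(ver))
    half = 0.5 * ver[i_pk]
    below = np.where(ver[:i_pk] < half)[0]          # below the peak
    above = np.where(ver[i_pk:] < half)[0] + i_pk   # above the peak
    z_lo = alt_km[below[-1]] if below.size else alt_km[0]
    z_hi = alt_km[above[0]] if above.size else alt_km[-1]
    z_mid = 0.5 * (z_lo + z_hi)
    sel = np.abs(alt_km - z_mid) <= max_dist_km
    z, v = alt_km[sel], ver[sel]
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(z)))
```

Centering the window on the mean of the half-peak heights rather than on the peak itself makes the limits less sensitive to noise at the profile maximum.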
As the X-shooter data just revealed clear Q2DW events in 2017 and 2019
(section 2.1), we only selected the archived SABER data from January and
February of both years. The spatial selection was performed for a latitude
band centered on Cerro Paranal with a full width of 10∘. In terms of the
longitude, the width was 20∘. As the satellite moves during single scans, the
reference selection coordinates were taken at 87 km. The size of the
geographical area is a compromise between minimization of spatial effects and
maximization of the sample size (see Noll et al., 2016, and the discussion in
section 4.2). For the period from 26 January to 3 February 2017, the described
limits resulted in 44 profiles. As the nearly Sun-synchronous orbit of TIMED
precesses with a period of about 60 days (Russell et al., 1999), the available
coverage of local time (LT) is very limited for our study. In the case of the
eight nights from 2017, the coverage is restricted to LTs of about 21:00 and
about 04:00. Each time window includes half of the profiles. For the seven
nights from 11 to 18 January 2019, only 19 profiles with LTs close to 23:00
from the descending node of the TIMED orbit could be used. Measurements at
about 06:00 could not be taken as the Sun was above the horizon; sunlight
significantly changes the OH emission with respect to total intensity and
effective layer height (e.g., Yee et al., 1997).
## 3 Methods
For the derivation of effective emission heights, we need to fit the Q2DW in
the time series of X-shooter-based OH line intensities (section 2.1) and
broad-band SABER emission rates (section 2.2) with a suitable wave model. In
the end, it is important that the resulting effective wave phases for the
various OH lines can be linked to altitudes via a SABER-based monotonic
phase–height relation with a significant range of phases for the same wave
period. Moreover, the model needs to be sufficiently robust to produce
consistent fits for all OH lines and all relevant altitude levels. As long as
these conditions are fulfilled, it is not essential that the model is
particularly accurate with respect to the real wave properties, which could be
quite complex. Even a model with very different wave parameters might work
(see section 4.3). We therefore applied a simple periodic cosine function.
However, we did not use the entire data set for the fits as it turned out that
especially the important data from 2017 showed a strong dependence of the wave
amplitude on local time, which is probably related to significant interactions
between the Q2DW and tides (see section 4.1). In order to avoid a line-
dependent systematic bias, we fitted only data points with similar LT and
combined the reliable fits with respect to the wave phases afterwards. The
fitting of the LT dependence of the amplitude by a second wave did not work as
the variability pattern could not be reproduced by a simple trigonometric
function. Motivated by previous results on Q2DWs (see section 1), we also
tested a model with two Q2DW components using the whole data set. This model
was only partly able to fit the time series. Moreover, the best wave
parameters were not convincing as very similar amplitudes and vertical
wavelengths were needed for components with different periods (above and below
48 h) and opposite vertical propagation directions. The introduction of
additional wave components could improve the fits. However, the increased
number of fit parameters could negatively affect the robustness of the fits
and the derivation of useful phases for the emission height estimates.
For our final wave fits, we used the fit formula
$f(t,t_{\mathrm{LT}})=c(t_{\mathrm{LT}})\left(a(t_{\mathrm{LT}})\cos\left(2\pi\left(\frac{t}{T}\,-\,k\,\Delta\lambda\,-\,\phi\right)\right)+1\right),$
(1)
where $t$ is the time of the observation relative to a reference. For the Q2DW
in 2017, we used 30 January 2017 12:00 LT (mean solar noon) at Cerro Paranal,
which is in the middle of the time interval. For 2019, the reference was 15
January 2019 12:00 LT. The period of the wave is $T$ and a relative amplitude
in the range from 0 to 1 is given by $a$. The latter depends on the LT-related
data selection, which is marked by $t_{\mathrm{LT}}$. This also applies to the
factor $c$, which can alternatively be interpreted as an additive constant. As
we divided all data by the average of the respective time series before the
fit, the product $c{\cdot}a$ represents the wave amplitude relative to the
sample mean. Moreover, $c$ values clearly different from 1 indicate additional
variability beyond the Q2DW and/or limitations in the assumption of a cosine
function. The latter is not unlikely in the view of the huge amplitudes found
(see section 4). The height-dependent phase $\phi$ is defined relative to $T$,
i.e. it is a fraction without unit. As the temporal term in the formula is
positive, the minus sign in front of $\phi$ is needed for the correct
definition (cf. Forbes, 1995). The phase is given for the longitude of Cerro
Paranal $\lambda_{\mathrm{CP}}$ relative to the full circle. Deviations
$\Delta\lambda$ from $\lambda_{\mathrm{CP}}$ cause an influence of the zonal
wavenumber $k$ on the fit. Westward wave propagation is marked by negative
$k$. As the dominating Q2DW mode in the southern hemisphere is W3 (see section
1), we used $k=-3$. This choice was confirmed by additional fits with free $k$
of a SABER sample without longitude and LT restriction, which also agreed with
the SABER-based results of Gu et al. (2019) for 2017. For the local X-shooter data, we
always set $\Delta\lambda=0$. For the fits of the satellite-based SABER data,
the zonal term is a small but significant correction with average positive and
negative $\Delta\lambda$ of about 0.014 (i.e. 5∘) for 2017 and about 0.012 for
2019. The fitting was performed with a least-squares algorithm involving
bounds (e.g. for excluding negative amplitudes). After some tests with $T$ as
a free fit parameter, we decided to make separate fits for a grid of periods
$T$ from 40 to 60 h with a step size of 1 h. This approach significantly
improved the robustness of the fits and made sure that the resulting wave
phases for the different OH emissions referred to the same $T$. Consequently,
the only free parameters for each fit were $c$, $a$, and $\phi$.
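Equation (1) and the period-grid strategy can be sketched with SciPy. This is a sketch under stated assumptions: `fit_phase_grid` is our illustrative name, and the start values and upper bounds are assumptions rather than the authors' exact settings (only the exclusion of negative amplitudes is stated in the text).

```python
import numpy as np
from scipy.optimize import curve_fit

def q2dw_model(t, c, a, phi, T=44.0, k=-3, dlam=0.0):
    """Equation (1): relative intensity at time t (hours since the
    reference) for period T (h), zonal wavenumber k, and longitude
    offset dlam (fraction of a full circle; 0 for local data)."""
    return c * (a * np.cos(2.0 * np.pi * (t / T - k * dlam - phi)) + 1.0)

def fit_phase_grid(t, y, periods=range(40, 61)):
    """Fit c, a, and phi separately for each period of a 1 h grid and
    keep the period with the lowest rms, mirroring the text."""
    best = None
    for T in periods:
        try:
            p, _ = curve_fit(
                lambda t, c, a, phi: q2dw_model(t, c, a, phi, T=float(T)),
                t, y, p0=[1.0, 0.3, 0.5],
                bounds=([0.0, 0.0, 0.0], [2.0, 1.0, 1.0]))
        except RuntimeError:
            continue  # no convergence for this period
        rms = np.sqrt(np.mean((q2dw_model(t, *p, T=float(T)) - y) ** 2))
        if best is None or rms < best[0]:
            best = (rms, T, p)
    return best  # (rms, period in h, [c, a, phi])
```

Keeping the period on a fixed grid rather than as a free parameter reduces the fit to three parameters per period, which stabilizes the optimization and guarantees that all lines share the same reference period.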
Table 1: Derivation of the optimum Q2DW phase relative to a period of 44 h at 30 January 2017 12:00 LT at Cerro Paranal for X-shooter-based relative OH(4-2)P1(1) intensities.

LT bin | Sample | Phase | Error$^a$ | Weight$^b$
---|---|---|---|---
$20-21$ | 6 | 0.733 | 0.131 | 0.000
$21-22$ | 11 | 0.847 | 0.554 | 0.000
$22-23$ | 8 | 0.426 | 0.369 | 0.000
$23-24$ | 11 | 0.406 | 0.104 | 0.000
$00-01$ | 12 | 0.406 | 0.049 | 0.127
$01-02$ | 13 | 0.381 | 0.031 | 0.201
$02-03$ | 14 | 0.388 | 0.022 | 0.285
$03-04$ | 12 | 0.423 | 0.016 | 0.387
$04-05$ | 1 | 0.000 | 9.999 | 0.000
Average | 51 | 0.403 | 0.018 | 1.000

$^a$Fit uncertainty for LT bins and weighted standard deviation of hour-specific phases in the case of the weighted average.
$^b$Weight of zero for LT bins with less than 10 data points and/or a phase error more than 5 times higher than the minimum.
For the X-shooter data consisting of relative intensities of 30 min bins for
each line, the fitting procedure included three steps. First, we used all data
of each line for a rough initial estimate of the phase $\phi$. In the next
step, we performed separate fits for each hour of the night with at least five
available bins. The hour-dependent fits were then used to derive an optimum
phase for all data. Table 1 illustrates this procedure for
OH(4-2)P1($N^{\prime}=1$), a period of 44 h, and the year 2017. As the wave
amplitude strongly depended on LT in 2017 (section 4.1), we only considered
the four to five 1 h intervals with the most reliable phases for each line,
which essentially excluded the evening data. In the case of 2019, six to eight
intervals could be used. Intervals with less than nine 30 min bins were never
included. Line-dependent differences in the number of bins were only observed
for intervals that were usually rejected. The optimum phase was calculated by
weighted averaging using the inverse of the phase error of the fits of the
selected LT intervals as weights (Table 1). Assuming the existence of a unique
$\phi$ for the considered data, its uncertainty was derived from the weighted
standard deviation of the phases of the individual fits. In the final step of
the fit procedure, the hour-dependent fits were repeated with the phase fixed.
In this way, the parameters $c$ and $a$ and their uncertainties were derived
for each hour.
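The weighted phase averaging can be sketched as follows. With the phases, errors, and sample sizes of Table 1 as input, this sketch reproduces the listed weights, average phase (0.403), and uncertainty (0.018) to within rounding; the function name is ours.

```python
import numpy as np

def combine_phases(phases, errors, counts=None,
                   max_err_ratio=5.0, min_count=10):
    """Combine hour-specific phase fits into one optimum phase with
    inverse-error weights; intervals with fewer than min_count bins or a
    phase error more than max_err_ratio times the minimum get zero
    weight (cf. Table 1). Returns the weighted mean and the weighted
    standard deviation of the selected phases."""
    phases = np.asarray(phases, float)
    errors = np.asarray(errors, float)
    ok = errors <= max_err_ratio * errors.min()
    if counts is not None:
        ok &= np.asarray(counts) >= min_count
    w = np.where(ok, 1.0 / errors, 0.0)
    w /= w.sum()
    mean = float(np.sum(w * phases))
    std = float(np.sqrt(np.sum(w * (phases - mean) ** 2)))
    return mean, std
```

The weighted standard deviation, rather than the formal fit errors, serves as the phase uncertainty, which captures the scatter between the hour-specific fits.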
The fitting of the SABER data could be simplified. The derivation of an
optimum phase was not necessary as only one LT range for 2017 provided
reasonable fits due to the LT dependence of the wave amplitude (see section
4.2). For 2019, only one nighttime interval was available (section 2.2). As an
alternative fitting approach for 2017, we fitted all data simultaneously but
with two additional fit parameters in order to scale $c$ and $a$ depending on
the half of the night. The results were almost identical but with slightly
higher uncertainties. Therefore, we preferred the separate fits with the
morning data as the reference for the phase derivation as in the case of the
X-shooter data. In general, we fitted the vertically integrated VERs as well
as the VER profiles from the two OH-related channels. For the latter, we first
rebinned the profile data with a step size of 1 km. Moreover, we usually
started the fitting procedure at an altitude of 87 km, where the OH emission
is relatively strong. Thereafter, we used the resulting fit parameters as
start values for the adjacent altitudes. The results for the latter were then
taken for the next layers. In this way, we also obtained reasonable fits for
heights with very weak OH emission (see section 4.2). Altitudes between 79 and
99 km were considered.
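The altitude-stepping strategy for the SABER profile fits can be sketched as follows; `fit_one` stands for any single-series fitting routine (e.g., a wrapper around the Q2DW fit above), and the function name and default start values are ours:

```python
def fit_profile_stack(alts, ver_ts, fit_one, start_alt=87.0,
                      p0=(1.0, 0.3, 0.5)):
    """Fit the wave at every altitude level, starting where the OH
    emission is strong (87 km in the text) and seeding each neighboring
    level with the result of the previous one. fit_one(series, seed) is
    an assumed callback returning the fitted parameters for one VER
    time series."""
    alts = list(alts)
    i0 = min(range(len(alts)), key=lambda i: abs(alts[i] - start_alt))
    results = {alts[i0]: fit_one(ver_ts[i0], p0)}
    for i in range(i0 + 1, len(alts)):       # upward sweep
        results[alts[i]] = fit_one(ver_ts[i], results[alts[i - 1]])
    for i in range(i0 - 1, -1, -1):          # downward sweep
        results[alts[i]] = fit_one(ver_ts[i], results[alts[i + 1]])
    return results
```

Seeding each level with the converged parameters of its neighbor keeps the optimizer on the physically continuous solution branch, which is what allows reasonable fits even at altitudes with very weak OH emission.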
## 4 Results
### 4.1 OH Lines
Figure 1: X-shooter-based OH line intensity time series for the local time
period from 26 January to 3 February 2017 given as deviation in days from mean
solar noon at Cerro Paranal on 30 January. Circles show the intensities of
OH(4-2)P1(1) (a) and OH(4-2)P1(14) (b) relative to the mean of the considered
30 min bins. The local times of the latter are emphasized by different colors.
The Q2DW fits with a fixed period of 44 h, LT-dependent amplitudes, and line-
specific phases are marked by solid lines. Daytime data gaps are bridged by
dotted lines.
In this section, we discuss the results of the Q2DW fits of the X-shooter-
based OH line measurements for the wave event in 2017. Figure 1 shows examples
of the corresponding time series of 30 min bins (section 2.1) for the
$\Lambda$ doublets OH(4-2)P1(1) (a) and OH(4-2)P1(14) (b), which only differ
by the rotational quantum number. However, the difference in the rotational
energy of 3,239 cm$^{-1}$ is the largest of the entire set of 298 lines. Hence, the
variability patterns should deviate. The inspection of the figure confirms
this assumption. Deviations can be observed in the day-to-day development of
the maximum and minimum relative intensities and the local times showing
these extreme values. Nevertheless, the basic pattern is similar. There are
alternating nights with low and high relative intensities, which clearly
points to the presence of a Q2DW. The apparent amplitudes are large. For
OH(4-2)P1(1), the maximum-to-minimum ratio is 6.1. The standard deviation
relative to the mean for the 88 bins is 0.45. For OH(4-2)P1(14), the
corresponding values for the 86 available bins are slightly lower (4.4 and
0.36, respectively).
Figure 2: Derivation of the most likely period for the Q2DW in 2017 based on
the root mean square of the relative intensity fit (circles, left axis) and
the phase error relative to the period (squares, right axis) averaged for all
298 OH lines.
Figure 1 also displays our best fits with LT-dependent amplitudes for both OH
emissions. The fits were performed for all LT bins except for 04:00 to 05:00,
which includes only a single observation (Table 1). The fixed period for all
LT bins was set to 44 h. This decision is justified by Figure 2, which shows
the root mean square (rms) for the differences between fit and observed data
relative to the line-specific mean intensity averaged for all 298 lines as a
function of the period grid from 40 to 60 h. Moreover, the average phase error
relative to the period (Table 1) is displayed for the same set of lines and
periods. The minimum values of both indicators (0.15 for the rms and 0.015 for
the phase uncertainty) are clearly located at 44 h. The accuracy of this
period, which is limited by the step size of 1 h, is highly sufficient for the
estimation of emission heights as we will discuss in section 4.3. Moreover,
note that the scatter in the periods from the minimum rms derived for the
individual lines is fairly small (63% with 44 h and 33% with 45 h) despite
significant differences in the time series. Q2DW periods below 48 h appear to
be rather typical in regions close to Cerro Paranal as long-term OH
observations at El Leoncito (31.8∘ S, 69.3∘ W) indicate (Reisin, 2021).
Figure 2 reveals that periods close to 2 full days are not able to reproduce
the change of the variability pattern from night to night shown in Figure 1.
In particular, the small apparent amplitudes of the first two nights for
OH(4-2)P1(1) suggest that the maximum and minimum were outside the range of
local times covered by the X-shooter data, i.e. too late in the morning. With
a period distinctly smaller than 48 h, it was then possible to observe the
extreme intensities afterwards at nighttime. Therefore, the time series is
just long enough to robustly derive a wave period. The variability pattern of
OH(4-2)P1(14) is more regular at the beginning with high deviations from the
mean intensity, which excludes that the Q2DW was significantly weaker in the
first nights as the consideration of only the P1(1) data could suggest. The
differences between both lines are obviously the result of shifts in the
positions of the extremes. Consequently, both emissions need to show different
Q2DW phases. Our fits revealed phases of 0.403 and 0.229 at the reference time
for the lines with $N^{\prime}=1$ and 14, respectively. The resulting phase
difference is much larger than the mean uncertainty reported above (which
should be even smaller for phase comparisons). Hence, the wave of 2017 is
obviously suitable to safely separate lines based on their fitted phases.
Figure 3: Dependence of fitted Q2DW parameters on local time for OH(4-2)P1(1)
(circles) and OH(4-2)P1(14) (squares). The abscissa shows the mean local time
for nighttime intervals with a width of 1 h. Results are provided for the
amplitude $c{\cdot}a$ (solid lines) and the additive constant $c$ (dashed
lines). In all cases, the plotted values are relative to the mean line
intensity for the Q2DW-related sample. The mean fit uncertainties for
$c{\cdot}a$ and $c$ for data points with non-zero weight for the phase
derivation (Table 1) are 0.09 and 0.06, respectively. The legend also provides
the effective phases $\phi$ relative to the period of 44 h for both OH lines
(cf. Table 1).
The results for the two example $\Lambda$ doublets are further analyzed in
Figure 3, which shows the amplitude, i.e. the product of the fit parameters
$c$ and $a$ relative to the mean intensity of the time series, as a function
of the mean local time for intervals with a width of 1 h. The wave amplitudes
of both emissions strongly depend on local time. There is no detection of a
Q2DW before 22:00. Afterwards the amplitudes increase and then become
relatively stable. This development is faster for OH(4-2)P1(14), where a
shallow maximum with an amplitude of 0.46 is visible between 00:00 and 01:00.
The maximum for OH(4-2)P1(1) is significantly higher and amounts to 0.74. It
is present between 01:00 and 02:00. These remarkable structures cannot be
explained by a single wave with fixed amplitude. A model with multiple wave
components would certainly better reproduce the patterns, but at least a model
with two components did not return reliable wave properties (section 3). The
persistent lack of wave-like variability at the beginning of the nights is
hard to explain in this way. Hence, our assumption of LT-dependent Q2DW
amplitudes, which leads to the narrow peaks of the fit models in Figure 1,
appears to be the most promising approach for achieving a satisfying agreement
with the observed time series. This model requires that the physical
conditions that control the wave propagation and/or the sensitivity of the OH
emission to Q2DWs change depending on the time of day. Nonlinear
interaction, especially with the diurnal and semidiurnal tides [e.g., palo99],
could explain this behavior. In this context, phase locking, which would
significantly weaken the diurnal tide [Walterscheid and Vincent (1996), Hecht
et al. (2010), Walterscheid et al. (2015)], did not seem to be active at Cerro
Paranal during the considered time interval, as periods very close to 48 h cannot
explain the time series. The LT dependence of the Q2DW amplitude in OH
emission might also be affected by the negative nocturnal trend of atomic
oxygen (produced at daytime) that would always be present without vertical
dynamics, especially at the lowest OH emission altitudes [Marsh et al. (2006)].
Figure 3 also shows the constant (or factor) $c$ relative to the mean
intensity. For an undisturbed cosine oscillation centered on the mean
intensity of the time series, $c$ should be close to 1 (see section 3). In
reality, the values range between 0.7 and 1.3. As a consequence, the maxima of
$c{\cdot}a$ are not located at the end of the night, as they are for $a$
alone (not shown). Although the joint fitting of $c$ and $a$ might introduce
systematic errors due to possible degeneracies, a more likely explanation is
asymmetry in the largest intensity variations, which causes larger deviations
during the crest of the wave. Moreover, there are also variations in the nocturnal OH intensity
without a wave. An inspection of the entire X-shooter data set suggests that
this is probably a minor effect as the climatological changes for the
investigated times in southern summer are of the order of only 10%
[cf. noll17]. The intensity even tends to decrease with increasing LT, in
contrast to our results for $c$.
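The fit described here (parameters $c$, $a$, and $\phi$; see section 3) can be sketched as a linear least-squares problem, assuming a model of the form $I(t)/\langle I\rangle = c\,[1 + a\cos(2\pi(t/T - \phi))]$, which is consistent with the description of $c$, $a$, and $\phi$ above. The function name and exact model form are an illustrative reconstruction, not the actual analysis code:

```python
import numpy as np

def fit_q2dw(t_hours, rel_intensity, period=44.0):
    """Least-squares fit of relative intensity to c*(1 + a*cos(2*pi*(t/T - phi))).

    The model is linear in c and in the two quadrature components of c*a,
    so a plain lstsq solve suffices; no nonlinear optimizer is needed.
    """
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    p, *_ = np.linalg.lstsq(X, rel_intensity, rcond=None)
    c = p[0]
    ca = np.hypot(p[1], p[2])                           # amplitude c*a relative to the mean
    phi = (np.arctan2(p[2], p[1]) / (2 * np.pi)) % 1.0  # phase in units of the period
    return c, ca, phi
```

The phase $\phi$ follows from the arctangent of the sine and cosine coefficients, expressed here in units of the wave period as in the figures.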
Figure 4: Maximum amplitude $c{\cdot}a$ relative to the mean (a) and phase
$\phi$ relative to the period on 30 January 12:00 LT at Cerro Paranal (b) for
all 298 OH lines used for the fit of the Q2DW event in 2017. The abscissa
shows the energy of the upper level of the transition minus the lowest energy
for the corresponding vibrational state $v^{\prime}$. The latter is given by
colored numbers. The representative amplitude (phase) uncertainties are about
0.021 (0.005), 0.023 (0.008), and 0.038 (0.017) for energy differences below
400 cm-1, between 400 and 800 cm-1, and above 800 cm-1, respectively.
Next, we discuss the whole set of 298 lines. Figure 4a shows the line-specific
maximum of the amplitude $c{\cdot}a$ of all hour intervals as a function of
the energy of the upper state of the transition relative to the lowest energy
of the corresponding vibrational level $v^{\prime}$. The reference states are
characterized by a rotational quantum number $N^{\prime}=1$ and the electronic
substate $\mathrm{X}^{2}\mathrm{\Pi}_{3/2}$ ($F^{\prime}=1$) [e.g., noll20],
i.e. the energy difference equals 0 cm-1 for P${}_{1}(1)$ and Q${}_{1}(1)$
lines. If we focus on these lines, the figure reveals a clear increase of the
amplitude for decreasing $v^{\prime}$ with values between 0.56 for
$v^{\prime}=9$ and 0.74 for $v^{\prime}=3$. This trend is highly significant
as the uncertainties derived from lines with the same upper level are only
about 0.02. The amplitude differences for adjacent vibrational levels are
larger for high $v^{\prime}$. For increasing rotational energy, we find an
increase for all vibrational levels which is more pronounced for low
$v^{\prime}$. Up to about 500 cm-1, the increase ranges from 0.12 for
$v^{\prime}=9$ to 0.23 for $v^{\prime}=2$. For the lowest vibrational levels,
this rise results in enormous amplitudes of more than 90% of the sample mean
intensity. At higher rotational energies, the amplitudes decrease again. This
trend appears to start slightly later for lower $v^{\prime}$, but not later
than about 700 cm-1. The drop of the amplitude is much stronger for lower
$v^{\prime}$. For the highest $N^{\prime}$, the amplitude appears to be
independent of the vibrational level and is distinctly smaller than for low
$N^{\prime}$. The mean amounts to about 0.48 for energy differences $\Delta
E^{\prime}$ larger than 1300 cm-1.
The complex pattern in Figure 4a needs an explanation. As we can expect that
the emission altitudes increase with higher $v^{\prime}$ and $N^{\prime}$
[e.g., adler97, dodd94, noll18b, savigny12], lower amplitudes with increasing
quantum numbers imply a decrease of the wave amplitude with increasing
altitude. Such a trend is consistent with the dependence of the amplitude on
$v^{\prime}$ for $\Delta E^{\prime}$ up to at least 1,000 cm-1 and the
comparison of amplitudes for very low and very high $N^{\prime}$. An issue
seems to be that the highest amplitudes are located at intermediate rotational
energies. However, this feature can be explained by means of the structure of
the OH rotational level population, which can be characterized by a cold and a
hot population for each $v^{\prime}$ [Cosby and Slanger (2007), Kalogerakis et al.
(2018), Kalogerakis (2019), Noll et al. (2020), Oliva et al. (2015)]. While the
temperature of the cold population, which dominates the emission at low
$N^{\prime}$, is consistent with the ambient temperature, i.e. about 190 K on
average at Cerro Paranal [Noll et al. (2016), Noll et al. (2020)], the hot population
shows apparent temperatures between about 700 K for $v^{\prime}=9$ [Noll et al.
(2020)] and about 12,000 K for $v^{\prime}=2$ [Oliva et al. (2015)]. Such extreme
values reflect significant deviations from the local thermodynamic equilibrium
(LTE), which are related to the non-LTE nascent population of the
hydrogen–ozone reaction [Llewellyn and Long (1978)] as well as an insufficient
frequency of collisions that thermalize the rotational level population
compared to $v$-changing collisions and the radiation of airglow photons [Noll
et al. (2018)]. An inspection of the combined populations now shows that the
contributions of the cold and the hot component are of the same order at
similar $\Delta E^{\prime}$ as those where we find the maximum amplitudes in
Figure 4a [Noll et al. (2020)]. Consequently, the latter can be explained by an
increased sensitivity of the corresponding OH lines to the Q2DW due to the
strong impact of the wave-induced variation of the ambient temperature
(affecting the cold population) on the rotational energy where cold and hot
populations have similar contributions. Changes in the cold component are more
crucial because of its much steeper decline with increasing energy. Also note
that the relative contribution of the hot population is below one percent
for $v^{\prime}\leq 6$ at 0 cm-1 [Noll et al. (2020), Oliva et al. (2015)]. The rapid
growth of this fraction with increasing $\Delta E^{\prime}$ can then explain
the increase of the Q2DW amplitudes until the maximum values at 500 to 700
cm-1. Moreover, the drop of the amplitudes marks the energies where the
contribution of the cold population becomes minor. Hence, the highest
$N^{\prime}$ show a pure hot population, which can obviously be described by a
single amplitude. The lack of a dependence of the amplitude on $v^{\prime}$
suggests that the $v^{\prime}$-specific hot components are linked and might be
represented by a single population (at least in part). The structure of the
full roto-vibrational populations, which show connections between high
$N^{\prime}$ for adjacent $v^{\prime}$ [Cosby and Slanger (2007), Noll et al. (2020)],
seems to confirm this interpretation.
Q2DW phases relative to the period of 44 h on 30 January 2017 12:00 LT are
shown for all 298 OH $\Lambda$ doublets in Figure 4b. In general, there is a
clear decrease of the phase with increasing $v^{\prime}$ and $N^{\prime}$. For
the levels with $F^{\prime}=1$ and $N^{\prime}=1$, the phase ranges from 0.436
for $v^{\prime}=2$ to 0.333 for $v^{\prime}=9$, which is distinctly more than
the uncertainty of 0.005 derived from lines with the same upper levels. For
higher $N^{\prime}$, the phases decrease almost linearly and similarly for all
$v^{\prime}$, with shifts of about 0.04 up to about 500 cm-1. The trend may
even hold until about 1,500 cm-1 but with a decreasing difference between low
and high $v^{\prime}$. For the highest $v^{\prime}$, there is a flattening of
the phase change, which might even stop at the end. However, this remains
questionable due to the relatively small number of lines and relatively high
uncertainties. The average phase for $\Delta E^{\prime}\geq 2,000$ cm-1 is
0.23, which is about 0.2, i.e. almost 9 h, lower than the maximum shown by
OH(2-0)P1(1). This relatively large spread is promising in terms of a
sensitive derivation of effective emission heights for various OH emission
lines (see section 4.3).
### 4.2 OH Emission Profiles
Figure 5: SABER-based vertically integrated OH VERs for the local time period
from 26 January to 3 February 2017 given as deviation in days from mean solar
noon at Cerro Paranal on 30 January. The results for the OH-related channels
centered on 1.6 $\mu$m (a) and 2.1 $\mu$m (b) are shown relative to the mean
of the considered 44 data points (circles). The local times at the reference
coordinates of the observations are emphasized by different colors. The Q2DW
fits for the two LT ranges are marked by crosses.
In order to link wave phases and heights, we need altitude-resolved OH
emission measurements. The required data were obtained by the two OH channels
of the limb-scanning SABER radiometer (section 2.2). For a comparison with the
X-shooter-based time series, Figure 5 shows vertically integrated VERs of the
channels OH(1.6 $\mu$m) (a) and OH(2.1 $\mu$m) (b) relative to the mean of the
44 measurements that are representative of the region around Cerro Paranal
during the eight nights in 2017 covered by the X-shooter data set. Both
channels show variability patterns that are consistent with a strong Q2DW for
the 22 data points taken at about 04:00 LT, whereas the data related to about
21:00 LT do not display clear wave features. These results agree with those of
the X-shooter data set in terms of the pronounced time dependence of the wave
amplitude (Figure 3). The maximum-to-minimum ratios and standard deviations
relative to the mean are 7.4 and 0.46 for all OH(1.6 $\mu$m) data and 6.6 and
0.42 for all OH(2.1 $\mu$m) data. The smaller values for the latter are
consistent with the decrease of the wave amplitude with increasing
$v^{\prime}$ for the X-shooter data (Figure 4). Remember that the effective
$v^{\prime}$ are about 4.6 and 8.3 for the two SABER OH channels (section
2.2). The maximum-to-minimum ratios are higher than the values of 6.1 for
OH(4-2)P1(1) and especially 4.4 for OH(4-2)P1(14) (section 4.1). The standard
deviations are very similar to the value of 0.45 for the line with
$N^{\prime}=1$ but higher than the result of 0.36 for the line with high
$N^{\prime}$. These findings agree well as the integrated emission of the
SABER channels is dominated by lines with small $N^{\prime}$. Consequently,
SABER and X-shooter data show a consistent picture of the Q2DW event in 2017.
Differences in the measurement approaches and sample properties do not appear
to have a significant impact.
Figure 5 also shows our fits of the two LT ranges based on the approach
described in section 3. Compared to the 04:00 LT data that we exclusively used
for the phase derivation, the amplitude $c{\cdot}a$ for the 21:00 LT data was
significantly smaller. The ratios for the 21:00 and 04:00 amplitudes were
about 0.09 for OH(1.6 $\mu$m) and about 0.20 for OH(2.1 $\mu$m). Thus, the fit
of the evening measurements shows only very small Q2DW-related variations.
Moreover, the corresponding factors for $c$ were 0.86 and 0.87, i.e. the
intensity level for the evening data was lower on average. For the subsequent
analysis, we only focused on the morning data.
Figure 6: Derivation of the most likely period for the morning data of the
Q2DW in 2017 based on the root mean square of the relative intensity fit
(circles, left axis) and the phase error relative to the period (squares,
right axis) for the SABER OH channel centered on 2.1 $\mu$m.
The fitting was performed for the same wave period as in the case of the
X-shooter data, i.e. 44 h (section 4.1). Using the same period is important
for the derivation of the effective emission heights. Nevertheless, we checked
a wide range of periods in terms of the fit quality. The results for OH(2.1
$\mu$m) and the 04:00 LT data are shown in Figure 6. The corresponding plot
for OH(1.6 $\mu$m) is very similar. The rms relative to the mean indicates a
clear minimum of 0.13 at 43 h. The phase error derived from the fit also shows
a minimum there (0.013), although it is less pronounced. Consequently, the
SABER-related fit returns the best result for a period which is only 1 h lower
than for the X-shooter data. This shift can also be observed for the maximum
phase uncertainty (48 vs. 49 h). In view of the various differences
between both data sets, the deviation is small. It is not crucial for the
estimates of the line-specific emission altitudes. A comparison of the results
for 43 and 44 h did not indicate significant deviations with respect to the
general uncertainties (section 4.3). In the following, we will focus on the
results for 44 h, which represents the best period from the X-shooter data
set, which is much larger than the SABER data set in terms of OH emission
features and observing times.
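The period check described above (minimizing the rms of the relative intensity fit over a wide range of trial periods, as shown in Figure 6) can be sketched as a simple grid scan. This is a hypothetical minimal version under the same cosine model as before, not the actual SABER analysis code:

```python
import numpy as np

def best_period(t, y, periods):
    """Scan trial periods; return the one minimizing the rms of the fit residual.

    For each trial period, the cosine model is refitted by linear least
    squares and the rms of the residuals relative to the data is recorded.
    """
    rms = []
    for T in periods:
        w = 2 * np.pi / T
        X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
        p, *_ = np.linalg.lstsq(X, y, rcond=None)
        rms.append(np.sqrt(np.mean((y - X @ p) ** 2)))
    rms = np.asarray(rms)
    return periods[np.argmin(rms)], rms
```

A clear minimum in the returned rms curve, as in Figure 6, indicates the most likely period; a shallow or double minimum would signal an ambiguous fit.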
Interestingly, the period can change significantly if the geographical area is
not restricted to locations close to Cerro Paranal. Our additional check of
the SABER data without longitude and LT limits (cf. section 3) revealed a most
likely period of 49 h for the two OH channels. This result is in good
agreement with the findings of gu19, who reported a period of 48 h for the
Q2DW in 2017 based on the kinetic temperatures from SABER. We repeated this
fit and obtained the same result. For the global fits, we preferentially used the
combined evening and morning data, i.e. we neglected a possible LT dependence
of the amplitude. This setting resulted in better fits, which is also in
contrast to our experience with the data restricted to the region around Cerro
Paranal. Moreover, an inspection of the wave properties depending on the
longitude showed that the South American sector exhibited the clearest Q2DW
variability pattern. In other regions, the wave was much less pronounced. This
observation may explain why gu19 only identified an intermediate wave
amplitude of 10 K at 82.5 km for the event in 2017. The latitude- and time-
resolved SABER-based results for a fixed period of 48 h from xiong18 show
amplitudes slightly below 10 K at 84 km at about 25$^{\circ}$S for the relevant time
interval at the end of January and beginning of February. We find about 18 K
around Cerro Paranal in the morning. In addition, the local times with the
strongest variations changed depending on longitude, which indicates why
global fits without the corresponding scaling factors worked better. This
complex variability pattern suggests that the Q2DW is significantly disturbed
by interactions with other planetary waves, tides, gravity waves, and/or the
mean flow. For OH emission from SABER observations, pedatella12 already found
a complex longitude dependence of the Q2DW-related variability due to
contributions of nonmigrating tides, stationary planetary waves, and secondary
waves that are preferentially generated by the nonlinear interaction of the
Q2DW and migrating tides. The latter supports our assumption that such
interactions were important for the Q2DW event in 2017 with a pronounced LT
dependence of the measured amplitude.
Figure 7: Vertical VER profiles for the Q2DW event in 2017 from the SABER OH
channels centered on 1.6 $\mu$m (a) and 2.1 $\mu$m (b). As described in the
legend, both subfigures show two individual profiles with dates separated by
about 24 h (crosses and plus signs), mean profiles (circles) for the evening
(upper peak) and morning (lower peak), as well as standard deviation (SD)
profiles (diamonds) for the evening (upper peak) and morning (lower peak). For
the calculation of the curves representing the evening data, one profile with
unusually large VERs below 80 km was excluded.
We now turn to the discussion of the height-dependent impact of the Q2DW.
Figure 7 shows different kinds of emission profiles for the two OH channels
centered on 1.6 $\mu$m (a) and 2.1 $\mu$m (b). For each channel, two
individual profiles from 31 January and 1 February 2017 at about 21:00 LT are
displayed. The huge differences between the two profiles, separated by only
one day, demonstrate the impact of a large-amplitude Q2DW on the OH emission.
On 31 January, the OH(1.6 $\mu$m) profile peaked at a very low altitude
of 82 km with an enormous VER of 81 nW m-3. In contrast, the emission almost
vanished completely at this altitude on the next day. As only the emission at
the highest altitudes remained relatively similar, the peak moved upward by 11
km to 93 km, where the VER was only 8 nW m-3. The situation for OH(2.1 $\mu$m)
is very similar with the peaks just 1 km higher. It is not clear whether the
VER minimum at 89 km on 1 February is real as it cannot be seen in the
corresponding data from OH(1.6 $\mu$m).
Figure 7 also shows mean and standard deviation profiles for the two LT
regimes of about 21:00 and 04:00 in the eight selected nights. The evening
data indicate typical mean profiles with respect to the long-term averages for
Cerro Paranal [Noll et al. (2017)]. The centroid altitudes of the plotted profiles
of both channels are 88.1 and 89.5 km, respectively. The standard deviation
profiles of the 21:00 LT data only indicate relatively weak variations. The
peaks of these profiles are about 1 to 2 km lower than for the mean profiles.
These shifts are expected due to the steepening of the atomic oxygen gradient
with decreasing height [e.g., smith10], which makes the lower altitudes more
sensitive to the vertical transport of this gas, a key ingredient for the OH
production and the subsequent relaxation and destruction processes. The strong oxygen-induced
emission variations at low altitudes lead to a significant perturbation of the
mean profile for the morning data. The latter therefore extends to lower
heights (with a 3 km lower peak) and becomes broader for both channels.
The peak of the standard deviation profile moves in a similar way. The
emission variability is much larger than in the case of the evening data (as
expected). Note that the mean centroid emission altitudes of the individual
profiles for 04:00 LT of 88.1 km for OH(1.6 $\mu$m) and 89.1 km for OH(2.1
$\mu$m) are very similar to those of the 21:00 LT data (88.0 km and 89.5 km)
since the large differences in the VERs which affect the plotted mean profiles
do not matter here. Nevertheless, the larger impact of the Q2DW on the morning
data can be recognized by the standard deviation of the individual centroid
altitudes of about 2.3 km for both OH channels, which is distinctly higher
than about 0.6 km for the evening data (if one outlier is excluded).
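The centroid emission altitudes quoted in this section are VER-weighted mean heights. Assuming a profile sampled on a discrete altitude grid, the computation reduces to a weighted average (an illustrative helper, not part of the SABER pipeline):

```python
import numpy as np

def centroid_altitude(z_km, ver):
    """VER-weighted mean altitude (km) of a discrete emission profile."""
    z_km = np.asarray(z_km, dtype=float)
    ver = np.asarray(ver, dtype=float)
    return float(np.sum(z_km * ver) / np.sum(ver))
```

Unlike the peak altitude, the centroid is sensitive to the full shape of the profile, which is why the broadened morning profiles can keep nearly unchanged centroids despite a 3 km lower peak.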
Figure 8: Fitted phase relative to the period of 44 h as a function of height
for the SABER product kinetic temperature (stars) and the VERs of O2(a-X)(0-0)
at 1.27 $\mu$m (diamonds), OH(1.6 $\mu$m) (squares), and OH(2.1 $\mu$m)
(circles). The time series comprised the 22 limb scans taken at about 04:00 LT
in eight nights of 2017. The reference time for the plotted phases was 30
January 12:00 LT at Cerro Paranal. Fit uncertainties, which are useful for
relative quality comparisons, are indicated by dotted lines. Moreover, the
plot shows the resulting regression line for a linear fit of the OH(2.1
$\mu$m) phases in the altitude range between 80 and 97 km.
With the knowledge of the height-dependent response of the OH emission on the
passing Q2DW, we now discuss the wave phases as a function of altitude. Figure
8 shows the phase functions between 79 and 99 km from fits of the morning data
with a period of 44 h as described in section 3 for both OH channels. The
phases are given for 30 January 2017 12:00 LT at Cerro Paranal. At least for
the altitude range between 80 and 90 km, there is a clear trend of decreasing
phase with increasing height for the OH emissions at about 1.6 and 2.1 $\mu$m.
For 80 to 89 km, both curves are almost identical with a mean absolute
difference of only 0.005. However, the discrepancy is rapidly growing above 89
km. While the trend continues for OH(2.1 $\mu$m), OH(1.6 $\mu$m) shows a
complex pattern of increasing and decreasing phase, which is not
trustworthy. As expected from the ratio of standard deviation to mean profiles
in Figure 7, the fitted wave amplitudes relative to the mean rapidly decrease
with increasing altitude. From 80 to 93 km, $c{\cdot}a$ drops from 1.18 to
0.19 for OH(2.1 $\mu$m). However, this is still moderate compared to a
decrease from 1.23 to 0.04 for OH(1.6 $\mu$m). Consequently, it appears that
the wave-induced variability in the emission at about 1.6 $\mu$m was too low
to track the wave in a reliable way (cf. evening data in Table 1 and Figure
3), whereas it was still sufficient for OH(2.1 $\mu$m). Note that the latter
emission peaks higher in the atmosphere (Figure 7).
In order to check this interpretation, we also fitted the emission of
O2(a-X)(0-0) at about 1.27 $\mu$m, which is also observed by SABER. The
profile of this airglow emission strongly changes during the night [Noll et al.
(2016)] due to the decay of an excited population originating from ozone
photolysis at daytime [e.g., lopez89]. However, in the second half of the
night, the emission distribution is similar to the one of OH. The fits only
show a relatively weak decrease of the wave amplitude by a factor of 2 over the
entire plotted altitude range. Hence, the resulting phases in Figure 8 appear
to be reliable. They clearly support the trend found for OH(2.1 $\mu$m). This
result is further confirmed by fits of SABER-based kinetic temperature
profiles [Dawkins et al. (2018)], where the corresponding phase profile is also
plotted. The temperature fits are relatively robust since the wave amplitude
remained relatively high in the entire studied altitude regime (at least 11 K
up to 98 km with maximum values of about 18 K at 82 km and 21 K at 94 km). The
shape of the atomic oxygen profile does not matter for the temperature.
In conclusion, we have strong hints for a monotonically decreasing phase with
increasing height over the entire altitude range relevant for OH. This
implication is ideal for the estimate of effective emission heights. Moreover,
the earlier maxima at higher altitudes indicate that the wave was rising. This
interpretation is also supported by test fits of the diurnal tide with the
same fitting algorithm [cf., e.g., griffith21]. Consequently, the wave
propagation is consistent with the assumed origin of Q2DWs in the lower
mesosphere [e.g., ern13]. The results suggest that the OH-relevant wave
phases are best described by the profile fits for OH(2.1 $\mu$m). There is an
almost linear relation between phase and height over a wide altitude range. We
found that the optimum interval for a linear regression extends from 80 to
97 km. Then, we obtain a very high coefficient of determination $r^{2}$ of 0.995.
The inverse slope of the regression line corresponds to the vertical
wavelength $\lambda_{\mathrm{z}}$ of the wave. It amounts to $31.7\pm 0.6$ km,
which is near the peak of the distribution of W3 Q2DW wavelengths derived by
huang13 based on SABER data. If we only consider the altitude range up to 89
km, where OH(1.6 $\mu$m) can also be used, we obtain almost the same
$\lambda_{\mathrm{z}}$ but with a larger uncertainty ($31.9\pm 1.6$ km). For
OH(1.6 $\mu$m), we then obtain $33.6\pm 1.8$ km, which agrees within the
regression uncertainties.
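The regression step can be summarized as follows: with the phase expressed in units of the wave period, a linear fit of phase versus height yields a slope whose inverse magnitude is the vertical wavelength $\lambda_{\mathrm{z}}$. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def vertical_wavelength(z_km, phase):
    """Linear fit of phase (in units of the wave period) versus height.

    Returns the vertical wavelength in km, i.e. the inverse of the absolute
    slope, together with the intercept of the regression line at 0 km.
    """
    slope, intercept = np.polyfit(z_km, phase, 1)
    return 1.0 / abs(slope), intercept
```

For the OH(2.1 $\mu$m) phases between 80 and 97 km, this kind of fit yields the quoted $\lambda_{\mathrm{z}}$ of about 31.7 km; the negative slope reflects the decrease of the phase with height, i.e. an upward-propagating wave.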
### 4.3 Effective OH Emission Heights
Combining the line-specific effective phases from section 4.1 with the slope
of the relation between phase and height for OH(2.1 $\mu$m) from section 4.2
for the same wave period of 44 h allowed us to estimate the effective emission
heights of 298 OH lines. For a direct conversion, it is just necessary to
subtract the phase for each line from the intercept at 0 km of 3.027 and to
multiply the result by the vertical wavelength of 31.74 km (see section
4.2). However, our calculation was more complex as we also considered possible
phase deviations due to the differences in the lines of sight as well as
geographical and temporal distributions of the X-shooter and SABER
measurements for the Q2DW event in 2017. In order to estimate this effect, we
used the fitted effective phases of 0.388 and 0.345 for the vertically
integrated VERs of the OH channels centered on 1.6 and 2.1 $\mu$m,
respectively, and compared them with X-shooter-based effective phases for sets
of OH lines that are representative of these channels. We weighted the phases
for individual lines as shown in Figure 4b by the product of the measured line
intensity and the channel-specific transmission [Baker et al. (2007)] at the line
position. Moreover, we checked the impact of the fact that our line sample is
not complete (section 2.1). We found that the effective $v^{\prime}$ of our
line mixes did not change for OH(1.6 $\mu$m) but deviated by $+0.08$ for
OH(2.1 $\mu$m) compared to the reference values of 4.57 and 8.29 by noll16.
However, lowering the contribution of the $v^{\prime}=9$ lines to get a match
in the effective $v^{\prime}$ for OH(2.1 $\mu$m), i.e. 8.29, did not affect
the effective phases, which turned out to be 0.379 and 0.327 for the X-shooter
data. Consequently, the SABER phases appear to be shifted by about 0.0135 on
average. We subtracted this value from the intercept of the regression line
before we calculated the effective emission heights. In terms of altitude,
this shift corresponds to a change of $-0.43$ km with an uncertainty of 0.13
km derived from the difference between the results for both channels.
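The direct conversion described above, including the mean phase shift of 0.0135 between the SABER and X-shooter references, amounts to a one-line formula. The defaults below are the values quoted in the text; the function name is illustrative:

```python
def effective_height(phase, intercept=3.027, offset=0.0135, lambda_z=31.74):
    """Effective OH emission height (km) from a line's Q2DW phase.

    phase:     effective line phase in units of the 44 h wave period
    intercept: phase of the OH(2.1 um) regression line at 0 km
    offset:    mean SABER-minus-X-shooter phase shift (equivalent to -0.43 km)
    lambda_z:  vertical wavelength of the wave in km
    """
    return (intercept - offset - phase) * lambda_z
```

For example, the phase of 0.436 found for the $v^{\prime}=2$ reference line maps to about 81.8 km, which matches the lower end of the derived height range.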
For the estimate of the height uncertainties, we also checked whether our
simple linear regression analysis without the consideration of the height-
dependent fit quality (Figure 8) could cause systematic phase offsets. For
this purpose, we calculated a residual phase deviation from the differences
between measured phase and regression line at all heights weighted by the
wave-induced variability. The resulting height uncertainty only amounts to
0.11 km thanks to the convincing fit of the OH(2.1 $\mu$m) profile data.
Concerning the X-shooter-related uncertainties, we used the representative
phase uncertainties 0.005, 0.008, and 0.017 (cf. caption of Figure 4), which
are based on phase differences for OH lines with the same upper level for the
energy differences $\Delta E^{\prime}$ below 400 cm-1, between 400 and 800
cm-1, and above 800 cm-1, respectively. Combined with the small uncertainties
reported above, the effective absolute height errors resulted in about 0.24,
0.30, and 0.55 km, respectively.
The effective heights for the 298 OH $\Lambda$ doublets range from 81.8 to
89.7 km with an average of 84.88 km and a standard deviation of 1.43 km.
Compared to the uncertainties, the line-specific differences are highly
significant. The given altitudes are representative of the maximum emission
variability induced by the Q2DW in the eight investigated nights of 2017. This
can be illustrated for the vertically integrated VERs of the SABER channels at
1.6 and 2.1 $\mu$m, for which we derived effective heights of 83.75 and 85.13
km based on the already stated phases. The profile plots for about 04:00 LT in
Figure 7 indicate that these wave-related heights are between the peak at 83
km and the centroid altitude at 84.6 and 85.6 km of the standard deviation
profiles of the two channels.
If the evening data of both instruments had resulted in reliable wave fits, we
would probably have obtained significantly higher altitudes in agreement with
the standard deviation profiles of these data. Hence, the resulting heights of
our approach depend on the wave properties and the state of the background
atmosphere during the analyzed time interval. Therefore, it is desirable to
also provide heights for the studied OH lines which are representative of
longer time scales. Moreover, there should be a closer relation to the peak or
centroid emission altitudes that are usually used in the literature,
especially with respect to studies of OH rotational temperatures as indicators
of the effective ambient temperature for the OH emission layer. The relevant
heights for temperature and intensity variations can differ by several
kilometers [e.g., swenson98]. For example, we find for the ratio of the
intensities of OH(3-1)P1(1) and OH(3-1)P2(2), which are often used for
rotational temperature studies [e.g., beig03, schmidt13, noll15], a phase shift
of $-0.13$ compared to the mean phase for both lines that corresponds to an
altitude difference of $+3.8$ km with an uncertainty of several hundred meters
(including a possible small discrepancy in the phase–height relations of
temperature and OH intensity as indicated by Figure 8). Note that the general
use of temperatures instead of intensities for height estimates is not
possible as rotational temperatures show much higher measurement uncertainties
and could vary in a different way than the satellite-based kinetic
temperatures due to the non-LTE contributions, which increase with increasing
$v^{\prime}$ and $N^{\prime}$ [Noll et al. (2020)]. For the derivation of the
reference heights, we considered the 14-year averages of the centroid
altitudes of $87.81\pm 0.02$ km for OH(1.6 $\mu$m) and $89.20\pm 0.02$ km for
OH(2.1 $\mu$m) from noll17 that were calculated based on 4,496 SABER profiles
taken close to Cerro Paranal. These values are about 4.06 and 4.07 km higher
than our results for the Q2DW phase fits but they are close to the averages of
the individual centroid altitudes during the investigated event (section 4.2).
The large difference can therefore be mainly explained by the strongly
increasing VERs for emission profiles with lower peaks and the resulting
impact on the vertical variability distribution. It is promising that the
shifts are almost the same for both OH channels as it implies that the Q2DW-
based effective height differences between lines are also representative of
the long-term averages of the centroid emission heights. At least for lines
with low $N^{\prime}$ that mainly contribute to the SABER VERs, the deviations
should be much smaller than the stated uncertainties of a few hundred meters.
Consequently, we can also provide effective OH emission heights for average
conditions at Cerro Paranal by shifting all line-specific heights by $+4.07$
km. The resulting mean height for all 298 $\Lambda$ doublets is 88.95 km,
which is in between the centroid altitudes for the reference profiles of both
SABER OH channels.
Our height estimates are based on a model that assumes a single wave with a
period of 44 h and an amplitude depending on local time. In order to increase
the confidence in the corresponding results, it is important to know how
changes in this model affect the derived emission heights. As already
mentioned in section 4.2, we could have also used a period of 43 h as
indicated by the SABER-based fit results in Figure 6. In comparison to 44 h,
we obtained a mean height for the 298 lines which is 0.09 km higher before and
0.02 km lower after the shift. The standard deviation did not change, i.e. it
is 1.43 km. Hence, the impact of a period change by 1 h is negligible. We also
performed a test for an extremely different period of 56 h, which marks a
secondary minimum of the phase error in Figure 6 and results in a downward-
propagating Q2DW with a $\lambda_{z}$ of $53.9\pm 2.3$ km for OH(2.1 $\mu$m).
Nevertheless, the changes of the two mean heights were only $+0.21$ and
$-0.17$ km and the standard deviation just decreased by $0.08$ km. This
promising result demonstrates that even an unrealistic period can lead to
reliable heights as long as a sufficiently linear phase–height relation with a
significant spread of phases is present. For this reason, periods around 50 h
do not work for the analyzed Q2DW: they mark the reversal of the vertical
propagation direction, which leads to very long $\lambda_{z}$ and hence an
insufficient spread of phases. In section 3, we already discussed a two-wave
model with fixed amplitudes that we checked as an alternative but that
resulted in unlikely wave parameters and low-quality fits. Nevertheless, even
such a model appears to provide useful
of the effective heights of all lines, the best-fitting waves with periods of
43 and 51 h and opposite vertical propagation directions returned 2.18 and
0.72 km. The two values differ strongly, but their average of 1.45 km is very
close to the 1.43 km of our preferred model.
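For illustration, the core of such a fixed-period wave fit can be sketched in a few lines of Python. The function below fits $y=m+a\cos\omega t+b\sin\omega t$ with a prescribed period by linear least squares and returns the mean, the relative amplitude, and the phase in units of the period (the convention used for Figures 4 and 10). This is a simplified stand-in that assumes a constant amplitude, whereas the actual analysis additionally modeled the local-time dependence of $c{\cdot}a$; the function names are illustrative.

```python
import math

def fit_fixed_period_wave(times, values, period):
    """Least-squares fit of y = m + a*cos(w*t) + b*sin(w*t) for a fixed
    wave period. Returns (mean, relative amplitude, phase in period units)."""
    w = 2.0 * math.pi / period
    # Basis functions evaluated at the sampling times.
    basis = [[1.0] * len(times),
             [math.cos(w * t) for t in times],
             [math.sin(w * t) for t in times]]
    # 3x3 normal equations A x = r for the coefficients (m, a, b).
    A = [[sum(bi * bj for bi, bj in zip(basis[i], basis[j]))
          for j in range(3)] for i in range(3)]
    r = [sum(bi * y for bi, y in zip(basis[i], values)) for i in range(3)]
    m, a, b = _solve3(A, r)
    # y = m + R*cos(w*t - phi0) with R = hypot(a, b) and phi0 = atan2(b, a).
    rel_amp = math.hypot(a, b) / m
    phase = (math.atan2(b, a) / (2.0 * math.pi)) % 1.0
    return m, rel_amp, phase

def _solve3(A, r):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
                - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
                + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    d = det3(A)
    x = []
    for k in range(3):
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][k] = r[i]
        x.append(det3(Ak) / d)
    return x
```

For a synthetic series with a 44 h period, a relative amplitude of 0.3, and a phase of 0.25, the fit recovers these values; the sensitivity tests above correspond to repeating such fits with the period fixed to 43, 44, or 56 h.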
Figure 9: Final effective heights for the considered 298 OH lines derived from
a combination of fits of X-shooter line measurements and SABER VER data of the
OH channel centered on 2.1 $\mu$m for the Q2DW event in 2017. The wave-related
effective heights were shifted upward by 4.07 km to be representative of the
mean SABER-related centroid altitudes for Cerro Paranal from Noll et al. (2017). The
abscissa shows the energy of the upper level of the transition minus the
lowest energy for the corresponding vibrational state $v^{\prime}$. The latter
is given by colored numbers. The representative height uncertainties are about
0.24, 0.30, and 0.55 km for energy differences below 400 cm-1, between 400 and
800 cm-1, and above 800 cm-1, respectively.
With the confirmation of the robustness of our results, we now show the
distribution of the reference heights for all investigated OH lines in Figure
9. The altitudes range from 85.9 to 93.8 km, i.e. the maximum difference is
almost 8 km and therefore of the same order as the width of the full OH
emission layer (e.g., Baker & Stair, 1988). The detailed structure of the plotted data
distribution was already discussed in terms of the wave phases in section 2.1.
The distribution is just inverted compared to the phases in Figure 4b. The
effective emission heights increase for higher vibrational and rotational
excitations as expected. Focusing on the levels with $F^{\prime}=1$ and
$N^{\prime}=1$, the altitude difference between $v^{\prime}=9$ (89.14 km) and
2 (85.89 km) amounts to 3.26 km, which is about 0.47 km for a difference of 1
in $v^{\prime}$ on average. This result agrees very well with the
corresponding values found in previous studies of a few OH bands related to
satellite data (Noll et al., 2016; Sheese et al., 2014; von Savigny &
Lednyts’kyy, 2013; von Savigny et al., 2012) and ground-based data combined
with a sodium lidar (Schmidt et al., 2018). The height difference for $\Delta v^{\prime}=1$
decreases with increasing vibrational excitation. Our analysis revealed 0.59
km for $v^{\prime}\leq 4$ but 0.29 km (i.e., half that value) for $v^{\prime}\geq 7$.
This behavior agrees qualitatively with the modeling results of von Savigny et al. (2012) and
is obviously caused by the $v^{\prime}$-dependent Einstein and collisional
rate coefficients. As the previously published results refer to emissions of
bands instead of lines, we also checked the change of the height differences
with increasing rotational energy. Taking only data with $\Delta E^{\prime}$
between 400 and 600 cm-1 ($N^{\prime}$ between 4 and 6) as in section 4.1, we
found almost the same difference between $v^{\prime}=9$ and 2 (3.20 km) but at
altitudes that are about 1.1 km higher. The change of the differences for
$\Delta v^{\prime}=1$ with $v^{\prime}$ also agrees within the uncertainties.
Consequently, the results for entire bands should agree with those of the
significantly contributing brighter lines of low to intermediate $N^{\prime}$.
For the highest rotational levels, we do not see a clear dependence of the
effective emission altitudes on $v^{\prime}$. The differences appear to
decrease with increasing $N^{\prime}$. As the nearly linear height increase
with $\Delta E^{\prime}$ seems to flatten (especially for higher
$v^{\prime}$), there might even be a nearly constant effective emission height
for the highest $N^{\prime}$. This behavior was already modeled by Dodd et al. (1994).
However, quantitatively, there are some discrepancies. Our mean for all lines
with $\Delta E^{\prime}\geq 2,000$ cm-1 is 92.3 km. If we assume that this
height is representative of all $v^{\prime}$, we estimate
$v^{\prime}$-specific maximum changes compared to $\Delta E^{\prime}=0$ cm-1
between 3.2 and 6.4 km. Dodd et al. (1994) only modeled 0 to 2 km and a
reference height for the highest $N^{\prime}$ of 89 km. Based on the data set
of Noll et al. (2017) that we also used for the derivation of our reference
heights, Noll et al. (2018) modeled
rotational level populations with a focus on $v^{\prime}=9$. For two models
with very different rate coefficients for collisions of OH with atomic oxygen,
the maximum height changes resulted in 1.5 and 2.8 km and a maximum emission
height of almost 92 km in the latter case. If we estimate the height changes
from the plotted data for $v^{\prime}=9$, we obtain a rough lower limit of 1.5
to 2 km. These values might be better for a comparison since $v^{\prime}=9$
levels with $\Delta E^{\prime}\geq 2,000$ cm-1 would be far beyond the
exothermicity limit of the hydrogen–ozone reaction (Cosby & Slanger, 2007;
Noll et al., 2018) of about 1,070 cm-1. In any case, the model results appear to match
the correct order of magnitude, although the uncertainties are large. The
model of Noll et al. (2018) also predicts a flattening of the height increase
for high $N^{\prime}$. Moreover, this model gives nearly constant heights for
the lowest $N^{\prime}$, which we do not find in our analysis; instead, our
data suggest a linear increase due to the growing contribution of
the hot population. The nearly constant height for the OH lines with the
highest $\Delta E^{\prime}$ of all $v^{\prime}$ also seems to be supported by
the measured wave amplitudes. The interesting levels are occupied by a pure
hot population (Noll et al., 2020), which appears to show a nearly constant
amplitude for the Q2DW in 2017 (Figure 4). Moreover, the SABER-based fits of
the profiles indicate a steep gradient of the amplitude at the heights
relevant for OH (section 4.2). Hence, an almost constant amplitude would
require a relatively narrow altitude distribution for the OH lines dominated
by the hot population. For lower $N^{\prime}$, the interpretation of the wave
amplitudes is more difficult due to the mixing of cold and hot populations
with different altitude distributions.
### 4.4 Impact of Time Range
Figure 10: Maximum amplitude $c{\cdot}a$ relative to the mean (a) and phase
$\phi$ relative to the period at 15 January 12:00 LT at Cerro Paranal (b) for
all 270 OH lines used for the fit of the Q2DW event in 2019. The plot is
similar to Figure 4.
Figure 11: Fitted phase relative to the period of 44 h as a function of height
for the SABER product kinetic temperature (stars) and the VERs of
O2(a-X)(0-0) at 1.27 $\mu$m (diamonds), OH(1.6 $\mu$m) (squares), and
OH(2.1 $\mu$m) (circles). The time series comprised the 19 limb scans taken at
about 23:00 LT in seven nights of 2019. The reference time for the plotted
phases was 15 January 12:00 LT at Cerro Paranal. Fit uncertainties, which are
useful for relative quality comparisons, are indicated by dotted lines.
Moreover, the plot shows the resulting regression line for a linear fit of the
OH(2.1 $\mu$m) phases in the altitude range between 80 and 93 km.
Our OH height estimates are based on eight nights of a single Q2DW. Although
they appear to be reliable according to the discussion in section 4.3, the
analysis of another wave event would be a good quality check. As already
described in section 2.1, we were able to identify another Q2DW in the
X-shooter data of seven nights from 11 to 18 January 2019. We analyzed the
corresponding line measurements in the same way as for the Q2DW in 2017.
However, we only considered lines with wavelengths shorter than 2.1 $\mu$m
because of too few spectra taken without a $K$-blocking filter (section 2.1).
The resulting fits showed that a period of 44 h also appears to be the best
choice. On the other hand, the LT dependence of the wave amplitude was very
different from the curves in Figure 3. The variation was much smaller. The
maximum and minimum $c{\cdot}a$ values only differed by a factor of 2 for the
two example lines. Moreover, the highest amplitudes were reached between 22:00
and midnight, which corresponds to a shift of $-2$ h compared to the data from
2017. While the maximum relative amplitude for OH(4-2)P1(14) did not change
much (0.49 vs. 0.46 for 2017), it was significantly lower for OH(4-2)P1(1)
(0.31 vs. 0.74 for 2017). As a consequence, hot populations obviously showed a
stronger response to the Q2DW than cold populations, which is the opposite
situation compared to 2017. This reversal was also seen for the dependence of
the amplitude on $v^{\prime}$ for low $N^{\prime}$ (Figure 10a). Nevertheless,
the impact of the mixing of both populations at intermediate rotational
energies (400 to 800 cm-1) was similar. The related lines showed the largest
amplitudes, although only values up to about 0.62 were found. In conclusion,
the two-population model appears to be confirmed by the data from 2019.
However, the Q2DW was weaker and showed different amplitude relations, which
suggests that the properties of such waves are highly variable due to changes
in their generation, propagation, and interaction with the background
atmosphere (including other waves). This high variability had already been
observed before (e.g., Ern et al., 2013; Gu et al., 2019; Tunbridge et al., 2011).
For our purpose, it is important that a wave can be used for estimates of
effective emission heights. Hence, the phase relations are crucial.
Unfortunately, the pattern for our set of OH lines was completely different
from the situation in 2017 (Figure 4b). A clear $v^{\prime}$ dependence was
not found and there was no monotonic decrease of the phase with increasing
$v^{\prime}$ (Figure 10b). Instead, the phase even increased up to about 600
cm-1. The expected behavior was only present for higher energies. An important
detail for the explanation of this unexpected structure is the maximum range
of phases, which amounts to only 0.041. The standard deviation was 0.007.
Consequently, the phase was almost constant. We also fitted the available
SABER data for a better understanding. For the seven nights in 2019, we could
use 19 profiles, all taken at about 23:00 LT (section 2.2). The profile fits
for both OH channels agree quite well with the X-shooter data. There were only
small phase changes without clear direction (Figure 11). For OH(2.1 $\mu$m),
we found a maximum phase difference of only 0.13 for the height interval
between 80 and 93 km, which minimizes the deviation of the fitted phases from
a linear relation. The resulting regression line is almost vertical with a
highly uncertain wavelength $\lambda_{z}$ of $280\pm 240$ km. The rising wave
turns into a descending one with $\lambda_{z}=300\pm 210$ km for a fit up to
97 km as in the case of 2017 (Figure 8). Hence, the propagation direction
remains unclear. The long wavelength may explain the reduced dependence of the
wave amplitude on the OH line parameters (partly related to the emission
height) and local time. The latter might point to a less efficient interaction
of the Q2DW with the migrating diurnal tide, which has a relatively short
$\lambda_{z}$ (e.g., Forbes, 1995) similar to the roughly 32 km of our best
fit of the Q2DW in 2017.
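The wavelength estimate used here follows directly from the linear phase–height relation: for a phase $\phi$ expressed in cycles of the period, $\lambda_z = 1/|d\phi/dz|$, with the sign of the slope indicating the vertical propagation direction. A minimal sketch of this step (a plain unweighted linear regression stands in for the actual fit; the function name is illustrative):

```python
def vertical_wavelength_km(heights_km, phases_cyc):
    """Fit phase (in cycles of the wave period) linearly against altitude and
    return (lambda_z in km, slope in cycles/km), with lambda_z = 1/|slope|."""
    n = len(heights_km)
    zm = sum(heights_km) / n
    pm = sum(phases_cyc) / n
    cov = sum((z - zm) * (p - pm) for z, p in zip(heights_km, phases_cyc))
    var = sum((z - zm) ** 2 for z in heights_km)
    slope = cov / var  # cycles per km
    return 1.0 / abs(slope), slope
```

A phase change of about half a cycle over the 17 km between 80 and 97 km, for example, corresponds to $\lambda_z$ near 32 km, whereas the near-constant phases of the 2019 event translate into a very long and poorly constrained $\lambda_z$.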
Huang et al. (2013) investigated $\lambda_{z}$ of southern W3 Q2DWs based on SABER
temperature data from 2002 to 2011 and found a wide range of possible values
from above 10 km to beyond 100 km at an altitude of 85 km. As other studies
using different techniques also found a high variability for low-to-middle
southern latitudes in the mesopause region (Ern et al., 2013; Guharay et al.,
2013; Reisin, 2021), the differences in $\lambda_{z}$ for the analyzed Q2DWs in
2017 and 2019 do not appear to be uncommon. For a strong event in southern
summer 2002 to 2003, Huang et al. (2013) also investigated the change of the wavelength
over the lifetime of the wave of several weeks. They found a trend of
decreasing $\lambda_{z}$ and less variability with increasing age. As we
investigated data from 11 to 18 January in 2019 but from 26 January to 3
February in 2017, this trend would be consistent with our wave fits. However,
as the Q2DWs can be quite different from year to year, a convincing check
would need a detailed wave analysis over the entire lifetime for the different
years. In any case, the properties of the Q2DW in the selected time interval
in 2019 were not suitable for our phase-sensitive investigation.
## 5 Conclusions
In our study, we derived reference centroid emission heights for average
conditions of various individual OH lines for the first time. This success
required the combination of OH line intensities from ground-based spectra
taken with the astronomical X-shooter spectrograph at Cerro Paranal in Chile
and space-based limb sounding of emission profiles in the two OH-related
channels of the SABER radiometer on TIMED. Moreover, we benefited from the
observation of a strong quasi-2-day wave (Q2DW) with both instruments in eight
nights at the beginning of 2017. For the region around Cerro Paranal and the
given observing period (mainly limited by the X-shooter data coverage), our
wave fits of both data sets (separated depending on local time) with a cosine
function revealed a most likely period of 44 h and vertical wavelength of
about 32 km (based on the SABER channel centered on 2.1 $\mu$m), which makes
this wave event very suitable for phase-sensitive investigations. The
amplitudes strongly varied. Our fits revealed particularly high amplitudes up
to almost 100% of the mean intensity for emission lines related to
intermediate rotational energy between 400 and 800 cm-1 and low vibrational
upper level $v^{\prime}$, the second half of the night, and altitudes below
the emission peak. The high values for lines with intermediate rotational
energies indicate an amplification of the variation due to the mixing of cold
(thermalized) and hot (non-thermalized) OH rotational level populations, which
maximizes for the stated energy range. The local time dependence (with no wave
detection at the beginning of the night) suggests that the Q2DW was strongly
affected by the changing diurnal atmospheric conditions (e.g. by tides). Apart
from the altitude dependence of the intrinsic amplitude of the upward-
propagating wave, the OH emission should also be affected by the increasing
relative atomic oxygen variability with decreasing height. Furthermore, the
wave properties significantly depend on the selection of the geographical area
and the time range. Another Q2DW which was present in the X-shooter data of
2019 indicated a vertical wavelength being too long to provide sufficient
phase sensitivity for our purpose.
From the effective wave phases of each line measured by X-shooter and the
relation between phase and altitude from the height-resolved fits of the SABER
profiles, we first estimated effective emission heights that are
representative of the altitudes with the strongest wave amplitudes during the
studied eight nights in 2017. For OH(2.1 $\mu$m), the phase change between 80
and 97 km was almost perfectly linear, which allowed us to derive reliable
heights without ambiguities. The resulting heights of the 298 investigated OH
emissions cover a range of about 8 km with an average of 84.9 km. Lines with
higher $v^{\prime}$ and/or rotational upper level $N^{\prime}$ show higher
effective altitudes. At low rotational energies, the height increase appears
to be almost linear, whereas lines with high $N^{\prime}$ indicate a
flattening of the trend and a decreasing difference between different
$v^{\prime}$. The latter could imply the presence of a universal hot
population. Finally, we derived line-specific reference altitudes that are
representative of the long-term centroid heights at Cerro Paranal. In
combination with results for both SABER OH channels from a previous study, we
found that a fixed positive shift of about 4.1 km is obviously sufficient for
our line set. The resulting heights therefore range from 85.9 to 93.8 km with
an average of 88.9 km.
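Schematically, the final height assignment combines two steps: inversion of the linear SABER phase–height relation for the X-shooter phase of each line, and addition of the constant shift to long-term centroid heights. A minimal sketch, in which the slope and reference values used in the example below are hypothetical; only the $+4.07$ km shift is taken from the analysis above:

```python
def effective_height_km(phase_cyc, slope_cyc_per_km, z_ref_km, phase_ref_cyc,
                        longterm_shift_km=4.07):
    """Invert the linear relation phase(z) = phase_ref + slope*(z - z_ref)
    for a measured line phase (in cycles of the period), then shift the
    wave-based height to a long-term centroid height (+4.07 km here)."""
    z_wave = z_ref_km + (phase_cyc - phase_ref_cyc) / slope_cyc_per_km
    return z_wave + longterm_shift_km
```

With a slope of $-1/32$ cycles per km (i.e. $\lambda_z=32$ km), two lines whose phases differ by 0.1 cycles end up 3.2 km apart, consistent with the few-km spread found for the 298 lines.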
Our results may provide important constraints for a better modeling of the
layering of OH emission. Moreover, the line-dependent heights could be used to
study other wave events where suitable profile data are not available. In
particular, gravity waves, whose periods are too short for repeated satellite
observations, may constitute an appealing target. Finally, our approach could
also be applied to other airglow emissions. The X-shooter spectra contain
various candidates. Consequently, the results of our study and possible future
applications are quite promising with respect to a better understanding of the
chemistry and dynamics of the Earth’s mesopause region.
## Open Research
The basic X-shooter data for this project originate from the ESO Science
Archive Facility at http://archive.eso.org and are related to different
observing programs. In particular, raw NIR-arm spectra taken between 26
January and 3 February 2017 and between 11 and 18 January 2019 were processed
(using the corresponding calibration data) and then analyzed. This project
also made use of SABER v2.0 limb-sounding products at
http://saber.gats-inc.com from January and February of the years 2017 and
2019. Both archives
can be accessed after registration.
The input data for the fitting procedure described in section 3 and the
results as shown in the figures are available at the public repository Zenodo
(Noll et al., 2022). In detail, the data release includes time series of the
intensities of the investigated OH lines measured in the X-shooter spectra for
the analyzed periods binned in 30 min steps. The corresponding SABER profiles
for the two OH channels as well as the vertically integrated OH emissions are
also provided as time series. Moreover, there are tables with the plot data of
the 11 figures. For the Q2DW from 2019, for which only two figures are
included in the paper, some additional tables similar to the ones for the
Q2DW from 2017 are provided.
###### Acknowledgements.
Stefan Noll is financed by the project NO 1328/1-3 of the German Research
Foundation (DFG). We thank Holger Winkler from Universität Bremen for his
contribution to the discussion and Sabine Möhler from ESO for her support with
respect to the X-shooter calibration data. Moreover, we are grateful to the
three anonymous reviewers for their valuable comments.
## References
* Adler-Golden, S. (1997). Kinetic parameters for OH nightglow modeling consistent with recent laboratory measurements. J. Geophys. Res., 102, 19969–19976. 10.1029/97JA01622
* Baker, D. J., & Stair, A. T., Jr. (1988). Rocket measurements of the altitude distributions of the hydroxyl airglow. Phys. Scripta, 37, 611–622. 10.1088/0031-8949/37/4/021
* Baker, D. J., Thurgood, B. K., Harrison, W. K., Mlynczak, M. G., & Russell, J. M. (2007). Equatorial enhancement of the nighttime OH mesospheric infrared airglow. Phys. Scr., 75, 615–619. 10.1088/0031-8949/75/5/004
* Bates, D. R., & Nicolet, M. (1950). The photochemistry of atmospheric water vapor. J. Geophys. Res., 55, 301–327. 10.1029/JZ055i003p00301
* Beig, G., Keckhut, P., Lowe, R. P., Roble, R. G., Mlynczak, M. G., Scheer, J., … Fadnavis, S. (2003). Review of mesospheric temperature trends. Rev. Geophys., 41, RG1015. 10.1029/2002RG000121
* Brooke, J. S. A., Bernath, P. F., Western, C. M., Sneden, C., Afşar, M., Li, G., & Gordon, I. E. (2016). Line strengths of rovibrational and rotational transitions in the X2$\Pi$ ground state of OH. J. Quant. Spectrosc. Radiat. Transf., 168, 142–157. 10.1016/j.jqsrt.2015.07.021
* Clough, S. A., Shephard, M. W., Mlawer, E. J., Delamere, J. S., Iacono, M. J., Cady-Pereira, K., … Brown, P. D. (2005). Atmospheric radiative transfer modeling: a summary of the AER codes. J. Quant. Spectrosc. Radiat. Transf., 91, 233–244. 10.1016/j.jqsrt.2004.05.058
* Cosby, P. C., & Slanger, T. G. (2007). OH spectroscopy and chemistry investigated with astronomical sky spectra. Can. J. Phys., 85, 77–99. 10.1139/P06-088
* Dawkins, E. C. M., Feofilov, A., Rezac, L., Kutepov, A. A., Janches, D., Höffner, J., … Russell, J. (2018). Validation of SABER v2.0 operational temperature data with ground-based lidars in the mesosphere-lower thermosphere region (75–105 km). J. Geophys. Res., 123, 9916–9934. 10.1029/2018JD028742
* Dodd, J. A., Armstrong, P. S., Lipson, S. J., Lowell, J. R., Blumberg, W. A. M., Nadile, R. M., … Green, B. D. (1994). Analysis of hydroxyl earthlimb airglow emissions: Kinetic model for state-to-state dynamics of OH(v,N). J. Geophys. Res., 99, 3559–3586. 10.1029/93JD03338
* Ern, M., Preusse, P., Kalisch, S., Kaufmann, M., & Riese, M. (2013). Role of gravity waves in the forcing of quasi two-day waves in the mesosphere: An observational study. J. Geophys. Res. Atmos., 118, 3467–3485. 10.1029/2012JD018208
* Forbes, J. M. (1995). Tidal and planetary waves. Geophys. Monogr. Series, 87, 67. 10.1029/GM087p0067
* García-Comas, M., José López-González, M., González-Galindo, F., de la Rosa, J. L., López-Puertas, M., Shepherd, M. G., & Shepherd, G. G. (2017). Mesospheric OH layer altitude at midlatitudes: variability over the Sierra Nevada Observatory in Granada, Spain (37° N, 3° W). Ann. Geophys., 35, 1151–1164. 10.5194/angeo-35-1151-2017
* Griffith, M. J., Dempsey, S. M., Jackson, D. R., Moffat-Griffin, T., & Mitchell, N. J. (2021). Winds and tides of the Extended Unified Model in the mesosphere and lower thermosphere validated with meteor radar observations. Ann. Geophys., 39, 487–514. 10.5194/angeo-39-487-2021
* Gu, S. Y., Dou, X. K., Yang, C. Y., Jia, M., Huang, K. M., Huang, C. M., & Zhang, S. D. (2019). Climatology and anomaly of the quasi-two-day wave behaviors during 2003–2018 austral summer periods. J. Geophys. Res. Space Phys., 124, 544–556. 10.1029/2018JA026047
* Guharay, A., Batista, P. P., Clemesha, B. R., & Schuch, N. J. (2013). Study of the quasi-two-day wave during summer over Santa Maria, Brazil using meteor radar observations. J. Atmos. Sol.-Terr. Phys., 92, 83–93. 10.1016/j.jastp.2012.10.005
* He, M., Chau, J. L., Forbes, J. M., Zhang, X., Englert, C. R., Harding, B. J., … Makela, J. J. (2021). Quasi-2-day wave in low-latitude atmospheric winds as viewed from the ground and space during January–March, 2020. Geophys. Res. Lett., 48, e93466. 10.1029/2021GL093466
* Hecht, J. H., Walterscheid, R. L., Gelinas, L. J., Vincent, R. A., Reid, I. M., & Woithe, J. M. (2010). Observations of the phase-locked 2 day wave over the Australian sector using medium-frequency radar and airglow data. J. Geophys. Res. Atmos., 115, D16115. 10.1029/2009JD013772
* Huang, Y. Y., Zhang, S. D., Yi, F., Huang, C. M., Huang, K. M., Gan, Q., & Gong, Y. (2013). Global climatological variability of quasi-two-day waves revealed by TIMED/SABER observations. Ann. Geophys., 31, 1061–1075. 10.5194/angeo-31-1061-2013
* Kalogerakis, K. S. (2019). Technical note: Bimodality in mesospheric OH rotational population distributions and implications for temperature measurements. Atmos. Chem. Phys., 19, 2629–2634. 10.5194/acp-19-2629-2019
* Kalogerakis, K. S., Matsiev, D., Cosby, P. C., Dodd, J. A., Falcinelli, S., Hedin, J., … Thiebaud, J. E. (2018). New insights for mesospheric OH: Multi-quantum vibrational relaxation as a driver for non-local thermodynamic equilibrium. Ann. Geophys., 36, 13–24. 10.5194/angeo-36-13-2018
* Kausch, W., Noll, S., Smette, A., Kimeswenger, S., Barden, M., Szyszka, C., … Kerber, F. (2015). Molecfit: A general tool for telluric absorption correction. II. Quantitative evaluation on ESO-VLT/X-Shooter spectra. Astron. Astrophys., 576, A78. 10.1051/0004-6361/201423909
* Kerber, F., Rose, T., Chacón, A., Cuevas, O., Czekala, H., Hanuschik, R., … Naylor, D. A. (2012). A water vapour monitor at Paranal Observatory. In I. S. McLean, S. K. Ramsay, & H. Takami (Eds.), Ground-based and Airborne Instrumentation for Astronomy IV (Vol. 8446, 84463N). 10.1117/12.924340
* Kubota, M., Ishii, M., Shiokawa, K., Ejiri, M. K., & Ogawa, T. (1999). Height measurements of nightglow structures observed by all-sky imagers. Adv. Space Res., 24, 593–596. 10.1016/S0273-1177(99)00206-9
* Limpasuvan, V., Wu, D. L., Schwartz, M. J., Waters, J. W., Wu, Q., & Killeen, T. L. (2005). The two-day wave in EOS MLS temperature and wind measurements during 2004–2005 winter. Geophys. Res. Lett., 32, L17809. 10.1029/2005GL023396
* Liu, G., & Shepherd, G. G. (2006). An empirical model for the altitude of the OH nightglow emission. Geophys. Res. Lett., 33, L09805. 10.1029/2005GL025297
* Llewellyn, E. J., & Long, B. H. (1978). The OH Meinel bands in the airglow - the radiative lifetime. Can. J. Phys., 56, 581–586. 10.1139/p78-076
* López-González, M. J., López-Moreno, J. J., López-Valverde, M. A., & Rodrigo, R. (1989). Behaviour of the O2 infrared atmospheric (0-0) band in the middle atmosphere during evening twilight and at night. Planet. Space Sci., 37, 61–72. 10.1016/0032-0633(89)90069-X
* Makhlouf, U. B., Picard, R. H., & Winick, J. R. (1995). Photochemical-dynamical modeling of the measured response of airglow to gravity waves 1. Basic model for OH airglow. J. Geophys. Res., 100, 11289–11312. 10.1029/94JD03327
* Marsh, D. R., Smith, A. K., Mlynczak, M. G., & Russell, J. M., III. (2006). SABER observations of the OH Meinel airglow variability near the mesopause. J. Geophys. Res., 111, A10S05. 10.1029/2005JA011451
* McDade, I. C. (1991). The altitude dependence of the OH(X 2$\Pi$) vibrational distribution in the nightglow: Some model expectations. Planet. Space Sci., 39, 1049–1057. 10.1016/0032-0633(91)90112-N
* Meinel, A. B. (1950). OH emission bands in the spectrum of the night sky. I. Astrophys. J., 111, 555–564. 10.1086/145296
* Mlynczak, M. G., Martin-Torres, F. J., Crowley, G., Kratz, D. P., Funke, B., Lu, G., … Paxton, L. (2005). Energy transport in the thermosphere during the solar storms of April 2002. J. Geophys. Res., 110, A12S25. 10.1029/2005JA011141
* Modigliani, A., Goldoni, P., Royer, F., Haigron, R., Guglielmi, L., François, P., … Christensen, L. (2010). The X-shooter pipeline. In D. R. Silva, A. B. Peck, & B. T. Soifer (Eds.), Observatory Operations: Strategies, Processes, and Systems III (Vol. 7737, 773728). 10.1117/12.857211
* Moehler, S., Modigliani, A., Freudling, W., Giammichele, N., Gianninas, A., Gonneau, A., … Vinther, J. (2014). Flux calibration of medium-resolution spectra from 300 nm to 2500 nm: Model reference spectra and telluric correction. Astron. Astrophys., 568, A9. 10.1051/0004-6361/201423790
* Moreels, G., Clairemidi, J., Faivre, M., Mougin-Sisini, D., Kouahla, M. N., Meriwether, J. W., … Veliz, O. (2008). Stereoscopic imaging of the hydroxyl emissive layer at low latitudes. Planet. Space Sci., 56, 1467–1479. 10.1016/j.pss.2008.04.012
* Mulligan, F. J., Dyrland, M. E., Sigernes, F., & Deehr, C. S. (2009). Inferring hydroxyl layer peak heights from ground-based measurements of OH(6-2) band integrated emission rate at Longyearbyen (78° N, 16° E). Ann. Geophys., 27, 4197–4205. 10.5194/angeo-27-4197-2009
* Nikoukar, R., Swenson, G. R., Liu, A. Z., & Kamalabadi, F. (2007). On the variability of mesospheric OH emission profiles. J. Geophys. Res. Atmos., 112, D19109. 10.1029/2007JD008601
* Noll, S., Kausch, W., Barden, M., Jones, A. M., Szyszka, C., Kimeswenger, S., & Vinther, J. (2012). An atmospheric radiation model for Cerro Paranal. I. The optical spectral range. Astron. Astrophys., 543, A92. 10.1051/0004-6361/201219040
* Noll, S., Kausch, W., Kimeswenger, S., Unterguggenberger, S., & Jones, A. M. (2015). OH populations and temperatures from simultaneous spectroscopic observations of 25 bands. Atmos. Chem. Phys., 15, 3647–3669. 10.5194/acp-15-3647-2015
* Noll, S., Kausch, W., Kimeswenger, S., Unterguggenberger, S., & Jones, A. M. (2016). Comparison of VLT/X-shooter OH and O2 rotational temperatures with consideration of TIMED/SABER emission and temperature profiles. Atmos. Chem. Phys., 16, 5021–5042. 10.5194/acp-16-5021-2016
* Noll, S., Kimeswenger, S., Proxauf, B., Unterguggenberger, S., Kausch, W., & Jones, A. M. (2017). 15 years of VLT/UVES OH intensities and temperatures in comparison with TIMED/SABER data. J. Atmos. Sol.-Terr. Phys., 163, 54–69. 10.1016/j.jastp.2017.05.012
* Noll, S., Proxauf, B., Kausch, W., & Kimeswenger, S. (2018). Mechanisms for varying non-LTE contributions to OH rotational temperatures from measurements and modelling. II. Kinetic model. J. Atmos. Sol.-Terr. Phys., 175, 100–119. 10.1016/j.jastp.2018.05.005
* Noll, S., Schmidt, C., Kausch, W., Bittner, M., & Kimeswenger, S. (2022). Data for the paper "Effective emission heights of various OH lines from X-shooter and SABER observations of a passing quasi-2-day wave" [Dataset]. Zenodo. 10.5281/zenodo.7371927
* Noll, S., Winkler, H., Goussev, O., & Proxauf, B. (2020). OH level populations and accuracies of Einstein-A coefficients from hundreds of measured lines. Atmos. Chem. Phys., 20, 5269–5292. 10.5194/acp-20-5269-2020
* Oliva, E., Origlia, L., Scuderi, S., Benatti, S., Carleo, I., Lapenna, E., … Pedani, M. (2015). Lines and continuum sky emission in the near infrared: observational constraints from deep high spectral resolution spectra with GIANO-TNG. Astron. Astrophys., 581, A47. 10.1051/0004-6361/201526291
* Palo . (1999) palo99Palo, SE., Roble, RG. Hagan, ME. 199907\. Middle atmosphere effects of the quasi-two-day wave determined from a General Circulation Model Middle atmosphere effects of the quasi-two-day wave determined from a General Circulation Model. Earth, Planets and Space51629-647. 10.1186/BF03353221
* Pedatella Forbes (2012) pedatella12Pedatella, NM. Forbes, JM. 201201\. The quasi 2 day wave and spatial-temporal variability of the OH emission and ionosphere The quasi 2 day wave and spatial-temporal variability of the OH emission and ionosphere. J. Geophys. Res. Space Phys.117A01320. 10.1029/2011JA017186
* Plumb (1983) plumb83Plumb, RA. 198301\. Baroclinic Instability of the Summer Mesosphere: A Mechanism for the Quasi-Two-Day Wave?. Baroclinic Instability of the Summer Mesosphere: A Mechanism for the Quasi-Two-Day Wave?. J. Atmos. Sci.40262-262. 10.1175/1520-0469(1983)040¡0262:BIOTSM¿2.0.CO;2
* Reisin (2021) reisin21Reisin, ER. 202107\. Quasi-two-day wave characteristics in the mesopause region from airglow data measured at El Leoncito (31.8°S, 69.3°W) Quasi-two-day wave characteristics in the mesopause region from airglow data measured at El Leoncito (31.8°S, 69.3°W). J. Atmos. Sol.-Terr. Phys.218105613. 10.1016/j.jastp.2021.105613
* Remsberg . (2008) remsberg08Remsberg, EE., Marshall, BT., Garcia-Comas, M., Krueger, D., Lingenfelser, GS., Martin-Torres, J.Thompson, RE. 200809\. Assessment of the quality of the Version 1.07 temperature-versus-pressure profiles of the middle atmosphere from TIMED/SABER Assessment of the quality of the Version 1.07 temperature-versus-pressure profiles of the middle atmosphere from TIMED/SABER. J. Geophys. Res. Atmos.)113D17101. 10.1029/2008JD010013
* Rousselot . (2000) rousselot00Rousselot, P., Lidman, C., Cuby, JG., Moreels, G. Monnet, G. 200002\. Night-sky spectral atlas of OH emission lines in the near-infrared Night-sky spectral atlas of OH emission lines in the near-infrared. Astron. Astrophys.3541134-1150.
* Russell . (1999) russell99Russell, JM., III, Mlynczak, MG., Gordley, LL., Tansock, J. Esplin, R. 199910\. Overview of the SABER experiment and preliminary calibration results Overview of the SABER experiment and preliminary calibration results. AM. Larar (), Optical Spectroscopic Techniques and Instrumentation for Atmospheric and Space Research III Optical spectroscopic techniques and instrumentation for atmospheric and space research iii ( 3756, 277-288). 10.1117/12.366382
* Salby (1981) salby81Salby, ML. 198109\. Rossby normal modes in nonuniform background configurations. II - Equinox and solstice conditions Rossby normal modes in nonuniform background configurations. II - Equinox and solstice conditions. J. Atmos. Sci.381827-1840. 10.1175/1520-0469(1981)038¡1827:RNMINB¿2.0.CO;2
* Schmidt . (2018) schmidt18Schmidt, C., Dunker, T., Lichtenstern, S., Scheer, J., Wüst, S., Hoppe, UP. Bittner, M. 201808\. Derivation of vertical wavelengths of gravity waves in the MLT-region from multispectral airglow observations Derivation of vertical wavelengths of gravity waves in the MLT-region from multispectral airglow observations. J. Atmos. Sol.-Terr. Phys.173119-127. 10.1016/j.jastp.2018.03.002
* Schmidt . (2013) schmidt13Schmidt, C., Höppner, K. Bittner, M. 201309\. A ground-based spectrometer equipped with an InGaAs array for routine observations of OH(3-1) rotational temperatures in the mesopause region A ground-based spectrometer equipped with an InGaAs array for routine observations of OH(3-1) rotational temperatures in the mesopause region. J. Atmos. Sol.-Terr. Phys.102125-139. 10.1016/j.jastp.2013.05.001
* Sheese . (2014) sheese14Sheese, PE., Llewellyn, EJ., Gattinger, RL. Strong, K. 201410\. OH Meinel band nightglow profiles from OSIRIS observations OH Meinel band nightglow profiles from OSIRIS observations. J. Geophys. Res.11911,417-11,428. 10.1002/2014JD021617
* Smette . (2015) smette15Smette, A., Sana, H., Noll, S., Horst, H., Kausch, W., Kimeswenger, S.Taylor, J. 201504\. Molecfit: A general tool for telluric absorption correction. I. Method and application to ESO instruments Molecfit: A general tool for telluric absorption correction. I. Method and application to ESO instruments. Astron. Astrophys.576A77. 10.1051/0004-6361/201423932
* Smith . (2010) smith10Smith, AK., Marsh, DR., Mlynczak, MG. Mast, JC. 201009\. Temporal variations of atomic oxygen in the upper mesosphere from SABER Temporal variations of atomic oxygen in the upper mesosphere from SABER. J. Geophys. Res.115D18309. 10.1029/2009JD013434
* Swenson Gardner (1998) swenson98Swenson, GR. Gardner, CS. 199801\. Analytical models for the responses of the mesospheric OH∗ and Na layers to atmospheric gravity waves Analytical models for the responses of the mesospheric OH∗ and Na layers to atmospheric gravity waves. J. Geophys. Res.1036271-6294. 10.1029/97JD02985
* Takahashi Batista (1981) takahashi81Takahashi, H. Batista, PP. 198107\. Simultaneous measurements of OH(9,4), (8,3), (7,2), (6,2) and (5,1) bands in the airglow Simultaneous measurements of OH(9,4), (8,3), (7,2), (6,2) and (5,1) bands in the airglow. J. Geophys. Res.865632-5642. 10.1029/JA086iA07p05632
* Teiser von Savigny (2017) teiser17Teiser, G. von Savigny, C. 201708\. Variability of OH(3-1) and OH(6-2) emission altitude and volume emission rate from 2003 to 2011 Variability of OH(3-1) and OH(6-2) emission altitude and volume emission rate from 2003 to 2011\. J. Atmos. Sol.-Terr. Phys.16128-42. 10.1016/j.jastp.2017.04.010
* Tunbridge . (2011) tunbridge11Tunbridge, VM., Sandford, DJ. Mitchell, NJ. 201106\. Zonal wave numbers of the summertime 2 day planetary wave observed in the mesosphere by EOS Aura Microwave Limb Sounder Zonal wave numbers of the summertime 2 day planetary wave observed in the mesosphere by EOS Aura Microwave Limb Sounder. J. Geophys. Res. Atmos.116D11103. 10.1029/2010JD014567
* Unterguggenberger . (2017) unterguggenberger17Unterguggenberger, S., Noll, S., Feng, W., Plane, JMC., Kausch, W., Kimeswenger, S.Moehler, S. 201703\. Measuring FeO variation using astronomical spectroscopic observations Measuring FeO variation using astronomical spectroscopic observations. Atmos. Chem. Phys.174177-4187. 10.5194/acp-17-4177-2017
* van Rhijn (1921) vanrhijn21van Rhijn, PJ. 1921\. On the brightness of the sky at night and the total amount of starlight On the brightness of the sky at night and the total amount of starlight. Publ. Kapteyn Astron. Lab. Groningen311-83.
* Vernet . (2011) vernet11Vernet, J., Dekker, H., D’Odorico, S., Kaper, L., Kjaergaard, P., Hammer, F.Zacchei, A. 201112\. X-shooter, the new wide band intermediate resolution spectrograph at the ESO Very Large Telescope X-shooter, the new wide band intermediate resolution spectrograph at the ESO Very Large Telescope. Astron. Astrophys.536A105. 10.1051/0004-6361/201117752
* von Savigny (2015) savigny15von Savigny, C. 201505\. Variability of OH(3-1) emission altitude from 2003 to 2011: Long-term stability and universality of the emission rate-altitude relationship Variability of OH(3-1) emission altitude from 2003 to 2011: Long-term stability and universality of the emission rate-altitude relationship. J. Atmos. Sol.-Terr. Phys.127120-128. 10.1016/j.jastp.2015.02.001
* von Savigny Lednyts’kyy (2013) savigny13von Savigny, C. Lednyts’kyy, O. 201311\. On the relationship between atomic oxygen and vertical shifts between OH Meinel bands originating from different vibrational levels On the relationship between atomic oxygen and vertical shifts between OH Meinel bands originating from different vibrational levels. Geophys. Res. Lett.405821-5825. 10.1002/2013GL058017
* von Savigny . (2012) savigny12von Savigny, C., McDade, IC., Eichmann, KU. Burrows, JP. 201209\. On the dependence of the OH∗ Meinel emission altitude on vibrational level: SCIAMACHY observations and model simulations On the dependence of the OH∗ Meinel emission altitude on vibrational level: SCIAMACHY observations and model simulations. Atmos. Chem. Phys.128813-8828. 10.5194/acp-12-8813-2012
* Walterscheid . (2015) walterscheid15Walterscheid, RL., Hecht, JH., Gelinas, LJ., MacKinnon, A., Vincent, RA., Reid, IM.Pautet, PD. 201503\. Simultaneous observations of the phase-locked 2 day wave at Adelaide, Cerro Pachon, and Darwin Simultaneous observations of the phase-locked 2 day wave at Adelaide, Cerro Pachon, and Darwin. J. Geophys. Res. Atmos.1201808-1825. 10.1002/2014JD022016
* Walterscheid Vincent (1996) walterscheid96Walterscheid, RL. Vincent, RA. 199611\. Tidal generation of the phase-locked 2-day wave in the southern hemisphere summer by wave-wave interactions Tidal generation of the phase-locked 2-day wave in the southern hemisphere summer by wave-wave interactions. J. Geophys. Res.101D2126,567-26,576. 10.1029/96JD02248
* Wu . (1993) wu93Wu, DL., Hays, PB., Skinner, WR., Marshall, AR., Burrage, MD., Lieberman, RS. Ortland, DA. 199312\. Observations of the quasi 2-day wave from the High Resolution Doppler Imager on Uars Observations of the quasi 2-day wave from the High Resolution Doppler Imager on Uars. Geophys. Res. Lett.202853-2856. 10.1029/93GL03008
* Wüst . (2020) wuest20Wüst, S., Bittner, M., Yee, JH., Mlynczak, MG. Russell, I., James M. 202011\. Variability of the Brunt-Väisälä frequency at the OH∗-airglow layer height at low and midlatitudes Variability of the Brunt-Väisälä frequency at the OH∗-airglow layer height at low and midlatitudes. Atmos. Meas. Tech.136067-6093. 10.5194/amt-13-6067-2020
* Xiong . (2018) xiong18Xiong, J., Wan, W., Ding, F., Liu, L., Hu, L. Yan, C. 201804\. Two Day Wave Traveling Westward With Wave Number 1 During the Sudden Stratospheric Warming in January 2017 Two Day Wave Traveling Westward With Wave Number 1 During the Sudden Stratospheric Warming in January 2017. J. Geophys. Res. Space Phys.1233005-3013. 10.1002/2017JA025171
* J. Xu . (2012) xu12Xu, J., Gao, H., Smith, AK. Zhu, Y. 201201\. Using TIMED/SABER nightglow observations to investigate hydroxyl emission mechanisms in the mesopause region Using TIMED/SABER nightglow observations to investigate hydroxyl emission mechanisms in the mesopause region. J. Geophys. Res.117D02301. 10.1029/2011JD016342
* JY. Xu . (2020) xu20Xu, JY., Liu, WJ., Bian, JC., Liu, X., Yuan, W. Wang, C. 202007\. Method for retrieval of atmospheric water vapor using OH airglow for correction of astronomical observations Method for retrieval of atmospheric water vapor using OH airglow for correction of astronomical observations. Astron. Astrophys.639A29. 10.1051/0004-6361/201834621
* Yee . (1997) yee97Yee, JH., Crowley, G., Roble, RG., Skinner, WR., Burrage, MD. Hays, PB. 199709\. Global simulations and observations of O(1S), O2(1$\Sigma$) and OH mesospheric nightglow emissions Global simulations and observations of O(1S), O2(1$\Sigma$) and OH mesospheric nightglow emissions. J. Geophys. Res.10219949-19968. 10.1029/96JA01833
* Yu . (2017) yu17Yu, T., Zuo, X., Xia, C., Li, M., Huang, C., Mao, T.Liu, L. 201704\. Peak height of OH airglow derived from simultaneous observations a Fabry-Perot interferometer and a meteor radar Peak height of OH airglow derived from simultaneous observations a Fabry-Perot interferometer and a meteor radar. J. Geophys. Res. Space Phys.1224628-4637. 10.1002/2016JA023743
* Yue . (2012) yue12Yue, J., Liu, HL. Chang, LC. 201203\. Numerical investigation of the quasi 2 day wave in the mesosphere and lower thermosphere Numerical investigation of the quasi 2 day wave in the mesosphere and lower thermosphere. J. Geophys. Res. Atmos.)117D05111. 10.1029/2011JD016574
|
# On Bernoulli trials with unequal harmonic success probabilities
Thierry Huillet
Laboratoire de Physique Théorique et Modélisation
CY Cergy Paris University, CNRS UMR-8089
Site de Saint Martin, 2 avenue Adolphe-Chauvin
95302 Cergy-Pontoise, France <EMAIL_ADDRESS>
Martin Möhle
Fachbereich Mathematik
Eberhard Karls Universität Tübingen
Auf der Morgenstelle 10
72076 Tübingen, Germany <EMAIL_ADDRESS>
###### Abstract.
A Bernoulli scheme with unequal harmonic success probabilities is
investigated, together with some of its natural extensions. The study includes
the number of successes over some time window, the times to (between)
successive successes and the time to the first success. Large sample
asymptotics, statistical parameter estimation, and relations to Sibuya
distributions and Yule–Simon distributions are discussed. This toy model is relevant to several applications, including reliability, species sampling problems, record breaking, and random walks with disasters.
###### Key words and phrases:
Bernoulli variables, Ewens–Pitman sampling formula, Markov chains, Rényi’s
records, Sibuya distribution, Stirling numbers, Yule–Simon distribution
###### 2020 Mathematics Subject Classification:
Primary: 60J10; Secondary: 60C05
## 1\. Introduction
Introduce two weights $w_{1}>0$ and $w_{2}\geq 0$, put $w:=w_{1}+w_{2}>0$, and
let $I_{1},I_{2},\ldots$ be independent Bernoulli random variables with
‘harmonic’ success probabilities
(1) $\mathbf{P}(I_{m}=1):=\frac{w_{1}}{w+m-1},\qquad
m\in\mathbb{N}:=\\{1,2,\ldots\\},$
decreasing inversely proportionally to the number $m$ of the trial. We note the following property of Bernoulli trials with such success probabilities: the first success time $K_{1}^{+}:=\inf\\{m\in\mathbb{N}:I_{m}=1\\}$ is typically either very small or very large, owing to the power-law tails of this random variable, see (19) below. In words: if the $I_{m}$’s fail to take the value $1$ during the first steps, this tendency is enhanced in the subsequent steps, resulting in large (heavy-tailed) values of $K_{1}^{+}$. So $K_{1}^{+}$ either takes small values close to $1$ (the mode of $K_{1}^{+}$ is at $1$, with probability mass $w_{1}/w$ decreasing in $w_{2}/w_{1}$ if $w_{2}>0$) or very large values (responsible for its heavy-tailedness, with tail index $w_{1}$): small values of $w_{2}/w_{1}$ favor an early first success, while small values of $w_{1}$ favor a late first success. Thus, the larger the number of steps for which no success was observed, the smaller the probability of seeing a success in the next step, even though this probability decays only slowly (harmonically in our case). This may be seen from the following argument:
Let $J_{m}:=1-I_{m}$, $m\in\mathbb{N}$, and let $M_{n}=\prod_{m=1}^{n}J_{m}$.
The event $M_{n}=1$ is realized when no success was observed till time $n$.
$M_{n}$ is a multiplicative random walk
$M_{n+1}=M_{n}J_{n+1},\quad M_{0}=1,$
for which the probability of a success at step $n+1$ given no success till $n$
is $\mathbf{P}(M_{n+1}=0\mid M_{n}=1)=\mathbf{P}(I_{n+1}=1)$. If $w_{2}=0$,
then the probability $\mathbf{P}(M_{n}=1)=\prod_{m=1}^{n}\mathbf{P}(I_{m}=0)$
of no success by time $n\in\mathbb{N}$ is obviously equal to $0$, since
$I_{1}=1$ in this case. If $w_{2}>0$ then this probability is equal to
$\prod_{m=1}^{n}\mathbf{P}(I_{m}=0)=\prod_{m=1}^{n}\frac{w_{2}+m-1}{w+m-1}=\frac{\Gamma(w)\Gamma(w_{2}+n)}{\Gamma(w_{2})\Gamma(w+n)}\sim\frac{\Gamma(w)}{\Gamma(w_{2})}\frac{1}{n^{w_{1}}},\quad
n\to\infty,$
since $\Gamma(c+n)\sim n^{c}\Gamma(n)$ as $n\to\infty$ for any $c>0$. Thus,
the probability of no success by time $n$ is small for large $n$, since
$w_{1}>0$.
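As a quick numerical illustration of this decay (a minimal sketch; the parameter values $w_{1}=0.7$, $w_{2}=1.3$ and the horizon $n=2000$ are arbitrary choices), the exact no-success probability, its Gamma-ratio closed form, and the $n^{-w_{1}}$ asymptotic can be compared:

```python
import math

def p_no_success(n, w1, w2):
    """P(M_n = 1): probability of no success in the first n trials."""
    w = w1 + w2
    p = 1.0
    for m in range(1, n + 1):
        p *= (w2 + m - 1) / (w + m - 1)
    return p

def gamma_ratio(n, w1, w2):
    """Closed form Gamma(w)Gamma(w2+n) / (Gamma(w2)Gamma(w+n))."""
    w = w1 + w2
    return math.exp(math.lgamma(w) + math.lgamma(w2 + n)
                    - math.lgamma(w2) - math.lgamma(w + n))

w1, w2, n = 0.7, 1.3, 2000   # arbitrary illustration values
exact = p_no_success(n, w1, w2)
closed = gamma_ratio(n, w1, w2)
# asymptotic Gamma(w)/Gamma(w2) * n^{-w1}
asym = math.exp(math.lgamma(w1 + w2) - math.lgamma(w2)) / n ** w1
print(exact, closed, asym)
```

The product and the Gamma ratio agree to machine precision, and the asymptotic form is accurate up to the expected $O(1/n)$ relative error.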
Examples of such enhancement mechanisms are
* •
$I_{m}=1$ if some paper is cited the day $m$ after its publication. Oversight.
* •
$I_{m}=1$ if some new species is discovered the day $m$ after a systematic
daily sampling campaign. Rareness.
* •
$I_{m}=1$ if some new word is used (or created) as the $m$-th word of some
ongoing book. Scarcity.
* •
$I_{m}=1$ if some individual renews its support to some political party the
day (month) $m$ after its creation. Weariness.
* •
Time unit increases by $1$ when some athlete attempts to improve some record
previously established. $I_{m}=1$ if he/she succeeds at $m$-th trial: higher
records become more and more difficult to break.
In several situations a success is actually a failure. Examples are
* •
$I_{m}=1$ if some device breaks down the day $m$ after it was put into
service. Resilience.
* •
$I_{m}=1$ if some population collapses the day $m$ after it came to birth.
Resilience.
* •
$I_{m}=1$ if some patient contracts some illness the day $m$ after birth date.
Immunity.
* •
$I_{m}=1$ if some driver has an accident the day $m$ after obtaining their driving licence. Experience.
The number $n$ of observations can be finite (possibly large though, depending
on the time scale) or infinite. For instance, a typical driver only has
finitely many driving days in his life (possibly randomly finite), but the
attempts to break a record are potentially infinitely many.
For ‘harmonic’ Bernoulli sequences of the form (1) we study the number
$S_{n}:=\sum_{m=1}^{n}I_{m}$ of successes among the first $n\in\mathbb{N}_{0}$
trials, the time $K_{l}^{+}:=\inf\\{m\in\mathbb{N}:S_{m}=l\\}$ of the $l$-th
success, $l\in\mathbb{N}$, and the times
$L_{l}^{+}:=K_{l}^{+}-K_{l-1}^{+}$ elapsed between successive successes,
$l\in\mathbb{N}$, and analyse the associated Markov chains. It turns out that
Sibuya distributions play an important role in this context. The two-parameter
$(w_{1},w_{2})$-Sibuya distribution arises as the distribution of the waiting
time till the first success. The shifted $(w_{1},w_{2})$-Sibuya distribution
has many appealing properties, among them discrete self-decomposability and
heavy-tailedness, [11]. It includes the ‘bare’ Sibuya distribution
($w_{1}+w_{2}=1$, see [21]) and the Yule–Simon distribution ($w_{2}=1$, see
[27]). The case $w_{2}=0$ is degenerate as far as the waiting time for the
first success is concerned, but it appears to make sense from the point of
view of the number of successes in the Ewens species sampling problem [4]. The
case $(w_{1},w_{2})=(1,0)$ also appears in the study of the number of record
values stemming from an arbitrary independent and identically distributed
(iid) sequence of observations, see [13, 18, 19].
## 2\. Number of successes
In this section we are mainly interested in the number
$S_{n}:=\sum_{m=1}^{n}I_{m}$ of successes among the first $n\in\mathbb{N}_{0}$
trials. Note that $0\leq S_{n}\leq n$ for $n\in\mathbb{N}_{0}$. In particular,
$S_{0}=0$.
In the following, $s_{n,k}$, $n,k\in\mathbb{N}_{0}:=\\{0,1,\ldots\\}$, denote the Stirling numbers of the first kind. Recall that the unsigned Stirling numbers of the first kind $|s_{n,k}|$ are characterized via $[z]_{n}=\sum_{k\geq 0}|s_{n,k}|z^{k}$, $z\in\mathbb{R}$,
$n\in\mathbb{N}_{0}$, where $[z]_{0}:=1$ and $[z]_{n}:=z(z+1)\cdots(z+n-1)$,
$n\in\mathbb{N}$. These numbers satisfy the recursion
$|s_{n+1,k}|=n|s_{n,k}|+|s_{n,k-1}|$ with $|s_{n,k}|=0$ for $k>n$,
$|s_{n,n}|=1$ and $|s_{n,0}|=\delta_{n,0}$ (Kronecker symbol).
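The recursion above is straightforward to implement; as a minimal sketch (Python is used here purely for illustration), one can tabulate $|s_{n,k}|$ and verify the defining identity $[z]_{n}=\sum_{k\geq 0}|s_{n,k}|z^{k}$ at an arbitrary point:

```python
def unsigned_stirling1(nmax):
    """Table of |s_{n,k}| via |s_{n+1,k}| = n|s_{n,k}| + |s_{n,k-1}|."""
    s = [[0] * (nmax + 1) for _ in range(nmax + 1)]
    s[0][0] = 1
    for n in range(nmax):
        for k in range(nmax + 1):
            s[n + 1][k] = n * s[n][k] + (s[n][k - 1] if k >= 1 else 0)
    return s

def rising_factorial(z, n):
    """[z]_n = z(z+1)...(z+n-1), with [z]_0 = 1."""
    out = 1.0
    for m in range(n):
        out *= z + m
    return out

s = unsigned_stirling1(8)
z, n = 1.7, 6   # arbitrary test point
lhs = rising_factorial(z, n)
rhs = sum(s[n][k] * z ** k for k in range(n + 1))
print(lhs, rhs)
```

For example, row $n=4$ of the table reproduces $[z]_{4}=z(z+1)(z+2)(z+3)=6z+11z^{2}+6z^{3}+z^{4}$.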
### 2.1. The Markov chain $(S_{n},n\in\mathbb{N}_{0})$
Clearly, $(S_{n},n\in\mathbb{N}_{0})$ is a time-inhomogeneous Markov chain
with state-space $\mathbb{N}_{0}$ and transition probabilities
(2) $\mathbf{P}(S_{n+1}=k+1\mid S_{n}=k)=1-\mathbf{P}(S_{n+1}=k\mid
S_{n}=k)=\frac{w_{1}}{w+n},\quad n,k\in\mathbb{N}_{0}.$
Note that the probability (2) that the chain moves from state $k$ at time $n$
to state $k+1$ at time $n+1$ does not depend on the current state $k$. The
increments $S_{n}-S_{n-1}=I_{n}$, $n\in\mathbb{N}$, are independent but not
identically distributed. The chain $(S_{n},n\in\mathbb{N}_{0})$ also coincides
with the chain studied in the restaurant process with a cocktail bar [12,
Section 6.1] with parameters $(\alpha,\theta_{1},\theta_{2}):=(0,w_{1},w)$,
where $S_{n}$ counts the number of occupied tables after $n$ customers have
entered the restaurant. The probability-generating function (pgf) $z\mapsto
f_{n}(z):=\mathbf{E}(z^{S_{n}})$ of $S_{n}$ is given by
$f_{n}(z)=\prod_{m=0}^{n-1}\frac{w_{1}z+w_{2}+m}{w+m}=\frac{[w_{1}z+w_{2}]_{n}}{[w]_{n}}=\frac{[w_{1}(z-1)+w]_{n}}{[w]_{n}},\quad z\in\mathbb{R}.$
Clearly, $f_{n}$ is a polynomial of degree $n$ of the form
$f_{n}(z)=\frac{1}{[w]_{n}}\sum_{l=0}^{n}|s_{n,l}|(w_{1}z+w_{2})^{l}=\frac{1}{[w]_{n}}\sum_{k=0}^{n}z^{k}w_{1}^{k}\sum_{l=k}^{n}\binom{l}{k}|s_{n,l}|w_{2}^{l-k}.$
Denoting by $[z^{k}]f_{n}(z)$ the coefficient in front of $z^{k}$ of $f_{n}$
yields
(3)
$\mathbf{P}(S_{n}=k)=[z^{k}]f_{n}(z)=\frac{w_{1}^{k}}{[w]_{n}}\sum_{l=k}^{n}\binom{l}{k}|s_{n,l}|w_{2}^{l-k},\qquad
k\in\\{0,\ldots,n\\}.$
With $(n)_{0}:=1$ and $(n)_{l}:=n(n-1)\cdots(n-l+1)$ for $l\in\mathbb{N}$,
$S_{n}$ has the $l$-th descending factorial moment
(4)
$\mathbf{E}((S_{n})_{l})=l![(z-1)^{l}]f_{n}(z)=\frac{w_{1}^{l}}{[w]_{n}}\sum_{k=l}^{n}(k)_{l}|s_{n,k}|w^{k-l},\qquad
l\in\mathbb{N}_{0}.$
Note that $\mathbf{E}((S_{n})_{l})=0$ for $l>n$. The distribution
$\pi_{n}(k):=\mathbf{P}(S_{n}=k)$ of $S_{n}$ can be recursively computed via
$\pi_{0}(k)=\delta_{k,0}$ and
(5)
$\pi_{n+1}(k)=\frac{w_{1}}{w+n}\pi_{n}(k-1)+\frac{w_{2}+n}{w+n}\pi_{n}(k),\quad
n,k\in\mathbb{N}_{0}.$
Note that $\pi_{n}(k)=0$ for $k\notin\\{0,\ldots,n\\}$. Comparing (5) with the
recursion [7, Theorem 1] $s_{r}(n+1,k)=s_{r}(n,k-1)+(n+r)s_{r}(n,k)$ for the
generalized Stirling numbers $s_{r}(n,k):=S(n,k;-1,0,r)$,
$n,k\in\mathbb{N}_{0}$, $r\in\mathbb{R}$, in the notation of [7] having
vertical generating functions [7, Theorem 2] $k!\sum_{n\geq
0}s_{r}(n,k)t^{n}/n!=(1-t)^{-r}(-\log(1-t))^{k}$, $r\in\mathbb{R}$,
$k\in\mathbb{N}_{0}$, $|t|<1$, it follows that (3) can be alternatively
expressed in terms of these generalized Stirling numbers as
(6) $\pi_{n}(k)=\frac{w_{1}^{k}}{[w]_{n}}s_{w_{2}}(n,k),\qquad
k\in\\{0,\ldots,n\\},$
in agreement with [12, Eq. (14)] for
$(\alpha,\theta_{1},\theta_{2}):=(0,w_{1},w)$. Similarly, (4) can be written
as
(7) $\mathbf{E}((S_{n})_{l})=\frac{w_{1}^{l}}{[w]_{n}}l!s_{w}(n,l),\qquad
l\in\mathbb{N}_{0}.$
Introducing the superdiagonal stochastic transition matrices
$\Pi_{n}:=\left(\begin{array}[]{llll}\frac{w_{2}+n}{w+n}&\frac{w_{1}}{w+n}&0&\cdots\\\
0&\frac{w_{2}+n}{w+n}&\frac{w_{1}}{w+n}&0\\\ 0&0&\frac{w_{2}+n}{w+n}&\cdots\\\
\vdots&\vdots&0&\ddots\end{array}\right),\qquad n\in\mathbb{N}_{0},$
the distributions $\mathbf{\pi}_{n}:=(\pi_{n}(k),k\in\mathbb{N}_{0})$ of
$S_{n}$, $n\in\mathbb{N}_{0}$, satisfy the recursion
$\mathbf{\pi}_{n+1}=\mathbf{\pi}_{n}\Pi_{n}$, $n\in\mathbb{N}_{0}$. Thus,
$\mathbf{\pi}_{n}=\mathbf{\pi}_{0}\prod_{m=0}^{n-1}\Pi_{m}$,
$n\in\mathbb{N}_{0}$, with $\mathbf{\pi}_{0}=(1,0,0,\ldots)$.
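As a numerical sanity check (the parameter values $w_{1}=0.8$, $w_{2}=1.5$, $n=10$ are arbitrary), the forward recursion (5) can be cross-checked against the closed form (3); both are sketched below from scratch:

```python
import math

def pi_table(n, w1, w2):
    """pi_n(k) = P(S_n = k) via the forward recursion (5)."""
    w = w1 + w2
    pi = [1.0]                        # pi_0
    for m in range(n):                # step m -> m+1
        p = w1 / (w + m)
        new = [0.0] * (len(pi) + 1)
        for k, val in enumerate(pi):
            new[k] += (1 - p) * val   # no success: stay at k
            new[k + 1] += p * val     # success: k -> k+1
        pi = new
    return pi

def closed_form(n, w1, w2):
    """pi_n(k) via (3), using unsigned Stirling numbers of the first kind."""
    s = [[0] * (n + 1) for _ in range(n + 1)]
    s[0][0] = 1
    for m in range(n):
        for k in range(n + 1):
            s[m + 1][k] = m * s[m][k] + (s[m][k - 1] if k >= 1 else 0)
    rising = math.exp(math.lgamma(w1 + w2 + n) - math.lgamma(w1 + w2))  # [w]_n
    return [w1 ** k / rising * sum(math.comb(l, k) * s[n][l] * w2 ** (l - k)
                                   for l in range(k, n + 1))
            for k in range(n + 1)]

w1, w2, n = 0.8, 1.5, 10
a, b = pi_table(n, w1, w2), closed_form(n, w1, w2)
print(max(abs(x - y) for x, y in zip(a, b)))
```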
###### Remark 1.
The pgf $f_{n}$ of $S_{n}$ has only real zeros $-(w_{2}+m)/w_{1}$,
$m\in\\{0,\ldots,n-1\\}$. By [15, Proposition 1],
$(\pi_{n}(0),\ldots,\pi_{n}(n))$ is a Pólya frequency sequence. Thus, the
infinite matrix $M:=(\pi_{n}(k-l))_{k,l\in\mathbb{N}_{0}}$ (where
$\pi_{n}(k)=0$ for $k\notin\\{0,\ldots,n\\}$) is totally positive of any order, i.e., all minors of $M$ of any order are nonnegative.
### 2.2. Special cases
\- $w=1$:
$f_{n}(z)=\mathbf{E}(z^{S_{n}})=\frac{[w_{1}(z-1)+1]_{n}}{n!}=\frac{1}{n!}\sum_{k=0}^{n}|s_{n+1,k+1}|w_{1}^{k}(z-1)^{k}$
showing that $S_{n}$ has $k$-th descending factorial moment
$\mathbf{E}((S_{n})_{k})=k![(z-1)^{k}]f_{n}(z)=\frac{k!}{n!}|s_{n+1,k+1}|w_{1}^{k},\qquad
k\in\mathbb{N}_{0}.$
Note that in that case, necessarily $w_{1}\in(0,1)$.
\- $w_{2}=1$:
$f_{n}(z)=\mathbf{E}(z^{S_{n}})=\frac{[w_{1}z+1]_{n}}{[w]_{n}}=\frac{[w_{1}z]_{n+1}}{z[w_{1}]_{n+1}}=\frac{1}{[w_{1}]_{n+1}}\sum_{k=0}^{n+1}|s_{n+1,k}|w_{1}^{k}z^{k-1}$
showing that ($|s_{n,0}|=\delta_{n,0}$)
$\pi_{n}(k)=\mathbf{P}(S_{n}=k)=\frac{|s_{n+1,k+1}|w_{1}^{k+1}}{[w_{1}]_{n+1}},\qquad
k\in\\{0,\ldots,n\\}.$
\- $w_{2}=0$:
$f_{n}(z)=\mathbf{E}(z^{S_{n}})=\frac{[w_{1}z]_{n}}{[w]_{n}}=\frac{1}{[w_{1}]_{n}}\sum_{k=0}^{n}|s_{n,k}|w_{1}^{k}z^{k}$
showing that
$\pi_{n}(k)=\mathbf{P}(S_{n}=k)=\frac{|s_{n,k}|w_{1}^{k}}{[w_{1}]_{n}},\qquad
k\in\\{0,\ldots,n\\}.$
If in addition $w_{1}=1$, then $S_{n}$ is the number of record values of an
arbitrary iid sequence of observations appearing before $n$; [13, 18, 19]. In
this case the law $\pi_{n}(k)=|s_{n,k}|/n!$, $k\in\\{0,\ldots,n\\}$, of
$S_{n}$ coincides with the distribution of the number of cycles of a
permutation of size $n$ chosen uniformly at random.
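For this record-value case ($w_{1}=1$, $w_{2}=0$), the law $|s_{n,k}|/n!$ can be tabulated exactly; a small check (using exact rational arithmetic, with $n=12$ chosen arbitrarily) confirms that its mean equals $\mathbf{E}(S_{n})=\sum_{m=1}^{n}1/m$, the harmonic number:

```python
from fractions import Fraction

def record_count_law(n):
    """Law |s_{n,k}|/n! of the number of records among n iid observations
    (equivalently, of the number of cycles of a uniform permutation of size n)."""
    s = [[0] * (n + 1) for _ in range(n + 1)]
    s[0][0] = 1
    for m in range(n):
        for k in range(n + 1):
            s[m + 1][k] = m * s[m][k] + (s[m][k - 1] if k >= 1 else 0)
    fact = 1
    for m in range(1, n + 1):
        fact *= m
    return [Fraction(s[n][k], fact) for k in range(n + 1)]

n = 12
law = record_count_law(n)
mean = sum(k * p for k, p in enumerate(law))
harmonic = sum(Fraction(1, m) for m in range(1, n + 1))
print(mean, harmonic)
```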
### 2.3. Poisson approximation
Clearly,
$\mu_{n}:=\mathbf{E}(S_{n})=\sum_{m=1}^{n}\mathbf{P}(I_{m}=1)=w_{1}\sum_{m=0}^{n-1}1/(w+m)=w_{1}\log
n+O(1)$ and
$\sigma_{n}^{2}:={\rm
Var}(S_{n})=\sum_{m=1}^{n}\mathbf{P}(I_{m}=0)\mathbf{P}(I_{m}=1)=w_{1}\sum_{m=0}^{n-1}\frac{w_{2}+m}{(w+m)^{2}}\sim
w_{1}\log n$
as $n\to\infty$. The law of $S_{n}$ is close in total variation distance to the law of $N_{n}\overset{d}{\sim}\text{Poi}(\mu_{n})$ (see [15] and [24]), because
$\mu_{n}-\sigma_{n}^{2}=w_{1}^{2}\sum_{m=0}^{n-1}1/(w+m)^{2}\ll\mu_{n}$ (see
[2, Theorems 1 and 2]) with LeCam Poisson approximation of the total variation
distance $d_{TV}(S_{n},N_{n}):=\frac{1}{2}\sum_{k\geq
0}|\pi_{n}(k)-\mu_{n}^{k}e^{-\mu_{n}}/k!|$ given by (see [20])
(8) $\frac{1}{32}\min(1,\mu_{n}^{-1})(\mu_{n}-\sigma_{n}^{2})\leq
d_{TV}(S_{n},N_{n})\leq(1-e^{-\mu_{n}})\frac{\mu_{n}-\sigma_{n}^{2}}{\mu_{n}}.$
Therefore (and also by the Lindeberg–Feller central limit theorem),
$(S_{n}-\mu_{n})/\sigma_{n}\to\mathcal{N}(0,1)$ in distribution as
$n\to\infty$, consistently with the fact that
$(N_{n}-\mu_{n})/\sigma_{n}\to\mathcal{N}(0,1)$ in distribution as
$n\to\infty$. Since $\sum_{n\geq
2}\mathbf{P}(I_{n}=0)\mathbf{P}(I_{n}=1)/(\log n)^{2}<\infty$, it follows from
well-known law of large numbers results for sums of independent, but not
identically distributed random variables, that $(\log n)^{-1}\sum_{m=1}^{n}(I_{m}-\mathbf{E}(I_{m}))\to 0$ almost surely or,
equivalently, that $S_{n}/\mu_{n}\to 1$ almost surely as $n\to\infty$.
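The two-sided bound (8) can be checked numerically; the sketch below (the parameters $w_{1}=0.9$, $w_{2}=1.1$, $n=200$ are arbitrary) computes the law of $S_{n}$ by the forward recursion and the total variation distance to $\text{Poi}(\mu_{n})$:

```python
import math

def tv_vs_poisson(n, w1, w2):
    """Total variation distance between the law of S_n and Poi(mu_n)."""
    w = w1 + w2
    pi = [1.0]                              # law of S_n by forward recursion
    for m in range(n):
        p = w1 / (w + m)
        new = [0.0] * (len(pi) + 1)
        for k, v in enumerate(pi):
            new[k] += (1 - p) * v
            new[k + 1] += p * v
        pi = new
    mu = sum(w1 / (w + m) for m in range(n))
    var = sum((w1 / (w + m)) * (1 - w1 / (w + m)) for m in range(n))
    poi, term = [], math.exp(-mu)           # Poisson pmf, built iteratively
    for k in range(len(pi)):
        poi.append(term)
        term *= mu / (k + 1)
    dtv = 0.5 * (sum(abs(a - b) for a, b in zip(pi, poi))
                 + (1 - sum(poi)))          # Poisson mass beyond n
    return dtv, mu, var

dtv, mu, var = tv_vs_poisson(200, 0.9, 1.1)
lower = (1 / 32) * min(1, 1 / mu) * (mu - var)
upper = (1 - math.exp(-mu)) * (mu - var) / mu
print(lower, dtv, upper)
```

The computed distance lands strictly between the LeCam-type lower and upper bounds of (8).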
### 2.4. Maximum likelihood estimation
With $i:=(i_{1},\ldots,i_{n})\in\\{0,1\\}^{n}$ an observed sequence of
$I:=(I_{1},\ldots,I_{n})$ and $k:=\sum_{m=1}^{n}i_{m}$,
(9)
$\mathbf{P}(I=i)=\prod_{m=1}^{n}\frac{w_{1}^{i_{m}}(w_{2}+m-1)^{1-i_{m}}}{w+m-1}=\frac{w_{1}^{k}\prod_{m=1}^{n}(w_{2}+m-1)^{1-i_{m}}}{[w]_{n}}.$
This probability is not symmetric in $i_{1},\ldots,i_{n}$ since the random
variables $I_{1},\ldots,I_{n}$ are not exchangeable. Using
$\partial_{w}\log[w]_{n}=\sum_{m=0}^{n-1}1/(w+m)=\Psi(w+n)-\Psi(w)$, where
$\Psi$ denotes the digamma function obeying $\Psi(z)=\log z-1/(2z)+O(1/z^{2})$
as $z\to\infty$, the two equations $\partial_{w_{j}}\log\mathbf{P}(I=i)=0$,
$j\in\\{1,2\\}$, yield
(10)
$\frac{k}{\widehat{w}_{1}}=\sum_{m=0}^{n-1}\frac{1}{\widehat{w}+m}=\Psi(\widehat{w}+n)-\Psi(\widehat{w})$
and
(11)
$\sum_{m=0}^{n-1}\frac{1-i_{m+1}}{\widehat{w}_{2}+m}=\Psi(\widehat{w}+n)-\Psi(\widehat{w}),$
where $(\widehat{w}_{1},\widehat{w}_{2})$ is the maximum likelihood estimator
(MLE) of $(w_{1},w_{2})$ based on the observed sequence
$i=(i_{1},\ldots,i_{n})$ and $\widehat{w}:=\widehat{w}_{1}+\widehat{w}_{2}$.
It is easily checked that the $2\times 2$ Hessian matrix $J$ of the map
$(w_{1},w_{2})\mapsto\mathbf{P}(I=i)$ is negative semi-definite at
$(\widehat{w}_{1},\widehat{w}_{2})$. As $n\to\infty$, asymptotic normality of
$(\widehat{w}_{1},\widehat{w}_{2})$ is expected at rate $n^{-1/2}$, the
limiting normal law having mean $(w_{1},w_{2})$ and covariance matrix either
the inverse of the expected Fisher information matrix or the inverse $J^{-1}$
of the observed information matrix $J$ evaluated at
$(\widehat{w}_{1},\widehat{w}_{2})$.
If the model has two independent parameters $(w_{1},w_{2})$ that have to be
estimated, the first equation (10) gives
$\widehat{w}_{1}=k/(\Psi(\widehat{w}+n)-\Psi(\widehat{w}))$ as a function of
$\widehat{w}$ (and $k$) and so $\widehat{w}_{2}=\widehat{w}-\widehat{w}_{1}$
as a function of $\widehat{w}$. Plugging this expression of $\widehat{w}_{2}$
into the second equation (11) yields an equation in the single variable
$\widehat{w}$ that can be solved from the data $i=(i_{1},\ldots,i_{n})$. An
expression of both $\widehat{w}_{1}$ and $\widehat{w}_{2}$ then follows.
When $w=w_{1}+w_{2}=1$, there is only one parameter to estimate, say $w_{1}$,
and (10) and (11) yield
$\frac{k}{\widehat{w}_{1}}=\sum_{m=1}^{n}\frac{1-i_{m}}{m-\widehat{w}_{1}}\text{
(entailing }\widehat{w}_{1}\in(0,1)\text{)}.$
When $w_{2}=1$ or $0$, only this first equation (10) is needed and the
searched $\widehat{w}_{1}$ solves
$\frac{k}{\widehat{w}_{1}}=\sum_{m=0}^{n-1}\frac{1}{\widehat{w}_{1}+m+1}\quad\text{or}\quad\frac{k}{\widehat{w}_{1}}=\sum_{m=0}^{n-1}\frac{1}{\widehat{w}_{1}+m}.$
Note that $\Psi(\widehat{w}+n)-\Psi(\widehat{w})\sim\log n$ as $n\to\infty$.
Thus, by (10), $\widehat{w}_{1}\sim k/\log n$ as $n\to\infty$ and, by (11), a
large $n$ approximation for $\widehat{w}_{2}$ is the solution $w$ of the
equation $\sum_{m=0}^{n-1}(1-i_{m+1})/(w+m)=\log n$.
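In the one-parameter case $w=1$, the difference between the two sides of $k/\widehat{w}_{1}=\sum_{m=1}^{n}(1-i_{m})/(m-\widehat{w}_{1})$ is strictly decreasing on $(0,1)$, so bisection finds the unique root. A minimal sketch (the data vector is an arbitrary example with $i_{1}=0$ and $k\geq 1$, which guarantees a root in $(0,1)$):

```python
def mle_w1_unit_w(i):
    """MLE of w1 in the one-parameter case w = w1 + w2 = 1:
    solve k/w1 = sum_{m=1}^n (1 - i_m)/(m - w1) by bisection on (0, 1)."""
    k = sum(i)
    def h(w1):
        return k / w1 - sum((1 - im) / (m - w1)
                            for m, im in enumerate(i, start=1))
    lo, hi = 1e-9, 1 - 1e-9
    for _ in range(200):        # h is strictly decreasing on (0, 1)
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

data = [0, 1, 0, 0, 1, 0, 0, 0, 0, 0]   # arbitrary example: k = 2, n = 10
w1_hat = mle_w1_unit_w(data)
print(w1_hat)
```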
###### Remark 2.
(Random number of observations) It can be natural to assume that the number of
observations is finite but random (and independent of $I_{1},I_{2},\ldots$).
In this case one has to replace $n$ by a random variable $N$ taking values in
$\mathbb{N}$, and $\mathbf{E}(z^{S_{N}})=\sum_{n\geq
1}\mathbf{E}(z^{S_{n}})\mathbf{P}(N=n)$ yields the law of the number of
successes over the time window $N$ with supposedly (or not) known mean
$\mathbf{E}(N)$. For example, $N$ could be geometrically distributed
$\mathbf{P}(N=n)=p(1-p)^{n-1}$, $n\in\mathbb{N}$, with parameter $p\in(0,1)$.
For instance, it can be a reasonable modeling choice to assume that a species sampling campaign lasts only finitely many days, the number of days being geometrically distributed (with no information beyond its mean).
$S_{N}$ then counts the total number of sampled species over the observation
window $N$.
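As a consistency check for this setting (a sketch; the values $w_{1}=0.7$, $w_{2}=1.3$, $p=0.2$ are arbitrary), the mean number of sampled species $\mathbf{E}(S_{N})$ can be computed both as $\sum_{n}\mathbf{P}(N=n)\mathbf{E}(S_{n})$ and, by Fubini, as $\sum_{m}\mathbf{P}(I_{m}=1)\mathbf{P}(N\geq m)$:

```python
def mean_successes_geometric_window(w1, w2, p, tol=1e-14):
    """E[S_N] for N ~ Geometric(p), computed two equivalent ways."""
    w = w1 + w2
    # way 1: sum_n P(N = n) E[S_n], truncated once the geometric tail is tiny
    total1, mu_n, n = 0.0, 0.0, 0
    while (1 - p) ** n > tol:
        mu_n += w1 / (w + n)                 # now mu_n = E[S_{n+1}]
        n += 1
        total1 += p * (1 - p) ** (n - 1) * mu_n
    # way 2 (Fubini): sum_m P(I_m = 1) P(N >= m)
    total2, m = 0.0, 1
    while (1 - p) ** (m - 1) > tol:
        total2 += w1 / (w + m - 1) * (1 - p) ** (m - 1)
        m += 1
    return total1, total2

t1, t2 = mean_successes_geometric_window(0.7, 1.3, 0.2)
print(t1, t2)
```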
## 3\. Times to successive successes
For $l\in\mathbb{N}$ let $K_{l}^{+}:=\inf\\{n\in\mathbb{N}:S_{n}=l\\}$ be the
time elapsed till the $l$-th success. Furthermore, put $K_{0}^{+}:=0$. The
process $(K_{l}^{+},l\in\mathbb{N}_{0})$ is called the first-passage time
process of the random walk $(S_{n},n\in\mathbb{N}_{0})$. Such processes have
been studied extensively in the literature. We refer the reader to [3] and the
references therein. We have $\mathbf{P}(K_{l}^{+}>n)=\mathbf{P}(S_{n}<l)$, as
the laws of $(K_{l}^{+},S_{n})$ are mutually inverse in the sense of inverse
sampling [9, pp. 192–194]. It follows from this, (8), and the works [18, 19]
(see also [12, Proposition 1]), that
(12) $\frac{w_{1}\log K_{l}^{+}}{l}\overset{\text{a.s.}}{\to}1\text{ as
}l\to\infty\text{ and }\frac{w_{1}\log
K_{l}^{+}-l}{\sqrt{l}}\overset{d}{\to}\mathcal{N}(0,1)\text{ as }l\to\infty,$
and the law of the iterated logarithm for the $\log K_{l}^{+}$’s. A similar
statement holds for the time elapsed between contiguous successes, upon
replacing $K_{l}^{+}$ by $L_{l}^{+}:=K_{l}^{+}-K_{l-1}^{+}$ in (12), with the
notable exception that the first almost sure convergence is now a convergence
in probability [13].
### 3.1. The laws of the times to successive successes and times elapsed
between contiguous successes
The law of $K_{l}^{+}$ is easily obtained as follows. Clearly,
$\\{K_{l}^{+}=n\\}=\\{S_{n-1}=l-1,I_{n}=1\\}$. The independence of $S_{n-1}$
and $I_{n}$ thus yields
(13)
$\mathbf{P}(K_{l}^{+}=n)=\mathbf{P}(I_{n}=1)\mathbf{P}(S_{n-1}=l-1)=\frac{w_{1}}{w+n-1}\pi_{n-1}(l-1).$
Using (3) the law of $K_{l}^{+}$ is therefore given by
(14)
$\mathbf{P}(K_{l}^{+}=n)=\frac{w_{1}^{l}}{[w]_{n}}\sum_{k=l-1}^{n-1}\binom{k}{l-1}|s_{n-1,k}|w_{2}^{k-l+1},\qquad
n\geq l.$
We also conclude that $L_{l+1}^{+}:=K_{l+1}^{+}-K_{l}^{+}=i$ is realized if and
only if, for some $n\geq l$: $S_{n-1}=l-1$, $I_{n}$ is a success, the trials
$I_{n+1},\ldots,I_{n+i-1}$ all fail (an event of probability
$[w_{2}+n]_{i-1}/[w+n]_{i-1}$) and $I_{n+i}$ is a success. Hence, with $i\geq
1$,
(15) $\mathbf{P}(L_{l+1}^{+}=i)=\sum_{n\geq
l}\frac{w_{1}}{w+n-1}\frac{[w_{2}+n]_{i-1}}{[w+n]_{i-1}}\frac{w_{1}}{w+n+i-1}\pi_{n-1}(l-1),$
where $\pi_{n}(l)$ is given by (3). When $(w_{1},w_{2})=(1,0)$ (the record
case), it follows from Eq. (3) in [13], which develops problem $32$ on p. 268
of [10], that
$\mathbf{P}(L_{l}^{+}>i)=\sum_{k=0}^{i}(-1)^{k}\binom{i}{k}(1+k)^{-l}.$
The law of $K_{l}^{+}$ can be obtained on a computer by launching a three-term
recursion. Indeed, from (13), the recursion (5) on $\pi_{n}(l)$ yields a
recursion for $\mathbf{P}(K_{l}^{+}=n)$ with $\mathbf{P}(K_{l}^{+}=n)=0$ if
$n<l$. With $n\geq l$, this is
(16)
$\mathbf{P}(K_{l+1}^{+}=n+1)=\frac{w_{1}}{w+n}\mathbf{P}(K_{l}^{+}=n)+\frac{w_{2}+n-1}{w+n}\mathbf{P}(K_{l+1}^{+}=n).$
Introducing the lower-triangular matrix $P=(P_{n,l})$, where
$P_{n,l}:=\mathbf{P}(K_{l}^{+}=n)$, $l\leq n$, we see that $P_{n+1,l+1}$ can
be obtained from its north-west and north neighbors. With the knowledge of the
first column of $P$ and its diagonal, this recursion becomes effective,
starting from $P_{3,2}$ obtained from $P_{2,2}$ and $P_{2,1}$. For $n=l$, Eq.
(16) reduces to
$\mathbf{P}(K_{l+1}^{+}=l+1)=(w_{1}/(w+l))\mathbf{P}(K_{l}^{+}=l)$, which
yields the diagonal terms
$\mathbf{P}(K_{l}^{+}=l)=\prod_{m=0}^{l-1}w_{1}/(w+m)$. The entries
$\mathbf{P}(K_{1}^{+}=n)$ of the first column of $P$ are given in (20) below.
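As a numerical sanity check of this scheme (a sketch with arbitrary illustrative parameters and helper names of our choosing, using exact rational arithmetic), one can fill the matrix $P$ through recursion (16), seeded with the closed-form diagonal and first column, and compare the result with the direct formula (13), where $\pi_{n}(l)$ is computed by its own forward recursion:

```python
from fractions import Fraction as F

def state_law(w1, w2, N):
    """pi[n][l] = P(S_n = l) via
    pi_n(l) = w1/(w+n-1) pi_{n-1}(l-1) + (w2+n-1)/(w+n-1) pi_{n-1}(l)."""
    w = w1 + w2
    pi = [[F(1)]]                       # pi_0(0) = 1
    for n in range(1, N + 1):
        p = w1 / (w + n - 1)            # success probability of trial n
        row = [p * (pi[n-1][l-1] if l >= 1 else F(0))
               + (1 - p) * (pi[n-1][l] if l <= n - 1 else F(0))
               for l in range(n + 1)]
        pi.append(row)
    return pi

def passage_law(w1, w2, N):
    """P[n][l] = P(K_l^+ = n) from the three-term recursion (16),
    seeded with the first column (20) and the diagonal product."""
    w = w1 + w2
    P = [[F(0)] * (N + 1) for _ in range(N + 1)]
    for n in range(1, N + 1):
        col = F(1)
        for m in range(n - 1):
            col *= (w2 + m) / (w + m)   # [w2]_{n-1}/[w]_{n-1}
        P[n][1] = w1 / (w + n - 1) * col
        diag = F(1)
        for m in range(n):
            diag *= w1 / (w + m)
        P[n][n] = diag
    for n in range(2, N):               # north-west and north neighbors
        for l in range(2, n + 1):
            P[n+1][l] = w1/(w+n) * P[n][l-1] + (w2+n-1)/(w+n) * P[n][l]
    return P
```

With rational arithmetic the agreement with (13), namely $P_{n,l}=\frac{w_{1}}{w+n-1}\pi_{n-1}(l-1)$, is exact; in floating point the same check holds up to rounding.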
### 3.2. Markov structure of $(K_{l}^{+},l\in\mathbb{N})$
The homogeneous Markov structure of the sequence $(K_{l}^{+},l\in\mathbb{N})$
follows from
$\mathbf{P}(K_{l+1}^{+}-m>n\mid
K_{l}^{+}=m)=\prod_{k=0}^{n-1}\frac{w_{2}+m+k}{w+m+k}=\frac{[w_{2}+m]_{n}}{[w+m]_{n}}=\prod_{k=m}^{m+n-1}\frac{w_{2}+k}{w+k},$
where $m\geq l$ and $n>0$. The random variable
$L_{l+1}^{+}:=K_{l+1}^{+}-K_{l}^{+}\geq 1$ is the ‘time-lag’ elapsed between
the $l$-th and the $(l+1)$-th success. Its law depends on $K_{l}^{+}$. It is
thus expected that, for each $l\geq m$, the larger $m$ is, the larger is
$\mathbf{P}(K_{l+1}^{+}-m>n\mid K_{l}^{+}=m)$, because
$\frac{\mathbf{P}(K_{l+1}^{+}-(m+1)>n\mid
K_{l}^{+}=m+1)}{\mathbf{P}(K_{l+1}^{+}-m>n\mid
K_{l}^{+}=m)}=\frac{w_{2}+m+n}{w+m+n}\frac{w+m}{w_{2}+m}>1.$
The chain $(K_{l}^{+},l\in\mathbb{N})$ therefore obeys a sort of reinforcement
property. For general information on random processes with reinforcement we
refer the reader to [14].
From Stirling’s formula, $\Gamma(z+b)/\Gamma(z+a)\sim z^{b-a}$ as
$z\to\infty$. For fixed $m\ll n$, for each $m\geq l$, we indeed get
$\mathbf{P}(K_{l+1}^{+}-m>n\mid
K_{l}^{+}=m)=\frac{[w_{2}+m]_{n}}{[w+m]_{n}}=\frac{\Gamma(w+m)}{\Gamma(w_{2}+m)}(n^{-w_{1}}+O(n^{-(w_{1}+1)})),$
showing that, given $K_{l}^{+}=m$, the tails of $L_{l+1}^{+}$ have tail index
$w_{1}$: given that the $l$-th success occurred at $m\ll n$, the waiting time
till the $(l+1)$-th one has power-law tails with exponent $w_{1}$. Note however
that the probability that $K_{l+1}^{+}-m=1$ is $w_{1}/(w+m)$, which is small
only if $m\gg 1$. Introducing $c_{m}:=\Gamma(w+m)/\Gamma(w_{2}+m)$, for each
$l\leq m$, $c_{m+1}/c_{m}=(w+m)/(w_{2}+m)>1$, showing that the tails of
$L_{l+1}^{+}$ get heavier as $m$ increases, but without affecting the tail
index itself, only the prefactor.
With $m^{\prime}>m\geq l\geq 1$, we similarly get
$\mathbf{P}(K_{l+1}^{+}=m^{\prime}\mid
K_{l}^{+}=m)=\frac{w_{1}}{w+m^{\prime}-1}\prod_{n=m}^{m^{\prime}-2}\frac{w_{2}+n}{w+n},$
$\mathbf{P}(K_{l+1}^{+}=m^{\prime})=\sum_{m\geq
l}\mathbf{P}(K_{l+1}^{+}=m^{\prime}\mid
K_{l}^{+}=m)\mathbf{P}(K_{l}^{+}=m),$
with initial condition $\mathbf{P}(K_{1}^{+}=m^{\prime})$ given below in (20)
if $w_{2}\neq 0$. The homogeneous Markov structure of
$(K_{l}^{+},l\in\mathbb{N})$ appears more clearly, recalling
$\mathbf{P}(K_{l}^{+}=m)$ is given by (14). For $w_{2}=0$ the initial
condition should start with $\mathbf{P}(K_{2}^{+}=m^{\prime})$ given in (23).
Introducing the excess time to the $l$-th success, $K_{l}:=K_{l}^{+}-l\geq 0$,
now a shifted random variable taking values in $\mathbb{N}_{0}$, the recursion
(16) translates into
(17)
$\mathbf{P}(K_{l+1}=n)=\frac{w_{1}}{w+n+l}\mathbf{P}(K_{l}=n)+\frac{w_{2}+n+l-1}{w+n+l}\mathbf{P}(K_{l+1}=n-1).$
Replacing $n$ by $n+l$ in (14) shows that the excess time $K_{l}$ has
distribution
$\mathbf{P}(K_{l}=n)=\frac{w_{1}^{l}}{[w]_{n+l}}\sum_{k=l-1}^{n+l-1}\binom{k}{l-1}|s_{n+l-1,k}|w_{2}^{k-l+1},\qquad
l\in\mathbb{N},n\in\mathbb{N}_{0}.$
Alternatively, the three-term recursion (17) can be solved numerically,
starting from $\mathbf{P}(K_{1}=n)$ and observing
$\mathbf{P}(K_{l}=0)=w_{1}^{l}/[w]_{l}$. Consequently, for all
$n^{\prime}\geq n\geq 0$,
(18) $\mathbf{P}(K_{l+1}>n^{\prime}\mid
K_{l}=n)=\prod_{m=n+l}^{n^{\prime}+l}\frac{w_{2}+m}{w+m}$
and, with
$\mathbf{P}(K_{l+1}=n^{\prime}\mid
K_{l}=n)=\frac{w_{1}}{w+n^{\prime}+l}\prod_{m=n+l}^{n^{\prime}+l-1}\frac{w_{2}+m}{w+m},$
$\mathbf{P}(K_{l+1}=k^{\prime})=\sum_{k=0}^{k^{\prime}}\mathbf{P}(K_{l+1}=k^{\prime}\mid K_{l}=k)\mathbf{P}(K_{l}=k),$
emphasizing the inhomogeneous Markov structure of $(K_{l},l\in\mathbb{N})$ as
well ($K_{l}$ is nondecreasing in $l$, whence the restriction $n^{\prime}\geq
n$). Setting $n,n^{\prime}=0$ in (18), we get in particular
$\mathbf{P}(K_{l+1}=0\mid K_{l}=0)=w_{1}/(w+l)$. Note that $K_{l}$ also
represents the number of failures till the observation of the $l$-th success,
a generalized version of the negative binomial distribution.
## 4\. Time to first success
If $l=1$, with $K_{0}^{+}:=0$, the distribution of the time to the first
success reads ($L_{1}^{+}=K_{1}^{+}-K_{0}^{+}=K_{1}^{+}$)
(19)
$\mathbf{P}(K_{1}^{+}>n)=[z^{0}]\mathbf{E}(z^{S_{n}})=\frac{[w_{2}]_{n}}{[w]_{n}}\sim\frac{\Gamma(w)}{\Gamma(w_{2})}n^{-w_{1}},\qquad
n\to\infty.$
and
(20)
$\mathbf{P}(K_{1}^{+}=n)=\frac{w_{1}}{w+n-1}\frac{[w_{2}]_{n-1}}{[w]_{n-1}}=\frac{w_{1}}{w}\frac{[w_{2}]_{n-1}}{[w+1]_{n-1}},\qquad
n\in\mathbb{N}.$
It is easily seen that the law of $K_{1}^{+}$ is unimodal with mode at $n=1$
having mass $w_{1}/w$.
Upon shifting, $K_{1}=K_{1}^{+}-1\geq 0$ has a generalized (heavy tailed with
index $w_{1}$) Sibuya distribution [11] with probability generating function
(pgf)
(21) $\mathbf{E}(z^{K_{1}})=\frac{w_{1}}{w}F(1,w_{2};w+1;z)$
observing $(w+n)[w]_{n}=w[w+1]_{n}$, where $F:={{}_{2}}F_{1}$ is the Gauss
hypergeometric function $F(a,b;c;z):=\sum_{n\geq
0}([a]_{n}[b]_{n}/[c]_{n})(z^{n}/n!)$. The initial condition to the recursion
(17) giving $\mathbf{P}(K_{l}=k)$ is
$\mathbf{P}(K_{1}=k)=\frac{w_{1}}{w+k}\frac{[w_{2}]_{k}}{[w]_{k}}=\frac{w_{1}}{w}\frac{[w_{2}]_{k}}{[w+1]_{k}},\qquad
k\in\mathbb{N}_{0}.$
###### Remark 3.
(Time to first success in an $N$-Bernoulli trial with $N$ finite and geometric
with parameter $p$).
In that case, $\mathcal{K}_{l}^{+}=\inf\\{n\in\\{1,\ldots,N\\}:S_{n}=l\\}$ and
$\mathcal{K}_{1}^{+}=\inf\\{m\in\\{1,\ldots,N\\}:I_{m}=1\\}$. Therefore,
$\mathcal{K}_{1}^{+}=\infty$ with probability
$\mathbf{P}(S_{N}=0)=\mathbf{E}(\prod_{m=1}^{N}\mathbf{P}(I_{m}=0))$ and
$\mathbf{P}(\mathcal{K}_{1}^{+}>n)=\mathbf{P}(S_{n}=0\mid N\geq n)$ with
probability $\mathbf{P}(S_{N}>0)$, where
$\mathbf{P}(S_{n}=0\mid N\geq
n)=\mathbf{P}(S_{n}=0)=\frac{[w_{2}]_{n}}{[w]_{n}}.$
So, if $w_{2}>0$, the new $\mathcal{K}_{1}^{+}$ has an atom at $\infty$ with
mass
$\mathbf{P}(S_{N}=0)=p\sum_{n\geq
1}\frac{[w_{2}]_{n}}{[w]_{n}}(1-p)^{n-1}=\frac{p}{1-p}[F(1,w_{2};w;1-p)-1],$
reflecting that no success was registered before time $N$.
### 4.1. Special cases
\- Sibuya: $w=1$, i.e., $w_{2}=1-w_{1}$ with $w_{1}\in(0,1)$, with
$\mathbf{E}(z^{K_{1}})=w_{1}F(1,1-w_{1};2;z)=z^{-1}(1-(1-z)^{w_{1}})$,
equivalently, $\mathbf{P}(K_{1}=k)=w_{1}[1-w_{1}]_{k}/(k+1)!$, $k\geq 0$.
\- Yule-Simon: $w_{2}=1$, $w_{1}>0$ with
$\mathbf{E}(z^{K_{1}})=\frac{w_{1}}{w_{1}+1}F(1,1;w_{1}+2;z)$, equivalently,
$\mathbf{P}(K_{1}=k)=w_{1}\frac{k!}{[w_{1}+1]_{k+1}}$.
\- Ewens: $w_{2}=0$, $w_{1}>0$: this is a singular case for which
$\mathbf{P}(K_{1}^{+}=n)=\delta_{n,1}$.
In view of $F(a,b;c;z)=F(b,a;c;z)$, the Yule–Simon distribution with $a=b=1$
and $c=w_{1}+2$ is the only one in the class (21) to be identifiable
(different parameters yield different distributions).
### 4.2. Falling factorial moments of $K_{1}$
$K_{1}^{+}=K_{1}+1$ is an important random variable if one considers that the
first occurrence of a success may stop some ongoing process.
With $a=1$, $b=w_{2}$, $c=w+1$ and $i$ an integer, using the special value and
differentiation identities
$F(a,b;c;1)=\frac{[c-a]_{a}}{[c-a-b]_{a}}\qquad\text{and}\qquad\frac{{\rm
d}^{i}}{{\rm d}z^{i}}F(a,b;c;z)=\frac{[a]_{i}[b]_{i}}{[c]_{i}}F(a+i,b+i;c+i;z),$
evaluated at $z=1$, with $(K_{1})_{i}=K_{1}(K_{1}-1)\cdots(K_{1}-i+1)$, when
$i<w_{1}$, we get the descending $i$-th factorial moments of $K_{1}$ as
$\mathbf{E}[(K_{1})_{i}]=\varphi^{(i)}(1)=\frac{i![w_{2}]_{i}}{[w_{1}-i]_{i}},\qquad
i<w_{1},$
where $\varphi(z):=\mathbf{E}(z^{K_{1}})=\frac{w_{1}}{w}F(1,w_{2};w+1;z)$. In
particular, if $w_{1}>1$, $\mathbf{E}(K_{1})=w_{2}/(w_{1}-1)<\infty$ and, if
$w_{1}>2$,
${\rm
Var}(K_{1})=\varphi^{\prime\prime}(1)+\varphi^{\prime}(1)-(\varphi^{\prime}(1))^{2}=\frac{w_{1}(w-1)\mathbf{E}(K_{1})}{(w_{1}-1)(w_{1}-2)}<\infty.$
Overdispersion holds. The mean $\mathbf{E}(K_{1}^{+})=\frac{w-1}{w_{1}-1}>1$
and the variance ${\rm Var}(K_{1}^{+})={\rm Var}(K_{1})$ of $K_{1}^{+}$ (if
they exist) may be used to estimate $(w_{1},w_{2})$ by the method of moments
provided empirical values of these quantities are available.
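These moment formulas are straightforward to probe by simulation. The following sketch (illustrative parameters $w_{1}=3$, $w_{2}=1$, chosen so that the mean exists; helper names are ours) draws the trials directly and compares the empirical mean of $K_{1}$ with $w_{2}/(w_{1}-1)$:

```python
import random

def sample_K1(w1, w2, rng):
    """Number of failures before the first success when
    P(I_m = 1) = w1/(w + m - 1), m = 1, 2, ..."""
    w = w1 + w2
    m = 1
    while rng.random() >= w1 / (w + m - 1):
        m += 1
    return m - 1                       # K_1 = K_1^+ - 1

rng = random.Random(7)
samples = [sample_K1(3.0, 1.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)     # predicted: w2/(w1 - 1) = 0.5
```

The variance formula can be probed in the same way, though with tail index $w_{1}=3$ the sample variance converges slowly, so a much larger sample (or a larger $w_{1}$) would be advisable for that check.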
### 4.3. MLE estimator of $(w_{1},w_{2})$ from $K_{1}^{+}$
If we have an $L$-sample $(n_{1},\ldots,n_{L})$ for the time $K_{1}^{+}$ to
first success,
$\mathbf{P}(K_{1}^{+}(1)=n_{1},\ldots,K_{1}^{+}(L)=n_{L})=w_{1}^{L}\prod_{l=1}^{L}\frac{1}{w+n_{l}-1}\frac{[w_{2}]_{n_{l}-1}}{[w]_{n_{l}-1}}.$
Considering
$\partial_{w_{k}}\log\mathbf{P}(K_{1}^{+}(1)=n_{1},\ldots,K_{1}^{+}(L)=n_{L})=0$
for $k\in\\{1,2\\}$ yields an MLE $(\widehat{w}_{1},\widehat{w}_{2})$ for
$(w_{1},w_{2})$ based on the histogram of the observed time-to-first-success
sample $(n_{1},\ldots,n_{L})$. With
$\widehat{w}=\widehat{w}_{1}+\widehat{w}_{2}$, we get
$\frac{L}{\widehat{w}_{1}}-\sum_{l=1}^{L}\frac{1}{\widehat{w}+n_{l}-1}-\sum_{l=1}^{L}(\Psi(\widehat{w}+n_{l}-1)-\Psi(\widehat{w}))=0$
and
$-\sum_{l=1}^{L}\frac{1}{\widehat{w}+n_{l}-1}-\sum_{l=1}^{L}\big{(}\Psi(\widehat{w}+n_{l}-1)-\Psi(\widehat{w})-\Psi(\widehat{w}_{2}+n_{l}-1)+\Psi(\widehat{w}_{2})\big{)}=0.$
The first equation gives $\widehat{w}_{1}$ as a function of $\widehat{w}$ (and
the data) and so $\widehat{w}_{2}=\widehat{w}-\widehat{w}_{1}$ as a function
of $\widehat{w}$. Plugging this expression of $\widehat{w}_{2}$ into the
second equation yields an equation in the single variable $\widehat{w}$ that
can be solved from the data. A separate expression of both $\widehat{w}_{1}$
and $\widehat{w}_{2}$ then follows. Asymptotic normality of this estimator is
proved in [11], together with an expression of the Fisher information matrix.
### 4.4. The Ewens case $w_{2}=0$
In a sampling problem from a Poisson–Dirichlet partition $\text{PD}(\theta)$
of the unit interval modeling species abundances, the law of the number
$S_{n}=\sum_{m=1}^{n}I_{m}$ of distinct sampled species for a size $n$ uniform
sample obeys (5), [4], [1] and [24], with $w_{1}=\theta$, $w_{2}=0$ and
$S_{1}=1$, corresponding to $K_{1}^{+}=1$. Because sampling is modeled as
uniform throws on a partition of the unit interval, a new species is
necessarily sampled on day $n=1$, but new species, having smaller abundances,
become increasingly unlikely to be sampled subsequently. The $\text{PD}(\theta)$
partition of the unit interval has countably many pieces, so the sampling
process potentially never stops. Here $K_{l}^{+}$ ($l\geq 2$) is the sample
size till $l$ new species have been sampled with, from (14)
(22)
$\mathbf{P}(K_{l}^{+}=n)=w_{1}[z^{l-1}]\frac{[w_{1}z]_{n-1}}{[w_{1}]_{n}}=w_{1}^{l-1}\frac{|s_{n-1,l-1}|}{[w_{1}+1]_{n-1}},\qquad
n\geq l.$
This distribution seems to be new. Note the resulting ‘vertical’ identity for
the $|s_{n,l}|$’s: $\sum_{n\geq l}\frac{|s_{n,l}|}{[w_{1}+1]_{n}}=w_{1}^{-l}$
for all $w_{1}>0$.
The random variable $K_{2}^{+}$ is the time to second non-trivial discovery of
a new species (after $K_{1}^{+}=1$), with, recalling $|s_{n-1,1}|=(n-2)!$,
(23) $\mathbf{P}(K_{2}^{+}=n)=w_{1}\frac{(n-2)!}{[w_{1}+1]_{n-1}},\qquad n\geq
2,$
reducing to $\mathbf{P}(K_{2}^{+}=n)=1/(n(n-1))$ when $w_{1}=1$. With
$K_{2}^{+}-1\geq 0$ the time elapsed since $K_{1}^{+}=1$, we thus have
$\mathbf{E}(z^{K_{2}^{+}-1})=w_{1}\int_{0}^{z}\frac{F(1,1;w_{1}+1;t)-1}{t}{\rm
d}t.$
The above theory applies to this fundamental Ewens model. Given $S_{n}=k$, the
probability of discovering a new species at time $n+1$ is $w_{1}/(w_{1}+n)$,
decaying inversely proportionally to $n$, independently of $k$. Recall from
(10) that the MLE $\widehat{w}_{1}$ for $w_{1}$ is characterized by
$k/\widehat{w}_{1}=\Psi(\widehat{w}_{1}+n)-\Psi(\widehat{w}_{1})$ and hence
only depends on $k=i_{1}+\cdots+i_{n}$. See [22, p. 41, Eq. (3.7.7)].
## 5\. An extension of the harmonic Bernoulli trial
With $\alpha>0$, consider the inhomogeneous Bernoulli trial with
$\mathbf{P}(I_{m}=1)=w_{1}/(w+m^{\alpha}-1)$, $m\in\mathbb{N}$.
For $\alpha\in(0,1)$ the successful events are more frequent than for
$\alpha=1$. Then,
$\mu_{n}:=\mathbf{E}(S_{n})=\sum_{m=1}^{n}\mathbf{P}(I_{m}=1)=w_{1}\sum_{m=0}^{n-1}1/(w+m^{\alpha})\sim\frac{w_{1}}{1-\alpha}n^{1-\alpha}$
as $n\to\infty$ and
$\sigma_{n}^{2}:={\rm
Var}(S_{n})=\sum_{m=1}^{n}\mathbf{P}(I_{m}=1)\mathbf{P}(I_{m}=0)=w_{1}\sum_{m=0}^{n-1}\frac{w_{2}+m^{\alpha}}{(w+m^{\alpha})^{2}}\sim\frac{w_{1}}{1-\alpha}n^{1-\alpha}$
and the law of $S_{n}$ is close in the sense of total variation distance to
$P_{n}\overset{d}{\sim}{\rm Poi}(\mu_{n})$ for this new $\mu_{n}$ now growing
algebraically with $n$.
Clearly also,
(24)
$\frac{w_{1}(K_{l}^{+})^{1-\alpha}}{l(1-\alpha)}\overset{\text{a.s.}}{\to}1\text{
as }l\to\infty\text{ and
}\frac{w_{1}(K_{l}^{+})^{1-\alpha}/(1-\alpha)-l}{\sqrt{l}}\overset{d}{\to}\mathcal{N}(0,1)\text{
as }l\to\infty.$
The $l$-th success thus occurs much sooner than when $\alpha=1$.
If $\alpha>1$, then $S_{n}$ converges in distribution to a Poisson random
variable with finite mean
$\mu_{\infty}:=\lim_{n\to\infty}\mu_{n}=w_{1}\sum_{m=0}^{\infty}1/(w+m^{\alpha})$.
## 6\. A related random walk with disasters
Bernoulli trials with unequal harmonic success probabilities are also relevant
in the context of growth-collapse random walks with disasters. Discrete-time
integral-valued growth-collapse processes where long periods of linear growth
alternate with rare catastrophic events occur in a large variety of systems. A
collapse or catastrophic event is when the size of some population shrinks by
a random number of units, not exceeding the current system’s size. A total
disaster is when the size of the system shrinks instantaneously to zero (a
massive extinction event). Disastrous growth-collapse models occur as models
for population growth subject to rare catastrophic extinction events.
A one-parameter version of such discrete-time models was investigated in [8].
Here, holding probabilities were allowed (with some probability the system’s
size can be left unchanged) and pure reflection at the origin was assumed
(once in state zero, the system’s size grows by one unit with probability
$1$). Whenever zero is a reflection/absorption barrier, periods of plenty will
alternate with periods of scarcity. We herewith focus on discrete-time
disastrous growth-collapse models with no holding probability and with zero
either standing for a reflection or an absorption barrier. The probabilities
of either growth or disastrous events will be chosen to be dependent on the
current state as in the Bernoulli model with harmonic success probabilities,
and this will favor large populations in the long run.
With $\alpha>0$, define $q_{n}:=w_{1}/(w+n^{\alpha})$ and $p_{n}:=1-q_{n}$,
$n\in\mathbb{N}_{0}$. With $(U_{m},m\in\mathbb{N})$ an iid sequence of
uniforms,
(25) $N_{m+1}:=(N_{m}+1)\mathbf{1}(U_{m+1}\leq p_{N_{m}}),\qquad N_{0}\geq 0,$
defines a time-homogeneous Markov chain $(N_{m},m\in\mathbb{N}_{0})$ that
moves from state $n$ to state $n+1$ with probability $p_{n}$ or is sent from
state $n$ to state $0$ with probability $q_{n}$ (a disaster event).
The transition matrix of this Markov chain with state-space $\mathbb{N}_{0}$
is
$P=\left(\begin{array}[]{cccccc}q_{0}&p_{0}&&&&\cdots\\\
q_{1}&0&p_{1}&&&\cdots\\\ \vdots&\vdots&\ddots&\ddots&&\cdots\\\
q_{n}&0&\cdots&0&p_{n}&\cdots\\\
\vdots&\vdots&&&\ddots&\ddots\end{array}\right).$
Let us distinguish two cases.
Case 1. Assume that $w_{2}=0$. In this case state $0$ is absorbing. Let
$n\in\mathbb{N}$. The probability
$\mathbf{P}(N_{m}\to\infty\,|\,N_{0}=n)=\prod_{m\geq n}p_{m}$ is equal to $0$
if and only if $\sum_{m\geq n}q_{m}=\infty$, which in turn holds if and only
if $\alpha\leq 1$. Thus, for $\alpha\leq 1$ the chain
$(N_{m},m\in\mathbb{N}_{0})$, started from state $N_{0}\equiv n$, will
eventually go extinct. For $\alpha>1$ the chain, started from state $n$, will
tend to infinity with probability $\prod_{m\geq n}p_{m}>0$ and go extinct with
complementary probability $1-\prod_{m\geq n}p_{m}$. The extinction time
$\tau_{n,0}:=\inf\\{m\in\mathbb{N}_{0}:N_{m}=0,N_{0}=n\\}$ has pgf
$\mathbf{E}(z^{\tau_{n,0}})=\sum_{m\geq n}q_{m}z^{m-n+1}\prod_{k=n}^{m-1}p_{k}$,
$|z|<1$, and $\tau_{n,0}$ takes the value $\infty$ with probability
$\prod_{m\geq n}p_{m}$, which is strictly positive if and only if $\alpha>1$.
Case 2. Assume that $w_{2}>0$. Then state $0$ is reflecting and all states are
communicating since $w_{1}>0$ by assumption. The chain
$(N_{m},m\in\mathbb{N}_{0})$ is hence irreducible and obviously aperiodic.
This is a small variation of a Markov chain whose salient statistical features
were studied in [5]. From the study in [5] we conclude that:
* •
For $\alpha>1$ the chain is transient. After a finite number of returns to $0$
(excursions) the chain drifts to infinity.
* •
For $\alpha<1$ the chain is positive recurrent with invariant probability
measure $\pi_{n}=\pi_{0}\prod_{k=0}^{n-1}p_{k}$, $n\in\mathbb{N}_{0}$, where
the normalizing constant $\pi_{0}$ is determined by
$\sum_{n=0}^{\infty}\pi_{n}=1$.
* •
For $\alpha=1$ (critical case) the chain is null-recurrent if $0<w_{1}\leq 1$
and positive recurrent if $w_{1}>1$. For the latter case $w_{1}>1$ the
invariant probability measure is given by
$\pi_{n}=\pi_{0}[w_{2}]_{n}/[w]_{n}$, $n\in\mathbb{N}_{0}$, with normalizing
constant $\pi_{0}:=(w_{1}-1)/(w-1)$, having heavy tails with index $w_{1}>1$.
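As a quick check of the critical invariant law (a sketch; the parameters $w_{1}=2$, $w_{2}=1$, $\alpha=1$ are illustrative, giving $\pi_{0}=(w_{1}-1)/(w-1)=1/2$ and $\pi_{1}=\pi_{0}w_{2}/w=1/6$), one can simulate the chain (25) and compare long-run occupation frequencies:

```python
import random

def occupation(w1, w2, alpha, steps, seed=1):
    """Simulate the growth-collapse chain (25): from state n, collapse to 0
    with probability q_n = w1/(w + n**alpha), else move to n + 1."""
    rng = random.Random(seed)
    w = w1 + w2
    n, visits = 0, {}
    for _ in range(steps):
        n = 0 if rng.random() < w1 / (w + n**alpha) else n + 1
        visits[n] = visits.get(n, 0) + 1
    return visits

steps = 200_000
visits = occupation(2.0, 1.0, 1.0, steps)
freq0 = visits.get(0, 0) / steps   # predicted pi_0 = 1/2
freq1 = visits.get(1, 0) / steps   # predicted pi_1 = 1/6
```

Note that with $w_{1}=2$ the excursion lengths have tail index $2$, so convergence of the empirical frequencies is slower than in light-tailed settings; a generous tolerance is appropriate.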
In the recurrent case ($\alpha\leq 1$) the sample paths of
$(N_{m},m\in\mathbb{N}_{0})$, started at $N_{0}=0$, are made of iid excursions
through state $0$. The first excursion has length $L_{1}^{+}$ and height
$L_{1}^{+}-1$, where $L_{1}^{+}:=\inf\\{m\in\mathbb{N}:N_{m}=0,N_{0}=0\\}$ is
the time elapsed till the first disaster. Clearly, in the positive recurrent
case ($\alpha<1$, or $\alpha=1$ and $w_{1}>1$) the invariant probability
measure has the general form
$\pi_{n}=\mathbf{P}(L_{1}^{+}>n)/\mathbf{E}(L_{1}^{+})$, $n\in\mathbb{N}_{0}$.
With $(L_{i}^{+}-1,i\in\mathbb{N})$ iid copies of the first excursion height
$L_{1}^{+}-1$, of interest for the control of overcrowding are the random
variables
$T_{1}(n):=\inf\\{m\in\mathbb{N}:N_{m}>n\mid N_{0}=n_{0}\\}\text{ and
}\inf\\{i\in\mathbb{N}:\max_{j\in\\{1,\ldots,i\\}}(L_{j}^{+}-1)>n\\},$
corresponding to the first (overcrossing) time the chain $N_{m}$ exceeds $n$
given $N_{0}=n_{0}<n$ and the number of the corresponding excursion.
Let $P_{(n)}$ be the truncated upper-left corner with size $(n+1,n+1)$ of the
full irreducible transition matrix $P$ of $N_{m}$ (its north-west part). With
$\mathbf{1}^{\prime}=(1,\ldots,1)$ and
$\mathbf{e}_{n_{0}}^{\prime}=(0,\ldots,0,1,0,\ldots,0)$ transpose row vectors
with size $n+1$ (with $1$ in position $n_{0}+1$ for
$\mathbf{e}_{n_{0}}^{\prime}$), it follows from Propositions $11$ and $12$ of
[6] that
(26)
$\mathbf{P}_{n_{0}}(T_{1}(n)>l)=\mathbf{e}_{n_{0}}^{\prime}P_{(n)}^{l}\mathbf{1},$
where $P_{(n)}^{l}$ is the $l$-th power of $P_{(n)}$. Note that
$\mathbf{P}(T_{1}(n)>l)=1$ for $l\in\\{1,\ldots,n-n_{0}\\}$. At this time
$T_{1}(n)$, the state of the chain $N_{m}$ is $n+1$ because the overshoot can
only be $1$. So $T_{1}(n)$ has geometric tails with decay-rate parameter the
spectral radius of $P_{(n)}$ and
$\mathbf{E}_{n_{0}}(T_{1}(n))=\mathbf{e}_{n_{0}}^{\prime}(I-P_{(n)})^{-1}\mathbf{1}.$
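The matrix-analytic formula (26) is easy to evaluate with the standard library, since each row of $P_{(n)}$ has at most two nonzero entries. The sketch below (illustrative parameters; function names are ours) iterates $v\mapsto P_{(n)}v$ starting from $v=\mathbf{1}$ and cross-checks the resulting survival function of $T_{1}(n)$ against direct simulation of the chain:

```python
import random

def survival_T1(w1, w2, alpha, n, n0, L):
    """out[l] = P_{n0}(T_1(n) > l) = e_{n0}' P_(n)^l 1, as in (26)."""
    w = w1 + w2
    q = [w1 / (w + j**alpha) for j in range(n + 1)]
    v = [1.0] * (n + 1)                  # v = P_(n)^l 1, starting at l = 0
    out = [1.0]
    for _ in range(L):
        v = [q[j] * v[0] + ((1 - q[j]) * v[j+1] if j < n else 0.0)
             for j in range(n + 1)]
        out.append(v[n0])
    return out

def empirical_T1(w1, w2, alpha, n, n0, runs, seed=3):
    """Monte Carlo draws of T_1(n) = inf{m : N_m > n}, N_0 = n0."""
    rng = random.Random(seed)
    w = w1 + w2
    times = []
    for _ in range(runs):
        state, t = n0, 0
        while state <= n:
            state = 0 if rng.random() < w1 / (w + state**alpha) else state + 1
            t += 1
        times.append(t)
    return times

surv = survival_T1(2.0, 1.0, 1.0, 3, 0, 60)
times = empirical_T1(2.0, 1.0, 1.0, 3, 0, 40_000)
```

Row $n$ of the truncated matrix contributes only $q_{n}v_{0}$, since the growth move out of state $n$ leaves the truncation; this is what makes $T_{1}(n)$ have geometric tails governed by the spectral radius of $P_{(n)}$.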
Clearly, given $N_{0}=n_{0}<n$, with $N_{l}^{*}=\max_{m\leq l}N_{m}$ the
extremal process of $N_{m}$, the events $N_{l}^{*}\leq n$ and $T_{1}(n)>l$
coincide, so (26) also gives the marginal law
$\mathbf{P}_{n_{0}}(N_{l}^{*}\leq n)$ of $N_{l}^{*}$.
The extremal chain $N_{l}^{*}$ only grows (by one unit) at the record times
$R_{k}:=\inf\\{r\in\mathbb{N}:r>R_{k-1},N_{r}>N_{R_{k-1}}\\}$ of $N_{m}$.
## 7\. A more general Markov model for the number of successes
As before, let $w_{1}>0$ and $w_{2}\geq 0$ and define $w:=w_{1}+w_{2}$. A more
general model can be introduced by taking an additional third parameter
$\alpha\in[0,1]$ and assuming that the number $S_{n}$ of successes forms a
Markov chain $(S_{n},n\in\mathbb{N}_{0})$ satisfying $S_{0}=0$ and
$\mathbf{P}(S_{n+1}=k+1\mid S_{n}=k)=1-\mathbf{P}(S_{n+1}=k\mid
S_{n}=k):=\frac{w_{1}+k\alpha}{w+n},\qquad n\in\mathbb{N}_{0}.$
In this case $S_{n}$ coincides with the number of occupied tables in the
restaurant process with a cocktail bar [12] after $n$ customers have entered
the restaurant. For $\alpha=0$ we are back to the model studied before. For
$\alpha>0$ the transition probabilities of the random walk
$(S_{n},n\in\mathbb{N})$ now depend not only on the time $n$ but also on the
current state $S_{n}=k$. The distribution of $S_{n}$ can be expressed as (see
[12, Eq. (14)])
$\mathbf{P}(S_{n}=k)=\frac{[w_{1}|\alpha]_{k}}{[w]_{n}}S(n,k;-1,-\alpha,w_{2}),\qquad
k\in\\{0,\ldots,n\\},$
where $[w_{1}|\alpha]_{0}:=1$,
$[w_{1}|\alpha]_{k}:=\prod_{i=0}^{k-1}(w_{1}+i\alpha)$ for $k\in\mathbb{N}$,
and $S(n,k;-1,-\alpha,w_{2})$ denotes the generalized Stirling numbers in the
notation of Hsu and Shiue [7], which can be calculated as follows. For
$\alpha=0$ it follows from (3) that
$S(n,k;-1,0,w_{2})=\sum_{l=k}^{n}\binom{l}{k}w_{2}^{l-k}|s_{n,l}|$,
$k\in\\{0,\ldots,n\\}$. For $\alpha\neq 0$, the Dobiński-type formula [7,
Theorem 4] yields
$S(n,k;-1,-\alpha,w_{2})=\frac{\alpha^{-k}}{k!}\sum_{l=0}^{k}(-1)^{l}\binom{k}{l}[w_{2}-l\alpha]_{n},\qquad
k\in\\{0,\ldots,n\\}.$
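As a numerical check that these generalized Stirling numbers are consistent with the law of $S_{n}$ (a sketch with illustrative parameters; helper names are ours), one can verify the normalization $\sum_{k=0}^{n}[w_{1}|\alpha]_{k}S(n,k;-1,-\alpha,w_{2})=[w]_{n}$, i.e. $\sum_{k}\mathbf{P}(S_{n}=k)=1$:

```python
from math import comb, factorial, prod

def rising(x, n):
    """Rising factorial [x]_n = x (x+1) ... (x+n-1)."""
    return prod(x + i for i in range(n))

def gen_stirling(n, k, alpha, w2):
    """S(n, k; -1, -alpha, w2) via the Dobinski-type formula (alpha != 0)."""
    s = sum((-1)**l * comb(k, l) * rising(w2 - l * alpha, n)
            for l in range(k + 1))
    return alpha**(-k) / factorial(k) * s

def pmf_Sn(n, k, w1, w2, alpha):
    """P(S_n = k) = [w1|alpha]_k S(n, k; -1, -alpha, w2) / [w]_n."""
    w1a = prod(w1 + i * alpha for i in range(k))
    return w1a * gen_stirling(n, k, alpha, w2) / rising(w1 + w2, n)
```

For instance, $\mathbf{P}(S_{1}=1)=w_{1}/w$ and $\mathbf{P}(S_{1}=0)=w_{2}/w$ are recovered exactly, since $S(1,1;-1,-\alpha,w_{2})=1$ and $S(1,0;-1,-\alpha,w_{2})=w_{2}$.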
Note that $\mathbf{P}(S_{n}=0)=[w_{2}]_{n}/[w]_{n}$ does not depend on
$\alpha\in[0,1]$. In particular, for any $n\in\mathbb{N}$,
$\mathbf{P}(S_{n}=0)=0$ if and only if $w_{2}=0$. Formulas for the moments of
$S_{n}$ are provided in [12, Section 6.1] for $\alpha=0$ and in [12, Corollary
1] for $\alpha>0$. The behavior of $S_{n}$ for $\alpha>0$ differs
substantially from the case $\alpha=0$. For $\alpha>0$, as $n\to\infty$,
$S_{n}/n^{\alpha}$ converges almost surely and in $L^{p}$ for any $p>0$ to a
limiting random variable being three-parameter
$(\alpha,\beta,\gamma)$-Mittag–Leffler distributed, where $\beta:=w$ and
$\gamma:=w_{1}/\alpha$, see [12, Theorem 3]. We refer the reader to [12,
Section 7] for further details on the three-parameter Mittag–Leffler
distribution ${\rm ML}(\alpha,\beta,\gamma)$. For $\alpha=1$ the limiting
distribution ${\rm ML}(1,w,w_{1})=\beta(w_{1},w_{2})$ is the beta distribution
with parameters $w_{1}$ and $w_{2}$, in agreement with well-known results for
standard Pólya urns.
If $w_{2}=0$ then $S_{n}$ counts the number of distinct species in a sample of
size $n$ taken from Pitman and Yor’s [17] two-parameter stick-breaking ${\rm
PD}(\alpha,w_{1})$-partition of the unit interval, extending the Ewens case.
We refer the reader to Chapter 3 of Pitman’s lecture notes [16] for further
information on the two-parameter model and to Yamato and Sibuya [25] and
Yamato, Sibuya and Nomachi [26] for some further related works.
Assume now that $w_{2}>0$. In this case $S_{n}$ may no longer be seen, stricto
sensu, as the number of new species in a sample of size $n$ taken from a
partition of the unit interval. However (see [12, Theorem 2]), $S_{n}$ is the
number of new species (excluding a ‘fictitious species’ $0$ with beta
distributed ‘abundance’ $B_{0}\stackrel{{\scriptstyle
d}}{{=}}\beta(w_{2},w_{1})$) in a sample of size $n$ drawn from a kind of
three-parameter Poisson–Dirichlet partition ${\rm
PD}(\alpha,w_{1},w_{2}):=(B_{0},(1-B_{0})\text{PD}(\alpha,w_{1}))$, where
$B_{0}$ is independent of ${\rm PD}(\alpha,w_{1})$.
Note that $K_{1}^{+}:=\inf\\{n\in\mathbb{N}:S_{n}=1\\}$ has distribution
$\mathbf{P}(K_{1}^{+}=n)=\mathbf{P}(S_{n-1}=0)\mathbf{P}(S_{n}=1\mid
S_{n-1}=0)=w_{1}\frac{[w_{2}]_{n-1}}{[w]_{n}},\quad n\in\mathbb{N},$
so that $S_{n}^{+}:=S_{n+K_{1}^{+}-1}$ (with $S_{1}^{+}=1$) coincides (in law)
with the number of new species from a PD$(\alpha,w_{1})$-partition of the unit
interval. Whenever a sample hits the ‘fictitious species’ $0$, sampling simply
fails to draw any new species: this event thus represents the possibility that
a sampling attempt fails outright. The probability that in a sample
of size $n$ there are $n_{0}$ failure events clearly is the beta binomial
probability mass function
$\binom{n}{n_{0}}[w_{2}]_{n_{0}}[w_{1}]_{n-n_{0}}/[w]_{n}$,
$n_{0}\in\\{0,\ldots,n\\}$. If $\alpha=0$ then $S_{n}$ is the number of new
species (excluding the ‘fictitious species’ $0$ with ‘abundance’ $B_{0}$) in a
sample of size $n$ drawn from the partition ${\rm
PD}(0,w_{1},w_{2})=(B_{0},(1-B_{0}){\rm PD}(0,w_{1}))$, extending the Ewens
case.
Let $n_{0}\in\mathbb{N}_{0}$ and $n_{1},\ldots,n_{k}\in\mathbb{N}$ and put
$n:=n_{0}+\cdots+n_{k}$. Note that
$\displaystyle\mathbf{P}(S_{n}=k,N_{n}(0)=n_{0},N_{n}(1)=n_{1},\ldots,N_{n}(k)=n_{k})$
(27) $\displaystyle=\
n!\frac{[w_{1}|\alpha]_{k}}{[w]_{n}}\frac{[w_{2}]_{n_{0}}}{n_{0}!}\prod_{l=1}^{k}\frac{[1-\alpha]_{n_{l}-1}}{(n_{l}-1)!\sum_{j=l}^{k}n_{j}}$
is the joint distribution that there are $n_{0}$ visits to the reservoir set
with size $B_{0}$ (accounting for early failure events of the sampling
process, or missed samples) and $S_{n}=k$ distinct visited species in order of
appearance with positive sample sizes $n_{1},\ldots,n_{k}$ not in the
reservoir. For $w_{2}=0$, (27) reduces to the two-parameter
Donnelly–Tavaré–Griffiths distribution ${\rm DTG}(w_{1},\alpha)$ (see [26,
Theorem 1])
(28) $\mathbf{P}(S_{n}=k,N_{n}(1)=n_{1},\ldots,N_{n}(k)=n_{k})\ =\
n!\frac{[w_{1}|\alpha]_{k}}{[w_{1}]_{n}}\prod_{l=1}^{k}\frac{[1-\alpha]_{n_{l}-1}}{(n_{l}-1)!\sum_{j=l}^{k}n_{j}}.$
For $\alpha=0$, (28) reduces to
$\mathbf{P}(S_{n}=k,N_{n}(1)=n_{1},\ldots,N_{n}(k)=n_{k})\ =\
n!\frac{w_{1}^{k}}{[w_{1}]_{n}}\prod_{l=1}^{k}\frac{1}{\sum_{j=l}^{k}n_{j}},$
which is [23, Eq. (1)] with $\alpha$ there replaced by $w_{1}$.
Summing (27) over all $n_{1},\ldots,n_{k}\in\mathbb{N}$ with
$n_{1}+\cdots+n_{k}=n-n_{0}$, the joint probability that, in a sample of size
$n$, there are $S_{n}=k$ new sampled species and $n_{0}\leq n$ visits to the
‘fictitious species’ is thus
$\mathbf{P}(N_{n}(0)=n_{0},S_{n}=k)\ =\
\binom{n}{n_{0}}\frac{[w_{2}]_{n_{0}}[w_{1}|\alpha]_{k}}{[w]_{n}}S(n-n_{0},k;-1,-\alpha,0).$
Observing that
$\sum_{k=0}^{n-n_{0}}S(n-n_{0},k;-1,-\alpha,0)[w_{1}|\alpha]_{k}=[w_{1}]_{n-n_{0}}$,
the probability that, in a sample of size $n$, there are $n_{0}\leq n$ visits
to the ‘fictitious species’ is thus the beta-binomial probability
$\mathbf{P}(N_{n}(0)=n_{0})=\binom{n}{n_{0}}[w_{2}]_{n_{0}}[w_{1}]_{n-n_{0}}/[w]_{n}$,
in agreement with the explanations above.
## Acknowledgment
T. Huillet acknowledges partial support from the Chair ‘Modélisation
mathématique et biodiversité’ of Veolia-Ecole Polytechnique-MNHN-Fondation X
and from the labex MME-DII Center of Excellence (Modèles mathématiques et
économiques de la dynamique, de l’incertitude et des interactions,
ANR-11-LABX-0023-01 project). This work was also funded by CY Initiative of
Excellence (grant ‘Investissements d’Avenir’ ANR-16-IDEX-0008), Project
‘EcoDep’ PSI-AAP2020-0000000013.
## References
* [1] Arratia, R., Barbour, A. D. and Tavaré, S. (1992). Poisson process approximations for the Ewens sampling formula. Ann. Appl. Probab. 2(3), 519–535. MR1177897
* [2] Barbour, A. D. and Hall, P. (1984). On the rate of Poisson convergence. Math. Proc. Cambridge Philos. Soc. 95(3), 473–480. MR0755837
* [3] Denisov, D., Sakhanenko, A. and Wachtel, V. (2018). First-passage times for random walks with nonidentically distributed increments. Ann. Probab. 46(6), 3313–3350. MR3857857
* [4] Ewens, W. J. (1972). The sampling theory of selectively neutral alleles. Theoret. Population Biol. 3, 87–112; erratum, ibid. 3, 240; erratum, ibid. 3, 376. MR0325177
* [5] Goncalves, B. and Huillet, T. (2020). Scaling features of two special Markov chains involving total disasters. J. Stat. Phys. 178(2), 499–531. MR4055249
* [6] Goncalves, B. and Huillet, T. (2021). Keeping random walks safe from extinction and overpopulation in the presence of life-taking disasters. Math. Popul. Stud. Online first publication: https://doi.org/10.1080/08898480.2021.1976476.
* [7] Hsu, L. C. and Shiue, P. J.-S. (1998). A unified approach to generalized Stirling numbers. Adv. in Appl. Math. 20(3), 366–384. MR1618435
* [8] Huillet, T. E. (2011). On a Markov chain model for population growth subject to rare catastrophic events. Physica A 390, no 23–24, 4073–4086.
* [9] Johnson, N. L. and Kotz, S. (1977). _Urn Models and Their Application_. Wiley, New York. MR0488211
* [10] Karlin, S. (1966). _A First Course in Stochastic Processes_. Academic Press, New York-London. MR0208657
* [11] Kozubowski, T. J. and Podgórski, K. (2018). A generalized Sibuya distribution. Ann. Inst. Statist. Math. 70(4), 855–887. MR3830290
* [12] Möhle, M. (2021). A restaurant process with cocktail bar and relations to the three-parameter Mittag–Leffler distribution. J. Appl. Probab. 58(4), 978–1006. MR4342591
* [13] Neuts, M. F. (1967). Waiting times between record observations. J. Appl. Probab. 4(1), 206–208. MR0208652
* [14] Pemantle, R. (2007). A survey of random processes with reinforcement. Probab. Surv. 4, 1–79. MR2282181
* [15] Pitman, J. (1997). Probabilistic bounds on the coefficients of polynomials with only real zeros. J. Combin. Theory Ser. A 77(2), 279–303. MR1429082
* [16] Pitman, J. (2006). _Combinatorial Stochastic Processes_ , _Lecture Notes in Mathematics_ 1875, Springer, Berlin. MR2245368
* [17] Pitman, J. and Yor, M. (1997). The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator. Ann. Probab. 25(2), 855–900. MR1434129
* [18] Rényi, A. (1962). On outstanding values of a sequence of observations. In: Selected papers of A. Rényi, Vol. 3, pp. 50–65, Akadémiai Kiadó, Budapest.
* [19] Rényi, A. (1962). Théorie des éléments saillants d’une suite d’observations. Ann. Fac. Sci. Univ. Clermont-Ferrand, 8(2), 7–13. MR0286162
* [20] Sevast’yanov, B. A. (1972). Poisson limit law for a scheme of sums of dependent random variables. Theory Probab. Appl. 17(4), 695–699. (Russian original reviewed in MR0310943).
* [21] Sibuya, M. (1979). Generalized hypergeometric, digamma and trigamma distributions. Ann. Inst. Statist. Math. 31(3), 373–390. MR0574816
* [22] Tavaré, S. and Zeitouni, O. (2004). _Lectures on Probability Theory and Statistics_. Lecture Notes in Mathematics 1837, Springer, Berlin. MR2071629
* [23] Yamato, H. (1997). On the Donnelly–Tavaré–Griffiths formula associated with the coalescent. Commun. Statist. Theory Methods 26(3), 589–599. MR1436290
* [24] Yamato, H. (2017). Poisson approximations for sum of Bernoulli random variables and its application to Ewens sampling formula. J. Japan Statist. Soc. 47(2), 187–195. MR3791201
* [25] Yamato, H. and Sibuya, M. (2000). Moments of some statistics of Pitman sampling formula. Bull. Inform. Cybernet. 32(1), 1–10. MR1792352
* [26] Yamato, H., Sibuya, M. and Nomachi, T. (2001). Ordered sample from two-parameter GEM distribution. Statist. Probab. Lett. 55(1), 19–27. MR1860188
* [27] Yule, G. U. (1925). A mathematical theory of evolution based on the conclusions of Dr. J. C. Willis, F.R.S., Philos. Trans. Roy. Soc. London Ser. B 213, 21–87.
|
# From Actions to Events: A Transfer Learning Approach Using Improved Deep Belief Networks

Mateus Roder1, Jurandy Almeida2, Gustavo H. de Rosa1, Leandro A. Passos1, André L. D. Rossi1, João P. Papa1
1Department of Computing, São Paulo State University – UNESP, Bauru, Brazil
{mateus.roder, gustavo.rosa, leandro.passos, andre.rossi<EMAIL_ADDRESS>
2Instituto de Ciência e Tecnologia, Universidade Federal de São Paulo – UNIFESP, São José dos Campos, Brazil
<EMAIL_ADDRESS>
###### Abstract
In the last decade, exponential data growth boosted the capacity of machine
learning-based algorithms and enabled their usage in daily-life activities.
Such an improvement is partially explained by the advent of deep learning
techniques, i.e., stacks of simple architectures that end up in more complex
models. Although both factors produce outstanding results, they also pose
drawbacks regarding the learning process, as training complex models over
large datasets is expensive and time-consuming. Such a problem is even more
evident when dealing with video analysis. Some works have considered transfer
learning or domain adaptation, i.e., approaches that map the knowledge from
one domain to another, to ease the training burden, yet most of them operate
over individual or small blocks of frames. This paper proposes a novel
approach to map the knowledge from action recognition to event recognition
using an energy-based model, denoted as Spectral Deep Belief Network. Such a
model can process all frames simultaneously, carrying spatial and temporal
information through the learning process. The experimental results conducted
over two public video datasets, the HMDB-51 and the UCF-101, depict the
effectiveness of the proposed model and its reduced computational burden when
compared to traditional energy-based models, such as Restricted Boltzmann
Machines and Deep Belief Networks.
## I Introduction
Machine Learning (ML) techniques emerged in the last decades as revolutionary
tools capable of solving or slightly alleviating the burden imposed by
repetitive and tedious tasks. Recently, the advent of Deep Learning (DL)
algorithms powered up such advances, providing astonishing predictions over
complex domains. On the other hand, video-based domain tasks still pose
challenging assignments to intelligent algorithms, e.g., recognizing human
actions in videos [1].
An action recognition task is characterized by an observation of a complete
sequence of movements performed by a human followed by its classification [2].
Such a task plays a fundamental role in video surveillance-based security
systems, content-based video retrieval, and self-driving cars, among others.
Event recognition is a similar task that models the ability to retrieve
specific actions from sequences of videos, focusing on learning behaviors
related to events of interest, i.e., events which comprise specific activities
and objects in a given scene. Thus, there is a slight difference between
action and event recognition tasks, i.e., while the former attempts to
identify any action performed in the scene, the latter targets specific
movements, such as “is there anybody drinking in the street?” or “does this
video contain any unusual behavior?”. Event recognition approaches are
commonly employed for monitoring public or private areas in search of
anomalous behaviors, such as violent assaults at sports events, abandoned
objects in train stations, and route obstructions in industrial environments,
among others.
Despite their similarities, both action and event recognition tasks present
their particularities, posing distinct challenges while training an
intelligent model. An interesting approach to solve event recognition
tasks is to import correlated knowledge extracted from action
recognition-based models, a strategy denoted as transfer learning.
Transfer learning studies the possibilities of transferring knowledge from
source domains to different contexts (target domains). To illustrate such an
idea, consider an autonomous-driving car trained with road-traffic data from
New Zealand. It will not work effectively on Brazilian streets due to
different signage and road-traffic rules, a distinct influx of vehicles, and
right-lane-based driving, among other issues. However, adapting the knowledge
learned in New Zealand to Brazil can reduce computational costs and time
needed to train new models [3]. In a nutshell, transfer learning assumes that
two or more feature space distributions are related or identical, and that the
tasks associated with them may also be related or identical, so the source
domain provides useful information for the target task [4].
Several works addressed the problem of action and event recognition through
transfer learning strategies. Farajidavar et al. [5], for instance, proposed a
transductive transfer learning method for action recognition in tennis games.
Further, Shao et al. [6] published a survey comprising several approaches for
these tasks, such as space approximation [7], Gaussian mixture model [8], and
geometric reasoning [9]. Recently, novel approaches considered DL methods,
such as Wang et al. [10], who introduced Generative Adversarial Networks for
action recognition using partial-modalities. Tas et al. [11] employed a
Convolutional Neural Network (CNN) for action recognition and supervised
domain adaptation on 3D body skeletons. Furthermore, Gao et al. [12]
introduced a two-stream graph CNN for zero-shot action recognition, while Liu
et al. [13] proposed image-to-video adaptation and fusion networks in the same
context.
Among several DL algorithms, an energy-based model known as Deep Belief
Network (DBN) [14] obtained considerable popularity in the last years due to
its notable results in a wide variety of applications [15, 16]. It is composed
of multiple hidden layers, such that each layer is a greedily trained
Restricted Boltzmann Machine (RBM) [17]. Even though some works proposed DBNs
for action [18, 19] and a specific type of event recognition [20], as far as
we know, there is still no work that has successfully addressed the concepts
of transfer learning from actions to events with video through DBNs, i.e., to
learn useful features from highly structured actions and movements to the
generalization of high-level events. Therefore, the main contributions of this
work are threefold: (i) to introduce DBNs in the video event classification
domain, (ii) to propose two approaches, denoted as Aggregative-DBN and
Gradient-DBN, which employ frame fusion and image gradients, respectively, and
(iii) to help address the lack of video-based event recognition studies in the
literature.
Additionally, we consider the following hypotheses: (i) DBNs are able to learn
useful correlations that map actions to high-level events in video-based
domain tasks; (ii) the overall accuracy can be improved using the proposed
approaches; and (iii) the overall training time can be reduced with the
Aggregative-based approach.
The remainder of this paper is organized as follows. Section II introduces the
main theoretical concepts used in the manuscript, while Section III presents
the proposed approaches. Further, Sections IV and V describe the methodology
adopted in this work and the experimental results. Finally, Section VI states
conclusions and future work.
## II Theoretical Background
### II-A Video-based Domain
Let $\mathcal{F}=\\{F_{1},F_{2},\dots,F_{n}\\}$ be a temporal sequence of
frames (images) $F_{i}$, which represents a soundless scene and possible
movements of its components. Such frames can be classified according to the
complexity of their internal representations and the interaction level between
their entities. Thus, it is possible to generate four classification-based
categories: attributes and movements, low-level events and actions,
interaction, and high-level events [21].
Figure 1 illustrates the hierarchical form of the categories discussed
hereafter. The movement characterizes the lowest representation level of a
frame and is widely employed to recognize human actions, such as body movement
[22]. On the other hand, low-level events and actions represent a particular
chain of movements, usually carried out by an entity, e.g., a car or a person.
Additionally, if such actions are performed by, or involve interactions
between, more than one entity or object, it is possible to categorize them as
an interaction [21].
Finally, the highest-level category, denoted as complex events, represents the
interaction of entities or sequence of actions in a specific time window in
the video. For instance, one can identify a birthday party as an event
composed of several actions and entities in a single scene.
Therefore, the event recognition task attempts to detect the complex events’
spatial and temporal locations in a sequence of frames [21]. The literature
lacks a standardized distinction between actions and events, making the terms
interchangeable in most applications [21].
Figure 1: Video-based domain hierarchical complexity.
### II-B Applied Transfer Learning
Transfer learning has recently received attention due to the advent of the
ImageNet dataset (https://image-net.org/) and the increased processing power
of GPUs. Such a task consists in transferring knowledge from a source domain
to a target domain, which is useful mainly when data are insufficient or
computational resources are scarce. In this way, it is possible to train deep
neural networks on large-scale image/video datasets and use them to fine-tune
more specific tasks [23].
Given the previous concepts, it is possible to elucidate the mathematical
formulation regarding the problem addressed here. Let $\Gamma$ be a high-level
event recognition task, as well as let $\mathcal{D}_{S}$ and $\mathcal{D}_{T}$
be the source (action domain) and the target space domains (event domain),
respectively. Additionally, the source domain is composed of the subspaces
$\mathcal{A}\in\mathbb{R}^{d_{a}}$, $\mathcal{M}\in\mathbb{R}^{d_{m}}$, and
$\mathcal{I}\in\mathbb{R}^{d_{i}}$, where
$\\{\mathcal{A},\mathcal{M},\mathcal{I}\\}\subset\mathcal{D}_{S}$, while the
target domain is composed of $\mathcal{E}\in\mathbb{R}^{d_{e}}$, where
$\\{\mathcal{E}\\}\subset\mathcal{D}_{T}$. The subspace $\mathcal{A}$ stands
for the $d_{a}$-dimensional base actions, $\mathcal{M}$ stands for the
$d_{m}$-dimensional movements, $\mathcal{I}$ represents the interactions
between $d_{i}$-dimensional entities, and $\mathcal{E}$ stands for the
$d_{e}$-dimensional high-level events.
From the transfer learning theory [24], a specific case arises when the source
and target domains share the same probability distribution, so that
$\mathcal{D}_{S}$ is identical to $\mathcal{D}_{T}$ and they differ only in
the task, which is the problem addressed in this paper. Finally, it is
possible to formulate the proposed approach for the given task using Equation
1, as follows:
$\Gamma=\\{y_{\mathcal{D}_{T}},f(\mathcal{D}_{S})\\},$ (1)
where $y_{\mathcal{D}_{T}}$ stands for the target domain labels and
$f(\mathcal{D}_{S})$ for the function that learns features from the source
domain. Such learned features are useful for the target domain and its
respective event classification task, i.e., a neural network that learns from
$\mathcal{D}_{S}$ and is fine-tuned in $\mathcal{D}_{T}$ with labels
$y_{\mathcal{D}_{T}}$. Here, $\mathcal{D}_{T}$ has the same probability
distribution as $\mathcal{D}_{S}$, as aforementioned.
### II-C Restricted Boltzmann Machines
Restricted Boltzmann Machines (RBMs) [25, 17] are described as a bipartite
graph composed of two layers of neurons, i.e., a visible layer
$\textbf{v}\in\\{0,1\\}^{m}$, which is responsible for the input data, and a
hidden layer $\textbf{h}\in\\{0,1\\}^{n}$, whose units map the data
representation into a latent space. This interaction is modeled by a weight
matrix, $\textbf{W}\in\Re^{m\times n}$, which connects each visible unit
$v_{i}$ to all hidden units $h_{j}$, and vice-versa, denoted by the arc
$w_{ij}$.
The RBM learning procedure is performed by the minimization of an energy
function concerning some intrinsic variables, described as follows:
$E(\textbf{v},\textbf{h})=-\sum_{i=1}^{m}b_{i}v_{i}-\sum_{j=1}^{n}c_{j}h_{j}-\sum_{i=1}^{m}\sum_{j=1}^{n}v_{i}h_{j}w_{ij},$
(2)
where $\textbf{b}\in\Re^{m}$ and $\textbf{c}\in\Re^{n}$ stand for the bias
vector considering the visible and hidden layers, respectively.
Computing the joint probability of the system poses an intractable task due to
the increasing number of possible states. However, since the model is
represented as a bipartite graph, one can compute both the visible and hidden
units’ activation in a mutually independent fashion, performed as follows:
$P(v_{i}=1|\textbf{h})=\phi\left(\sum_{j=1}^{n}w_{ij}h_{j}+b_{i}\right),$ (3)
and
$P(h_{j}=1|\textbf{v})=\phi\left(\sum_{i=1}^{m}w_{ij}v_{i}+c_{j}\right).$ (4)
Note that $\phi(\cdot)$ stands for the logistic-sigmoid function. Finally, we
can solve the equations above by iteratively sampling over a Markov Chain,
using the well-known Contrastive Divergence (CD) algorithm [17].
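As an illustrative sketch (not the authors' implementation), a single CD-1 update built from Equations 3 and 4 can be written in NumPy as follows; the array shapes, the learning rate, and the function name `cd1_step` are assumptions for exposition:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=1e-3):
    """One illustrative CD-1 update for a Bernoulli RBM.

    v0: batch of visible vectors, shape (batch, m)
    W: weights (m, n); b: visible bias (m,); c: hidden bias (n,)
    """
    # Positive phase: P(h|v0) and a binary hidden sample (Eq. 4)
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step down to v and back up (Eq. 3 then Eq. 4)
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Gradient approximation: <v h>_data - <v h>_model
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c
```

In practice this step is iterated over mini-batches; longer Markov chains (CD-k) follow the same pattern with more Gibbs sweeps.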
The learning process in RBMs consists of an optimization problem whose goal is
to minimize the energy function given in Equation 2. In other words, such a
process ends up maximizing the marginal probability distribution of the
visible units, defined as follows:
$P(\textbf{v})=\frac{\displaystyle\sum_{\textbf{h}}e^{-E(\textbf{v},\textbf{h})}}{\displaystyle\sum_{\textbf{v},\textbf{h}}e^{-E(\textbf{v},\textbf{h})}},$
(5)
which is commonly handled in its natural logarithm version, i.e., more
precisely, we aim at minimizing the negative logarithm of the likelihood
function (Negative Log-Likelihood - NLL). Moreover, regarding the visible
units, such a procedure can be easily extended to the continuous domain, which
is useful to model any type of input. The changes occur on the energy
function, as follows:
$E(\textbf{v},\textbf{h})=\sum_{i=1}^{m}\dfrac{(v_{i}-b_{i})^{2}}{2\sigma^{2}_{i}}-\sum_{j=1}^{n}c_{j}h_{j}-\sum_{i=1}^{m}\sum_{j=1}^{n}\dfrac{v_{i}}{\sigma_{i}}h_{j}w_{ij}.$
(6)
Considering the derivatives, it is straightforward to show that the visible
conditional becomes:
$P(v_{i}|\textbf{h})\sim\mathcal{N}\left(\sum_{j=1}^{n}w_{ij}h_{j}+b_{i},\sigma^{2}_{i}\right),$
(7)
and, when the input is standardized to zero mean and unit standard deviation,
such a conditional becomes simple to sample from, while the whole training
procedure remains the same.
### II-D Deep Belief Networks
Deep Belief Networks (DBNs) [14] are generative graphical models composed of a
stack of RBMs, thus providing multiple layers of latent variables, such that
the hidden layer of the bottommost RBM is employed to feed the subsequent
input units successively until reaching the topmost layer. DBNs are trained
greedily, meaning that an RBM at a specific layer does not consider others
during its learning procedure. Thus, a DBN is composed of $L$ layers, where
$\textbf{W}^{l}$ is the weight matrix of an RBM at layer $l$. Additionally, we
can observe that the hidden units at layer $l$ become the input units of layer
$l+1$.
The aforementioned procedure stands for the generative pre-training.
Afterward, it is possible to attach fully-connected (FC) layers with softmax
outputs at the topmost hidden layer for a discriminative fine-tuning, which
can be used for classification tasks.
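The greedy layer-wise pre-training described above can be sketched as follows. This is a minimal NumPy illustration under stated assumptions (full-batch CD-1, hypothetical function names, illustrative hyper-parameters), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=3, lr=1e-3):
    """Train one Bernoulli RBM with (full-batch) CD-1; return (W, c)."""
    m = data.shape[1]
    W = rng.normal(0, 0.01, (m, n_hidden))
    b, c = np.zeros(m), np.zeros(n_hidden)
    for _ in range(epochs):
        ph0 = sigmoid(data @ W + c)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T + b)           # reconstruction (mean-field)
        ph1 = sigmoid(pv1 @ W + c)
        n = data.shape[0]
        W += lr * (data.T @ ph0 - pv1.T @ ph1) / n
        b += lr * (data - pv1).mean(axis=0)
        c += lr * (ph0 - ph1).mean(axis=0)
    return W, c

def pretrain_dbn(data, layer_sizes):
    """Stack RBMs greedily: hidden activations of layer l feed layer l+1."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, c = train_rbm(x, n_hidden)
        layers.append((W, c))
        x = sigmoid(x @ W + c)   # deterministic up-pass to the next RBM
    return layers
```

Each RBM is trained in isolation, matching the greedy scheme; the returned stack can then be topped with fully-connected layers for discriminative fine-tuning.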
## III Proposed Approaches
This work proposes to employ DBNs as non-linear functions $f(\mathcal{D}_{S})$
to learn from the source domain, $\mathcal{D}_{S}$, the information that can
be used to map the target domain, $\mathcal{D}_{T}$, i.e., to extract
information from videos and use them to classify high-level events.
Additionally, two approaches are shown to the respective task.
### III-A Aggregative Deep Belief Networks
The first alternative architecture is denoted as Aggregative (A- prefix in
models), which modifies the DBNs' first layer. Such a variation is designed
considering two main concepts: (i) to capture general spatio-temporal
information without additional techniques, such as optical flow algorithms,
and (ii) to reduce the overall computational burden.
The proposed A-DBN processes all frames simultaneously instead of processing
one frame at a time, enabling a complete parameter update at each iteration.
In other words, A-DBN aggregates all frames $\\{F_{1},F_{2},\dots,F_{n}\\}$
into a single frame denoted as $F_{r}$, which represents their summation. This
procedure carries spatial information (contour and edges) along their temporal
trajectories, highlighted as “spectrum” in the resulting frame. Figure 2
depicts the $F_{r}$ aggregation process, as well as its highlighted region.
Afterward, A-DBN first layer infers the posterior distribution given $F_{r}$,
as follows:
$P(h_{j}=1|\bm{F_{r}})=\phi\left(\sum_{i=1}^{m}w_{ij}F_{ri}+c_{j}\right).$ (8)
Figure 2: Frames aggregation for the Aggregative-based approach.
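A minimal sketch of the aggregation step, assuming frames are stacked into a NumPy array (the function name is illustrative):

```python
import numpy as np

def aggregate_frames(frames):
    """Collapse a clip {F_1, ..., F_n} into a single resultant frame F_r.

    frames: array of shape (n_frames, H, W). The summation superimposes
    contours and edges along their temporal trajectories, producing the
    "spectrum" highlighted in the resulting frame.
    """
    return np.asarray(frames, dtype=float).sum(axis=0)
```

The summed frame is then normalized like any other input before feeding the A-DBN's first layer (Equation 8).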
### III-B Gradient Deep Belief Networks
The second alternative architecture, denoted as Gradient (G- prefix in
models), also modifies the DBNs' first layer. Such a variation is designed
considering one main concept, i.e., to capture general motion information
between two frames.
The proposed G-DBN processes two consecutive frames instead of processing one
frame at a time. In other words, G-DBN defines a resultant frame, $F_{r}$, as
the direct subtraction as follows: given $F_{1}$ and $F_{2}$, $F_{r}$ stands
for $F_{2}-F_{1}$. This procedure carries motion cues from the spatial domain
along their trajectories. Figure 3 depicts the $F_{r}$ generation process, as
well as its highlighted region. Afterward, G-DBN first layer infers the
posterior distribution given $F_{r}$, as follows:
$P(h_{j}=1|\bm{F_{r}})=\phi\left(\sum_{i=1}^{m}w_{ij}F_{ri}+c_{j}\right).$ (9)
Figure 3: Frames generation for the Gradient-based approach.
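A minimal sketch of the frame-difference step under the same assumptions (NumPy arrays, hypothetical function name):

```python
import numpy as np

def gradient_frame(f1, f2):
    """Resultant frame F_r = F_2 - F_1 for two consecutive frames.

    Static regions cancel out, so F_r retains mostly inter-frame
    motion cues.
    """
    return np.asarray(f2, dtype=float) - np.asarray(f1, dtype=float)
```

Note the design contrast with the Aggregative variant: subtraction isolates motion between a frame pair, whereas summation accumulates trajectories over the whole clip.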
## IV Experiments
In this section, we describe the datasets and the experimental setup employed
to compare standard DBNs with the A-DBN and G-DBN approaches proposed in this
paper for the task of event recognition from the complex-actions domain.
### IV-A Dataset
We opted to use two well-known datasets, UCF-101 [26] and HMDB-51 [27], as
they pose a significant challenge and are well-established video action
recognition benchmarks. Both datasets comprise a significant amount of data
from real-world action videos collected from YouTube, classified among
$101$ and $51$ distinct classes, respectively. Moreover, such a diversity
becomes more expressive due to the substantial variations in camera motion,
object appearance and pose, scaling, viewpoint, cluttered background, and
illumination conditions.
The $13,320$ videos from UCF-101 are grouped into five macro-categories, which
are easily interpreted as high-level events. Also, this mapping expects videos
from different action classes within the same event to share standard and
essential features, which helps in recognizing common actions and
interactions. Following the authors' guideline, the high-level events used
are: (0) sports practice; (1) musical practice with an instrument; (2)
human-object interaction; (3) human body-motion; and (4) people interacting.
Random clips depicting such classes are presented in Figure 4, where each
class is represented by the color of the border: green for $0$, light blue for
$1$, blue for $2$, red for $3$, and purple for $4$.
Figure 4: Random clips from UCF-101 [26].
Similarly, the $6,766$ videos from HMDB-51 are grouped into five
macro-categories, easily interpreted as high-level events. Also, following the
authors' guideline, the high-level events are: (0) human facial expression;
(1) manipulation of objects in the face region; (2) body movement; (3)
interaction between people and object(s); and (4) people interacting with each
other, where the numbers in parentheses represent classes. Additionally, frames of clips
from the dataset are shown in Figure 5, where the color of the border
represents the event class: green for $0$, light blue for $1$, blue for $2$,
red for $3$, and purple for $4$.
Figure 5: Random clips from HMDB-51 [27].
Both datasets provide three partitions, each with separate data for training
and testing. The first partition was used in this study, as it appears to
contain the most difficult test samples, e.g., cluttered backgrounds or fewer
interactions and actions. The process of splitting/acquiring the frames
follows the work of Ng et al. [28], using $6$ frames per video clip uniformly
distributed over time. In their work, Ng et al. showed that $6$ frames per
video are enough to ensure good performance, achieving the same results as
$20$ frames, for example, while imposing a lower computational load on the
action classification task.
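The uniform temporal sampling described above can be sketched as follows (NumPy, hypothetical function name):

```python
import numpy as np

def sample_frame_indices(n_total, n_keep=6):
    """Indices of n_keep frames uniformly distributed over a clip.

    n_total: number of frames in the clip; returns integer indices that
    always include the first and last frames.
    """
    return np.linspace(0, n_total - 1, n_keep).round().astype(int)
```

For a 60-frame clip this yields six indices spread evenly from the first frame to the last.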
Regarding the pre-processing step, two transformations were employed before
the image conversion to grayscale. The first concerns cropping operations,
removing black regions that do not carry information, and resizing from the
original size ($240\times 320$) to $72\times 96$, to ease the processing by
energy-based models. The second stands for feature normalization using a
Gaussian distribution with zero mean and unit variance.
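A minimal sketch of this pre-processing for a single grayscale frame; the nearest-neighbour resize and the small epsilon guarding the division are assumptions for illustration, not details given in the paper:

```python
import numpy as np

def preprocess(frame, out_hw=(72, 96)):
    """Resize (nearest-neighbour) and z-score normalize one grayscale frame.

    frame: array (H, W), e.g. 240x320 after cropping black borders.
    """
    h, w = frame.shape
    rows = np.linspace(0, h - 1, out_hw[0]).round().astype(int)
    cols = np.linspace(0, w - 1, out_hw[1]).round().astype(int)
    small = frame[np.ix_(rows, cols)].astype(float)
    # Zero mean, unit variance (epsilon avoids division by zero)
    return (small - small.mean()) / (small.std() + 1e-8)
```

The standardized output matches the zero-mean, unit-variance input assumed by the Gaussian visible units of Equation 6.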
### IV-B Experimental Setup
Regarding the experimental setup hardware, we employed an Intel 2x Xeon(R)
E5-2620 @ 2.20GHz (40 cores), a GTX 1080 Ti, and 128 GB of RAM. For the
unsupervised pre-training process, we opted to use mini-batches of $128$
samples and $3$ epochs per-layer. Finally, Table I describes the employed
architectures and hyper-parameters.
TABLE I: Configuration of the models used in this work.
Model | Layers | Hidden Neurons | Momentum | Learning Rate
---|---|---|---|---
RBM | $1$ | $[2,000]$ | $[0.5]$ | $[1\cdot 10^{-3}]$
A-RBM | $1$ | $[2,000]$ | $[0.5]$ | $[1\cdot 10^{-3}]$
G-RBM | $1$ | $[2,000]$ | $[0.5]$ | $[1\cdot 10^{-3}]$
$\text{DBN}_{\alpha}$ | $2$ | $[2,000-2,000]$ | $[0.5;0.5]$ | $[1\cdot 10^{-3};5\cdot 10^{-4}]$
$\text{DBN}_{\beta}$ | $3$ | $[2,000-2,000-2,000]$ | $[0.5;0.5;0.5]$ | $[1\cdot 10^{-3};5\cdot 10^{-4};5\cdot 10^{-4}]$
$\text{DBN}_{\iota}$ | $2$ | $[4,000-4,000]$ | $[0.5;0.5]$ | $[5\cdot 10^{-4};5\cdot 10^{-4}]$
$\text{DBN}_{\zeta}$ | $3$ | $[4,000-4,000-4,000]$ | $[0.5;0.5;0.5]$ | $[5\cdot 10^{-4};5\cdot 10^{-4};5\cdot 10^{-4}]$
$\text{A-DBN}_{\alpha}$ | $2$ | $[2,000-2,000]$ | $[0.5;0.5]$ | $[1\cdot 10^{-3};5\cdot 10^{-4}]$
$\text{A-DBN}_{\beta}$ | $3$ | $[2,000-2,000-2,000]$ | $[0.5;0.5;0.5]$ | $[1\cdot 10^{-3};5\cdot 10^{-4};5\cdot 10^{-4}]$
$\text{A-DBN}_{\iota}$ | $2$ | $[4,000-4,000]$ | $[0.5;0.5]$ | $[5\cdot 10^{-4};5\cdot 10^{-4}]$
$\text{A-DBN}_{\zeta}$ | $3$ | $[4,000-4,000-4,000]$ | $[0.5;0.5;0.5]$ | $[5\cdot 10^{-4};5\cdot 10^{-4};5\cdot 10^{-4}]$
$\text{G-DBN}_{\alpha}$ | $2$ | $[2,000-2,000]$ | $[0.5;0.5]$ | $[1\cdot 10^{-3};5\cdot 10^{-4}]$
$\text{G-DBN}_{\beta}$ | $3$ | $[2,000-2,000-2,000]$ | $[0.5;0.5;0.5]$ | $[1\cdot 10^{-3};5\cdot 10^{-4};5\cdot 10^{-4}]$
$\text{G-DBN}_{\iota}$ | $2$ | $[4,000-4,000]$ | $[0.5;0.5]$ | $[5\cdot 10^{-4};5\cdot 10^{-4}]$
$\text{G-DBN}_{\zeta}$ | $3$ | $[4,000-4,000-4,000]$ | $[0.5;0.5;0.5]$ | $[5\cdot 10^{-4};5\cdot 10^{-4};5\cdot 10^{-4}]$
Each model is connected to two additional fully-connected layers that are
fine-tuned using the well-known Adam optimizer [29] with a learning rate
equal to $10^{-3}$ and the same number of epochs and mini-batch size. Such FC
layers have two configurations since they depend on the number of hidden
neurons from the last RBM/DBN layer, i.e., $2,000-1,000-5$ and
$4,000-2,000-5$. It is important to highlight that the RBMs were employed with
$2,000$ hidden neurons only, as this work primarily focuses on using the
hierarchical information learned by DBNs.
Furthermore, we opted to use the approach commonly employed in transfer
learning, i.e., to freeze the connections of the first hidden layer and make a
gentle adjustment of the subsequent hidden layers (learning rate equal to
$10^{-6}$). The cross-entropy loss was used when adjusting model weights
during the fine-tuning process, while the final measure was the accuracy on
the testing set. Finally, to mitigate any stochastic effects, each model was
fully trained and fine-tuned for $6$ repetitions.
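The frozen-first-layer fine-tuning can be sketched as a plain SGD step with per-layer learning rates; this NumPy fragment is only illustrative and abstracts away the Adam optimizer and cross-entropy loss actually used:

```python
import numpy as np

def finetune_update(weights, grads, lrs):
    """One gradient step with per-layer learning rates.

    Setting lrs[0] = 0.0 freezes the first hidden layer, while later
    layers receive a gentle adjustment (e.g., 1e-6), mirroring the
    transfer-learning setup described above.
    """
    return [W - lr * g for W, g, lr in zip(weights, grads, lrs)]
```

A frozen layer thus keeps the features learned on the source (action) domain intact while the upper layers adapt to the target (event) task.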
## V Experimental Results
This section presents the experimental results concerning the DBNs and the
proposed approaches, i.e., the A-DBN and the G-DBN, applied to the task of
event recognition. All models follow the work of Ng et al. [28], using $6$
frames per video clip uniformly distributed over time.
### V-A Model Evaluation for Event Recognition
Regarding the main task, i.e., learning from actions to classify high-level
events, Tables II and III show the predictive performance for all models and
architectures. The highlighted result stands for the best mean accuracy.
Besides, it also presents the average running time for each model, averaged
over the six repetitions to analyze the models’ computation impact and
efficiency.
TABLE II: Mean accuracies (%) and running times (minutes) over the UCF-101 test set (fold 1).
Architecture | Accuracy | Time
---|---|---
RBM | $38.71\pm 1.04$ | $315.00\pm 5.00$
A-RBM | $42.48\pm 0.94$ | $270.00\pm 5.00$
G-RBM | $44.04\pm 1.82$ | $314.00\pm 5.00$
$\text{DBN}_{\alpha}$ | $37.72\pm 4.67$ | $765.00\pm 5.00$
$\text{A-DBN}_{\alpha}$ | $44.66\pm 1.28$ | $540.00\pm 5.00$
$\text{G-DBN}_{\alpha}$ | $40.16\pm 11.63$ | $764.00\pm 5.00$
$\text{DBN}_{\beta}$ | $40.55\pm 3.54$ | $1,215.00\pm 5.00$
$\text{A-DBN}_{\beta}$ | $44.80\pm 2.02$ | $810.00\pm 5.00$
$\text{G-DBN}_{\beta}$ | $44.84\pm 0.08$ | $1,211.00\pm 5.00$
$\text{DBN}_{\iota}$ | $41.92\pm 2.65$ | $775.00\pm 6.00$
$\text{A-DBN}_{\iota}$ | $\bm{45.01\pm 1.39}$ | $550.00\pm 6.00$
$\text{G-DBN}_{\iota}$ | $44.84\pm 0.04$ | $773.00\pm 6.00$
$\text{DBN}_{\zeta}$ | $42.33\pm 4.51$ | $1,225.00\pm 6.00$
$\text{A-DBN}_{\zeta}$ | $44.87\pm 2.81$ | $820.00\pm 6.00$
$\text{G-DBN}_{\zeta}$ | $44.86\pm 0.14$ | $1,220.00\pm 6.00$
From Table II, we can notice prominent results, mainly for the proposed
Aggregative version of DBNs. Starting from the base model, A-RBM achieved a
better accuracy than RBM, i.e., $42.48\%$ against $38.71\%$, representing a
meaningful difference ($3.77$ points of mean percentual accuracy), while the
G-RBM model achieved $44.04\%$, also outperforming the Aggregative approach.
In addition, A-RBM has a significantly lower computational burden, with
approximately $14\%$ less running time. To clarify, hereafter, the percentual
difference in accuracy between models stands for the absolute mean value of
the proposed model (A- or G-) minus that of its standard version, while for
the running time it is the mean time of the proposed approach divided by that
of its standard model.
Regarding the second architecture, i.e., the $\alpha$ models, the Aggregative
version surpasses the standard DBN and the G-DBN in mean accuracy by almost
$7\%$ and $4.5\%$, respectively, representing a meaningful improvement.
However, the standard model and the G-DBN do not surpass their single-layer
versions (RBM and G-RBM) in mean accuracy. Moreover, the running time for
A-DBNα was $30\%$ smaller than for DBNα, leading to a lighter training burden.
Concerning the $\beta$ models, a similar behavior was observed, i.e., the
Aggregative version surpassed the standard DBN by approximately $4\%$ in mean
accuracy, with almost $33\%$ less running time. Also, the G-DBN model achieved
the best mean accuracy, $44.84\%$, with a small standard deviation. However,
such results show that adding more hidden layers does not lead to an
impressive performance improvement for A-DBN, since its mean accuracy was
close to that of A-DBNα, while for G-DBNβ the performance increased. Moreover,
even with the previous observation, A-DBNβ still yields a better running time
than its baselines.
Regarding the fourth architecture ($\iota$ models), the same behavior was
observed, highlighting that A-DBNι achieved a remarkable mean accuracy of
$45.01\%$, the highest average value over the baselines. Moreover, the running
time for A-DBNι was $30\%$ shorter than that of its standard version. Here, the
Aggregative models showed that more hidden units might benefit the overall
performance for the event classification task with transfer learning.
Finally, the $\zeta$ models showed almost the same results as the $\iota$
models, mainly for the proposed approaches, A-DBN and G-DBN, which achieved a
mean accuracy of $44.87\%$ and $44.86\%$, respectively. Such results indicate
that more hidden neurons improve the models’ performance. However, the larger
versions may demand more epochs of pre-training and/or more data. Also, the
running time of A-DBN is approximately $33\%$ shorter than that of its
standard and Gradient counterparts.
Nonetheless, it is essential to notice that the A-DBN models have no
difficulty in surpassing the A-RBM mean accuracy; however, G-DBNα does not
surpass its simpler version in mean accuracy due to one specific run that
pushed down the performance (note the standard deviation). Overall, the performance
improvement observed can be directly linked to the higher abstraction achieved
by hidden layers. Such approaches can improve the DBN lower bound and provide
a further improvement in discriminative fine-tuning. Besides, they were also
pre-trained with a relatively small number of epochs, which can induce a less
efficient overall lower bound optimization. However, the results showed that
more hidden units in hidden layers improve the mean accuracy rate, as the
A-DBNι and A-DBNζ models have shown.
TABLE III: Mean accuracies (%) and running times (minutes) over the HMDB-51 test set (fold 1).
Architecture | Accuracy | Time
---|---|---
RBM | $34.60\pm 3.90$ | $45.00\pm 5.00$
A-RBM | $35.19\pm 4.24$ | $30.00\pm 5.00$
G-RBM | $38.05\pm 0.36$ | $44.00\pm 5.00$
$\text{DBN}_{\alpha}$ | $34.49\pm 3.98$ | $144.00\pm 5.00$
$\text{A-DBN}_{\alpha}$ | $37.68\pm 4.19$ | $132.00\pm 5.00$
$\text{G-DBN}_{\alpha}$ | $38.40\pm 0.01$ | $143.00\pm 5.00$
$\text{DBN}_{\beta}$ | $33.83\pm 4.06$ | $216.00\pm 5.00$
$\text{A-DBN}_{\beta}$ | $38.70\pm 4.33$ | $195.00\pm 5.00$
$\text{G-DBN}_{\beta}$ | $38.40\pm 0.01$ | $214.00\pm 5.00$
$\text{DBN}_{\iota}$ | $34.41\pm 4.03$ | $150.00\pm 6.00$
$\text{A-DBN}_{\iota}$ | $38.23\pm 4.25$ | $138.00\pm 6.00$
$\text{G-DBN}_{\iota}$ | $38.40\pm 0.01$ | $148.00\pm 6.00$
$\text{DBN}_{\zeta}$ | $34.53\pm 4.04$ | $225.00\pm 6.00$
$\text{A-DBN}_{\zeta}$ | $\bm{38.86\pm 4.26}$ | $207.00\pm 6.00$
$\text{G-DBN}_{\zeta}$ | $37.41\pm 2.43$ | $223.00\pm 6.00$
From Table III, one can notice interesting results, mainly for the proposed
approaches. The base model A-RBM achieved a better accuracy rate than RBM,
i.e., $35.19\%$ against $34.60\%$, a difference of $0.59$ points of mean
percentual accuracy, while the G-RBM model achieved $38.05\%$, beating the
Aggregative approach as well. Here, it is interesting to note that the time
differences between models were not as pronounced, which is explained by the
low data volume.
Regarding the second architecture, i.e., the $\alpha$ models, the Aggregative
version surpasses the standard DBN in mean accuracy by almost $3$ points, while
G-DBN exceeds its standard version by approximately $4$ points, representing a
meaningful improvement. Moreover, it is important to notice that the running
time of A-DBNα was $8\%$ smaller than that of DBNα, contributing to a lighter
training burden.
Concerning the $\beta$ models, a similar behavior was observed: the Aggregative
version surpassed the standard DBN by approximately $5$ points in mean accuracy,
with almost $10\%$ less running time. Also, the G-DBN model exceeded its
standard version by approximately $5$ points of mean accuracy, with a lower
standard deviation. However, these results show that adding more hidden layers
does not bring an impressive performance improvement for G-DBN, since its mean
accuracy stayed close to that of G-DBNα, while for A-DBNβ the performance did
increase. Moreover, even with the previous observation, A-DBNβ still achieves a
better running time than its baselines.
Regarding the $\iota$ models, the baseline model slightly increased its
performance; however, G-DBN achieved the highest mean accuracy, $38.40\%$,
surpassing the DBN and A-DBN models. The Aggregative version was also better
than DBN, reaching $38.23\%$ with $8\%$ less running time. Nevertheless, such
results show that adding more hidden layers while keeping the hidden neurons at
$2{,}000$ units does not yield an impressive performance improvement for the
proposed approaches, since the mean accuracies remained close to those of
A-DBNβ and G-DBNβ.
Finally, the $\zeta$ models showed an interesting performance improvement for
the Aggregative approach, which achieved a remarkable mean accuracy of
$38.86\%$. On the other hand, G-DBN suffered a performance decrease, achieving
$37.41\%$ of mean accuracy. This points out that more hidden neurons improve
the Aggregative's performance while keeping the lowest mean training time
($207$ minutes).
In general, one can observe two interesting behaviors for the proposed
approaches. The Aggregative-based models were able to improve the models'
performance, a result of aggregating all frames, which carries general motion
information. The Gradient-based models, on the other hand, did not improve as
much as the Aggregative-based ones, which can be explained by the fact that
pairwise frame differences on the employed datasets may not carry as much
information as the overall sum of the six frames.
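The two frame-processing schemes can be sketched in a few lines of NumPy; the array shapes and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def aggregative_input(frames):
    """Aggregative scheme: collapse a short clip into a single image by
    summing all frames, so the input carries the overall motion energy."""
    # frames: (n_frames, height, width) grayscale clip
    return frames.sum(axis=0)

def gradient_input(frames):
    """Gradient scheme: take frame differences two by two, so each input
    carries the local motion between consecutive frames."""
    return np.diff(frames, axis=0)  # (n_frames - 1, height, width)

clip = np.random.rand(6, 32, 32)  # six 32x32 frames, as in the text
agg = aggregative_input(clip)     # one (32, 32) image per clip
grad = gradient_input(clip)       # five (32, 32) difference images
```

Either output would then be flattened and fed to the visible layer of the RBM/DBN.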
## VI Conclusions and Future Works
In this paper, we addressed a transfer learning approach on two well-known
video datasets, learning from the action-based to the event-based domain
through energy-based models such as RBMs and DBNs. Furthermore, we proposed
Aggregative-based and Gradient-based approaches that modify how the frames are
processed, simplifying the models' complexity, adding robustness, saving
processing time, and improving generalization. Experimental results show that
the proposed approach can reduce the computational time by as much as $33\%$
for the A-models.
The results were promising, since most A-DBN architectures achieved a feasible
mean accuracy rate, with the A-DBNι and A-DBNζ models standing out. Also, the
Aggregative models showed a meaningful reduction in running time during the
unsupervised pre-training phase. The Gradient models showed a stable behavior,
varying very little across the different architectures. Therefore, these
experimental results support our hypothesis that it is possible to transfer
the knowledge from actions to events with the employed energy-based models,
without complex inputs such as optical flow or convolutions. Finally, one can
highlight that increasing the number of hidden neurons improved the overall
performance, pointing out the models' capacity to extract more information.
Regarding future work, we plan to investigate the effect of combining
convolution operators with energy-based models, such as the Convolutional
Restricted Boltzmann Machine (CRBM). Additionally, we aim to employ more
complex models, such as Deep Boltzmann Machines (DBMs), trained for a larger
number of epochs. Finally, a future step is to analyze how the proposed
approach can be applied to data augmentation tasks in the context of event
classification in videos.
## Acknowledgments
This research was supported by São Paulo Research Foundation - FAPESP (grants
#2013/07375-0, #2014/12236-1, #2019/07665-4, #2019/07825-1 and #2019/02205-5),
FAPESP-Microsoft Research Virtual Institute (grant #2017/25908-6), and
Brazilian National Council for Scientific and Technological Development - CNPq
(grants #314868/2020-8, #307066/2017-7 and #427968/2018-6).
# Enhancement of magnon-photon-phonon entanglement in cavity magnomechanics
with a coherent feedback loop
Mohamed Amazioug LPTHE-Department of Physics, Faculty of sciences, Ibn Zohr
University, Agadir, Morocco Berihu Teklu Department of Applied Mathematics
and Sciences, Khalifa University, Abu Dhabi 127788, UAE Muhammad Asjad
Department of Applied Mathematics and Sciences, Khalifa University, Abu Dhabi
127788, UAE
###### Abstract
We propose a scheme to improve magnon-photon-phonon entanglement in cavity
magnomechanics using a coherent feedback loop. In addition, we show that both
the steady state and the dynamical state of the system are genuine tripartite
entangled states. We use the logarithmic negativity as a witness of quantum
correlations to quantify the entanglement of all bipartite subsystems, and the
nonzero minimum residual contangle to quantify genuine tripartite entanglement,
in both the steady-state and dynamical regimes. We consider experimentally
feasible parameters to realize the tripartite entanglement. We show that the
entanglement can be significantly improved with coherent feedback by suitably
tuning the reflective parameter of the beam splitter, and that it is robust
against thermal effects. Our proposed scheme for improving the entanglement may
be of interest for applications in quantum information.
## I Introduction
Cavity optomechanics has attracted significant attention for studying and
exploiting the interaction between optical and mechanical degrees of freedom.
In optomechanical systems, continuous-variable (CV) Gaussian states describe
the information encoded in the mechanical and optical modes [1, 2, 3, 4].
In recent years, cavity optomechanical systems have played an essential role in
studying many interesting phenomena, such as quantum entangled states [5, 6, 7,
8], cooling of mechanical modes to their quantum ground states [9, 10, 11, 12,
13], photon blockade [14], the generation of mechanical quantum superposition
states [15, 16], enhanced precision measurements [17, 18, 19],
gravitational-wave detectors [20, 21, 22], and optomechanically induced
transparency [23, 24, 25, 26]. Quantum state transfer between separate parties
is a key tool for quantum information processing protocols and quantum
communications [27, 28, 29]. Recently, cavity magnomechanics has emerged as a
robust platform in which a ferrimagnetic crystal (e.g., a yttrium iron garnet
(YIG) sphere) is coupled to a microwave cavity [34, 35]. The Kittel mode [38]
in the YIG sphere can realize strong coupling with the microwave photons of a
high-quality cavity, leading to cavity polaritons [39, 40, 41, 42, 61] and
vacuum Rabi splitting. In cavity magnomechanics, a magnon mode (spin wave) is
coupled to a vibrational deformation mode of a ferromagnet (or ferrimagnet) by
the magnetostrictive force, and to a microwave cavity mode by the magnetic
dipole interaction. The magnetostrictive interaction is a dispersive
interaction similar to radiation pressure for a large ferromagnet, where the
frequency of the mechanical mode is much lower than the magnon frequency [36,
37]. The first realization of the magnon-photon-phonon interaction was reported
in Ref. [36].
Entanglement is a significant resource in quantum information processing. The
concept was introduced by E. Schrödinger in his reply to the EPR paradox
proposed by A. Einstein et al. [44, 45]. Entanglement plays a crucial role in
various quantum information applications, such as quantum teleportation [46],
superdense coding [47], telecloning [48], and quantum cryptography [49]. The
logarithmic negativity [50, 51] measures the amount of bipartite entanglement
in systems characterized by continuous-variable (CV) Gaussian states. In this
work, we consider a coherent feedback loop to improve the entanglement in an
optomagnomechanical system. This technique has been studied theoretically [57,
55, 56] and recently realized experimentally in optomechanical systems [30, 31,
32, 33].
In this paper, we theoretically investigate the improvement of the entanglement
of three bipartite subsystems and of the tripartite Gaussian state in an
optomagnomechanical system composed of a Fabry-Pérot cavity containing a YIG
sphere, using coherent feedback as depicted in Fig. 1. A microwave field (not
shown) is applied to enhance the magnon-phonon coupling. At the YIG sphere
site, the magnetic field of the cavity mode (along the x axis), the drive
magnetic field (along the y axis), and the bias magnetic field (along the z
axis) are mutually perpendicular. We employ the logarithmic negativity [50, 51]
to quantify the quantum correlations of the three bipartite modes and the
genuine tripartite entanglement, both in the stationary state and in the
dynamical state. We discuss the evolution of the entanglement of each bipartite
Gaussian state and of the genuine tripartite entangled state under the effect
of temperature, and we demonstrate the role of the feedback technique in making
the entanglement robust under variations of the physical parameters
characterizing the optomagnomechanical system. An earlier study demonstrated
that genuine tripartite magnon-phonon-photon entanglement exists in the system
if the magnon mode is resonant with the anti-Stokes (blue) sideband and the
cavity mode with the Stokes (red) sideband [52]. Magnon squeezing enables
enhanced ground-state cooling in cavity magnomechanics [53], and entanglement
can be enhanced in cavity magnomechanics by an optical parametric amplifier
[54]. In this work, we consider the effects of a coherent feedback loop on the
tripartite entanglement.
The paper is organized as follows. In Sec. II, we provide the Hamiltonian and
the corresponding nonlinear quantum Langevin equations of the
optomagnomechanical system. In Sec. III, we linearize the quantum Langevin
equations and derive the covariance matrix of the tripartite system. In Sec.
IV, we employ the logarithmic negativity to quantify the entanglement of the
three bipartite modes and of the tripartite state. The results and discussion
are given in Sec. V. A conclusion closes the paper.
Figure 1: Schematic diagram of a single-mode cavity with feedback loop and a
YIG sphere. The magnons are embodied by a collective motion of a large number
of spins in a macroscopic ferrimagnet, and the magnon mode is directly driven
by a microwave source (not shown) to enhance the magnomechanical coupling. The
cavity is also driven by an electromagnetic field with amplitude $\Omega$. The
cavity photons and magnons are coupled via magnetic dipole interaction, and
the magnons and phonons are coupled via magnetostrictive (radiation pressure-
like) interaction.
## II The Model
The system under study consists of a cavity magnomechanical setup driven by a
single coherent laser source and a microwave field. A YIG sphere (a
250-$\mu$m-diameter sphere is used in Ref. [36]) is placed inside the cavity,
and the coherent feedback loop is implemented as illustrated in Fig. 1. The
magnetic dipole interaction mediates the coupling between magnons and cavity
photons, while the magnons are coupled to phonons through the magnetostrictive
interaction. The varying magnetization induced by the magnon excitation inside
the YIG sphere deforms its geometrical structure, which forms the vibrational
modes (phonons) of the sphere, and vice versa [58]. We consider the size of the
sphere to be much smaller than the microwave wavelength, so that the influence
of radiation pressure is negligible. The Hamiltonian of the system is given by
$\mathcal{H}=\mathcal{H}_{free}+\mathcal{H}_{md}+\mathcal{H}_{mc}+\mathcal{H}_{dm}+\mathcal{H}_{dc}.$
(1)
The first term of $\mathcal{H}$ describes the free cavity, magnon, and
mechanical modes and reads
$\mathcal{H}_{free}=\hbar\omega_{c}c^{{\dagger}}c+\hbar\omega_{m}m^{{\dagger}}m+\frac{\hbar\omega_{d}}{2}(q^{2}+p^{2}),$
(2)
where $c$ ($c^{{\dagger}}$) and $m$ ($m^{{\dagger}}$)
($[O,O^{{\dagger}}]\,{=}\,1$, $O\,{=}\,c,m$) are the annihilation (creation)
operator of the cavity and magnon modes, respectively, $q$ and $p$
($[q,p]\,{=}\,i$) are the dimensionless position and momentum quadratures of
the mechanical mode, and $\omega_{c}$, $\omega_{m}$, and $\omega_{d}$ are
respectively the resonance frequency of the cavity, magnon and mechanical
modes. The magnon frequency is determined by the external bias magnetic field
$H$ and the gyromagnetic ratio $\gamma$, i.e., $\omega_{m}=\gamma H$. The
second term of Eq. (1) is the Hamiltonian describing the interaction between
the magnon and mechanical modes. It is written as
$\mathcal{H}_{md}=\hbar g_{md}m^{{\dagger}}mq.$ (3)
The single-magnon magnomechanical coupling rate $g_{md}$ is small, but the
magnomechanical interaction can be improved via driving the magnon mode with a
strong microwave field (directly driving the YIG sphere with a microwave
source [59, 60]). The third term in Eq. (1) gives the interaction between the
optical field and the magnon. It reads as
$\mathcal{H}_{mc}=\hbar g_{mc}(c+c^{{\dagger}})(m+m^{{\dagger}}).$ (4)
The coupling rate $g_{mc}$ between the magnon and microwave can be larger than
the dissipation rates $\kappa_{c}$ and $\kappa_{m}$ of the cavity and magnon
modes respectively, entering into the strong coupling regime,
$g_{mc}>\kappa_{c},\kappa_{m}$ [39, 40, 41, 42, 61]. In the frame rotating at
the drive frequency $\omega_{0}$ and applying the rotating-wave approximation
(RWA), $g_{mc}(c+c^{{\dagger}})(m+m^{{\dagger}})\to
g_{mc}(cm^{{\dagger}}+c^{{\dagger}}m)$ (valid when $\omega_{c},\omega_{m}\gg
g_{mc},\kappa_{c},\kappa_{m}$, which is easily satisfied [36]). The fourth term
in the Hamiltonian (1) represents the direct driving of the magnon mode by a
microwave source (not shown), which enhances the magnomechanical coupling. It
is given by
$\mathcal{H}_{dm}=i\hbar\mathcal{E}(m^{{\dagger}}e^{-i\omega_{0}t}-me^{i\omega_{0}t}),$
(5)
where $\mathcal{E}=\frac{\sqrt{5}}{4}\gamma\\!\sqrt{N}B_{0}$ is the Rabi
frequency [52], which describes the coupling strength between the drive
magnetic field (with amplitude $B_{0}$ and frequency $\omega_{0}$) and the
magnon mode, where $\gamma/2\pi=28$ GHz/T, and the total number of spins is
$N=\rho V$, with $V$ the volume of the sphere and $\rho=4.22\times 10^{27}$
m-3 the spin density of YIG. The Rabi frequency $\mathcal{E}$ is derived under
the hypothesis of low-lying excitations, $\langle m^{{\dagger}}m\rangle\ll
2Ns$, where $s=\frac{5}{2}$ is the spin number of the ground-state Fe3+ ion in
YIG. The last term in the Hamiltonian (1) characterizes the driving of the
cavity by the field transmitted through the beam splitter. It is given by
$\mathcal{H}_{dc}=\hbar\Omega\mu(c^{\dagger}e^{i\phi}-ce^{-i\phi}),$ (6)
where $\phi$ is the phase of the electromagnetic field, and the quantities
$\mu$ and $\tau$ denote the real amplitude transmission and reflection
parameters of the beam splitter, which satisfy $\mu^{2}+\tau^{2}=1$ ($\mu$ and
$\tau$ are real and positive) [57]. The quantum Langevin equations (QLEs)
characterizing the system are given by
$\displaystyle\dot{c}=-(i\Delta_{fb}+\kappa_{fb})c-ig_{mc}m-i\mu\Omega e^{i\phi}+\sqrt{2\kappa_{c}}c_{fb}^{\rm in},$
$\displaystyle\dot{m}=-(i\Delta_{m}+\kappa_{m})m-ig_{mc}c-ig_{md}mq+\mathcal{E}+\sqrt{2\kappa_{m}}m^{\rm in},$
$\displaystyle\dot{q}=\omega_{d}p,$
$\displaystyle\dot{p}=-\omega_{d}q-\gamma_{d}p-g_{md}m^{{\dagger}}m+\chi,$ (7)
where $\kappa_{fb}=\kappa_{c}(1-2\tau\cos{\theta})$ and
$\Delta_{fb}=\Delta_{c}+2\kappa_{c}\tau\sin{\theta}$ (with
$\Delta_{c}=\omega_{c}-\omega_{0}$) are, respectively, the effective cavity
decay rate and detuning, with $\theta$ the phase shift generated by the
reflectivity of the output field on the mirrors. The operator
$C^{in}_{fb}=\tau{e}^{{i}\theta}c^{out}+\mu c^{in}$ describes the input
optical field induced by the coherent feedback technique. Besides, the output
field $c^{out}$ and the cavity field $c$ are related via the standard
input-output relation $c^{out}=\sqrt{2\kappa_{c}}c-\mu c^{in}$ [62] (i.e.,
$C^{in}_{fb}=\tau\sqrt{2\kappa_{c}}{e}^{{i}\theta}c+c^{in}_{fb}$). In
addition, the nonzero coherent-feedback correlation properties of the input
noise operators $c^{in}_{fb}$ and $c^{in{\dagger}}_{fb}$ for the cavity (where
$c^{in}_{fb}=\mu(1-\tau{e}^{{i}\theta})c^{in}$) [63] are given by
$\displaystyle\langle c_{fb}^{\rm in}(t)\,c_{fb}^{\rm
in{\dagger}}(t^{\prime})\rangle$ $\displaystyle=$
$\displaystyle\mu^{2}(1-\tau{e}^{{i}\theta})(1-\tau{e}^{-{i}\theta})[n_{c}(\omega_{c}){+}1]\,\delta(t{-}t^{\prime}),$
$\displaystyle\langle c_{fb}^{\rm in{\dagger}}(t)\,c_{fb}^{\rm
in}(t^{\prime})\rangle$ $\displaystyle=$
$\displaystyle\mu^{2}(1-\tau{e}^{{i}\theta})(1-\tau{e}^{-{i}\theta})n_{c}(\omega_{c})\,\delta(t{-}t^{\prime})$
(8)
Here $\delta_{m}=\omega_{m}-\omega_{0}$, $\kappa_{m}$ is the dissipation rate
of the magnon mode, $\gamma_{d}$ is the mechanical damping rate, and $m^{\rm
in}$ and $\chi$ are the input noise operators for the magnon and mechanical
modes, respectively, which have zero mean and are characterized by the
following correlation functions [63]
$\displaystyle\langle m^{\rm in}(t)\,m^{\rm in{\dagger}}(t^{\prime})\rangle$
$\displaystyle=$ $\displaystyle[n_{m}(\omega_{m})+1]\,\delta(t{-}t^{\prime})$
$\displaystyle\langle m^{\rm in{\dagger}}(t)\,m^{\rm in}(t^{\prime})\rangle$
$\displaystyle=$ $\displaystyle n_{m}(\omega_{m})\,\delta(t{-}t^{\prime})$ (9)
and
$\langle\chi(t)\chi(t^{\prime})\,{+}\,\chi(t^{\prime})\chi(t)\rangle/2\,\,{\simeq}\,\,\gamma_{d}[2n_{d}(\omega_{d}){+}1]\delta(t{-}t^{\prime})$
(10)
A large mechanical quality factor ${\cal
Q}=\omega_{d}/\gamma_{d}\,\,{\gg}\,1$ justifies the Markovian approximation
[64]. Here $n_{j}(\omega_{j}){=}\big{[}{\rm
exp}\big{(}\frac{\hbar\omega_{j}}{k_{B}T}\big{)}{-}1\big{]}^{-1}$
$(j{=}c,m,d)$ are the equilibrium mean thermal photon, magnon, and phonon
numbers, respectively.
## III Linearization of quantum Langevin equations
The quantum Langevin equations (7) can be linearized using the scheme
$O=O_{s}+\delta O$ ($O\,{=}\,c,m,q,p$), i.e., decomposing each mode operator as
the sum of its steady-state average and a quantum fluctuation operator, and
neglecting second-order fluctuation terms. This is justified when the magnon
mode is strongly driven (large amplitude $|m_{s}|\gg 1$ at the steady state)
and the cavity field also has a large amplitude, $|c_{s}|\gg 1$, via the
cavity-magnon beam-splitter interaction. The steady-state values are
$m_{s}=\frac{-ig_{mc}c_{s}+\mathcal{E}}{i\Delta_{m}+\kappa_{m}}\quad;\quad
c_{s}=-\frac{ig_{mc}m_{s}+i\mu\Omega{e}^{i\phi}}{i\Delta_{fb}+\kappa_{fb}}$
(11)
which take the simpler form
$m_{s}\simeq\frac{i\mathcal{E}\Delta_{fb}-i\mu\Omega{e}^{i\phi}}{g_{mc}^{2}-\Delta_{m}\Delta_{fb}}\quad;\quad\text{when}\quad|\Delta_{m}|,|\Delta_{fb}|\gg\kappa_{c},\kappa_{m}$
(12)
where $\Delta_{m}=\delta_{m}+g_{md}q_{s}$ is the effective magnon-drive
detuning, which includes the frequency shift due to the magnomechanical
interaction, $G_{md}=i\sqrt{2}g_{md}m_{s}$ is the effective magnomechanical
coupling rate, and $q_{s}=-\frac{g_{md}}{\omega_{d}}|m_{s}|^{2}$.
The linearized QLEs describing the quadrature fluctuations $(\delta Q,\delta
P,\delta x,\delta y,\delta q,\delta p)$, with $\delta Q=(\delta c+\delta
c^{{\dagger}})/\sqrt{2}$, $\delta P=i(\delta c^{{\dagger}}-\delta
c)/\sqrt{2}$, $\delta x=(\delta m+\delta m^{{\dagger}})/\sqrt{2}$, and $\delta
y=i(\delta m^{{\dagger}}-\delta m)/\sqrt{2}$, are given by
$\dot{\Lambda}(t)=\mathcal{F}\Lambda(t)+\nu(t),$ (13)
where $\Lambda(t)=\big{[}\delta Q(t),\delta P(t),\delta x(t),\delta
y(t),\delta q(t),\delta p(t)\big{]}^{T}$,
$\nu(t)=\big{[}\\!\sqrt{2\kappa_{c}}Q^{\rm in}(t),\sqrt{2\kappa_{c}}P^{\rm
in}(t),\sqrt{2\kappa_{m}}x^{\rm in}(t),\sqrt{2\kappa_{m}}y^{\rm
in}(t),0,\chi(t)\big{]}^{T}$ is the vector of input noises, and the drift
matrix $\mathcal{F}$ can be written as
$\mathcal{F}=\begin{pmatrix}-\kappa_{fb}&\Delta_{fb}&0&g_{mc}&0&0\\\
-\Delta_{fb}&-\kappa_{fb}&-g_{mc}&0&0&0\\\
0&g_{mc}&-\kappa_{m}&\Delta_{m}&-G_{md}&0\\\
-g_{mc}&0&-\Delta_{m}&-\kappa_{m}&0&0\\\ 0&0&0&0&0&\omega_{d}\\\
0&0&0&G_{md}&-\omega_{d}&-\gamma_{d}\\\ \end{pmatrix},$ (14)
The drift matrix in Eq. (14) is obtained under the condition
$|\Delta_{m}|,|\Delta_{fb}|\gg\kappa_{c},\kappa_{m}$. In fact, we will show
later that
$|\Delta_{m}|,|\Delta_{fb}|\simeq\omega_{d}\gg\kappa_{fb},\kappa_{m}$ are
optimal for the presence of all bipartite entanglements in the system. Note
that Eq. (11) is intrinsically nonlinear, since $\Delta_{m}$ contains
$|m_{s}|^{2}$. However, for a given value of $\Delta_{m}$ (one can always
alter $\Delta_{m}$ by adjusting the bias magnetic field), $m_{s}$, and thus
$G_{md}$, can be obtained straightforwardly.
The quantum fluctuations of the system form a continuous-variable (CV)
three-mode Gaussian state, which is completely characterized by a $6\times 6$
covariance matrix (CM) $\Gamma$, with
$\Gamma_{ij}=\frac{1}{2}\langle\Lambda_{i}(t)\Lambda_{j}(t^{\prime})+\Lambda_{j}(t^{\prime})\Lambda_{i}(t)\rangle$
($i,j=1,2,...,6$). The CM $\Gamma$ satisfies [1, 65]
$d\Gamma/dt=\mathcal{F}\Gamma+\Gamma\mathcal{F}^{T}+\mathcal{D},$ (15)
where $\mathcal{D}={\rm
diag}\big{[}\kappa_{c}\mu^{2}(1-\tau)^{2}(2n_{c}+1),\kappa_{c}\mu^{2}(1-\tau)^{2}(2n_{c}+1),\kappa_{m}(2n_{m}+1),\kappa_{m}(2n_{m}+1),0,\gamma_{d}(2n_{d}+1)\big{]}$
is the diffusion matrix, defined through
$\langle\nu_{i}(t)\nu_{j}(t^{\prime})+\nu_{j}(t^{\prime})\nu_{i}(t)\rangle/2=\mathcal{D}_{ij}\delta(t-t^{\prime})$,
and $\Gamma_{0}={\rm diag}(1,1,1,1,1,1)$ is the CM of the tripartite system at
$t=0$.
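In the steady state, Eq. (15) reduces to the Lyapunov equation $\mathcal{F}\Gamma+\Gamma\mathcal{F}^{T}+\mathcal{D}=0$, which can be solved numerically. The sketch below builds the drift matrix of Eq. (14) in units of $\omega_{d}$, with parameter values loosely based on those quoted in Sec. V and, as a simplifying assumption, no feedback ($\tau=0$, so $\kappa_{fb}=\kappa_{c}$ and $\mu=1$); it is an illustration, not the authors' code.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Parameters in units of omega_d (Sec. V: omega_d/2pi = 10 MHz,
# kappa_c = kappa_m = 1 MHz, g_mc = G_md = 3.2 MHz, gamma_d = 100 Hz).
wd, kc, km, gd = 1.0, 0.1, 0.1, 1e-5
gmc, Gmd = 0.32, 0.32
Dfb, Dm = -1.0, 0.9           # cavity red-detuned, magnon near blue sideband
kfb = kc                      # tau = 0: no feedback (illustrative assumption)
nc, nm, nd = 0.0, 0.0, 20.0   # thermal occupations at T = 10 mK

# Drift matrix of Eq. (14)
F = np.array([
    [-kfb,  Dfb,  0.0,  gmc,  0.0,  0.0],
    [-Dfb, -kfb, -gmc,  0.0,  0.0,  0.0],
    [ 0.0,  gmc,  -km,   Dm, -Gmd,  0.0],
    [-gmc,  0.0,  -Dm,  -km,  0.0,  0.0],
    [ 0.0,  0.0,  0.0,  0.0,  0.0,   wd],
    [ 0.0,  0.0,  0.0,  Gmd,  -wd,  -gd]])

# Diffusion matrix (tau = 0, mu = 1)
D = np.diag([kc*(2*nc + 1), kc*(2*nc + 1),
             km*(2*nm + 1), km*(2*nm + 1),
             0.0, gd*(2*nd + 1)])

# scipy solves F X + X F^T = Q, so pass Q = -D for the steady-state CM
Gamma = solve_continuous_lyapunov(F, -D)
```

The bipartite entanglements are then read off the $4\times 4$ sub-blocks of $\Gamma$ via Eqs. (16)-(18).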
## IV Quantum correlations
We adopt the logarithmic negativity to quantify the correlations of the
bipartite subsystems of the CV system. It is defined by [50, 51]
$\mathcal{E}_{N}=\max[0,-\log(2\xi^{-})]$ (16)
with $\xi^{-}$ the smallest symplectic eigenvalue of the partially transposed
covariance matrix of the two-mode Gaussian state,
$\xi^{-}=\sqrt{\frac{\sigma-\sqrt{\sigma^{2}-4\det\Gamma}}{2}}$ (17)
The covariance matrix $\Gamma$ associated with a given pair of modes can be
written as
$\Gamma=\begin{pmatrix}\mathcal{A}&\mathcal{C}\\\
\mathcal{C}^{T}&\mathcal{B}\end{pmatrix}$ (18)
The $2\times 2$ sub-matrices $\mathcal{A}$ and $\mathcal{B}$ in Eq. (18)
describe the autocorrelations of the two modes, and the $2\times 2$
sub-matrix $\mathcal{C}$ in Eq. (18) denotes their cross-correlations. The
quantity $\sigma$ in Eq. (17) is given by
$\sigma=\det\mathcal{A}+\det\mathcal{B}-2\det\mathcal{C}$. The two subsystems
are entangled if $\mathcal{E}_{N}>0$.
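Equations (16)-(18) translate directly into code. The sketch below (using the convention of a vacuum variance of 1/2, consistent with $\mathcal{E}_{N}=\max[0,-\log 2\xi^{-}]$) is checked on a two-mode squeezed vacuum state, an illustrative example for which $\mathcal{E}_{N}=2r$, with $r$ the squeezing parameter:

```python
import numpy as np

def log_negativity(V):
    """Logarithmic negativity of a two-mode Gaussian state from its 4x4
    covariance matrix V (vacuum variance 1/2), Eqs. (16)-(17)."""
    A, B, C = V[:2, :2], V[2:, 2:], V[:2, 2:]
    sigma = np.linalg.det(A) + np.linalg.det(B) - 2.0*np.linalg.det(C)
    xi = np.sqrt((sigma - np.sqrt(sigma**2 - 4.0*np.linalg.det(V)))/2.0)
    return max(0.0, -np.log(2.0*xi))

# Two-mode squeezed vacuum with squeezing r: E_N = 2r
r = 1.0
c, s = np.cosh(2*r)/2, np.sinh(2*r)/2
V = np.block([[c*np.eye(2),        s*np.diag([1, -1])],
              [s*np.diag([1, -1]), c*np.eye(2)]])
EN = log_negativity(V)   # equals 2r (natural log) for this pure state
```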
To investigate the tripartite entanglement of the system, we use the residual
contangle ${\cal R}$ [66] as a quantitative measure, where the contangle is a
CV analogue of the tangle for discrete-variable tripartite entanglement [67].
A bona fide quantification of tripartite entanglement is given by the minimum
residual contangle [66]
${\cal R}_{\rm min}\equiv{\rm min}\Big{[}{\cal R}^{c|md},\,{\cal
R}^{m|cd},\,{\cal R}^{d|cm}\Big{]},$ (19)
where ${\cal R}^{i|jk}\equiv C_{i|jk}-C_{i|j}-C_{i|k}\geq 0$ ($i,j,k=c,m,d$)
is the residual contangle, with $C_{v|w}$ the contangle of subsystems $v$ and
$w$ ($w$ contains one or two modes), a proper entanglement monotone defined as
the squared logarithmic negativity [66]. A nonzero minimum residual contangle,
${\cal R}_{\rm min}\,{>}\,0$, indicates the existence of genuine tripartite
entanglement in the system. The condition ${\cal R}^{i|jk}\geq 0$ is analogous
to the Coffman-Kundu-Wootters monogamy inequality [67], which holds for
systems of three qubits.
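The contangles entering Eq. (19) can be computed generically from the $6\times 6$ CM by partially transposing the chosen mode (a sign flip of its momentum) and extracting the smallest symplectic eigenvalue of the result; the contangle is then the squared logarithmic negativity. The sketch below uses the vacuum-variance-1/2 convention and an interleaved $(q_{1},p_{1},q_{2},p_{2},\dots)$ ordering, and is checked on a two-mode squeezed state plus an uncorrelated vacuum mode (an illustrative state, not the system of Sec. III), for which the residual contangle vanishes:

```python
import numpy as np

def neg_pt(V, party):
    """Log-negativity of the bipartition party|rest of an n-mode Gaussian
    state with 2n x 2n CM V (vacuum variance 1/2): partially transpose the
    modes in `party` and take the smallest symplectic eigenvalue."""
    n = V.shape[0] // 2
    P = np.eye(2*n)
    for m in party:
        P[2*m + 1, 2*m + 1] = -1.0   # flip the momentum of each transposed mode
    Vt = P @ V @ P
    Omega = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    nu_min = np.min(np.abs(np.linalg.eigvals(1j * Omega @ Vt)))
    return max(0.0, -np.log(2.0*nu_min))

def residual_contangle(V, i, j, k):
    """R^{i|jk} = C_{i|jk} - C_{i|j} - C_{i|k}, with contangle C = E_N^2."""
    def pair(a, b):
        idx = [2*a, 2*a + 1, 2*b, 2*b + 1]
        return V[np.ix_(idx, idx)]
    return (neg_pt(V, [i])**2
            - neg_pt(pair(i, j), [0])**2
            - neg_pt(pair(i, k), [0])**2)

# Check: TMSV (r = 1) between modes 0 and 1, mode 2 in vacuum -> R = 0,
# since all the entanglement of mode 0 is shared with mode 1 alone.
r = 1.0
c, s = np.cosh(2*r)/2, np.sinh(2*r)/2
V = 0.5*np.eye(6)
V[0:2, 0:2] = V[2:4, 2:4] = c*np.eye(2)
V[0:2, 2:4] = V[2:4, 0:2] = s*np.diag([1, -1])
R = residual_contangle(V, 0, 1, 2)
```

${\cal R}_{\rm min}$ in Eq. (19) is then the minimum of `residual_contangle` over the three one-versus-two bipartitions.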
## V Results and Discussion
In this section, we discuss the steady-state quantum correlations of the
system under different effects, considering the experimental values reported
in [52]: $\omega_{c}/2\pi=10$ GHz, $\omega_{d}/2\pi=10$ MHz,
$\gamma_{d}/2\pi=100$ Hz, $\kappa_{c}/2\pi=\kappa_{m}/2\pi=1$ MHz,
$g_{mc}/2\pi=G_{md}/2\pi=3.2$ MHz, at a low temperature $T=10$ mK. Besides, a
YIG sphere with a diameter of 0.5 mm is considered, which contains more
than $10^{17}$ spins.
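As a quick consistency check on these parameters (a back-of-the-envelope sketch, not part of the original analysis), the equilibrium occupations $n_{j}=[\exp(\hbar\omega_{j}/k_{B}T)-1]^{-1}$ at $T=10$ mK show that the GHz photon and magnon baths are essentially empty, while the 10 MHz mechanical mode still holds a few tens of thermal phonons:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant (J s)
kB = 1.380649e-23        # Boltzmann constant (J/K)
T = 10e-3                # temperature, 10 mK

def n_th(omega):
    """Bose-Einstein occupation n = 1/(exp(hbar*omega/(kB*T)) - 1)."""
    return 1.0 / math.expm1(hbar*omega/(kB*T))

n_c = n_th(2*math.pi*10e9)   # cavity/magnon at 10 GHz: ~1e-21, negligible
n_d = n_th(2*math.pi*10e6)   # mechanical mode at 10 MHz: ~20 phonons
```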
Figure 2: Plot of bipartite entanglement (a) $Eom$, (b) $EmM$, and (c) $EoM$
versus detunings $\Delta_{c}$ and $\Delta_{m}$ with $\tau=0.1$ and $\theta=0$.
See text for the other parameters.
We plot in Fig. 2 the steady-state values of the three bipartite
entanglements $Eom$ (between the cavity and magnon modes), $EmM$ (between the
magnon and mechanical modes) and $EoM$ (between the cavity and mechanical
modes) as functions of the detunings $\Delta_{c}$ and $\Delta_{m}$ in the
presence of the coherent feedback loop. We remark that the maximum value of
each of the three bipartite entanglements is enhanced by the coherent
feedback loop in comparison with the results of Ref. [52]. This can be
explained by the re-injection of photons into the cavity, which improves the
coupling between the different pairs of modes. We also observe that when
$\Delta_{c}=-\omega_{d}$ the entanglements $Eom$ and $EoM$ are maximal, while
the entanglement $EmM$ is 0.10.
Figure 3: Plot of $Eom$, $EoM$ and $EmM$ as functions of (a)
$\Delta_{c}/\omega_{d}$, (b) the temperature $T$, and (c) the reflectivity
$\tau$. We take $G_{md}/2\pi=4.8$ MHz and $\Delta_{m}=0.9\omega_{d}$;
$\tau=0.1$ in (a) and (b), and $\Delta_{c}=-\omega_{d}$ in (b) and (c). The
other parameters are as in Fig. 2.
In Fig. 3, we present the steady-state values of the three bipartite
entanglements $Eom$, $EoM$ and $EmM$ versus different parameters. The three
bipartite entanglements are all nonvanishing in overlapping regions of
$\Delta_{c}/\omega_{d}$ in Fig. 3(a), which indicates the presence of
tripartite photon-magnon-phonon entanglement. We remark that the three
bipartite entanglements are robust against temperature and survive up to
about 200 mK (see Fig. 3(b)), as also discussed in Ref. [52]. The diminishing
of all bipartite entanglements with temperature can be attributed to thermal
decoherence [68]. Besides, the two bipartite entanglements $Eom$ and $EoM$
are enhanced with increasing values of the reflectivity parameter $\tau$
(i.e. as the decay rate $\kappa_{fb}$ decreases) and begin to decrease
quickly after reaching their maximum values; in this sense the coherent
feedback enhances these bipartite entanglements, as shown in Fig. 3(c).
Moreover, the entanglement between the magnon and phonon modes decreases
quickly with increasing $\tau$. This can be explained by the decoherence
produced by the re-injection of photons into the cavity: increasing the
photon number induces more thermal effects, which degrade the quantum
correlations between the two modes, as also discussed in Ref. [57].
Figure 4: Plot of the tripartite entanglement in terms of the minimum
residual contangle ${\cal R}_{\rm min}$ versus (a) $\Delta_{c}/\omega_{d}$
and (b) the reflectivity $\tau$ for different temperatures $T$. We take
$G_{md}/2\pi=4.8$ MHz, $\Delta_{m}=0.9\omega_{d}$ and $\theta=0$; $\tau=0.2$
in (a) and $\Delta_{c}=-\omega_{d}$ in (b). The other parameters are as in
Fig. 2.
In Fig. 4(a) we plot the steady-state minimum residual contangle $R_{min}$
versus the detuning $\Delta_{c}/\omega_{d}$ with $G_{md}/2\pi=4.8$ MHz as in
Ref. [52], for fixed values of all other parameters. We notice that the
system is in a genuinely tripartite entangled state, as shown by the nonzero
minimum residual contangle $R_{min}$ in Fig. 4(a). In Fig. 4(b) we plot the
evolution of the minimum residual contangle versus the reflectivity $\tau$
for different values of the temperature $T$. Firstly, we remark the
enhancement of the tripartite entanglement with increasing $\tau$, i.e., the
coherent feedback loop enhances the tripartite entanglement; the tripartite
entanglement then decreases quickly after reaching its maximum value at a
specific value of $\tau$. Besides, $R_{min}$ decreases with increasing
temperature (decoherence), i.e. a higher temperature reduces the amount of
tripartite entanglement. The region in which tripartite entanglement exists
also widens with decreasing temperature, as shown in Fig. 4(b).
Figure 5: Time evolution of all the bipartite entanglements $Eom$, $EoM$ and
$EmM$, with $G_{md}/2\pi=4.8$ MHz, $\tau=0.1$, $\Delta_{c}=-\omega_{d}$ and
$\theta=0$. See text for the other parameters.
In Fig. 5(a) we plot the time evolution of the three bipartite entanglements
$Eom$, $EoM$ and $EmM$. We remark that the dynamics in Fig. 5(a) exhibits
three regimes for all three entanglements. The first regime corresponds to
classically correlated states (zero entanglement), i.e. the two modes are
separable, indicating the absence of any quantum-correlation transfer between
them. The second regime corresponds to the emergence of entanglement between
the two modes; the oscillations generated in time in this regime can be
explained by the Sørensen-Mølmer entanglement dynamics discussed in Ref.
[55]. The third regime corresponds to long evolution times and is associated
with the entanglement between the two modes once they reach the steady state.
In Fig. 5(b) we remark that the system is in a genuinely tripartite entangled
state, as shown by the nonzero minimum residual contangle $R_{min}$.
## VI Conclusions
In summary, we have proposed a theoretical scheme to enhance the three
bipartite entanglements and the tripartite entanglement in an
optomagnomechanical system. We have quantified the amount of entanglement in
all bipartite and tripartite partitions via the logarithmic negativity, and
we have established genuine tripartite entanglement via the nonzero minimum
residual contangle $\mathcal{R}_{min}$. Besides, we have discussed the
stationary and dynamical behavior of the three bipartite entanglements and of
the tripartite entanglement versus the beam-splitter reflectivity parameter
and versus decoherence effects, using experimentally feasible parameters. We
have shown that the presence of the coherent feedback loop enhances the
bipartite photon-magnon and photon-phonon entanglements $Eom$ and $EoM$,
respectively, while it degrades the magnon-phonon entanglement $EmM$, as
shown in Fig. 3(c). Our results show that the entanglement is fragile under
thermal (decoherence) effects, whereas robustness of the entanglement can be
achieved in the presence of coherent feedback.
## References
* [1] D. Vitali, et al., Phys. Rev. Lett. 98, 030405 (2007).
* [2] J. Manninen, M. Asjad, E. Selenius, R. Ojajarvi, P. Kuusela, F. Massel, Physical Review A 98, 043831 (2018).
* [3] M. Amazioug, M. Nassik and N. Habiballah. Eur. Phys. J. D 72, 171 (2018).
* [4] M. Asjad, P. Tombesi, D. Vitali, Optics Express 23 (6), 7786-7794 (2015).
* [5] M. Asjad, M. A. Shahzad, F. Saif, The European Physical Journal D 67, 1-5 (2013)
* [6] M. Asjad, S. Zippilli, D. Vitali, Physical Review A 93, 062307 (2016).
* [7] M. Amazioug, B. Maroufi and M. Daoud, Quantum Inf. Process. 19, 16 (2020).
* [8] M. Amazioug, M. Nassik and N. Habiballah, Eur. Phys. Jour. D 72, 9 (2018).
* [9] J. Teufel, T. Donner, D. Li, J. Harlow, M. Allman, K. Cicak, A. Sirois, J. Whittaker, K. Lehnert, R. Simmonds. Nature 475 359 (2011).
* [10] S. Machnes, J. Cerrillo, M. Aspelmeyer, W. Wieczorek, M.B. Plenio, A. Retzker. Phys. Rev. Lett. 108, 153601 (2012).
* [11] M. Asjad, N. E. Abari, S. Zippilli, D. Vitali, Optics Express 27, 32427 (2019).
* [12] J. Chan, T. P. M. Alegre, A.H. Safavi Naeini, J.T. Hill, A. Krause, S. Gröblacher, M. Aspelmeyer, O. Painter. Nature 478 (7367) 89 (2011).
* [13] M. Bhattacharya, P. Meystre, Phys. Rev. Lett. 99, 073601 (2007).
* [14] M. Amazioug, M. Daoud, S. K. Singh, M. Asjad. arXiv preprint arXiv:2209.07401
* [15] J.Q. Liao, L. Tian. Phys. Rev. Lett. 116, 163602 (2016).
* [16] M. Asjad, D. Vitali, Journal of Physics B: Atomic, Molecular and Optical Physics 47, 045502 (2014).
* [17] Z.-X. Liu, B. Wang, C. Kong, L.-G. Si, H. Xiong, Y. Wu. Sci. Rep. 7, 12521 (2017).
* [18] H. Xiong, L.G. Si, Y. Wu. Appl. Phys. Lett. 110, 171102 (2017).
* [19] H. Xiong, Z.X. Liu, Y. Wu. Opt. Lett. 42, 3630 (2017).
* [20] C.M. Caves. Phys. Rev. Lett. 45, 75 (1980).
* [21] A. Abramovici, W.E. Althouse, R.W.P. Drever, Y. Gürsel, S. Kawamura, F.J. Raab, D. Shoemaker, L. Sievers, R.E. Spero, K.S. Thorne, R.E. Vogt, R. Weiss, S.E. Whitcomb, M.E. Zucker, LIGO: The Laser Interferometer Gravitational-Wave Observatory, Science 256, 325 (1992).
* [22] V. Braginsky, S.P. Vyatchanin. Phys. Lett. A 293, 228 (2002).
* [23] A. H. Safavi-Naeini, T. P. Mayer Alegre, J. Chan, et al., Nature, 472, 69 (2011).
* [24] M. Asjad, Journal of Russian Laser Research 34, 159 (2013).
* [25] M. Asjad, Journal of Russian Laser Research 34, 278 (2013).
* [26] S. Weis, R. Riviere, S. Deleglise, et al., Science, 330, 1520 (2010).
* [27] M. Asjad, S. Zippilli, P. Tombesi, D. Vitali, Physica Scripta 90, 074055 (2015).
* [28] M. Asjad, P. Tombesi, D. Vitali, Physical Review A 94, 052312 (2016).
* [29] M. Asjad, M. Qasymeh, H. Eleuch, Optics Express 30, 21016 (2022).
* [30] M. Rossi, et al., Phys. Rev. Lett. 119, 123603 (2017).
* [31] J.B. Clark, et al., Nature 541, 191 (2017).
* [32] N. Kralj, et al., Quantum Sci. Technol. 2, 034014 (2017).
* [33] M. Rossi, et al., Nature 563, 53 (2018).
* [34] D. Lachance-Quirion, Y. Tabuchi, A. Gloppe, K. Usami, and Y. Nakamura, Appl. Phys. Express 12, 070101 (2019).
* [35] H. Y. Yuan, Y. Cao, A. Kamra, R. A. Duine, and P. Yan, arXiv:2111.14241
* [36] X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Sci. Adv. 2, e1501286 (2016).
* [37] Z.-Y. Fan, R.-C. Shen, Y.-P. Wang, J. Li, and J. Q. You, Phys. Rev. A 105, 033507 (2022).
* [38] C. Kittel, Phys. Rev. 73, 155 (1948).
* [39] H. Huebl et al., Phys. Rev. Lett. 111, 127003 (2013).
* [40] Y. Tabuchi et al., Phys. Rev. Lett. 113, 083603 (2014).
* [41] X. Zhang et al., Phys. Rev. Lett. 113, 156401 (2014).
* [42] M. Goryachev et al., Phys. Rev. Appl. 2, 054002 (2014).
* [43] L. Bai et al., Phys. Rev. Lett. 114, 227201 (2015).
* [44] A. Einstein, B. Podolsky, N. Rosen. Phys. Rev. 47, 777 (1935).
* [45] E. Schrödinger. Math. Proc. Camb. Philos. Soc. 31, 555 (1935).
* [46] C.H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres, W.K. Wootters. Phys. Rev. Lett. 70, 1895 (1993).
* [47] C.H. Bennett, S.J. Wiesner. Phys. Rev. Lett. 69, 2881 (1992).
* [48] V. Scarani, S. Lblisdir, N. Gisin, A. Acin, Quantum cloning, Rev. Mod. Phys. 77, 1225 (2005).
* [49] A.K. Ekert, Quantum cryptography based on Bell’s theorem, Phys. Rev. Lett. 67, 661 (1991).
* [50] G. Vidal, R.F. Werner, Phys. Rev. A 65, 032314 (2002) .
* [51] G. Adesso, A. Serafini, F. Illuminati, Phys. Rev. Lett. 92, 087901 (2004) .
* [52] J. Li, Shi-Yao Zhu and G. S. Agarwal. Physical review letters 121, 203601 (2018).
* [53] M. Asjad, Jie Li, Shi-Yao Zhu, J.Q. You, https://doi.org/10.1016/j.fmre.2022.07.006, arXiv preprint arXiv:2203.10767, 2022.
* [54] B. Hussain, S. Qamar, and M. Irfan. Physical Review A 105, 063704 (2022).
* [55] J. Li, G. Li, S. Zippilli, D. Vitali, T. Zhang. Phys. Rev. A 95, 043819 (2017).
* [56] S. Huang, A. Chen. Appl. Sci. 9 3402 (2019).
* [57] M. Amazioug, B. Maroufi and M. Daoud. Physics Letters A. 384, 126705 (2020).
* [58] C. Kittel, Phys. Rev. 110 (1958) 836.
* [59] Y.-P. Wang et al., Phys. Rev. Lett. 120, 057202 (2018).
* [60] Y.-P. Wang et al., Phys. Rev. B 94, 224410 (2016).
* [61] L. Bai et al., Phys. Rev. Lett. 114, 227201 (2015).
* [62] D.F. Walls, G.J. Milburn, Quantum Optics, (Springer, Berlin, Germany, 1998).
* [63] C. W. Gardiner and P. Zoller, Quantum Noise (Springer, Berlin, Germany, 2000).
* [64] V. Giovannetti and D. Vitali, Phys. Rev. A 63, 023812 (2001).
* [65] P.C. Parks, V. Hahn, Stability Theory (Prentice Hall, New York, 1993).
* [66] G. Adesso and F. Illuminati, J. Phys. A 40, 7821 (2007).
* [67] V. Coffman, J. Kundu, and W. K. Wootters, Phys. Rev. A 61, 052306 (2000).
* [68] W. H. Zurek: Decoherence, einselection, and the quantum origins of the classical. Rev. Mod. Phys. 75, 715 (2003)
# The Herglotz variational principle for dissipative field theories
Jordi Gaset, Manuel Lainz, Arnau Mas, Xavier Rivas
e-mail: <EMAIL_ADDRESS> (ORCID: 0000-0001-8796-3149)
e-mail: <EMAIL_ADDRESS> (ORCID: 0000-0002-2368-5853)
e-mail: <EMAIL_ADDRESS> (ORCID: 0000-0003-0532-0938)
e-mail: <EMAIL_ADDRESS> (ORCID: 0000-0002-4175-5157)
###### Abstract
In recent years, with the incorporation of contact geometry, there has been a
renewed interest in the study of dissipative or non-conservative systems in
physics and other areas of applied mathematics. The equations arising in the
study of contact Hamiltonian systems can also be obtained via the Herglotz
variational principle. The contact Lagrangian and Hamiltonian formalisms for
mechanical systems have also been generalized to field theories. The main
goal of this paper is to develop a generalization of the Herglotz variational
principle for first-order and higher-order field theories. In order to
illustrate it, we study three examples: the damped vibrating string, the
Korteweg–De Vries equation, and an academic example showing that the
non-holonomic and the vakonomic variational principles are not fully
equivalent.
Keywords: Herglotz variational principle, higher-order field theories, contact
field theory, Korteweg–De Vries equation
MSC 2020 codes: 37K58, 37L05, 53D10, 35Q53
###### Contents
1. 1 Introduction
2. 2 The Herglotz principle in mechanics
1. 2.1 Herglotz principle: implicit version
2. 2.2 Herglotz principle: vakonomic version
3. 2.3 Herglotz principle: nonholonomic version
3. 3 The Herglotz principle for fields
1. 3.1 Geometric structures
2. 3.2 Herglotz principle for fields: non-holonomic version
3. 3.3 Herglotz principle for fields: vakonomic version
4. 3.4 Relations between both approaches
4. 4 Higher-order Lagrangian densities
5. 5 Examples
1. 5.1 Vibrating string with damping
2. 5.2 The non-holonomic and the vakonomic principles are not equivalent
3. 5.3 The Korteweg–De Vries Lagrangian
6. 6 Conclusions and outlook
7. References
## 1 Introduction
It is well known that symplectic geometry is the natural geometric framework
to study Hamiltonian mechanical systems [1, 2, 36, 45]. When dealing with
time-dependent mechanical systems, cosymplectic geometry is the appropriate
framework to work with [10, 12, 29]. These two geometric structures have been
generalized to the so-called $k$-symplectic and $k$-cosymplectic structures in
order to deal with autonomous and non-autonomous field theories [3, 25, 26,
27, 46, 50, 54, 55].
In recent years, the interest in dissipative systems has grown significantly.
In part, this is due to the incorporation of contact geometry [4, 34, 42] to
the study of non-conservative Lagrangian and Hamiltonian mechanical systems
[6, 8, 20, 22, 31]. This approach has proved to be very useful in many
different problems in areas such as thermodynamics, quantum mechanics, general
relativity, control theory among others [7, 13, 18, 28, 33, 37, 42, 47, 49,
57, 58]. Recently, the notion of cocontact manifold has been developed in
order to introduce explicit dependence on time [14, 53].
This growing interest has driven researchers to look for a generalization of
$k$-symplectic and contact geometry in order to work with non-conservative
field theories. This new geometric framework is called $k$-contact geometry,
and has already been applied to the study of both Hamiltonian and Lagrangian
field theories in the autonomous [30, 32, 51] and non-autonomous [52] cases.
The contact formulation of mechanics has also been generalized to describe
higher-order mechanical systems in [17]. The Skinner–Rusk formalism has also
been studied in detail for both contact [15] and $k$-contact systems [39].
Recently, the notion of multicontact structure has been introduced [16],
generalizing the multisymplectic framework to deal with non-conservative field
theories. The Herglotz principle [23, 40, 41, 56] provides a variational
formulation for contact Hamiltonian systems. There have been several attempts
[35, 44] to generalize this theory to field theories.
In this paper we will derive this principle in a more general geometric
language and compare it to the existing approaches. In order to do that, we
will review three different formulations of the Herglotz principle for
mechanics, the implicit version, the vakonomic version, and the non-holonomic
version. In order to find a Herglotz principle for higher dimensions, we will
generalize the vakonomic and the non-holonomic versions of the Herglotz
principle for mechanical systems. We will see that the non-holonomic approach
yields the same field equations as in the $k$-contact [32, 52] and
multicontact [16] formalisms. On the other hand, in contrast to what happens
in mechanics, using the vakonomic approach we obtain an additional condition
that must be fulfilled. This new equation implies that the $k$-contact and
multicontact Lagrangian formalisms are not fully equivalent to the vakonomic
variational principle introduced in the present paper. One of the examples of
the last section will illustrate this fact.
The vakonomic Herglotz variational principle for first-order field theories is
then extended to a suitable Herglotz principle for higher-order non-
conservative field theories. As an example, the Korteweg–De Vries equation
[43] is discussed. This equation arises from a second-order Lagrangian and is
used to model waves in shallow waters. In order to have a dissipative
behaviour, we add a standard damping term to the Korteweg–De Vries Lagrangian
and use the variational principle to derive a non-conservative version of the
Korteweg–De Vries equation.
The organization of the paper is as follows. Section 2 offers a review of the
Herglotz principle in mechanics; in particular, we present three different
approaches: the implicit version, the vakonomic version and the non-holonomic
version. Section 3 is devoted to extending the Herglotz variational principle
from mechanics to field theory using the vakonomic and the non-holonomic
approaches. In Section 4 we generalize the results of Section 3 to the case
of higher-order Lagrangian densities using the vakonomic variational
principle.
Finally, Section 5 is devoted to the study of some examples of the
theoretical framework developed above. The first example deals with a
first-order system consisting of a damped vibrating string with friction
linear in the velocity. The second example shows that, as said before, the
vakonomic variational principle for field theories and the $k$-contact
formulation are not equivalent: we present an academic example, obtained by
taking the Lagrangian of the previous example and slightly modifying the
damping term, in which we find a solution of the $k$-contact Euler–Lagrange
equations that does not satisfy the additional condition arising from the
vakonomic principle. The last example deals with the Korteweg–De Vries
equation, which arises from a second-order Lagrangian.
Throughout this paper, all the manifolds are assumed to be real, connected and
second countable. Manifolds and mappings are assumed to be smooth. The sum
over crossed repeated indices is understood.
## 2 The Herglotz principle in mechanics
The Herglotz principle, in simple terms, might be explained as follows. Given
a configuration manifold $Q$, consider a Lagrangian function
$L:\mathrm{T}Q\times\mathbb{R}\to\mathbb{R}$ depending on the positions
$q^{i}$, the velocities $\dot{q}^{i}$ and an extra variable $z$ that we can
think of as the _action_ , but we will soon discuss its meaning in more
detail. The Herglotz variational principle states that the trajectory of the
system is a curve $c(t)$ that is a critical point of the action $\zeta(1)$,
where $c(0)=q_{0}$, $c(1)=q_{1}$ and $\zeta$ solves the Cauchy problem
$\begin{dcases}\frac{\mathrm{d}\zeta}{\mathrm{d}t}=L(c,\dot{c},\zeta)\,,\\\
\zeta(0)=z_{0}\,.\end{dcases}$
We note that the action is given by
$\zeta(1)=\int_{0}^{1}\frac{\mathrm{d}\zeta}{\mathrm{d}t}\mathrm{d}t+\zeta(0)=\int_{0}^{1}L(c(t),\dot{c}(t),\zeta(t))\mathrm{d}t+z_{0}\,,$
(1)
which, if the Lagrangian does not depend on $z$, coincides with the usual
Hamilton’s action up to a constant.
A slight modification of this principle, which is the one we will prefer in
this paper, is to consider the action as the increment of $z$, that is,
$\zeta(1)-\zeta(0)=\int_{0}^{1}\frac{\mathrm{d}\zeta}{\mathrm{d}t}\mathrm{d}t=\int_{0}^{1}L(c(t),\dot{c}(t),\zeta(t))\mathrm{d}t\,,$
(2)
which coincides exactly with Hamilton’s action if the Lagrangian $L$ does not
depend on $z$. Since both definitions of the action differ only by a constant
$z_{0}$, their critical curves are the same. Indeed, they are the curves $c$
such that $(c,\dot{c},\zeta)$ satisfy Herglotz’s equations:
$\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial
L}{\partial\dot{q}^{i}}-\frac{\partial L}{\partial q^{i}}=\frac{\partial
L}{\partial\dot{q}^{i}}\frac{\partial L}{\partial z}\,.$ (3)
To be more precise, we distinguish two possible equivalent interpretations of
this principle. We can either understand it as an implicit action principle
for curves $c$ on $Q$, or as a constrained but explicit action principle for
curves $(c,\zeta)$ on $Q\times\mathbb{R}$. The three different ways to
formalize the Herglotz principle that we will see in this section are based on
[23]. Another version can be found in [19].
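As a concrete sketch (an illustrative example of ours, not taken from the paper), take the contact Lagrangian $L(q,\dot{q},z)=\tfrac{1}{2}\dot{q}^{2}-\tfrac{1}{2}q^{2}-\gamma z$. Since $\partial L/\partial z=-\gamma$ is constant, Herglotz's equations close on $(q,\dot{q})$ and reduce to the linearly damped oscillator $\ddot{q}+\gamma\dot{q}+q=0$, which we can verify by integrating the coupled Cauchy problem:

```python
import numpy as np

# Illustrative contact Lagrangian (our example): L = qdot^2/2 - q^2/2 - gamma*z.
# Herglotz's equation d/dt(dL/dqdot) - dL/dq = (dL/dqdot)(dL/dz) then reads
# qddot = -q - gamma*qdot, a linearly damped harmonic oscillator.
gamma = 0.3

def rhs(y):
    q, v, z = y
    L = 0.5 * v**2 - 0.5 * q**2 - gamma * z   # dz/dt = L (the Cauchy problem)
    return np.array([v, -q - gamma * v, L])

def rk4(y, dt, steps):
    for _ in range(steps):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * dt * k1)
        k3 = rhs(y + 0.5 * dt * k2)
        k4 = rhs(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

q1, v1, z1 = rk4(np.array([1.0, 0.0, 0.0]), 1e-3, 1000)  # q(0)=1, t in [0,1]

# compare with the analytic underdamped solution of qddot + gamma*qdot + q = 0
w = np.sqrt(1 - gamma**2 / 4)
q_exact = np.exp(-gamma / 2) * (np.cos(w) + gamma / (2 * w) * np.sin(w))
print(abs(q1 - q_exact))  # tiny: the integrated trajectory matches
```

Here $z$ plays the role of $\zeta$, accumulating the action along the trajectory; the $(q,\dot{q})$ dynamics decouples from $z$ only because $\partial L/\partial z$ is constant in this example.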
### 2.1 Herglotz principle: implicit version
For the first interpretation, we consider the (infinite dimensional) manifold
$\Omega(q_{0},q_{1})$ of curves $c:[0,1]\to Q$ with endpoints $q_{0},q_{1}\in
Q$. The tangent space of $\mathrm{T}_{c}\Omega(q_{0},q_{1})$, is the space of
vector fields along $c$ vanishing at the endpoints. That is,
$\displaystyle\mathrm{T}_{c}\Omega(q_{0},q_{1})$ $\displaystyle=\\{\delta
c\mid\delta c(t)\in\mathrm{T}_{c(t)}Q\,,\ \delta c(0)=0\,,\ \delta
c(1)=0\\}\,.$
Let $z_{0}\in\mathbb{R}$ and consider the operator
$\mathcal{Z}_{z_{0}}:c\in\Omega(q_{0},q_{1})\longmapsto\mathcal{Z}_{z_{0}}(c)\in\mathscr{C}^{\infty}([0,1]\to\mathbb{R})\,,$
(4)
where $\mathcal{Z}_{z_{0}}(c)$ is the only solution to the Cauchy problem
$\begin{dcases}\frac{\mathrm{d}\mathcal{Z}_{z_{0}}(c)}{\mathrm{d}t}=L(c,\dot{c},\mathcal{Z}_{z_{0}}(c))\,,\\\
\mathcal{Z}_{z_{0}}(c)(0)=z_{0}\,,\end{dcases}$ (5)
that is, it assigns to each curve on the base space its action as a function
of time. This map is well-defined because the Cauchy problem (5) always has a
unique solution.
Now, the _contact action functional_ maps each curve $c\in\Omega(q_{0},q_{1})$
to the increment of the solution of the Cauchy problem (5):
$\displaystyle\mathcal{A}_{z_{0}}:\Omega(q_{0},q_{1})$
$\displaystyle\longrightarrow\mathbb{R}$ (6) $\displaystyle c$
$\displaystyle\longmapsto\mathcal{Z}_{z_{0}}(c)(1)-\mathcal{Z}_{z_{0}}(c)(0)\,.$
Note that, by the fundamental theorem of calculus,
$\mathcal{A}_{z_{0}}(c)=\int_{0}^{1}L(c(t),\dot{c}(t),\mathcal{Z}_{z_{0}}(c)(t))\mathrm{d}t\,.$
The following theorem states that the critical points of this action
functional are precisely the solutions to Herglotz equation [21].
###### Theorem 2.1 (Herglotz variational principle, implicit version).
Let $L:\mathrm{T}Q\times\mathbb{R}\to\mathbb{R}$ be a Lagrangian function and
consider $c\in\Omega(q_{0},q_{1})$ and $z_{0}\in\mathbb{R}$. Then,
$(c,\dot{c},\mathcal{Z}_{z_{0}}(c))$ satisfies the Herglotz equations
$\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial
L}{\partial\dot{q}^{i}}-\frac{\partial L}{\partial q^{i}}=\frac{\partial
L}{\partial\dot{q}^{i}}\frac{\partial L}{\partial z}\,,$
if and only if $c$ is a critical point of the contact action functional
$\mathcal{A}_{z_{0}}$.
###### Proof.
In order to simplify the notation, we drop the subscript $z_{0}$, write
$\chi(t)=(c(t),\dot{c}(t),\mathcal{Z}(c)(t))$, and let
$\psi=\mathrm{T}_{c}\mathcal{Z}(\delta c)$. Consider a curve
$c_{\lambda}\in\Omega(q_{0},q_{1})$, namely a family of
curves in $Q$ with fixed endpoints $q_{0},q_{1}$ smoothly parametrized by
$\lambda\in\mathbb{R}$, such that
$\delta
c={\frac{\mathrm{d}c_{\lambda}}{\mathrm{d}\lambda}}\Big{|}_{\lambda=0}\,.$
Since $\mathcal{Z}(c_{\lambda})(0)=z_{0}$ for all $\lambda$, we have
$\psi(0)=0$. We compute the derivative of $\psi$ by interchanging the order
of the derivatives and using the differential equation defining $\mathcal{Z}$:
$\displaystyle\dot{\psi}(t)$
$\displaystyle={\frac{\mathrm{d}}{\mathrm{d}\lambda}\Big{|}_{\lambda=0}\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{Z}(c_{\lambda}(t))}=\frac{\mathrm{d}}{\mathrm{d}\lambda}\Big{|}_{\lambda=0}{L(c_{\lambda}(t),\dot{c}_{\lambda}(t),\mathcal{Z}(c_{\lambda})(t))}$
$\displaystyle=\frac{\partial L}{\partial q^{i}}(\chi(t)){\delta
c}^{i}(t)+\frac{\partial
L}{\partial\dot{q}^{i}}(\chi(t)){\delta\dot{c}}^{i}(t)+\frac{\partial
L}{\partial z}(\chi(t))\psi(t)\,.$
Hence, the function $\psi$ is the solution to the ODE above. Since
$\psi(0)=0$, necessarily,
$\psi(t)=\frac{1}{\sigma(t)}\int_{0}^{t}\sigma(\tau)\left({\frac{\partial
L}{\partial q^{i}}(\chi(\tau)){\delta c}^{i}(\tau)+\frac{\partial
L}{\partial\dot{q}^{i}}(\chi(\tau)){\delta\dot{c}}^{i}(\tau)}\right)\mathrm{d}\tau\,,$
(7)
where
$\sigma(t)=\exp\left({-\int_{0}^{t}\frac{\partial L}{\partial
z}(\chi(\tau))\mathrm{d}\tau}\right)>0\,.$ (8)
Integrating by parts and using that the variation vanishes at the
endpoints, we get the following expression:
$\displaystyle\mathrm{T}_{c}\mathcal{A}(\delta c)$
$\displaystyle=\mathrm{T}_{c}\mathcal{Z}(\delta
c)(1)=\psi(1)=\frac{1}{\sigma(1)}\int_{0}^{1}{\delta
c}^{i}(t)\left(\sigma(t)\frac{\partial L}{\partial
q^{i}}(\chi(t))-\frac{\mathrm{d}}{\mathrm{d}t}\left(\sigma(t)\frac{\partial
L}{\partial\dot{q}^{i}}(\chi(t))\right)\right)\mathrm{d}t$
$\displaystyle=\frac{1}{\sigma(1)}\int_{0}^{1}\delta
c^{i}(t)\,\sigma(t)\left(\frac{\partial
L}{\partial q^{i}}(\chi(t))-\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial
L}{\partial\dot{q}^{i}}(\chi(t))+\frac{\partial
L}{\partial\dot{q}^{i}}(\chi(t))\frac{\partial L}{\partial
z}(\chi(t))\right)\mathrm{d}t\,,$
where we have used that
$\frac{\mathrm{d}\sigma}{\mathrm{d}t}(t)=-\frac{\partial L}{\partial
z}(\chi(t))\sigma(t)\,.$ (9)
Since this must hold for every possible variation, we have
$\sigma(t)\left(\frac{\partial L}{\partial
q^{i}}(\chi(t))-\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial
L}{\partial\dot{q}^{i}}(\chi(t))+\frac{\partial
L}{\partial\dot{q}^{i}}(\chi(t))\frac{\partial L}{\partial
z}(\chi(t))\right)=0\,,$
and, since $\sigma(t)>0$, this yields the Herglotz equation. ∎
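For instance (our own example, not the paper's), for the contact Lagrangian $L=\tfrac{1}{2}\dot{q}^{2}-\tfrac{1}{2}q^{2}-\gamma z$ one has $\partial L/\partial z=-\gamma$, so the integrating factor $\sigma$ is explicit and the vanishing of the integrand gives a damped oscillator:

```latex
\sigma(t)=\exp\left(-\int_{0}^{t}\frac{\partial L}{\partial z}\,
  \mathrm{d}\tau\right)=e^{\gamma t}\,,
\qquad
\frac{\partial L}{\partial q}
 -\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial L}{\partial\dot{q}}
 +\frac{\partial L}{\partial\dot{q}}\frac{\partial L}{\partial z}
 =-q-\ddot{q}-\gamma\dot{q}=0\,.
```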
### 2.2 Herglotz principle: vakonomic version
Another way to understand this principle is to think of it as a constrained
variational principle for curves on $Q\times\mathbb{R}$. This time, we will
work on the manifold $\widetilde{\Omega}(q_{0},q_{1},z_{0})$ of curves
$\widetilde{c}=(c,\zeta):[0,1]\to Q\times\mathbb{R}$ such that $c(0)=q_{0}$,
$c(1)=q_{1}$, $\zeta(0)=z_{0}$. Note that we do not constrain $\zeta(1)$. The
tangent space at the curve
$\widetilde{c}\in\widetilde{\Omega}(q_{0},q_{1},z_{0})$ is given by
$\displaystyle\mathrm{T}_{\widetilde{c}}\widetilde{\Omega}(q_{0},q_{1},z_{0})$
$\displaystyle=\\{\delta\widetilde{c}(t)=(\delta
c(t),\delta\zeta(t))\in\mathrm{T}_{\widetilde{c}(t)}(Q\times\mathbb{R})\mid\delta
c(0)=0,\,\delta c(1)=0,\delta\zeta(0)=0\\}\,.$ (10)
In this space, the action functional $\widetilde{\mathcal{A}}$ can be defined
as an integral
$\displaystyle\widetilde{\mathcal{A}}:\widetilde{\Omega}(q_{0},q_{1},z_{0})$
$\displaystyle\longrightarrow\mathbb{R}$ (11) $\displaystyle\widetilde{c}$
$\displaystyle\longmapsto\zeta(1)-\zeta(0)=\int_{0}^{1}\dot{\zeta}(t)\mathrm{d}t\,.$
We will restrict this action to the set of paths that satisfy the constraint
$\dot{\zeta}=L$. For this, we consider the paths at the zero set of the
constraint function $\phi_{L}$:
$\phi_{L}(q,\dot{q},z,\dot{z})=\dot{z}-L(q,\dot{q},z)\,.$ (12)
That is, we consider
$\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})=\\{\widetilde{c}=(c,\zeta)\in\widetilde{\Omega}(q_{0},q_{1},z_{0})\mid\phi_{L}\circ\dot{\widetilde{c}}=\dot{\zeta}-L(c,\dot{c},\zeta)=0\\}\,.$
(13)
Note that, since the Cauchy problem (5) has a unique solution, the elements
$(c,\zeta)\in\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$ are precisely
$(c,\mathcal{Z}_{z_{0}}(c))$, where $c\in\Omega(q_{0},q_{1})$. That is, the
map
$\operatorname{Id}\times\mathcal{Z}_{z_{0}}:\Omega(q_{0},q_{1})\to\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$
given by
${(\operatorname{Id}\times\mathcal{Z}_{z_{0}})}(c)=(c,\mathcal{Z}_{z_{0}}(c))$
is a bijection, with inverse $(\operatorname{pr}_{Q})_{*}(c,\zeta)=c$.
Moreover, the identification intertwines the two action functionals:
$\widetilde{\mathcal{A}}\circ(\operatorname{Id}\times\mathcal{Z}_{z_{0}})=\mathcal{A}\,.$
(14)
Hence $({c},\zeta)\in\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$ is a critical
point of the functional $\widetilde{\mathcal{A}}$ if and only if $c$ is a
critical point of $\mathcal{A}$. So the critical points of
$\widetilde{\mathcal{A}}$ restricted to
$\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$ are precisely the curves that
satisfy the Herglotz equations.
###### Theorem 2.2 (Herglotz variational principle, vakonomic version).
Let $L:\mathrm{T}Q\times\mathbb{R}\to\mathbb{R}$ be a Lagrangian function and
let $(c,\zeta)\in\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$. Then,
$(c,\dot{c},\zeta)$ satisfies the Herglotz equations:
$\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial
L}{\partial\dot{q}^{i}}-\frac{\partial L}{\partial q^{i}}=\frac{\partial
L}{\partial\dot{q}^{i}}\frac{\partial L}{\partial z}\,,$ (15)
if and only if $(c,\zeta)$ is a critical point of
$\widetilde{\mathcal{A}}|_{\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})}$.
We will provide an alternative proof by directly finding the critical points
of the functional $\widetilde{\mathcal{A}}$ restricted to
$\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})\subseteq\widetilde{\Omega}(q_{0},q_{1},z_{0})$
using the following infinite-dimensional version of the Lagrange multiplier
theorem (see [2] for more details).
###### Theorem 2.3 (Lagrange multiplier Theorem).
Let $M$ be a smooth manifold and let $E$ be a Banach space. Consider a smooth
submersion $g:M\to E$ such that $A=g^{-1}(\\{0\\})$ is a smooth submanifold,
and a smooth function $f:M\to\mathbb{R}$. Then $p\in A$ is a critical point of
$f|_{A}$ if and only if there exists $\widehat{\lambda}\in E^{*}$ such that
$p$ is a critical point of $f+\widehat{\lambda}\circ g$.
###### Proof of Herglotz variational principle, vakonomic version.
We will apply this result to our situation. In the notation of the theorem,
$M=\widetilde{\Omega}(q_{0},q_{1},z_{0})$ is the smooth manifold. We pick the
Banach space $E=L^{2}([0,1]\to\mathbb{R})$ of square integrable functions.
This space is, indeed, a Hilbert space with inner product
$\langle\alpha,\beta\rangle=\int_{0}^{1}\alpha(t)\beta(t)\mathrm{d}t\,.$
Recall that, by the Riesz representation theorem, there exists a bijection
between $L^{2}([0,1]\to\mathbb{R})$ and its dual such that for every
$\widehat{\alpha}\in L^{2}([0,1]\to\mathbb{R})^{*}$ there exists $\alpha\in
L^{2}([0,1]\to\mathbb{R})$ such that
$\hat{\alpha}(\beta)=\langle\alpha,\beta\rangle$ for all $\beta\in
L^{2}([0,1]\to\mathbb{R})$.
Our constraint function is
$\displaystyle g:\widetilde{\Omega}(q_{0},q_{1},z_{0})$
$\displaystyle\longrightarrow L^{2}([0,1]\to\mathbb{R})$
$\displaystyle\widetilde{c}$
$\displaystyle\longmapsto(\phi_{L})\circ(\widetilde{c},\dot{\widetilde{c}})\,,$
where $\phi_{L}$ is a constraint function locally defining
$A=g^{-1}(0)=\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$.
By Theorem 2.3, $\widetilde{c}$ is a critical point of
$f=\widetilde{\mathcal{A}}$ restricted to
$\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$ if and only if there exists
$\widehat{\lambda}\in L^{2}([0,1]\to\mathbb{R})^{*}$ (which is represented by
$\lambda\in L^{2}([0,1]\to\mathbb{R})$) such that $\widetilde{c}$ is a
critical point of
$\widetilde{\mathcal{A}}_{\lambda}=\widetilde{\mathcal{A}}+\widehat{\lambda}\circ
g$.
Indeed,
$\widetilde{\mathcal{A}}_{\lambda}=\int_{0}^{1}L_{\lambda}(\widetilde{c}(t),\dot{\widetilde{c}}(t))\mathrm{d}t\,,$
where
$L_{\lambda}(q,z,\dot{q},\dot{z})=\dot{z}-\lambda\phi_{L}(q,z,\dot{q},\dot{z})\,.$
Since the endpoint of $\zeta$ is not fixed, the critical points of this
functional $\widetilde{\mathcal{A}}_{\lambda}$ are the solutions of the
Euler–Lagrange equations for $L_{\lambda}$ that satisfy the natural boundary
condition
$\frac{\partial
L_{\lambda}}{\partial\dot{z}}(\widetilde{c}(1),\dot{\widetilde{c}}(1))=1-\lambda(1)\frac{\partial\phi_{L}}{\partial\dot{z}}(\widetilde{c}(1),\dot{\widetilde{c}}(1))=0\,.$
Since $\phi_{L}=\dot{z}-L$, this condition reduces to $\lambda(1)=1$.
The Euler–Lagrange equations of $L_{\lambda}$ are given by
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\left(\lambda(t)\frac{\partial\phi_{L}(\widetilde{c}(t),\dot{\widetilde{c}}(t))}{\partial\dot{q}^{i}}\right)-\lambda(t)\frac{\partial\phi_{L}(\widetilde{c}(t),\dot{\widetilde{c}}(t))}{\partial
q^{i}}$ $\displaystyle=0\,,$ (16a)
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\left(\lambda(t)\frac{\partial\phi_{L}(\widetilde{c}(t),\dot{\widetilde{c}}(t))}{\partial\dot{z}}\right)-\lambda(t)\frac{\partial\phi_{L}(\widetilde{c}(t),\dot{\widetilde{c}}(t))}{\partial
z}$ $\displaystyle=0\,,$ (16b)
Since $\phi_{L}=\dot{z}-L$, the equation (16b) for $z$ is just
$\frac{\mathrm{d}\lambda(t)}{\mathrm{d}t}=-\lambda(t)\frac{\partial
L}{\partial z}\,.$
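Together with the natural boundary condition $\lambda(1)=1$ obtained above, this linear ODE for the multiplier can be integrated in closed form (a remark we add; it is not needed for the proof, but it justifies dividing by $\lambda$):

```latex
% Solution of d\lambda/dt = -\lambda (\partial L/\partial z) with \lambda(1)=1,
% evaluated along the curve (\widetilde{c}(s),\dot{\widetilde{c}}(s)):
\lambda(t)
  = \exp\!\left(
      \int_{t}^{1}
        \frac{\partial L}{\partial z}
        \bigl(\widetilde{c}(s),\dot{\widetilde{c}}(s)\bigr)
      \,\mathrm{d}s
    \right) > 0 .
```

In particular $\lambda$ never vanishes, so the division by $\lambda$ in the final step of the proof is legitimate.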
Substituting into (16a) and dividing by $\lambda$, we obtain the Herglotz
equations (15). ∎
### 2.3 Herglotz principle: nonholonomic version
Another way to obtain the Herglotz equation of motion is through a non-linear
non-holonomic principle, the so-called Chetaev principle [38]. Instead of
restricting the space of admissible curves
$\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})\subseteq\Omega(q_{0},q_{1},z_{0})$
and finding the critical points on this submanifold, we directly restrict the
space of admissible variations, so that the differential of the action has to
vanish only in a selection of variations. Hence, the solutions of this
principle are not necessarily critical points of the action functional
restricted to any space.
###### Definition 2.4.
A section
$\widetilde{c}=(c,c_{z})\in\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$
satisfies the _non-holonomic Herglotz variational principle_ if
$\mathrm{T}_{\widetilde{c}}\mathcal{A}(\delta\widetilde{c})=0\,$ for all vector fields
$\delta\widetilde{c}\in\mathrm{T}_{\widetilde{c}}\widetilde{\Omega}(q_{0},q_{1},z_{0})$ such that
$\mathrm{d}\phi_{L}(\mathcal{I}(\delta\widetilde{c}))=0$, where $\mathcal{I}$ denotes the
vertical endomorphism of $\mathrm{T}(\mathrm{T}(Q\times\mathbb{R}))$.
If $\delta\widetilde{c}=\delta q^{i}\dfrac{\partial}{\partial q^{i}}+\delta
z\dfrac{\partial}{\partial z}$, then
$\mathrm{d}\phi_{L}(\mathcal{I}(\delta\widetilde{c}))=\mathrm{d}\phi_{L}\left(\delta
q^{i}\frac{\partial}{\partial\dot{q}^{i}}+\delta
z\frac{\partial}{\partial\dot{z}}\right)=\delta z-\delta q^{i}\frac{\partial
L}{\partial\dot{q}^{i}}\,.$
Then, the nonholonomic dynamics are given by the following theorem [11, 38].
###### Theorem 2.5 (Herglotz’s variational principle, nonholonomic version).
Let $L:\mathrm{T}Q\times\mathbb{R}\to\mathbb{R}$ be a Lagrangian function and
let $\widetilde{c}=(c,c_{z})\in\widetilde{\Omega}_{L}(q_{0},q_{1},z_{0})$.
Then $\widetilde{c}$ satisfies the non-holonomic Herglotz variational principle
if, and only if, $(c,\dot{c},c_{z})$ satisfies Herglotz’s equations:
$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\left({\frac{\partial
L}{\partial\dot{q}^{i}}}\right)-\frac{\partial L}{\partial q^{i}}$
$\displaystyle=\frac{\partial L}{\partial\dot{q}^{i}}\frac{\partial
L}{\partial z}\,,$ (17) $\displaystyle\dot{z}$ $\displaystyle=L\,.$
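As a quick illustration of equations (17) (our example, not taken from the text), consider $Q=\mathbb{R}$ and the contact Lagrangian of the damped harmonic oscillator:

```latex
L(q,\dot{q},z)
  = \tfrac{1}{2}\dot{q}^{2} - \tfrac{1}{2}\omega^{2}q^{2} - \gamma z ,
\qquad \gamma,\omega\in\mathbb{R} .
% Equations (17): \partial L/\partial\dot{q} = \dot{q},
% \partial L/\partial q = -\omega^{2}q, \partial L/\partial z = -\gamma, hence
\ddot{q} + \gamma\,\dot{q} + \omega^{2}q = 0 ,
\qquad
\dot{z} = \tfrac{1}{2}\dot{q}^{2} - \tfrac{1}{2}\omega^{2}q^{2} - \gamma z .
```

The friction term $\gamma\dot{q}$ arises from the right-hand side of (17), without introducing any external force.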
## 3 The Herglotz principle for fields
In the literature, there exists a non-covariant formulation of the Herglotz
principle for field theories [35]. A more general approach is given in [44],
although only a class of Lagrangian functions is considered, which we call
Lagrangians with closed action dependence (see Definition 3.6).
The method presented in [44] uses an implicit argument, similar to the method
presented in Section 2.1 for mechanical systems. We propose two alternative
methods: the non-holonomic principle, which is compatible with the $k$-contact
[30, 32], $k$-cocontact [52] and multicontact [16] formulations; and the
vakonomic principle, which can be extended to higher-order Lagrangians.
Consider a Lagrangian function $L(x^{\mu},u^{a},u^{a}_{\mu},z^{\mu})$
depending on the coordinates $(x^{\mu})$ of an $m$-dimensional spacetime $M$,
the values of fields $u^{a}$, their derivatives $u^{a}_{\mu}$ at the point $x$
and the variables $z^{\mu}$ that, in this context, do not represent the action,
but the _action density_. In order to compute the action of a local field
$\sigma$ defined on $D\subseteq M$, we find a vector field $\zeta^{\mu}$ such
that
$D_{\mu}\zeta^{\mu}=L\,.$ (18)
Then, the action is
$\int L\mathrm{d}^{m}x=\int D_{\mu}\zeta^{\mu}\mathrm{d}^{m}x=\int_{\partial
D}\zeta^{\mu}\eta_{\mu}\mathrm{d}\sigma\,,$ (19)
where $\eta_{\mu}$ is the normal unit vector to the surface and
$\mathrm{d}\sigma$ is the surface differential. The last equality follows from
Stokes’ Theorem. Note that if $M$ is one-dimensional, the action is just
$\zeta(1)-\zeta(0)$, and thus we recover the Herglotz action for mechanical
systems.
The critical points of this action along the local fields $\sigma$ with the
same values on the boundary would be the solutions to the Herglotz field
equations
$D_{\mu}\left(\frac{\partial L}{\partial u^{a}_{\mu}}\right)-\frac{\partial
L}{\partial u^{a}}=\frac{\partial L}{\partial u^{a}_{\mu}}\frac{\partial L}{\partial
z^{\mu}}\,.$ (20)
These equations are obtained in [44] through an implicit argument, in a
similar spirit to the proof of Theorem 2.1. Note that the Lagrangian theory of
$k$-contact fields [32] provides the same equations.
However, we find two issues with this derivation of the variational principle.
First of all, the definition of $z^{\mu}$ in equation (18) depends on a metric
on $M$ in order to compute its divergence. This can be easily fixed by taking
$z^{\mu}$ to be the components of an $(m-1)$-form instead of a vector
field.
The second issue is more subtle. The solution of (18) is not unique, and hence
the action is not well-defined. This is not a problem if the Lagrangian does
not depend on the variables $z^{\mu}$, because in this case all the solutions
to (18) differ only by a closed term, whose integral vanishes by Stokes'
Theorem and does not contribute to the action; but this is not true in
general. Indeed, $L$ itself may depend on $\zeta$ through the variables
$z^{\mu}$, so different solutions of (18) can yield different values of the
action (19). In [44] the authors assume some conditions on the Lagrangian in
order to find a unique solution. Moreover, (18) might have no solutions and
hence we will need to add more constraints in order to ensure the existence of
solutions.
One way to fix this problem is to prescribe boundary conditions on (18) that
make the solution unique. However, we will avoid it by choosing a
“constrained formulation” of the problem, in the same spirit as Theorems 2.2
and 2.5, instead of the “implicit” approach used in [44].
### 3.1 Geometric structures
Let $M$ be an $m$-dimensional orientable manifold representing the spacetime
and consider a fiber bundle $E\to M$. Let $(x^{\mu},u^{a})$ be adapted
coordinates on $E$ and let
$\mathrm{d}^{m}x=\mathrm{d}x^{1}\wedge\dots\wedge\mathrm{d}x^{m}$ be a volume
form on $M$. Then, we will denote
$\mathrm{d}^{m-1}x_{\mu}=i_{\frac{\partial}{\partial
x^{\mu}}}\mathrm{d}^{m}x\in\Omega^{m-1}(M)$. The configuration space is the
bundle $\pi:E\times_{M}\Lambda^{m-1}M\rightarrow M$, because the action
densities are $(m-1)$-forms on $M$. The adapted coordinates of the first jet
bundle $J^{1}(E\times_{M}\Lambda^{m-1}M)$ are
$(x^{\mu},u^{a},u^{a}_{\mu},z^{\nu},z^{\nu}_{\mu})$, where $z^{\nu}$ are the
coordinates of $\Lambda^{m-1}(M)$ induced by the local basis
$\left\\{\mathrm{d}^{m-1}x_{\nu}\right\\}_{\nu=1,\dots,m}$. We consider the
first jet of the action densities because it is necessary to intrinsically
define the constraint (18).
Given a coordinate system, the total derivative
$D_{\mu}:\mathscr{C}^{\infty}(J^{1}(E\times_{M}\Lambda^{m-1}M))\rightarrow\mathscr{C}^{\infty}(J^{2}(E\times_{M}\Lambda^{m-1}M))$,
for $\mu=1,\dots,m$, is a derivation given by
$D_{\mu}f=\frac{\partial f}{\partial x^{\mu}}+u^{a}_{\mu}\frac{\partial
f}{\partial u^{a}}+z^{\nu}_{\mu}\frac{\partial f}{\partial
z^{\nu}}+u^{a}_{\tau\mu}\frac{\partial f}{\partial
u^{a}_{\tau}}+z^{\nu}_{\tau\mu}\frac{\partial f}{\partial z^{\nu}_{\tau}}\,,$
where $f\in\mathscr{C}^{\infty}(J^{1}(E\times_{M}\Lambda^{m-1}M))$.
For any section $\rho:M\rightarrow E\times_{M}\Lambda^{m-1}M$, the total
derivative satisfies the property
$(j^{2}\rho)^{*}(D_{\mu}f)=\frac{\partial(j^{1}\rho)^{*}f}{\partial
x^{\mu}}\,.$
Given a vector field $\xi\in\mathfrak{X}(E\times_{M}\Lambda^{m-1}M)$ with
local flow $\gamma_{r}:E\times_{M}\Lambda^{m-1}M\rightarrow
E\times_{M}\Lambda^{m-1}M$, its complete lift to
$J^{1}(E\times_{M}\Lambda^{m-1}M)$ is the vector field
$\xi^{1}\in\mathfrak{X}(J^{1}(E\times_{M}\Lambda^{m-1}M))$ whose local flow is
$j^{1}\gamma_{r}$. If $\xi\in\mathfrak{X}(E\times_{M}\Lambda^{m-1}M)$ is a
vertical vector field with respect to the projection $\pi$ with local
expression
$\xi=\xi^{a}\frac{\partial}{\partial u^{a}}+\xi^{\nu}\frac{\partial}{\partial
z^{\nu}}\,,$
its complete lift is
$\xi^{1}=\xi^{a}\frac{\partial}{\partial
u^{a}}+\left(\frac{\partial\xi^{a}}{\partial
x^{\mu}}+u^{b}_{\mu}\frac{\partial\xi^{a}}{\partial
u^{b}}+z^{\tau}_{\mu}\frac{\partial\xi^{a}}{\partial
z^{\tau}}\right)\frac{\partial}{\partial
u^{a}_{\mu}}+\xi^{\nu}\frac{\partial}{\partial
z^{\nu}}+\left(\frac{\partial\xi^{\nu}}{\partial
x^{\mu}}+u^{b}_{\mu}\frac{\partial\xi^{\nu}}{\partial
u^{b}}+z^{\tau}_{\mu}\frac{\partial\xi^{\nu}}{\partial
z^{\tau}}\right)\frac{\partial}{\partial z^{\nu}_{\mu}}\,.$
The Lagrangian density
$\mathcal{L}:J^{1}(E\times_{M}\Lambda^{m-1}M)\to\Lambda^{m}M$ is a fiber
bundle morphism over $M$. In local coordinates,
$\mathcal{L}(x^{\mu},u^{a},u^{a}_{\mu},z^{\mu})=L(x^{\mu},u^{a},u^{a}_{\mu},z^{\mu})\mathrm{d}^{m}x$.
In order to define intrinsically the constraint (18), we define the _canonical
differential action form_ as
$\displaystyle\overline{DS}:J^{1}\Lambda^{m-1}M$
$\displaystyle\rightarrow\Lambda^{m}(M)$ $\displaystyle j^{1}\alpha$
$\displaystyle\longmapsto\mathrm{d}\alpha\,.$
The name is inspired by the canonical action form introduced in [16]. In local
coordinates, it reads
$\overline{DS}(z^{\nu},z^{\nu}_{\mu})=z^{\mu}_{\mu}\mathrm{d}^{m}x\,.$
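As a coordinate check (our addition), take $m=2$ with coordinates $(t,x)$, volume form $\mathrm{d}^{2}x=\mathrm{d}t\wedge\mathrm{d}x$, and a $1$-form $\alpha=z^{t}\,\mathrm{d}^{1}x_{t}+z^{x}\,\mathrm{d}^{1}x_{x}$:

```latex
\mathrm{d}^{1}x_{t}
  = i_{\partial/\partial t}(\mathrm{d}t\wedge\mathrm{d}x) = \mathrm{d}x ,
\qquad
\mathrm{d}^{1}x_{x}
  = i_{\partial/\partial x}(\mathrm{d}t\wedge\mathrm{d}x) = -\mathrm{d}t ,
% hence \alpha = z^{t}\mathrm{d}x - z^{x}\mathrm{d}t and
\mathrm{d}\alpha
  = \mathrm{d}z^{t}\wedge\mathrm{d}x - \mathrm{d}z^{x}\wedge\mathrm{d}t
  = \bigl(z^{t}_{t} + z^{x}_{x}\bigr)\,\mathrm{d}t\wedge\mathrm{d}x ,
```

in agreement with the local expression $\overline{DS}(z^{\nu},z^{\nu}_{\mu})=z^{\mu}_{\mu}\,\mathrm{d}^{m}x$.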
Then, the constraint (18) can be written as
$\Phi=\tau^{*}\overline{DS}-\mathcal{L}=0\,,$ (21)
where $\tau:J^{1}(E\times_{M}\Lambda^{m-1}M)\rightarrow J^{1}\Lambda^{m-1}M$
is the natural projection. In local coordinates, $\Phi=\phi\mathrm{d}^{m}x$,
with $\phi=z^{\mu}_{\mu}-L$. The situation is described by a commutative
diagram in which $\pi^{1}$, $\tau$, $\bar{\pi}$ and $\pi$ denote the natural
projections relating $J^{1}(E\times_{M}\Lambda^{m-1}M)$, $J^{1}E$,
$E\times_{M}\Lambda^{m-1}M$, $J^{1}\Lambda^{m-1}M$, $E$, $\Lambda^{m-1}M$ and
$M$; the maps $\mathcal{L}$ and $\overline{DS}$ take values in $\Lambda^{m}M$;
and $\rho$, $j^{1}\rho$, $\sigma$ and $\zeta$ are sections over $M$.
Given a submanifold $D\subset M$, the set of sections that satisfy the
constraint $\Phi$ is denoted by
$\Omega=\\{\rho\in\Gamma_{D}(E\times_{M}\Lambda^{m-1}M)\,\text{ such that
}\,(j^{1}\rho)^{*}\Phi=0\\}\,.$
Then, the action associated to $\mathcal{L}$ is:
$\displaystyle\mathcal{A}:\Omega$ $\displaystyle\longrightarrow\mathbb{R}$
$\displaystyle\rho$
$\displaystyle\longmapsto\int_{D}(j^{1}\rho)^{*}\mathcal{L}\,.$
In general, the variations of this action are the elements tangent to $\rho$
which vanish at $\partial D$, which can be seen as the $\pi$-vertical vector
fields along $\rho$. Thus, we define:
$\displaystyle\mathrm{T}_{\rho}\Gamma_{D}=\\{\xi:D\rightarrow\mathrm{T}(E\times_{M}\Lambda^{m-1}M)\mid\xi(x)\in\mathrm{T}_{\rho(x)}(E\times_{M}\Lambda^{m-1}M)\,,\
\mathrm{T}\pi(\xi)=0\,,\ \xi|_{\partial D}=0\\}\,.$
We want to find the sections which are “critical” for the action $\mathcal{A}$
under the constraint $\Phi$. As we have commented before, this problem is not
well formulated. The constraint $\Phi$ involves velocities, and there are
several non-equivalent ways to select which variations have to be taken [38].
Inspired by the case of contact mechanics [24], we will describe two different
non-equivalent approaches: the non-holonomic and the vakonomic variational
principles.
### 3.2 Herglotz principle for fields: non-holonomic version
The approach presented in this section is inspired by [5, 59, 38]. Let
$D\subseteq M$ be an oriented manifold with compact closure and boundary
$\partial D$. The vertical lift [48] is a morphism of vector bundles
$\mathcal{S}:\mathrm{T}^{*}M\otimes_{J^{1}(E\times_{M}\Lambda^{m-1}M)}V(\pi)\rightarrow
V(\pi^{1})$ over the identity of $J^{1}(E\times_{M}\Lambda^{m-1}M)$ such that,
for any $j^{1}_{x}\phi\in J^{1}(E\times_{M}\Lambda^{m-1}M)$,
$\beta\in\mathrm{T}^{*}M\otimes_{J^{1}(E\times_{M}\Lambda^{m-1}M)}V(\pi)$ and
$f\in\mathscr{C}^{\infty}(J^{1}_{\phi(x)}(E\times_{M}\Lambda^{m-1}M))$, we
have:
$\mathcal{S}_{j^{1}_{x}\phi}(\beta)(f)=\left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}f(j^{1}_{x}\phi+t\beta)\,.$
We have that $\mathrm{T}^{*}M\otimes_{J^{1}(E\times_{M}\Lambda^{m-1}M)}V(\pi)$
is the vector bundle associated to the affine bundle
$\pi^{1}:J^{1}(E\times_{M}\Lambda^{m-1}M)\rightarrow
E\times_{M}\Lambda^{m-1}M$ and, hence, using the same coordinates
$(x^{\mu},u^{a},z^{\nu},u^{a}_{\mu},z^{\nu}_{\mu})$, the local expression of
the vertical lift is
$\mathcal{S}=\mathrm{d}u^{a}\otimes\frac{\partial}{\partial
x^{\mu}}\otimes\frac{\partial}{\partial
u^{a}_{\mu}}+\mathrm{d}z^{\nu}\otimes\frac{\partial}{\partial
x^{\mu}}\otimes\frac{\partial}{\partial z^{\nu}_{\mu}}\,.$
The non-holonomic version implements the dependence of the constraint on the
velocities as a force. This can be formalized in different ways. For
instance, in [5] the authors use the vertical endomorphism. In our problem the
constraint is given by the $m$-form $\Phi$ instead of a function, and we find
that the vertical lift gives a more direct derivation of the equations. The
vertical lift is a $(2,1)$-tensor, and we are interested in the contraction of
both contravariant entries with the form $\mathrm{d}\Phi$:
$\varphi=i_{\mathcal{S}}\mathrm{d}\Phi=\left(\frac{\partial\phi}{\partial
u^{a}_{\mu}}\mathrm{d}u^{a}+\frac{\partial\phi}{\partial
z^{\nu}_{\mu}}\mathrm{d}z^{\nu}\right)\otimes\mathrm{d}^{m-1}x_{\mu}\,.$
###### Definition 3.1.
A section $\rho\in\Omega$ satisfies the _non-holonomic Herglotz variational
principle_ if
$\mathrm{T}_{\rho}\mathcal{A}(\xi)=\int_{D}(j^{1}\rho)^{*}(\mathscr{L}_{\xi^{1}}\mathcal{L})=0\,,$
(22)
for all vector fields $\xi\in\mathrm{T}_{\rho}\Gamma_{D}$ such that
$\varphi(\xi^{1})=0\,,$ (23)
where $\mathscr{L}$ denotes the Lie derivative and $\varphi(\xi^{1})$ is the
contraction of $\xi^{1}$ with the first entry of $\varphi$. In local
coordinates, it reads
$\varphi(\xi^{1})=\left(\frac{\partial\phi}{\partial
u^{a}_{\mu}}\xi^{a}+\frac{\partial\phi}{\partial
z^{\nu}_{\mu}}\xi^{\nu}\right)\otimes\mathrm{d}^{m-1}x_{\mu}\,.$
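For the constraint $\phi=z^{\mu}_{\mu}-L$, the admissibility condition (23) takes an explicit form (a computation we add): using $\partial\phi/\partial u^{a}_{\mu}=-\partial L/\partial u^{a}_{\mu}$ and $\partial\phi/\partial z^{\nu}_{\mu}=\delta^{\mu}_{\nu}$,

```latex
\varphi(\xi^{1})
  = \left( \xi^{\mu}
      - \frac{\partial L}{\partial u^{a}_{\mu}}\,\xi^{a} \right)
    \otimes \mathrm{d}^{m-1}x_{\mu} ,
% so the admissible variations satisfy
\xi^{\mu} = \frac{\partial L}{\partial u^{a}_{\mu}}\,\xi^{a} ,
\qquad \mu = 1,\dots,m ,
```

which generalizes the mechanical condition $\delta z=\delta q^{i}\,\partial L/\partial\dot{q}^{i}$ appearing after Definition 2.4.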
###### Theorem 3.2.
Let $\mathcal{L}:J^{1}(E\times_{M}\Lambda^{m-1}M)\to\Lambda^{m}M$ be a
Lagrangian density and let $\rho\in\Omega$. Then, $j^{1}\rho$ satisfies the
Herglotz field equations
$\displaystyle D_{\mu}\left(\frac{\partial L}{\partial
u^{a}_{\mu}}\right)-\frac{\partial L}{\partial u^{a}}$
$\displaystyle=\frac{\partial L}{\partial u^{a}_{\mu}}\frac{\partial
L}{\partial z^{\mu}}$ (24)
if, and only if, $\rho$ satisfies the non-holonomic Herglotz variational
principle (Definition 3.1).
###### Proof.
$\displaystyle\int_{D}(j^{1}\rho)^{*}(\mathscr{L}_{\xi^{1}}\mathcal{L})$
$\displaystyle=\int_{D}(j^{1}\rho)^{*}\left[\xi^{a}\frac{\partial L}{\partial
u^{a}}+\left(\frac{\partial\xi^{a}}{\partial
x^{\mu}}+u^{b}_{\mu}\frac{\partial\xi^{a}}{\partial
u^{b}}+z^{\tau}_{\mu}\frac{\partial\xi^{a}}{\partial
z^{\tau}}\right)\frac{\partial L}{\partial u^{a}_{\mu}}\right.$
$\displaystyle\quad\left.+\xi^{\nu}\frac{\partial L}{\partial
z^{\nu}}+\left(\frac{\partial\xi^{\nu}}{\partial
x^{\mu}}+u^{b}_{\mu}\frac{\partial\xi^{\nu}}{\partial
u^{b}}+z^{\tau}_{\mu}\frac{\partial\xi^{\nu}}{\partial
z^{\tau}}\right)\frac{\partial L}{\partial
z^{\nu}_{\mu}}\right]\mathrm{d}^{m}x$
$\displaystyle=\int_{D}(j^{1}\rho)^{*}\xi^{a}\frac{\partial L}{\partial
u^{a}}+\frac{\partial\xi^{a}\circ\rho}{\partial
x^{\mu}}(j^{1}\rho)^{*}\frac{\partial L}{\partial
u^{a}_{\mu}}+(j^{1}\rho)^{*}\xi^{\nu}\frac{\partial L}{\partial
z^{\nu}}+\frac{\partial\xi^{\nu}\circ\rho}{\partial
x^{\mu}}(j^{1}\rho)^{*}\frac{\partial L}{\partial
z^{\nu}_{\mu}}\mathrm{d}^{m}x$
$\displaystyle=\int_{D}(j^{1}\rho)^{*}\left[\xi^{a}\left(\frac{\partial
L}{\partial u^{a}}-D_{\mu}\frac{\partial L}{\partial
u^{a}_{\mu}}\right)+\xi^{\nu}\frac{\partial L}{\partial
z^{\nu}}\right]\mathrm{d}^{m}x+\int_{\partial
D}(j^{1}\rho)^{*}\xi^{a}\frac{\partial L}{\partial
u^{a}_{\mu}}\mathrm{d}^{m-1}x_{\mu}$
$\displaystyle=\int_{D}(j^{1}\rho)^{*}\left[\xi^{a}\left(\frac{\partial
L}{\partial u^{a}}-D_{\mu}\frac{\partial L}{\partial
u^{a}_{\mu}}\right)+\xi^{\nu}\frac{\partial L}{\partial
z^{\nu}}\right]\mathrm{d}^{m}x\,.$
If it vanishes for all $\xi$ satisfying equation (23), there exist functions
$\lambda_{\alpha}\in\mathscr{C}^{\infty}(J^{1}(E\times_{M}\Lambda^{m-1}M))$
such that
$\displaystyle\frac{\partial L}{\partial u^{a}}-D_{\mu}\left(\frac{\partial
L}{\partial u^{a}_{\mu}}\right)$
$\displaystyle=\lambda_{\alpha}\frac{\partial\phi}{\partial
u^{a}_{\alpha}}\,,$ $\displaystyle\frac{\partial L}{\partial z^{\nu}}$
$\displaystyle=\lambda_{\alpha}\frac{\partial\phi}{\partial
z^{\nu}_{\alpha}}\,.$
Combining both equations and using the expression $\phi=z^{\mu}_{\mu}-L$,
for which $\dfrac{\partial\phi}{\partial
u^{a}_{\alpha}}=-\dfrac{\partial L}{\partial u^{a}_{\alpha}}$ and
$\dfrac{\partial\phi}{\partial z^{\nu}_{\alpha}}=\delta^{\alpha}_{\nu}$, we
see that $\lambda_{\mu}=\dfrac{\partial L}{\partial z^{\mu}}$ and
$\displaystyle D_{\mu}\left(\frac{\partial L}{\partial
u^{a}_{\mu}}\right)-\frac{\partial L}{\partial u^{a}}$
$\displaystyle=\frac{\partial L}{\partial u_{\mu}^{a}}\frac{\partial
L}{\partial z^{\mu}}\,.$ (25)
∎
The Herglotz field equations (24) are also called $k$-contact Euler–Lagrange
equations [32].
### 3.3 Herglotz principle for fields: vakonomic version
The approach presented in this section is inspired by the vakonomic version of
Herglotz principle [24], presented in Section 2.2.
In the vakonomic approach we only consider variations that transform sections
that satisfy the constraints into sections that also satisfy the constraints.
In other words, the lift of the variations to the first jet must be tangent to
the submanifold defined by the constraints. Thus, we have the following
variational principle. Let $D\subseteq M$ be an oriented manifold homeomorphic
to a ball and with boundary $\partial D$.
###### Definition 3.3.
A section $\rho\in\Omega$ satisfies the _vakonomic Herglotz variational
principle_ if
$\mathrm{T}_{\rho}\mathcal{A}(\xi)=\int_{D}(j^{1}\rho)^{*}(\mathscr{L}_{\xi^{1}}\mathcal{L})=0$
(26)
for every vector field $\xi\in\mathrm{T}_{\rho}\Gamma_{D}$ such that
$\mathscr{L}_{\xi^{1}}\Phi=0$.
This kind of constrained field theory has been studied, for instance, in
[9]. By Theorem 2.3, we can rewrite this as a problem without constraints
using Lagrange multipliers. We need to consider the Lagrangian
$\mathcal{L}_{\lambda}=\mathcal{L}+\lambda\Phi=\left(L+\lambda(z^{\mu}_{\mu}-L)\right)\mathrm{d}^{m}x=L_{\lambda}\mathrm{d}^{m}x\,,$
where $\lambda\in\mathscr{C}^{\infty}(M)$ is a function to be determined
called the Lagrange multiplier. Then, the action associated to
$\mathcal{L}_{\lambda}$ is
$\displaystyle\mathcal{A}_{\lambda}:\Omega$
$\displaystyle\longrightarrow\mathbb{R}$ $\displaystyle\rho$
$\displaystyle\longmapsto\int_{D}(j^{1}\rho)^{*}\mathcal{L}_{\lambda}\,.$
###### Corollary 3.4.
A section $\rho\in\Omega$ satisfies the vakonomic Herglotz variational
principle if, and only if,
$\mathrm{T}_{\rho}\mathcal{A}_{\lambda}(\xi)=\int_{D}(j^{1}\rho)^{*}(\mathscr{L}_{\xi^{1}}\mathcal{L}_{\lambda})=0$
(27)
for every vector field $\xi\in\mathrm{T}_{\rho}\Gamma_{D}$.
The corresponding equations are given by the following theorem.
###### Theorem 3.5.
Let $\mathcal{L}:J^{1}(E\times_{M}\Lambda^{m-1}M)\to\Lambda^{m}M$ be a
Lagrangian density and let $\rho\in\Omega$. Then, $j^{1}\rho$ satisfies the
Herglotz field equations:
$\displaystyle D_{\mu}\left(\frac{\partial L}{\partial
u^{a}_{\mu}}\right)-\frac{\partial L}{\partial u^{a}}$
$\displaystyle=\frac{\partial L}{\partial u_{\mu}^{a}}\frac{\partial
L}{\partial z^{\mu}}\,,$ (28)
and the condition
$\displaystyle D_{\nu}\frac{\partial L}{\partial z^{\mu}}$
$\displaystyle=D_{\mu}\frac{\partial L}{\partial z^{\nu}}\,,$ (29)
if, and only if, $\rho$ satisfies the vakonomic Herglotz variational
principle.
###### Proof.
The problem is the usual non-constrained Hamilton variational problem for the
Lagrangian $L_{\lambda}$. Considering variations with respect to $\delta
u^{a}$ and $\delta z^{\nu}$ we obtain the set of equations
$\displaystyle\frac{\partial L_{\lambda}}{\partial
u^{a}}-D_{\mu}\left(\frac{\partial L_{\lambda}}{\partial u^{a}_{\mu}}\right)$
$\displaystyle=0\,,$ (30) $\displaystyle\frac{\partial L_{\lambda}}{\partial
z^{\nu}}-D_{\mu}\left(\frac{\partial L_{\lambda}}{\partial
z^{\nu}_{\mu}}\right)$ $\displaystyle=0\,.$ (31)
These equations are just the Euler–Lagrange equations when considering $u^{a}$
and $z^{\nu}$ as dynamical variables. Expanding equation (31), we have
$(1-\lambda)\frac{\partial L}{\partial
z^{\nu}}-\frac{\partial\lambda}{\partial x^{\nu}}=0\,,$
and combining it with equation (30), we find that
$0=(1-\lambda)\frac{\partial L}{\partial
u^{a}}-D_{\mu}\left((1-\lambda)\frac{\partial L}{\partial
u^{a}_{\mu}}\right)=(1-\lambda)\frac{\partial L}{\partial
u^{a}}-(1-\lambda)D_{\mu}\left(\frac{\partial L}{\partial
u^{a}_{\mu}}\right)+(1-\lambda)\frac{\partial L}{\partial
z^{\mu}}\frac{\partial L}{\partial u^{a}_{\mu}}\,.$
If $\lambda\neq 1$, we can divide by $1-\lambda$ and obtain equation (28).
However, in this case there are hidden conditions in equation (31). Taking
$g=\log(|1-\lambda|)$, equation (31) implies
$\mathrm{d}g=-\frac{\partial L}{\partial z^{\nu}}\mathrm{d}x^{\nu}\,.$
This has a solution if and only if the right-hand side is closed, namely if
$D_{\nu}\frac{\partial L}{\partial z^{\mu}}=D_{\mu}\frac{\partial L}{\partial
z^{\nu}}\,.$ (32)
If this condition is fulfilled, since $D$ is homeomorphic to a ball,
$\frac{\partial L}{\partial z^{\nu}}\mathrm{d}x^{\nu}=\mathrm{d}h\,,$ (33)
and so we can take $g=-h$. ∎
### 3.4 Relations between both approaches
The main difference between the non-holonomic and the vakonomic approaches is
the unexpected condition (29). It motivates the following definition.
###### Definition 3.6.
A Lagrangian has _closed action dependence_ if
$D_{\mu}\frac{\partial L}{\partial z^{\nu}}=D_{\nu}\frac{\partial L}{\partial
z^{\mu}}$ (34)
for any pair $1\leq\mu,\nu\leq m$.
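A simple family of examples (ours, not from the text): Lagrangians that are affine in the action variables,

```latex
L = L_{0}(x^{\mu},u^{a},u^{a}_{\mu}) - \gamma_{\nu}(x)\,z^{\nu}
\quad\Longrightarrow\quad
\frac{\partial L}{\partial z^{\nu}} = -\gamma_{\nu}(x) ,
% so condition (34) reads
\frac{\partial\gamma_{\nu}}{\partial x^{\mu}}
  = \frac{\partial\gamma_{\mu}}{\partial x^{\nu}} ,
```

that is, $L$ has closed action dependence precisely when the $1$-form $\gamma_{\nu}\,\mathrm{d}x^{\nu}$ is closed; constant coefficients, as in the damped string of Section 5.1, always qualify.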
This condition has two interesting interpretations: a variational one and a
geometric one. The Lagrangian has closed action dependence if, and only if,
the action of $\rho=(\sigma,\zeta)\in\Omega$ only depends on $\sigma$.
Equation (32) is obtained by taking variations of the constrained action in
the ${\zeta}$ direction. Indeed, a Lagrangian has closed action dependence if,
and only if, for any section $\sigma:M\to E$, does not exist a family of
sections $\zeta_{s}:M\to\Lambda^{m-1}(M)$, $s\in\mathbb{R}$, such that
$\left.\dfrac{\mathrm{d}\zeta_{s}}{\mathrm{d}s}\right|_{\partial D}=0$
and satisfying the conditions
$\mathrm{d}\zeta_{s}=\mathcal{L}(j^{1}\sigma,\zeta_{s})\quad\text{and}\quad\dfrac{\partial\mathcal{A(\sigma,\zeta_{s})}}{\partial
s}\bigg{|}_{s=0}\neq 0\,.$
The reason is that, if $\frac{\partial L}{\partial z^{\nu}}$ induces a
closed form, by Stokes’ theorem the action only depends on the boundary, where
the variation vanishes. This can be seen explicitly in the example presented
in Section 5.2.
The geometric interpretation can be obtained as follows. Let
$L:J^{1}(E\times_{M}\Lambda^{m-1}M)\to\mathbb{R}$ be a Lagrangian function.
Define the $M$-semibasic one-form
$\theta_{L}\in\Omega^{1}(J^{1}(E\times_{M}\Lambda^{m-1}M))$ as
$\theta_{L}=\frac{\partial L}{\partial z^{\mu}}\mathrm{d}x^{\mu},$ (35)
which is independent of the coordinates used to define it. The closed action
dependence condition is equivalent to
$\mathrm{d}\theta_{L}=0.$ (36)
The form $\theta_{L}$ is (minus) the dissipation form introduced in [16].
For Lagrangians with closed action dependence, both versions of the
variational principle given in Definitions 3.1 and 3.3 are equivalent.
Moreover, they coincide with the version proposed in [44] and the equations
are the same as the ones derived from the $k$-contact [32] and multicontact
[16] formalisms.
When the Lagrangian does not have closed action dependence, both principles may be
different. In Section 5.2, we provide an example where there are sections
which are solutions of one variational principle but not the other. In this
case, only the non-holonomic approach provides, in general, the same equations
as the $k$-contact and multicontact formalisms.
## 4 Higher-order Lagrangian densities
Most relevant field theories are modelled by first-order Lagrangians, with
one notable exception: General Relativity, which is usually described by a
second-order Lagrangian. Contact gravity is especially interesting as an
example of modified gravity which may explain certain observations about the
expansion of the universe [47]. The Herglotz field equations for the
Hilbert–Einstein Lagrangian with a linear term in the action have been derived
in [47]
and [33] with slightly different variational methods. The method used in [33]
is, essentially, the vakonomic method presented in Section 3.3, showing how it
can be expanded to higher-order Lagrangians. Hence, in this section we apply
the vakonomic principle to higher-order Lagrangian densities.
Consider the $r$-th jet bundle $J^{r}(E\times_{M}\Lambda^{m-1}M)$ of a fiber
bundle $E\to M$. Local coordinates of $J^{r}(E\times_{M}\Lambda^{m-1}M)$ will
be denoted as $(x^{\mu},u^{a}_{I},z^{\nu}_{I})$, where $I=(I_{1},\ldots,I_{m})$ is a
multi-index such that $0\leq|I|=I_{1}+\ldots+I_{m}\leq r$. Given a local
section $\sigma:M\to E$, we denote by $j^{r}\sigma:M\to J^{r}E$ its $r$-th
prolongation.
Given a coordinate system, the total derivative
$D_{\mu}:\mathscr{C}^{\infty}(J^{k}(E\times_{M}\Lambda^{m-1}M))\rightarrow\mathscr{C}^{\infty}(J^{k+1}(E\times_{M}\Lambda^{m-1}M))$,
for $\mu=1,\dots,m$, is a derivation given by
$\displaystyle D_{\mu}f=\frac{\partial f}{\partial
x^{\mu}}+\sum_{|J|=0}^{k}\left(u^{a}_{J+1_{\mu}}\frac{\partial f}{\partial
u^{a}_{J}}+z^{\nu}_{J+1_{\mu}}\frac{\partial f}{\partial
z^{\nu}_{J}}\right)\,,$
where $f\in\mathscr{C}^{\infty}(J^{k}(E\times_{M}\Lambda^{m-1}M))$.
The Lagrangian density
$\mathcal{L}:J^{r}(E\times_{M}\Lambda^{m-1}M)\to\Lambda^{m}M$ is a fiber
bundle morphism over $M$. Locally, $\mathcal{L}=L\mathrm{d}^{m}x$. The
Herglotz operator [17] can be extended to fields.
###### Definition 4.1.
Given a Lagrangian $\mathcal{L}$ and an index $1\leq\mu\leq m$, the _Herglotz
operator_ for fields is the linear operator
$\displaystyle
D^{\mathcal{L}}_{\mu}:\mathscr{C}^{\infty}(J^{r}(E\times_{M}\Lambda^{m-1}M))$
$\displaystyle\longrightarrow\mathscr{C}^{\infty}(J^{r+1}(E\times_{M}\Lambda^{m-1}M))$
$\displaystyle F$ $\displaystyle\longmapsto
D^{\mathcal{L}}_{\mu}(F)=D_{\mu}F-F\frac{\partial L}{\partial z^{\mu}}\,.$
In general, these operators are not derivations and, since
$\left(D^{\mathcal{L}}_{\mu}D^{\mathcal{L}}_{\nu}-D^{\mathcal{L}}_{\nu}D^{\mathcal{L}}_{\mu}\right)F=\left(D_{\nu}\frac{\partial
L}{\partial z^{\mu}}-D_{\mu}\frac{\partial L}{\partial z^{\nu}}\right)F\,,$
they do not commute in general.
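The commutation identity can be checked by direct expansion (a sketch we add); writing $L_{\mu}=\partial L/\partial z^{\mu}$ and using that total derivatives commute,

```latex
D^{\mathcal{L}}_{\mu} D^{\mathcal{L}}_{\nu} F
  = D_{\mu} D_{\nu} F
    - (D_{\mu}F)\,L_{\nu}
    - (D_{\nu}F)\,L_{\mu}
    - F\,D_{\mu}L_{\nu}
    + F\,L_{\mu}L_{\nu} ,
% every term except -F D_{\mu}L_{\nu} is symmetric in (\mu,\nu), so
\bigl(D^{\mathcal{L}}_{\mu}D^{\mathcal{L}}_{\nu}
  - D^{\mathcal{L}}_{\nu}D^{\mathcal{L}}_{\mu}\bigr)F
  = F\bigl(D_{\nu}L_{\mu} - D_{\mu}L_{\nu}\bigr) .
```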
###### Lemma 4.2.
The Herglotz operators commute if, and only if, the Lagrangian has closed
action dependence.
For Lagrangians with closed action dependence, we can denote the successive
applications of the Herglotz operator with multi-index notation as
$D^{\mathcal{L}}_{I}=\prod_{\mu=1}^{m}\left(D^{\mathcal{L}}_{\mu}\right)^{I_{\mu}}\,.$
The constraint is implemented as in the first-order case, that is
$\Phi=(\tau^{r}_{1})^{*}\overline{DS}-\mathcal{L}=0\,,$ (37)
where $(\tau^{r}_{1}):J^{r}(E\times_{M}\Lambda^{m-1}M)\rightarrow
J^{1}\Lambda^{m-1}M$ is the projection. In local coordinates,
$\Phi=\phi\mathrm{d}^{m}x$, with $\phi=z^{\mu}_{\mu}-L$. Let $D\subseteq M$ be
an oriented manifold homeomorphic to a ball and with boundary $\partial D$.
The set of sections on $D$ which satisfy the constraint is denoted by
$\Omega=\\{\rho\in\Gamma_{D}(E\times_{M}\Lambda^{m-1}M)\,\text{ such that
}\,(j^{r}\rho)^{*}\Phi=0\\}.$
In the following definition we introduce the higher-order version of the
vakonomic variational principle presented in Definition 3.3.
###### Definition 4.3.
A section $\rho\in\Omega$ satisfies the _higher-order vakonomic Herglotz
variational principle_ if
$\mathrm{T}_{\rho}\mathcal{A}(\xi)=\int_{D}(j^{r}\rho)^{*}(\mathscr{L}_{\xi^{r}}\mathcal{L})=0\,,$
(38)
for every vector field $\xi\in\mathrm{T}_{\rho}\Gamma_{D}$ such that
$\mathscr{L}_{\xi^{r}}\Phi=0$.
As before, we have an equivalent version of this variational principle based
on Lagrange multipliers [9]. Consider the modified Lagrangian
$\mathcal{L}_{\lambda}=\mathcal{L}+\lambda\Phi=\left(L+\lambda(z^{\mu}_{\mu}-L)\right)\mathrm{d}^{m}x=L_{\lambda}\mathrm{d}^{m}x\,.$
Then, the action associated to $\mathcal{L}_{\lambda}$ is
$\displaystyle\mathcal{A}_{\lambda}:\Omega$
$\displaystyle\longrightarrow\mathbb{R}$ $\displaystyle\rho$
$\displaystyle\longmapsto\int_{D}(j^{r}\rho)^{*}\mathcal{L}_{\lambda}\,.$
###### Corollary 4.4.
A section $\rho\in\Omega$ satisfies the higher-order vakonomic Herglotz
variational principle if, and only if,
$\int_{D}(j^{r}\rho)^{*}(\mathscr{L}_{\xi^{r}}\mathcal{L}_{\lambda})=0\,,$
(39)
for every vector field $\xi\in\mathrm{T}_{\rho}\Gamma_{D}$.
###### Theorem 4.5.
Let $\mathcal{L}:J^{r}(E\times_{M}\Lambda^{m-1}M)\to\Lambda^{m}M$ be a
Lagrangian density and let $\rho\in\Omega$. Then, $j^{r}\rho$ satisfies the
higher-order Herglotz field equations
$\displaystyle\sum_{I}{(-1)}^{|I|}D_{I}^{\mathcal{L}}\left(\frac{\partial
L}{\partial u^{a}_{I}}\right)$ $\displaystyle=0$
and the condition
$\displaystyle D_{\nu}\frac{\partial L}{\partial
z^{\mu}}=D_{\mu}\frac{\partial L}{\partial z^{\nu}}\,,$
if, and only if, $\rho$ satisfies the higher-order vakonomic Herglotz
variational principle.
###### Proof.
We proceed in a similar way to the first-order case. The Euler–Lagrange
equations of $\mathcal{L}_{\lambda}$ are given by
$\displaystyle\sum_{I}{(-1)}^{|I|}D_{I}\left(\frac{\partial
L_{\lambda}}{\partial u^{a}_{I}}\right)$ $\displaystyle=0\,,$ (40)
$\displaystyle\frac{\partial L_{\lambda}}{\partial
z^{\nu}}-D_{\mu}\left(\frac{\partial L_{\lambda}}{\partial
z^{\nu}_{\mu}}\right)$ $\displaystyle=0\,.$ (41)
Since $L_{\lambda}$ only depends on $\zeta$ and its first derivatives,
higher-order terms in equation (41) vanish. Taking into account the definition
of $L_{\lambda}$, we have
$(1-\lambda)\frac{\partial L}{\partial
z^{\nu}}-\frac{\partial\lambda}{\partial x^{\nu}}=0\,.$ (42)
Repeating the argument used in the first-order case, this has solution
$\lambda(x^{\mu})$ if and only if
$D_{\nu}\frac{\partial L}{\partial z^{\mu}}=D_{\mu}\frac{\partial L}{\partial
z^{\nu}}\,.$
That is, solutions only exist when $\mathcal{L}$ has closed action
dependence. Hence, by equation (42), we see that, for any function $F$,
$D_{\mu}\left((1-\lambda)F\right)=(1-\lambda)D_{\mu}F-F\frac{\partial\lambda}{\partial
x^{\mu}}=(1-\lambda)D^{\mathcal{L}}_{\mu}F\,.$
Iterating this identity, substituting into (40) and dividing by $1-\lambda$,
we obtain the higher-order Herglotz field equations. ∎
These equations are compatible with the ones derived in [33] for the
Hilbert–Einstein Lagrangian.
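For instance (our unpacking, up to the standard multi-index combinatorial conventions), for a second-order Lagrangian ($r=2$) with closed action dependence, the higher-order Herglotz field equations read:

```latex
\frac{\partial L}{\partial u^{a}}
  - D^{\mathcal{L}}_{\mu}\!\left(\frac{\partial L}{\partial u^{a}_{\mu}}\right)
  + D^{\mathcal{L}}_{\mu}D^{\mathcal{L}}_{\nu}\!
    \left(\frac{\partial L}{\partial u^{a}_{\mu\nu}}\right)
  = 0 ,
```

which, when $\partial L/\partial z^{\mu}=0$, reduce to the classical second-order Euler–Lagrange field equations.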
## 5 Examples
### 5.1 Vibrating string with damping
In this example we derive the equation of a vibrating string with damping
from a Herglotz principle. It is well known that a vibrating string can be
described using the Lagrangian formalism. Consider the coordinates $(t,x)$ for
time and space. Denote by $u$ the separation of a point of the string from its
equilibrium position; then $u_{t}$ and $u_{x}$ denote the derivatives of $u$
with respect to the two independent variables. The Lagrangian function for
this system is
$L_{0}(u,u_{t},u_{x})=\frac{1}{2}\rho u_{t}^{2}-\frac{1}{2}\tau u_{x}^{2}\,,$
(43)
where $\rho$ is the linear mass density of the string and $\tau$ is the
tension of the string. We will assume that these quantities are constant. The
Euler–Lagrange equation for this Lagrangian function is
$u_{tt}=c^{2}u_{xx}\,,$
where $c^{2}=\dfrac{\tau}{\rho}$.
In order to model a vibrating string with linear damping, we can modify the
Lagrangian function (43) so that it becomes a $k$-contact Lagrangian [32].
The new Lagrangian function $L$ is defined in the phase bundle
$\oplus^{2}\mathrm{T}Q\times\mathbb{R}^{2}$, equipped with adapted coordinates
$(u;u_{t},u_{x};z^{t},z^{x})$, and is given by
$L(u,u_{t},u_{x},z^{t},z^{x})=L_{0}-\gamma z^{t}=\frac{1}{2}\rho
u_{t}^{2}-\frac{1}{2}\tau u_{x}^{2}-\gamma z^{t}\,,$
where $\gamma\in\mathbb{R}$ is a constant accounting for the damping.
The Herglotz equation (20) for this Lagrangian $L$ reads
$u_{tt}=c^{2}u_{xx}-\gamma u_{t}\,,$
which is the equation of a vibrating string with damping. The additional
equation (29),
$D_{\nu}\frac{\partial L}{\partial z^{\mu}}=D_{\mu}\frac{\partial L}{\partial
z^{\nu}}\Leftrightarrow\begin{dcases}-\partial_{t}\gamma=0\,,\\\
-\partial_{x}\gamma=0\,,\end{dcases}$
is trivially satisfied since $\gamma$ is constant, and hence the equations
obtained are exactly the same as in the $k$-contact Lagrangian formalism
introduced in [32]. The next example presents a case in which both approaches
are not fully equivalent.
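As a quick numerical illustration of the damped equation
$u_{tt}=c^{2}u_{xx}-\gamma u_{t}$, the following sketch (not part of the paper;
the explicit finite-difference discretization and all parameter values are our
own choices) integrates it on a string with fixed ends and shows that the
damping term reduces the oscillation amplitude:

```python
import numpy as np

def damped_string_amplitude(gamma, c=1.0, L=1.0, T=2.0, nx=101, nt=2001):
    """Explicit finite differences for u_tt = c^2 u_xx - gamma u_t on [0, L]
    with fixed ends and initial shape u(x, 0) = sin(pi x / L), zero velocity.
    Returns the maximum displacement at the final time T."""
    dx = L / (nx - 1)
    dt = T / (nt - 1)
    assert c * dt / dx <= 1.0, "CFL stability condition violated"
    x = np.linspace(0.0, L, nx)
    u_prev = np.sin(np.pi * x / L)      # initial displacement
    u_curr = u_prev.copy()              # zero initial velocity
    for _ in range(nt - 1):
        u_next = np.zeros_like(u_curr)  # endpoints stay clamped at zero
        lap = (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]) / dx**2
        damp = gamma * (u_curr[1:-1] - u_prev[1:-1]) / dt
        u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                        + dt**2 * (c**2 * lap - damp))
        u_prev, u_curr = u_curr, u_next
    return float(np.max(np.abs(u_curr)))
```

With $\gamma=0$ the fundamental mode returns to (almost) full amplitude after
one period $2L/c$, while any $\gamma>0$ leaves a strictly smaller amplitude.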
### 5.2 The non-holonomic and the vakonomic principles are not equivalent
Consider the Lagrangian
$L(t,x,u,u_{t},u_{x},z^{t},z^{x})=\frac{1}{2}(u_{t}^{2}+u_{x}^{2})-u\gamma_{x}z^{x}\,,$
where $\gamma_{x}\neq 0$ is a constant. The Lagrangian function $L$ is regular
in the sense of [16, 32], but it does not have closed action dependence.
The corresponding Herglotz field equations are
$\displaystyle\gamma_{x}z^{x}+u_{xx}+u_{tt}+u\gamma_{x}u_{x}$
$\displaystyle=0\,,$ $\displaystyle z^{t}_{t}+z^{x}_{x}$ $\displaystyle=L\,.$
A solution of these equations is the section $u(t,x)=t$, $z^{x}(t,x)=0$ and
$z^{t}(t,x)=\frac{t}{2}$. Nevertheless, for this section, we have
$D_{t}\frac{\partial L}{\partial z^{x}}=D_{x}\frac{\partial L}{\partial
z^{t}}\Rightarrow\gamma_{x}u_{t}=0\Rightarrow\gamma_{x}=0\,,$
which fails whenever $\gamma_{x}\neq 0$. Therefore, this section is a solution
of the non-holonomic variational principle, but it is not a solution of the
vakonomic variational principle, so the two principles are not equivalent.
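This counterexample can be checked mechanically. The following sketch (Python
with sympy; our own verification, not part of the paper) confirms that the
stated section solves the Herglotz field equations while violating the
closedness condition:

```python
import sympy as sp

t, x, g = sp.symbols('t x gamma_x', real=True)

# the section from the text: u = t, z^x = 0, z^t = t/2
u, zx, zt = t, sp.Integer(0), t / 2
L = sp.Rational(1, 2) * (sp.diff(u, t)**2 + sp.diff(u, x)**2) - u * g * zx

# first Herglotz field equation: gamma_x z^x + u_xx + u_tt + u gamma_x u_x = 0
eq1 = g * zx + sp.diff(u, x, 2) + sp.diff(u, t, 2) + u * g * sp.diff(u, x)
# constraint equation: z^t_t + z^x_x = L
eq2 = sp.diff(zt, t) + sp.diff(zx, x) - L

# closedness condition D_t(dL/dz^x) = D_x(dL/dz^t): along this section
# dL/dz^x = -u*gamma_x and dL/dz^t = 0, so the left side is -gamma_x,
# which is nonzero unless gamma_x = 0
lhs_closed = sp.diff(-u * g, t)
```

Both `eq1` and `eq2` simplify to zero, while `lhs_closed` equals $-\gamma_{x}$.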
### 5.3 The Korteweg–De Vries Lagrangian
The Korteweg–De Vries (KdV) equation models waves on shallow water [43]. It can
be derived as the Euler–Lagrange equation of a second-order Lagrangian. We will
use the higher-order vakonomic Herglotz variational principle introduced in
Definition 4.3 to derive the equations of motion of a contact analogue of the
KdV Lagrangian.
The KdV equation involves a scalar field over time and one dimension of space.
Therefore, we consider a $2$-dimensional base manifold $M$, with coordinates
$(t,x)$. Then, in the second order jet $J^{2}(\mathbb{R}\otimes\Lambda^{2}M)$
we consider the coordinates
$(t,x,u,u_{t},u_{x},u_{tt},u_{tx},u_{xx},z^{t},z^{x},z^{t}_{t},z^{t}_{x},z^{x}_{t},z^{x}_{x},z^{t}_{tt},z^{t}_{tx},z^{t}_{xx},z^{x}_{tt},z^{x}_{tx},z^{x}_{xx})\,.$
The standard KdV Lagrangian is
$L_{0}=\frac{1}{2}u_{x}u_{t}+u_{x}^{3}-\frac{1}{2}u_{xx}^{2}\,.$ (44)
The Euler–Lagrange equation one obtains from this Lagrangian is
$\partial_{t}\partial_{x}u+6\partial_{x}u\partial_{x}^{2}u+\partial_{x}^{4}u=0\,.$
(45)
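As a sanity check on equation (45), one can verify that the textbook
one-soliton potential $u=\sqrt{c}\,\tanh\bigl(\tfrac{\sqrt{c}}{2}(x-ct)\bigr)$,
whose derivative $u_{x}=\tfrac{c}{2}\operatorname{sech}^{2}\bigl(\tfrac{\sqrt{c}}{2}(x-ct)\bigr)$
is the usual KdV soliton, satisfies it. The choice of test solution is ours,
not the paper's; a short sketch with sympy:

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)

# one-soliton potential; its x-derivative is the standard sech^2 KdV soliton
u = sp.sqrt(c) * sp.tanh(sp.sqrt(c) / 2 * (x - c * t))

# left-hand side of equation (45): u_tx + 6 u_x u_xx + u_xxxx
lhs = (sp.diff(u, t, x)
       + 6 * sp.diff(u, x) * sp.diff(u, x, 2)
       + sp.diff(u, x, 4))
assert sp.simplify(lhs) == 0   # the soliton solves the KdV equation
```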
Let us now consider the KdV Lagrangian with a linear action coupling
$L=L_{0}-\gamma_{\mu}z^{\mu}=\dfrac{1}{2}u_{x}u_{t}+u_{x}^{3}-\dfrac{1}{2}u_{xx}^{2}-\gamma_{\mu}z^{\mu}\,,$
(46)
which has closed action dependence provided that $\gamma_{\mu}$ are the
components of a closed form. This is a second order Lagrangian, so we need to
use the Herglotz field equations derived in Theorem 4.5. The Herglotz field
equation reads
$\partial_{t}\partial_{x}u+\dfrac{1}{2}(\gamma_{x}\partial_{t}u+\gamma_{t}\partial_{x}u)+6\partial_{x}u\partial_{x}^{2}u+3\gamma_{x}(\partial_{x}u)^{2}+\partial_{x}^{4}u+(2\gamma_{x}+\partial_{x}\gamma_{x})\partial_{x}^{3}u+\gamma_{x}^{2}\partial_{x}^{2}u=0\,,$
(47)
along with the constraint
$z^{t}_{t}+z^{x}_{x}=L\,.$
One sees that there are additional terms linear in the $\gamma_{\mu}$, which
also appear in the first-order theory, as well as terms quadratic in the
$\gamma_{\mu}$ and terms involving their derivatives, which are characteristic
of a second-order theory.
## 6 Conclusions and outlook
In this paper we have developed a generalization of the Herglotz variational
principle [23, 41] for first-order and higher-order field theories. In order
to do this, we have developed two non-equivalent approaches: the non-holonomic
and the vakonomic versions. We have seen that the non-holonomic approach is
equivalent to the $k$-contact [32, 52] and multicontact [16] geometric
formulations of dissipative field theories. On the other hand, using the
vakonomic principle, some new conditions arise. This fact motivates the
introduction of the so-called Lagrangians with closed action dependence, for
which both approaches are equivalent.
The differences between the non-holonomic and the vakonomic principles have
been exemplified with an academic example whose $k$-contact Euler–Lagrange
equations admit a solution that is not a solution of the Herglotz field
equations arising from the vakonomic variational principle. This is because the
Lagrangian considered does not have closed action dependence.
We have also studied a first-order field theory, the damped vibrating string,
for which the $k$-contact formalism and the Herglotz variational principle are
fully equivalent. The last example consisted in modifying the Korteweg–De
Vries Lagrangian by adding a standard dissipative term.
In [44], a variational principle for Lagrangians with closed action dependence
is derived using an implicit argument. Extending this approach to the general
case will require a deeper analysis of equation (18), which might also clarify
the condition of closed action dependence (34).
There are still many open problems in the geometrization of action-dependent
field theories. In the first place, it would be interesting to establish the
relations among the different geometric frameworks ($k$-contact, $k$-cocontact
and multicontact) and the variational principles presented in this work and in
previous ones. Another relevant problem is the case of field theories described
by singular Lagrangians.
There are some singular Lagrangians which are not compatible with the current
geometric structures, not even a weakened version of them [14]. Nevertheless,
we can derive their corresponding field equations via variational principles.
We expect this work will help in the understanding of the underlying geometric
structures of these singular Lagrangians.
### Acknowledgments
The authors acknowledge fruitful discussions and comments from our colleague
Miguel-C. Muñoz-Lecanda.
J. Gaset and X. Rivas acknowledge partial financial support from the
Ministerio de Ciencia, Innovación y Universidades (Spain), projects
PGC2018-098265-B-C33 and D2021-125515NB-21.
M. Lainz acknowledges partial financial support of the Spanish Ministry of
Science and Innovation (MCIN/AEI/ 10.13039/501100011033), under grants
PID2019-106715GB-C2 and “Severo Ochoa Programme for Centres of Excellence in
R&D” (CEX2019-000904-S).
X. Rivas acknowledges partial financial support from Novee Idee 2B-POB II
project PSP: 501-D111-20-2004310 funded by the “Inicjatywa Doskonałości -
Uczelnia Badawcza” (IDUB) program.
## References
* [1] R. Abraham and J. E. Marsden. Foundations of mechanics, volume 364 of AMS Chelsea publishing. Benjamin/Cummings Pub. Co., New York, 2nd edition, 1978. https://doi.org/10.1090/chel/364.
* [2] V. I. Arnold, V. V. Kozlov, and A. Neishtadt. Mathematical Aspects of Classical and Celestial Mechanics. Springer-Verlag Berlin Heidelberg, 1997. https://doi.org/10.1007/978-3-540-48926-9.
* [3] A. Awane. $k$-symplectic structures. J. Math. Phys., 33(12):4046, 1992. https://doi.org/10.1063/1.529855.
* [4] A. Banyaga and D. F. Houenou. A brief introduction to symplectic and contact manifolds, volume 15. World Scientific Publishing Co. Pte. Ltd., Singapore, 2016. https://doi.org/10.1142/9667.
* [5] E. Binz, M. de León, D. M. de Diego, and D. Socolescu. Nonholonomic constraints in classical field theories. Reports on Mathematical Physics, 49(2-3):151–166, Apr. 2002.
* [6] A. Bravetti. Contact Hamiltonian dynamics: The concept and its use. Entropy, 10(19):535, 2017. https://doi.org/10.3390/e19100535.
* [7] A. Bravetti. Contact geometry and thermodynamics. Int. J. Geom. Methods Mod. Phys., 16(supp01):1940003, 2018. https://doi.org/10.1142/S0219887819400036.
* [8] A. Bravetti, H. Cruz, and D. Tapias. Contact Hamiltonian mechanics. Ann. Phys., 376:17–39, 2017. https://doi.org/10.1016/j.aop.2016.11.003.
* [9] C. M. Campos, M. Asorey, J. Clemente-Gallardo, E. Martínez, and J. F. Cariñena. Vakonomic Constraints in Higher-Order Classical Field Theory. In AIP Conference Proceedings, volume 1260, pages 119–125, Benasque (Spain), 2010. http://aip.scitation.org/doi/abs/10.1063/1.3479312.
* [10] J. F. Cariñena and J. Fernández-Núñez. Geometric theory of time-dependent singular Lagrangians. Fortschr. Phys., 41:517–552, 1993. https://doi.org/10.1002/prop.2190410603.
* [11] H. Cendra, A. Ibort, M. de León, and D. Martín de Diego. A generalization of Chetaev’s principle for a class of higher order nonholonomic constraints. J. Math. Phys., 45(7):2785–2801, 2004. http://aip.scitation.org/doi/10.1063/1.1763245.
* [12] D. Chinea, M. de León, and J. C. Marrero. The constraint algorithm for time-dependent Lagrangians. J. Math. Phys., 35(7):3410–3447, 1994. https://doi.org/10.1063/1.530476.
* [13] F. M. Ciaglia, H. Cruz, and G. Marmo. Contact manifolds and dissipation, classical and quantum. Ann. Phys., 398:159–179, 2018. https://doi.org/10.1016/j.aop.2018.09.012.
* [14] M. de León, J. Gaset, X. Gràcia, M. Muñoz-Lecanda, and X. Rivas. Time-dependent contact mechanics. Monatsh. Math., 2022. https://doi.org/10.1007/s00605-022-01767-1.
* [15] M. de León, J. Gaset, M. Lainz-Valcázar, X. Rivas, and N. Román-Roy. Unified Lagrangian-Hamiltonian formalism for contact systems. Fortschritte der Phys., 68(8):2000045, 2020. https://doi.org/10.1002/prop.202000045.
* [16] M. de León, J. Gaset, M. C. Muñoz-Lecanda, X. Rivas, and N. Román-Roy. Multicontact formulation for non-conservative field theories. http://arxiv.org/abs/2209.08918, Sept. 2022.
* [17] M. de León, J. Gaset, M. C. Muñoz-Lecanda, and N. Román-Roy. Higher-order contact mechanics. Ann. Phys., 425:168396, 2021. https://doi.org/10.1016/j.aop.2021.168396.
* [18] M. de León, V. M. Jiménez, and M. Lainz-Valcázar. Contact Hamiltonian and Lagrangian systems with nonholonomic constraints. J. Geom. Mech., 13(1):25–53, 2021. https://doi.org/10.3934/jgm.2021001.
* [19] M. de León, M. Lainz, and M. C. Muñoz-Lecanda. Optimal control, contact dynamics and Herglotz variational problem. Journal of Nonlinear Science, 33(1):9, Feb. 2023. https://doi.org/10.1007/s00332-022-09861-2.
* [20] M. de León and M. Lainz-Valcázar. Contact Hamiltonian systems. J. Math. Phys., 60(10):102902, 2019. https://doi.org/10.1063/1.5096475.
* [21] M. de León and M. Lainz-Valcázar. Singular Lagrangians and precontact Hamiltonian systems. Int. J. Geom. Methods Mod. Phys., 16(10):1950158, 2019. https://doi.org/10.1142/S0219887819501585.
* [22] M. de León and M. Lainz-Valcázar. Infinitesimal symmetries in contact Hamiltonian systems. J. Geom. Phys., 153:103651, 2020. https://doi.org/10.1016/j.geomphys.2020.103651.
* [23] M. de León, M. Lainz-Valcázar, and M. C. Muñoz-Lecanda. The Herglotz Principle and Vakonomic Dynamics. In F. Nielsen and F. Barbaresco, editors, Geometric Science of Information, volume 12829 of Lecture Notes in Computer Science, pages 183–190, Cham, 2021. Springer International Publishing. https://doi.org/10.1007/978-3-030-80209-7_21.
* [24] M. de León, M. Laínz, M. C. Muñoz-Lecanda, and N. Román-Roy. Constrained Lagrangian dissipative contact dynamics. http://arxiv.org/abs/2109.05295, 2021.
* [25] M. de León, J. Marín-Solano, J. C. Marrero, M. C. Muñoz-Lecanda, and N. Román-Roy. Singular Lagrangian systems on jet bundles. Fortschritte der Phys., 50(2):105–169, 2002. https://doi.org/10.1002/1521-3978(200203)50:2<105::AID-PROP105>3.0.CO;2-N.
* [26] M. de León, E. Merino, J. A. Oubiña, P. R. Rodrigues, and M. Salgado. Hamiltonian systems on $k$-cosymplectic manifolds. J. Math. Phys., 39(2):876, 1998. https://doi.org/10.1063/1.532358.
* [27] M. de León, E. Merino, and M. Salgado. $k$-cosymplectic manifolds and Lagrangian field theories. J. Math. Phys., 42(5):2092, 2001. https://doi.org/10.1063/1.1360997.
* [28] M. de León and C. Sardón. Cosymplectic and contact structures to resolve time-dependent and dissipative Hamiltonian systems. J. Phys. A: Math. Theor., 50(25):255205, 2017. https://doi.org/10.1088/1751-8121/aa711d.
* [29] A. Echeverría-Enríquez, M. C. Muñoz-Lecanda, and N. Román-Roy. Geometrical setting of time-dependent regular systems. Alternative models. Rev. Math. Phys., 3(3):301–330, 1991. https://doi.org/10.1142/S0129055X91000114.
* [30] J. Gaset, X. Gràcia, M. C. Muñoz-Lecanda, X. Rivas, and N. Román-Roy. A contact geometry framework for field theories with dissipation. Ann. Phys., 414:168092, 2020. https://doi.org/10.1016/j.aop.2020.168092.
* [31] J. Gaset, X. Gràcia, M. C. Muñoz-Lecanda, X. Rivas, and N. Román-Roy. New contributions to the Hamiltonian and Lagrangian contact formalisms for dissipative mechanical systems and their symmetries. Int. J. Geom. Methods Mod. Phys., 17(6):2050090, 2020. https://doi.org/10.1142/S0219887820500905.
* [32] J. Gaset, X. Gràcia, M. C. Muñoz-Lecanda, X. Rivas, and N. Román-Roy. A $k$-contact Lagrangian formulation for nonconservative field theories. Rep. Math. Phys., 87(3):347–368, 2021. https://doi.org/10.1016/S0034-4877(21)00041-0.
* [33] J. Gaset and A. Mas. A variational derivation of the field equations of an action-dependent Einstein-Hilbert Lagrangian. http://arxiv.org/abs/2206.13227, July 2022.
* [34] H. Geiges. An Introduction to Contact Topology, volume 109 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2008. https://doi.org/10.1017/CBO9780511611438.
* [35] B. Georgieva, R. Guenther, and T. Bodurov. Generalized variational principle of Herglotz for several independent variables. First Noether-type theorem. J. Math. Phys., 44(9):3911, 2003. https://doi.org/10.1063/1.1597419.
* [36] C. Godbillon. Geometrie Differentielle Et Mecanique Analytique. Collection methodes. Hermann, Paris, 1969.
* [37] S. Goto. Contact geometric descriptions of vector fields on dually flat spaces and their applications in electric circuit models and nonequilibrium statistical mechanics. J. Math. Phys., 57(10):102702, 2016. https://doi.org/10.1063/1.4964751.
* [38] X. Gràcia, J. Marín-Solano, and M. C. Muñoz-Lecanda. Some geometric aspects of variational calculus in constrained systems. Rep. Math. Phys., 51(1):127–148, 2003. https://doi.org/10.1016/S0034-4877(03)80006-X.
* [39] X. Gràcia, X. Rivas, and N. Román-Roy. Skinner–Rusk formalism for $k$-contact systems. J. Geom. Phys., 172:104429, 2022. https://doi.org/10.1016/j.geomphys.2021.104429.
* [40] C. Guenther, R. B. Guenther, J. Gottsch, and H. Schwerdtfeger. The Herglotz Lectures on Contact Transformations and Hamiltonian systems, volume 1 of Lecture notes in nonlinear analysis. Juliusz Center for Nonlinear Studies, Torun, Poland, 1st edition, 1996.
* [41] G. Herglotz. Berührungstransformationen. Lectures at the University of Gottingen, 1930.
* [42] A. L. Kholodenko. Applications of Contact Geometry and Topology in Physics. World Scientific, 2013. https://doi.org/10.1142/8514.
* [43] D. J. Korteweg and G. de Vries. On the Change of Form of Long Waves Advancing in a Rectangular Canal, and on a New Type of Long Stationary Waves. Phil. Mag., 39:422–443, 1895.
* [44] M. J. Lazo, J. Paiva, J. T. S. Amaral, and G. S. F. Frederico. An action principle for action-dependent Lagrangians: Toward an action principle for non-conservative systems. J. Math. Phys., 59(3):032902, 2018. https://doi.org/10.1063/1.5019936.
* [45] P. Libermann and C.-M. Marle. Symplectic Geometry and Analytical Mechanics. Springer Netherlands, Reidel, Dordretch, oct 1987. https://doi.org/10.1007/978-94-009-3807-6.
* [46] M. C. Muñoz-Lecanda, M. Salgado, and S. Vilariño. $k$-symplectic and $k$-cosymplectic Lagrangian field theories: some interesting examples and applications. Int. J. Geom. Methods Mod. Phys., 7(4):669–692, 2010. https://doi.org/10.1142/S0219887810004506.
* [47] J. A. P. Paiva, M. J. Lazo, and V. T. Zanchin. Generalized nonconservative gravitational field equations from Herglotz action principle. Physical Review D, 2022. https://doi.org/10.1103/PhysRevD.105.124023.
* [48] P. D. Prieto-Martínez. Geometrical structures of higher-order dynamical systems and field theories. PhD thesis, Technical University of Catalonia (UPC), 2014. https://arxiv.org/abs/1410.7825.
* [49] H. Ramirez, B. Maschke, and D. Sbarbaro. Partial stabilization of input-output contact systems on a Legendre submanifold. IEEE Transactions on Automatic Control, 62(3):1431–1437, 2017. https://doi.org/10.1109/TAC.2016.2572403.
* [50] A. M. Rey, N. Román-Roy, and M. Salgado. Günther formalism ($k$-symplectic formalism) in classical field theory: Skinner–Rusk approach and the evolution operator. J. Math. Phys., 46(5):052901, 2005. https://doi.org/10.1063/1.1876872.
* [51] X. Rivas. Geometrical aspects of contact mechanical systems and field theories. PhD thesis, Universitat Politècnica de Catalunya (UPC), 2021. https://arxiv.org/abs/2204.11537.
* [52] X. Rivas. Nonautonomous $k$-contact field theories. https://arxiv.org/abs/2210.09166, 2022.
* [53] X. Rivas and D. Torres. Lagrangian–Hamiltonian formalism for cocontact systems. J. Geom. Mech., 15(1):1–26, 2022. https://doi.org/10.3934/jgm.2023001.
* [54] N. Román-Roy, M. Salgado, and S. Vilariño. Symmetries and conservation laws in the Günther $k$-symplectic formalism of field theories. Rev. Math. Phys., 19(10):1117–1147, 2007. https://doi.org/10.1142/S0129055X07003188.
* [55] N. Román-Roy, Ángel M. Rey, M. Salgado, and S. Vilariño. On the $k$-symplectic, $k$-cosymplectic and multisymplectic formalisms of classical field theories. J. Geom. Mech., 3(1):113–137, 2011. https://doi.org/10.3934/jgm.2011.3.113.
* [56] J. Ryan. When action is not least for systems with action-dependent Lagrangians, May 2022. http://arxiv.org/abs/2205.10318.
* [57] A. A. Simoes, M. de León, M. Lainz-Valcázar, and D. Martín de Diego. Contact geometry for simple thermodynamical systems with friction. Proc. R. Soc. A., 476:20200244, 2020. https://doi.org/10.1098/rspa.2020.0244.
* [58] H. J. Sussmann. Geometry and optimal control. Mathematical control theory. Springer, New York, NY, 1999. https://doi.org/10.1007/978-1-4612-1416-8_5.
* [59] J. Vankerschaver, F. Cantrijn, J. de León, and D. Martín de Diego. Geometric aspects of nonholonomic field theories. Reports on Mathematical Physics, 56(3):387–411, Dec. 2005. https://doi.org/10.1016/S0034-4877(05)80093-X.
# Hint-dynamic Knowledge Distillation
###### Abstract
Knowledge Distillation (KD) transfers knowledge from a high-capacity teacher
model to promote a smaller student model. Existing efforts guide the
distillation by matching prediction logits, feature embeddings, etc., while
leaving how to efficiently utilize them in conjunction less explored. In this
paper, we propose Hint-dynamic Knowledge Distillation, dubbed HKD, which
excavates the knowledge from the teacher's hints in a dynamic scheme. The
guidance effect of the knowledge hints usually varies across instances and
learning stages, which motivates us to adaptively customize a specific
hint-learning manner for each instance. Specifically, a meta-weight network is
introduced to generate instance-wise weight coefficients for the knowledge
hints in perception of the dynamic learning progress of the student model. We
further present a weight ensembling strategy that eliminates the potential bias
of the coefficient estimation by exploiting historical statistics. Experiments
on the standard benchmarks CIFAR-100 and Tiny-ImageNet show that the proposed
HKD effectively boosts knowledge distillation.
Index Terms— Knowledge Distillation; Dynamic Network; Meta-Learning
## 1 Introduction
Whilst deep neural networks (DNNs) have achieved remarkable success in computer
vision, most well-performing models are difficult to deploy on edge devices in
practical scenarios due to their high computational costs. To alleviate this,
light-weight DNNs have been extensively investigated. Typical approaches
include parameter quantization [1, 2, 3], network pruning [4, 5, 6], and
knowledge distillation (KD) [7]. Among them, KD has gained increasing
popularity in various vision tasks due to the simplicity of integrating it into
other model compression pipelines.
Fig. 1: Motivation of the proposed HKD. Unlike the existing KD paradigm that
exploits different knowledge hints in a pre-defined fashion, our HKD
adaptively customizes the learning fashion on each instance at different
training iteration $t$.
The core idea of KD [7] is to distill knowledge from the cumbersome teacher
model to strengthen the compressed student by matching their posterior
distributions over class labels as knowledge hints. Numerous subsequent works
explore new forms of matching hints, such as intermediate representations
[8, 9, 10], attention maps [11], mutual information [12, 13, 14], and
structural knowledge [15, 16, 17]. Beyond the coarse distillation from soft
labels, most of these methods leverage more fine-grained distillation from the
newly explored knowledge hints. Concretely, they combine the guidance from
different grains with pre-defined combination coefficients, implicitly assuming
that the teacher's knowledge hints keep a consistent distillation value across
instances throughout the training process. Nevertheless, most of these methods
neglect the evolving capacity of the student model. In this regard, the current
KD paradigm tends to fail in modeling and perceiving the dynamic distillation
effect of different knowledge hints.
Fig. 2: Overview of the proposed HKD. The optimal learning fashion for
different knowledge hints on each instance is dynamically estimated based on
meta-learning. A meta-weight network, perceiving feedback from the student,
estimates the learning coefficients for the different knowledge hints. The KD
pipeline and the introduced meta-weight network are optimized jointly in a
nested-loop optimization.
To ameliorate the above issue, we present a novel hint-dynamic scheme based on
the insight of efficiently utilizing diverse knowledge representations. Fig. 1
depicts the motivation of our proposed framework. The learning progress of the
student differs across instances during the distillation procedure. For plain
instances on which the student is certain, the coarse knowledge from the
rudimentary soft labels suffices to guide the distillation. In contrast, for
critical instances that are not yet well learned, more fine-grained knowledge
from other hints, such as feature embeddings, is introduced.
Our insight is to dynamically generate a customized learning fashion for
different knowledge hints according to the aptitude of the student. To this
end, we formulate the importance of each knowledge hint as a variable dependent
on the input instance and model the learning fashion as the weight coefficients
of the KD losses from different hints. A meta-weight network is further
leveraged, where an inner loop trains the meta-weight network to generate
weights for each instance, while an outer loop guides the efficient KD with the
updated meta-weights. To alleviate the estimation bias of the optimal weights,
we further propose a weight ensembling strategy utilizing historical
statistics.
Our contribution can be summarized as:
* •
We propose a novel Hint-dynamic Knowledge Distillation framework that enables
dynamic learning for various knowledge hints adaptively.
* •
We introduce a meta-learning based method that dynamically assigns the weight
coefficients of the distillation losses for each sample.
* •
We derive an uncertainty-based weight ensembling strategy, which alleviates the
adverse effect of unreliable meta-weight estimation via historical statistics.
* •
Experiments demonstrate the superior performance of the proposed HKD on
benchmark datasets.
## 2 Method
### 2.1 Preliminaries
In the task of KD, given a pre-trained teacher model $\mathcal{T}$ and a
student model $\mathcal{S}$ on a training set $\mathcal{X}$, the KL divergence
between the student output $p_{\mathcal{S}}(x)$ and the teacher output
$p_{\mathcal{T}}(x)$ is minimized in the vanilla KD formulation [7]:
$\mathcal{L}_{KD}^{van}=\sum_{x\in\mathcal{X}}p_{\mathcal{T}}(x)\cdot\log\frac{p_{\mathcal{T}}(x)}{p_{\mathcal{S}}(x)}$ (1)
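For concreteness, Eq. 1 can be sketched in NumPy as follows (our own minimal
implementation; the softmax temperature $T$ is the usual KD practice [7] rather
than something specified in this section):

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_kl_loss(teacher_logits, student_logits, T=4.0):
    """Vanilla KD loss of Eq. (1): KL(p_T || p_S), averaged over the batch."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return float(np.mean(np.sum(p_t * np.log(p_t / p_s), axis=-1)))
```

The loss vanishes exactly when the two distributions coincide and is positive
otherwise, so minimizing it pulls the student's softened predictions toward the
teacher's.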
Subsequently, the community has explored an extensive variety of hint forms for
knowledge transfer beyond the prediction labels, such as intermediate layers
[8] and attention maps [11]. Specifically, these methods leverage auxiliary
guidance signals from the teacher through a matching loss
$\mathcal{L}_{KD}^{aux}$ on the exploited hints, which is used to update the
student $\mathcal{S}$ over the training set $\mathcal{X}$:
$\mathcal{L}({\mathcal{S};\mathcal{X}})=\sum_{x\in\mathcal{X}}\mathcal{L}_{CE}(x)+\beta\mathcal{L}_{KD}^{van}(x)+\gamma\mathcal{L}_{KD}^{aux}(x)$
(2)
where $\mathcal{L}_{CE}$ is the cross-entropy loss and $\beta,\gamma$ are the
weight coefficients of the distillation losses for each sample. In conventional
KD methods, these coefficients are chosen empirically and kept fixed for all
instances across the whole training procedure, despite the different learning
progress of the student model on each sample. As a core distinction, we propose
to dynamically adjust the guidance from the teacher, emphasizing the varying
demand for different knowledge hints of each instance at different iterations.
### 2.2 Hint-dynamic Knowledge Distillation
Overview. Toward a dynamic distillation scheme that adaptively allocates the
hint weights for each instance as training proceeds, one naive solution is to
use an uncertainty-based metric [18] to evaluate the instance-wise learning
progress, which is, however, unreliable. To ameliorate this issue, we leverage
the merits of meta-learning, which provides a second-order optimization
framework that alleviates the estimation bias w.r.t. the learning degree.
Specifically, a meta-weight network (meta-net) is introduced to explicitly
encode the importance of each instance and to generate the dynamic estimation
for the subsequent distillation manner. Through the inner and outer loops, the
introduced meta-net and the teacher-student distillation framework promote each
other in our proposed scheme. Fig. 2 depicts the workflow of the proposed HKD.
Meta-Weight Network. To perform an instance-wise dynamic estimation of the
optimal learning fashion w.r.t. different knowledge hints, we design a meta-net
$\mathcal{W}$ that generates the weight coefficients for each instance prior to
every learning iteration of the student. Holding the insight that the
cross-model relation matters, we feed not only the prediction logits of the
student but also the teacher's predictions into the meta-net; in this way, the
weights for a sample $x$ can be written as:
$\beta(x),\gamma(x)=\mathcal{W}(p_{\mathcal{S}}(x),p_{\mathcal{T}}(x))$ (3)
Practically, this meta-net is easy to implement, e.g., as a 2-layer Multi-Layer
Perceptron (MLP) with a given weight range. In what follows, we leverage a
pseudo student generation technique to perform the inner-loop optimization of
our HKD, inspired by meta-learning [19].
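A minimal sketch of such a meta-net (NumPy; the paper only specifies a 2-layer
MLP with a given weight range, so the hidden size, the tanh squashing, and the
range $1\pm l$ with $l=0.5$ from Sec. 3 are our reading of it):

```python
import numpy as np

class MetaWeightNet:
    """2-layer MLP mapping the concatenated student/teacher predictions to the
    hint weights (beta(x), gamma(x)) of Eq. (3). Illustrative sketch only:
    hidden size, ReLU, and the tanh bounding into [1 - l, 1 + l] are our
    assumptions."""

    def __init__(self, num_classes, hidden=64, l=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.05, (2 * num_classes, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.05, (hidden, 2))
        self.b2 = np.zeros(2)
        self.l = l      # searching range around the initial value 1

    def __call__(self, p_student, p_teacher):
        z = np.concatenate([p_student, p_teacher], axis=-1)
        h = np.maximum(z @ self.W1 + self.b1, 0.0)     # ReLU hidden layer
        beta, gamma = 1.0 + self.l * np.tanh(h @ self.W2 + self.b2)
        return beta, gamma
```

In a real pipeline the two inputs would be the softened class probabilities of
student and teacher for one instance.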
Inner Loop via Pseudo Student Generation. To utilize the meta-net, we update a
pseudo copy of the student model so as to assess the model performance under
the meta-net's weights. Specifically, a meta-set $\mathcal{X}_{meta}$ is held
out from the whole training set $\mathcal{X}$, i.e.,
$|\mathcal{X}_{meta}|\ll|\mathcal{X}|$, and we perform a one-step gradient
update of the student to obtain a pseudo version $\mathcal{S}_{p}$:
$\mathcal{L}({\mathcal{S}_{p}};\mathcal{X})=\sum_{x\in\mathcal{X}}\mathcal{L}_{CE}(x)+\beta(x)\mathcal{L}_{KD}^{van}(x)+\gamma(x)\mathcal{L}_{KD}^{aux}(x)$
(4)
The mean-square error (MSE) of the pseudo student on the meta-set reflects the
quality of the meta-net's weight estimation and can therefore be used to
optimize the meta-net $\mathcal{W}$:
$\mathcal{L}(\mathcal{W};\mathcal{X}_{meta}^{er})=\sum_{x\in\mathcal{X}_{meta}^{er}}\mathcal{L}_{MSE}(p_{S_{\mathcal{P}}}(x),GT(x))$
(5)
where $\mathcal{X}_{meta}^{er}$ denotes the samples of the meta-set on which
the pseudo student predicts incorrectly [20], $p_{S_{\mathcal{P}}}(x)$ is the
prediction probability of the pseudo student, and $GT(x)$ returns the
ground-truth probability value of the corresponding sample.
Outer Loop Optimization via Second-order Gradient. To exploit the guidance of
the meta-net updated in the inner loop, we further introduce an outer loop, in
which the student model acquires knowledge from the teacher according to the
generated fashion for handling the knowledge hints. In this regard, a standard
knowledge distillation step is performed as in Eq. 2. By iteratively executing
the two preceding loops, we can formulate a nested optimization problem:
$\min_{\mathcal{S}}\ \mathcal{L}({\mathcal{S}};\mathcal{X})\quad\text{s.t.}\quad\mathcal{W}=\operatorname*{argmin}_{\mathcal{W}}\ \mathcal{L}(\mathcal{W};\mathcal{X}_{meta}^{er})$ (6)
where the outer loop searches for the optimal hint weights while being
constrained by the inner loop.
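The nested problem of Eq. 6 can be illustrated on a toy scalar analogue
(entirely our own construction: quadratic stand-ins for the losses, a scalar
"meta-net", and a finite-difference meta-gradient in place of the actual
second-order backpropagation):

```python
import numpy as np

# Toy nested loop: a scalar student parameter s is trained with a hint weight
# beta produced from a scalar meta parameter w; the inner loop updates w so
# that a one-step pseudo update of s (cf. Eq. (4)) does well on a held-out
# target (cf. Eq. (5)), and the outer loop applies the resulting weight.

def ce(s):  return (s - 2.0) ** 2        # stand-in for L_CE
def kd(s):  return (s - 3.0) ** 2        # stand-in for the KD hint loss
TARGET = 2.5                              # stand-in meta-set optimum

def beta_of(w):                           # bounded weight around 1
    return 1.0 + 0.5 * np.tanh(w)

def pseudo_update(s, beta, lr=0.1):       # one-step pseudo student
    grad = 2.0 * (s - 2.0) + beta * 2.0 * (s - 3.0)
    return s - lr * grad

def meta_loss(w, s):                      # quality of the weight estimate
    return (pseudo_update(s, beta_of(w)) - TARGET) ** 2

s, w = 0.0, 0.0
for _ in range(500):
    # inner loop: finite-difference gradient step on the meta parameter
    h = 1e-5
    gw = (meta_loss(w + h, s) - meta_loss(w - h, s)) / (2.0 * h)
    w -= 0.5 * gw
    # outer loop: update the student with the current hint weight
    s = pseudo_update(s, beta_of(w))
```

The inner loop steers the hint weight so that the one-step pseudo update lands
near the held-out target, and the outer loop then applies that weight to the
real student; the two converge jointly.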
### 2.3 Meta-Weight Ensembling
The effect of the proposed meta-learning based framework relies on an accurate
estimation of the hint coefficients, i.e., the output of the meta-net, whereas
the transient state of these weights is not always reliable. To address this
issue, we further propose a strategy that generates more robust hint weights
via temporal ensembling:
$(\beta^{t}(x),\gamma^{t}(x))=\begin{cases}\epsilon\cdot(\beta^{t-1}(x),\gamma^{t-1}(x))+(1-\epsilon)\cdot(\beta(x),\gamma(x))\,, & u(x)<u_{th}\,,\\ (\beta(x),\gamma(x))\,, & u(x)\geq u_{th}\,,\end{cases}$ (7)
where $t$ denotes the current step and $t-1$ the previous one, $\epsilon$
controls the ratio between the weights of steps $t$ and $t-1$, and $u_{th}$ is
the uncertainty threshold. This mechanism is applied when updating the student
in the outer loop, i.e., $\beta^{t}(x),\gamma^{t}(x)$ in the student update are
computed by Eq. 7. The uncertainty $u(x)$ of sample $x$ is modeled by the
prediction entropy
$u(x)=-p_{\mathcal{S}}(x)\log(p_{\mathcal{S}}(x))$, summed over classes.
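Eq. 7 is essentially an exponential moving average that is applied only when
the student is already confident; a small sketch (NumPy, our own reading of the
rule):

```python
import numpy as np

def entropy(p):
    """Prediction entropy u(x) = -sum_c p_c log p_c of a probability vector."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def ensemble_weights(w_prev, w_new, p_student, eps=0.5, u_th=0.6):
    """Temporal ensembling of the hint weights, Eq. (7): blend with the
    previous step's weights only when the student's prediction is confident
    (entropy below the threshold u_th); otherwise keep the fresh estimate."""
    if entropy(p_student) < u_th:
        return tuple(eps * wp + (1.0 - eps) * wn
                     for wp, wn in zip(w_prev, w_new))
    return tuple(w_new)
```

For an uncertain sample the fresh estimate is kept unchanged, since smoothing
with stale weights would slow down adaptation exactly where it is needed.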
## 3 Experiments
Datasets. Experiments are conducted on the benchmark datasets of CIFAR-100
[21] and Tiny-ImageNet [22]. CIFAR-100 contains 50K 32$\times$32 training
images with 500 images per class and 10K test images with 100 images per
class. Tiny-ImageNet is a subset version of ImageNet with 200 classes, where
each image is down-sampled to 64$\times$64\. The images in each class are
split as 500/50 for training and testing, respectively.
Implementation Details. Following the common practice in KD [17, 23], we set
the total number of training epochs to 240 while the batch size to 64. We use
stochastic gradient descent (SGD) as the optimizer for the student model.
Except for ShuffleNet V1, for which it is set to 0.01, the initial learning
rate is 0.05. Weight decay is set to $5\times 10^{-4}$. For the meta-net, we
adopt an Adam optimizer with an initial learning rate of $1\times 10^{-3}$.
For the meta-set, we set the size to 1000 on CIFAR-100 and 2000 on
Tiny-ImageNet, i.e., 10 samples per class. The training interval of the inner
loop is 100. We search for the optimal dynamic weights $\beta$ and $\gamma$
within a searching range $l=0.5$ around the initial value $1$. For the weight
ensembling, an uncertainty threshold $u_{th}$ of 0.6 is adopted, and
$\epsilon$ is 0.5.
### 3.1 Comparisons with State-of-the-art Methods
Results on CIFAR-100. We test the performance of our method when combined with
three state-of-the-art KD works, including Fitnet [8], VID [12] and CRD [17].
We directly cite the quantitative results reported in their papers [17]. The
results are shown in Tab. 2. Teacher and Student denote the accuracy of the
teacher and student models when they are trained individually. We can see that
combining the proposed HKD with the modern KD methods leads to a significant
improvement. Besides, we compare with two other adaptive distillation works
[18, 23], and it can be seen that HKD achieves better results in most of the
experiments.
Results on Tiny-ImageNet. Following KD on Tiny-ImageNet common practice [23],
experiments are conducted with Vgg13 → Vgg8, WRN_40_2 → WRN_16_2 and ResNet110
→ ResNet20. Tab. 2 presents the results, which indicate that HKD continues to
outperform other works.
Table 1: Top-1 Test Acc. (%) of the student networks on CIFAR-100.
| ResNet32x4 → ResNet32 | WRN_40_2 → ShuffleNetV1 | ResNet50 → Vgg8
---|---|---|---
| Std | D-KD | S-KD | Ours | Std | D-KD | S-KD | Ours | Std | D-KD | S-KD | Ours
Teacher | 74.92 | - | - | - | 75.61 | - | - | - | 79.34 | - | - | -
Student | 71.14 | - | - | - | 70.50 | - | - | - | 70.36 | - | - | -
Fitnet | 72.35 | 72.63 | 72.63 | 72.83 | 75.67 | 75.55 | 75.93 | 76.25 | 73.24 | 73.3 | 73.75 | 73.75
VID | 72.29 | 72.59 | 72.85 | 72.94 | 75.88 | 76.12 | 76.21 | 76.41 | 73.46 | 73.57 | 73.92 | 73.79
CRD | 72.99 | 73.26 | 73.21 | 73.78 | 76.03 | 76.2 | 76.26 | 76.82 | 74.58 | 73.94 | 74.81 | 74.96
Table 2: Top-1 Test Acc. (%) of the student networks on Tiny-ImageNet.
| Vgg13 → Vgg8 | WRN_40_2 → WRN_16_2 | ResNet110 → ResNet20
---|---|---|---
| Std | D-KD | S-KD | Ours | Std | D-KD | S-KD | Ours | Std | D-KD | S-KD | Ours
T | 60.09 | - | - | - | 61.26 | - | - | - | 58.46 | - | - | -
S | 56.03 | - | - | - | 57.17 | - | - | - | 51.89 | - | - | -
Fitnet | 58.33 | 58.88 | 59.10 | 59.67 | 58.88 | 58.96 | 59.33 | 59.55 | 54.04 | 54.10 | 54.25 | 54.36
VID | 58.55 | 58.84 | 58.80 | 59.05 | 58.78 | 58.85 | 58.99 | 59.15 | 53.94 | 54.12 | 54.28 | 54.42
CRD | 58.88 | 59.65 | 59.38 | 59.51 | 59.42 | 59.64 | 59.87 | 60.22 | 54.69 | 54.99 | 55.28 | 55.65
### 3.2 Further Empirical Analysis
Ablation Study. We conduct ablation studies on CIFAR-100 to validate the
effect of each component in our HKD. The results are shown in Tab. 3. Note
that the experiments here are based on the feature hints from CRD [17]. (1)
Static denotes the baseline which adopts a fixed weight of knowledge hints
during the whole training procedure. (2) Un-Dy models the dynamic property
w.r.t. the knowledge hints using a uncertainty-based approach as in [18]. We
can see the performance gain brought by utilizing the dynamic modeling for the
weight. (3) MWN further exploits the meta-weight network to generate the
dynamic weight in different searching range $\l$ of 0.5 and 1 around the
initial value. It can be seen that an appropriate searching range leads to a
better distillation effect. (4) When equipped with meta-weight ensembling
strategy as MWE to facilitate a more robust estimation of dynamic weight, the
accuracy of the student model distilled by our full HKD method achieves the
best accuracy, which indicates each component of our HKD plays its own role.
Table 3: Top-1 Test Acc. (%) of ablation studies on CIFAR-100.
| Static | Un-Dy | MWN ($l=1$) | MWN ($l=0.5$) | MWN+MWE (full)
---|---|---|---|---|---
R32x4→R32 | 72.99 | 73.26 | 73.30 | 73.37 | 73.78
W40-2→SN1 | 76.03 | 76.20 | 76.28 | 76.46 | 76.82
Visualization on Meta-weight Estimation. Fig. 3 shows the curves of the batch-
wise average of weight variation on Tiny-ImageNet. It can be observed that
most samples are uncertain to the student during the initial stage of
distillation, thus the vanilla KD loss and CRD loss weights are relatively
large. As the distillation goes on, the student’s capacity increases, which
requires more guidance from vanilla KD. Consequently, the weight of vanilla KD
loss increases while the weight of CRD loss decreases. Finally, the student
tends to choose a stable learning fashion compared with the initial state.
Fig. 3: The curves of the learned hint weights in two experiments on Tiny-ImageNet: (a) Vgg13 → Vgg8; (b) WRN_40_2 → WRN_16_2.
## 4 Conclusion
This paper proposes Hint-dynamic Knowledge Distillation (HKD) to promote
knowledge transfer in a dynamic and active fashion. Instead of using fixed
weight coefficients for the knowledge hints from the teacher, we dynamically
customize instance-wise hint weights to facilitate the distillation process of
the student. Specifically, a meta-weight network is leveraged to estimate the
optimal learning manner w.r.t. the knowledge hints, and is optimized within a
meta-learning based framework. To alleviate the bias of the weight estimation,
we further explore a meta-weight ensembling strategy, which adaptively
ensembles the hint weights based on their historical statistics. Extensive
experiments on benchmark datasets show that HKD outperforms state-of-the-art
off-the-shelf distillation methods.
## References
* [1] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev, “Compressing deep convolutional networks using vector quantization,” arXiv preprint arXiv:1412.6115, 2014.
* [2] Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, and Jian Cheng, “Quantized convolutional neural networks for mobile devices,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 4820–4828.
* [3] Mingbao Lin, Rongrong Ji, Yan Wang, Yichen Zhang, Baochang Zhang, Yonghong Tian, and Ling Shao, “Hrank: Filter pruning using high-rank feature map,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 1529–1538.
* [4] Song Han, Jeff Pool, John Tran, and William Dally, “Learning both weights and connections for efficient neural network,” Advances in neural information processing systems, vol. 28, 2015.
* [5] Kohei Yamamoto, “Learnable companding quantization for accurate low-bit neural networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 5029–5038.
* [6] Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang, “Learning efficient convolutional networks through network slimming,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 2736–2744.
* [7] Geoffrey Hinton, Oriol Vinyals, Jeff Dean, et al., “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, vol. 2, no. 7, 2015.
* [8] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio, “Fitnets: Hints for thin deep nets,” arXiv preprint arXiv:1412.6550, 2014.
* [9] Pengguang Chen, Shu Liu, Hengshuang Zhao, and Jiaya Jia, “Distilling knowledge via knowledge review,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 5008–5017.
* [10] Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin Yao, Xing Fan, and Chenlei Guo, “Knowledge distillation from internal representations,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2020, vol. 34, pp. 7350–7357.
* [11] Sergey Zagoruyko and Nikos Komodakis, “Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer,” arXiv preprint arXiv:1612.03928, 2016.
* [12] Sungsoo Ahn, Shell Xu Hu, Andreas Damianou, Neil D Lawrence, and Zhenwen Dai, “Variational information distillation for knowledge transfer,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 9163–9171.
* [13] Baoyun Peng, Xiao Jin, Jiaheng Liu, Dongsheng Li, Yichao Wu, Yu Liu, Shunfeng Zhou, and Zhaoning Zhang, “Correlation congruence for knowledge distillation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 5007–5016.
* [14] Frederick Tung and Greg Mori, “Similarity-preserving knowledge distillation,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1365–1374.
* [15] Wonpyo Park, Dongju Kim, Yan Lu, and Minsu Cho, “Relational knowledge distillation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3967–3976.
* [16] Jinguo Zhu, Shixiang Tang, Dapeng Chen, Shijie Yu, Yakun Liu, Mingzhe Rong, Aijun Yang, and Xiaohua Wang, “Complementary relation contrastive distillation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 9260–9269.
* [17] Yonglong Tian, Dilip Krishnan, and Phillip Isola, “Contrastive representation distillation,” arXiv preprint arXiv:1910.10699, 2019.
* [18] Lei Li, Yankai Lin, Shuhuai Ren, Peng Li, Jie Zhou, and Xu Sun, “Dynamic knowledge distillation for pre-trained language models,” in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021, pp. 379–389.
* [19] Chelsea Finn, Pieter Abbeel, and Sergey Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in International conference on machine learning. PMLR, 2017, pp. 1126–1135.
* [20] Jihao Liu, Boxiao Liu, Hongsheng Li, and Yu Liu, “Meta knowledge distillation,” arXiv preprint arXiv:2202.07940, 2022.
* [21] Alex Krizhevsky, Geoffrey Hinton, et al., “Learning multiple layers of features from tiny images,” 2009.
* [22] Ya Le and Xuan Yang, “Tiny imagenet visual recognition challenge,” CS 231N, vol. 7, no. 7, pp. 3, 2015.
* [23] Jie Song, Ying Chen, Jingwen Ye, and Mingli Song, “Spot-adaptive knowledge distillation,” IEEE Transactions on Image Processing, vol. 31, pp. 3359–3370, 2022\.
† Current address: Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
# Resonant weak-value enhancement for solid-state quantum metrology
Mahadevan Subramanian Department of Physics, Indian Institute of Technology
Bombay, Powai, Mumbai-400076, India Amal Mathew Department of Physics,
Indian Institute of Technology Bombay, Powai, Mumbai-400076, India Bhaskaran
Muralidharan Department of Electrical Engineering, Indian Institute of
Technology Bombay, Powai, Mumbai-400076, India Centre of Excellence in
Quantum Information, Computation, Science and Technology, Indian Institute of
Technology Bombay, Powai, Mumbai-400076, India<EMAIL_ADDRESS>
(August 27, 2024)
###### Abstract
Quantum metrology that employs weak-values can potentially effectuate
parameter estimation with an ultra-high sensitivity and has been typically
explored across quantum optics setups. Recognizing the importance of sensitive
parameter estimation in the solid-state, we propose a spintronic device
platform to realize this. The setup estimates a very weak localized Zeeman
splitting by exploiting a resonant tunneling enhanced magnetoresistance
readout. We establish that this paradigm offers nearly optimal performance
with a quantum Fisher information enhancement of about $10^{4}$ times that of
single high-transmissivity barriers. The obtained signal also offers a high
sensitivity in the presence of dephasing effects typically encountered in the
solid state. These results put forth definitive possibilities in harnessing
the inherent sensitivity of resonant tunneling for solid-state quantum
metrology with potential applications, especially, in the sensitive detection
of small induced Zeeman effects in quantum material heterostructures.
## I Introduction
Quantum metrology [1, 2, 3] provides the means toward high-sensitivity
parameter estimation using a quantum state as a probe, followed by
measurements, and has been demonstrated in a variety of systems [4, 5, 6, 7,
8, 9]. It is also well established that weak-values can inextricably be linked
with quantum sensing [10, 11, 12]. The use of weak-values in quantum sensing
has typically been explored using quantum optics setups [13, 14, 15, 16]. An
important metric to benchmark the quantum sensor performance is the quantum
Fisher information (QFI) [17, 18, 19, 20], which can also be linked to
weak-values [10]. The enhancement of weak-values has shown clear experimental
advantages for quantum sensing, as demonstrated in many works [21, 22, 23],
despite theoretical studies pointing out that post-selection can be
disadvantageous, mainly because of a loss in QFI [24, 25]. This discrepancy
has been explored thoroughly, with ways to surmount these disadvantages [26,
27] and methods to increase the detection probability as well [28].
Solid state setups have recently garnered a lot of attention as pivotal
testbeds for foundational quantum concepts, such as, quantum state tomography
of electrons [29, 30], entanglement-generation by Cooper pair splitting [31,
32, 33, 34, 35], and even loophole-free Bell test experiments [36, 37, 38].
Given recent advancements in quantum materials and devices, there exist
numerous applications where a quantum sensor could provide an inherent
quantum advantage, including the detection of induced Zeeman splitting in
van der Waals heterostructures [39, 40, 41, 42, 43, 44, 45] and the precise
estimation of the Rashba spin-orbit coupling parameter [46, 47], to name a
few. In this work, we demonstrate how double barrier resonant tunneling in the
solid-state can be exploited for high-sensitivity detection of localized
Zeeman splittings due to an enhanced weak-value, via a magnetoresistance
measurement. The setup we propose builds on a generalized four-terminal spin-
transport setup [43, 44, 45] where the magnetoresistance measurement is
directly related to a weak-value $A_{w}$ [48, 49], defined as a measurement
outcome of an operator $\hat{A}$ with a pre-selected state $\ket{i}$ and a
post-selected state $\ket{f}$:
$A_{w}=\frac{\langle f|\hat{A}|i\rangle}{\langle f|i\rangle}.$ (1)
We now refer to Fig. 1(a), which shows how our approach for enhancing weak-
values differs from the general approach of post-selecting $\ket{f}$ for a
small $\langle f|i\rangle$ [25]. By the nature of our setup, the only control
we have is the incident wave-vector, and, as it turns out, the choices
corresponding to the resonant tunneling wave-vectors yield the highest weak-
values despite having the largest $\langle f|i\rangle$ overlap via close-to-
unity transmission.
Figure 1: Preliminaries and magnetoresistance signal. (a) A simple schematic
(top) representing the weak-value, and the sensing task (bottom) of
estimating any localized Zeeman splitting inside the resonant tunneling
barrier. General weak-value enhancement techniques involve post-selecting a
state $\ket{f}$ with a small overlap $\langle f|i\rangle$. Our setup features
an enhancement of the weak-value $A_{w}$ by varying the initial state via a
choice of the wave-vector $k$. Contrary to typical setups, the weak-value is
enhanced via a choice of $\ket{i}$, although $\langle f|i\rangle$ is not small
in general. (b) Detailed device schematic with a description of the embedded
barrier region. The bottom gate voltage tunes a specific $k$ value via a gate
potential $V_{\text{gate}}$, and a small bias voltage $\mu_{1}-\mu_{0}$
selects the outgoing stream. (c) Device schematic for the 1-D channel. There
are four contacts, two NM (colored yellow) and two FM (colored red), with
orientations depicted by the blue arrows. Current readings are obtained from
the contact $FM2$. (d) A representative result depicting the signal $D_{Y}$ as
a function of $k$, plotted along with the QFI, shown as
$\log_{10}\mathcal{H}$. We notice three characteristic peaks at the $k$ values
for which resonant tunneling occurs, namely $k_{1}$, $k_{2}$ and $k_{3}$,
which are also the values where the QFI takes large values.
Our approach provides means to enhance both the weak-values in tandem with
increasing sensitivity, via an enhancement in the QFI. We make use of resonant
tunneling energy channels [50] using a double-barrier setup [51, 52, 53],
thereby allowing Fabry-Pérot resonances at specific energies. The schematic of
the double barrier device is described in Fig. 1 (b) and Fig. 1 (c). We also
quantify our design with the QFI and further analyze the effects of phase
breaking [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 49] that are typically
detrimental in such solid-state systems. Our results put forth definitive
possibilities in harnessing the inherent sensitivity of resonant tunneling for
solid-state quantum metrology with potential applications, especially, in the
sensitive detection of small induced Zeeman effects in quantum material
heterostructures.
## II Setup and Formulation
### II.1 The Magnetoresistive setup
The device setup schematized in Fig. 1(b) and Fig. 1(c) consists of a long 1-D
nanowire with an embedded barrier region, facilitated by electrostatic gating.
The embedded region consists of three rectangular barriers with heights
$V_{B2}$, $V_{B1}$ and $V_{B2}$ with the total width being $d_{2}$ and the
width of the middle region being $d_{1}$. The middle region features a
magnetic field $B$ along $\hat{z}$, which models for instance a weak Zeeman
splitting that is to be estimated precisely, denoted by $V_{Z}=g\mu_{B}B$.
This multi-terminal setup is a 1-D proof-of-concept which is quite realizable
using 1-D nanowires or 2-D structures with multiple gates [43, 44, 45] and has
been quite intensely pursued [43, 44, 45], especially in situations where
induced Zeeman effects occur in localized regions.
We can now define the channel Hamiltonian as follows
$\displaystyle\hat{H}=\begin{cases}\left(\frac{p^{2}}{2m}+V_{B1}\right)\mathbb{I}-\frac{V_{Z}}{2}\sigma_{z}&|y|\leq\frac{d_{1}}{2}\\\
\left(\frac{p^{2}}{2m}+V_{B2}\right)\mathbb{I}&\frac{d_{1}}{2}<|y|\leq\frac{d_{2}}{2}\\\
\left(\frac{p^{2}}{2m}\right)\mathbb{I}&|y|>\frac{d_{2}}{2}\end{cases}.$ (2)
The Hamiltonian can be written as
$\hat{H}=\hat{H}_{0}\mathbb{I}+\theta\hat{H}_{1}\sigma_{Z}$ where
$\hat{H}_{0}$ and $\hat{H}_{1}$ are spatial Hamiltonians and
$\theta=V_{Z}/2t_{0}$. The Zeeman splitting is only in the region where
$\hat{H}_{1}$ is non-zero. As depicted in Fig. 1(a), the incident beam of
electrons is $+\hat{x}$ spin polarized. The expectation value
$\langle\sigma_{Y}\rangle$ gives us a signal in relation to $\theta$ that
depicts the precession of the spin. Our simulations are conducted with the
following parameters: hopping energy $t_{0}=3.875$ eV, $d_{1}=40$ nm and
$d_{2}=80$ nm.
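The piecewise Hamiltonian of Eq. (2) can be discretized on a tight-binding lattice; a minimal sketch (illustrative lattice sizes and toy energies, not the paper's actual grid) is:

```python
import numpy as np

# Discretize Eq. (2) on a 1-D lattice; all numbers here are toy values.
t0 = 1.0                      # hopping energy
N = 40                        # lattice sites; index 0..N-1, centre at N//2
VB1, VB2, VZ = 0.3 * t0, 0.5 * t0, 1e-3 * t0
d1, d2 = 10, 20               # widths of inner/outer regions in sites

sz = np.diag([1.0, -1.0])     # Pauli sigma_z
I2 = np.eye(2)

y = np.arange(N) - N // 2
# potential profile of Eq. (2): VB1 inside |y|<=d1/2, VB2 in the outer shell
V = np.where(np.abs(y) <= d1 // 2, VB1,
    np.where(np.abs(y) <= d2 // 2, VB2, 0.0))
# Zeeman term -VZ/2 * sigma_z acts only in the middle region
zeeman = np.where(np.abs(y) <= d1 // 2, -VZ / 2, 0.0)

# kinetic part: onsite 2*t0, nearest-neighbour hopping -t0 (discretized p^2/2m)
H0 = np.diag(2 * t0 + V) - t0 * (np.eye(N, k=1) + np.eye(N, k=-1))
H = np.kron(H0, I2) + np.kron(np.diag(zeeman), sz)
```

The `kron` ordering places the two spin components of each site in adjacent rows, so the Zeeman term splits the onsite energies only inside the middle region.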
We use two normal metallic contacts (NM) on the ends of the channel to
manipulate reflections in order to make the correct post-selection and the
detection of the transport signal feasible [49]. The ferromagnetic contact
$FM1$ injects $\hat{x}$-polarized electrons under the applied bias. The
current readouts are taken at the ferromagnetic contact $FM2$.
The alignment of $FM2$ is along $\pm\hat{y}$. We denote the current readout
from $FM2$ in $\pm\hat{y}$ as $I_{FM2}^{\pm}$.
### II.2 Weak-values and sensing
The estimation task at hand is described in Fig. 1(a). Using pre-selection and
post-selection of quantum states, one can obtain measurement outcomes outside
of the eigenspectrum which can be explained using the concept of weak-values
[64, 65]. This treatment uses a quantum mechanical pointer which gives the
measurement outcomes after being coupled to the system using a von Neumann
interaction scheme. The relevance of these results has been discussed by
examining how a weak measurement cannot be treated as a measurement in the
true sense [66].
A simpler treatment of weak-values can be found via a perturbative approach
[3, 12, 67]. For an operator $\hat{A}$, the $n^{\mathrm{th}}$ order weak-value
is defined as $A^{n}_{w}=\langle f|\hat{A}^{n}|i\rangle/\langle
f|i\rangle$, where $\ket{i}$ is the initial state and the post-selection is
done with state $\ket{f}$. We define $P=\langle f|i\rangle$ and
$P_{\epsilon}=\langle f|\hat{U}|i\rangle$ where
$\hat{U}=\exp(\iota\epsilon\hat{A})$. Treating $\epsilon$ as a small
parameter, we perform a Taylor expansion of $\hat{U}$ and obtain
$\frac{P_{\epsilon}}{P}=1+2\epsilon\imaginary A_{w}-\epsilon^{2}[\real
A_{w}^{2}-|A_{w}|^{2}]+\mathcal{O}(\epsilon^{3}).$ (3)
Figure 2: Resonant enhancement in the transport signal. (a) Contour plot
depicts the dependence of the signal $D_{Y}$ with respect to the Zeeman
splitting energy $V_{Z}$. The signal is significantly amplified at the
resonant tunnelling wave-vectors, as can be seen by the two sharp peaks in the
contour. (b) The values of the signal at the resonant wave-vectors show a very
large amplification for small values of $V_{Z}$.
To ensure the validity of the weak interaction regime, the quantity
$2\epsilon\imaginary A_{w}$ must be much larger in magnitude than the sum of
all the higher order corrections that follow, which puts a limit to increasing
the sensitivity using weak-values [65].
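A quick numerical illustration of Eq. (1) and of how weak-values can lie outside the eigenspectrum, using a toy spin-1/2 pre/post-selection rather than the device's scattering states:

```python
import numpy as np

a = 0.7                                        # mixing angle (illustrative)
i_state = np.array([np.cos(a), np.sin(a)])     # pre-selected |i>
f_state = np.array([np.cos(a), -np.sin(a)])    # post-selected |f>
sigma_z = np.diag([1.0, -1.0])                 # A = sigma_z, eigenvalues ±1

# weak-value of Eq. (1); not confined to the eigenvalue range [-1, 1]
A_w = (f_state.conj() @ sigma_z @ i_state) / (f_state.conj() @ i_state)
```

Here $\langle f|\sigma_z|i\rangle=1$ while $\langle f|i\rangle=\cos 2a\approx 0.17$, so $A_w=1/\cos 2a\approx 5.9$, well outside the eigenspectrum of $\sigma_z$.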
### II.3 Transport formulation
To model the terminal current readout at $FM2$, we employ the Keldysh non-
equilibrium Green’s function (NEGF) technique [55, 68, 56, 69], whose specific
implementation for related setups is elaborated in the Appendix of Ref. [49].
We go over the brief procedure as follows. The electron correlator is defined
as
$\mathbf{G}^{n}=-i\mathbf{G}^{<}=\mathbf{G}^{r}\boldsymbol{\Sigma}^{in}\mathbf{G}^{a}$,
where $\mathbf{G}^{<}$ is the lesser Green’s function. Here, the retarded
Green’s function, $\mathbf{G}^{r}=[E-\hat{H}-\boldsymbol{\Sigma}]^{-1}$, where
$\hat{H}$ is the channel Hamiltonian, $\boldsymbol{\Sigma}$ is the sum of all
self-energies, and $\boldsymbol{\Sigma}^{in}$ is the in-scattering function.
The quantity $\mathbf{G}^{a}$ is the hermitian conjugate of $\mathbf{G}^{r}$
[62]. The terminal currents are then defined as
$I^{\pm}_{FM2}=\text{Tr}(\Gamma^{\pm}_{FM2}\mathbf{G}^{n})$. For a
$\pm\hat{y}$-polarized contact, the expression for the broadening function
$\Gamma^{\pm}_{FM2}$ is a matrix that is only non-zero in the submatrix for
the position of the $FM2$ contact on the channel where it takes on value
$-t_{0}e^{ika}(\mathbb{I}+\sigma_{Y})/2$. Given that
$\rho=\mathbf{G}^{n}/\text{Tr}(\mathbf{G}^{n})$, current measurements of
$I^{\pm}_{FM2}$ are proportional to the probabilities for
$\pm\hat{y}$-polarization at the position of the $FM2$ contact as is apparent
from the form of its expression.
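The NEGF machinery above can be illustrated with a minimal spinless 1-D example (toy parameters; the actual device includes spin and four contacts): the retarded Green's function is inverted with the two contact self-energies, and the coherent transmission follows from the broadening functions.

```python
import numpy as np

t0 = 1.0             # hopping energy (arbitrary units, not the paper's t0)
N = 60               # number of channel sites
V = np.zeros(N)
V[25:35] = 0.4 * t0  # a toy square barrier inside the channel

# tight-binding channel Hamiltonian: onsite 2*t0 + V, nearest-neighbour -t0
H = np.diag(2 * t0 + V) - t0 * (np.eye(N, k=1) + np.eye(N, k=-1))

def transmission(E):
    """Coherent two-terminal transmission T(E) = Tr(Gamma_L G^r Gamma_R G^a)."""
    ka = np.arccos(1 - E / (2 * t0))   # lead dispersion E = 2 t0 (1 - cos ka)
    SigmaL = np.zeros((N, N), complex)
    SigmaR = np.zeros((N, N), complex)
    # surface self-energy of a semi-infinite 1-D lead, -t0 exp(i k a)
    SigmaL[0, 0] = SigmaR[-1, -1] = -t0 * np.exp(1j * ka)
    Gr = np.linalg.inv(E * np.eye(N) - H - SigmaL - SigmaR)  # retarded GF
    GammaL = 1j * (SigmaL - SigmaL.conj().T)                 # broadenings
    GammaR = 1j * (SigmaR - SigmaR.conj().T)
    return float(np.real(np.trace(GammaL @ Gr @ GammaR @ Gr.conj().T)))

Ts = [transmission(E) for E in np.linspace(0.05, 1.5, 40)]
```

For a single propagating channel the transmission is bounded by unity, which provides a simple sanity check on the self-energies.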
We now define our primary magnetoresistance signal, $D_{Y}$, which is obtained
out of the current readouts from the contact $FM2$ when it is $\pm y$
polarized and defined as
$D_{Y}=\frac{I^{+}_{FM2}-I^{-}_{FM2}}{I^{+}_{FM2}+I^{-}_{FM2}}.$ (4)
From our physical understanding of the current measurements, the signal
$D_{Y}$ is proportional to the average value $\langle\sigma_{Y}\rangle$. Let
$\ket{\psi}$ be an eigenstate of $\hat{H}_{0}$ with an energy $\varepsilon(k)$
and $\ket{\psi^{\pm}}$ is the scattered wavefunction obtained for Hamiltonian
$\hat{H}_{0}\pm\hat{H}_{1}$. The scattered waves can be calculated using the
equilibrium Green’s function $\hat{G}_{0}$ evaluated from $\hat{H}_{0}$ [70,
71]. We define $\ket{f}$ as the momentum eigenstate with wave-vector $k$
multiplied by a Heaviside-step function to make it zero everywhere except to
the right of the barrier. By taking the Born approximation, the first order
approximation for $D_{Y}$ is as follows (see Appendix A for a more detailed
discussion) :
$D_{Y}=-\frac{V_{Z}}{2t_{0}}\imaginary\left(\dfrac{\langle
f|\hat{G}_{0}\hat{H}_{1}|\psi\rangle}{\langle
f|\psi\rangle}\right)+\mathcal{O}(\theta^{2}).$ (5)
This elucidates that amplifying the imaginary part of the weak-value for
$\hat{G}_{0}\hat{H}_{1}$ can boost the sensitivity of $D_{Y}$ with respect to
$\theta$. This weak-value also has physical relevance as a form of the
tunneling time as explored in [48]. It has been established that
$D_{Y}=-\omega_{L}\tau_{Y}$ where $\tau_{Y}$ is a real part of the weak-value
of the barrier potential [48, 49] which can be proven to be equivalent to (5).
This notion can be generalized in the case of more complicated barriers which
would only change the Green’s function $\hat{G}_{0}$, while $\hat{H}_{1}$
takes into account the localized Zeeman splitting.
### II.4 Quantum Fisher information
The task of quantum sensing is fundamentally a parameter estimation task and
the QFI is a very relevant figure-of-merit [17, 18]. In a general estimation
task, a set of measurements are performed on a parameterized state to retrieve
information on the parameters. We focus on the single parameter case, relevant
to our setup. The symmetric logarithm derivative [20, 72], denoted as
$L_{\theta}$, for the estimation task for a parameterized state
$\rho_{\theta}$ is defined by the equation
$\partial_{\theta}\rho_{\theta}=\frac{1}{2}(L_{\theta}\rho_{\theta}+\rho_{\theta}L_{\theta})$.
The QFI denoted by $\mathcal{H}$, is defined as
$\mathcal{H}=\text{Tr}(L_{\theta}^{2}\rho_{\theta})$, where $\mathcal{H}$ will
always be bounded above by the maximum eigenvalue of the operator
$L_{\theta}^{2}$.
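Equivalently to solving the Lyapunov equation for $L_{\theta}$, the QFI can be evaluated from the spectral decomposition of $\rho_{\theta}$; a small sketch, checked against the known pure-state value with a toy qubit state rather than the NEGF density matrix:

```python
import numpy as np

def qfi(rho, drho, tol=1e-12):
    """QFI from the spectral decomposition of rho:
    H = sum_{ij} 2 |<i|d_theta rho|j>|^2 / (lam_i + lam_j),
    skipping pairs with lam_i + lam_j ~ 0. This equals Tr(L_theta^2 rho)
    with L_theta the symmetric logarithmic derivative."""
    lam, U = np.linalg.eigh(rho)
    d = U.conj().T @ drho @ U      # derivative in the eigenbasis of rho
    out = 0.0
    for i in range(len(lam)):
        for j in range(len(lam)):
            s = lam[i] + lam[j]
            if s > tol:
                out += 2 * abs(d[i, j]) ** 2 / s
    return out

# sanity check: pure qubit |psi(t)> = (cos t/2, sin t/2) has QFI = 1
t = 0.3
c, s = np.cos(t / 2), np.sin(t / 2)
rho = np.array([[c * c, c * s], [c * s, s * s]])
drho = 0.5 * np.array([[-np.sin(t), np.cos(t)], [np.cos(t), np.sin(t)]])
```

For a pure state this reduces to the familiar $4(\langle\partial\psi|\partial\psi\rangle-|\langle\psi|\partial\psi\rangle|^{2})$.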
Given $\partial_{\theta}\rho_{\theta}$ and $\rho_{\theta}$, we can find
$L_{\theta}$ as a solution to a continuous Lyapunov equation [72]. As
established in the previous section, we can write the density matrix
$\rho=\mathbf{G}^{n}/\text{Tr}(\mathbf{G}^{n})$. We define the parameter to
estimate as $\theta$ where $V_{Z}=\theta t_{0}$. From this, we can use the
NEGF equations to obtain the expressions
$\displaystyle\tilde{L}=H_{1}\mathbf{G}^{r}-\frac{\text{Tr}(H_{1}\mathbf{G}^{r}\mathbf{G}^{n})}{\text{Tr}(\mathbf{G}^{n})}\mathbb{I},$
(6)
$\displaystyle\partial_{\theta}\rho=\tilde{L}\rho+\rho\tilde{L}^{\dagger}.$
(7)
The classical Fisher information (CFI) [19] for this parametrized state can
also be obtained by using the current measurements $I_{FM2}^{\pm}$ to define a
classical probability distribution since currents at the $FM2$ contact behave
like a positive operator-valued measure (POVM) for measurements along
$+\hat{y}$ and $-\hat{y}$ (see discussion in Appendix C). Since we obtain
current measurements from the contact, they will be in the ratio of the
probabilities obtained from this POVM, which can be used for ascertaining the
CFI. More discussions on obtaining the CFI and QFI can be found in Appendix B
and Appendix D.
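For a two-outcome readout like $I^{\pm}_{FM2}$, the CFI reduces to the familiar sum $\mathcal{H}_{c}=\sum_{\pm}(\partial_{\theta}p_{\pm})^{2}/p_{\pm}$; a toy sketch with an illustrative linear probability model (not the device's actual probabilities):

```python
import numpy as np

def cfi_two_outcome(p_of_theta, theta, h=1e-6):
    """Classical Fisher information of a two-outcome POVM, with the
    derivative of the outcome probabilities taken numerically."""
    p = p_of_theta(theta)
    dp = (p_of_theta(theta + h) - p_of_theta(theta - h)) / (2 * h)
    return np.sum(dp ** 2 / p)

# toy model: normalized +/- currents responding linearly to theta
p_model = lambda th: np.array([(1 + th) / 2, (1 - th) / 2])
Hc = cfi_two_outcome(p_model, 0.3)   # analytic value for this model: 1/(1 - 0.3^2)
```

For this linear model the analytic CFI is $1/(1-\theta^{2})$, which the numerical evaluation reproduces.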
We denote the CFI as $\mathcal{H}_{c}$ which is dependent on the POVM set that
is chosen. The QFI can be equivalently defined as the maximal CFI over all
POVMs, hence $\mathcal{H}_{c}\leq\mathcal{H}$ [17]. The quantum Cramér-Rao
bound [73, 74] gives us a minimum bound on $\Delta\hat{\theta}$ where
$\hat{\theta}$ is an unbiased estimator for $\theta$ for $M$ repetitions of
the measurements. Picking a better POVM will result in a better
$\mathcal{H}_{c}$, which gives a better bound on $\Delta\hat{\theta}$, as seen
by the inequality
$(\Delta\theta)^{2}\geq\frac{1}{M\mathcal{H}_{c}}\geq\frac{1}{M\mathcal{H}}.$
(8)
Another common metric for the performance is the signal-to-noise ratio
[75]. It is linked to the QFI via the measure
$R_{\theta}=\theta^{2}/(\Delta\theta)^{2}$. From (8), we get
$R_{\theta}\leq\theta^{2}\mathcal{H}$. The quantity $\theta^{2}\mathcal{H}$
is also referred to as the estimability of the parameter [18]. Our setup has
practically unlimited repeated measurements since we obtain steady state
current measurements. Since our signal is proportional to our parameter and
the measurements are uncorrelated, $R_{\theta}$ would scale linearly with $N$
with $N$ probes. In what is known as the Heisenberg limit, the scaling of
$R_{\theta}$ goes as $N^{2}$ which is not possible here since that would
require correlations between the probes [76, 77, 78].
Figure 3: QFI and CFI of the parametrized state in the setup. (a) Comparison
between the QFI of the resonant tunneling setup (labelled
$\mathcal{H}_{\mathrm{resonant}}$) to the maximum possible QFI for the same
setup (labelled $\max(\mathcal{H}_{\mathrm{resonant}})$). This demonstrates
that the QFI approaches closely the limiting value close to a resonant
tunneling wave-vector, say, $k_{2}$ (see inset). (b) Comparison of QFI for
resonant tunneling setup to the QFI for the single barrier setup (labelled
$\mathcal{H}_{\mathrm{single}}$). It can be seen that the resonant tunneling
setup clearly outperforms the single barrier setup. (c) Comparison between the
CFI (labelled $\mathcal{H}_{c,\mathrm{resonant}}$) and the QFI of the resonant
tunneling setup. We notice that the CFI almost approaches the QFI for resonant
tunneling wave-vectors (see inset for $k_{1}$). (d) Comparison between the CFI
of the resonant tunneling setup and the single barrier setup
$\mathcal{H}_{c,\mathrm{single}}$, again demonstrating that the resonant
tunneling setup outperforms the single barrier setup even here. Figure 4:
Effects of phase relaxation and momentum relaxation. (a) Results for the
resonant tunnelling setup with non-zero values of $D_{P}$, which cause pure-
phase dephasing, and (b) results with non-zero values of $D_{M}$, which cause
momentum and phase relaxation.
## III Results
### III.1 Response of the sensor and the quantum Fisher information
The signal $-D_{Y}$ obtained for a Zeeman splitting of $V_{Z}=t_{0}/5000$ is
depicted in Fig. 1(d) and this shows us three values of the wave-vector $k$
where the signal is very clearly amplified. The wave-vector $k_{3}$ has a
higher energy than $V_{B2}$ which does not correspond to resonant tunneling.
Additionally, we plot the QFI $\mathcal{H}$ and note that at the same values
of $k$, the QFI is much larger, which ascertains that they can perform better
sensing as well. We further explore how the signal $-D_{Y}$ varies with
$V_{Z}$ to understand its response in Fig. 2.
The three values of the wave-vector $k$ where the signal has a much higher
proportionality with the Zeeman splitting are depicted in Fig. 2(a). As we
would expect for a small $V_{Z}$, the signal $D_{Y}$ shows a linear response,
which is captured in Fig. 2(b) for $k_{1}$, $k_{2}$ and $k_{3}$. However, for
values of $V_{Z}>10^{-3}$ eV, it can be seen that the response stops being
linear, as can be noted from Fig. 2(a). The value of $-D_{Y}$ actually begins
to dip for $k_{1}$ after it hits the maximum possible value of 1.
Understanding the response in this range would require taking into account the
effects of higher orders of $\theta=V_{Z}/t_{0}$ in our signal [3, 12, 67].
We also compare the QFI with both the CFI and the maximum possible QFI in Fig.
3. From these results, we can infer that at the resonant tunneling wave-
vectors, $\mathcal{H}_{c}$ is closest to $\mathcal{H}$, which in turn is
closest to the maximum value it can possibly attain (see Fig. 3(a) and Fig.
3(c)). Another inference is that our modified barrier setup outperforms the
single barrier setup by a very large margin at the resonant tunneling wave-
vectors (see Fig. 3(b) and Fig. 3(d)). This shows that our sensor has the
potential to give estimates with a near optimal error margin.
### III.2 Channels with dephasing
Solid-state systems are prone to dephasing interactions, typically categorized
as pure-phase relaxation, momentum-and-phase relaxation, and spin relaxation.
Pure-phase relaxation usually arises from electron-electron interactions,
momentum-and-phase relaxation from fluctuating local non-magnetic impurities,
and spin relaxation from magnetic impurities. These can be accounted for in
the Keldysh NEGF method by adding the appropriate self-energies [54, 55, 56,
57, 58, 59, 60, 61, 62, 63, 49].
We define a scattering self-energy and the related in-scattering self-energy
in the following matrix form [62, 79]
$\displaystyle[\boldsymbol{\Sigma}^{r/<}_{s}]_{ij}$
$\displaystyle=D_{ijkl}[\mathbf{G}^{r/<}]_{kl}.$ (9)
Figure 5: Magnetoresistance signal with pure-phase relaxation and momentum relaxation. The graph shows the ratio between $D_{Y}$ and $V_{Z}$, indicating that the response is not perfectly linear, unlike in the absence of dephasing. The response becomes nearly linear above a certain value of $V_{Z}$.
Here $D_{ijkl}$ is a rank-4 tensor which describes the spatial correlation
between impurity scattering potentials [62]. For pure-phase dephasing
interactions, the tensor takes the following form characterized by interaction
strength $D_{P}$ as
$D_{ijkl}=D_{P}\delta_{ik}\delta_{jl}.$ (10)
Here $\delta_{ij}$ is the Kronecker delta function. The corresponding tensor
for momentum dephasing with strength $D_{M}$ is as follows
$D_{ijkl}=D_{M}\delta_{ij}\delta_{ik}\delta_{jl}.$ (11)
The self-energies are then evaluated under the self-consistent Born approximation [62]. It must be noted that neither of these interactions affects spin, and hence they do not affect the measurement setup. Accounting for spin dephasing, by contrast, would destroy the signal, since the setup depends heavily on spin coherence [49]. Figure 4 depicts the simulation results for both pure-phase and momentum dephasing. Both effects broaden the peaks, as would be expected, but with important qualitative differences.
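As a toy illustration (our own sketch, not the authors' code), the self-consistent Born iteration for these dephasing self-energies can be run on a 1-D tight-binding chain with two leads. The tensor contraction of Eq. (9) reduces to elementwise masks: pure-phase dephasing acts on the full Green's-function matrix, momentum dephasing on its diagonal only. All numerical values (chain length, energy, hopping) are illustrative assumptions.

```python
import numpy as np

N, t0, E = 20, 1.0, 0.5          # sites, hopping, probe energy (assumed values)
H = -t0 * (np.eye(N, k=1) + np.eye(N, k=-1))
eta = 1e-6

def lead_sigma(E, t0):
    # retarded self-energy of a semi-infinite 1-D lead: Sigma = -t0 * exp(i k a)
    ka = np.arccos(np.clip(-E / (2 * t0), -1.0, 1.0))
    return -t0 * np.exp(1j * ka)

def G_retarded(D_P=0.0, D_M=0.0, n_iter=200):
    """Retarded Green's function with dephasing under the SCBA.

    Sigma_s = D_P * Gr  (pure phase, full matrix)
            + D_M * diag(Gr)  (momentum relaxation, diagonal only)."""
    Sig_s = np.zeros((N, N), dtype=complex)
    SigL = np.zeros((N, N), dtype=complex); SigL[0, 0] = lead_sigma(E, t0)
    SigR = np.zeros((N, N), dtype=complex); SigR[-1, -1] = lead_sigma(E, t0)
    for _ in range(n_iter):                      # SCBA iteration
        Gr = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - SigL - SigR - Sig_s)
        Sig_new = D_P * Gr + D_M * np.diag(np.diag(Gr))
        if np.max(np.abs(Sig_new - Sig_s)) < 1e-10:
            break
        Sig_s = Sig_new
    return Gr

Gr_clean = G_retarded()
Gr_deph = G_retarded(D_P=3e-6, D_M=1e-4)   # strengths quoted in the text
```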
Figure 5 shows the results of simulating a channel with
$D_{M}=10^{-4}t_{0}^{2}$ and $D_{P}=3\times 10^{-6}t_{0}^{2}$, which
correspond to typical impurity strengths encountered in 1-D channels. The linear behavior breaks down for Zeeman splittings below $10^{-9}$ eV. The slopes of this linear response are reduced compared with those of a clean channel, but the reduction is modest and the slopes remain of the same order, as can be deduced from Fig. 5.
## IV Conclusion
We proposed a spintronic device platform to realize weak-value enhanced
quantum sensing. The setup estimates a very weak localized Zeeman splitting by
exploiting a resonant tunneling enhanced magnetoresistance readout. We
established that this paradigm offers a nearly optimal performance with a
quantum Fisher information enhancement of about $10^{4}$ times that of single
high-transmissivity barriers. The obtained signal also offers a high
sensitivity in the presence of dephasing effects typically encountered in the
solid state. These results, we believe, point to concrete possibilities for harnessing the inherent sensitivity of resonant tunneling for solid-state
quantum metrology with potential applications, especially, in the sensitive
detection of small induced Zeeman effects [43, 44, 45] in quantum material
heterostructures.
## Acknowledgements
The authors acknowledge Kerem Camsari, Saroj Dash and Sai Vinjanampathy for
useful discussions. The author BM wishes to acknowledge the financial support
from the Science and Engineering Research Board (SERB), Government of India,
under the MATRICS grant.
## Appendix A 1-D scattering and weak-values
We define the Hamiltonian of electrons in terms of spatial Hamiltonians
$\hat{H}_{0}$ and $\hat{H}_{1}$ and some small dimensional parameter $\theta$
as
$\hat{H}=\hat{H}_{0}\otimes\mathbb{I}+\theta\hat{H}_{1}\otimes\sigma_{Z}.$
(12)
Let us look at the spectrum of scattering states with this Hamiltonian. We
define a purely spatial scattering state $\ket{\psi}$ as follows
$\hat{H}_{0}\ket{\psi}=\varepsilon\ket{\psi}.$ (13)
This will get scattered further due to the $\theta H_{1}\otimes\sigma_{Z}$
part. Let us now define $\ket{\psi_{\pm}}$ as the spatially scattered states
for the up-spin and the down-spin channels respectively, expressed as
$\ket{\psi_{\pm}}=\ket{\psi}\pm\dfrac{\theta\hat{H}_{1}}{\varepsilon-\hat{H}_{0}}\ket{\psi_{\pm}}=\ket{\psi}\pm\hat{G}_{0}(\varepsilon)\theta\hat{H}_{1}\ket{\psi_{\pm}}.$
(14)
Here $\hat{G}_{0}$ is the equilibrium isolated Green’s function of the
Hamiltonian $\hat{H}_{0}$. By convention, the Green’s function is defined as $\hat{G}_{0}(\varepsilon)=[\varepsilon-\hat{H}_{0}\pm i\eta]^{-1}$, with the plus and minus signs giving the retarded and advanced Green’s functions, respectively. This choice is largely irrelevant to how we use the operator here, since it never acts directly on an eigenstate. For definiteness, all mentions of $\hat{G}_{0}$ refer to the retarded Green’s function, which has the more direct physical relevance [71].
We now define $\ket{f}$ as the part of $\ket{\psi}$ that lies beyond the scattering region [48]. If we assume that the incident wave is
$\ket{\psi}\otimes\ket{+\hat{x}}$, the scattered wave is
$(\ket{\psi_{+}}\ket{+\hat{z}}+\ket{\psi_{-}}\ket{-\hat{z}})/\sqrt{2}$. We can
evaluate the expectation value $\langle\sigma_{y}\rangle$ for the part on the
left of the scattering section (including barriers in $\hat{H}_{0}$) as
follows
$\langle\sigma_{Y}\rangle=\dfrac{i\langle\psi_{-}|f\rangle\langle
f|\psi_{+}\rangle-i\langle\psi_{+}|f\rangle\langle
f|\psi_{-}\rangle}{|\langle\psi_{+}|f\rangle|^{2}+|\langle\psi_{-}|f\rangle|^{2}}.$
(15)
The problem of 1-D scattering has been dealt with in more depth in Ref. [80].
To make a qualitative argument for the proportionality to the weak-value, we can use the Born approximation. This gives
$\ket{\psi_{\pm}}\approx\ket{\psi}\pm\theta\hat{G}_{0}\hat{H}_{1}\ket{\psi}$
which then helps to simplify (15) to the following form
$\displaystyle\langle\sigma_{Y}\rangle$
$\displaystyle=i\theta\frac{\langle\psi|f\rangle\langle
f|\hat{G}_{0}\hat{H}_{1}|\psi\rangle-\langle\psi|\hat{H}_{1}\hat{G}^{\dagger}_{0}|f\rangle\langle
f|\psi\rangle}{\langle\psi|f\rangle\langle
f|\psi\rangle+\theta^{2}\langle\psi|\hat{H}_{1}\hat{G}^{\dagger}_{0}|f\rangle\langle
f|\hat{G}_{0}\hat{H}_{1}|\psi\rangle}$ (16)
$\displaystyle=-2\theta\imaginary\left(\dfrac{\langle
f|\hat{G}_{0}\hat{H}_{1}|\psi\rangle}{\langle
f|\psi\rangle}\right)+\mathcal{O}(\theta^{2}).$
Notably, the first-order term is simply the weak-value of $\hat{G}_{0}\hat{H}_{1}$. To evaluate this, we note that $\hat{H}_{1}$ is non-zero only in the region of the middle barrier. Acting with the Green’s function on $\bra{f}$, we finally obtain an integral restricted to the spatial region of the middle barrier, as described in [48].
An important insight from this calculation is that, owing to the form of $\hat{G}_{0}\hat{H}_{1}$ for 1-D barriers, the weak-value can be amplified for a choice of $\ket{\psi}$ with full transmission (hence maximum $\langle f|\psi\rangle$), as is also observed in our results.
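This behavior can be illustrated with a minimal numerical toy model (our own sketch, not from the references): plane-wave scattering through a piecewise-constant double-barrier potential, computed with transfer matrices, where the middle barrier is shifted by $\pm\theta$ for the two spin channels and $\langle\sigma_{Y}\rangle$ is built from the transmission amplitudes as in Eq. (15). All parameter values are hypothetical, with units $\hbar=m=1$.

```python
import numpy as np

def transmission(E, segments):
    """Transmission amplitude through a piecewise-constant potential.
    segments: list of (V, width); free regions before and after."""
    k0 = np.sqrt(2 * E + 0j)
    M = np.eye(2, dtype=complex)
    k_prev = k0
    for V, w in segments:
        k = np.sqrt(2 * (E - V) + 0j)
        # interface matrix (continuity of psi and psi')
        I = 0.5 * np.array([[1 + k_prev / k, 1 - k_prev / k],
                            [1 - k_prev / k, 1 + k_prev / k]])
        # free propagation across width w
        P = np.array([[np.exp(1j * k * w), 0], [0, np.exp(-1j * k * w)]])
        M = P @ I @ M
        k_prev = k
    I = 0.5 * np.array([[1 + k_prev / k0, 1 - k_prev / k0],
                        [1 - k_prev / k0, 1 + k_prev / k0]])
    M = I @ M
    return 1 / M[1, 1]        # det(M) = 1 for free regions on both sides

def sigma_y_signal(theta, E=0.45, V1=0.8, V2=0.6, d1=0.6, d2=0.4):
    # double-barrier structure; middle barrier shifted by +-theta per spin
    def seg(s):
        return [(V2, d2), (0.0, 1.0), (V1 + s * theta, d1), (0.0, 1.0), (V2, d2)]
    tp = transmission(E, seg(+1))
    tm = transmission(E, seg(-1))
    # Eq. (15) with <psi_pm|f> proportional to the transmission amplitudes
    return (1j * np.conj(tm) * tp - 1j * np.conj(tp) * tm).real \
        / (abs(tp) ** 2 + abs(tm) ** 2)
```

For small $\theta$ the signal is linear in $\theta$, as in the Born-approximation argument above.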
## Appendix B Quantum Fisher information for the setup
In this section, we work out the expression for the quantum Fisher information obtainable from our resonant tunneling setup. The Hamiltonian is defined in equation (2). We wish to estimate $\theta$ in order to measure the Zeeman splitting. This problem has also been studied in the context of quantum walks for 1-D scattering [70]. We define the following position Hamiltonians
$\hat{H}_{0}=\begin{cases}\frac{p^{2}}{2m}+V_{B1}&|y|\leq\frac{d_{1}}{2}\\\
\frac{p^{2}}{2m}+V_{B2}&\frac{d_{1}}{2}<|y|\leq\frac{d_{2}}{2}\\\
\frac{p^{2}}{2m}&|y|>\frac{d_{2}}{2}\end{cases},$ (17)
$\hat{H}_{1}=\begin{cases}t_{0}&|y|\leq\frac{d_{1}}{2}\\\
0&|y|>\frac{d_{1}}{2}\end{cases}.$ (18)
For spin-up (or spin-down) particles, the effective Hamiltonian is $H_{0}+\theta H_{1}$ (or $H_{0}-\theta H_{1}$). The electron correlation function $\mathbf{G}^{n}$, which encodes the number of particles in the channel, is defined in terms of the retarded and advanced Green’s functions as $\mathbf{G}^{n}=\mathbf{G}^{r}\boldsymbol{\Sigma}_{in}\mathbf{G}^{a}$. We obtain $\rho=\mathbf{G}^{n}/\text{Tr}(\mathbf{G}^{n})$, and taking a partial derivative with respect to our parameter gives:
$\frac{\partial\mathbf{G}^{n}}{\partial\theta}=\frac{\partial\mathbf{G}^{r}}{\partial\theta}\boldsymbol{\Sigma}_{in}\mathbf{G}^{a}+\mathbf{G}^{r}\boldsymbol{\Sigma}_{in}\frac{\partial\mathbf{G}^{a}}{\partial\theta}$
(19)
Now we must note that the retarded Green’s function is defined as follows
$\mathbf{G}^{r}=[(E+i\eta)\mathbb{I}-H_{0}\otimes\mathbb{I}_{2}-\theta
H_{1}\otimes\sigma_{z}-\boldsymbol{\Sigma}_{L}-\boldsymbol{\Sigma}_{R}-\boldsymbol{\Sigma}_{F1}-\boldsymbol{\Sigma}_{F2}]^{-1}.$
(20)
From this we can find the partial derivative of $\mathbf{G}^{r}$ with respect to the parameter $\theta$:
$\frac{\partial\mathbf{G}^{r}}{\partial\theta}=\mathbf{G}^{r}\,(H_{1}\otimes\sigma_{z})\,\mathbf{G}^{r}$
(21)
Hence, if we define $L=\mathbf{G}^{r}(H_{1}\otimes\sigma_{z})$, we can clearly see that the following holds
$\frac{\partial\mathbf{G}^{n}}{\partial\theta}=L\mathbf{G}^{n}+\mathbf{G}^{n}L^{\dagger}$
(22)
We must now also account for the fact that $G^{n}$ must be normalized to give
the expression of the density matrix.
$\displaystyle\begin{split}\frac{\partial\rho}{\partial\theta}&=\frac{\partial}{\partial\theta}\frac{\mathbf{G}^{n}}{\text{Tr}(\mathbf{G}^{n})}\\\
&=\frac{1}{\text{Tr}(\mathbf{G}^{n})}\frac{\partial\mathbf{G}^{n}}{\partial\theta}-\frac{\mathbf{G}^{n}}{\text{Tr}(\mathbf{G}^{n})^{2}}\text{Tr}\left(\frac{\partial\mathbf{G}^{n}}{\partial\theta}\right)\\\
&=\left(L-\frac{\text{Tr}(L\mathbf{G}^{n})}{\text{Tr}(\mathbf{G}^{n})}\mathbb{I}\right)\rho+\rho\left(L^{\dagger}-\frac{\text{Tr}(\mathbf{G}^{n}L^{\dagger})}{\text{Tr}(\mathbf{G}^{n})}\mathbb{I}\right)\end{split}$
(23)
Since $\text{Tr}(A^{\dagger})\mathbb{I}=\left(\text{Tr}(A)\mathbb{I}\right)^{\dagger}$, if we define $\tilde{L}$ as follows:
$\tilde{L}=L-\frac{\text{Tr}(L\mathbf{G}^{n})}{\text{Tr}(\mathbf{G}^{n})}\mathbb{I},$
(24)
we can write the following expression
$\partial_{\theta}\rho=\tilde{L}\rho+\rho\tilde{L}^{\dagger}.$ (25)
This resembles the expression for the QFI defined in terms of a symmetric logarithmic derivative [17]; however, the operator $\tilde{L}$ is not Hermitian and hence is not a symmetric logarithmic derivative. In general, the QFI for a density matrix $\rho=\sum_{i}\lambda_{i}\ket{i}\bra{i}$ is defined as
$\mathcal{H}=\sum_{i,j:\,\lambda_{i}+\lambda_{j}\neq 0}\frac{2\left|\bra{i}\partial_{\theta}\rho\ket{j}\right|^{2}}{\lambda_{i}+\lambda_{j}}$
(26)
If $\partial_{\theta}\rho=\tilde{L}\rho+\rho\tilde{L}^{\dagger}$, then $\bra{i}\partial_{\theta}\rho\ket{j}=\lambda_{j}\bra{i}\tilde{L}\ket{j}+\lambda_{i}\bra{i}\tilde{L}^{\dagger}\ket{j}$. Hence, if $\rho$ is pure, we get the following (let $\rho=\ket{\psi}\bra{\psi}$):
$\displaystyle\mathcal{H}=\text{Re}\left(\bra{\psi}(\tilde{L}+\tilde{L}^{\dagger})^{2}\ket{\psi}\right)=\text{Tr}(\rho(\tilde{L}+\tilde{L}^{\dagger})^{2})$ (27)
It is a known result that the QFI is maximized when $\rho_{\theta}$ is a pure state.
It is easy to see that, based on this,
$\mathcal{H}\leq\lambda_{\max}\left((\tilde{L}+\tilde{L}^{\dagger})^{2}\right),$ (28)
where $\lambda_{\max}$ denotes the largest eigenvalue.
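The product-rule identity (22) can be verified numerically on a small random model (our own sketch, with assumed matrices; the ordering $L=\mathbf{G}^{r}H_{1}$ is what makes the identity hold), by comparing a finite-difference derivative of $\mathbf{G}^{n}$ with $L\mathbf{G}^{n}+\mathbf{G}^{n}L^{\dagger}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
H0 = rng.standard_normal((n, n)); H0 = H0 + H0.T      # Hermitian spatial part
H1 = np.zeros((n, n)); H1[n // 2, n // 2] = 1.0       # perturbation on one site
Gam = np.diag(0.5 + rng.random(n))                    # contact broadening (PSD)
Sig_in = Gam.copy()                                   # in-scattering self-energy
E, theta, h = 0.3, 0.1, 1e-6

def Gr(th):
    # retarded Green's function with a simple -i*Gam/2 broadening term
    return np.linalg.inv(E * np.eye(n) - H0 - th * H1 + 0.5j * Gam)

def Gn(th):
    g = Gr(th)
    return g @ Sig_in @ g.conj().T                    # G^n = G^r Sigma_in G^a

L = Gr(theta) @ H1                                    # note the ordering
lhs = (Gn(theta + h) - Gn(theta - h)) / (2 * h)       # finite difference
rhs = L @ Gn(theta) + Gn(theta) @ L.conj().T          # Eq. (22)
```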
## Appendix C Current measurement as a strong measurement
The act of obtaining currents at the ferromagnetic contacts gives the
statistics for the spin expectation values. This is due to the fact that the
ferromagnetic contact, if aligned along a certain direction, will give a
current readout proportional to the population of spins aligned in that
particular direction [49]. This can be established in the NEGF formulation.
The current readouts are from the $FM2$ contact at $\pm\hat{y}$ orientation.
The current values come out to be as follows
$\displaystyle I^{\pm}_{FM2}$
$\displaystyle=\text{Tr}(\Gamma^{FM2}\mathbf{G}^{n})$ (29)
$\displaystyle=-2t_{0}i\sin(ka)\text{Tr}(\mathbf{G}^{n})\text{Tr}\left(\frac{(\mathbb{I}\pm\sigma_{y})\delta_{f_{2},f_{2}}}{2}\rho\right).$
The quantity $\text{Tr}((\mathbb{I}\pm\sigma_{y})\rho/2)$ is simply the
probabilities for the POVM set of
$\\{(\mathbb{I}+\sigma_{y})/2,(\mathbb{I}-\sigma_{y})/2\\}$. An additional point to note is that our post-selection measurement probes a single point of the region to the right of the barriers (electrons are injected from the left), since the current readout occurs at a specific site of the 1-D nanowire. This does not change the expectation value of $\sigma_{Y}$, which must be the same throughout the whole region to the right of the barriers.
Hence, what we use as the expectation value of $\sigma_{Y}$ coincides with the expression obtained by considering the complete post-selected region in equation (15), since the spin part of the wavefunction is the same everywhere to the right of the barrier. For any 1-D scattering problem, changes occur only at the boundaries, so looking at one point yields the relevant information for the whole post-selected region.
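The correspondence between the two current readouts and the binary POVM can be illustrated with a two-component transmitted spin state (a sketch with assumed amplitudes $t_{\pm}$, for illustration only):

```python
import numpy as np

# POVM set {(I + sigma_y)/2, (I - sigma_y)/2} realized by the two readouts
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)
Pi_plus, Pi_minus = (I2 + sy) / 2, (I2 - sy) / 2

# transmitted spin state (t_+|+z> + t_-|-z>), normalized; amplitudes assumed
t_plus, t_minus = 0.8 * np.exp(0.3j), 0.6 * np.exp(-0.1j)
psi = np.array([t_plus, t_minus])
psi = psi / np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

p_plus = np.real(np.trace(Pi_plus @ rho))
p_minus = np.real(np.trace(Pi_minus @ rho))
exp_sy = np.real(np.vdot(psi, sy @ psi))   # <sigma_y> = p_plus - p_minus
```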
## Appendix D Classical Fisher information for the setup
As established previously, we take the current readouts to behave as probabilities for the POVM set $\\{(\mathbb{I}+\sigma_{y})/2,(\mathbb{I}-\sigma_{y})/2\\}$. There is a subtlety in taking this as a direct interpretation: the state $\rho$ ultimately depends on the polarization of $FM2$ (see equation (20)), and hence differs slightly depending on whether the polarization is $+\hat{y}$ or $-\hat{y}$. We first define the probabilities $p_{\pm}$ as
$\displaystyle
p_{\pm}=\frac{\text{Tr}\left(\frac{(\mathbb{I}\pm\sigma_{y})\delta_{f_{2},f_{2}}}{2}\mathbf{G}^{n}_{\pm}\right)}{\text{Tr}(\mathbf{G}^{n}_{\pm})}.$
(30)
We must note that these probabilities refer only to the lattice point corresponding to the contact $FM2$. Hence, we actually need conditional probabilities, since those are the actual probabilities of detecting $\pm\hat{y}$-polarized electrons at the other end. We therefore let $\tilde{p}_{\pm}=p_{\pm}/(p_{+}+p_{-})$. From this, the CFI is simply given as follows.
$\mathcal{H}_{c}=\dfrac{(\partial_{\theta}\tilde{p}_{+})^{2}}{\tilde{p}_{+}}+\dfrac{(\partial_{\theta}\tilde{p}_{-})^{2}}{\tilde{p}_{-}}$
(31)
This expression can be evaluated straightforwardly using equation (22).
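A minimal sketch of Eq. (31) with numerical $\theta$-derivatives, using an assumed linear toy response $\tilde{p}_{+}=(1+a\theta)/2$ (not the NEGF-computed probabilities), for which the CFI at $\theta=0$ is $a^{2}$:

```python
import numpy as np

def cfi(p_plus_fn, theta, h=1e-6):
    """Classical Fisher information for a binary POVM, Eq. (31),
    with conditional probabilities and central-difference derivatives."""
    def probs(th):
        pp = p_plus_fn(th)
        return np.array([pp, 1.0 - pp])
    dp = (probs(theta + h) - probs(theta - h)) / (2 * h)
    p = probs(theta)
    return float(np.sum(dp ** 2 / p))

a = 0.8                                    # assumed slope, illustration only
F = cfi(lambda th: 0.5 * (1 + a * th), theta=0.0)
```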
## References
* Giovannetti _et al._ [2006] V. Giovannetti, S. Lloyd, and L. Maccone, Phys. Rev. Lett. 96, 010401 (2006).
* Giovannetti _et al._ [2011] V. Giovannetti, S. Lloyd, and L. Maccone, Nature photonics 5, 222 (2011).
* Degen _et al._ [2017] C. Degen, F. Reinhard, and P. Cappellaro, Reviews of Modern Physics 89, 10.1103/revmodphys.89.035002 (2017).
* Polino _et al._ [2020] E. Polino, M. Valeri, N. Spagnolo, and F. Sciarrino, AVS Quantum Science 2, 024703 (2020), https://doi.org/10.1116/5.0007577 .
* Taylor and Bowen [2016] M. A. Taylor and W. P. Bowen, Physics Reports 615, 1 (2016), quantum metrology and its application in biology.
* Joo _et al._ [2011] J. Joo, W. J. Munro, and T. P. Spiller, Phys. Rev. Lett. 107, 083601 (2011).
* Pang and Brun [2014] S. Pang and T. A. Brun, Phys. Rev. A 90, 022117 (2014).
* Kaubruegger _et al._ [2021] R. Kaubruegger, D. V. Vasilyev, M. Schulte, K. Hammerer, and P. Zoller, Phys. Rev. X 11, 041045 (2021).
* Marciniak _et al._ [2022] C. D. Marciniak, T. Feldker, I. Pogorelov, R. Kaubruegger, D. V. Vasilyev, R. van Bijnen, P. Schindler, P. Zoller, R. Blatt, and T. Monz, Nature 603, 604 (2022).
* Hofmann [2011] H. F. Hofmann, Physical Review A 83, 10.1103/physreva.83.022106 (2011).
* Kofman _et al._ [2012] A. G. Kofman, S. Ashhab, and F. Nori, Physics Reports 520, 43–133 (2012).
* Dressel and Jordan [2012] J. Dressel and A. N. Jordan, Physical review letters 109, 230402 (2012).
* Lyons _et al._ [2015] K. Lyons, J. Dressel, A. N. Jordan, J. C. Howell, and P. G. Kwiat, Phys. Rev. Lett. 114, 170801 (2015).
* Viza _et al._ [2015] G. I. Viza, J. Martínez-Rincón, G. B. Alves, A. N. Jordan, and J. C. Howell, Phys. Rev. A 92, 032127 (2015).
* Xu _et al._ [2020] L. Xu, Z. Liu, A. Datta, G. C. Knee, J. S. Lundeen, Y.-q. Lu, and L. Zhang, Phys. Rev. Lett. 125, 080501 (2020).
* Liu _et al._ [2022] Y. Liu, L. Qin, and X.-Q. Li, Phys. Rev. A 106, 022619 (2022).
* Liu _et al._ [2019] J. Liu, H. Yuan, X.-M. Lu, and X. Wang, Journal of Physics A: Mathematical and Theoretical 53, 023001 (2019).
* Paris [2009] M. G. Paris, International Journal of Quantum Information 7, 125 (2009).
* Facchi _et al._ [2010] P. Facchi, R. Kulkarni, V. Man’ko, G. Marmo, E. Sudarshan, and F. Ventriglia, Physics Letters A 374, 4801 (2010).
* Fujiwara and Nagaoka [1995] A. Fujiwara and H. Nagaoka, Physics Letters A 201, 119 (1995).
* Alves _et al._ [2015] G. B. Alves, B. M. Escher, R. L. de Matos Filho, N. Zagury, and L. Davidovich, Phys. Rev. A 91, 062107 (2015).
* Vaidman [2017] L. Vaidman, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 375, 20160395 (2017).
* Dixon _et al._ [2009] P. B. Dixon, D. J. Starling, A. N. Jordan, and J. C. Howell, Phys. Rev. Lett. 102, 173601 (2009).
* Ferrie and Combes [2014] C. Ferrie and J. Combes, Phys. Rev. Lett. 112, 040406 (2014).
* Combes _et al._ [2014] J. Combes, C. Ferrie, Z. Jiang, and C. M. Caves, Physical Review A 89, 052117 (2014).
* Jordan _et al._ [2014] A. N. Jordan, J. Martínez-Rincón, and J. C. Howell, Physical Review X 4, 10.1103/physrevx.4.011031 (2014).
* Knee _et al._ [2016] G. C. Knee, J. Combes, C. Ferrie, and E. M. Gauger, Quantum Measurements and Quantum Metrology 3, doi:10.1515/qmetro-2016-0006 (2016).
* Vetrivelan and Vinjanampathy [2022] M. Vetrivelan and S. Vinjanampathy, Quantum Science and Technology 7, 025012 (2022).
* Jullien _et al._ [2014] T. Jullien, P. Roulleau, B. Roche, A. Cavanna, Y. Jin, and D. Glattli, Nature 514, 603 (2014).
* Samuelsson and Büttiker [2006] P. Samuelsson and M. Büttiker, Physical Review B 73, 041305 (2006).
* Tam _et al._ [2021] M. Tam, C. Flindt, and F. Brange, Phys. Rev. B 104, 245425 (2021).
* Ranni _et al._ [2021] A. Ranni, F. Brange, E. T. Mannila, C. Flindt, and V. F. Maisi, Nature communications 12, 1 (2021).
* Nigg _et al._ [2015] S. E. Nigg, R. P. Tiwari, S. Walter, and T. L. Schmidt, Phys. Rev. B 91, 094516 (2015).
* Brange _et al._ [2021] F. Brange, K. Prech, and C. Flindt, Phys. Rev. Lett. 127, 237701 (2021).
* Deacon _et al._ [2015] R. Deacon, A. Oiwa, J. Sailer, S. Baba, Y. Kanai, K. Shibata, K. Hirakawa, and S. Tarucha, Nature communications 6, 1 (2015).
* Pfaff _et al._ [2013] W. Pfaff, T. H. Taminiau, L. Robledo, H. Bernien, M. Markham, D. J. Twitchen, and R. Hanson, Nature Physics 9, 29 (2013).
* Ionicioiu _et al._ [2001] R. Ionicioiu, P. Zanardi, and F. Rossi, Phys. Rev. A 63, 050101 (2001).
* Bednorz and Belzig [2011] A. Bednorz and W. Belzig, Phys. Rev. B 83, 125304 (2011).
* Zhou _et al._ [2019] B. T. Zhou, K. Taguchi, Y. Kawaguchi, Y. Tanaka, and K. T. Law, Communications Physics 2, 1 (2019).
* Zhang _et al._ [2020] Y. Zhang, Z. Hou, Y.-X. Zhao, Z.-H. Guo, Y.-W. Liu, S.-Y. Li, Y.-N. Ren, Q.-F. Sun, and L. He, Phys. Rev. B 102, 081403 (2020).
* Li _et al._ [2014] Y. Li, J. Ludwig, T. Low, A. Chernikov, X. Cui, G. Arefe, Y. D. Kim, A. M. van der Zande, A. Rigosi, H. M. Hill, S. H. Kim, J. Hone, Z. Li, D. Smirnov, and T. F. Heinz, Phys. Rev. Lett. 113, 266804 (2014).
* Zhang _et al._ [2019] X.-X. Zhang, Y. Lai, E. Dohner, S. Moon, T. Taniguchi, K. Watanabe, D. Smirnov, and T. F. Heinz, Phys. Rev. Lett. 122, 127401 (2019).
* Dankert and Dash [2017] A. Dankert and S. P. Dash, Nature communications 8, 1 (2017).
* Khokhriakov _et al._ [2020] D. Khokhriakov, A. M. Hoque, B. Karpiak, and S. P. Dash, Nature communications 11, 1 (2020).
* Kamalakar _et al._ [2016] M. V. Kamalakar, A. Dankert, P. J. Kelly, and S. P. Dash, Scientific reports 6, 1 (2016).
* Tsitsishvili _et al._ [2004] E. Tsitsishvili, G. S. Lozano, and A. O. Gogolin, Phys. Rev. B 70, 115316 (2004).
* Sánchez _et al._ [2013] J. Sánchez, L. Vila, G. Desfonds, S. Gambarelli, J. Attané, J. De Teresa, C. Magén, and A. Fert, Nature communications 4, 1 (2013).
* Steinberg [1995] A. M. Steinberg, Physical Review Letters 74, 2405–2409 (1995).
* Mathew _et al._ [2022] A. Mathew, K. Y. Camsari, and B. Muralidharan, Phys. Rev. B 105, 144418 (2022).
* Ricco and Azbel [1984] B. Ricco and M. Y. Azbel, Phys. Rev. B 29, 1970 (1984).
* Sun _et al._ [1998] J. P. Sun, G. I. Haddad, P. Mazumder, and J. N. Schulman, Proceedings of the IEEE 86, 641 (1998).
* Björk _et al._ [2002] M. Björk, B. Ohlsson, C. Thelander, A. Persson, K. Deppert, L. Wallenberg, and L. Samuelson, Applied Physics Letters 81, 4458 (2002).
* Mazumder _et al._ [1998] P. Mazumder, S. Kulkarni, M. Bhattacharya, J. P. Sun, and G. Haddad, Proceedings of the IEEE 86, 664 (1998).
* Danielewicz [1984] P. Danielewicz, Annals of Physics 152, 239 (1984).
* Datta [1997] S. Datta, _Electronic transport in mesoscopic systems_ (Cambridge university press, 1997).
* Datta [2005] S. Datta, _Quantum Transport: Atom to Transistor_ , 2nd ed. (Cambridge University Press, 2005).
* Golizadeh-Mojarad and Datta [2007] R. Golizadeh-Mojarad and S. Datta, Phys. Rev. B 75, 081301 (2007).
* Sharma _et al._ [2016] A. Sharma, A. Tulapurkar, and B. Muralidharan, IEEE Transactions on Electron Devices 63, 4527 (2016).
* Sharma _et al._ [2018] A. Sharma, A. A. Tulapurkar, and B. Muralidharan, Applied Physics Letters 112, 192404 (2018).
* Singha and Muralidharan [2018] A. Singha and B. Muralidharan, Journal of Applied Physics 124, 144901 (2018).
* Sharma _et al._ [2017] A. Sharma, A. A. Tulapurkar, and B. Muralidharan, Phys. Rev. Applied 8, 064014 (2017).
* Camsari _et al._ [2020] K. Y. Camsari, S. Chowdhury, and S. Datta, The non-equilibrium green function (negf) method (2020), arXiv:2008.01275 [cond-mat.mes-hall] .
* Duse _et al._ [2021] C. Duse, P. Sriram, K. Gharavi, J. Baugh, and B. Muralidharan, Journal of Physics: Condensed Matter 33, 365301 (2021).
* Aharonov _et al._ [1988] Y. Aharonov, D. Z. Albert, and L. Vaidman, Phys. Rev. Lett. 60, 1351 (1988).
* Duck _et al._ [1989] I. M. Duck, P. M. Stevenson, and E. C. G. Sudarshan, Phys. Rev. D 40, 2112 (1989).
* Leggett [1989] A. J. Leggett, Phys. Rev. Lett. 62, 2325 (1989).
* Dressel _et al._ [2014] J. Dressel, M. Malik, F. Miatto, A. Jordan, and R. Boyd, Review of Modern Physics 86, 307 (2014).
* Meir and Wingreen [1992] Y. Meir and N. S. Wingreen, Phys. Rev. Lett. 68, 2512 (1992).
* Haug and Jauho [2007] H. Haug and A. Jauho, _Quantum Kinetics in Transport and Optics of Semiconductors_ , Springer Series in Solid-State Sciences (Springer Berlin Heidelberg, 2007).
* Zatelli _et al._ [2020] F. Zatelli, C. Benedetti, and M. G. A. Paris, Entropy 22, 10.3390/e22111321 (2020).
* Sakurai and Napolitano [2017] J. J. Sakurai and J. Napolitano, _Modern Quantum Mechanics_, 2nd ed. (Cambridge University Press, 2017).
* Liu _et al._ [2016] J. Liu, J. Chen, X.-X. Jing, and X. Wang, Journal of Physics A: Mathematical and Theoretical 49, 275302 (2016).
* Boixo _et al._ [2007] S. Boixo, S. T. Flammia, C. M. Caves, and J. Geremia, Phys. Rev. Lett. 98, 090401 (2007).
* Braunstein _et al._ [1996] S. L. Braunstein, C. M. Caves, and G. Milburn, Annals of Physics 247, 135 (1996).
* Agarwal and Davidovich [2022] G. Agarwal and L. Davidovich, Physical Review Research 4, L012014 (2022).
* Demkowicz-Dobrzański _et al._ [2012] R. Demkowicz-Dobrzański, J. Kołodyński, and M. Guţă, Nature communications 3, 1 (2012).
* Zwierz _et al._ [2012] M. Zwierz, C. A. Pérez-Delgado, and P. Kok, Phys. Rev. A 85, 042112 (2012).
* Zwierz _et al._ [2010] M. Zwierz, C. A. Pérez-Delgado, and P. Kok, Phys. Rev. Lett. 105, 180402 (2010).
* Lahiri _et al._ [2018] A. Lahiri, K. Gharavi, J. Baugh, and B. Muralidharan, Phys. Rev. B 98, 125417 (2018).
* Aharonov and Vaidman [2008] Y. Aharonov and L. Vaidman, The two-state vector formalism: An updated review, in _Time in Quantum Mechanics_, edited by J. Muga, R. S. Mayato, and Í. Egusquiza (Springer Berlin Heidelberg, Berlin, Heidelberg, 2008) pp. 399–447.
# Production of the triply heavy $\Omega_{ccc}$ and $\Omega_{bbb}$ baryons at
$e^{+}e^{-}$ colliders
Su-Zhi Wu$^{a}$, Pei Wu$^{b}$, You-Wei Li$^{a}$
$^{a}$College of Science, Northwest A$\&$F University, Yangling, Shaanxi 712100, China
$^{b}$College of Science, Hunan University of Science and Engineering, Yongzhou, Hunan 425199, China
###### Abstract
In this paper, we calculate the total and differential cross sections of the
processes,
$e^{+}e^{-}\rightarrow\gamma^{*}/Z^{*}\rightarrow\Omega_{ccc}\bar{c}\bar{c}\bar{c}$
and
$e^{+}e^{-}\rightarrow\gamma^{*}/Z^{*}\rightarrow\Omega_{bbb}\bar{b}\bar{b}\bar{b}$,
in the leading order at the $e^{+}e^{-}$ collider with different energies.
## I Introduction
With the discovery of the doubly charmed baryon $\Xi_{cc}$ SELEX:2002wqn ;
LHCb:2017iph ; LHCb:2018pcs ; LHCb:2018zpl ; LHCb:2021eaf ; LHCb:2022rpd , the
triply heavy baryons have become the only missing members of the baryon family. The masses of the triply heavy baryons have been calculated theoretically in various models Faustov:2021qqf ; Tazimi:2021ywr ; Wang:2020avt ;
Bhavsar:2018tad ; Vijande:2015faa ; Llanes-Estrada:2013rwa ; Wang:2011ae ;
Llanes-Estrada:2011gwu ; Alomayrah:2020qyw ; Yang:2019lsg ; Shah:2019jxp ;
Rai:2017hue ; Shah:2017jkr ; Martynenko:2007je ; Azizi:2014jxa ; Aliev:2012tt
; Zheng:2010zzc ; Zhang:2009re ; Patel:2008mv ; Jia:2006gw and different
results have been gotten. Exploring the triply baryon in experiment is
important for filling the particle spectra and testing the theories. To
produce the triply heavy baryon, three pairs of heavy quarks need to be
produced. This makes it hard to calculate the cross sections of the triply
heavy baryon production. The total cross sections and the differential cross
sections of the production of these baryons at LHC have been calculated in
Chen:2011mb ; Wu:2012wj . There, the authors simplified the calculation
utilizing the characteristic of the spin, the color configurations of the
triply heavy baryons and the small relative momenta among the constituent
quarks. The production of the triply charmed baryon in $e^{+}e^{-}$
annihilation at the $Z$-boson pole has been estimated in Baranov:2004er , where the charm-quark mass was neglected in parts of the matrix element. The
production of $\Omega_{ccc}$ from quark gluon plasma in heavy ion collisions
has been calculated in Becattini:2005hb . And the production of the triply
baryons from the fragmentation processes have been calculated in
GomshiNobary:2005ur ; MoosaviNejad:2017rvi ; GomshiNobary:2006tzy ;
GomshiNobary:2004mq ; Delpasand:2019xpk .
In this paper, we report a study of the production of the triply heavy baryons $\Omega_{ccc}$ and $\Omega_{bbb}$ at $e^{+}e^{-}$ colliders. Here, we calculate the direct production of these triply heavy baryons exactly. Because the constituent quarks of a triply heavy baryon are all heavy flavored, NRQCD Bodwin:1994jh can be used to describe it. The production of the $\Omega_{QQQ}$ can then be factorized into two parts: the short-distance coefficient, which describes the production of the three heavy-quark pairs, and the long-distance matrix element, which describes the formation of the triply heavy baryon from the three point-like heavy quarks.111$\Omega_{QQQ}$ means the triply heavy baryon $\Omega_{ccc}$ or $\Omega_{bbb}$. And $Q$ denotes the $c$ or $b$ quark. To make the calculation tractable, we reduce the number of Feynman diagrams to be computed, as was done in Chen:2011mb ; Wu:2012wj .
In the next section, we present the details of the calculation of these two parts, the short-distance coefficient and the long-distance matrix element. In Sec. III, we list the numerical results and conclusions, and in Sec. IV we give a brief summary.
## II Production of the $\Omega_{ccc}$ and $\Omega_{bbb}$ at $e^{+}e^{-}$
colliders
As a baryon, the triply heavy baryon must be a color singlet. The color wave function of the $\Omega_{QQQ}$ must be $\frac{1}{\sqrt{6}}\varepsilon^{\xi_{1}\xi_{2}\xi_{3}}Q_{1\xi_{1}}Q_{2\xi_{2}}Q_{3\xi_{3}}$ with $\xi_{i}$ ($i=$1,2,3) being the color index of the valence quark $Q_{i}$. For the ground state, the orbital angular-momentum wave function is symmetric. The exchange antisymmetry of identical fermions then implies that the $\Omega_{QQQ}$ must be in the spin-symmetric state, so its spin must be $\frac{3}{2}$.
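The symmetric spin-$\frac{3}{2}$ construction can be checked explicitly; for instance, the $S_{Z}=+\frac{1}{2}$ member is the symmetric combination $(\ket{uud}+\ket{udu}+\ket{duu})/\sqrt{3}$. A small numerical sketch (our own illustration):

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
basis = {"u": up, "d": dn}

def kron3(a, b, c):
    # three-particle product state as an 8-component vector
    return np.kron(np.kron(a, b), c)

# |3/2, +1/2>: symmetric combination of two up-spins and one down-spin
state = sum(kron3(*(basis[s] for s in perm))
            for perm in ("uud", "udu", "duu")) / np.sqrt(3)

# total S_z operator for three spin-1/2 particles
sz, I2 = 0.5 * np.diag([1.0, -1.0]), np.eye(2)
Sz_tot = (np.kron(np.kron(sz, I2), I2)
          + np.kron(np.kron(I2, sz), I2)
          + np.kron(np.kron(I2, I2), sz))
# the state is normalized and an S_z eigenstate with eigenvalue +1/2
```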
Because of the large masses of the heavy quarks, NRQCD can be used to describe the triply heavy baryons. In NRQCD, we write the normalized wave
function of the $\Omega_{QQQ}$ as,
$\displaystyle|\Omega_{QQQ},\frac{3}{2},S_{Z}\rangle$
$\displaystyle=\sqrt{2M}\int\frac{d^{3}\vec{q}_{1}}{(2\pi)^{3}}\frac{d^{3}\vec{q}_{2}}{(2\pi)^{3}}\frac{1}{\sqrt{3!}}$
(1)
$\displaystyle\sum_{\xi_{1},\xi_{2},\xi_{3}}\sum_{\eta_{1},\eta_{2},\eta_{3}}\frac{\varepsilon^{\xi_{1}\xi_{2}\xi_{3}}}{\sqrt{6}}\langle\frac{3}{2},S_{Z}|\eta_{1},\eta_{2},\eta_{3}\rangle$
$\displaystyle\frac{1}{\sqrt{2E_{1}2E_{2}2E_{3}}}\psi(\vec{q}_{1},\vec{q}_{2})|Q_{1},\xi_{1},\eta_{1},\vec{q}_{1}\rangle$
$\displaystyle|Q_{2},\xi_{2},\eta_{2},\vec{q}_{2}\rangle|Q_{3},\xi_{3},\eta_{3},\vec{q}_{3}\rangle,\
\ \ \ $
where, $\vec{q}_{3}=-\vec{q}_{1}-\vec{q}_{2}$, and
$\displaystyle\langle Q_{i},\xi_{i},\eta_{i},$
$\displaystyle\vec{q}_{i}|Q_{j},\xi_{j},\eta_{j},\vec{q}_{j}\rangle$
$\displaystyle=\delta_{Q_{i}Q_{j}}\delta_{\eta_{i}\eta_{j}}\delta_{\xi_{i}\xi_{j}}(2\pi)^{3}2E_{f}\delta^{(3)}(\vec{q}_{i}-\vec{q}_{j}),$
with $\eta_{i}$ and ($E_{i}$, $\vec{q}_{i}$) ($i=1,2,3$) being the spin and
the four-momentum of the $Q_{i}$ heavy quark; $M$ being the mass of the baryon
$\Omega_{QQQ}$; $\langle\frac{3}{2},S_{Z}|\eta_{1},\eta_{2},\eta_{3}\rangle$
being the Clebsch-Gordan (C-G) coefficient; $S_{Z}$ being the third component
of the spin of the baryon, and $\psi(\vec{q}_{1},\vec{q}_{2})$ being the wave
function of the baryon in the momentum space which is normalized as follows
$\displaystyle\int\frac{d^{3}\vec{q}_{1}}{(2\pi)^{3}}\frac{d^{3}\vec{q}_{2}}{(2\pi)^{3}}\psi^{*}(\vec{q}_{1},\vec{q}_{2})\psi(\vec{q}_{1},\vec{q}_{2})=1\;.$
(2)
The production of $\Omega_{QQQ}$ at $e^{+}e^{-}$ colliders can be factorized
into two parts, the short-distance coefficient corresponding to the production
of the three $Q\bar{Q}$ pairs and the long-distance matrix element describing
the three heavy quark $Q$ coupling to the triply heavy baryon $\Omega_{QQQ}$.
In the heavy-quark limit, the dependence of the short-distance coefficient on the momenta $q_{1}$ and $q_{2}$ can be neglected at leading order; that is, the momenta of the three produced identical heavy quarks are taken to be the same. As a result, the long-distance matrix element is proportional to the wave function of the baryon at the origin,
$\displaystyle\Psi(0,0)=\int\frac{d^{3}q_{1}}{(2\pi)^{3}}\frac{d^{3}q_{2}}{(2\pi)^{3}}\psi(q_{1},q_{2})\;.$
(3)
Now, the amplitude of the process
$e^{+}e^{-}\rightarrow\Omega_{QQQ}\bar{Q}\bar{Q}\bar{Q}$ can be written as,
$\displaystyle A(e^{+}e^{-}\to\Omega_{QQQ}\bar{Q}\bar{Q}\bar{Q})=$
$\displaystyle\frac{\sqrt{2M}}{\sqrt{(2m)^{3}}}\frac{\Psi(0,0)}{\sqrt{3!}}\mathcal{M}(e^{+}e^{-}\to(QQQ)_{1}^{(\frac{3}{2},S_{Z})}\bar{Q}_{1}\bar{Q}_{2}\bar{Q}_{3})\;,\
\ \ \ $ (4)
in which, $m$ is the mass of the heavy quark $Q$, and
$\mathcal{M}(e^{+}e^{-}\to(QQQ)_{1}^{(\frac{3}{2},S_{Z})}\bar{Q}\bar{Q}\bar{Q})$
is the short-distance coefficient, namely, the matrix element of the process
$e^{+}e^{-}\to(QQQ)_{1}^{(\frac{3}{2},S_{Z})}\bar{Q}\bar{Q}\bar{Q}$.222
$(QQQ)_{1}^{(S,S_{Z})}$ means the total spin and the third component of the
total spin of the three identical $Q$-quarks are $\frac{3}{2}$ and $S_{Z}$,
respectively; the momenta of the three quarks are the same; and the three
heavy quarks couple to a color singlet.
Now, let us consider the short-distance coefficient. The contribution to the production of the three heavy-quark pairs comes mainly from the $e^{+}e^{-}$ annihilation process,
$\displaystyle e^{-}(k_{1})+e^{+}(k_{2})\rightarrow
Z^{*}/\gamma^{*}\rightarrow Q(p_{1},\xi_{1})+Q(p_{2},\xi_{2})$
$\displaystyle+Q(p_{3},\xi_{3})+\bar{Q}(p_{4},\chi_{1})+\bar{Q}(p_{5},\chi_{2})+\bar{Q}(p_{6},\chi_{3}),\
\ \ \ \ \ $ (5)
where $k_{1}$ and $k_{2}$ are the 4-momenta of the electron and positron;
$p_{i}$ ($i=1,\dots,6$) are the 4-momenta of the produced $Q$ and $\bar{Q}$ quarks,
with $p_{1}=p_{2}=p_{3}$; and $\xi_{j}$ and $\chi_{j}$ ($j=1,2,3$) are the color
indices of the $Q_{j}$-quarks and $\bar{Q}_{j}$-quarks, respectively. The
three heavy-quark pairs produced in process (II) admit six permutations,
denoted $(Q_{1}\bar{Q}_{i}Q_{2}\bar{Q}_{j}Q_{3}\bar{Q}_{k})$ with
$i,j,k=1,2,3$ and $i\neq j\neq k$, meaning that the quark $Q_{1}$ lies on the
same fermion line as the antiquark $\bar{Q}_{i}$, the quark $Q_{2}$ on the same
fermion line as $\bar{Q}_{j}$, and the quark $Q_{3}$ on the same fermion line as
$\bar{Q}_{k}$. In the calculation, we disregard the electroweak interaction
between the heavy quarks in process (II). As a result, there are seven
inequivalent topology structures for each of the six permutations, the same as in
Chen:2011mb , giving 42 topology structures in total for the produced heavy quarks.
Inserting the $e^{+}e^{-}\gamma^{*}$ and $e^{+}e^{-}Z^{*}$ vertices into
the 42 topology structures at all allowed positions at tree level, we
obtain 576 Feynman diagrams for process (II).
As pointed out above, the color configuration of the produced three heavy
quarks is
$\frac{1}{\sqrt{6}}\varepsilon^{\xi_{1}\xi_{2}\xi_{3}}Q_{1\xi_{1}}Q_{2\xi_{2}}Q_{3\xi_{3}}$.
Setting $T^{a}=\frac{\lambda^{a}}{2}$, with $\lambda^{a}$ $(a=1,\ldots,8)$
the Gell-Mann matrices, and using the color-flow method of Chen:2011mb ,
we can obtain the color factors of all 576 Feynman diagrams. The color factors of
the last two diagrams in Fig. 1 are both
$\displaystyle\sum_{a,b,c}\sum_{\xi_{1},\xi_{2},\xi_{3}}\frac{1}{\sqrt{6}}\varepsilon^{\xi_{1}\xi_{2}\xi_{3}}f^{abc}(T^{a})_{\xi_{l}\chi_{i}}(T^{b})_{\xi_{m}\chi_{j}}(T^{c})_{\xi_{n}\chi_{k}}=0\;.$
in which $f^{abc}$ ($a,b,c=1,\ldots,8$) are the structure constants of the
$SU(3)$ group, and each index $\xi_{i}$ ($i=1,2,3$) appears twice: once in
$\varepsilon^{\xi_{1}\xi_{2}\xi_{3}}$ and once among the indices $\xi_{l}$,
$\xi_{m}$ and $\xi_{n}$. The result is independent of the indices $i,j,k,l,m$
and $n$. We therefore conclude that the total contribution to the amplitude of
process (II) from the 72 Feynman diagrams involving a three-gluon vertex
vanishes, since the color factors of these diagrams are all zero, and the number
of Feynman diagrams to be considered reduces to $576-72=504$. The color factors
of the first seven Feynman diagrams in Fig. 1 are all the same,
$\displaystyle\sum_{a,b}\sum_{\xi_{1},\xi_{2},\xi_{3}}\frac{1}{\sqrt{6}}\varepsilon^{\xi_{1}\xi_{2}\xi_{3}}(T^{a})_{\xi_{l}\chi_{i}}$
$\displaystyle(T^{a}T^{b})_{\xi_{m}\chi_{j}}(T^{b})_{\xi_{n}\chi_{k}}$
$\displaystyle=(-1)^{N}\frac{4}{9}\frac{1}{\sqrt{6}}\varepsilon^{\chi_{i}\chi_{j}\chi_{k}}\;,$
in which $N$ is the number of transpositions needed to transform
$(\xi_{l},\xi_{m},\xi_{n})$ into $(\xi_{1},\xi_{2},\xi_{3})$. Each Feynman
diagram also carries a fermion-exchange factor $(-1)^{N^{*}}$, where $N^{*}$ is
the total number of transpositions needed to transform $(l,m,n)$ into $(1,2,3)$
and $(i,j,k)$ into $(1,2,3)$. Absorbing this factor into the corresponding color
factor, we find that the color factors of all the remaining 504 Feynman diagrams are
the same,
$\displaystyle
C_{col}=(-1)^{N^{*}}(-1)^{N}\frac{4}{9}\frac{1}{\sqrt{6}}\varepsilon^{\chi_{i}\chi_{j}\chi_{k}}=\frac{4}{9}\frac{1}{\sqrt{6}}\varepsilon^{\chi_{1}\chi_{2}\chi_{3}}\;.$
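Both color-factor identities above can be checked numerically. The sketch below (our code, not part of the paper) builds the Gell-Mann generators and structure constants with NumPy, then verifies that the three-gluon contraction vanishes and that the remaining topology yields $(4/9)(1/\sqrt{6})\varepsilon^{\chi_{i}\chi_{j}\chi_{k}}$ for the ordered index choice $(\xi_{l},\xi_{m},\xi_{n})=(\xi_{1},\xi_{2},\xi_{3})$, i.e. $N=0$.

```python
import numpy as np

# Gell-Mann matrices lambda^a; generators T^a = lambda^a / 2.
lam = np.zeros((8, 3, 3), complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1], lam[1][1, 0] = -1j, 1j
lam[2][0, 0], lam[2][1, 1] = 1, -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2], lam[4][2, 0] = -1j, 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2], lam[6][2, 1] = -1j, 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

# Structure constants: f^{abc} = -2i Tr([T^a, T^b] T^c).
comm = np.einsum('aij,bjk->abik', T, T) - np.einsum('bij,ajk->abik', T, T)
f = -2j * np.einsum('abij,cji->abc', comm, T)

# Totally antisymmetric epsilon^{xi1 xi2 xi3}.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

# Three-gluon color factor: should vanish identically.
C3 = np.einsum('lmn,abc,ali,bmj,cnk->ijk', eps, f, T, T, T) / np.sqrt(6)

# Remaining topologies: (1/sqrt6) eps^{lmn} (T^a)_{li} (T^a T^b)_{mj} (T^b)_{nk}.
TT = np.einsum('aij,bjk->abik', T, T)
C7 = np.einsum('lmn,ali,abmj,bnk->ijk', eps, T, TT, T) / np.sqrt(6)
```

A Fierz rearrangement of the two $\sum_a T^a\otimes T^a$ contractions reproduces the factor $4/9$ analytically; the numerics above confirm both statements to machine precision.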
Figure 1: Nine typical Feynman diagrams for process (II). The indices
$i,j,k,l,m,n=1,2,3$ (with $i\neq j\neq k$ and $l\neq m\neq n$).
Figure 2: Differential production cross section versus the transverse momentum
of the $\Omega_{ccc}$ produced at CEPC with $\sqrt{s}=91.2\;{\rm GeV}$.
Figure 3: Differential production cross section versus the transverse momentum
of the $\Omega_{bbb}$ produced at CEPC with $\sqrt{s}=91.2\;{\rm GeV}$.
Figure 4: Production cross section of the $\Omega_{ccc}$ in $e^{+}e^{-}$
annihilation at different energies.
Figure 5: Production cross section of the $\Omega_{bbb}$ in $e^{+}e^{-}$
annihilation at different energies.
Now let us consider the remaining 504 Feynman diagrams. For each of the six
permutations given above, $(Q_{1}\bar{Q}_{i}Q_{2}\bar{Q}_{j}Q_{3}\bar{Q}_{k})$
with $i,j,k=1,2,3$, there are 84 Feynman diagrams: 42 correspond
to the process $e^{-}+e^{+}\to Z^{*}\to
Q_{1}Q_{2}Q_{3}\bar{Q}_{1}\bar{Q}_{2}\bar{Q}_{3}$ and the other 42
to the process $e^{-}+e^{+}\to\gamma^{*}\to
Q_{1}Q_{2}Q_{3}\bar{Q}_{1}\bar{Q}_{2}\bar{Q}_{3}$. In our calculation, the
momenta of the produced identical heavy quarks are taken to be equal, so the
contribution of the 84 Feynman diagrams is the same for each of the six
permutations. The total amplitude for the process
$e^{+}e^{-}\to(QQQ)_{1}^{(\frac{3}{2},S_{Z})}\bar{Q}\bar{Q}\bar{Q}$
is therefore
$\displaystyle\mathcal{M}(e^{+}e^{-}\to(QQQ)_{1}^{(\frac{3}{2},S_{Z})}\bar{Q}\bar{Q}\bar{Q})$
$\displaystyle=$ $\displaystyle\sum_{k=1}^{504}C_{col}\,\Gamma_{k}$ (6)
$\displaystyle=$ $\displaystyle
3!\sum_{k=1}^{84}C_{col}\,\Gamma^{{}^{\prime}}_{k}\;,$
where $\Gamma_{k}$ ($k=1,2,\ldots,504$) are the amplitudes of the Feynman
diagrams with their color factors stripped off, and $\Gamma^{\prime}_{k}$
($k=1,2,\ldots,84$) are the corresponding color-stripped amplitudes of the 84
Feynman diagrams belonging to the permutation
$(Q_{1}\bar{Q}_{1}Q_{2}\bar{Q}_{2}Q_{3}\bar{Q}_{3})$.
## III Numerical results and discussions
Now, the cross section of the process (II) can be written as
$\displaystyle\sigma=$
$\displaystyle\frac{1}{3!}\int\sum_{S_{Z},\varsigma_{i}}\frac{(2\pi)^{4}}{2\hat{s}}\delta^{4}(k_{1}+k_{2}-P-p_{4}-p_{5}-p_{6})$
(7) $\displaystyle
d\Pi_{4}\frac{1}{4}\sum_{s_{1},s_{2},\chi_{i}}|\mathcal{A}(e^{+}e^{-}\to\Omega_{QQQ}\bar{Q}\bar{Q}\bar{Q})|^{2},$
with
$\displaystyle
d\Pi_{4}=\frac{d^{3}P}{(2\pi)^{3}2E}\frac{d^{3}p_{4}}{(2\pi)^{3}2E_{p_{4}}}\frac{d^{3}p_{5}}{(2\pi)^{3}2E_{p_{5}}}\frac{d^{3}p_{6}}{(2\pi)^{3}2E_{p_{6}}},\
\ \ \ $
where $\varsigma_{i}$ ($i=1,2,3$), $s_{1}$ and $s_{2}$ are the spins of the
antiquarks $\bar{Q}_{i}$, $e^{+}$ and $e^{-}$, respectively.
To do the numerical calculation, the parameters are taken as follows:
$\displaystyle m_{c}=1.5\,{\rm GeV},\;m_{b}=4.9\,{\rm GeV},\;m_{Z}=91.18\,{\rm
GeV},$ $\displaystyle\sin^{2}\theta_{w}=0.224,\;\Gamma_{{}_{Z}}=2.49\,{\rm
GeV},\;\alpha(m_{Z})=1/127.95,\;$
$\displaystyle|\Psi_{\Omega_{ccc}}(0,0)|^{2}=0.36\cdot 10^{-3}\,{\rm
GeV^{6}}\;$ $\displaystyle\text{and}\ \
|\Psi_{\Omega_{bbb}}(0,0)|^{2}=0.189\,{\rm GeV^{6}}.$ (8)
For the electromagnetic coupling constant, we adopt
$\displaystyle\alpha(q)$ $\displaystyle=$
$\displaystyle\frac{\alpha(m_{Z})}{1-\frac{\alpha(m_{Z})}{3\pi}\log(\frac{q^{2}}{m_{Z}^{2}})},$ (9)
where $q$ denotes the energy scale of the electromagnetic coupling constant;
in this paper we take $q=\frac{\sqrt{s}}{2}$, with $\sqrt{s}$ the colliding
energy in the center-of-mass frame. For the strong coupling constant, we adopt
$\displaystyle\alpha_{s}(\mu)$ $\displaystyle=$
$\displaystyle\frac{\alpha_{s}(m_{Z})}{1+\frac{b_{0}}{2\pi}\alpha_{s}(m_{Z})\log(\frac{\mu}{m_{Z}})},$ (10) $\displaystyle\text{with}\ \ b_{0}$
$\displaystyle=$ $\displaystyle 11-\frac{2}{3}n_{f},$
where $\mu$ is the energy scale and $\alpha_{s}(m_{Z})=0.118$. For the
production of the $\Omega_{ccc}$, $n_{f}$ is taken to be 4 when $\mu\leq
2m_{b}$ and 5 when $\mu>2m_{b}$; for the production of the $\Omega_{bbb}$,
$n_{f}$ is taken to be 5. For comparison, we take two different values of
$\mu$, namely $\mu=\sqrt{s}/2$ and $\mu=\mu_{R}/2$, where $\mu_{R}$ is the
transverse mass of the produced baryon, $\mu_{R}^{2}=p_{T}^{2}+M^{2}$, with
$p_{T}$ the transverse momentum of the produced triply heavy baryon.
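As a quick numerical illustration (our sketch, using only the inputs of Eqs. (8)-(10)), both running couplings can be evaluated directly; below $m_{Z}$ the QED coupling $\alpha(q)$ decreases while the QCD coupling $\alpha_{s}(\mu)$ grows, as expected.

```python
import math

ALPHA_MZ  = 1 / 127.95   # alpha(m_Z) from Eq. (8)
ALPHAS_MZ = 0.118        # alpha_s(m_Z)
MZ, MB    = 91.18, 4.9   # masses in GeV

def alpha_em(q):
    """One-loop QED running coupling, Eq. (9)."""
    return ALPHA_MZ / (1 - ALPHA_MZ / (3 * math.pi) * math.log(q**2 / MZ**2))

def alpha_s(mu, nf):
    """One-loop QCD running coupling, Eq. (10), with b0 = 11 - 2*nf/3."""
    b0 = 11 - 2 * nf / 3
    return ALPHAS_MZ / (1 + b0 / (2 * math.pi) * ALPHAS_MZ * math.log(mu / MZ))

def alpha_s_ccc(mu):
    """Flavour-number choice used for Omega_ccc production: nf = 4 below 2 m_b."""
    return alpha_s(mu, 4 if mu <= 2 * MB else 5)

# Scale choice mu = sqrt(s)/2 at the Z pole, sqrt(s) = 91.2 GeV:
mu = 91.2 / 2
```

At $\mu=m_{Z}$ the formula reproduces $\alpha_{s}(m_{Z})=0.118$ exactly, which is a convenient consistency check.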
SuperKEKB, an asymmetric-energy $e^{+}e^{-}$ collider, is an upgrade of
the KEKB accelerator facility, with a target integrated luminosity of 50
$\rm ab^{-1}$ to be collected by the Belle II experiment. In addition, a
high-luminosity $e^{+}e^{-}$ collider, the Circular Electron-Positron Collider
(CEPC), has been proposed by the Chinese particle physics community. The CEPC
is designed to operate in three different modes: as a Higgs factory at
$\sqrt{s}=240\,{\rm GeV}$, as a $Z$ factory at $\sqrt{s}=91.2\,{\rm GeV}$, and
performing $WW$ threshold scans around $\sqrt{s}=160\,{\rm GeV}$ An:2018dwb .
We calculate the production cross sections of the baryons $\Omega_{ccc}$ and
$\Omega_{bbb}$ at the CEPC with $\sqrt{s}=91.2\,{\rm GeV}$ and
$\sqrt{s}=160\,{\rm GeV}$, and at Belle II with the center-of-mass energy
$\sqrt{s}=10.58\,{\rm GeV}$. The results are shown in Table 1.
Collider | $\sqrt{s}$ | $\mu$ | $\Omega_{ccc}$ | $\Omega_{bbb}$
---|---|---|---|---
CEPC | 91.2 GeV | $\sqrt{s}/2$ | 0.00204(5) | $0.332(4)\times 10^{-3}$
CEPC | 91.2 GeV | $\mu_{R}/2$ | 0.0124(3) | $0.952(4)\times 10^{-3}$
CEPC | 160 GeV | $\sqrt{s}/2$ | $0.214(9)\times 10^{-5}$ | $0.61(2)\times 10^{-6}$
CEPC | 160 GeV | $\mu_{R}/2$ | $0.108(9)\times 10^{-4}$ | $0.197(5)\times 10^{-5}$
Belle II | 10.58 GeV | $\sqrt{s}/2$ | $0.249(1)\times 10^{-5}$ | -
Belle II | 10.58 GeV | $\mu_{R}/2$ | $0.707(3)\times 10^{-5}$ | -
Table 1: Production cross sections (in units of fb) of the $\Omega_{ccc}$ and
$\Omega_{bbb}$ at the CEPC and Belle II.
We also calculate the differential cross sections $d\sigma/dp_{T}$ for the
production of the $\Omega_{ccc}$ and $\Omega_{bbb}$ at $\sqrt{s}=91.2\,{\rm
GeV}$, shown in Figs. 2 and 3. The curves of the $p_{T}$ distributions are not
smooth owing to the VEGAS integration error, which can be reduced by increasing
the number of sampled events. We further calculate the production of the
$\Omega_{ccc}$ and $\Omega_{bbb}$ at $e^{+}e^{-}$ colliders with different
colliding energies, shown in Figs. 4 and 5.
Both the integrated and differential cross sections are proportional to
$|\Psi(0,0)|^{2}$, $\alpha^{2}(q)$ and $\alpha_{s}^{4}(\mu)$, so the numerical
results can change by one or even two orders of magnitude for different values
of the wave function at the origin of the triply heavy baryons, different
running coupling constants, and different energy-scale choices. From the
numerical results shown in Figs. 4 and 5, we conclude that the production cross
sections of the $\Omega_{ccc}$ and $\Omega_{bbb}$ take their maximum values at
$\sqrt{s}=m_{Z}$. From the numerical results in Table 1, we see that it is
impossible to observe the $\Omega_{ccc}$ at SuperKEKB unless other production
mechanisms exist at $e^{+}e^{-}$ colliders. We also find around 32-198 events
for the production of the $\Omega_{ccc}$ and 5-15 events for the production of
the $\Omega_{bbb}$ at the CEPC with $16\,{\rm ab}^{-1}$ of integrated luminosity
running at $91.2\,{\rm GeV}$.
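The quoted event numbers follow from multiplying the Table 1 cross sections by the integrated luminosity, $N=\sigma\times L$. A sketch of the arithmetic (our code; the cross sections are copied from Table 1 at $\sqrt{s}=91.2\,{\rm GeV}$):

```python
# Cross sections in fb at sqrt(s) = 91.2 GeV, copied from Table 1.
sigma_fb = {
    ("ccc", "sqrt(s)/2"): 0.00204,
    ("ccc", "mu_R/2"):    0.0124,
    ("bbb", "sqrt(s)/2"): 0.332e-3,
    ("bbb", "mu_R/2"):    0.952e-3,
}

LUMI_FB = 16_000.0  # 16 ab^-1 of integrated luminosity at the CEPC Z pole

# Expected event numbers N = sigma * L for each scale choice.
events = {key: sigma * LUMI_FB for key, sigma in sigma_fb.items()}
```

The two scale choices bracket the ranges quoted in the text: roughly 32 to 198 $\Omega_{ccc}$ events and 5 to 15 $\Omega_{bbb}$ events.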
## IV Summary
In summary, we have studied the production of the $\Omega_{ccc}$ and
$\Omega_{bbb}$ at SuperKEKB and the CEPC. From the numerical results, we
conclude that finding the triply heavy baryons at SuperKEKB is out of reach,
and that observing the $\Omega_{ccc}$ and $\Omega_{bbb}$ at the CEPC is
difficult because of the small expected event numbers.
Acknowledgments: The authors thank Dr. Jian-Wei Xu for helpful discussions
and important suggestions on the manuscript. The work of S. Z. Wu was
supported by the Natural Science Foundation of China under Grant No.
11347200.
## References
* (1) M. Mattson et al. First Observation of the Doubly Charmed Baryon $\Xi^{+}_{cc}$. Phys. Rev. Lett., 89:112001, 2002.
* (2) Roel Aaij et al. Observation of the doubly charmed baryon $\Xi_{cc}^{++}$. Phys. Rev. Lett., 119(11):112001, 2017.
* (3) Roel Aaij et al. First Observation of the Doubly Charmed Baryon Decay $\Xi_{cc}^{++}\rightarrow\Xi_{c}^{+}\pi^{+}$. Phys. Rev. Lett., 121(16):162002, 2018.
* (4) Roel Aaij et al. Measurement of the Lifetime of the Doubly Charmed Baryon $\Xi_{cc}^{++}$. Phys. Rev. Lett., 121(5):052002, 2018.
* (5) Roel Aaij et al. Search for the doubly charmed baryon ${\varXi}_{cc}^{+}$ in the ${\varXi}_{c}^{+}{\pi}^{-}{\pi}^{+}$ final state. JHEP, 12:107, 2021.
* (6) Roel Aaij et al. Observation of the doubly charmed baryon decay ${\varXi}_{cc}^{++}\to{\varXi}_{c}^{\prime+}{\pi}^{+}$. JHEP, 05:038, 2022.
* (7) R. N. Faustov and V. O. Galkin. Triply heavy baryon spectroscopy in the relativistic quark model. Phys. Rev. D, 105(1):014013, 2022.
* (8) N. Tazimi and A. Ghasempour. Mass spectrum of triply heavy baryon in the hypercentral quark model. Mod. Phys. Lett. A, 36(39):2150270, 2021.
* (9) Zhi-Gang Wang. Analysis of the triply-heavy baryon states with the QCD sum rules. AAPPS Bull., 31:5, 2021.
* (10) Tanvi Bhavsar, Manan Shah, and P. C. Vinodkumar. A relativistic approach for triply heavy avour baryon. DAE Symp. Nucl. Phys., 63:840–841, 2018.
* (11) J. Vijande, A. Valcarce, and H. Garcilazo. Constituent-quark model description of triply heavy baryon nonperturbative lattice QCD data. Phys. Rev. D, 91(5):054011, 2015.
* (12) Felipe J. Llanes-Estrada, Olga I. Pavlova, and Richard Williams. Triply heavy baryon mass estimated within pNRQCD. Acta Phys. Polon. Supp., 6(3):821, 2013.
* (13) Zhi-Gang Wang. Analysis of the Triply Heavy Baryon States with QCD Sum Rules. Commun. Theor. Phys., 58:723–731, 2012.
* (14) Felipe J. Llanes-Estrada, Olga I. Pavlova, and Richard Williams. A First Estimate of Triply Heavy Baryon Masses from the pNRQCD Perturbative Static Potential. Eur. Phys. J. C, 72:2019, 2012.
* (15) Norah Alomayrah and T. Barakat. The excited states of triply-heavy baryons in QCD sum rules. Eur. Phys. J. A, 56(3):76, 2020.
* (16) Gang Yang, Jialun Ping, Pablo G. Ortega, and Jorge Segovia. Triply heavy baryons in the constituent quark model. Chin. Phys. C, 44(2):023102, 2020.
* (17) Zalak Shah and Ajay Kumar Rai. Mass spectra of triply heavy charm-beauty baryons. EPJ Web Conf., 202:06001, 2019.
* (18) Ajay Kumar Rai and Zalak Shah. Regge Trajectories of triply heavy baryons. J. Phys. Conf. Ser., 934(1):012035, 2017.
* (19) Zalak Shah and Ajay Kumar Rai. Masses and Regge trajectories of triply heavy $\Omega_{ccc}$ and $\Omega_{bbb}$ baryons. Eur. Phys. J. A, 53(10):195, 2017.
* (20) A. P. Martynenko. Ground-state triply and doubly heavy baryons in a relativistic three-quark model. Phys. Lett. B, 663:317–321, 2008.
* (21) K. Azizi, T. M. Aliev, and M. Savci. Properties of doubly and triply heavy baryons. J. Phys. Conf. Ser., 556(1):012016, 2014.
* (22) T. M. Aliev, K. Azizi, and M. Savci. Masses and Residues of the Triply Heavy Spin-1/2 Baryons. JHEP, 04:042, 2013.
* (23) W. Zheng and H. R. Pang. Momentum-space Faddeev calculations for ground-state triply and doubly heavy baryons in the constituent quark model. Mod. Phys. Lett. A, 25:2077–2088, 2010.
* (24) Jian-Rong Zhang and Ming-Qiu Huang. Deciphering triply heavy baryons in terms of QCD sum rules. Phys. Lett. B, 674:28–35, 2009.
* (25) Bhavin Patel, Ajay Majethiya, and P. C. Vinodkumar. Masses and Magnetic moments of Triply Heavy Flavour Baryons in Hypercentral Model. Pramana, 72:679–688, 2009.
* (26) Yu Jia. Variational study of weakly coupled triply heavy baryons. JHEP, 10:073, 2006.
* (27) Yu-Qi Chen and Su-Zhi Wu. Production of Triply Heavy Baryons at LHC. JHEP, 08:144, 2011. [Erratum: JHEP 09, 089 (2011)].
* (28) Su-Zhi Wu, You-Wei Li, and Reyima Rashidin. Heaviest bound baryons production at the Large Hadron Collider. Phys. Rev. D, 86:114504, 2012.
* (29) S. P. Baranov and V. L. Slad. Production of triply charmed Omega(ccc) baryons in e+ e- annihilation. Phys. Atom. Nucl., 67:808–814, 2004.
* (30) Francesco Becattini. Production of multiply heavy flavored baryons from quark gluon plasma in relativistic heavy ion collisions. Phys. Rev. Lett., 95:022301, 2005.
* (31) M. A. Gomshi Nobary and R. Sepahvand. An Ivestigation of triply heavy baryon production at hadron colliders. Nucl. Phys. B, 741:34–41, 2006.
* (32) S. Mohammad Moosavi Nejad. NLO QCD corrections to triply heavy baryon fragmentation function considering the effect of nonperturbative dynamics of baryon bound states. Phys. Rev. D, 96(11):114021, 2017.
* (33) M. A. Gomshi Nobary and R. Sepahvand. Triply heavy baryons. eConf, C0605151:0010, 2006.
* (34) M. A. Gomshi Nobary and R. Sepahvand. Fragmentation of triply heavy baryons. Phys. Rev. D, 71:034024, 2005.
* (35) Mahdi Delpasand and S. Mohammad Moosavi Nejad. Gluon fragmentation into triply heavy baryons considering two various scenarios. Phys. Rev. D, 99(11):114028, 2019.
* (36) Geoffrey T. Bodwin, Eric Braaten, and G. Peter Lepage. Rigorous QCD analysis of inclusive annihilation and production of heavy quarkonium. Phys. Rev. D, 51:1125–1171, 1995. [Erratum: Phys.Rev.D 55, 5853 (1997)].
* (37) Fenfen An et al. Precision Higgs physics at the CEPC. Chin. Phys. C, 43(4):043002, 2019.
A 4-fold Categorical Equivalence
Ray Maresca
###### Abstract
In this note, we will illuminate some immediate consequences of work done by
Reineke in [4] that may prove to be useful in the study of elliptic curves. In
particular, we will construct an isomorphism between the category of smooth
projective curves and a category of quiver grassmannians. We will use this to
provide a 4-fold categorical equivalence between a category of quiver
grassmannians, smooth projective curves, compact Riemann surfaces, and fields
of transcendence degree 1 over $\mathbb{C}$. We finish by noting that the
category of elliptic curves is isomorphic to a category of quiver
grassmannians, thereby endowing a class of quiver grassmannians with an
analytic group structure.
## 1 Introduction
It is well known that there is a three-fold equivalence between the categories
of compact Riemann surfaces, fields of transcendence degree 1 over
$\mathbb{C}$, and smooth projective curves [2] and [9]. A more recent
development is the notion of quiver grassmannians, first introduced by
Schofield in [8]. Since their introduction, they have become a popular topic
of research. It has been known that quiver grassmannians are projective
varieties, but just how much projective geometry is captured by quiver
grassmannians was unclear until the early 2010’s. A famous result of Hille
[3], Huisgen-Zimmermann, and Rieneke [4], is that all projective varieties can
be realized as quiver grassmannians for some wild acyclic quiver $Q$.
Actually, even more is true. Expanding on his work in [5] in which he proved
the result for a generalized Kronecker quiver, Ringel showed in [7], the
incredible result that given any wild quiver $Q$, we can realize all
projective varieties as the quiver grassmannian of a suitable
$Q$-representation. It may be interesting to ask, is there is a ‘best’ quiver
with which to study projective varieties, and if not, which quivers are
‘better’ in which circumstances? Another natural question to ask is, can we
restrict the quiver $Q$ and still get a similar result? Ringel showed in [6]
that the answer to this question is partially yes. Namley, Ringel showed that
for a (controlled) wild algebra, any projective variety can be realized as an
Auslander variety, but not necessarily as a quiver Grassmannian.
In this note, we will use the construction given by Reineke in [4] to define a
functor from the category of smooth projective curves to a subcategory of
quiver grassmannians. We will show that this functor is an isomorphism of
categories, which ultimately yields a four-fold categorical equivalence.
We finish this note with some immediate consequences regarding elliptic
curves.
## 2 Preliminaries
To establish the equivalence, we will first recall some definitions.
### 2.1 Projective Varieties
Following [9], let $\Bbbk$ denote a perfect field. We begin by recalling that
projective $n$-space over a field $\Bbbk$ is defined as
$\mathbb{P}^{n}_{\overline{\Bbbk}}=\mathbb{P}^{n}={\overline{\Bbbk}^{n+1}\over\sim}$
where $(z_{0},\dots,z_{n})\sim(z^{\prime}_{0},\dots,z^{\prime}_{n})$ if and
only if there exists $\lambda\in\overline{\Bbbk}^{*}$ such that $(\lambda
z_{0},\dots,\lambda z_{n})=(z^{\prime}_{0},\dots,z^{\prime}_{n})$ and
$\overline{\Bbbk}$ denotes the algebraic closure of $\Bbbk$. Denote by
$[z_{0},\dots,z_{n}]$ the class of $(z_{0},\dots,z_{n})$ under the
aforementioned quotient map. Let $R=\overline{\Bbbk}[x_{0},\dots,x_{n}]$ be
the polynomial ring in $n+1$ variables over $\overline{\Bbbk}$. A polynomial
$P\in R$ is homogeneous of degree $d$ if $P(\lambda x)=\lambda^{d}P(x)$ for
all $\lambda\in\overline{\Bbbk}^{*}$. An ideal $I\subset R$ is homogeneous if
$I$ is generated by homogeneous polynomials. A projective algebraic set is
some subset of $\mathbb{P}^{n}$ of the form
$V(I)=\\{[x_{0},\dots,x_{n}]\in\mathbb{P}^{n}:P(x)=0\,\,\text{for all
homogeneous}\,\,P\in I\subset R\\}$ where $I$ is a homogeneous ideal. A
projective algebraic variety is $V(I)$ for $I$ a prime homogeneous ideal of
$R$.
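To make the definitions concrete, here is a small numerical sketch (ours, not from the note) using the homogeneous cubic $P=x_{0}^{3}+x_{1}^{3}+x_{2}^{3}-3x_{0}x_{1}x_{2}$: homogeneity $P(\lambda x)=\lambda^{d}P(x)$ is exactly what makes membership in $V(I)$ well defined on the equivalence classes $[x_{0},\dots,x_{n}]$.

```python
def P(x0, x1, x2):
    """A homogeneous polynomial of degree d = 3."""
    return x0**3 + x1**3 + x2**3 - 3 * x0 * x1 * x2

lam, d = 2.5, 3
x = (1.0, -2.0, 0.5)

# Homogeneity: P(lam * x) == lam**d * P(x), so P(x) = 0 depends only on [x].
lhs = P(*(lam * c for c in x))
rhs = lam**d * P(*x)

# A point of the projective algebraic set V((P)): (1, 1, 1) satisfies P = 0,
# and so does every representative (lam, lam, lam) of the same class.
on_curve = P(1.0, 1.0, 1.0)
```

The same check applied to any scalar multiple of $(1,1,1)$ returns zero, illustrating that $V(I)$ is a well-defined subset of $\mathbb{P}^{n}$.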
We define the field of rational functions of $\mathbb{P}^{N}$ by
$\overline{\Bbbk}(\mathbb{P}^{N})=\\{{f\over g}\\}$ where $f,g\in R$, $g\neq
0$ and both $f$ and $g$ are homogeneous of the same degree. The field of
rational functions of a projective variety $V\subset\mathbb{P}^{N}$ is defined
as $\overline{\Bbbk}(V)={\overline{\Bbbk}(\mathbb{P}^{N})\over\sim}$ where
${f_{1}\over g_{1}}\sim{f_{2}\over g_{2}}$ if and only if
$f_{1}g_{2}-f_{2}g_{1}\in I(V)$ where $I(V)=\\{\text{homogeneous}\,\,P\in
R:P(x)=0\,\,\text{for all}\,\,x\in\overline{\Bbbk}^{n+1}\\}$. We define a
rational map between a projective variety $V\subset\mathbb{P}^{N}$ and
$\mathbb{P}^{M}$ as the data of $M+1$ elements of $\overline{\Bbbk}(V)$. A
rational map $V\rightarrow V^{\prime}\subset\mathbb{P}^{M}$ is a rational map
$V\rightarrow\mathbb{P}^{M}$ such that $[f_{0},\dots,f_{M}](x)\in V^{\prime}$
for all $x\in V$ for which $[f_{0},\dots,f_{M}](x)$ is defined. A rational map
between varieties is called a morphism if it is defined everywhere.
The dimension of a projective variety is the transcendence degree of
$\overline{\Bbbk}(V)$ over $\overline{\Bbbk}$. Projective varieties
$V\subset\mathbb{P}^{2}$ of dimension one are called projective curves. A
projective variety is called non-singular, or smooth, if the dimension of its
tangent space equals its dimension at every point. For more on projective
algebraic geometry, see [9] and [2]. The following $3$-fold categorical
equivalence is well known.
###### Theorem 2.1.
The following three categories are equivalent:
1. 1.
Compact connected Riemann Surfaces with holomorphic maps.
2. 2.
Field extensions of transcendence degree one over $\mathbb{C}$ with field
morphisms.
3. 3.
Smooth projective curves in $\mathbb{P}^{2}_{\mathbb{C}}$ with morphisms of
varieties. $\square$
### 2.2 Quiver Grassmannians
A quiver $Q$ is a directed graph. More formally, it is a $4$-tuple
$Q=(Q_{0},Q_{1},s,t)$ where $Q_{0}$ is the set of vertices, $Q_{1}$ is the set
of arrows, and $s$ and $t$ are maps that assign to each vertex a starting and
terminal point respectively. For a field $\Bbbk$ that is usually taken to be
algebraically closed but need not be, a representation $V$ of a quiver $Q$ is
an assignment of a $\Bbbk$-vector space $V_{i}$ for each $i\in Q_{0}$ and a
vector space morphism $\phi_{\alpha}:V_{i}\rightarrow V_{j}$ for each
$\alpha\in Q_{1}$ such that $s(\alpha)=i$ and $t(\alpha)=j$. A
subrepresentation $M=(M_{i},\psi_{\alpha})$ of a representation
$V=(V_{i},\phi_{\alpha})$ is a representation of $Q$ such that $M_{i}\subset
V_{i}$ is a sub vector space for all vertices $i$, $\psi_{\alpha}$ is the
restriction of $\phi_{\alpha}$ to $M_{s(\alpha)}$, and
$\psi_{\alpha}(M_{i})\subset M_{j}$ for all arrows $\alpha:i\rightarrow j\in
Q_{1}$. In other words, a subrepresentation is a collection of subspaces that
are compatible with the morphisms defining the parent representation. The
dimension vector of a representation is
$\textbf{dim}V=($dim$V_{1},\dots,$dim$V_{|Q_{0}|})$. We call a representation
$V$ finite dimensional if $V_{i}$ is a finite dimensional vector space for all
$i\in Q_{0}$.
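The compatibility condition $\psi_{\alpha}(M_{s(\alpha)})\subseteq M_{t(\alpha)}$ can be tested mechanically. Below is an illustrative sketch (our code; the 2-arrow Kronecker quiver example is our own, not the quiver of Figure 1) that checks whether a choice of subspaces forms a subrepresentation:

```python
import numpy as np

def is_subrepresentation(arrows, sub):
    """arrows: list of (s, t, phi) with phi the matrix of the arrow s -> t.
    sub: dict vertex -> matrix whose columns span the chosen subspace M_v.
    Checks phi(M_s) is contained in M_t for every arrow."""
    for s, t, phi in arrows:
        Ms, Mt = sub[s], sub[t]
        image = phi @ Ms
        # containment <=> adjoining the image does not raise the rank of M_t
        if np.linalg.matrix_rank(np.hstack([Mt, image])) > np.linalg.matrix_rank(Mt):
            return False
    return True

# Example: Kronecker quiver 1 => 2 (two arrows) with V_1 = V_2 = k^2.
phi_a = np.eye(2)
phi_b = np.diag([1.0, 2.0])
arrows = [(1, 2, phi_a), (1, 2, phi_b)]

e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])

good = is_subrepresentation(arrows, {1: e1, 2: e1})  # phi_a e1 = phi_b e1 = e1
bad  = is_subrepresentation(arrows, {1: e1, 2: e2})  # phi_a e1 = e1 not in span(e2)
```

The second choice fails precisely because the subspaces are not compatible with the maps defining the parent representation.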
Figure 1: The quiver $Q$ (the quiver key to Theorem 2.2); a representation $V$
of $Q$ with vector spaces $\overline{\Bbbk}$, $\overline{\Bbbk}^{10}$,
$\overline{\Bbbk}^{6}$, maps $\varphi_{0},\varphi_{1},\varphi_{2},f$, and
$\textbf{dim}(V)=(1,10,6)$; and a subrepresentation $M$ of $V$ with
$\textbf{dim}(M)=(0,1,1)$.
Given a quiver $Q$ and a representation $V$ of $Q$, the quiver grassmannian
$\text{Gr}_{\bm{e}}^{Q}(V)$ is the set of subrepresentations of $V$ with
dimension vector $\bm{e}$. The subrepresentation $M$ in Figure 1 is an element
of $\text{Gr}_{(0,1,1)}^{Q}(V)$. It is well known that quiver grassmannians
are projective varieties. For more on quiver grassmannians, see [1]. We also
have the following result of Hille, Huisgen-Zimmermann, and Reineke. The
wording below is consistent with Reineke’s in [4]:
###### Theorem 2.2.
Every projective variety is isomorphic to a quiver Grassmannian
$\text{Gr}^{Q}_{\bm{e}}(V)$ for an acyclic quiver Q with at most three
vertices, a Schurian representation V, and a thin dimension vector $\bm{e}$;
that is, $e_{i}\leq 1$ for all $i\in Q_{0}$. $\square$
## 3 The 4-fold Equivalence
Theorem 2.2 relies on the $d$-uple Veronese embedding. We will use essentially
the same idea to create a category of quiver Grassmannians equivalent to the
third category listed in Theorem 2.1.
Let $X\subset\mathbb{P}_{\overline{\Bbbk}}^{2}$ be a smooth projective curve.
Thus $X$ is defined as the vanishing locus of a homogeneous polynomial $P$ of
degree $d$ in three variables. Let
$\nu_{d}:\mathbb{P}_{\overline{\Bbbk}}^{2}\rightarrow\mathbb{P}_{\overline{\Bbbk}}^{\binom{d+2}{2}-1}$
denote the $d$-uple Veronese embedding, which is an isomorphism onto its image
since $\overline{\Bbbk}$ is a field of characteristic 0. Then by Reineke’s
result, Theorem 2.2, $\nu_{d}(X)=\text{Gr}_{(0,1,1)}(V)$ for $V$ a
representation of the quiver $Q$ in Figure 1 of dimension $(1,M,M^{\prime})$
where $M=\binom{d+2}{2}$ and $M^{\prime}=\binom{d+1}{2}$. On the top right of
Figure 1 is an example of $\nu_{3}(X)$ where $X$ is a projective curve defined
by the vanishing locus of a degree $d=3$ polynomial in three variables.
The condition of being in the image of the $d$-uple Veronese embedding is
encoded in an $M^{\prime}\times 3$ matrix $A_{d}(x)$, all of whose $2\times 2$
minors vanish. In particular, let $M_{2,d}$ be the set of tuples
$m=(m_{0},m_{1},m_{2})\in\mathbb{N}^{3}$ summing to $d$, so that $M$ is the
cardinality of $M_{2,d}$, and $\nu_{d}$ maps homogeneous coordinates
$[x_{0},x_{1},x_{2}]$ to $[\dots,x^{m},\dots]_{m\in M_{2,d}}$, where
$x^{m}=x_{0}^{m_{0}}x_{1}^{m_{1}}x_{2}^{m_{2}}$. We define the matrix
$A_{d}(x)$ with rows indexed by $n\in M_{2,d-1}$ and columns indexed by
$i=0,1,2$, with $(n,i)$-th entry $x_{n+e_{i}}$. Then
$x\in\nu_{d}(X)$ if and only if $P(x)=0$ and $A_{d}(x)$ has rank 1.
define $V$ as in Figure 1 with $f=\nu_{d}(P)$ and $\varphi_{i}$ the $i$th
column of $A_{d}(x)$. Then $\nu_{d}(X)=\text{Gr}_{(0,1,1)}(V)$. For more on
this construction, see [4].
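Reineke's rank-1 condition is easy to verify in coordinates. The sketch below (our code; the monomial ordering is one arbitrary but fixed choice) builds $\nu_{d}$ and $A_{d}(x)$ and checks that $A_{d}$ has rank 1 on the image of the embedding, with $M=\binom{d+2}{2}$ coordinates and $M^{\prime}=\binom{d+1}{2}$ rows:

```python
import numpy as np

def monomials(d):
    """Exponent tuples (m0, m1, m2) with m0 + m1 + m2 = d, in a fixed order."""
    return [(m0, m1, d - m0 - m1) for m0 in range(d + 1) for m1 in range(d + 1 - m0)]

def veronese(x, d):
    """The d-uple Veronese embedding: [x0, x1, x2] -> [..., x^m, ...]."""
    return np.array([x[0]**m[0] * x[1]**m[1] * x[2]**m[2] for m in monomials(d)])

def A_matrix(v, d):
    """Rows indexed by n in M_{2,d-1}, columns i = 0,1,2; entry v_{n + e_i}."""
    pos = {m: k for k, m in enumerate(monomials(d))}
    rows = []
    for n in monomials(d - 1):
        rows.append([v[pos[(n[0] + (i == 0), n[1] + (i == 1), n[2] + (i == 2))]]
                     for i in range(3)])
    return np.array(rows)

d = 3
v = veronese((1.0, 2.0, 3.0), d)
A = A_matrix(v, d)
```

Since the $(n,i)$ entry is $x^{n}x_{i}$, each row of $A_{d}$ is proportional to $(x_{0},x_{1},x_{2})$, which is why the matrix has rank 1 exactly on the Veronese image.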
Fix the quiver $Q$ to be the one in Figure 1 and let $d\in\mathbb{Z}^{\geq
0}$. Let $V_{d}^{f}$ be a $\overline{\Bbbk}$-representation of $Q$ such that
$\textbf{dim}(V_{d}^{f})=(1,M,M^{\prime})$, $\varphi_{i}$ is the $i$th column of
$A_{d}(x)$, and the preimage of the linear map $f$, denoted by
$\nu_{d}^{-1}(f)$, is irreducible as a homogeneous polynomial in
$\overline{\Bbbk}[x_{0},x_{1},x_{2}]$.
###### Definition 3.1.
Define $\mathcal{GR}_{(0,1,1)}(V)$ to be the category whose objects are
$\text{Gr}_{(0,1,1)}(V_{d}^{f})$ for all $d\in\mathbb{Z}^{\geq 0}$ and any $f$
such that $\nu_{d}^{-1}(f)$ is irreducible, and whose morphisms are those of
projective varieties. Let $\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V)$ be the full
subcategory of $\mathcal{GR}_{(0,1,1)}(V)$ whose objects are smooth.
###### Theorem 3.1.
The category of non-singular projective curves in
$\mathbb{P}^{2}_{\overline{\Bbbk}}$ with morphisms of varieties, (NPC) for
short, is isomorphic to $\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V)$.
###### Proof.
We begin by constructing a functor
$\nu:\text{(NPC)}\rightarrow\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V)$. For $X$ a
non-singular projective curve cut out by a homogeneous polynomial of degree
$d$, define $\nu(X):=\nu_{d}(X)$. Then $\nu(X)$ is non-singular since $X$ is,
and $\nu(X)\cong X$. Moreover, $\nu(X)=\text{Gr}_{(0,1,1)}(V_{d}^{f})$ for $f$
the homogeneous polynomial of degree $d$ that cuts out $X$, by Reineke’s
result, Theorem 2.2. Thus $\nu(X)$ is an object of
$\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V)$, and $\nu$ is well defined on objects.
Given a morphism $\psi:X\rightarrow Y$
between two non-singular projective curves cut out by homogeneous polynomials
of degree $d$ and $d^{\prime}$ respectively, define
$\nu(\psi):\nu(X)\rightarrow\nu(Y)$ by
$\nu_{d^{\prime}}\circ\psi\circ\nu_{d}^{-1}$. Since $\nu(\psi)$ is a morphism
of varieties and $\nu$ preserves the identity and composition, $\nu$ defines a
functor.
It is well known that the $d$-uple Veronese embedding is an isomorphism onto
its image. Notice by construction, for any $d$, the image of $\nu_{d}$
restricted to projective curves cut out by a homogeneous polynomial of degree
$d$ is equal to the collection of $\text{Gr}_{(0,1,1)}(V_{d}^{f})$. This
allows us to define
$\nu^{-1}:\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V)\rightarrow(\text{NPC})$
analogously to $\nu$, and these two functors are inverse. ∎
By taking $\Bbbk=\mathbb{C}$, an immediate consequence of Theorems 2.2 and 3.1
is the following 4-fold categorical equivalence.
###### Corollary 3.2.
The following four categories are equivalent:
1. 1.
Compact connected Riemann Surfaces with holomorphic maps.
2. 2.
Field extensions of transcendence degree one over $\mathbb{C}$ with field
morphisms.
3. 3.
Smooth projective curves in $\mathbb{P}^{2}_{\overline{\mathbb{C}}}$ with
morphisms of varieties.
4. 4.
$\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V)$ where $V$ is a
$\mathbb{C}$-representation of $Q$. $\square$
Recall that by definition, elliptic curves are non-singular curves of genus
one; however, every such curve can be written as the locus in
$\mathbb{P}_{\overline{\Bbbk}}^{2}$ of a cubic equation with the base point on
the line at $\infty$ [9]. The next corollary follows from the fact that
elliptic curves are the vanishing locus of an irreducible homogeneous
polynomial of degree 3 and the fact that the functor $\nu$ restricts to an
isomorphism, namely $\nu_{3}$.
###### Corollary 3.3.
The category of elliptic curves in $\mathbb{P}_{\overline{\Bbbk}}^{2}$ with
morphisms of varieties is equivalent to
$\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V_{3})$, the full subcategory of
$\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V)$ whose objects are
$\text{Gr}_{(0,1,1)}(V_{3}^{f})$ for $f$ such that
$\nu_{3}^{-1}(f)$ is irreducible. ∎
###### Remark 3.1.
Since in Corollary 3.3 we do not take $\Bbbk=\mathbb{C}$, this equivalence
along with the linear-algebraic nature of representations of quivers may prove
to be useful in the study of rational points of elliptic curves. Moreover, one
may be able to use the moduli space of quiver grassmannians to study that of
elliptic curves and vice versa.
Corollary 3.3 also provides us with a way to remove the artificial imposition
of smoothness in Definition 3.1. In determining smoothness of
$\text{Gr}_{(0,1,1)}(V_{d}^{f})$, it suffices to check the Jacobian criterion
on the equations that cut out the quiver grassmannian; however, after
embedding into a higher dimensional projective space there can be several of
these equations and checking this criterion can quickly become computationally
expensive. We do however have the following proposition.
###### Proposition 3.4.
Suppose the characteristic of $\Bbbk$ is not 2 or 3 and consider a quiver
grassmannian $\text{GR}_{(0,1,1)}(V_{3}^{f})$. Then
$\text{GR}_{(0,1,1)}(V_{3}^{f})$ is smooth if and only if it is isomorphic to
$\text{GR}_{(0,1,1)}(V_{3}^{\xi})$ where $\xi=x_{7}-x_{0}-ax_{5}-bx_{9}$ and
$4a^{3}+27b^{2}\neq 0$.
###### Proof.
By definition, a curve in $\mathbb{P}_{\overline{\Bbbk}}^{2}$ cut out by a
degree 3 homogeneous polynomial is smooth if and only if it is an elliptic
curve. For elliptic curves, however, it is known that each curve can be written in reduced Weierstrass form as $y^{2}=x^{3}+ax+b$ when the field does not have characteristic 2 or 3 [9]. Upon realizing this in homogeneous coordinates,
we get the equation $y^{2}z-x^{3}-axz^{2}-bz^{3}=0$. To attain the
corresponding quiver grassmannian, we analyze the 3-uple Veronese embedding:
$\nu_{3}(x,y,z)=(x^{3},x^{2}y,x^{2}z,xy^{2},xyz,xz^{2},y^{3},y^{2}z,yz^{2},z^{3}).$
Relabeling the $\mathbb{P}_{\overline{\Bbbk}}^{9}$ coordinates as
$(x_{0},x_{1},\dots,x_{9})$, any elliptic curve is the solution set to
$x_{7}-x_{0}-ax_{5}-bx_{9}=0$. Letting $\xi=x_{7}-x_{0}-ax_{5}-bx_{9}$, the
corresponding quiver grassmannian is $\text{Gr}_{(0,1,1)}(V_{3}^{\xi})$. Now
by Corollary 3.3, we have that $\text{Gr}_{(0,1,1)}(V_{3}^{\xi})$ is an object
of $\mathcal{GR}_{(0,1,1)}^{\text{sm}}(V_{3})$ if and only if $\xi$ is smooth,
which occurs if and only if $4a^{3}+27b^{2}\neq 0$ [9]. ∎
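The substitution in the proof can be checked symbolically. The following sympy sketch (variable names are ours, not from the text) verifies that the Veronese pullback of $\xi$ is exactly the homogeneous Weierstrass cubic, and that the condition $4a^{3}+27b^{2}\neq 0$ is the non-vanishing of the discriminant of $x^{3}+ax+b$:

```python
import sympy as sp

x, y, z, a, b = sp.symbols('x y z a b')

# 3-uple Veronese embedding: coordinates (x_0, ..., x_9) of P^9
veronese = [x**3, x**2*y, x**2*z, x*y**2, x*y*z,
            x*z**2, y**3, y**2*z, y*z**2, z**3]

# xi = x_7 - x_0 - a*x_5 - b*x_9 pulled back along nu_3
xi = veronese[7] - veronese[0] - a*veronese[5] - b*veronese[9]
weierstrass = y**2*z - x**3 - a*x*z**2 - b*z**3
assert sp.simplify(xi - weierstrass) == 0

# smoothness criterion: x^3 + a*x + b has no repeated root
# iff its discriminant -(4a^3 + 27b^2) is nonzero
assert sp.discriminant(x**3 + a*x + b, x) == -4*a**3 - 27*b**2
```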
Using Corollary 3.3 we can also see that the objects of
$\text{GR}_{(0,1,1)}^{\text{sm}}(V_{3}^{f})$ can be endowed with a commutative
group structure inherited from that of elliptic curves. In the case
$\Bbbk=\mathbb{C}$, we can use Corollary 3.2 to further state that these
quiver grassmannians are also isomorphic to connected compact Riemann surfaces
of genus 1, hence complex tori.
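The inherited group structure mentioned above is the classical chord-tangent law on the Weierstrass model. A small illustrative sketch over $\mathbb{Q}$ (the `ec_add` helper is ours; affine points only, with the identity at infinity and inverse-pair cases omitted):

```python
from fractions import Fraction

def ec_add(P, Q, a):
    """Chord-tangent addition on y^2 = x^3 + a*x + b (affine points only)."""
    (x1, y1), (x2, y2) = P, Q
    if (x1, y1) == (x2, y2):
        lam = Fraction(3*x1**2 + a, 2*y1)   # tangent slope
    else:
        lam = Fraction(y2 - y1, x2 - x1)    # chord slope
    x3 = lam**2 - x1 - x2
    return (x3, lam*(x1 - x3) - y1)

# on y^2 = x^3 + 1 (a = 0, b = 1; smooth since 4a^3 + 27b^2 = 27 != 0):
P, Q = (0, 1), (2, 3)
R = ec_add(P, Q, a=0)
assert R == (-1, 0) and R[1]**2 == R[0]**3 + 1   # the sum lies on the curve
assert ec_add(P, P, a=0) == (0, -1)              # 2P = -P, so P has order 3
```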
###### Corollary 3.5.
Let $X$ be a connected compact Riemann surface. Then the following are
equivalent:
1. 1.
$X$ has genus 1.
2. 2.
$X$ has a structure of an analytic group.
3. 3.
$X$ has a commutative analytic group structure.
4. 4.
$X\cong{\mathbb{C}\over\mathbb{Z}l+\mathbb{Z}w}$
5. 5.
$X\cong C_{F}$ where $C_{F}$ is an elliptic curve.
6. 6.
$X\cong\text{Gr}_{(0,1,1)}^{\text{sm}}(V_{3}^{f})$ for some $f.\hfill\square$
## 4 References
1. [1]
Cerulli Irelli, G. Three Lectures on Quiver Grassmannians, Preprint,
arXiv:2003.08265 [math.AG], 2020.
2. [2]
Hartshorne, R. Algebraic Geometry, Springer, Volume 52, 1977.
3. [3]
Hille, L. Moduli of representations, quiver Grassmannians and Hilbert schemes,
Preprint, arXiv:1505.06008, [math.RT], 2015.
4. [4]
Reineke, M. Every Projective Variety is a Quiver Grassmannian, Algebras and
Representation Theory, 16, 1313-1314, 2013.
5. [5]
Ringel, C. The Eigenvector Variety of a Matrix Pencil, Linear Algebra Appl.,
531 (2017), 447-458.
6. [6]
Ringel, C. Quiver Grassmannians and Auslander varieties for wild algebras, J. of Alg., Volume 402, 351-357, 2014.
7. [7]
Ringel, C. Quiver Grassmannians for Wild Acyclic Quivers, Proc. AMS, 146, n.
5, 2018.
8. [8]
Schofield, A. General representations of quivers, Proc. London Math. Soc. (3)
65 (1992), no. 1, 46–64.
9. [9]
Silverman, Joseph H. The Arithmetic of Elliptic Curves, Springer, Volume 106,
2009.
# Microlensing effects of blackhole-like wormholes
Ke Gao, Lei-Hua Liu, Department of Physics, College of Physics, Mechanical and Electrical Engineering, Jishou University, Jishou 416000, China
Mian Zhu, Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, P.R. China; Jockey Club Institute for Advanced Study, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, P.R. China; Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, 30-348 Krakow, Poland
###### Abstract
In this paper, we investigate the microlensing effects of blackhole-like wormholes. We evaluate the deflection angle up to second order under the weak field approximation with the Gauss-Bonnet theorem. We elaborate on the deflection angle of the Ellis-Bronnikov wormhole as an example. Following the same procedure, we study the magnification of three typical wormholes (WH): the Schwarzschild WH, the Kerr-like WH, and the RN WH, as well as their blackhole counterparts. We find that the prograde case of the Kerr-like metric leads to multiple peaks of magnification when the mass part is comparable to the angular momentum part. Moreover, the first two gentle peaks of the Kerr blackhole are larger than those of the wormhole by one order of magnitude, while the main peaks of Kerr blackholes and wormholes are of the same order. For the other cases, the magnification behavior of wormholes and their corresponding blackholes is similar. Our result may shed new light on exploring compact objects through the microlensing effect.
## I I. Introduction
Wormhole (WH) Morris:1988cz ; Einstein:1935tc ; Fuller:1962zza ;
Bronnikov:1973fh ; Ellis:1973yv is a hypothetic geometric structure
connecting two otherwise remote regions. Wormholes may permit faster-than-
light travel and time travel Morris:1988tu . Furthermore, in the framework of
General Relativity, the construction of traversable wormholes requires the
violation of Null Energy Condition (NEC) Hochberg:1998ii ; Hochberg:1998ha ,
and exotic matter beyond our current scope is necessary. Hence, the existence of wormholes may improve our understanding of new physics, and it is thus important to study wormhole physics.
Gravitational lensing is a promising approach to search wormholes
Narayan:1996ba ; Bartelmann:1999yn ; Perlick:2004tq . In the literature, the
lensing effect of a wormhole is extensively studied Safonova:2002si ;
TejeiroS:2005ltc ; Nandi:2006ds ; Abe:2010ap ; Toki:2011zu ; Yoo:2013cia ;
Takahashi:2013jqa ; Izumi:2013tya ; Kuhfittig:2013hva ; Nakajima:2014nba ;
Tsukamoto:2016zdu ; Shaikh:2017zfl ; Asada:2017vxl ; Shaikh:2018oul ;
Shaikh:2019itn ; Javed:2019qyg ; Shaikh:2019jfr ; Dai:2019mse ;
Simonetti:2020ivl ; Bambi:2021qfo ; Godani:2021aub ; Liu:2022lfb ;
Petters:2010an . Conventionally, wormholes are treated as dark compact objects Cardoso:2019rvt . The resulting lensing effect is relevant to the asymptotic behavior of the wormholes, while their geometric and topological structures do not play an important role. In this sense, a large variety of wormholes mimics black holes (BHs), and it would be difficult to distinguish them by astrophysical observations such as lensing, accretion and quasi-normal-mode ringing Damour:2007ap ; Tsukamoto:2012xs . Thus, people
are motivated to distinguish wormholes from other compact objects with various
techniques such as shadow Amir:2018pcu ; Kasuya:2021cpk , accretion
Shaikh:2019hbm ; Karimov:2020fuj , deflection angle of massive particles
Jusufi:2018gnz and quasi-normal ringing Konoplya:2016hmd .
In this paper, we proceed with a slightly different approach. We wish to study
the lensing effect when the light ray is close to the wormhole
throat/blackhole horizon. The lensing effect of the simplest Ellis wormhole in
the strong-field limit is well-studied Tsukamoto:2016qro ; Tsukamoto:2016jzh .
However, the magnification of more generic wormholes might be hard to
evaluate. Hence, we still work in the weak field approximation. That is, the
impact parameter $b_{I}$ is much larger than the intrinsic parameters of the
wormhole/blackhole, such as the mass of a Schwarzschild metric. However, we
evaluate the deflection angle upon second order, and numerically study the
magnification of each case. For convenience, we adopt the technique introduced
in Gibbons:2008rj ; Gibbons:2008zi ; Werner:2012rc , where the deflection
angle is evaluated through the Gaussian-Bonnet theorem (GBT). The GBT
formalism is wildly applied to study the deflection angle of various wormhole
models Jusufi:2017vta ; Jusufi:2017mav ; Jusufi:2017drg ; Ovgun:2018fnk ;
Ono:2018ybw ; Jusufi:2018kmk ; Ovgun:2018prw ; Ovgun:2020yuv .
We organize the paper as follows. In section I, we briefly introduce wormhole
physics, gravitational lensing physics, and how to evaluate the deflection
angle using GBT formalism. We explicitly show how GBT formalism works by using
Ellis wormhole as an example in section II. We then study the magnification of
Schwarzschild WH/BH in section III, Kerr WH/BH in section IV, and RN WH/BH in
section V. We find that it is possible to distinguish Kerr WH and BH through
the difference of magnitudes between their gentle peaks and main peaks. We
conclude in section VI.
## II I. Basic formalism
In this section, we first review the basics of wormhole physics. After that, we discuss gravitational lensing and show how to use the GBT formalism to study the lensing physics.
### II.1 Wormhole physics
For simplicity, we shall consider static spherically symmetric wormholes only.
We start with the Morris-Thorne wormhole Morris:1988cz ; Morris:1988tu . The
metric is given by
$ds^{2}=-e^{2\Lambda\left(r\right)}dt^{2}+\frac{dr^{2}}{1-b\left(r\right)/r}+r^{2}d\Omega_{2}^{2},$
(1)
where $d\Omega_{2}^{2}$ is the metric of a unit 2-sphere. The function
$\Lambda\left(r\right)$ and $b\left(r\right)$ are the redshift function and
shape function, respectively. The wormhole structure is characterized by its
throat that connects two regions of spacetime. We illustrate a typical
wormhole structure in figure 1, in which we only consider the microlensing effects occurring on one side (spacetime 2 or 1).
Figure 1: This figure shows a wormhole connecting two spacetimes; the shaded part in the middle marks the structure of the throat, and $b_{0}$ is the radius of the throat.
We impose the flare-out condition, which states that the wormhole throat is a minimal surface in the embedded spacetime. For metric (1), the condition is given by
$b(r_{0})=r_{0}~{},~{}\frac{b(r)-rb^{\prime}(r)}{2b(r)^{2}}>0~{},$ (2)
with $b^{\prime}(r)\equiv db(r)/dr$, where $r=r_{0}$ labels the location of the throat. We see that the structure of the wormhole is solely determined by the shape function $b(r)$. We also impose asymptotic flatness, which sets $\displaystyle\lim_{r\to\infty}b(r)/r=0$. Finally, a traversable wormhole should have no horizon, i.e. $g_{tt}\neq 0$ everywhere. In the metric (1) this translates into $\Lambda(r)$ being finite everywhere.
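As a quick consistency check, the Ellis-type shape function $b(r)=r_{0}^{2}/r$ (which appears in eq. (11) of the next section) satisfies all three conditions; a sympy sketch:

```python
import sympy as sp

r, r0 = sp.symbols('r r_0', positive=True)
b = r0**2/r                      # Ellis-type shape function, cf. eq. (11)

# throat condition b(r_0) = r_0
assert sp.simplify(b.subs(r, r0) - r0) == 0
# flare-out condition: (b - r*b')/(2*b**2) > 0
flare = sp.simplify((b - r*sp.diff(b, r))/(2*b**2))
assert flare == r/r0**2          # positive for all r > 0
# asymptotic flatness: b(r)/r -> 0 as r -> infinity
assert sp.limit(b/r, r, sp.oo) == 0
```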
### II.2 Gravitational lensing
A typical gravitational lensing geometry is illustrated in figure 2. For an
infinitesimal source, the images observed will be magnified or demagnified due
to the change of cross-section of a bundle of rays. The magnification is
determined by the ratio between the solid angles
$|\mu|=\frac{d\omega_{i}}{d\omega_{s}}=\left|\frac{\beta}{\theta}\frac{d\beta}{d\theta}\right|^{-1}~{}.$
(3)
Figure 2: This plot shows three situations of the geometry of lensing via the
wormhole source depicted by the purple ball $W$ corresponding to different
relative positions of source, observer and wormhole. $O$ and $S$ are observers
and light sources, respectively. $I$ is the image of $S$, and $\theta$ is the
angle of the corresponding image and the wormhole. $\alpha$ is the deflection
angle, and $\beta$ is the angle between the wormhole and the light source.
$b_{0}$ is the throat of the wormhole. $D_{l},D_{ls}$ and $D_{s}$ are angular
diameter distances. Other quantities are auxiliary.
The lensing geometry in figure 2 gives the lens equation
$\beta=\theta-\frac{D_{ls}}{D_{s}}\alpha.$ (4)
Hence, if we work out the deflection angle $\alpha$ as a function of $\theta$,
we can use (4) to get $\beta(\theta)$. Then, with the help of (3) we have the
magnification $|\mu|$, which is an important observable in astrophysics.
Finally, the lens equation (4) may admit more than one solution of
$\beta(\theta)$, corresponding to multiple images. For simplicity, we shall
consider the microlensing case, where the separation of images is too small to
be resolved by existing telescopes. In this case, we observe the combined
light intensity, i.e. the observed magnification should be the summation of
magnifications of each image:
$|\mu_{total}|=\sum_{i}|\mu_{i}|~{}.$ (5)
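Equations (3)-(5) can be illustrated with the textbook point-lens case ($\alpha=4M/b_{I}$ at first order, giving $\beta=\theta-\theta_{E}^{2}/\theta$), which is not one of the wormhole metrics below but has a closed-form total magnification to check against. A minimal sketch (the `total_magnification` helper is ours):

```python
import numpy as np

def total_magnification(beta, theta_E):
    """Sum of |mu_i| over the two point-lens images, eqs. (3)-(5)."""
    disc = np.sqrt(beta**2 + 4*theta_E**2)
    mu = 0.0
    for th in [(beta + disc)/2, (beta - disc)/2]:   # image positions
        dbeta_dtheta = 1 + theta_E**2/th**2         # from beta = theta - theta_E^2/theta
        mu += 1.0/abs(beta/th*dbeta_dtheta)
    return mu

u = 0.5                                   # source angle in units of theta_E
mu_num = total_magnification(u, 1.0)
mu_analytic = (u**2 + 2)/(u*np.sqrt(u**2 + 4))     # known point-lens result
assert abs(mu_num - mu_analytic) < 1e-12
```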
### II.3 Formalism with Gauss-Bonnet Theorem
For scenarios with relatively strong gravity and potentially non-trivial geometry, GBT is useful for calculating the deflection angle, since it provides a purely geometric description of gravitation. In the GBT formalism, the deflection angle is given by
$\alpha=-\int{\int_{D_{\infty}}{\mathcal{K}d\sigma}},$ (6)
where $D_{\infty}$ is a domain outside the light ray, $\mathcal{K}$ stands for the Gaussian optical curvature and $d\sigma$ is the elementary surface area of the optical geometry.
Figure 3: The geometry of lensing from Ref. Gibbons:2008rj . Two geodesics
$\gamma_{1}$ and $\gamma_{2}$ are deflected by a lens at $L$. $D_{1}$ and
$D_{2}$ are two domains with boundary curves $\gamma_{L}$ and $\gamma_{P}$.
In the lensing geometry as illustrated in figure 3, the formula (6) becomes
$\iint\limits_{D}{\mathcal{K}}\,\,d\sigma+\oint\limits_{\partial
D}{\kappa\,\,dt+\sum_{i}{\alpha_{i}}}=2\pi,$ (7)
where the domain $D$ is simply-connected and $\partial D$ is its boundary, and $\sum_{i}{\alpha_{i}}$ is the sum of the exterior angles when the domain is taken as a polygon. The function $\kappa$ is the geodesic curvature. Note that the exterior and interior angles are connected by
$\theta_{S}=\pi-\alpha_{S}~{},~{}\theta_{O}=\pi-\alpha_{O}~{},$ (8)
and $\kappa$ vanishes if and only if the line integration takes on geodesics.
Hence, we further have
$\theta_{S}+\theta_{O}=2\pi+\oint_{\gamma_{L}}{\kappa
dt}+\iint_{A_{1}}{\mathcal{K}\,\,d\sigma},$ (9)
Finally, in the limit $r\to\infty$, we have $\theta_{S}+\theta_{O}=\pi$. With
the wormhole geometry, we can then evaluate the integrals in equation (9) and
get the deflection angle.
## III II. Ellis wormhole as an example
In this section, we illustrate how the GBT formalism (6) works by applying it to the Ellis wormhole, the simplest traversable wormhole model. The metric of an Ellis wormhole is given by
$ds^{2}=-dt^{2}+dr^{2}+\left(r^{2}+r_{0}^{2}\right)d\Omega_{2}^{2},$ (10)
and after a coordinate transformation $\rho=\sqrt{r^{2}+r_{0}^{2}}$, the
metric returns to Morris-Thorne form (1):
$ds^{2}=-dt^{2}+\frac{d\rho^{2}}{1-r_{0}^{2}/\rho^{2}}+\rho^{2}d\Omega_{2}^{2}~{},$
(11)
and it’s easy to see that the throat radius is $r_{0}$.
For a photon, the null condition is $ds^{2}=0$. We may also simplify the problem by working in the equatorial plane, $\theta=\pi/2$. The geodesic of a photon is then described by
Now we introduce auxiliary variables with $du=dr$ and $\zeta(u)=\sqrt{r^{2}+r_{0}^{2}}$, such that equation (12) becomes
$dt^{2}=h_{ab}d\lambda^{a}d\lambda^{b}=du^{2}+\zeta^{2}\left(u\right)d\varphi^{2}.$
(13)
The Gaussian optical curvature is then
$\mathcal{K}=-\frac{1}{\zeta\left(u\right)}\left[\frac{dr}{du}\frac{d}{dr}\left(\frac{dr}{du}\right)\frac{d\zeta}{dr}+\left(\frac{dr}{du}\right)^{2}\frac{d^{2}\zeta}{dr^{2}}\right].$
(14)
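For the Ellis metric (13), where $dr/du=1$, the general expression (14) collapses to $\mathcal{K}=-\zeta''(u)/\zeta(u)$; a sympy check of the resulting curvature:

```python
import sympy as sp

u, r0 = sp.symbols('u r_0', positive=True)
zeta = sp.sqrt(u**2 + r0**2)     # zeta(u) for the Ellis wormhole

# with dr/du = 1, eq. (14) reduces to K = -zeta''/zeta
K = -sp.diff(zeta, u, 2)/zeta
assert sp.simplify(K + r0**2/(u**2 + r0**2)**2) == 0
```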
Moreover, we have $\kappa dt=d\varphi$, then equation (9) becomes
$\int_{0}^{\pi+\alpha}{d\varphi}+\int_{0}^{\pi}{\int^{\infty}_{b/\sin\varphi}{\mathcal{K}\,\,\sqrt{\det
h_{ab}}drd\varphi}}=\pi~{},$ (15)
and the deflection angle is
$\alpha=-\int_{0}^{\pi}{\int^{\infty}_{b/\sin\varphi}{\mathcal{K}\sqrt{\det
h_{ab}}}}drd\varphi.$ (16)
Here, the $r$ integral ranges from the source to the observer. Using the lens geometry in figure 2 and with the help of equations (13) and (14), we finally get
$\alpha=\varphi-\csc\varphi\,\sqrt{\frac{r_{0}^{2}+2b_{I}^{2}-r_{0}^{2}\cos(2\varphi)}{2\left(r_{0}^{2}+b_{I}^{2}\csc^{2}\varphi\right)}}\;E\!\left(\varphi,-\frac{r_{0}^{2}}{b_{I}^{2}}\right)$
(17)
where $E\left(\varphi,-\frac{r_{0}^{2}}{b_{I}^{2}}\right)$ is the incomplete elliptic integral of the second kind. In the weak field approximation, the impact parameter $b_{I}$ is much greater than the throat radius $r_{0}$. Expanding the result in a Taylor series with respect to $r_{0}/b_{I}$ gives
$\alpha=\frac{\pi}{4}\left(\frac{r_{0}}{b_{I}}\right)^{2}+\frac{3\pi}{32}\left(\frac{r_{0}}{b_{I}}\right)^{4}+\mathcal{O}\left(\frac{r_{0}}{b_{I}}\right)^{6}~{},$
(18)
where the first term agrees with Nakajima:2012pu . In this section, we only implemented GBT to calculate the deflection angle, which is consistent with the geodesic method. Since the ADM mass of the Ellis wormhole is zero, one cannot find its corresponding blackhole. In the following examples, we will investigate the microlensing effects of the Schwarzschild wormhole, the Kerr-like wormhole and the RN-like wormhole, calculating their magnification under GBT as well as that of the corresponding BHs.
## IV III. Schwarzschild wormhole (blackhole)
In this section, we will investigate the microlensing effects of the
Schwarzschild wormhole and its corresponding blackhole.
### IV.1 Metric I
The metric of Schwarzschild wormhole can be written as follows Damour:2007ap ,
$ds^{2}=-\left(1-\frac{2M}{r}+\lambda^{2}\right)dt^{2}+\frac{dr^{2}}{1-\frac{2M}{r}}+r^{2}d\varOmega^{2},$
(19)
where $\lambda$ is a parameter, the Schwarzschild blackhole is restored as $\lambda=0$, and $8\pi G=1$. The key case is nonzero $\lambda$, which leads to a wormhole geometry with throat $r_{0}=2M$. We will use $\lambda$ as a free parameter to simulate the total magnification. Inserting the metric into the Gaussian curvature (14), one obtains its second order as follows Ovgun:2018fnk ,
$\mathcal{K}=\frac{6(\lambda^{2}+1)-7(\lambda^{2}+1)rM^{2}+r^{2}(\lambda^{2}+2)M}{(-r+2M)r^{4}}.$
(20)
Note that this Gaussian curvature is an exact formula that does not use the weak field approximation. In order to capture more information via the lensing effects, we calculate the corresponding deflection angle up to second order in $\frac{M}{b_{I}}$ using the weak field approximation,
$\alpha\approx\frac{4M}{b_{I}}+\frac{2M\lambda^{2}}{b_{I}}+\frac{7M^{2}\pi}{4b_{I}^{2}}+\frac{7M^{2}\pi\lambda^{2}}{4b_{I}^{2}}.$
(21)
Its first order is consistent with Ref. Ovgun:2018fnk . The deflection angle (21) of the Schwarzschild wormhole reduces to that of the Schwarzschild blackhole at $\lambda=0$. Consistent with the weak field approximation, $\lambda$ is also taken to be small. Thus, the deflection angle (21) is sufficient for investigating the microlensing effects of both the Schwarzschild wormhole and the Schwarzschild blackhole.
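The magnification pipeline of the next subsection can be sketched directly from (21) together with (3)-(4). For tiny $M$ and $\lambda=0$ the result approaches the point-lens value, which serves as a sanity check (the parameter values here are illustrative, not those of figure 4):

```python
import numpy as np

def alpha(b, M, lam):
    """Deflection angle of eq. (21)."""
    return 4*M/b + 2*M*lam**2/b + 7*np.pi*M**2*(1 + lam**2)/(4*b**2)

def magnification(theta, D_l, Dls_over_Ds, M, lam, h=1e-9):
    """mu(theta) from eqs. (3) and (4), with b_I = D_l*theta."""
    beta = lambda th: th - Dls_over_Ds*alpha(D_l*th, M, lam)
    dbeta = (beta(theta + h) - beta(theta - h))/(2*h)   # numerical d(beta)/d(theta)
    return 1.0/abs(beta(theta)/theta*dbeta)

M, lam, D_l = 1e-6, 0.0, 1.0
theta_E = np.sqrt(4*M*0.5/D_l)            # Einstein angle for lambda = 0
mu = magnification(2*theta_E, D_l, 0.5, M, lam)
u = 1.5                                   # beta/theta_E at theta = 2*theta_E
mu_point = 0.5 + (u**2 + 2)/(2*u*np.sqrt(u**2 + 4))   # point-lens outer image
assert abs(mu - mu_point) < 0.01
```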
### IV.2 Magnification I
In this part, we study the microlensing effects of the Schwarzschild wormhole and its corresponding blackhole via the magnification. Implementing Eqs. (5) and (3), we obtain the magnification shown in figure 4, where we numerically simulate the magnification as a function $\mu\equiv\mu(r_{0},\lambda,b_{I})$. To describe the impact of the throat structure on the lensing effects, we vary $r_{0}=2M$ ($G=1$) in figure 4. The scale we consider is within a galaxy, whose radius is around $5-100~{}\rm kpc$; thus we set $D_{l}=10~{}\rm kpc$ as a reasonable input, and we also set $\frac{D_{ls}}{D_{s}}=\frac{1}{2}$ for simplicity.
Figure 4: The upper panel shows the magnification of metric (19), in which we have set $\lambda=0.1$ (dimensionless parameter) and $r_{0}=0.03,~{}0.06,~{}0.09~{}\rm kpc$. The lower panel shows the magnification for various $\lambda$ with fixed $r_{0}=0.06~{}\rm kpc$. We have set $D_{l}=10~{}\rm kpc$ for both plots. The blue line corresponds to the case of the Schwarzschild blackhole.
Figure 4 shows the magnification as a function of $b_{I}$. The upper panel shows that the peak magnification of the Schwarzschild wormhole is enhanced as $r_{0}$ increases with fixed $\lambda=0.1$. The lower panel shows that $\mu$ is enhanced by increasing $\lambda$, where the blue line corresponds to the Schwarzschild blackhole. Thus, we may conclude that for a given mass the magnification of the Schwarzschild blackhole is minimal. From an observational perspective, we could distinguish these two objects via their peaks, since one can determine the mass of a compact object without any charge while fixing the distances $D_{ls}$, $D_{s}$ and $D_{l}$.
## V IV. Kerr wormhole (blackhole)
In this section, we will investigate the microlensing effects of Kerr-like
wormhole and its corresponding blackhole.
### V.1 Metric II
The Kerr-like wormhole was proposed in Bueno:2017hyj ; its corresponding metric reads as follows,
$\begin{split}ds^{2}=-\left(1-\frac{2Mr}{\Sigma}\right)dt^{2}-\frac{4Mar\sin^{2}(\theta)}{\Sigma}dtd\phi+\frac{\Sigma}{\hat{\Delta}}dr^{2}+\\\
\Sigma
d\theta^{2}+\left(r^{2}+a^{2}+\frac{2Ma^{2}r\sin^{2}\theta}{\Sigma}\right)\sin^{2}\theta
d\phi^{2},\end{split}$ (22)
with
$\begin{split}\Sigma=r^{2}+a^{2}\cos^{2}(\theta),~{}\hat{\Delta}=r^{2}-2M(1+\lambda^{2})r+a^{2}\end{split}$
(23)
where $a\equiv\frac{J}{Mc}$ ($c=1$), with $J$ the angular momentum. For $\lambda=0$ the geometry becomes that of the Kerr blackhole. For non-trivial $\lambda$, the topology changes dramatically. The radius of the throat is given by $\hat{\Delta}=0$, namely $r_{+}=M(1+\lambda^{2})+\sqrt{M^{2}(1+\lambda^{2})^{2}-a^{2}}$. Points with $r<r_{+}$ do not exist in this spacetime. Since we cannot easily identify $r_{0}=r_{+}$ in metric (22), we will retain $M$ and $a$ to simulate the magnification.
### V.2 Deflection angle II
The first essential quantity is the deflection angle. Ref. Ovgun:2018fnk has already evaluated the deflection angle to first order, $\alpha\approx\frac{2M(\lambda^{2}+2)}{b_{I}}\pm\frac{4Ma}{b_{I}^{2}}$, in which the $-$ (minus) sign denotes the prograde light ray. The prograde case changes the structure of the magnification, since the first and second terms contribute with opposite signs. In light of this simple observation, the higher orders of the deflection angle are also essential.
We will follow the method of Ono:2017pie to calculate the deflection angle of
metric (22). First, we need to calculate its corresponding $d\sigma=\sqrt{\det
h_{ab}}drd\varphi$ as follows,
$\small
d\sigma=\sqrt{\frac{\Sigma^{2}}{\hat{\Delta}(\Sigma-2mr)}\bigg{(}r^{2}+a^{2}+\frac{2a^{2}mr}{\Sigma-2mr}\bigg{)}\frac{\Sigma}{(\Sigma-2mr)}}drd\varphi$
(24)
where $\hat{\Delta}$ and $\Sigma$ are defined in eq. (23). Since the Kerr-like wormhole (blackhole) rotates, the geodesic curvature does not vanish. Hence we obtain
$\kappa\approx-\frac{2aM}{r^{3}}+\frac{2M^{2}a\lambda^{2}}{r^{4}}-\frac{2aM^{2}}{r^{4}}+\mathcal{O}\bigg{(}\frac{1}{r^{5}}\bigg{)},$
(25)
where the first term is consistent with eq. (36) of Ono:2017pie . We also adopt the same approximations as Ono:2017pie , namely $b_{I}\approx r/\cos\theta$ and $l\approx b_{I}\tan\theta$ ($0<\theta<2\pi$), in which $\theta$ can be approximated by $2\pi$ when the observer and the light source are very remote from each other (approximated as infinitely far for simplicity). Meanwhile, in transforming to the variable $b_{I}$, we have used the weak field approximation $M,~{}a\ll b_{I},~{}r$. Here, we should emphasize that this calculation of $\kappa$ is for the prograde case ($dl>0$). For the retrograde case the calculation is the same but the sign is opposite; thus one can write down the geodesic curvature as follows,
$\kappa\approx\pm\bigg{(}-\frac{2aM}{r^{3}}+\frac{2M^{2}a\lambda^{2}}{r^{4}}-\frac{2aM^{2}}{r^{4}}+\mathcal{O}\big{(}\frac{1}{r^{5}}\big{)}\bigg{)}$
(26)
Secondly, we consider the Gaussian curvature; we directly adopt the result of Ovgun:2018fnk , since $d\sigma$ has already been approximated to the second order,
$\mathcal{K}\approx\frac{(\lambda^{2}+2)M}{r^{3}}.$ (27)
Meanwhile, adopting the linear approximation in $b_{I}$, we obtain the deflection angle from the Gaussian curvature and the geodesic curvature,
$\alpha=-\int_{\phi_{S}}^{\phi_{R}}\int_{r}^{\infty}K\sqrt{\gamma}drd\phi+\int_{l_{S}}^{l_{R}}\kappa
dl,\ $ (28)
where we set $l_{R}$ and $l_{S}$ to infinity. We first obtain the prograde case,
$\small\alpha_{\rm
pro}=\frac{2M(\lambda^{2}+2)}{b_{I}}-\frac{4Ma}{b_{I}^{2}}+\frac{3M^{2}\pi(\lambda^{2}+2)}{2b_{I}^{2}}-\frac{aM^{2}\pi(\lambda^{2}+5)}{b_{I}^{3}}.$
(29)
The retrograde case is as follows,
$\small\alpha_{\rm
re}=\frac{2M(\lambda^{2}+2)}{b_{I}}+\frac{4Ma}{b_{I}^{2}}+\frac{3M^{2}\pi(\lambda^{2}+2)}{2b_{I}^{2}}-\frac{3aM^{2}\pi(\lambda^{2}+1)}{b_{I}^{3}}.$
(30)
The first two terms of eq. (29) and eq. (30) are in accordance with Ovgun:2018fnk . Before investigating the magnification of the Kerr-like wormhole (blackhole), we first compare the deflection angles of the prograde and retrograde cases. Figure 5 shows the deflection angle for the prograde (29) and retrograde (30) cases. In the upper panel, we find that the difference becomes apparent for $\lambda=0.01$ and $a=1$ when $b_{I}$ is smaller than $2~{}\rm kpc$. In the lower panel, our numerical results show that the two cases can hardly be distinguished for $\lambda=0.1$, $a=0.1$. From figure 5, we find that a large angular momentum $a$ leads to an apparent difference between the two cases, which means that a fast-rotating Kerr-like wormhole (blackhole) leads to distinctive microlensing effects.
Figure 5: This plot shows the deflection angle for cases (29) and (30). For both cases, we have set $M=0.01$ (weak field approximation). In the upper panel, we have set $\lambda=0.01$, $a=1$. The blue solid line corresponds to the prograde case (29) and the yellow solid line corresponds to the retrograde case (30). For the lower panel, we have set $\lambda=0.1$, $a=0.1$.
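The prograde-retrograde split seen in figure 5 is controlled by the $a$-dependent terms of (29) and (30); their difference can be checked symbolically:

```python
import sympy as sp

M, a, lam, b = sp.symbols('M a lambda b_I', positive=True)

alpha_pro = (2*M*(lam**2 + 2)/b - 4*M*a/b**2
             + 3*sp.pi*M**2*(lam**2 + 2)/(2*b**2)
             - sp.pi*a*M**2*(lam**2 + 5)/b**3)      # eq. (29)
alpha_re  = (2*M*(lam**2 + 2)/b + 4*M*a/b**2
             + 3*sp.pi*M**2*(lam**2 + 2)/(2*b**2)
             - 3*sp.pi*a*M**2*(lam**2 + 1)/b**3)    # eq. (30)

# rotation-dependent split between the two rays, proportional to a
diff = sp.simplify(alpha_re - alpha_pro)
assert sp.simplify(diff - (8*M*a/b**2 + 2*sp.pi*a*M**2*(1 - lam**2)/b**3)) == 0
```

The leading split $8Ma/b_{I}^{2}$ is why a larger $a$ makes the two curves of figure 5 separate.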
### V.3 Magnification II
In this subsection, we will analyze the magnification of a Kerr-like wormhole
(blackhole). As mentioned in the previous discussions, we will analyze the
magnifications of Kerr-like wormhole (blackhole) in various cases: $(a).$ The
contribution of the mass part is much larger than the angular momentum part;
$(b).$ The contribution of the angular momentum part is much larger than the
mass part; $(c).$ These two parts are comparable.
#### V.3.1 $M\gg a$
In this case, a Kerr-like wormhole behaves like a quasi-static object. Figure 6 shows the magnification for the Kerr wormhole (blackhole) metric (22), indicating that the trend of the magnification is almost the same for the prograde and retrograde cases. The maximal order of the magnification is around $10^{3}$. The Kerr blackhole corresponds to the blue solid line in figure 6. Our numerical results show that the peak appears as $b_{I}$ increases. Thus, one can find the difference between the prograde and retrograde cases via the occurrence of the peak at different scales of $b_{I}$. Here, we should emphasize that there is only one peak for the Kerr wormhole or Kerr blackhole in the weak field region. To distinguish the Kerr-like wormhole from the Kerr blackhole, the value of $\lambda$ should preferably be larger than $0.1$. Another notable feature is that the peak occurs between $5~{}\rm kpc$ and $6~{}\rm kpc$, which means that the impact parameter is quite large compared with the radius of the throat. For other scales of $b_{I}$, there are trivially no magnification effects.
Figure 6: This plot shows the magnification of Kerr wormhole (blackhole)
metric (22) including the prograde case and retrograde case. We have defined
$\mu\equiv\mu(M,b_{I},\lambda,a)$. The upper panel shows the magnification of
the prograde case and the lower panel shows the retrograde case. All units are unified, and the range $5.2~{}\rm kpc<b_{I}<6~{}\rm kpc$ ensures the weak field approximation. The maximal value of the magnification is around $4000$.
#### V.3.2 $M\ll a$
In this case, we investigate the microlensing effects for $M\ll a$, which means the Kerr wormhole (blackhole) has a large angular momentum. Compared with figure 6, figure 7 indicates that the peak appears at a smaller scale of $b_{I}$, and its order is also smaller (around $300$) than for $M\gg a$. In this case, it is difficult to distinguish the Kerr-like wormhole from the Kerr blackhole, since the three curves almost overlap with each other; the blue solid line corresponds to the Kerr blackhole. However, we can still distinguish $M\gg a$ from $M\ll a$, since the order of the peak of $\mu$ is different.
Figure 7: This plot shows the magnification of Kerr wormhole (blackhole)
metric (22) including the prograde case and retrograde case. We have defined
$\mu\equiv\mu(M,b_{I},\lambda,a)$. The upper panel shows the magnification of
the prograde case and the lower panel shows the retrograde case. All units are unified, and the range $0.1~{}\rm kpc<b_{I}<3~{}\rm kpc$ ensures the weak field approximation. The maximal order of the magnification is around $10^{2}$.
#### V.3.3 $M\approx a$
In this subsection, we will investigate the microlensing effects of metric
(22) in which the contribution from the mass is comparable with the angular
momentum part. First, we observe that the second and third terms of the deflection angle (29) contribute comparably if we take $a$ to be at most ten times larger than $M$.
Figure 8: This plot shows the magnification of Kerr wormhole (blackhole)
metric (22) including the prograde case and retrograde case. We have defined
$\mu\equiv\mu(M,b_{I},\lambda,a)$. The upper panel shows the magnification of
the prograde case and the lower panel shows the retrograde case. All units are unified, and the range $0.1~{}\rm kpc<b_{I}<2~{}\rm kpc$ ensures the weak field approximation. The maximal order of the magnification is around $10^{2}$.
Figure 8 shows that the magnification clearly distinguishes these two cases. The most essential difference is the multiple peaks of the prograde case in the upper panel of figure 8. For the Kerr blackhole in particular, there are three peaks in the prograde case, of which the first is less than unity and the second is around $10$. For the Kerr-like wormhole (yellow and green solid lines), the first peak is too tiny to be detected, and the second peak of the yellow and green lines is one order of magnitude smaller than in the Kerr blackhole case; confirming it would require more accurate observations. For the yellow and green solid lines, the first peak is suppressed by enhancing the value of $M$, but it is not very sensitive to $\lambda$.
The lower panel shows the retrograde case, in which one cannot find any multiple peaks. The difference between the Kerr blackhole (blue solid line) and the Kerr-like wormhole is that the peak is enhanced by increasing the value of $\lambda$, and does not depend strongly on $M$. The maximal values of these two cases both reach around $200-250$. Thus, if a detected magnification has multiple peaks as shown in figure 8, one possibility is the prograde case of a Kerr-like wormhole (blackhole).
## VI V. RN wormhole (blackhole)
In this section, we will investigate the microlensing effects of RN wormhole
(blackhole).
### VI.1 RN wormhole
The metric of the RN wormhole combines the typical spherically symmetric wormhole structure with the RN blackhole structure; its metric was found in Kim:2001ri ,
$ds^{2}=-\bigg{(}1+\frac{Q^{2}}{r^{2}}\bigg{)}dt^{2}+\bigg{(}1-\frac{r_{0}^{2}}{r^{2}}+\frac{Q^{2}}{r^{2}}\bigg{)}^{-1}dr^{2}+r^{2}d\Omega^{2},$ (31)
(31)
where $Q$ is the charge and $r_{0}$ is the radius of the throat; we only
consider the special case of RN-type spacetime with shape function $b(r)$
satisfying $\frac{b(r)}{r}=\frac{r_{0}^{2}}{r^{2}}$. For $Q=0$, the metric
becomes the Ellis wormhole, while $b(r)=r_{0}^{2}/r=0$ nicely recovers the
geometry of the RN blackhole. Thus, a non-vanishing $r_{0}$ signals that the
geometry changes dramatically (from blackhole to wormhole). Here, we utilize
$r_{0}$ and $Q^{2}$ as the two essential parameters for simulating the
magnification. Armed with the GBT, we obtain the Gaussian curvature up to
second order,
$\mathcal{K}=\frac{3Q^{2}-r_{0}^{2}}{r^{4}}-\frac{4r_{0}^{2}Q^{2}}{r^{6}}+\mathcal{O}(Q^{4},r_{0}^{4}),$
(32)
Then, we can obtain its corresponding deflection angle as follows,
$\alpha=\frac{r_{0}^{2}\pi}{4b_{I}^{2}}-\frac{3\pi
Q^{2}}{4b_{I}^{2}}+\frac{3r_{0}^{2}\pi Q^{2}}{8b_{I}^{4}}.$ (33)
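For completeness, the step from Eq. (32) to Eq. (33) can be sketched explicitly: within the GBT, the deflection angle at this order follows from integrating the Gaussian curvature over the region swept by the straight-line trajectory $r=b_{I}/\sin\varphi$, with the flat surface measure $dS\approx r\,dr\,d\varphi$,

$\alpha=-\int_{0}^{\pi}\int_{b_{I}/\sin\varphi}^{\infty}\mathcal{K}\,r\,dr\,d\varphi=-\int_{0}^{\pi}\bigg{[}\frac{(3Q^{2}-r_{0}^{2})\sin^{2}\varphi}{2b_{I}^{2}}-\frac{r_{0}^{2}Q^{2}\sin^{4}\varphi}{b_{I}^{4}}\bigg{]}d\varphi,$

and using $\int_{0}^{\pi}\sin^{2}\varphi\,d\varphi=\pi/2$ and $\int_{0}^{\pi}\sin^{4}\varphi\,d\varphi=3\pi/8$ reproduces Eq. (33) term by term.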
Having obtained the deflection angle to second order, we can simulate the
magnification of the RN wormhole (blackhole).
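As a quick numerical check before turning to the magnification, Eq. (33) can be evaluated directly; the following minimal sketch (the function name and sample values are illustrative and not part of the simulations reported here, with all lengths taken in the same units):

```python
import math

def deflection_angle(b, r0, Q):
    """Second-order deflection angle of the RN wormhole (blackhole), Eq. (33).

    b  : impact parameter b_I
    r0 : radius of the throat
    Q  : charge
    All lengths are assumed to be in the same units.
    """
    return (math.pi * r0**2 / (4 * b**2)
            - 3 * math.pi * Q**2 / (4 * b**2)
            + 3 * math.pi * r0**2 * Q**2 / (8 * b**4))

# For Q = 0 the angle reduces to pi r0^2 / (4 b^2)
print(deflection_angle(2.0, 1.0, 0.0))  # pi/16
```

The sign of the $Q^{2}$ term makes explicit why the charge shrinks the image: for $r_{0}\ll Q$ the leading contribution to $\alpha$ is negative.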
### VI.2 Deflection angle III
In the following investigation, we again study three cases: $(a)$ the
contribution of the $r_{0}$ part is much larger than that of $Q$; $(b)$ the
contribution of the $Q$ part is much larger than that of $r_{0}$; $(c)$ the
$r_{0}$ part is comparable with the $Q$ part. The magnification is parametrized
by $\mu\equiv\mu(r_{0},b_{I},Q)$.
#### VI.2.1 $r_{0}\ll Q$
As $r_{0}\ll Q$, the geometry almost recovers the RN blackhole. Figure 9 shows
the magnification of metric (31); there are no magnification effects for
$b_{I}>1.5~{}\rm kpc$. Differently from the usual case, the image of the RN
wormhole (blackhole) is shrunk when $b_{I}$ is less than $1~{}\rm kpc$. For the
pure blackhole case (the blue solid line), the shrinking effect is milder
compared to the RN wormhole case (green and yellow solid lines).
Figure 9: This plot shows the magnification of metric (31). The range of
$b_{I}$ is from $0.1~{}\rm kpc$ to $5.8~{}\rm kpc$, keeping the weak field
approximation. The blue line corresponds to the RN blackhole. The yellow and
green solid lines correspond to $Q=0.05$ and $Q=0.01$, respectively.
#### VI.2.2 $r_{0}\gg Q$
Figure 10: This plot shows the magnification of metric (31). The range of
$b_{I}$ is from $0.1~{}\rm kpc$ to $2.5~{}\rm kpc$, keeping the weak field
approximation. We have set $Q=10^{-7}$. We gradually reduce the value of
$r_{0}$ from $0.1$ to $0.02$, corresponding to the blue, yellow, and green
lines, respectively. The peaks of these cases range from $80$ to $230$.
In this case, the geometry corresponds to the Ellis wormhole. Figure 10
indicates that the magnification is dramatically different for $r_{0}\gg Q$,
since peak values appear at specific scales (various values of $b_{I}$). In our
numerical simulations, we find that the peak magnification of the RN wormhole
becomes larger and larger as $r_{0}$ is enhanced at fixed $Q$. The precise
physical meaning is that the magnification effects are enhanced by increasing
the radius of the throat when $r_{0}\gg Q$.
#### VI.2.3 $Q\approx r_{0}$
In this case, we investigate the magnification effects of the RN wormhole
(blackhole) when the contribution of the charge is comparable with that of the
throat radius $r_{0}$.
Figure 11: This plot shows the magnification of metric (31). The range of
$b_{I}$ is from $0.1~{}\rm kpc$ to $2.5~{}\rm kpc$ keeping the weak field
approximation. We have set $Q=10^{-7}$. We gradually reduce the values of
$r_{0}$ from $0.1$ to $0.02$ corresponding to the blue, yellow, and green
lines, respectively.
In figure 11, we can clearly see that the magnification approaches unity for
$b_{I}>1.5~{}\rm kpc$. Our numerical results also indicate that the shrinking
effects are enhanced by increasing the values of $r_{0}$ and $Q$, in the sense
that the spacetime geometry becomes more and more trivial as the contribution
of matter decreases.
## VII Conclusion
We have explored the microlensing effects of wormholes with a relatively strong
presence of gravity. Although we still work in the weak field approximation, we
investigate the deflection angle at second order. Three typical wormholes
(blackholes) are investigated: the Schwarzschild WH/BH, the Kerr-like WH/BH,
and the RN WH/BH.
We find that it is possible to distinguish the Kerr-like wormhole from its
corresponding blackhole. More specifically, in the prograde case, multiple
peaks appear when the contributions from mass and angular momentum are
comparable. As shown in figure 8, the magnitude of the main peak for a Kerr BH
is two orders of magnitude higher than the first gentle peak, while for the
wormhole case the main peak is three orders of magnitude higher than the first
gentle peak. The other cases are hard to distinguish, since the magnifications
of wormholes and their corresponding blackholes behave similarly, differing
only in magnitude.
Our work is a preliminary check on this topic, and many interesting ideas
remain to be studied in the future. Firstly, to fully address the issue, we
need to go to the strong field limit; in the strong field case, it is possible
that the non-trivial topology influences not only the deflection angle but also
the lens equations, so the techniques of this paper need to be improved.
Secondly, we only studied selected models, and more examples are needed to
strengthen our conclusions. For example, the no-hair theorem Bekenstein:1971hc
; Bekenstein:1995un tells us that blackholes are uniquely determined by their
mass, charge, and angular momentum; it would be interesting to study wormholes
and blackholes sharing these same three parameters. Finally, the GBT may fail
for certain modified gravity theories, such as the gravitational theory with a
modification of the Gauss-Bonnet term Glavan:2019inb ; can the GBT formalism be
improved in this situation?
## Acknowledgements
We appreciate the stimulating discussions with Bichu Li and Yuhang Zhu. LH and KG
are funded by NSFC grant NO. 12165009. MZ is funded by grant NO.
UMO-2018/30/Q/ST9/00015 from the National Science Center, Poland.
## References
* (1) M. S. Morris and K. S. Thorne, Am. J. Phys. 56 (1988), 395-412 doi:10.1119/1.15620
* (2) A. Einstein and N. Rosen, Phys. Rev. 48 (1935), 73-77 doi:10.1103/PhysRev.48.73
* (3) R. W. Fuller and J. A. Wheeler, Phys. Rev. 128 (1962), 919-929 doi:10.1103/PhysRev.128.919
* (4) K. A. Bronnikov, Acta Phys. Polon. B 4 (1973), 251-266
* (5) H. G. Ellis, J. Math. Phys. 14 (1973), 104-118 doi:10.1063/1.1666161
* (6) M. S. Morris, K. S. Thorne and U. Yurtsever, Phys. Rev. Lett. 61 (1988), 1446-1449 doi:10.1103/PhysRevLett.61.1446
* (7) D. Hochberg and M. Visser, Phys. Rev. Lett. 81 (1998), 746-749 doi:10.1103/PhysRevLett.81.746 [arXiv:gr-qc/9802048 [gr-qc]].
* (8) D. Hochberg and M. Visser, Phys. Rev. D 58 (1998), 044021 doi:10.1103/PhysRevD.58.044021 [arXiv:gr-qc/9802046 [gr-qc]].
* (9) R. Narayan and M. Bartelmann, [arXiv:astro-ph/9606001 [astro-ph]].
* (10) M. Bartelmann and P. Schneider, Phys. Rept. 340 (2001), 291-472 doi:10.1016/S0370-1573(00)00082-X [arXiv:astro-ph/9912508 [astro-ph]].
* (11) V. Perlick, Living Rev. Rel. 7 (2004), 9
* (12) M. Safonova and D. F. Torres, Mod. Phys. Lett. A 17 (2002), 1685-1692 doi:10.1142/S0217732302008083 [arXiv:gr-qc/0208039 [gr-qc]].
* (13) J. M. Tejeiro S. and E. A. Larranaga R., Rom. J. Phys. 57 (2012), 736-747 [arXiv:gr-qc/0505054 [gr-qc]].
* (14) K. K. Nandi, Y. Z. Zhang and A. V. Zakharov, Phys. Rev. D 74 (2006), 024020 doi:10.1103/PhysRevD.74.024020 [arXiv:gr-qc/0602062 [gr-qc]].
* (15) F. Abe, Astrophys. J. 725 (2010), 787-793 doi:10.1088/0004-637X/725/1/787 [arXiv:1009.6084 [astro-ph.CO]].
* (16) Y. Toki, T. Kitamura, H. Asada and F. Abe, Astrophys. J. 740 (2011), 121 doi:10.1088/0004-637X/740/2/121 [arXiv:1107.5374 [astro-ph.CO]].
* (17) C. M. Yoo, T. Harada and N. Tsukamoto, Phys. Rev. D 87 (2013), 084045 doi:10.1103/PhysRevD.87.084045 [arXiv:1302.7170 [gr-qc]].
* (18) R. Takahashi and H. Asada, Astrophys. J. Lett. 768 (2013), L16 doi:10.1088/2041-8205/768/1/L16 [arXiv:1303.1301 [astro-ph.CO]].
* (19) K. Izumi, C. Hagiwara, K. Nakajima, T. Kitamura and H. Asada, Phys. Rev. D 88 (2013), 024049 doi:10.1103/PhysRevD.88.024049 [arXiv:1305.5037 [gr-qc]].
* (20) P. K. F. Kuhfittig, Eur. Phys. J. C 74 (2014) no.99, 2818 doi:10.1140/epjc/s10052-014-2818-2 [arXiv:1311.2274 [gr-qc]].
* (21) K. Nakajima, K. Izumi and H. Asada, Phys. Rev. D 90 (2014) no.8, 084026 doi:10.1103/PhysRevD.90.084026 [arXiv:1404.2720 [gr-qc]].
* (22) N. Tsukamoto and T. Harada, Phys. Rev. D 95 (2017) no.2, 024030 doi:10.1103/PhysRevD.95.024030 [arXiv:1607.01120 [gr-qc]].
* (23) R. Shaikh and S. Kar, Phys. Rev. D 96 (2017) no.4, 044037 doi:10.1103/PhysRevD.96.044037 [arXiv:1705.11008 [gr-qc]].
* (24) H. Asada, Mod. Phys. Lett. A 32 (2017) no.34, 1730031 doi:10.1142/S0217732317300312 [arXiv:1711.01730 [gr-qc]].
* (25) R. Shaikh, P. Banerjee, S. Paul and T. Sarkar, Phys. Lett. B 789 (2019), 270-275 [erratum: Phys. Lett. B 791 (2019), 422-423] doi:10.1016/j.physletb.2018.12.030 [arXiv:1811.08245 [gr-qc]].
* (26) R. Shaikh, P. Banerjee, S. Paul and T. Sarkar, Phys. Rev. D 99 (2019) no.10, 104040 doi:10.1103/PhysRevD.99.104040 [arXiv:1903.08211 [gr-qc]].
* (27) W. Javed, R. Babar and A. Övgün, Phys. Rev. D 99 (2019) no.8, 084012 doi:10.1103/PhysRevD.99.084012 [arXiv:1903.11657 [gr-qc]].
* (28) R. Shaikh, P. Banerjee, S. Paul and T. Sarkar, JCAP 07 (2019), 028 doi:10.1088/1475-7516/2019/07/028 [arXiv:1905.06932 [gr-qc]].
* (29) D. C. Dai and D. Stojkovic, Phys. Rev. D 100 (2019) no.8, 083513 doi:10.1103/PhysRevD.100.083513 [arXiv:1910.00429 [gr-qc]].
* (30) J. H. Simonetti, M. J. Kavic, D. Minic, D. Stojkovic and D. C. Dai, Phys. Rev. D 104 (2021) no.8, L081502 doi:10.1103/PhysRevD.104.L081502 [arXiv:2007.12184 [gr-qc]].
* (31) C. Bambi and D. Stojkovic, Universe 7 (2021) no.5, 136 doi:10.3390/universe7050136 [arXiv:2105.00881 [gr-qc]].
* (32) N. Godani and G. C. Samanta, Annals Phys. 429 (2021), 168460 doi:10.1016/j.aop.2021.168460 [arXiv:2105.08517 [gr-qc]].
* (33) L. H. Liu, M. Zhu, W. Luo, Y. F. Cai and Y. Wang, [arXiv:2207.05406 [gr-qc]].
* (34) A. O. Petters and M. C. Werner, Gen. Rel. Grav. 42 (2010), 2011-2046 doi:10.1007/s10714-010-0968-6 [arXiv:0912.0490 [astro-ph.GA]].
* (35) V. Cardoso and P. Pani, Living Rev. Rel. 22 (2019) no.1, 4 doi:10.1007/s41114-019-0020-4 [arXiv:1904.05363 [gr-qc]].
* (36) T. Damour and S. N. Solodukhin, Phys. Rev. D 76 (2007), 024016 doi:10.1103/PhysRevD.76.024016 [arXiv:0704.2667 [gr-qc]].
* (37) N. Tsukamoto, T. Harada and K. Yajima, Phys. Rev. D 86 (2012), 104062 doi:10.1103/PhysRevD.86.104062 [arXiv:1207.0047 [gr-qc]].
* (38) M. Amir, K. Jusufi, A. Banerjee and S. Hansraj, Class. Quant. Grav. 36 (2019) no.21, 215007 doi:10.1088/1361-6382/ab42be [arXiv:1806.07782 [gr-qc]].
* (39) S. Kasuya and M. Kobayashi, Phys. Rev. D 103 (2021) no.10, 104050 doi:10.1103/PhysRevD.103.104050 [arXiv:2103.13086 [gr-qc]].
* (40) R. Shaikh and P. S. Joshi, JCAP 10 (2019), 064 doi:10.1088/1475-7516/2019/10/064 [arXiv:1909.10322 [gr-qc]].
* (41) R. K. Karimov, R. N. Izmailov, A. A. Potapov and K. K. Nandi, Eur. Phys. J. C 80 (2020) no.12, 1138 doi:10.1140/epjc/s10052-020-08717-x [arXiv:2012.13564 [gr-qc]].
* (42) K. Jusufi, A. Banerjee, G. Gyulchev and M. Amir, Eur. Phys. J. C 79 (2019) no.1, 28 doi:10.1140/epjc/s10052-019-6557-2 [arXiv:1808.02751 [gr-qc]].
* (43) R. A. Konoplya and A. Zhidenko, JCAP 12 (2016), 043 doi:10.1088/1475-7516/2016/12/043 [arXiv:1606.00517 [gr-qc]].
* (44) N. Tsukamoto, Phys. Rev. D 94, no.12, 124001 (2016) doi:10.1103/PhysRevD.94.124001 [arXiv:1607.07022 [gr-qc]].
* (45) N. Tsukamoto, Phys. Rev. D 95, no.6, 064035 (2017) doi:10.1103/PhysRevD.95.064035 [arXiv:1612.08251 [gr-qc]].
* (46) G. W. Gibbons and M. C. Werner, Class. Quant. Grav. 25 (2008), 235009 doi:10.1088/0264-9381/25/23/235009 [arXiv:0807.0854 [gr-qc]].
* (47) G. W. Gibbons, C. A. R. Herdeiro, C. M. Warnick and M. C. Werner, Phys. Rev. D 79 (2009), 044022 doi:10.1103/PhysRevD.79.044022 [arXiv:0811.2877 [gr-qc]].
* (48) M. C. Werner, Gen. Rel. Grav. 44 (2012), 3047-3057 doi:10.1007/s10714-012-1458-9 [arXiv:1205.3876 [gr-qc]].
* (49) K. Jusufi, A. Ovgün and A. Banerjee, Phys. Rev. D 96 (2017) no.8, 084036 doi:10.1103/PhysRevD.96.084036 [arXiv:1707.01416 [gr-qc]].
* (50) K. Jusufi and A. Övgün, Phys. Rev. D 97 (2018) no.2, 024042 doi:10.1103/PhysRevD.97.024042 [arXiv:1708.06725 [gr-qc]].
* (51) K. Jusufi, N. Sarkar, F. Rahaman, A. Banerjee and S. Hansraj, Eur. Phys. J. C 78 (2018) no.4, 349 doi:10.1140/epjc/s10052-018-5823-z [arXiv:1712.10175 [gr-qc]].
* (52) A. Övgün, Phys. Rev. D 98 (2018) no.4, 044033 doi:10.1103/PhysRevD.98.044033 [arXiv:1805.06296 [gr-qc]].
* (53) T. Ono, A. Ishihara and H. Asada, Phys. Rev. D 98 (2018) no.4, 044047 doi:10.1103/PhysRevD.98.044047 [arXiv:1806.05360 [gr-qc]].
* (54) K. Jusufi, A. Övgün, A. Banerjee and ·. I. Sakallı, Eur. Phys. J. Plus 134 (2019) no.9, 428 doi:10.1140/epjp/i2019-12792-9 [arXiv:1802.07680 [gr-qc]].
* (55) A. Övgün, G. Gyulchev and K. Jusufi, Annals Phys. 406 (2019), 152-172 doi:10.1016/j.aop.2019.04.007 [arXiv:1806.03719 [gr-qc]].
* (56) A. Övgün, Turk. J. Phys. 44 (2020) no.5, 465-471 doi:10.20944/preprints202008.0512.v1 [arXiv:2011.04423 [gr-qc]].
* (57) K. Nakajima and H. Asada, Phys. Rev. D 85 (2012), 107501 doi:10.1103/PhysRevD.85.107501 [arXiv:1204.3710 [gr-qc]].
* (58) P. Bueno, P. A. Cano, F. Goelen, T. Hertog and B. Vercnocke, Phys. Rev. D 97 (2018) no.2, 024040 doi:10.1103/PhysRevD.97.024040 [arXiv:1711.00391 [gr-qc]].
* (59) T. Ono, A. Ishihara and H. Asada, Phys. Rev. D 96 (2017) no.10, 104037 doi:10.1103/PhysRevD.96.104037 [arXiv:1704.05615 [gr-qc]].
* (60) S. W. Kim and H. Lee, Phys. Rev. D 63 (2001), 064014 doi:10.1103/PhysRevD.63.064014 [arXiv:gr-qc/0102077 [gr-qc]].
* (61) J. D. Bekenstein, Phys. Rev. D 5 (1972), 1239-1246 doi:10.1103/PhysRevD.5.1239
* (62) J. D. Bekenstein, Phys. Rev. D 51 (1995) no.12, R6608 doi:10.1103/PhysRevD.51.R6608
* (63) D. Glavan and C. Lin, Phys. Rev. Lett. 124, no.8, 081301 (2020) doi:10.1103/PhysRevLett.124.081301 [arXiv:1905.03601 [gr-qc]].
# Operationalizing Legislative Bodies:
A Methodological and Empirical Perspective
Carolina Luque, Universidad Ean<EMAIL_ADDRESS>
Juan Sosa, Universidad Nacional<EMAIL_ADDRESS>
###### Abstract
This manuscript extensively reviews applications, extensions, and models
derived from the Bayesian ideal point estimator. We primarily focus our
attention on studies conducted in the United States as well as Latin America.
First, we provide a detailed description of the Bayesian ideal point
estimator. Next, we propose a new taxonomy to synthesize and frame technical
developments and applications associated with the estimator in the context of
North American and Latin American governing bodies. The literature available
in Latin America allows us to conclude that few legislatures in the region
have been analyzed using the methodology under discussion. Also, we highlight
those parliaments of Latin America embedded in democratic presidential systems
as novel scenarios for operationalizing the electoral behavior of legislative
bodies through nominal voting data. Our findings show some alternatives for
future research. Finally, to fix ideas and illustrate the capabilities of the
Bayesian ideal point estimator, we present an application involving the
Colombian House of Representatives 2010–2014.
Keywords: Spatial voting models, Bayesian ideal point estimator, roll call
data, parliamentary electoral behavior, Markov Chain Monte Carlo methods.
## 1 Introduction
The quadratic Bayesian spatial voting model (also known as the Bayesian ideal
point estimator or IDEAL; Jackman, 2001, Clinton et al., 2004) is a valuable
method for identifying latent (unobserved) traits of political actors from
nominal voting data. This model is compatible with the nature of scientific
research in the Social Sciences (Jackman,, 2004, 2009), and also, it is
flexible and powerful enough for recognizing political preferences of voters
(Carlin and Louis,, 2008; Clinton and Jackman,, 2009). Moreover, some authors
point out that it is an essential tool for providing empirical evidence on
different voting phenomena in modern Political Science (e.g., Clinton et al.,,
2004; Yu and Rodríguez,, 2021; Moser et al.,, 2021).
IDEAL and its derivatives have been widely used to analyze the electoral
behavior of the United States Parliament. However, there are a limited number
of applications in other contexts, such as Latin America. Therefore, there is
a legitimate opportunity to characterize electoral phenomena of legislatures
in Latin America (e.g., Tsai,, 2020) by taking IDEAL as a reference framework
due to its extensive usage in the North American Congress. We share this
position with some authors, who see the theoretical and methodological
developments of the legislative voting behavior of the United States Congress
as a basis for advancing the quantitative understanding of parliamentary
electoral conduct in scenarios with limited empirical evidence (Gamm and
Huber,, 2002). Furthermore, the available literature shows that this path
inspires adaptations and applications of IDEAL in different parliamentary
electoral settings (e.g., Zucco,, 2013; McDonnell,, 2017; Tsai,, 2020; Ribeiro
et al.,, 2021).
This manuscript extensively reviews the available literature on models,
extensions, and applications derived from IDEAL in North American and Latin
American legislative settings. Our goal is to provide a comprehensive and up-
to-date overview of statistical foundations, applications, and developments on
the topic under consideration from 2001-2022. We present our findings by
proposing a brand new taxonomy on the matter, considering the parliamentary
voting phenomena underlying to use of the estimator. From a quantitative point
of view, we focus on substantive hypotheses and methodological aspects that
allow us to analyze legislative electoral behavior through roll call data.
To the best of our knowledge, no previous research discusses and synthesizes
methodological aspects and applications of the estimator in the framework of
parliamentary voting in both the United States and Latin America. Our work is
relevant to the scientific community for several reasons. First, it shows the
fusion between political theory and statistical and computational principles
as a fundamental element in implementing a spatial voting model in a
particular legislative reality. Second, it reveals the limitations of the
standard Bayesian ideal point estimator and shows them as an opportunity to
promote methodological and applied research in the context of governing
bodies. Finally, it provides evidence of emerging scientific research niches
in different legislative spaces, mainly in Latin America, where legislatures
rooted in presidential democratic systems have dynamics different from those of
the North American parliament (e.g., Pereira and Mueller, 2004a, ; Jones and
Hwang, 2005a, ; Zucco,, 2013; Ribeiro et al.,, 2021).
This document is structured as follows. Section 2 presents spatial voting model
terminology and the relevance of the Bayesian ideal point estimator through a
historical review of studies in legislative committees. Section 3 presents
theoretical references related to the Bayesian ideal point estimator’s
specification, identification, and implementation. Section 4 exhibits
applications, adaptations, and extensions of the ideal point estimator in the
context of North and Latin America. Section 5 illustrates the case of the
Colombian House of Representatives 2010–2014. Finally, Section 6 synthesizes
our main contributions and discusses some alternatives for future research.
## 2 Voting models: Terminology and developments
The collective election is a phenomenon of interest in the political context
because voters’ preferences are not homogeneous (Krehbiel,, 1988; Clinton,,
2012). For example, in the case of deliberative bodies such as congress,
legislators typically show different judgments about public policies under
debate, which unequivocally leads to a collective conflict of perspectives.
Spatial voting models allow us to characterize this phenomenon, assuming
that different subjects (legislators, judges, citizens, among others) adopt
dissimilar preferences regarding choice alternatives (policies, proposals,
motions, among others).
Most of the theoretical work on these models proposes normal or quadratic
utility functions to characterize the preferences of political actors (to
measure characteristics essential for understanding the causes and
consequences of politics, Clinton,, 2012; Carroll et al.,, 2013). Such an
assumption has its roots in the fields of Psychology and Mathematics. On the
one hand, studies in psychology indicate that the response function of
individuals when they judge similarities between stimuli or express their
preferences is approximately normal (Poole,, 2005). On the other hand, studies
in Mathematics show that the normal distribution can be approximated by a
quadratic expression through a second-order Taylor polynomial (Carroll et al.,
2009a, ). For an additional in-depth discussion about other utility structures
in spatial voting models, see Carroll et al., (2013) and Tiemann, (2019).
Assuming a quadratic utility function in a spatial voting model is not only a
convenient mathematical abstraction but also a standard formulation that
describes the response of a subject facing a choice. Thus, each individual has
a utility function centered around an ideal point (optimal alternative, see
Clinton,, 2012) that represents his/her preferences. Hence, the utility
decreases as the distance (metric defined on the reference space) between
his/her ideal point and the option of choice increases. In this way, the
subjects vote for the alternatives closest to their ideal point according to a
specific stochastic mechanism.
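A useful consequence of the quadratic utility assumption, worth making explicit, is that the utility difference between the two alternatives is linear in the ideal point, which is what reduces the spatial model to an item-response form. A minimal numerical check of this identity (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2                    # dimension of the political space
x = rng.normal(size=d)   # a voter's ideal point
a = rng.normal(size=d)   # position of the "yea" alternative
b = rng.normal(size=d)   # position of the "nay" alternative (status quo)

def utility(x, z):
    # quadratic spatial utility: larger when the alternative z is closer to x
    return -np.sum((x - z) ** 2)

# Direct utility difference between the two alternatives...
diff = utility(x, a) - utility(x, b)
# ...equals a function that is linear in the ideal point x:
# U(x, a) - U(x, b) = 2 (a - b)' x + (||b||^2 - ||a||^2)
linear = 2 * (a - b) @ x + (b @ b - a @ a)
assert np.isclose(diff, linear)
```

The linear coefficients $2(a-b)$ and $\|b\|^{2}-\|a\|^{2}$ play the role of the item parameters in the models discussed below.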
A spatial voting model fully characterizes the behavior of individuals facing
an electoral process. Additionally, it allows visualizing the political space
(reference space to represent the ideal points of individuals) in specific
contexts such as that of a legislature. The proximity of ideal points in the
political space is evidence of similarities in legislators’ voting records. It
indicates the presence of latent traits (e.g., ideological preferences) that
underlie their voting behavior (Poole,, 2005). Such an abstraction is
advantageous because it allows for studying several issues, including the
evolution of political parties, strategic voting, location of minorities,
electoral interests and incentives, and relations between institutions.
However, for this representation to make sense and be interpretable in a
political framework, besides the statistical foundations of the model, it
requires a deep understanding of the political system of the institutions
under analysis (Poole and Rosenthal,, 1985).
### 2.1 Formalizing electoral conduct of political actors
Spatial voting models are involved in a quite extensive literature. The
foundational work of Black et al., (1958) and Downs, (1957) provide important
advances for developing voting theory. Nevertheless, such work does not
provide a formal mathematical structure to test political theories (Poole and
Rosenthal,, 1985). The first spatial voting models that rigorously
operationalize electoral behavior appear almost a decade later, by identifying
the position of the political space that candidates must adopt to win an
electoral contest in a majority election (Davis and Hinich,, 1965; Davis et
al.,, 1970; Hinich and Ordeshook,, 1970). These investigations formally
introduce the concept of the utility function, pointing out its relevance in
penalizing the discrepancy between the position of a voter and the voting
alternatives as well as electorate preferences. However, these approaches do
not consider a random component in voting procedures (Poole,, 2005).
The interest of researchers in proposing a rigorous mathematical structure
allowing them to design measurement instruments to contrast political theory
favored the development of several studies in subsequent decades. For example,
in the late 1970s, McKelvey et al., (1978) investigated electoral behavior in
committees through cooperative games (Aumann,, 1964), which carries out the
analysis in a new direction, but whose findings are not sufficient to support
general patterns of strategic behavior (Arrow,, 1990). Subsequently, based on
agenda theory, Romer and Rosenthal, (1978) and Shepsle, (1979) propose to
explain the role that structures and procedures play in shaping the results of
legislative voting. These authors provide initial results on the influence of
the order of deliberation of motions on legislative voting behavior. At about
the same time, Cahoon et al., (1978) and Wolters, (1978) use multidimensional
scaling methods to represent the responses of the voting process (dichotomous
or polytomous) as low-dimensional continuous variables in Euclidean space.
Subsequently, other studies investigate fundamental aspects of probabilistic
voting behavior (Manski,, 1977; Coughlin and Nitzan,, 1981) and
predictive/latent dimensions (Enelow and Hinich,, 1984).
The probabilistic theory of electoral behavior considers votes as random
variables whose behavior depends on stochastic utility functions. Such
functions incorporate the distance between the voter’s ideal point and the
voting alternative and consider a random term (McFadden,, 1976; Poole and
Rosenthal,, 1984) reconciling systematic spatial utility differences
(intractable voting patterns) with probabilities to vote in favor of a given
alternative. Typically, random errors are assumed to follow a normal or
logistic distribution, leading to a probit or logit link function,
respectively (Jackman,, 2004; Carroll et al., 2009a, ). These two random
mechanisms lead to similar one-dimensional results with no more than scale
differences (Hahn and Soyer,, 2005; Carroll et al., 2009a, ; Lofland et al.,,
2017; Luque and Sosa,, 2022).
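For concreteness, the probit mechanism just described can be written down in a few lines: in the standard one-dimensional item-response parametrization the systematic utility difference takes the linear form $\beta_{j}x_{i}-\alpha_{j}$, and the vote probability is its normal CDF (the function and parameter names below are illustrative):

```python
import math

def norm_cdf(z):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def vote_probability(x, alpha, beta):
    """Probit probability that a voter with ideal point x supports item j,
    where alpha is the item difficulty and beta its discrimination."""
    return norm_cdf(beta * x - alpha)

# An indifferent voter (systematic utility difference zero) supports with p = 0.5
print(vote_probability(0.0, 0.0, 1.0))  # 0.5
```

Replacing `norm_cdf` with the logistic function gives the logit variant, which, as noted above, yields similar one-dimensional results up to a change of scale.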
Predictive dimension methods frame voters and election objects in the same
low-dimensional space to explain (dis)similarities between voters as a
function of distances between points in the political space (Weisberg and
Rusk,, 1970; Poole and Rosenthal,, 1984). The dimensions of the political space
are interpreted, through the actors’ voting patterns, in terms of issues,
policies, or belief systems (Cahoon et al.,, 1978; Poole and Rosenthal,, 1985;
Hinich and Munger,, 1996; Hinich et al.,, 1997). The number of dimensions of
the political space is a
technical question. However, some authors have pointed out that it is common
to identify low-dimensional spaces in the study of electoral behavior. Poole,
(2005) argues that only a few dimensions are required to capture the structure
of the North American parliamentary voting behavior because legislators
typically vote based on the underlying policies of the political group they
represent (left-right, Republican-Democrat). Furthermore, perceptual studies
in psychology support the idea of a small number of dimensions underlying
individual choice behavior, as people have a limited ability to perceive
different objects (Miller,, 1956) and a tendency to confront a group against
another group (Tajfel,, 1981).
### 2.2 Roll call data
Spatial voting models based on data reduction techniques such as
multidimensional scaling (Cahoon et al.,, 1978), factor analysis (Brazill and
Grofman,, 2002), and principal component analysis (De Leeuw,, 2006) do not
provide a methodological framework for directly modeling the binary responses
of individuals based on the parameters of interest. Hence, these methods allow
practitioners neither to recover voters’ ideal points nor to model the
probability of voting either in favor of or against a given issue (Jackman,,
2001). The interest in explaining the individual behavior of deputies led to
the development of parliamentary voting models in the 1980s (Poole and
Rosenthal,, 1985) focusing on estimating the ideal points of legislative
actors and recovering their latent characteristics (political preferences). In
essence, these are generalized linear models (McCullagh,, 2018) that rely on
iterative methods such as parametric bootstrap and Markov chain Monte Carlo
(e.g., Clinton et al.,, 2004; Martin and Quinn,, 2002; Lewis and Poole,, 2004;
Carroll et al., 2009a, ; Carroll et al., 2009b, ) to estimate and infer
political preferences from roll call data or nominal votes (Poole and
Rosenthal,, 1985, 1987; Poole,, 2005).
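To give a flavor of how the Markov chain Monte Carlo step works in this setting, the sketch below runs a toy random-walk Metropolis sampler for a single legislator's one-dimensional ideal point under a probit likelihood, with the item parameters held fixed and a standard normal prior. It is a didactic simplification, not the sampler used in the cited works, and all names and tuning values are illustrative:

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_posterior(x, votes, alphas, betas):
    """Log posterior of a single ideal point x under a probit likelihood
    and a standard normal prior, with known item parameters."""
    lp = -0.5 * x * x  # N(0, 1) prior
    for y, a, b in zip(votes, alphas, betas):
        p = min(max(norm_cdf(b * x - a), 1e-12), 1.0 - 1e-12)
        lp += math.log(p) if y == 1 else math.log(1.0 - p)
    return lp

def metropolis(votes, alphas, betas, n_iter=5000, step=0.5, seed=1):
    """Random-walk Metropolis over the ideal point; returns all draws."""
    rng = random.Random(seed)
    x = 0.0
    lp = log_posterior(x, votes, alphas, betas)
    draws = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, votes, alphas, betas)
        # accept with probability min(1, posterior ratio)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        draws.append(x)
    return draws
```

In the full estimator the item parameters are unknown and must be sampled jointly with the ideal points, which the works cited above handle with more efficient MCMC schemes.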
By means of nominal votes legislators reveal their positions to other deputies
and the general public (Mayhew,, 1974). This political activity provides data
that allows social scientists to estimate ideal points and retrieve the
political preferences of political actors (Alemán et al.,, 2009). Research
using roll call data to learn about legislative decisions shows that these
data are helpful to calculate correlations between parliamentary members
through factor and cluster analysis methods, which are relevant to identifying
aggregate patterns. However, its use to predict individual voting decisions is
popular as well (e.g., MacRae,, 1952, 1958, 1965; VanDoren,, 1990; Poole,,
2005; Clinton,, 2012; Binding and Stoetzer,, 2022). Most studies on nominal
voting only consider binary data, i.e., data instances consisting of two
alternatives, in favor of or against a motion, known as new alternative and
status quo, respectively (Carroll et al., 2009a, ). Few studies in the
literature consider abstentions (Thurner,, 2000; Poole,, 2005; Rosas et al.,,
2015). On the other hand, recent studies describe the implications and
limitations of using roll call data to test political theory (e.g., Roberts,,
2007; Ainsley et al.,, 2020). Additionally, other research use this type of
data with textual data to analyze legislative bodies from different data
sources (e.g., Lauderdale and Clark,, 2014; Kim et al.,, 2018).
In recent years, the use of roll call data to analyze parliamentary electoral
behavior has increased substantially (e.g., Clinton et al.,, 2004; Hix et
al.,, 2005; Jones and Hwang, 2005a, ; Clinton,, 2012; Zucco,, 2013; Binding
and Stoetzer,, 2022; Seabra and Mesquita,, 2022; Rasmussen,, 2022; Hansford et
al.,, 2022; Grier et al.,, 2022). In particular, the analysis of nominal votes
of the legislative chambers of Latin American countries has become quite
popular (see Table 1). Recent access to these data has made their analysis
novel and relevant to regional legislative phenomena studies. Unfortunately,
roll call data are not widely available in Latin America (Morgenstern,, 2003).
For example, Zucco, (2013) confirms that only a limited number of roll calls
are available to analyze parliamentary electoral behavior in the legislative chambers
of Uruguay. Unlike the United States, in some countries in the region, roll
call voting is not always the most frequent task (see Jones and Hwang, 2005a,
; Ribeiro et al.,, 2021), and yet, it is becoming a common practice to
facilitate access to this type of data (see McDonnell et al.,, 2019). Even
when roll calls are available, these data are not systematized. Compiling them
is a laborious task involving gathering information from external sources
(local newspapers, websites, among others) (Morgenstern,, 2003; Luque,, 2021).
Country | Period | House | Roll calls available | Roll calls analyzed | Legislators available | Legislators analyzed | Reference
---|---|---|---|---|---|---|---
Argentina | 1989–2001 | Lower | 415 | | | | Morgenstern, (2003)
| 1989–2003 | Lower | 473 | | | | Jones and Hwang, 2005a
| 1993–1995 | Lower | 64 | | 256 | | Rosas and Shomer, (2008)
| 1989–2007 | Lower | 1193 | | | | Jones et al., (2009)
| 2002–2007 | Congress | 110 | 31 | 618 | 356 | Onuki et al., (2009)
| 2008–2009 | Lower | 251 | | | | Alemán et al., (2018)
| 1993–2017 | Lower | | | | | Clerici, (2021)
Brazil | 1991–1998 | Lower | 623 | | | | Morgenstern, (2003)
| 1989–2008 | Congress | | | | | McDonnell, (2017)
| 1989–2010 | Lower | 2332 | 1806 | | | Zucco and Lauderdale, (2011)
| 2003–2006 | Lower | 421 | | 703 | | Tsai, (2020)
Chile | 2002–2006 | Lower | 157 | 36 | 120 | 118 | Onuki et al., (2009)
| 2004–2006 | Upper | 313 | 118 | | 49 | Alemán, (2008)
Colombia | 2006–2015 | Congress | 11600 | | 650 | | Morales, (2021)
| 2010–2014 | Upper | 417 | 417 | 110 | 91 | Luque and Sosa, (2022)
Paraguay | 2003–2012 | Lower | 147 | | 167 | | Ribeiro et al., (2021)
Uruguay | 1985–1994 | Congress | 63 | | | | Morgenstern, (2003)
| 1985–2005 | Upper | 125 | | | | Zucco, (2013)
Table 1: Available literature analyzing roll call data in Latin American
legislatures. Some studies employ selection criteria for choosing either roll
calls or legislators (e.g., Onuki et al., 2009; Zucco and Lauderdale, 2011;
Alemán, 2008; Luque and Sosa, 2022). Recently, the nominal votes of
Argentina, Brazil, and Colombia have become available at
https://votaciones.hcdn.gob.ar/, https://bancodedadoslegislativos.com.br/, and
https://congresovisible.uniandes.edu.co/, respectively.
Morgenstern (2003) notes that some roll call voting records of the
legislative chambers in Brazil, Argentina, Chile, and Uruguay have been
available since the early 1990s. For other countries in the region, these data
were made available later. For example, in the case of Colombia,
parliamentary voting records have been collected since 2006 (Carroll and
Pachón, 2016), and only very recently have they been used to implement
spatial voting models to retrieve the political preferences of deputies (Luque
and Sosa, 2022). In Latin America, the analysis of the lower house (House of
Representatives) is more frequent than that of the upper house (Senate) (see
Table 1). Few studies analyze the legislative behavior of the entire congress
without discriminating by chamber. Under this scenario, we argue that Latin
American legislatures are a young field for the empirical analysis of
parliamentary electoral behavior based on roll call data.
### 2.3 Operationalization of individual electoral behavior
Poole and Rosenthal (1985) provide the building blocks that allow researchers
to estimate ideal points and latent variables in a political space from roll
call data. Two popular models emerge from this foundational work: NOMINATE and
IDEAL (technical details about the latter are available in Section 3). In
principle, both models are used to analyze the behavior of deputies in the
Senate and House of Representatives, mainly in the United States. However,
their application has been extended to non-legislative contexts such as the
United Nations (e.g., Voeten, 2000, 2013; Bailey et al., 2017; Seabra and
Mesquita, 2022) and courtrooms (e.g., Martin and Quinn, 2002; Hansford et
al., 2022).
NOMINATE is a frequentist spatial voting model based on a utility function
with a random component (see Poole and Rosenthal, 2001; Poole, 2005; Carroll
et al., 2009b; Caughey and Schickler, 2016, for more details). IDEAL is the
Bayesian counterpart of NOMINATE (Clinton and Jackman, 2009). The latter also
incorporates random utility functions and extensively uses simulation-based
approaches (e.g., Markov chain Monte Carlo, MCMC) to estimate the legislators'
ideal points (Jackman, 2001; Clinton et al., 2004). Comparative studies
highlight the similarities and differences between these two models, as well
as their advantages and disadvantages in modeling roll call data to retrieve
political preferences from parliamentarians (see Carroll et al., 2009a;
Clinton and Jackman, 2009). Research in this direction highlights the
flexibility of IDEAL for testing substantive hypotheses (Clinton et al.,
2004). Furthermore, IDEAL is a more straightforward estimator than NOMINATE
because it only requires Bayes' theorem for estimation and inference, making
it a more generalizable method (Clinton and Jackman, 2009). Also, IDEAL is an
estimator that corresponds directly with the nature of scientific inquiry in
the social sciences (e.g., Jackman, 2004, 2009).
Thus, IDEAL has been extended in various ways. For example, Martin and Quinn
(2002) postulate a dynamic version that allows ideal points to change
systematically over time. Quinn (2004) extends the model to handle non-binary
data (including continuous data). Treier and Jackman (2008) formulate a
variant of the model to measure the level of democracy in countries, while
Rosas et al. (2015) do the same to analyze voting behavior under informative
abstentions. In another direction, Yu (2020) and Yu and Rodríguez (2021)
develop spatial voting models based on non-Euclidean metrics. There are also
applications of the model to study electoral behavior in legislatures other
than the North American one. For example, Zucco (2013) analyzes
executive-legislative relations in Uruguay between 1985 and 2005 in the
context of presidential coalitions. Furthermore, Tsai (2020) studies the
Brazilian Chamber of Deputies between 2003 and 2006, highlighting the effect
of political incentives on electoral behavior.
The literature makes it explicit that IDEAL is a dominant methodological and
theoretical instrument in modern Political Science (Yu, 2020; Moser et al.,
2021). Moreover, all the works presented above show that the analysis of the
political preferences of legislators from nominal votes is an area of active
research. Therefore, IDEAL constitutes a fundamental model worth
investigating.
## 3 IDEAL: Bayesian ideal point estimator
### 3.1 Modeling
Roll call data arise when $n$ legislators vote on $m$ motions (such as bills
or legislative initiatives). Thus, each legislator $i\in\\{1,\ldots,n\\}$
takes a position in favor of (yea) or against (nay) motion
$j\in\\{1,\ldots,m\\}$. Hence, we let $y_{i,j}\in\\{0,1\\}$ be the vote cast
by legislator $i$ on motion $j$, with $y_{i,j}=1$ if such a vote turns out in
favor of the motion, and $y_{i,j}=0$ otherwise. These votes in favor of or
against motion $j$ are assumed to be points in a $d$-dimensional Euclidean
space, known as the political space, and are denoted by $\bm{\psi}_{j}$ and
$\bm{\zeta}_{j}$, respectively.
It is assumed that all the legislators have a political preference, i.e., each
legislator $i$ has a latent (unobserved) factor
$\bm{\beta}_{i}\in\mathbb{R}^{d}$ known as ideal point. Thus, decisions are
made based on a quadratic utility function given by
$U_{i}(\bm{\psi}_{j})=-\parallel\bm{\psi}_{j}-\bm{\beta}_{i}\parallel^{2}+\eta_{i,j}$
and
$U_{i}(\bm{\zeta}_{j})=-\parallel\bm{\zeta}_{j}-\bm{\beta}_{i}\parallel^{2}+\upsilon_{i,j}$,
where $U_{i}(\bm{\psi}_{j})$ and $U_{i}(\bm{\zeta}_{j})$ are the corresponding
utilities associated with legislator $i$ for voting in favor of or against
motion $j$, respectively, and $\eta_{i,j}$ and $\upsilon_{i,j}$ are
independent random deviations resulting from the uncertainty involved during
the decision processes, such that $\textsf{E}(\eta_{i,j}-\upsilon_{i,j})=0$
and $\textsf{Var}(\eta_{i,j}-\upsilon_{i,j})=\sigma^{2}_{j}$.
Rational choice theory (e.g., Yu and Rodríguez, 2019) states that under the
previous setting, legislator $i$ votes in favor of motion $j$ if and only if
$U_{i}(\bm{\psi}_{j})>U_{i}(\bm{\zeta}_{j})$. Hence, assuming that
$\eta_{i,j}-\upsilon_{i,j}$ is normally distributed, i.e.,
$(\eta_{i,j}-\upsilon_{i,j})\mathrel{\overset{\makebox[0.0pt]{\mbox{\tiny
ind}}}{\sim}}\textsf{N}(0,\sigma^{2}_{j})$, it follows that
$\textsf{Pr}(y_{i,j}=1\mid\bm{\zeta}_{j},\bm{\psi}_{j},\sigma_{j},\bm{\beta}_{i})=\textsf{Pr}(\epsilon_{i,j}<\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i})=\Phi(\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i})$
where $\epsilon_{i,j}=(\upsilon_{i,j}-\eta_{i,j})/\sigma_{j}$ is the random
component,
$\mu_{j}=(\bm{\zeta}_{j}^{\textsf{T}}\bm{\zeta}_{j}-\bm{\psi}_{j}^{\textsf{T}}\bm{\psi}_{j})/\sigma_{j}$
is the approval parameter (representing the basal probability of a vote in
favor of motion $j$),
$\bm{\alpha}_{j}=2(\bm{\psi}_{j}-\bm{\zeta}_{j})/\sigma_{j}$ is the
discrimination parameter (representing the effect that ideal points have upon
the probability of a vote in favor of motion $j$), and $\Phi(\cdot)$ is the
cumulative distribution function of the standard normal distribution.
The previous specification fully characterizes the probability of observing a
positive vote since
$y_{i,j}\mid\mu_{j},\bm{\alpha}_{j},\bm{\beta}_{i}\mathrel{\overset{\makebox[0.0pt]{\mbox{\tiny
ind}}}{\sim}}\textsf{Bernoulli}(\Phi(\mu_{j}+\bm{\alpha}^{\textsf{T}}_{j}\bm{\beta}_{i}))$.
The likelihood associated with such a latent factor model is given by
$p(\mathbf{Y}\mid\\{\mu_{j}\\},\\{\bm{\alpha}_{j}\\},\\{\bm{\beta}_{i}\\})=\prod_{i=1}^{n}\prod_{j=1}^{m}\Phi(\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i})^{y_{i,j}}\left[1-\Phi(\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i})\right]^{1-y_{i,j}}\,,$
where $\mathbf{Y}=[y_{i,j}]$ is a binary rectangular matrix of size $n\times
m$. Finally, in order to complete the specification of the model and carry out
full Bayesian inference, it is required to specify a joint prior distribution
on the parameter space. A computationally convenient alternative that works
well in practice consists in letting
$(\mu_{j},\bm{\alpha}_{j})\mid\bm{a},\mathbf{A}\mathrel{\overset{\makebox[0.0pt]{\mbox{\tiny
iid}}}{\sim}}\textsf{N}_{d+1}(\bm{a},\mathbf{A})$ and
$\bm{\beta}_{i}\mid\bm{b}_{i},\mathbf{B}_{i}\mathrel{\overset{\makebox[0.0pt]{\mbox{\tiny
ind}}}{\sim}}\textsf{N}_{d}(\bm{b}_{i},\mathbf{B}_{i})$, where
$\bm{a},\mathbf{A},\bm{b}_{i}$, and $\mathbf{B}_{i}$ are the model
hyperparameters (known fixed quantities). Of course, more complex hierarchical
specifications are also possible.
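To make the generative model concrete, the following sketch simulates a roll call matrix from the specification above; the dimensions and parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n, m, d = 50, 100, 1  # legislators, motions, dimension of the political space

beta = rng.normal(size=(n, d))    # ideal points beta_i
alpha = rng.normal(size=(m, d))   # discrimination parameters alpha_j
mu = rng.normal(size=m)           # approval parameters mu_j

# Pr(y_ij = 1) = Phi(mu_j + alpha_j' beta_i), collected in an n x m matrix
probs = norm.cdf(mu[None, :] + beta @ alpha.T)
Y = rng.binomial(1, probs)        # simulated binary roll call matrix
```

Simulated matrices of this kind are also useful for validating estimation code, since the true parameter values are known.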
### 3.2 Identifiability
The model parameters in the likelihood above are not identifiable.
Specifically, note that the Euclidean distances among the ideal points
$\bm{\beta}_{i}$ and the voting alternatives $\bm{\psi}_{j}$ and
$\bm{\zeta}_{j}$ remain invariant under any translation, rotation, or
reflection of the political space. This geometric fact implies that neither
the discrimination parameters nor the ideal points are distinguishable from
any voting pattern $\mathbf{Y}$. For instance, consider a rotation of the
political space through a $d\times d$ orthogonal matrix $\mathbf{Q}$, i.e.,
$\mathbf{Q}^{\textsf{T}}\mathbf{Q}=\mathbf{I}_{d}$.
Then,
$(\mathbf{Q}\bm{\alpha}_{j})^{\textsf{T}}(\mathbf{Q}\bm{\beta}_{i})=\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i}$,
for all $i$ and all $j$, and therefore,
$p(\mathbf{Y}\mid\\{\mu_{j}\\},\\{\bm{\alpha}_{j}\\},\\{\bm{\beta}_{i}\\})=p(\mathbf{Y}\mid\\{\mu_{j}\\},\\{\mathbf{Q}\bm{\alpha}_{j}\\},\\{\mathbf{Q}\bm{\beta}_{i}\\})$.
Such a phenomenon is also typical of latent space models for social networks
(e.g., Sosa and Betancourt, 2022).
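This invariance is easy to verify numerically. The sketch below draws a random orthogonal matrix $\mathbf{Q}$ via a QR decomposition and checks that the linear predictors $\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i}$ are unchanged; all quantities are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 5, 8, 2
beta = rng.normal(size=(n, d))    # ideal points (one row per legislator)
alpha = rng.normal(size=(m, d))   # discrimination parameters (one row per motion)

# Random orthogonal matrix Q (Q'Q = I) via QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))

# (Q alpha_j)'(Q beta_i) = alpha_j' beta_i for all i, j:
# the likelihood cannot tell the rotated configuration from the original
eta = beta @ alpha.T
eta_rot = (beta @ Q.T) @ (alpha @ Q.T).T
assert np.allclose(eta, eta_rot)
```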
The lack of identifiability requires us to establish parameter constraints. A
popular alternative consists in imposing restrictions on the mean and variance
of the ideal points. Specifically, in the spirit of Jackman (2004) and
Lofland et al. (2017), letting $\bm{b}_{i}=\bm{0}_{d}$ and
$\mathbf{B}_{i}=\mathbf{I}_{d}$, for $i=1,\ldots,n$, is useful to overcome
translation and scale issues. Furthermore, it is convenient to fix the
positions of $d+1$ legislators, known as anchor legislators, with known (but
distinctive!) political patterns in the political space, since this allows the
model to differentiate legislative tendencies (Rivers, 2003; Clinton et al.,
2004).
### 3.3 Prior elicitation and computation
Along the lines of Clinton et al., (2004), we recommend to set
$\bm{a}=\bm{0}_{(d+1)}$ and $\mathbf{A}=\sigma^{2}\mathbf{I}_{(d+1)}$ with
$\sigma^{2}$ an arbitrarily large constant (e.g., $\sigma^{2}=25$) in order to
assign a zero-centered non-informative prior distribution to $\mu_{j}$ and
$\bm{\alpha}_{j}$, for $j=1,\ldots,m$, aiming to emulate roughly the behavior
of a diffuse state of information. This choice is quite reminiscent of the
hyperparameter elicitation in a standard linear regression model when there is
no need of informative or empirical alternatives, such as the unit information
prior (Kass and Wasserman,, 1996) or the $g$-prior (Zellner,, 1986). Finally,
notice that it is not required to set a large variance a priori for the ideal
points. Actually, all what is required is a prior notion of scale for this set
of parameters.
### 3.4 Posterior Inference
Considering data from $n$ legislators on $m$ motions, we have $dn+m(d+1)$
unknown model parameters to estimate. Thus, the posterior distribution is
framed in a high dimensional space, which makes it analytically intractable.
Even though other deterministic approaches recognized by their efficiency are
available (e.g., variational approximations; Ormerod and Wand, 2010), we
strongly suggest Markov Chain Monte Carlo algorithms (MCMC; e.g., Gamerman and
Lopes,, 2006) to approximate the posterior distribution. In particular, by
means of a Gibbs sampler, we are able to produce a sequence of dependent but
approximately independent draws from the posterior distribution, through
iterative sampling from the full conditional distributions of
$\mu_{j},\bm{\alpha}_{j}$ and $\bm{\beta}_{i}$. Hence, make it possible to
compute point and interval estimates from the corresponding empirical
distributions.
Following Albert and Chib (1993), we recommend augmenting the parameter space
by introducing a set of auxiliary variables $\\{z_{i,j}\\}$ such that
$y_{i,j}=1$ if $z_{i,j}>0$, and $y_{i,j}=0$ if $z_{i,j}\leq 0$, where
$z_{i,j}\mid\mu_{j},\bm{\alpha}_{j},\bm{\beta}_{i}\mathrel{\overset{\makebox[0.0pt]{\mbox{\tiny
ind}}}{\sim}}\textsf{N}(\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i},1)$.
Notice that we obtain exactly the original Bernoulli model specified above
when integrating the $z_{i,j}$ variables out. This model reformulation makes
it easier to draw directly from the full conditional distributions of
$\mu_{j}$, $\bm{\alpha}_{j}$, and $\bm{\beta}_{i}$.
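The truncated normal draws implied by this augmentation can be sketched as follows; the vote and linear predictor values are hypothetical.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

def sample_z(y, eta, rng):
    """Draw z ~ N(eta, 1) truncated to (0, inf) if y = 1 and to (-inf, 0] if y = 0."""
    # truncnorm takes bounds standardized as (bound - loc) / scale
    a, b = (-eta, np.inf) if y == 1 else (-np.inf, -eta)
    return truncnorm.rvs(a, b, loc=eta, scale=1.0, random_state=rng)

z_yes = sample_z(1, 0.3, rng)   # always strictly positive
z_no = sample_z(0, 0.3, rng)    # always non-positive
```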
Let
$\mathbf{\Theta}=(\mu_{1},\ldots,\mu_{m},\bm{\alpha}_{1},\ldots,\bm{\alpha}_{m},\bm{\beta}_{1},\ldots,\bm{\beta}_{n})$
be the full set of model parameters. The posterior distribution of
$\mathbf{\Theta}$ is
$\displaystyle p(\mathbf{\Theta}\mid\mathbf{Y})$
$\displaystyle\propto\prod_{i=1}^{n}\prod_{j=1}^{m}p(y_{i,j}\mid
z_{i,j})\times\prod_{i=1}^{n}\prod_{j=1}^{m}\textsf{N}(z_{i,j}\mid\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i},1)$
$\displaystyle\hskip
99.58464pt\times\prod_{j=1}^{m}\textsf{N}_{d+1}(\mu_{j},\bm{\alpha}_{j}\mid\bm{a},\mathbf{A})\times\prod_{i=1}^{n}\textsf{N}_{d}(\bm{\beta}_{i}\mid\bm{b}_{i},\mathbf{B}_{i})\,.$
Moreover, let $\phi^{(b)}$ denote the state of parameter $\phi$ in the $b$-th
iteration of the Gibbs sampling algorithm, for $b=1,\ldots,B$. Then, such an
algorithm in this case is as follows:
1. 1.
Choose a starting configuration for each model parameter, say $z_{i,j}^{(0)}$,
$\mu_{j}^{(0)}$, $\bm{\alpha}_{j}^{(0)}$, and $\bm{\beta}_{i}^{(0)}$, for
$i=1,\ldots,n$ and $j=1,\ldots,m$.
2. 2.
Update $z_{i,j}^{(b-1)}$, $\mu_{j}^{(b-1)}$, $\bm{\alpha}_{j}^{(b-1)}$, and
$\bm{\beta}_{i}^{(b-1)}$, for $i=1,\ldots,n$ and $j=1,\ldots,m$, cycling until
convergence:
1. (a)
Sample $z_{i,j}^{(b)}$ from
$p(z_{i,j}\mid\mu_{j}^{(b-1)},\bm{\alpha}_{j}^{(b-1)},\bm{\beta}_{i}^{(b-1)},y_{i,j})$,
where
$p(z_{i,j}\mid\mu_{j},\bm{\alpha}_{j},\bm{\beta}_{i},y_{i,j})=\left\\{\begin{matrix}\textsf{TN}_{(0,+\infty)}(z_{i,j}\mid\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i},1)&\text{if}&y_{i,j}=1\,,\\\
\\\
\textsf{TN}_{(-\infty,0]}(z_{i,j}\mid\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i},1)&\text{if}&y_{i,j}=0\,.\end{matrix}\right.$
2. (b)
Sample $(\mu_{j},\bm{\alpha}_{j})^{(b)}$ from
$p(\mu_{j},\bm{\alpha}_{j}\mid\\{\bm{\beta}_{i}^{(b-1)}\\},\\{z_{i,j}^{(b)}\\})$,
where
$p(\mu_{j},\bm{\alpha}_{j}\mid\\{\bm{\beta}_{i}\\},\\{z_{i,j}\\})=\textsf{N}_{d+1}(\mu_{j},\bm{\alpha}_{j}\mid\bm{c}_{j},\mathbf{C})\,,$
with
$\bm{c}_{j}=(\mathbf{A}^{-1}+\mathbf{E}^{\textsf{T}}\mathbf{E})^{-1}\left(\mathbf{A}^{-1}\bm{a}+\mathbf{E}^{\textsf{T}}\bm{z}_{\bullet
j}\right)$ and
$\mathbf{C}=(\mathbf{A}^{-1}+\mathbf{E}^{\textsf{T}}\mathbf{E})^{-1}$, where
$\mathbf{E}$ is a rectangular matrix whose $i$-th row is $(1,\bm{\beta}_{i})$,
and $\bm{z}_{\bullet j}=(z_{1,j},\ldots,z_{n,j})$.
3. (c)
Sample $\bm{\beta}_{i}^{(b)}$ from
$p(\bm{\beta}_{i}\mid\\{\mu_{j}^{(b)}\\},\\{\bm{\alpha}_{j}^{(b)}\\},\\{z_{i,j}^{(b)}\\})$,
where
$p(\bm{\beta}_{i}\mid\\{\mu_{j}\\},\\{\bm{\alpha}_{j}\\},\\{z_{i,j}\\})=\textsf{N}_{d}(\bm{\beta}_{i}\mid\bm{d}_{i},\mathbf{D}_{i})\,,$
with
$\bm{d}_{i}=(\mathbf{B}_{i}^{-1}+\mathbf{F}^{\textsf{T}}\mathbf{F})^{-1}\left(\mathbf{B}_{i}^{-1}\bm{b}_{i}+\mathbf{F}^{\textsf{T}}(\bm{z}_{i\bullet}-\bm{\mu})\right)$
and
$\mathbf{D}_{i}=(\mathbf{B}_{i}^{-1}+\mathbf{F}^{\textsf{T}}\mathbf{F})^{-1}$,
where $\mathbf{F}=[\bm{\alpha}_{1},\ldots,\bm{\alpha}_{m}]^{\textsf{T}}$,
$\bm{z}_{i\bullet}=(z_{i,1},\ldots,z_{i,m})$, and
$\bm{\mu}=(\mu_{1},\ldots,\mu_{m})$.
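The scheme above can be sketched compactly for the one-dimensional case ($d=1$). This is a minimal illustration, not a production implementation; it fixes the priors at the defaults discussed in Section 3.3 ($\bm{a}=\bm{0}$, $\mathbf{A}=\sigma^{2}\mathbf{I}$, $\bm{b}_{i}=0$, $\mathbf{B}_{i}=1$) and omits anchor legislators, so the sign of the recovered scale is arbitrary.

```python
import numpy as np
from scipy.stats import truncnorm

def ideal_gibbs(Y, n_iter=500, sigma2=25.0, seed=0):
    """Minimal Gibbs sampler for the one-dimensional (d = 1) IDEAL probit model.

    Priors: (mu_j, alpha_j) ~ N_2(0, sigma2 * I) and beta_i ~ N(0, 1).
    Returns the sampled ideal points, one row per iteration.
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    mu, alpha = np.zeros(m), np.zeros(m)
    beta = rng.normal(size=n)
    A_inv = np.eye(2) / sigma2                  # prior precision of (mu_j, alpha_j)
    samples = np.empty((n_iter, n))
    for b in range(n_iter):
        # (a) latent utilities z_ij ~ N(eta_ij, 1), truncated by the observed vote
        eta = mu[None, :] + np.outer(beta, alpha)
        lo = np.where(Y == 1, -eta, -np.inf)    # standardized truncation bounds
        hi = np.where(Y == 1, np.inf, -eta)
        z = truncnorm.rvs(lo, hi, loc=eta, scale=1.0, random_state=rng)
        # (b) item parameters (mu_j, alpha_j): conjugate bivariate normal update
        E = np.column_stack([np.ones(n), beta])
        C = np.linalg.inv(A_inv + E.T @ E)
        for j in range(m):
            mu[j], alpha[j] = rng.multivariate_normal(C @ (E.T @ z[:, j]), C)
        # (c) ideal points beta_i: conjugate univariate normal update
        D = 1.0 / (1.0 + alpha @ alpha)
        for i in range(n):
            beta[i] = rng.normal(D * (alpha @ (z[i] - mu)), np.sqrt(D))
        samples[b] = beta
    return samples
```

Because the update of $(\mu_{j},\bm{\alpha}_{j})$ conditions only on the current ideal points, the covariance matrix $\mathbf{C}$ is shared across motions within an iteration, which keeps the inner loop cheap.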
## 4 The Bayesian ideal point estimator: North and Latin American cases
An extensive review of the literature shows that the Bayesian ideal point
estimator is essential to analyze parliamentary electoral behavior at any
level (whether individual or collective). Its application allows political
scientists to support conjectures about individuals’ political preferences and
the latent characteristics that underlie legislative decisions, taking into
account political theory as well as the singularities of the parliament under
analysis. Furthermore, the flexibility of the estimator makes it adaptable to
provide empirical evidence on the relationship between policymakers,
institutional arrangements, and legislative outcomes.
To present our findings, we propose a new taxonomy based on the type of
parliamentary phenomenon as well as the scope of the application. The
taxonomy is composed of seven parts: political space dimension (Section 4.1),
pivot legislators identification (Section 4.2), voting restricted to the
agenda’s nature (Section 4.3), evolution and change in preferences of
political actors (Section 4.4), influence of national political leaders and
groups (Section 4.5), strategic abstentions (Section 4.6), and extremes voting
together (Section 4.7).
### 4.1 Political space dimension
The dimension of the political space is a popular topic in the Political
Science literature (e.g., Potoski and Talbert, 2000; Jackman, 2001; Talbert
and Potoski, 2002; Aldrich et al., 2014; Dougherty et al., 2014; Roberts et
al., 2016). Research in this direction highlights the discrimination
parameters (Jackman, 2001), the voting proposal typology (Moser et al.,
2021), and the use of weighted Euclidean distances (Binding and Stoetzer,
2022) as crucial elements for analyzing the dimension of the political space.
Thus, social scientists are interested in learning about the number of latent
features needed to model parliamentarians’ electoral behavior, the nature of
the retrieved dimensions, identifiability in multidimensional models, and the
additivity of legislators’ preferences, among other issues.
From a technical perspective, the dimension of the political space $d$
corresponds to the number of latent characteristics needed to properly model
the voting behavior of legislators. Its choice translates into a model
selection problem, seeking a balance between goodness-of-fit and model
complexity (Moser et al., 2021). Several works consider methodological and
epistemological discussions of the political space dimension as well as
specific mechanisms for its choice (Benoit and Laver, 2012; De Vries and
Marks, 2012; Jackman, 2001; Lofland et al., 2017; Moser et al., 2021).
The dimension of the political space can be studied through the discrimination
parameters $\bm{\alpha}_{1},\ldots,\bm{\alpha}_{m}$. These parameters allow
analysts to assess the political space dimension in conjunction with the
goodness-of-fit of the model, discern the substantive content of the recovered
dimensions, and contrast substantive hypotheses about the dimensions
underlying the political space (Jackman, 2001; Luque and Sosa, 2022).
Motions with statistically significant discrimination parameters (that is,
those whose credible interval does not contain zero) provide evidence about
the number of latent features required to explain voting patterns. For
example, in the one-dimensional case, a considerable number of discrimination
parameters indistinguishable from zero strongly suggests embedding the model
in higher dimensions (Jackman, 2001). Thus, inspecting the content of voting
lists associated with non-significant discrimination parameters can reveal the
qualitative character of additional dimensions to be considered. Defining
informative prior states for the discrimination parameters helps handle the
complexities that arise when considering higher-dimensional models (Jackman,
2001).
Some authors point out that interpreting the discrimination parameters as
factor loadings is quite useful for identifying the substantive content of the
retrieved dimensions (Jackman, 2001). However, others argue that the
dimensions of the latent space are not necessarily aligned with the content of
the bills presented for voting (Moser et al., 2021). In such a situation,
interpreting the dimensions in terms of substantive issues is inconsistent.
These interpretive perspectives lead to discussions regarding the distinction
between a basic space and a thematic space (see Moser et al., 2021; Poole,
2007, for more details).
In a recent study, Moser et al. (2021) indicate that standard spatial voting
models, such as the Bayesian ideal point estimator, assume that the dimension
of the political space is fixed and common to all legislators. However, this
assumption is a limitation for analyzing political actors’ preferences in
different electoral domains and identifying individual voting patterns. In
this sense, these authors propose a methodology, also based on aggregation
principles for analyzing nominal data (see Roberts et al., 2016), that allows
modelers to determine latent characteristics common to the chamber and
specific to each legislator, taking into account different voting domains. In
other words, this approach assumes that legislators can reveal different
preferences depending on the nature of the vote (economy, security, among
others). Furthermore, this framework assumes the existence of subsets of
legislators whose voting patterns for certain groups of votes are not
explainable through a linear combination of the latent traits recovered for
the entire group.
The standard Bayesian ideal point estimator assumes that political actors’
preferences in every dimension are additively separable; the utility of an
actor is then given by the weighted sum of the deviations along the
dimensions. In order to test this assumption, Binding and Stoetzer (2022)
introduce a statistical model allowing non-separability across dimensions.
These authors affirm that political actors’ preferences can be characterized
better by multiple non-separable dimensions than by a single dimension or
multiple independent dimensions.
Just a handful of studies in the Latin American context focus on providing
empirical evidence to justify the choice of the political space dimension.
Most studies assume one or two dimensions based on the political context of
the legislatures under analysis (e.g., Zucco, 2013; Zucco and Lauderdale,
2011). For example, on the one hand, Rosas (2005) argues that legislative
politics in most Latin American countries is one-dimensional. On the other
hand, Zucco and Lauderdale (2011) point out that the presence of religious,
linguistic, or ethnic parties justifies the existence of a second ideological
dimension. Similarly, the presence of coalitions, electoral districts, and
regional or provincial divisions, among others, indicates possible higher
non-ideological dimensions (Zucco and Lauderdale, 2011; Zucco, 2013).
Lastly, Jones and Hwang (2005a) and Luque and Sosa (2022), inspired by
Jackman (2001), examine the dimension of the political space through the
analysis of the discrimination parameters. The estimates of these parameters
provide empirical evidence that a single dimension underlies the policy space
in the case of the nominal votes of the Argentine House of Representatives
1989–2003 and the Senate of the Republic of Colombia 2010–2014, respectively.
These authors state that a one-dimensional model suffices to properly model
such roll call data.
### 4.2 Pivot legislators identification
The identification of pivot legislators includes studies focused on the
estimation and inference of both ideal points and auxiliary quantities
obtained as functions of the model parameters. Such quantities allow
researchers to recognize those members of parliament whose position in the
political space is relevant for understanding what happens within the
legislature (Clinton et al., 2004). In this line of work, the interest lies in
the identity and position of pivot legislators (also known as fundamental
legislators), extremists, and minorities, among others.
The notion of a pivot legislator is fundamental for theories of parliamentary
behavior that characterize and predict law formulation processes based on the
position of these legislators in the political space (Clinton et al., 2004).
In the North American context, such theories indicate that pivotal legislators
are those whose vote is critical to the success or failure of the legislative
process. In particular, the vote of these deputies can guarantee the closure
of a debate and define a majority vote in an extraordinary legislative
situation (see Krehbiel, 1998; Clinton et al., 2004).
In this regard, Clinton et al. (2004) present a methodology based on the
standard one-dimensional Bayesian ideal point estimator to discern the
identity and spatial location of the deputies who play a fundamental role
within the legislative body. The methodology is a sequential and iterative
three-step scheme: (i) sample the legislators’ ideal points from their joint
posterior distribution; (ii) order the sampled ideal points in ascending
order; and (iii) observe which legislators occupy a particular pivot or order
statistic of interest. After repeating this scheme an arbitrarily large
number of times, we can identify those legislators who are more or less likely
to occupy a particular ordered position.
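The three-step scheme amounts to tabulating an order statistic over posterior draws. A minimal sketch with synthetic draws (a real application would use the MCMC output of Section 3.4, and the ideal point locations here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
B, n = 2000, 9   # posterior draws x legislators (synthetic)
draws = rng.normal(loc=np.linspace(-2, 2, n), scale=0.3, size=(B, n))

# For each posterior draw, record which legislator occupies the median
# position, then estimate each legislator's probability of being the pivot
median_idx = np.argsort(draws, axis=1)[:, n // 2]
pivot_prob = np.bincount(median_idx, minlength=n) / B
```

Other order statistics (e.g., the filibuster or veto pivots in the North American literature) follow by replacing the index `n // 2`.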
In Latin America, studies in this direction are scarce. From a political
perspective, social scientists are still determining whether pivotal theories
have significant applications in the Latin American context. Very recently,
Luque and Sosa (2022) identified legislators from the Colombian Senate
2010–2014 who are more likely to be in a conservative position or at the
extremes of the political spectrum. Their findings are limited to
individualizing deputies rather than making inferences about other quantities
that can be derived from ideal point estimates. Nevertheless, more efforts
need to be carried out in order to determine order statistics within
parliament.
### 4.3 Voting restricted to the agenda’s nature
Research in this direction highlights the importance of estimating parameters
associated with voting alternatives (see Clinton and Meirowitz, 2001), say
$\bm{\psi}_{j}$ and $\bm{\zeta}_{j}$, as a mechanism to investigate issues
related to the behavior of the status quo in a specific period, the
positioning of new policies in particular moments and contexts, and the
dependency relationship between votes under a specific agenda.
Standard political preference estimators, such as the standard Bayesian ideal
point estimator, do not incorporate the sequential nature of the agenda, i.e.,
these models assume that the retrieved parameters are not affected by a
reordering of the voting sequence. This assumption omits helpful information
to locate parameters associated with voting alternatives and determine the
political space dimension, which may lead to unsuitable ideal point estimates
for certain political instances. Consequently, the corresponding findings
could support misleading interpretations regarding the legislative theories
tested with these models (Clinton and Meirowitz, 2001).
In order to incorporate the agenda’s sequential nature into roll call data
analysis, Clinton and Meirowitz (2001) extend the standard Bayesian ideal
point estimator to a constrained agenda model, where voting alternatives are
tied to a status quo parameter. In formal terms, and under the assumption of
complete knowledge of the order of the agenda, the status quo of voting list
$j$, say $\bm{\zeta}_{j}$, is assumed to be tied to the previous voting
alternative parameters. Thus, if $\bm{\psi}_{j-1}$ was approved, then the new
status quo is $\bm{\zeta}_{j}=\bm{\psi}_{j-1}$. On the contrary, if
$\bm{\psi}_{j-1}$ was rejected, then the new status quo is
$\bm{\zeta}_{j}=\bm{\zeta}_{j-1}$. Therefore, voting in favor of motion $j$
represents a movement in the political space, while voting against it avoids
such a movement.
In the North American context, the restricted model provides estimates of
ideal points more consistent with deliberate policies than the standard
estimator because it reveals the relationship between the status quo and the
last approved proposal. However, the model does not provide a framework for
examining endogenous agenda formation and strategic voting within parliament.
Additionally, the analysis does not provide evidence about the consistency
between the restricted and unrestricted estimators (Clinton and Meirowitz,
2001). Although this proposal covers a little-studied legislative phenomenon,
the Bayesian ideal point estimator that accounts for the order of the agenda
implies a price in terms of parsimony: unlike the standard model, this
extension involves more parameters to estimate and more technical
difficulties. To the best of our knowledge, there are no implementations of
the constrained estimator in the Latin American context.
### 4.4 Change in preferences of political actors
In this line of work, the research premises concern the stability of the
political preferences of legislators under particular circumstances of the
legislative process (e.g., a change of party). It is also of interest to
analyze the discrepancies exhibited by deputies in their voting patterns when
faced with different voting domains, as well as the differences they reveal in
their electoral behavior when they legislate in different institutional bodies
(e.g., chambers and commissions), among others. All these questions lead to an
essential technical discussion about the importance of establishing common
latent scales that allow contrasting the electoral behavior of political
actors (e.g., Asmussen and Jo, 2016; Shor et al., 2010; Shor and McCarty,
2011).
The latter task, establishing common latent scales, refers to a
methodological aspect that involves analyzing bridges that allow ideal points
to be placed on a common scale, ensuring that contrasts between institutions,
periods, or other scenarios are compatible. Thus, Shor et al. (2010) assert
that it is not valid to equate ideal point estimates obtained separately,
since each set of ideal points is produced on a different latent scale.
Therefore, one alternative consists in identifying “bridge actors” (e.g.,
legislators with common voting records) to anchor latent spaces, allowing
analysts to compare the electoral behavior of subjects who do not vote
simultaneously.
The change of preferences has been considered from different perspectives. For
example, Martin and Quinn (2002) postulate a dynamic version of the standard
Bayesian spatial voting model to assess change and possible dependencies in
political actors’ preferences over time. Although these authors do not present
results in the parliamentary context, their model is a reference for
characterizing the dynamics of legislators’ political preferences, since their
research describes different approaches to measuring changes in such
preferences over time. Additionally, they indicate the theoretical structure
for incorporating dynamic linear models (West and Harrison, 2006) into the
estimation of ideal points and the methodology to support the corresponding
Bayesian inference.
The dynamic model specification is analogous to the base model, but with indexing over time. In this spirit, the Bayesian ideal point estimator in its dynamic version is a dynamic linear model of the form
$y_{i,j,t}^{*}=\mu_{j}+\bm{\alpha}_{j}^{\textsf{T}}\bm{\beta}_{i,t}+\epsilon_{i,j,t}$,
where $\bm{\beta}_{i,t}$ is the ideal point of legislator $i$ at time $t$, and $\epsilon_{i,j,t}$ is the stochastic deviation due to the uncertainty associated with the voting process over time. The other parameters have the same meaning as in Section 3. This model differs from the base model mainly in the prior distributions of the ideal points. Martin and Quinn (2002) propose a dynamic (random walk) prior distribution given by
$\bm{\beta}_{i,t}\mid\bm{\beta}_{i,t-1},\bm{\Delta}_{\beta_{i,t}}\stackrel{{\scriptstyle\text{ind}}}{{\sim}}\textsf{N}(\bm{\beta}_{i,t-1},\bm{\Delta}_{\beta_{i,t}})$,
where $\bm{\Delta}_{\beta_{i,t}}$ is a hyperparameter describing the variation of the temporal evolution.
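The generative side of this dynamic specification can be sketched as follows (a simulation of the sampling model only, not the posterior computation; all sizes and hyperparameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n, m, T = 20, 40, 5          # legislators, motions per period, periods (hypothetical)

# Random-walk prior on the ideal points: beta_{i,t} ~ N(beta_{i,t-1}, delta^2)
delta = 0.3
beta = np.empty((n, T))
beta[:, 0] = rng.normal(0.0, 1.0, size=n)
for t in range(1, T):
    beta[:, t] = beta[:, t - 1] + rng.normal(0.0, delta, size=n)

# Probit sampling model per period: y*_{ijt} = mu_{jt} + alpha_{jt} * beta_{it} + eps
mu = rng.normal(0.0, 0.5, size=(T, m))
alpha = rng.normal(0.0, 1.0, size=(T, m))
votes = np.empty((n, m, T), dtype=int)
for t in range(T):
    y_star = mu[t] + np.outer(beta[:, t], alpha[t]) + rng.normal(size=(n, m))
    votes[:, :, t] = (y_star > 0).astype(int)
```

The random walk is what ties the periods together: without it, the $T$ periods would be $T$ unrelated ideal point models on $T$ unrelated scales.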
In the Latin American context, particularly in Brazil, we are aware of one approach that applies the one-dimensional standard dynamic Bayesian ideal point estimator to contrast the electoral behavior of deputies in the chambers of Parliament in 1989–2008. In particular, McDonnell (2017) establishes a common latent scale between chambers, taking the joint votes of their members as a bridge. In Brazil, representatives and senators occasionally vote sequentially in Congress. With this strategy, the author equates legislative behavior to an educational test in which subjects act on the same policies almost simultaneously.
Another perspective for studying change in political actors' preferences is to assume that the preferences of deputies are stable, at least over short periods, but are affected by particular circumstances of the legislative process (Clinton et al., 2004; Moser et al., 2021). In this sense, Clinton et al. (2004) analyze the change of deputies' preferences in the context of party switching through the standard one-dimensional Bayesian ideal point estimator (considering a liberal–conservative latent trait). These authors state that legislators who change party affiliation during their term are crucial to identifying the effect of the party on the voting behavior of members of the legislative body. At the moment of change, other determinants of the vote remain constant (e.g., the constituency and the affiliation of the other legislators), so the new affiliation may reflect a variation in the ideal points of the legislators as a result of the switch. Naturally, all legislators can present a variation in the estimates of their ideal points after the change; however, the variation for deputies who change parties is greater than for those who maintain their political affiliation. In this sense, it is possible to parameterize the ideal points as $\beta_{i,1}=\beta_{i,0}+\delta_{i}$, where $\beta_{i,0}$ and $\beta_{i,1}$ are the ideal points of legislator $i$ before and after the change, respectively. Thus, the relative change $\delta_{i}$ of the $i$-th legislator makes it possible to analyze the variation of the corresponding ideal point.
From this angle, there are two main critical issues in the quantitative investigation of change in preferences of political actors, namely, the precision and the comparability of the estimates mentioned above (McCarty et al., 2001; Clinton et al., 2004). Regarding precision, Bayesian simulation allows us to identify the inaccuracy that arises when dividing the data set into pre- and post-change subsets. Furthermore, it provides uncertainty assessments for all the model parameters, showing the drop in accuracy of the estimates after splitting the data set around the party change. Regarding the production of comparable estimates, Clinton et al. (2004) point out that posterior standardization of the ideal points limits the problem. However, Lofland et al. (2017) question this approach, pointing out three methodological inconsistencies. First, comparing the hierarchical order of the legislators through estimates made before and after the change of party ignores that the locations of different legislators are not independent: one legislator increases in rank when another or others decrease. Second, fitting separate models before and after the change produces non-comparable ideal point estimates; standardized estimates of ideal points do not guarantee that they share the same latent scale. And third, the party-switching hypothesis must account for the lack of fit that arises when testing multiple hypotheses.
Lofland et al. (2017) then propose a hierarchical model to induce a common scale without splitting the vote set. Furthermore, they assume that not all legislators change their preferences: the deputies with constant political preferences throughout the period under analysis are the bridge between the political spaces (see Shor et al., 2010; Shor and McCarty, 2011). The model uses zero-inflated Gaussian priors to identify the bridge legislators that connect the arbitrary ideological scales and make them comparable. Additionally, this estimator allows multiple comparison problems to be addressed simultaneously (see Scott and Berger, 2006, 2010), taking into account Bayes factors and posterior probabilities. Although the proposal is inspired by Martin and Quinn (2002), the main goal is not modeling the evolution of deputies' preferences over time, but testing hypotheses of party change at a specific moment in the legislative process.
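The zero-inflated Gaussian idea can be sketched as a spike-and-slab draw for the preference shifts $\delta_i$: with some prior probability a legislator is a "bridge" and the shift is exactly zero; otherwise it comes from a Gaussian slab. This is a minimal sketch of the prior only (the mixture probability and slab scale below are hypothetical, not Lofland et al.'s values):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100       # hypothetical number of legislators
pi = 0.8      # prior probability that a legislator is a bridge (delta_i = 0)
tau = 0.5     # slab standard deviation for genuine preference shifts

# Zero-inflated Gaussian draw: point mass at zero with probability pi,
# otherwise a N(0, tau^2) slab.
is_bridge = rng.random(n) < pi
delta = np.where(is_bridge, 0.0, rng.normal(0.0, tau, size=n))

print(f"bridges: {is_bridge.sum()} of {n}")
```

In the posterior, the indicator `is_bridge` is learned rather than drawn, and the inferred bridges are what anchor the two scales to each other.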
Inspired by the work of Lofland et al. (2017), Moser et al. (2021) propose another extension of the Bayesian ideal point estimator. The formulation does not focus on party switching, but on variation in legislators' preferences when they vote on different domains or issues. The estimation of ideal points of political actors in different voting domains constitutes a little-studied form of change in parliamentarians' preferences. The extension provides a methodological framework to carry out contrasts, at both individual and group levels, about the similarities and discrepancies that deputies' revealed preferences present when they make decisions in different voting domains. The approach estimates the identity of “party legislators” (bridge legislators), voters whose revealed preference remains constant across all lists, without assuming prior knowledge of which voters are partisan.
These extensions of the Bayesian ideal point estimator also allow researchers to investigate the relationship between the political space and the thematic space through a clustering model (see Moser et al., 2021), under which legislators may have different preferences for each group of votes. The use of previous groupings reduces the number of distinct positions for a legislator and contributes to less uncertainty in the estimates of the model parameters. Unlike Lofland et al. (2017), Moser et al. (2021) generalize the model to an arbitrary number of groups by introducing prior distributions that divide the set of votes into $K$ groups determined by the analyst. The generality of the model formulation opens the door to potential applications, mainly in Latin American legislatures, where to the best of our knowledge there are no studies in this direction.
### 4.5 Influence of national political leaders and groups
An issue of interest to scholars of the North American and Latin American congresses is the influence of party leaders or national political groups on the electoral behavior of legislators. Studies in this line focus on the effect of political groups (parties, coalitions, among others) on electoral behavior and on the influence of incentives on deputies' voting decisions. In other words, the research emphasizes voting patterns that are not the product of sincere voting behavior.
Clinton et al. (2004) provide a methodology to interpret and operationalize the influence of political groups on parliamentary voting behavior. Not accounting for this phenomenon leads ideal point estimates to absorb an impact common to the members of a particular group and to reveal greater (e.g., partisan) polarization in the preferences of the deputies. In this direction, party influence is a plausible mechanism generating extra utility incentives associated with a specific political group. An extension of the utility function to model partisan incentives directly is
$U_{i}(\psi_{j})=-(\psi_{j}-\beta_{i})^{2}+\delta_{j}^{\textsf{D}}+\eta_{i,j}\qquad\text{and}\qquad U_{i}(\zeta_{j})=-(\zeta_{j}-\beta_{i})^{2}+\delta_{j}^{\textsf{R}}+\upsilon_{i,j}\,,$
where $U_{i}(\psi_{j})$ and $U_{i}(\zeta_{j})$ are the utilities for legislator $i$ of voting in favor of or against motion $j$, respectively, and $\delta_{j}^{\textsf{D}}$ and $\delta_{j}^{\textsf{R}}$ are the incentives that legislator $i$ receives for voting positively, depending on party affiliation (D for Democrats and R for Republicans). Finally, $\eta_{i,j}$ and $\upsilon_{i,j}$ are random shocks produced by the uncertainty associated with the voting process, assumed independent and identically distributed with a logistic distribution. These utility functions lead to a linear model of the form
$y_{i,j}^{*}=\mu_{j}+\alpha_{j}\beta_{i}+\delta_{j}D_{i}+\epsilon_{i,j}\,,$
where $\epsilon_{i,j}=(\upsilon_{i,j}-\eta_{i,j})/\sigma_{j}$, $\mu_{j}=(\zeta_{j}^{2}-\psi_{j}^{2})/\sigma_{j}$, and $\alpha_{j}=2(\psi_{j}-\zeta_{j})/\sigma_{j}$; $D_{i}$ is a dummy variable that takes the value of 1 if legislator $i$ is a Democrat and 0 otherwise, and $\delta_{j}=\delta_{j}^{\textsf{D}}-\delta_{j}^{\textsf{R}}$ is the net difference of Democratic-party-specific incentives. If $\delta_{j}>0$, the net incentive is for Democrats to vote in favor of motion $j$, while if $\delta_{j}<0$, the incentive is for Democrats to vote against motion $j$.
Clinton et al. (2004) argue that this extension is advantageous compared to other methodological proposals for studying party influence. Unlike other mechanisms, it requires neither a differential analysis of items (e.g., Wainer, 1993) nor two-stage procedures (e.g., Snyder Jr and Groseclose, 2000), methods that have been criticized due to the bias occurring during the estimation process (e.g., McCarty et al., 2001). In this sense, the Bayesian approach improves on several implementations of party influence models available in the literature. Furthermore, the model is quite general, since it augments the utility functions with party-specific incentives for each motion.
This extended estimator has the identification restrictions of the standard model but also introduces restrictions for the parameters $\delta_{j}$ (in the same direction as Snyder Jr and Groseclose, 2000). Applying the model identifies voting lists subject to partisan incentives and yields statistically significant party effects (the 95% credible interval of $\delta_{j}$ does not contain zero) for the 105th US Senate. Clinton et al. (2004) state that although there is evidence consistent with party influence in only a third of close votes (roll call votes decided by margins between 35% and 65%), the magnitude of these incentives is large and politically consistent with the behavior of the deputies. Estimating party-specific incentives for each roll call makes it possible to determine party polarization more precisely.
In the same line of thought, Hahn et al. (2012) adapt the standard Bayesian ideal point estimator using sparse factor models (e.g., Pati et al., 2014; Bernardo et al., 2003; Carvalho et al., 2008) to reveal partisan patterns in roll call votes of the US Senate. The proposal incorporates theoretical elements of the multivariate probit model (e.g., Chib and Greenberg, 1998), the Gaussian factor model (e.g., Murray et al., 2013), and point masses at zero (e.g., Castillo et al., 2015) to identify cases in which partisanship is not predictive of roll calls. This formulation makes it possible to characterize covariance patterns in multivariate binary data and so establish which bills and legislators do not rely on latent factors of a partisan nature. Hahn et al. (2012) thus present the sparse factor probit model as a useful exploratory tool for analyzing high-dimensional correlated categorical data. These authors point out that, like the standard estimator, their method is adaptive, and they indicate that an interesting extension would consist in adding either spatial or temporal autocorrelation components to the scores. The dependency would account for the partisanship of senators who serve in consecutive congresses or are spatially close (for example, belonging to the same district).
In the case of Latin America, the literature on this topic is broader. The influence of the executive, leaders, and political groups (leaders of the majority party, coalitions, delegations, among others) on the legislative vote is remarkable in the region, and studies have analyzed the phenomenon from different perspectives. The investigations have shown that legislatures in Latin America have a structure and dynamics that differ from those of the US parliament (Pereira and Mueller, 2004b). Additionally, the executive–legislative relationship (Pereira and Mueller, 2004b; Zucco, 2013), the consolidation of government–opposition coalitions (Carey, 1998; Carroll and Pachón, 2016; Tsai, 2020; Zucco and Lauderdale, 2011), multipartyism (Ames, 2002; Neto, 2002; Figueiredo and Limongi, 2000; Pereira and Mueller, 2004b; Tsai, 2020; Zucco, 2013), and political actors and subdivisions internal (committees) or external (constituencies, delegations, among others) to the legislative chambers (Jones and Hwang, 2005a; Cheibub et al., 2009; Pachón and Johnson, 2016; Tsai, 2020; Clerici, 2021) are determining factors of parliamentary electoral conduct. Some authors even argue that, in contrast to the ideological component, these factors have a greater impact on the legislative voting process (e.g., Alemán et al., 2018; Tsai, 2020).
Thus, Tsai (2020) proposes adjusting the Bayesian ideal point estimator to incorporate non-ideological factors (such as membership in the ruling coalition) into the analysis of roll call data. The model offers empirical evidence regarding the influence of political groups on the legislative voting behavior of the Brazilian Chamber of Deputies in 2003–2006. The author states that the extension is necessary since the Brazilian assembly is an example of a legislature whose electoral behavior is mediated by both political negotiation (strategies to access political resources) and ideological preferences. The standard ideal point estimates are not optimal, since they cannot distinguish between the impact of coalition dynamics and the effect of the individual political preferences that underlie voting behavior. Other spatial voting models postulated under a similar argument are those proposed by Zucco (2009) and Zucco and Lauderdale (2011); in this latter study, the authors present a version of the standard ideal point estimator in two dimensions (left–right and government–opposition).
The modeling strategy assumes that the political problem underlying nominal voting has a one-dimensional ideological content (left–right) underlying the different voting decisions to be made by legislators with the same political position; the government–opposition relationship then explains the discrepancies in their choices. In this sense, Tsai (2020) models the executive's influence on legislative voting through the party affiliation of the deputies and the party's membership in the ruling coalition. Thus, the author assumes that legislators whose party belongs to the alliance have additional incentives, $\delta>0$ and $\lambda>0$, for voting in favor of government proposals and against opposition proposals, respectively. Analogously to the formulation of Clinton et al. (2004), the additional incentives, a product of the government–opposition conflict, are incorporated into the utility functions
$U_{i}(\psi_{j})=-(\psi_{j}-\beta_{i})^{2}+\delta\cdot\textsf{MGov}_{k[i]}\textsf{PGov}_{j}+\eta_{i,j}$
and
$U_{i}(\zeta_{j})=-(\zeta_{j}-\beta_{i})^{2}+\lambda\cdot\textsf{MGov}_{k[i]}\textsf{POpp}_{j}+\upsilon_{i,j}\,,$
where $U_{i}(\psi_{j})$ and $U_{i}(\zeta_{j})$ are the utilities for legislator $i$ of voting in favor of or against motion $j$, respectively, and $\eta_{i,j}$ and $\upsilon_{i,j}$ are independent and identically distributed random shocks produced by the uncertainty associated with the voting process. Additionally, $k[i]$ denotes the party affiliation of legislator $i$, so that $\textsf{MGov}_{k[i]}$ is a binary indicator that takes the value 1 when legislator $i$ belongs to a party $k$ of the governing coalition and 0 otherwise. Finally, $\textsf{PGov}_{j}$ and $\textsf{POpp}_{j}$ are dummy variables that take the value 1 when proposal $j$ comes from the government or the opposition, respectively, and 0 otherwise.
The incentives $\delta$ and $\lambda$ directly affect the intercept parameter $\mu_{j}$ in the regression model
$\mu_{j}=\gamma_{1}+\gamma_{2}\textsf{PGov}_{j}+\nu_{j}\,,$
where $\gamma_{1}$ represents the reference probability of granting a positive vote when the opposition casts the proposal and $\nu_{j}\stackrel{{\scriptstyle\text{iid}}}{{\sim}}{\textsf{N}}(0,\sigma_{\nu}^{2})$. Thus, $\gamma_{2}>0$ indicates that the government–opposition conflict increases the probability of voting in favor when proposal $j$ is governmental, whereas $\gamma_{2}<0$ indicates that the coalition dynamic does not positively affect the government's direction. The specification includes conventional semiconjugate prior distributions of the form $\sigma_{\nu}^{2}\sim\textsf{GI}\left(a_{0}/2,b_{0}/2\right)$, $\gamma_{1}\sim\textsf{N}(\gamma_{0},\sigma_{\nu}^{2})$, and $\gamma_{2}\sim\textsf{N}(\gamma_{0},\sigma_{\nu}^{2})$, where $a_{0}$, $b_{0}$, and $\gamma_{0}$ are the model hyperparameters.
Additionally, the model incorporates party affiliation through informative priors on the ideal points. The aim of this approach is to evaluate the variation between parties and to analyze the effect of the ruling coalition on the legislative vote on a party basis. Thus, in the same spirit as Zucco and Lauderdale (2011), the ideal point associated with legislator $i$ comes from a distribution specific to his or her political party, centered on the party mean $\mu_{\beta_{k[i]}}$ with variance $\sigma_{\beta}^{2}$, i.e., $\beta_{i}\mid\mu_{\beta_{k[i]}},\sigma^{2}_{\beta}{\sim}{\textsf{N}}(\mu_{\beta_{k[i]}},\sigma_{\beta}^{2})$. Unlike other authors, Tsai (2020) places a hierarchy of priors on the party means, suggesting that
$\mu_{\beta_{1}}\mid\mu_{\beta_{2}}{\sim}\textsf{U}(-3,\mu_{\beta_{2}})$,
$\mu_{\beta_{k}}\mid\mu_{\beta_{k-1}},\mu_{\beta_{k+1}}{\sim}\textsf{U}(\mu_{\beta_{k-1}},\mu_{\beta_{k+1}})$ for $k=2,\ldots,K-1$,
and
$\mu_{\beta_{K}}\mid\mu_{\beta_{K-1}}{\sim}\textsf{U}(\mu_{\beta_{K-1}},3)$.
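These ordered-uniform conditionals can be illustrated with a small Gibbs-style sweep, which redraws each party mean uniformly between its current neighbors and therefore can never break the left-to-right ordering. This is a sketch of the prior mechanics only (the initialization and number of sweeps are arbitrary choices, not Tsai's):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 11                       # e.g., the 11 parties on the scale

# Initialize the party means as an ordered grid inside the (-3, 3) support.
mu = np.linspace(-2.5, 2.5, K)

# Gibbs-style sweeps over the ordered-uniform full conditionals: each mu_k
# is redrawn uniformly between its neighbors, preserving the ordering.
for sweep in range(100):
    mu[0] = rng.uniform(-3.0, mu[1])
    for k in range(1, K - 1):
        mu[k] = rng.uniform(mu[k - 1], mu[k + 1])
    mu[K - 1] = rng.uniform(mu[K - 2], 3.0)

assert np.all(np.diff(mu) > 0)   # the ordering restriction holds after every sweep
print(np.round(mu, 2))
```

The same conditionals, interleaved with the updates of the other model parameters, are what keep the parties ordered throughout the posterior simulation.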
In this way, $\mu_{\beta_{k}}$ is subject to a prior ordering restriction through which the political parties are organized from left to right, taking into account positions that other authors have identified (e.g., Zucco, 2009; Zucco and Lauderdale, 2011). Additionally, given the pre-established ordering, two parties with different ideologies are selected to anchor the extremes of the scale. This specification solves the problem of rotational invariance directly, since it forces the left and right parties to stand on the corresponding sides of the underlying ideological scale. On the other hand, selecting deputies from “anchor parties” with different ideologies but belonging to the same camp (government or opposition, with the selection based on Zucco and Lauderdale, 2011) ensures that the underlying ideological dimension and the government–opposition conflict do not overlap. Lastly, the sign of the discrimination parameter $\alpha_{j}$ reveals the ideological status of proposal $j$. The prior distributions for these parameters are $\alpha_{j}\stackrel{{\scriptstyle\text{ind}}}{{\sim}}{\textsf{N}}(-2,1)\bm{I}(\alpha_{j}<0)$, for $j=1,\ldots,m_{0}$, and $\alpha_{j}\stackrel{{\scriptstyle\text{iid}}}{{\sim}}{\textsf{N}}(0,1)$, for $j=m_{0},\ldots,m$, where $\bm{I}(\cdot)$ denotes the indicator function. The model parameters are standardized in each iteration of the MCMC algorithm to solve scale invariance problems (Tsai, 2020).
The model results for the Chamber of Deputies of Brazil reveal that legislators belonging to the government coalition are more likely to support the executive's proposals regardless of their political ideology. The empirical evidence confirms that parties influence legislative votes. In addition, the estimated ideal points of the 11 parties on the ideological (left–right) scale reveal a partisan ordering similar to previous studies (e.g., Zucco and Lauderdale, 2011).
On the other hand, Jones and Hwang (2005a) implement the standard estimator on roll call data from the Chamber of Deputies of Argentina in 1989–2003. The estimated ideal points help build an indicator of the homogeneity of provincial legislative behavior, used to analyze the influence of governors on the legislative behavior of their regional partners in the lower house. In turn, this metric is taken as the response variable of a linear regression model that evaluates the governor's influence on parliamentary electoral behavior using the characteristics of the governor as covariates (e.g., belonging to the government). The results reveal neither provincial effects on the legislative behavior of deputies nor a significant relationship between the characteristics of the governor and the estimated measure of homogeneity.
Zucco (2013) also uses the estimated ideal points to build an index of legislative performance. This measure is the response variable of a Tobit model (Amemiya, 1984) that explains the influence of political groups or actors on parliamentary voting behavior through covariates such as the average ideological distance of the faction from the president as well as the proportion of roll calls during which the faction held a ministerial position. The pattern revealed by the estimated ideal points indicates that the behavior of legislators from the same faction is alike. In addition, the ideological distance influences the legislative behavior indicator. Finally, legislators whose factions hold ministerial positions behave in a more pro-government manner than their ideology might predict.
Finally, Zucco (2013) also applies the standard one-dimensional estimator to roll call data from the Uruguayan Senate in 1985–2005. A low number of votes (see Table) and a highly institutionalized party system lead to a distribution of ideal points in the political space that does not follow an ideological pattern. Therefore, the assumed dimension cannot be interpreted ideologically, but rather in terms of the executive–legislative relationship.
### 4.6 Strategic abstentions
Research in the North American context (see Rodríguez and Moser, 2015; Rosas et al., 2015) has proposed extending the standard Bayesian ideal point estimator to study the phenomenon of strategic abstention (intentional non-voting, absenteeism from the chamber, failure to record a vote, among others; theoretical explanations are given in Cohen and Noll, 1991; Kromer, 2005; Voeten, 2000). Abstention from voting is a characteristic of the electoral behavior of deliberative bodies that is usually ignored without empirical support (Rodríguez and Moser, 2015; Rosas et al., 2015). The questions of interest associated with this issue include the identity of legislators who frequently engage in strategic abstention, the behavior of the abstention rates of political groups or leaders over time or under specific conditions, and the types of bills for which legislators' locations in the political spectrum are key abstention factors, among others.
The standard Bayesian ideal point estimator typically ignores missing values and assumes that the random process generating the missing data is irrelevant for estimating the ideal points (Rosas et al., 2015). However, abstentions reflect legislators' preferences when faced with the dilemma of not affecting others (e.g., party leaders or voters) during the competition (Rodríguez and Moser, 2015; Rosas and Shomer, 2008). Consequently, Rosas et al. (2015) point out that the decision to model the non-response should be based on theoretical foundations and not on goodness-of-fit measures, since the strategic nature of abstentions is specific to the context of each legislature.
The extensions of the estimator that address the phenomenon of abstention follow two main approaches: first, moving from a binary model to an ordinal model that includes the non-response as an intermediate category between the positive and negative vote (Rodríguez and Moser, 2015), and second, keeping a binary model but considering the joint probability of both a positive vote and vote registration (Rosas et al., 2015). These estimators characterize the mechanism that drives the presence of missing values in voting lists. Moreover, their implementation allows the choices of non-response and voting to be modeled simultaneously, as well as evaluating the impact of factors (e.g., ideology) on the probability of non-response and vice versa (see Rodríguez and Moser, 2015; Rosas et al., 2015).
Rosas and Shomer (2008) present an extension of the canonical ideal point estimator (the standard model assuming completely random abstentions, also called the ignorable abstentions model) to analyze the processes of abstention and voting simultaneously. They illustrate their proposal by analyzing roll call data from the Federal Congress of Argentina in 1993–1995. These data favor the study due to the high rates of abstention and of unrecorded votes (32% for this period).
The one-dimensional bivariate probit model admits two sources of information: (i) the voting choice (1 = Yes, 0 = No) and (ii) the vote registration indicator (1 = no registration, 0 = registration) of the legislators. Then, $y_{1,i,j}^{*}$ and $y_{2,i,j}^{*}$ are linear predictors driving the probability of a positive vote and of abstention for legislator $i$ on motion $j$, respectively. These predictors are $y_{1,i,j}^{*}=\mu_{j}+\alpha_{j}\beta_{i}+\gamma_{j}c_{i}+\epsilon_{1,i,j}$, with $\epsilon_{1,i,j}\stackrel{{\scriptstyle\text{iid}}}{{\sim}}{\textsf{N}}(0,1)$, and $y_{2,i,j}^{*}=\eta_{j}+\delta_{j}c_{i}+\epsilon_{2,i,j}$, with $\epsilon_{2,i,j}\stackrel{{\scriptstyle\text{iid}}}{{\sim}}\textsf{N}(0,1)$. The former expression is the canonical latent model with an additional predictor $c_{i}$ that represents the latent propensity of legislator $i$ to abstain, and $\gamma_{j}$ is the effect of such propensity on the probability of a positive vote on motion $j$. These parameters are incorporated into the model in order to allow the abstention process to be informative about the result of a vote. On the other hand, the latter expression characterizes the mechanism that generates abstention, i.e., the probability that legislator $i$ abstains when considering vote $j$, where $\delta_{j}$ and $\eta_{j}$ are specific parameters associated with voting list $j$.
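The data-generating process implied by this bivariate specification can be sketched as follows: abstentions become missing entries in the vote matrix, and the shared propensity $c_i$ is what links the two equations. All sizes and parameter scales below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 60, 120                       # hypothetical legislators and motions

beta = rng.normal(size=n)            # ideal points
c = rng.normal(size=n)               # latent propensity to abstain
mu, alpha, gamma = (rng.normal(size=m) for _ in range(3))
eta = rng.normal(-1.0, 0.5, size=m)  # abstention intercepts (negative: abstaining is rare)
delta = rng.normal(size=m)

# Latent predictors for voting yes (y1*) and for abstaining (y2*).
y1_star = mu + np.outer(beta, alpha) + np.outer(c, gamma) + rng.normal(size=(n, m))
y2_star = eta + np.outer(c, delta) + rng.normal(size=(n, m))

vote = (y1_star > 0).astype(float)
abstain = y2_star > 0
vote[abstain] = np.nan               # abstentions show up as missing votes

print(f"abstention rate: {np.isnan(vote).mean():.2f}")
```

Setting `gamma` to zero recovers the ignorable case, where dropping the missing entries would not bias the ideal points; nonzero `gamma` is what makes the abstention process informative about the votes.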
Rosas and Shomer (2008) define informative prior distributions for the model parameters under the assumption that the provincial delegations of the parties influence the electoral behavior of the deputies (see Jones and Hwang, 2005b). They presume that the propensities to abstain are grouped within such delegations, and accordingly define a common prior for the members of the same delegation. Although 20 parties had representation in Congress during the period of interest, the results only provide information on the voting and abstention processes of the majority and opposition parties. The proposed model performs adequately compared to the model that assumes ignorable abstentions. However, the number of roll calls available for the analysis is scarce, and the information retrieved through them is insufficient (see Table 1) to generalize the results. Nevertheless, these findings reveal information about the voting lists and bills that exhibit the highest abstention rates and discriminate best between legislators.
### 4.7 Extremes voting together
Very recently, spherical latent factor Bayesian models have emerged for binary and ordinal multivariate data (see Yu and Rodríguez, 2021; Yu and Rodriguez, 2019; Yu, 2020). Although this kind of model is similar to the Bayesian ideal point estimator, it is not an extension: these models constitute a more general class, and the standard estimator is a limiting case of the new family of models (Yu and Rodriguez, 2019).
Yu and Rodríguez (2021) mention that most spatial voting models assume ideal points embedded within a (possibly multidimensional) Euclidean political space. However, there are situations where Euclidean geometry does not explain unusual voting behavior, for example, the phenomenon in which members from opposite ends of the ideological spectrum reveal similar preferences by voting against the rest of the legislature. In this case, neither increasing the dimensionality of the latent space nor performing linear transformations allows the classical estimator to characterize the phenomenon of “extremes voting together” adequately. As a result, the standard ideal point estimator does not perform well in this case, since it portrays the extreme deputies as conservative individuals. It is instead a better fit when nominal vote data come from political systems where parties are relatively unified (Yu and Rodríguez, 2021). This new class of models therefore explores unconventional voting patterns by evaluating the geometry that underlies the political space.
Bayesian spatial voting models based on spherical geometries differ from the
classical approach in their specification and implementation (see Yu,, 2020).
In particular, the model parameters, $\bm{\beta}_{i}$, $\bm{\psi}_{j}$ and
$\bm{\zeta}_{j}$, are defined on a Riemann manifold $\mathcal{D}$ (see Lee,,
2018), and the Euclidean distance is replaced by the geodesic distance $\rho$
in $\mathcal{D}$. The distance between two points is defined on a unit
$(d+1)$-dimensional hypersphere. This distance corresponds to the smallest
angle formed by the projections of the points through the origin. The case
$d=1$ leads to a spatial voting model in a circular space with utility
functions given by
$U_{i}(\psi_{j})=-\rho{(\bm{\psi}_{j},\bm{\beta}_{i})}^{2}+\eta_{i,j}\qquad\text{and}\qquad
U_{i}(\zeta_{j})=-\rho{(\bm{\zeta}_{j},\bm{\beta}_{i})}^{2}+\upsilon_{i,j}\,,$
where $\rho(a,b)=\arccos{(\cos{(a-b)})}\in[0,\pi]$ is the geodesic distance
giving the smallest angle between $a$ and $b$. Thus, $\beta_{i}$,
$\psi_{j}$, and $\zeta_{j}$ $\in[-\pi,\pi]$ are interpreted as angular
positions on the unit circle. This specification of the utility function
leads to substantial changes in the choice of the link function, the prior
distributions of the model parameters, the identification and interpretation
of the model, and its computational implementation (see details in Yu and
Rodriguez, 2019; Yu, 2020; Yu and Rodríguez, 2021).
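The geodesic distance and circular utilities above are straightforward to compute; the following is a minimal sketch (function names are ours, not from the cited papers):

```python
import numpy as np

def geodesic(a, b):
    """rho(a, b) = arccos(cos(a - b)) in [0, pi]: the smallest angle
    between angular positions a and b on the unit circle."""
    return np.arccos(np.cos(a - b))

def utility(position, beta, noise=0.0):
    """Quadratic circular utility U_i = -rho(position, beta)^2 + noise."""
    return -geodesic(position, beta) ** 2 + noise

# Two 'extreme' legislators at opposite ends of the real line are
# close on the circle: pi - 0.1 and -pi + 0.1 are only 0.2 apart.
d = geodesic(np.pi - 0.1, -np.pi + 0.1)
```

This is exactly what lets the circular model place legislators who exhibit "extremes voting together" near each other, whereas a Euclidean model must keep them far apart.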
The circular version of this family of models was applied to 1988–2019 United
States House of Representatives roll call data (see Yu and Rodríguez, 2021).
The results reveal that the circular voting model explains modern voting
patterns better than traditional Euclidean models. Moreover, the circular
variance, as a measure of sphericity (i.e., of extreme voting behavior), helps
analyze the temporal evolution of this voting phenomenon.
Spherical Bayesian spatial voting models have yet to be widely applied. To the
best of our knowledge, there is still a need for applications in the context
of Latin American legislatures. Existing implementations are novel and provide
empirical evidence of a recent phenomenon. Furthermore, these models suggest
unexplored methodological paths. For example, no work has yet been reported in
the literature illustrating how to incorporate and interpret strategic
abstentions in this model; the applications discussed so far assume that
missing data result from mere chance. There is also an open path toward
suitable priors on high-dimensional spheres (Yu and Rodríguez, 2021).
## 5 Illustration
To fix ideas and provide a new case study, we focus on the Colombian House of
Representatives 2010–2014. This chamber’s plenary roll call votes have not
previously been studied using the ideas under review, so we take advantage of
the novelty of this data set to broaden the discussion on the nature of the
dimension of the political space of the Colombian parliament, particularly the
lower house. The data set comprises 626 roll calls deliberated by 181
deputies. We omitted 31 legislators who participated in less than $95\%$ of
the votes, which does not eliminate any political group. In addition, we
removed 66 unanimous roll calls, since these carry no information about the
latent traits that underlie the political decisions of parliamentarians
(Jackman, 2001; Luque and Sosa, 2022).
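The two filtering steps just described can be sketched as follows (a schematic, assuming a legislators-by-roll-calls matrix with 1 = yea, 0 = nay, NaN = missing; the function name and threshold default are illustrative):

```python
import numpy as np

def preprocess(Y, min_participation=0.95):
    """Filter a legislators-by-roll-calls vote matrix: drop legislators
    observed in fewer than min_participation of the roll calls, then drop
    unanimous roll calls (they carry no information on latent traits)."""
    observed = ~np.isnan(Y)
    Y = Y[observed.mean(axis=1) >= min_participation, :]

    def unanimous(col):
        # A roll call is unanimous when all observed votes agree.
        v = col[~np.isnan(col)]
        return v.size > 0 and bool((v == v[0]).all())

    keep = [j for j in range(Y.shape[1]) if not unanimous(Y[:, j])]
    return Y[:, keep]
```

Applied to the raw 181-by-626 matrix with these settings, such a step would leave the 150-by-560 matrix analyzed below.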
Four political groups make up the House of Representatives 2010–2014: the
government coalition, independents, minorities, and the opposition. Four of
the fourteen political parties in the legislature constitute the coalition:
Conservador Colombiano (CC, $21.33\%$), Liberal Colombiano (LC, $22.67\%$),
Cambio Radical (CR, $10\%$), and Social de Unidad Nacional (PU, $28.67\%$).
The parties openly declared as independent are Alianza Verde (PAV, $2\%$),
Integración Nacional (PIN, $6.67\%$), and MIRA ($0.67\%$). Five of the six
minority parties have one seat each in the chamber ($0.67\%$ representation
apiece): Alas Equipo Colombia (ALAS), Alianza Social Indígena (ASI),
Movimiento Integración Regional (MIR), Movimiento Popular Unido (MPU), and
AfroVives (VIVES). The sixth minority is the Movimiento de Apertura Liberal
(MOAL, $1.34\%$), with two seats. The Polo Democrático Alternativo (PDA,
$3.3\%$) is the only opposition party.
We implement the one-dimensional Bayesian ideal point estimator following the
technical details given by Luque and Sosa (2022). The model provides
inferences about the ideal points of 148 parliamentarians (two legislators
serve as anchors for model identification) and about the discrimination and
approval parameters of the 560 roll calls; i.e., we estimate 1268 parameters
in total. We discard missing data, as they have no substantial impact on our
findings (see Luque and Sosa, 2022). In future methodological research, roll
call votes of this deliberative body could help illustrate patterns of
parliamentary conduct in the region when abstention is frequent (the data set
has $44\%$ missing entries, of which $39\%$ correspond to abstentions and the
remainder to absences).
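For concreteness, the likelihood underlying such an estimator can be sketched as below. This is a minimal illustration assuming the common probit formulation $\Pr(y_{ij}=1)=\Phi(\mu_j + \alpha_j\beta_i)$; the exact parameterization used by Luque and Sosa (2022) may differ.

```python
import numpy as np
from math import erf, sqrt, log

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def log_likelihood(Y, beta, alpha, mu):
    """Log-likelihood of a one-dimensional probit ideal point model,
    Pr(y_ij = 1) = Phi(mu_j + alpha_j * beta_i). NaN votes are skipped,
    i.e., missingness is treated as ignorable, as assumed in the text."""
    ll = 0.0
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            y = Y[i, j]
            if np.isnan(y):
                continue
            p = norm_cdf(mu[j] + alpha[j] * beta[i])
            ll += log(p) if y == 1 else log(1.0 - p)
    return ll
```

A roll call $j$ with $\alpha_j$ near zero contributes an almost flat term in $\beta_i$, which is why non-discriminating motions carry little information about ideal points.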
Figure 1: Ideal point estimates along with their 95% credible bands.
As shown in Figure 1, the ideal point estimates $\beta_{i}$ vary between
$-1.47$ and $2.22$. For all opposition members, the posterior mean of the
ideal point is negative and significantly different from zero (i.e., the
corresponding credible interval does not contain zero). For the members of the
governing coalition, the posterior mean is positive, and $99\%$ of the
deputies in this group have statistically significant ideal points. The ideal
points of the minority and independent parties, by contrast, are more
dispersed throughout the political space, while all remain significantly
different from zero. The distribution of ideal points in the political space
of the lower house is similar to that of the upper house for the same period
(see Luque and Sosa, 2022).
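The significance criterion used here, a 95% equal-tailed credible interval excluding zero, is simple to compute from posterior draws; a minimal sketch with mock draws:

```python
import numpy as np

def excludes_zero(samples, level=0.95):
    """True when the equal-tailed credible interval at the given level
    does not contain zero, i.e., the parameter is 'significant'."""
    tail = (1.0 - level) / 2.0
    lo, hi = np.quantile(samples, [tail, 1.0 - tail])
    return bool(lo > 0.0 or hi < 0.0)

rng = np.random.default_rng(42)
clearly_positive = rng.normal(2.0, 0.1, size=5000)  # mock posterior draws
straddles_zero = rng.normal(0.0, 1.0, size=5000)
```

In practice the draws would come from the MCMC output for each $\beta_i$ (and each $\alpha_j$ in the next paragraph) rather than from a mock sampler.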
The distribution of the ideal points indicates that the latent feature
underlying the nominal voting of the lower house is not ideological
(left–right), because historically antagonistic parties (LC and CC; see
Mazzuca and Robinson, 2009) fall on the same side of the political spectrum.
Our findings instead point to a latent non-ideological trait that divides the
spectrum into opposition versus non-opposition, since some political groups do
not openly declare themselves as part of or against the government (e.g.,
independents and minorities). The recovered dimension thus reflects power
control between hegemonic parties within the legislative body, with some
minority and independent members taking a moderate (centrist) position.
On the other hand, we identified that 447 of the 560 discrimination
parameters, $\alpha_{j}$, are significantly different from zero. Thus,
$79.8\%$ of the motions discriminate between legislators across the political
spectrum. Roll calls that discriminate among deputies are mostly related to
security, defense, and public forces ($23.04\%$), social security and health
($19.91\%$), and justice ($20.36\%$). We therefore argue that power control
within the chamber revolves around these issues. Further, we determined that
$85.46\%$ of the motions discriminating along this political dimension stem
from a government initiative, and that $95.08\%$ of them were debated during
the first three legislative years.
Furthermore, we note that motions related to the environment, mines and
energy, and recreation and sports do not discriminate along the proposed
dimension. The 113 motions that do not discriminate along the recovered policy
continuum possibly carry information about a second dimension of the political
space. Finally, we examine the posterior predictive distribution of some test
statistics. In none of these cases is the posterior predictive $p$-value
extreme (less than 0.05 or greater than 0.95). Therefore, the proposed
one-dimensional model provides a good fit to the 2010–2014 Colombian House of
Representatives nominal vote data set.
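The posterior predictive check reported above can be sketched as follows (schematic only; the actual test statistics used in the analysis are not reproduced here):

```python
import numpy as np

def ppp_value(stat_obs, stat_rep):
    """Posterior predictive p-value: the share of replicated test
    statistics at least as large as the observed one. Values below
    0.05 or above 0.95 flag model misfit."""
    return float(np.mean(np.asarray(stat_rep) >= stat_obs))

# Replicated statistics scattered around the observed value yield a
# moderate p-value, i.e., no evidence of misfit.
rng = np.random.default_rng(7)
p = ppp_value(10.0, rng.normal(10.0, 2.0, size=4000))
```

In the real check, each replicated statistic is computed from a data set simulated under one posterior draw of the model parameters.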
## 6 Discussion
The taxonomy proposed in this document specifies substantive themes that
inspire research on legislative electoral behavior in different contexts,
particularly in Latin America. Future research should identify additional
lines, phenomena, or substantive hypotheses amenable to analysis through the
methodologies under review, allowing social scientists to extend our
categorization.
We identify that four of the seven lines present empirical evidence of
legislative electoral behavior in Latin America via the Bayesian ideal point
estimator. Thus, for some countries of the region, there are available studies
about the dimension of the political space, changes in the preferences of
political actors, the influence of national political leaders or groups, and
strategic abstentions. However, to the best of our knowledge, no research in
Latin America uses the Bayesian ideal point estimator to study issues related
to voting restricted to the nature of the agenda or the case of extremes
voting together. In addition, the available literature points only to the
parliaments of Argentina (Jones and Hwang, 2005a; Rosas and Shomer, 2008),
Brazil (McDonnell, 2017; Tsai, 2020; Zucco and Lauderdale, 2011), Colombia
(Luque and Sosa, 2022), Paraguay (Ribeiro et al., 2021), and Uruguay (Zucco,
2013) as case studies. In this sense, Latin American parliaments within
democratic presidential systems emerge as novel scenarios in which to
operationalize the electoral behavior of legislative bodies through roll call
data under a Bayesian approach.
Quantitative studies of parliamentary electoral behavior in Latin America
based on the application, adaptation, and generalization of the Bayesian ideal
point estimator are emerging as a promising research field. In this regard,
several aspects are of interest within the framework of the proposed taxonomy.
For example, although there are studies on the dimension of the political
space, it is still necessary to delve into the dynamic and collective nature
of the dimensions proposed for the different parliaments (e.g., Moser et al.,
2021). Furthermore, the nature of higher dimensions in the region’s
legislatures is also an object of study; this requires specifying the
methodological conditions for proposing multidimensional spaces in the
legislative chambers of Latin America (see Jackman, 2001). In addition, future
research can evaluate the variation in the dimension of the political space
when votes are segregated by issue, initiative, or other kinds of
categorization (e.g., Lofland et al., 2017).
Regarding the identification of pivotal legislators, future research should
provide empirical evidence on the deputies who are essential for the approval
of new political alternatives or who hinder voting in Latin American
parliaments (e.g., Clinton et al., 2004). Furthermore, research framed in
voting restricted to the nature of the agenda offers an opportunity to
investigate the importance of the parameters of voting alternatives when
studying legislative electoral conduct. The latter would allow analysts to
identify political scenarios under which there is greater legislative activity
or a higher probability of changing the status quo, and to identify possible
political strategies to shape the legislative agenda in the deliberative
bodies of the region. From a technical point of view, more evidence is also
needed regarding the consistency of the constrained and unconstrained
estimators (see Clinton and Meirowitz, 2001).
In the United States, the effect of party switching on legislators’
preferences has been extensively studied (see Clinton et al., 2004), but this
direction has yet to be explored in Latin America. Additionally, given the
characteristics of Latin American parliaments, it is quite important to
investigate variation in the preferences of members of Congress, for example
when political groups change (e.g., parties entering or leaving the ruling
coalition) or leadership within parliaments changes (e.g., a new president of
a legislative chamber). Moreover, studies on the evolution of legislators’
preferences admit contrasts between legislative settings (e.g., plenary and
commissions) to uncover changes in the latent features that underlie the vote
of deputies. Interested researchers could also evaluate the potential of the
estimator in its dynamic version to provide empirical evidence on the impact
of party fragmentation and the proliferation of political groups on
legislative electoral behavior in the region’s countries.
Concerning the influence of national political leaders or groups, future
research should focus on incorporating multiple incentives (e.g., party and
coalition) into the estimator in the Latin American case. Moreover, we have
yet to see applications of the estimator that incorporate external factors
(e.g., media, opinions of the electorate, foreign political groups, or
electoral processes; see Ribeiro et al., 2021; Fouirnaies and Hall, 2022) to
examine other determinants of parliamentarians’ voting decisions. On the other
hand, the line of strategic abstentions offers an avenue for investigating the
dynamics of non-participation in the votes of deputies in the legislative
institutions of Latin America (e.g., Rosas et al., 2015).
Finally, the line of extremes voting together is a novel methodological
alternative for evaluating different technical aspects (e.g., the geometry
underlying the political space in Latin American legislatures; see Yu and
Rodriguez, 2019). Again, to the best of our knowledge, research in Latin
America has yet to explore spherical latent factor voting models for binary
and ordinal data. In this document, we do not describe non-parametric
implementations of the estimator (see Tahk, 2018; Shiraito et al., 2022);
such approaches will be discussed elsewhere.
## References
* Ainsley et al., (2020) Ainsley, C., Carrubba, C. J., Crisp, B. F., Demirkaya, B., Gabel, M. J., and Hadzic, D. (2020). Roll-call vote selection: Implications for the study of legislative politics. American Political Science Review, 114(3):691–706.
* Albert and Chib, (1993) Albert, J. H. and Chib, S. (1993). Bayesian analysis of binary and polychotomous response data. Journal of the American statistical Association, 88(422):669–679.
* Aldrich et al., (2014) Aldrich, J. H., Montgomery, J. M., and Sparks, D. B. (2014). Polarization and ideology: Partisan sources of low dimensionality in scaled roll call analyses. Political Analysis, pages 435–456.
* Alemán, (2008) Alemán, E. (2008). Policy positions in the chilean senate: An analysis of coauthorship and roll call data. Brazilian Political Science Review (Online), 3(SE):0–0.
* Alemán et al., (2009) Alemán, E., Calvo, E., Jones, M. P., and Kaplan, N. (2009). Comparing cosponsorship and roll-call ideal points. Legislative Studies Quarterly, 34(1):87–116.
* Alemán et al., (2018) Alemán, E., Micozzi, J. P., Pinto, P. M., and Saiegh, S. (2018). Disentangling the role of ideology and partisanship in legislative voting: evidence from argentina. Legislative Studies Quarterly, 43(2):245–273.
* Amemiya, (1984) Amemiya, T. (1984). Tobit models: A survey. Journal of econometrics, 24(1-2):3–61.
* Ames, (2002) Ames, B. (2002). Party discipline in the chamber of deputies. Legislative Politics in Latin America, pages 185–221.
* Arrow, (1990) Arrow, K. (1990). Advances in the spatial theory of voting. Cambridge University Press.
* Asmussen and Jo, (2016) Asmussen, N. and Jo, J. (2016). Anchors away: a new approach for estimating ideal points comparable across time and chambers. Political Analysis, pages 172–188.
* Aumann, (1964) Aumann, R. (1964). The bargaining set for cooperative games. Advances in Game Theory, pages 443–476.
* Bailey et al., (2017) Bailey, M. A., Strezhnev, A., and Voeten, E. (2017). Estimating dynamic state preferences from united nations voting data. Journal of Conflict Resolution, 61(2):430–456.
* Benoit and Laver, (2012) Benoit, K. and Laver, M. (2012). The dimensionality of political space: Epistemological and methodological considerations. European Union Politics, 13(2):194–218.
* Bernardo et al., (2003) Bernardo, J., Bayarri, M., Berger, J., Dawid, A., Heckerman, D., Smith, A., and West, M. (2003). Bayesian factor regression models in the “large p, small n” paradigm. Bayesian statistics, 7:733–742.
* Binding and Stoetzer, (2022) Binding, G. and Stoetzer, L. F. (2022). Non-separable preferences in the statistical analysis of roll call votes. Political Analysis, pages 1–14.
* Black et al., (1958) Black, D. et al. (1958). The theory of committees and elections. Springer.
* Brazill and Grofman, (2002) Brazill, T. J. and Grofman, B. (2002). Factor analysis versus multi-dimensional scaling: binary choice roll-call voting and the us supreme court. Social Networks, 24(3):201–229.
* Cahoon et al., (1978) Cahoon, L., Hinich, M. J., and Ordeshook, P. C. (1978). A statistical multidimensional scaling method based on the spatial theory of voting. In Graphical representation of multivariate data, pages 243–278. Elsevier.
* Carey, (1998) Carey, J. M. (1998). Parties, Coalitions, and the Chilean Congress in the 1990s. Latin American Studies Association.
* Carlin and Louis, (2008) Carlin, B. P. and Louis, T. A. (2008). Bayesian methods for data analysis. CRC Press.
* (21) Carroll, R., Lewis, J. B., Lo, J., Poole, K. T., and Rosenthal, H. (2009a). Comparing nominate and ideal: Points of difference and monte carlo tests. Legislative Studies Quarterly, 34(4):555–591.
* (22) Carroll, R., Lewis, J. B., Lo, J., Poole, K. T., and Rosenthal, H. (2009b). Measuring bias and uncertainty in dw-nominate ideal point estimates via the parametric bootstrap. Political analysis, pages 261–275.
* Carroll et al., (2013) Carroll, R., Lewis, J. B., Lo, J., Poole, K. T., and Rosenthal, H. (2013). The structure of utility in spatial models of voting. American Journal of Political Science, 57(4):1008–1028.
* Carroll and Pachón, (2016) Carroll, R. and Pachón, M. (2016). The unrealized potential of presidential coalitions in Colombia. Legislative Institutions and Lawmaking in Latin America, pages 122–147.
* Carvalho et al., (2008) Carvalho, C. M., Chang, J., Lucas, J. E., Nevins, J. R., Wang, Q., and West, M. (2008). High-dimensional sparse factor modeling: applications in gene expression genomics. Journal of the American Statistical Association, 103(484):1438–1456.
* Castillo et al., (2015) Castillo, I., Schmidt-Hieber, J., Van der Vaart, A., et al. (2015). Bayesian linear regression with sparse priors. Annals of Statistics, 43(5):1986–2018.
* Caughey and Schickler, (2016) Caughey, D. and Schickler, E. (2016). Substance and change in congressional ideology: Nominate and its alternatives. Studies in American Political Development, 30(2):128–146.
* Cheibub et al., (2009) Cheibub, J. A., Figueiredo, A., and Limongi, F. (2009). Political parties and governors as determinants of legislative behavior in brazil’s chamber of deputies, 1988–2006. Latin American Politics and Society, 51(1):1–30.
* Chib and Greenberg, (1998) Chib, S. and Greenberg, E. (1998). Analysis of multivariate probit models. Biometrika, 85(2):347–361.
* Clerici, (2021) Clerici, P. (2021). Legislative territorialization: The impact of a decentralized party system on individual legislative behavior in argentina. Publius: The Journal of Federalism, 51(1):104–130.
* Clinton et al., (2004) Clinton, J., Jackman, S., and Rivers, D. (2004). The statistical analysis of roll call data. American Political Science Review, pages 355–370.
* Clinton, (2012) Clinton, J. D. (2012). Using roll call estimates to test models of politics. Annual Review of Political Science, 15:79–99.
* Clinton and Jackman, (2009) Clinton, J. D. and Jackman, S. (2009). To simulate or nominate? Legislative Studies Quarterly, 34(4):593–621.
* Clinton and Meirowitz, (2001) Clinton, J. D. and Meirowitz, A. (2001). Agenda constrained legislator ideal points and the spatial voting model. Political Analysis, pages 242–259.
* Cohen and Noll, (1991) Cohen, L. R. and Noll, R. G. (1991). How to vote, whether to vote: Strategies for voting and abstaining on congressional roll calls. Political Behavior, 13(2):97–127.
* Coughlin and Nitzan, (1981) Coughlin, P. and Nitzan, S. (1981). Electoral outcomes with probabilistic voting and nash social welfare maxima. Journal of Public Economics, 15(1):113–121.
* Davis and Hinich, (1965) Davis, O. A. and Hinich, M. J. (1965). A mathematical model of policy formation in a democratic society. Graduate School of Industrial Administration, Carnegie Institute of Technology.
* Davis et al., (1970) Davis, O. A., Hinich, M. J., and Ordeshook, P. C. (1970). An expository development of a mathematical model of the electoral process. The American Political Science Review, 64(2):426–448.
* De Leeuw, (2006) De Leeuw, J. (2006). Principal component analysis of binary data by iterated singular value decomposition. Computational statistics & data analysis, 50(1):21–39.
* De Vries and Marks, (2012) De Vries, C. E. and Marks, G. (2012). The struggle over dimensionality: A note on theory and empirics. European Union Politics, 13(2):185–193.
* Dougherty et al., (2014) Dougherty, K. L., Lynch, M. S., and Madonna, A. J. (2014). Partisan agenda control and the dimensionality of congress. American Politics Research, 42(4):600–627.
* Downs, (1957) Downs, A. (1957). An economic theory of political action in a democracy. Journal of political economy, 65(2):135–150.
* Enelow and Hinich, (1984) Enelow, J. M. and Hinich, M. J. (1984). The spatial theory of voting: An introduction. CUP Archive.
* Figueiredo and Limongi, (2000) Figueiredo, A. C. and Limongi, F. (2000). Presidential power, legislative organization, and party behavior in brazil. Comparative Politics, pages 151–170.
* Fouirnaies and Hall, (2022) Fouirnaies, A. and Hall, A. B. (2022). How do electoral incentives affect legislator behavior? evidence from us state legislatures. American Political Science Review, 116(2):662–676.
* Gamerman and Lopes, (2006) Gamerman, D. and Lopes, H. (2006). Markov chain Monte Carlo: stochastic simulation for Bayesian inference. CRC Press.
* Gamm and Huber, (2002) Gamm, G. and Huber, J. (2002). Legislatures as political institutions: Beyond the contemporary congress. In Katznelson, I. and Milner, H. V., editors, Political science: State of the discipline, pages 313–341. New York: W.W. Norton.
* Grier et al., (2022) Grier, K., Grier, R., and Mkrtchian, G. (2022). Campaign contributions and roll-call voting in the us house of representatives: The case of the sugar industry. American Political Science Review, pages 1–7.
* Hahn and Soyer, (2005) Hahn, E. D. and Soyer, R. (2005). Probit and logit models: Differences in the multivariate realm. The Journal of the Royal Statistical Society, Series B, pages 1–12.
* Hahn et al., (2012) Hahn, R. P., Carvalho, C. M., and Scott, J. G. (2012). A sparse factor analytic probit model for congressional voting patterns. Journal of the Royal Statistical Society: Series C (Applied Statistics), 61(4):619–635.
* Hansford et al., (2022) Hansford, T. G., Depaoli, S., and Canelo, K. S. (2022). Estimating the ideal points of organized interests in legal policy space. Justice System Journal, pages 1–12.
* Hinich and Munger, (1996) Hinich, M. J. and Munger, M. C. (1996). Ideology and the theory of political choice. University of Michigan Press.
* Hinich et al., (1997) Hinich, M. J., Munger, M. C., et al. (1997). Analytical politics. Cambridge university press.
* Hinich and Ordeshook, (1970) Hinich, M. J. and Ordeshook, P. C. (1970). Plurality maximization vs vote maximization: A spatial analysis with variable participation. The American Political Science Review, 64(3):772–791.
* Hix et al., (2005) Hix, S., Noury, A., and Roland, G. (2005). Power to the parties: cohesion and competition in the european parliament 1979-2001. British Journal of Political Science, pages 209–234.
* Jackman, (2001) Jackman, S. (2001). Multidimensional analysis of roll call data via bayesian simulation: Identification, estimation, inference, and model checking. Political Analysis, 9(3):227–241.
* Jackman, (2004) Jackman, S. (2004). Bayesian analysis for political research. Annu. Rev. Polit. Sci., 7:483–505.
* Jackman, (2009) Jackman, S. (2009). Bayesian analysis for the social sciences, volume 846. John Wiley & Sons.
* (59) Jones, M. P. and Hwang, W. (2005a). Party government in presidential democracies: Extending cartel theory beyond the us congress. American Journal of Political Science, 49(2):267–282.
* (60) Jones, M. P. and Hwang, W. (2005b). Provincial party bosses: Keystone of the Argentine Congress, pages 115–138. Pennsylvania State University Press University Park.
* Jones et al., (2009) Jones, M. P., Hwang, W., and Micozzi, J. P. (2009). Government and opposition in the argentine congress, 1989-2007: Understanding inter-party dynamics through roll call vote analysis. Journal of Politics in Latin America, 1(1):67–96.
* Kass and Wasserman, (1996) Kass, R. E. and Wasserman, L. (1996). The selection of prior distributions by formal rules. Journal of the American statistical Association, 91(435):1343–1370.
* Kim et al., (2018) Kim, I. S., Londregan, J., and Ratkovic, M. (2018). Estimating spatial preferences from votes and text. Political Analysis, 26(2):210–229.
* Krehbiel, (1988) Krehbiel, K. (1988). Spatial models of legislative choice. Legislative Studies Quarterly, pages 259–319.
* Krehbiel, (1998) Krehbiel, K. (1998). Pivotal politics: A theory of US lawmaking. University of Chicago Press.
* Kromer, (2005) Kromer, M. K. (2005). Determinants of abstention in the united states house of representatives: an analysis of the 102nd through the 107th sessions. Master’s thesis, Louisiana State University, Baton Rouge, LA.
* Lauderdale and Clark, (2014) Lauderdale, B. E. and Clark, T. S. (2014). Scaling politically meaningful dimensions using texts and votes. American Journal of Political Science, 58(3):754–771.
* Lee, (2018) Lee, J. M. (2018). Introduction to Riemannian manifolds. Springer.
* Lewis and Poole, (2004) Lewis, J. B. and Poole, K. T. (2004). Measuring bias and uncertainty in ideal point estimates via the parametric bootstrap. Political Analysis, pages 105–127.
* Lofland et al., (2017) Lofland, C. L., Rodríguez, A., Moser, S., et al. (2017). Assessing differences in legislators’ revealed preferences: A case study on the 107th us senate. The Annals of Applied Statistics, 11(1):456–479.
* Luque, (2021) Luque, C. (2021). Métodos bayesianos para caracterizar el comportamiento legislativo del senado colombiano en el periodo 2010-2014. Master’s thesis, Universidad Santo Tomás.
* Luque and Sosa, (2022) Luque, C. and Sosa, J. (2022). A bayesian spatial voting model to characterize the legislative behavior of the colombian senate 2010–2014. Journal of Applied Statistics, pages 1–22.
* MacRae, (1952) MacRae, D. (1952). The relation between roll call votes and constituencies in the massachusetts house of representatives. American Political Science Review, 46(4):1046–1055.
* MacRae, (1958) MacRae, D. (1958). Dimensions of congressional voting: A statistical study of the house of representatives in the eighty-first congress. The Journal of Politics, 1(3).
* MacRae, (1965) MacRae, D. (1965). A method for identifying issues and factions from legislative votes. The American Political science Review, 59(4):909–926.
* Manski, (1977) Manski, C. F. (1977). The structure of random utility models. Theory and decision, 8(3):229.
* Martin and Quinn, (2002) Martin, A. D. and Quinn, K. M. (2002). Dynamic ideal point estimation via markov chain monte carlo for the us supreme court, 1953–1999. Political analysis, 10(2):134–153.
* Mayhew, (1974) Mayhew, D. R. (1974). Congress: The electoral connection. Yale university press.
* Mazzuca and Robinson, (2009) Mazzuca, S. and Robinson, J. A. (2009). Political conflict and power sharing in the origins of modern colombia. Hispanic American Historical Review, 89(2):285–321.
* McCarty et al., (2001) McCarty, N., Poole, K. T., and Rosenthal, H. (2001). The hunt for party discipline in congress. American Political Science Review, pages 673–687.
* McCullagh, (2018) McCullagh, P. (2018). Generalized linear models. Routledge.
* McDonnell, (2017) McDonnell, R. M. (2017). Formal comparisons of legislative institutions: Ideal points from brazilian legislatures. Brazilian Political Science Review, 11(1).
* McDonnell et al., (2019) McDonnell, R. M., Duarte, G. J., and Freire, D. (2019). congressbr: An r package for analyzing data from brazil’s chamber of deputies and federal senate. Latin American Research Review, 54(4).
* McFadden, (1976) McFadden, D. L. (1976). Quantal choice analaysis: A survey. In Annals of Economic and Social Measurement, Volume 5, number 4, pages 363–390. NBER.
* McKelvey et al., (1978) McKelvey, R. D., Ordeshook, P. C., and Winer, M. D. (1978). The competitive solution for n-person games without transferable utility, with an application to committee games. American Political Science Review, 72(2):599–615.
* Miller, (1956) Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological review, 63(2):81.
* Morales, (2021) Morales, J. S. (2021). Legislating during war: Conflict and politics in colombia. Journal of Public Economics, 193:104325.
* Morgenstern, (2003) Morgenstern, S. (2003). Patterns of legislative politics: roll-call voting in Latin America and the United States. Cambridge University Press.
* Moser et al., (2021) Moser, S., Rodríguez, A., and Lofland, C. L. (2021). Multiple ideal points: Revealed preferences in different domains. Political Analysis, 29(2):139–166.
* Murray et al., (2013) Murray, J. S., Dunson, D. B., Carin, L., and Lucas, J. E. (2013). Bayesian gaussian copula factor models for mixed data. Journal of the American Statistical Association, 108(502):656–665.
* Neto, (2002) Neto, O. A. (2002). Presidential cabinets, electoral cycles, and coalition discipline in brazil. Legislative Politics in Latin America, pages 48–78.
* Onuki et al., (2009) Onuki, J., Ribeiro, P. F., and Oliveira, A. J. d. (2009). Political parties, foreign policy and ideology: Argentina and chile in comparative perspective. Brazilian Political Science Review (Online), 4(SE):0–0.
* Ormerod and Wand, (2010) Ormerod, J. T. and Wand, M. P. (2010). Explaining variational approximations. The American Statistician, 64(2):140–153.
* Pachón and Johnson, (2016) Pachón, M. and Johnson, G. B. (2016). When’s the party (or coalition)? agenda-setting in a highly fragmented, decentralized legislature. Journal of Politics in Latin America, 8(2):71–100.
# Brunn-Minkowski inequality for $\theta$-convolution bodies via Ball’s bodies
David Alonso-Gutiérrez, Área de análisis matemático, Departamento de
matemáticas, Facultad de Ciencias, Universidad de Zaragoza, Pedro Cerbuna 12,
50009 Zaragoza (Spain), IUMA,<EMAIL_ADDRESS>and Javier Martín Goñi, Área
de análisis matemático, Departamento de matemáticas, Facultad de Ciencias,
Universidad de Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain), IUMA, and
Faculty of Computer Science and Mathematics, University of Passau, Innstrasse
33, 94032 Passau, Germany,<EMAIL_ADDRESS>
###### Abstract.
We consider the problem of finding the best function
$\varphi_{n}:[0,1]\to\mathbb{R}$ such that for any pair of convex bodies
$K,L\subseteq\mathbb{R}^{n}$ the following Brunn-Minkowski type inequality holds
$|K+_{\theta}L|^{\frac{1}{n}}\geq\varphi_{n}(\theta)(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}),$
where $K+_{\theta}L$ is the $\theta$-convolution body of $K$ and $L$. We prove
a sharp inclusion of the family of Ball’s bodies of an $\alpha$-concave
function in its super-level sets in order to provide the best possible
function in the range $\left(\frac{3}{4}\right)^{n}\leq\theta\leq 1$,
characterizing the equality cases.
The first named author is partially supported by MICINN Project
PID-105979-GB-I00 and DGA Project E48_20R. The second named author is
supported by DGA project E48_20R and the Austrian Science Fund (FWF) Project
P32405 Asymptotic Geometric Analysis and Applications.
## 1\. Introduction
It is well known that for any pair of convex bodies (i.e., compact convex sets
with non-empty interior) $K,L\subseteq\mathbb{R}^{n}$, their Minkowski sum
$K+L$, defined as
$K+L:=\\{x+y\,:x\in K,y\in
L\\}=\\{z\in\mathbb{R}^{n}\,:\,K\cap(z-L)\neq\emptyset\\},$
is a convex body whose volume (or $n$-dimensional Lebesgue measure) $|\cdot|$
verifies, by Brunn-Minkowski inequality (see [Sch, Theorem 7.1.1]), that
(1.1) $|K+L|^{\frac{1}{n}}\geq|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}.$
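As a quick illustration (not part of the argument), inequality (1.1) can be verified exactly for axis-parallel boxes, for which the Minkowski sum simply adds the side lengths. The following Python sketch, with a hypothetical helper `box_bm`, computes both sides in closed form:

```python
import math

def box_bm(k_sides, l_sides):
    # For axis-parallel boxes the Minkowski sum adds side lengths,
    # so both sides of (1.1) are computable in closed form.
    n = len(k_sides)
    lhs = math.prod(a + b for a, b in zip(k_sides, l_sides)) ** (1 / n)
    rhs = math.prod(k_sides) ** (1 / n) + math.prod(l_sides) ** (1 / n)
    return lhs, rhs

print(box_bm([1, 2], [3, 1]))  # strict inequality: non-homothetic boxes
print(box_bm([1, 1], [2, 2]))  # equality: L is a dilate of K
```

The second call reflects the classical equality case of Brunn-Minkowski: homothetic bodies.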
In [AJV], the authors considered, for any $0\leq\theta\leq 1$, the
$\theta$-convolution bodies of a pair of convex bodies
$K,L\subseteq\mathbb{R}^{n}$, defined as
$K+_{\theta}L:=\\{z\in K+L\,:|K\cap(z-L)|\geq\theta M(K,L)\\},$
where $M(K,L)=\max_{z\in\mathbb{R}^{n}}|K\cap(z-L)|$, and studied the problem
of obtaining the best possible function $\varphi_{n}:[0,1]\to\mathbb{R}$ such
that for any pair of convex bodies $K,L\subseteq\mathbb{R}^{n}$ one has the
following Brunn-Minkowski type inequality
(1.2)
$|K+_{\theta}L|^{\frac{1}{n}}\geq\varphi_{n}(\theta)(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}).$
The authors proved that, for any pair of convex bodies $K,L\subseteq\mathbb{R}^{n}$,
$\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}}$ is an increasing family of
convex bodies in $\theta$ and, as a consequence of Brunn-Minkowski inequality,
$\varphi_{n}(\theta)\geq 1-\theta^{\frac{1}{n}}$
for every $\theta\in[0,1]$. Therefore, for every pair of convex bodies
$K,L\subseteq\mathbb{R}^{n}$
(1.3)
$|K+_{\theta}L|^{\frac{1}{n}}\geq(1-\theta^{\frac{1}{n}})(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}).$
It was also shown in [AJV] that the increasing sequence of convex bodies
$\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}}$ remains constant if and only if
$K=-L$ is an $n$-dimensional simplex, in which case there is no equality in
Brunn-Minkowski inequality (1.1). The purpose of this paper is to improve the
estimate of the function $\varphi_{n}$. We will prove the following:
###### Theorem 1.1.
Let $K,L\subseteq\mathbb{R}^{n}$ be convex bodies. Then, for every
$\left(\frac{3}{4}\right)^{n}\leq\theta\leq 1$ we have that
$|K+_{\theta}L|^{\frac{1}{n}}\geq\frac{1}{2}{{2n}\choose{n}}^{\frac{1}{n}}(1-\theta^{\frac{1}{n}})\left(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}\right)$
and, for every $0\leq\theta\leq\left(\frac{3}{4}\right)^{n}$,
$|K+_{\theta}L|^{\frac{1}{n}}\geq\left(1-\left(\frac{4}{3}-\frac{1}{6}{{2n}\choose{n}}^{\frac{1}{n}}\right)\theta^{\frac{1}{n}}\right)\left(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}\right).$
Moreover, given any $\left(\frac{3}{4}\right)^{n}\leq\theta<1$ there is
equality if and only if $K=-L$ is an $n$-dimensional simplex.
###### Remark.
The constant $\frac{1}{2}{{2n}\choose{n}}^{\frac{1}{n}}$ is asymptotically
$2$, so the result improves on (1.3) for values of $\theta$ close to $1$, in
which case there is equality whenever $K=-L$ is an $n$-dimensional simplex.
Besides, $\left(\frac{4}{3}-\frac{1}{6}{{2n}\choose{n}}^{\frac{1}{n}}\right)$
is asymptotically $\frac{2}{3}$. Therefore, this result also improves on
(1.3), for small values of $\theta$.
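The asymptotics quoted in this remark can be checked numerically. The following sketch (illustrative only, not part of the proofs) evaluates both constants for growing $n$ using Python's `math.comb`:

```python
import math

def c(n):
    # (1/2) * binom(2n, n)^(1/n): the constant in the first bound of Theorem 1.1
    return 0.5 * math.exp(math.log(math.comb(2 * n, n)) / n)

def d(n):
    # 4/3 - (1/6) * binom(2n, n)^(1/n): the slope in the second bound
    return 4 / 3 - c(n) / 3

for n in (1, 5, 50, 500):
    print(n, round(c(n), 4), round(d(n), 4))
# c(n) increases towards 2, while d(n) decreases towards 2/3
```

Note that $c(1)=1$, consistent with the fact that for $n=1$ the bound (1.3) is already sharp.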
###### Remark.
The change of behavior in the estimate at
$\theta_{0}=\left(\frac{3}{4}\right)^{n}$ is due to the method we use in order
to prove Theorem 1.1. However, it is clear that the estimate obtained for the
function $\varphi_{n}(\theta)$ for $\theta_{0}\leq\theta\leq 1$ cannot hold in
the whole range $0\leq\theta\leq 1$, as this would lead, taking the limit as
$\theta$ tends to $0$, to Brunn-Minkowski inequality (1.1) with an extra
factor $\frac{1}{2}{{2n}\choose{n}}^{\frac{1}{n}}$, which is asymptotically
$2$; such an inequality is false already for $K=L$.
In order to prove Theorem 1.1 we observe that the $\theta$-convolution bodies
of two convex bodies $K,L\subseteq\mathbb{R}^{n}$ are the super-level sets of
the function $\tilde{g}_{K,L}:\mathbb{R}^{n}\to[0,1]$ given by
$\tilde{g}_{K,L}(z)=\frac{|K\cap(z-L)|}{M(K,L)}$, which is
$\frac{1}{n}$-concave (i.e., $\tilde{g}_{K,L}^{\frac{1}{n}}$ is concave on its
support, $K+L$) and, in particular, is log-concave. Given any integrable log-
concave function $g:\mathbb{R}^{n}\to[0,\infty)$ (i.e., $\log
g:\mathbb{R}^{n}\to[-\infty,\infty)$ is concave) with $g(0)\neq 0$, Ball [B]
defined the following family of convex bodies associated to it
$(K_{p}(g))_{p>0}$:
$K_{p}(g):=\left\\{x\in\mathbb{R}^{n}\,:\,p\int_{0}^{\infty}r^{p-1}g(rx)dr\geq
g(0)\right\\}.$
In [ABG, Lemma 2.3.2, Remark 2.6], the authors proved, following the ideas in
[KM], the following inclusion relation between Ball’s bodies and super-level
sets of a log-concave function in a certain range of the parameters involved:
If $g:\mathbb{R}^{n}\to[0,\infty)$ is an integrable log-concave function with
$\|g\|_{\infty}=g(0)$, then for any $p>0$ and any $0<t<\frac{p}{e}$
(1.4)
$\frac{t}{\Gamma(1+p)^{\frac{1}{p}}}K_{p}(g)\subseteq\\{x\in\mathbb{R}^{n}\,:\,g(x)\geq e^{-t}\|g\|_{\infty}\\}.$
We will obtain a sharper inclusion relation between the family Ball’s bodies
and the super-level sets of an $\alpha$-concave function $g$ (i.e.,
$g^{\alpha}$ is concave on its support) which will allow us to prove Theorem
1.1. More precisely, denoting for any $0\leq r\leq 1$ by $L_{r}(g)$ the
super-level set of an $\alpha$-concave function $g$ given by
(1.5) $L_{r}(g)=\\{x\in\textrm{supp}(g)\,:\,g^{\alpha}(x)\geq r\|g\|_{\infty}^{\alpha}\\}=\\{x\in\textrm{supp}(g)\,:\,g(x)\geq r^{\frac{1}{\alpha}}\|g\|_{\infty}\\},$
and denoting by $\partial K$ and $\rho_{K}$, respectively, the boundary and
the radial function (see the definition below) of a convex body
$K\subseteq\mathbb{R}^{n}$ containing the origin in its interior, and, for any
$x,y>0$, by $\displaystyle{{x\choose
y}:=\frac{\Gamma{(1+x)}}{\Gamma{(1+y)}\Gamma{(1+x-y)}}}$ the generalized
binomial coefficient, defined in terms of the gamma function, we will prove
the following:
###### Theorem 1.2.
Let $p>0$, let $K\subseteq\mathbb{R}^{n}$ be a convex body with $0\in K$,
and let $g:K\to[0,\infty)$ be a continuous $\alpha$-concave function, with
$\alpha>0$, such that $\|g\|_{\infty}=g(0)>0$ and such that $g(x)=0$ for every
$x\in\partial K$. Then, understanding that $g(x)=0$ for every $x\not\in K$, we
have that for every
$t\in\left[0,\frac{p\alpha}{(1+p\alpha)^{1+\frac{1}{p\alpha}}}\right]$
$t{{p+\frac{1}{\alpha}}\choose{p}}^{\frac{1}{p}}K_{p}(g)\subseteq L_{1-t}(g).$
Moreover, for any
$t\in\left(0,\frac{p\alpha}{(1+p\alpha)^{1+\frac{1}{p\alpha}}}\right]$ there
is equality if and only if
$g(x)=\|g\|_{\infty}\left(1-\|x\|_{K}\right)^{\frac{1}{\alpha}}$ for every
$x\in K$. Furthermore, given $u\in S^{n-1}$, if there exists
$r\in[0,\rho_{K}(u)]$ such that
$g(ru)\neq\|g\|_{\infty}\left(1-\|ru\|_{K}\right)^{\frac{1}{\alpha}}$, we have
that there exists $\varepsilon>0$ such that for every
$t\in\left(0,\frac{p\alpha}{(1+p\alpha)^{1+\frac{1}{p\alpha}}}\right]$
$t{{p+\frac{1}{\alpha}}\choose{p}}^{\frac{1}{p}}(\rho_{K_{p}(g)}(u)+\varepsilon)\leq\rho_{L_{1-t}(g)}(u).$
The paper is organised as follows. In Section 2 we will provide the necessary
definitions and results that we need to prove our results. In Section 3 we
will provide the proof of Theorem 1.2. Finally, in Section 4 we will prove
Theorem 1.1.
## 2\. Preliminaries
### 2.1. Notation
Given a convex body $K\subseteq\mathbb{R}^{n}$ with $0\in\textrm{int}K$, we
will denote by $\rho_{K}$ the radial function $\rho_{K}:S^{n-1}\to[0,\infty)$
given by $\rho_{K}(u)=\max\\{\lambda\geq 0\,:\,\lambda u\in K\\}$, where
$S^{n-1}$ denotes the $(n-1)$-dimensional Euclidean sphere in
$\mathbb{R}^{n}$, that is, $S^{n-1}=\\{u\in\mathbb{R}^{n}\,:\,\|u\|_{2}=1\\}$.
It is well known that given two convex bodies $K_{1},K_{2}\subseteq\mathbb{R}^{n}$
we have that $K_{1}\subseteq
K_{2}\Leftrightarrow\rho_{K_{1}}(u)\leq\rho_{K_{2}}(u)$ for every $u\in
S^{n-1}$. $\|\cdot\|_{K}$ will denote the Minkowski gauge, defined as
$\|x\|_{K}=\inf\\{\lambda>0\,:\,x\in\lambda K\\}.$
Notice that for every $u\in S^{n-1}$ we have that
$\|u\|_{K}=\frac{1}{\rho_{K}(u)}$. $\chi_{K}$ will denote the characteristic
function of a convex body $K$ and $K-K$ will always denote the difference body
$K+(-K)$. Whenever $K\subseteq\mathbb{R}^{n}$ is a $k$-dimensional set
contained in an affine $k$-dimensional subspace, $|K|$ will denote its
$k$-dimensional Lebesgue measure. $B_{2}^{n}$ will stand for the Euclidean
unit ball, $\|\cdot\|_{2}$ for the Euclidean norm and $\Delta^{n}$ will stand
for the regular simplex.
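The relation $\|u\|_{K}=\frac{1}{\rho_{K}(u)}$ can also be checked numerically. The sketch below (an illustration only, for the particular choice $K=[-1,1]^{2}$, whose gauge is the sup-norm) recovers the radial function by bisection on $\\{\lambda\geq 0\,:\,\lambda u\in K\\}$ and verifies the relation:

```python
import math

def in_K(x):
    # membership in K = [-1,1]^2
    return abs(x[0]) <= 1.0 and abs(x[1]) <= 1.0

def gauge(x):
    # ||x||_K = inf{lam > 0 : x in lam*K} = max(|x1|, |x2|) for this K
    return max(abs(x[0]), abs(x[1]))

def radial(u, hi=10.0, iters=60):
    # rho_K(u) = max{lam >= 0 : lam*u in K}, found by bisection
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if in_K((mid * u[0], mid * u[1])):
            lo = mid
        else:
            hi = mid
    return lo

for t in (0.1, 0.7, 1.3):
    u = (math.cos(t), math.sin(t))       # a unit direction
    assert abs(gauge(u) * radial(u) - 1.0) < 1e-9
```

Bisection is used instead of the closed form on purpose: it only relies on the membership oracle `in_K`, so the same sketch works for any convex body containing the origin in its interior.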
### 2.2. Ball’s bodies
A log-concave function $g:\mathbb{R}^{n}\to[0,\infty)$ is a function of the
form $g(x)=e^{-u(x)}$ with $u:\mathbb{R}^{n}\to(-\infty,\infty]$ a convex
function. The family of log-concave functions plays an extremely important
role in the study of problems related to distribution of volume in convex
bodies. In [B], Ball introduced a family of convex bodies $(K_{p}(g))_{p>0}$
associated to log-concave functions verifying $g(0)>0$. More precisely, for
any measurable (not necessarily log-concave) $g:\mathbb{R}^{n}\to[0,\infty)$
such that $g(0)>0$ and $p>0$, $K_{p}(g)$ is defined as
$K_{p}(g):=\left\\{x\in\mathbb{R}^{n}\,:\,p\int_{0}^{\infty}r^{p-1}g(rx)dr\geq
g(0)\right\\}.$
$K_{p}(g)$ is a star set with center $0$ whose radial function is given by
$\rho_{K_{p}(g)}^{p}(u)=\frac{p}{g(0)}\int_{0}^{\infty}r^{p-1}g(ru)dr,\quad\forall
u\in S^{n-1}.$
It is well known that for any integrable log-concave function $g$ with $g(0)>0$,
$K_{p}(g)$ is convex for every $p>0$ and, by integration in polar coordinates,
we have that
$\displaystyle{|K_{n}(g)|=\int_{\mathbb{R}^{n}}\frac{g(x)}{g(0)}dx}$. We refer
the reader to [BGVV, Section 2.5] for more information on Ball’s bodies.
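For a concrete feel of the definition, the radial-function formula can be evaluated numerically. The sketch below is illustrative only and assumes $g(x)=e^{-\|x\|_{2}}$, so that along every ray $\rho_{K_{p}(g)}^{p}(u)=p\int_{0}^{\infty}r^{p-1}e^{-r}dr=\Gamma(p+1)$, i.e., $K_{p}(g)$ is a Euclidean ball of radius $\Gamma(p+1)^{\frac{1}{p}}$:

```python
import math

def rho_p(p, g_ray, g0=1.0, r_max=60.0, steps=200_000):
    # rho_{K_p(g)}(u)^p = (p / g(0)) * int_0^infty r^{p-1} g(ru) dr,
    # approximated by a right-endpoint Riemann sum on [0, r_max]
    h = r_max / steps
    integral = sum((i * h) ** (p - 1) * g_ray(i * h) for i in range(1, steps + 1)) * h
    return (p * integral / g0) ** (1.0 / p)

# For g(ru) = e^{-r} the exact value is Gamma(p+1)^{1/p}
for p in (1.0, 2.0, 3.0):
    exact = math.gamma(p + 1) ** (1.0 / p)
    print(p, rho_p(p, lambda r: math.exp(-r)), exact)
```

The truncation at `r_max` is harmless here because the integrand decays exponentially; for slowly decaying $g$ the cutoff would have to be chosen more carefully.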
### 2.3. The generalized covariogram function
Given a convex body $K\subseteq\mathbb{R}^{n}$, its covariogram function is
the function $g_{K}:K-K\to[0,\infty)$ given by
$g_{K}(z)=|K\cap(z+K)|.$
Throughout this paper we will consider, given a pair of convex bodies
$K,L\subseteq\mathbb{R}^{n}$, the generalized covariogram function defined as
$g_{K,L}:K+L\to[0,\infty)$ given by
$g_{K,L}(z)=|K\cap(z-L)|=\chi_{K}*\chi_{L}(z),$
where $\chi_{K}*\chi_{L}$ denotes the convolution of the functions $\chi_{K}$
and $\chi_{L}$. Notice that for any convex body $K$, $g_{K}=g_{K,-K}$. As a
consequence of Brunn-Minkowski inequality (1.1), we have that for any pair of
convex bodies $K,L\subseteq\mathbb{R}^{n}$, $g_{K,L}$ is $\frac{1}{n}$-concave
and then, for any $\theta\in[0,1]$ we have that
$K+_{\theta}L=L_{\theta^{\frac{1}{n}}}(g_{K,L}),$
where $L_{\theta^{\frac{1}{n}}}(g_{K,L})$ is defined by (1.5). Besides, (see
[AJV, Proposition 2.1]) for any $x\in\mathbb{R}^{n}$ and any $\theta\in[0,1]$
(2.1) $(x+K)+_{\theta}L=x+(K+_{\theta}L)$
and for any pair of convex bodies $K,L\subseteq\mathbb{R}^{n}$ one has that
$\int_{\mathbb{R}^{n}}\frac{g_{K,L}(z)}{\|g_{K,L}\|_{\infty}}dz=\frac{|K||L|}{M(K,L)}.$
It was also proved in [AJV] that $\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}}$
is an increasing family of convex bodies in $\theta\in[0,1]$ and (see [AJV,
Proposition 2.8]) that $K+_{\theta}L=(1-\theta^{\frac{1}{n}})(K+L)$ for every
$0\leq\theta\leq 1$ if and only if $K=-L$ is a simplex.
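These objects are explicit in simple cases. For $K=L=[0,1]^{2}$ one has $g_{K,L}(z)=(1-|z_{1}-1|)(1-|z_{2}-1|)$ on $K+L=[0,2]^{2}$ and $M(K,L)=1$. The sketch below (an illustration, not used in the proofs) estimates $|K+_{\theta}L|$ on a grid and checks inequality (1.3) for this pair:

```python
import math

def g(z1, z2):
    # generalized covariogram of K = L = [0,1]^2 (closed form)
    return max(0.0, 1.0 - abs(z1 - 1.0)) * max(0.0, 1.0 - abs(z2 - 1.0))

def theta_body_area(theta, grid=600):
    # area of K +_theta L = {z : g(z) >= theta * M(K,L)}, with M(K,L) = 1,
    # estimated by midpoint sampling on [0,2]^2
    h = 2.0 / grid
    count = sum(
        1
        for i in range(grid)
        for j in range(grid)
        if g((i + 0.5) * h, (j + 0.5) * h) >= theta
    )
    return count * h * h

theta = 0.25
lhs = math.sqrt(theta_body_area(theta))
rhs = (1.0 - math.sqrt(theta)) * 2.0  # (1 - theta^{1/n})(|K|^{1/n} + |L|^{1/n}), n = 2
print(lhs, rhs)  # lhs exceeds rhs, as (1.3) predicts
```

For this example the area is also available in closed form, $|K+_{\theta}L|=4(1-\theta+\theta\log\theta)$, which gives a check on the grid estimate.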
### 2.4. The polar projection body and Zhang’s inequality
Given a convex body $K\subseteq\mathbb{R}^{n}$, its polar projection body
$\Pi^{*}K$ is defined as the unit ball of the norm given by
$\|x\|_{\Pi^{*}K}=\|x\|_{2}|P_{x^{\perp}}K|,$
where $P_{x^{\perp}}$ denotes the orthogonal projection onto the hyperplane orthogonal to $x$.
It is well known that for any convex body, $|K|^{n-1}|\Pi^{*}K|$ is an affine
invariant quantity that verifies Petty projection inequality [P] (also known
as the affine isoperimetric inequality):
$|K|^{n-1}|\Pi^{*}K|\leq|B_{2}^{n}|^{n-1}|\Pi^{*}B_{2}^{n}|,$
with equality if and only if $K$ is an ellipsoid. In [Z], Zhang proved the
following reverse inequality for any convex body $K\subseteq\mathbb{R}^{n}$:
(2.2)
$|K|^{n-1}|\Pi^{*}K|\geq|\Delta^{n}|^{n-1}|\Pi^{*}\Delta^{n}|=\frac{1}{n^{n}}{{2n}\choose{n}},$
with equality if and only if $K$ is a simplex (see also [GZ] for another
proof).
In [T], Tsolomitis studied the limiting behavior of the
convolution bodies
$C_{\alpha}(K,L):=\lim_{\theta\to
1^{-}}\frac{K+_{\theta}L}{(1-\theta)^{\alpha}}$
for symmetric convex bodies $K$ and $L$ and some exponent $\alpha$, giving
regularity conditions under which the above limit is non-degenerate for
some $\alpha$. Taking into account that for any convex body
$K\subseteq\mathbb{R}^{n}$, $C_{1}(K,-K)=|K|\Pi^{*}K$, in [AJV, Theorem 4.6],
the authors showed that Zhang’s inequality can be extended to
(2.3) $|C_{1}(K,L)|\geq\frac{1}{n^{n}}{{2n}\choose{n}}\frac{|K||L|}{M(K,L)},$
for any pair of convex bodies $K,L\subseteq\mathbb{R}^{n}$ such that
$M(K,L)=|K\cap(-L)|$, with equality if and only if $K=-L$ is a simplex.
## 3\. An inclusion relation between Ball’s bodies and superlevel sets
In this section we are going to prove Theorem 1.2, from which we will derive
Theorem 1.1 in the following section.
###### Proof of Theorem 1.2.
Let us assume, without loss of generality, that $\|g\|_{\infty}=g(0)=1$.
Otherwise, consider the function $\frac{g}{\|g\|_{\infty}}$. Let us denote,
for any $u\in S^{n-1}$, $l_{u}=\sup\\{r>0\,:\,g(ru)>0\\}=\rho_{K}(u)$,
$v_{u}:[0,l_{u}]\to[0,1]$ the function defined as $v_{u}(r)=g^{\alpha}(ru)$,
which is concave on $[0,l_{u}]$, and, for any $q>0$, let
$\phi_{u}:[0,l_{u}]\to[0,\infty)$ be the function defined as
$\phi_{u}(r)=r^{q\alpha}v_{u}(r)=r^{q\alpha}g^{\alpha}(ru).$
Notice that since $\log\phi_{u}$ is strictly concave on $(0,l_{u})$,
$\displaystyle{\lim_{r\to 0^{+}}\log\phi_{u}(r)=-\infty}$, and
$\displaystyle{\lim_{r\to l_{u}^{-}}\log\phi_{u}(r)=-\infty}$, $\phi_{u}$
attains a unique maximum at some $r_{0}=r_{0}(q)\in(0,l_{u})$. Therefore,
denoting by $(\phi_{u})_{-}(r_{0})$ and $(\phi_{u})_{+}(r_{0})$ the lateral
derivatives of $\phi_{u}$ at $r_{0}$ we have that
* •
$0\leq(\phi_{u})_{-}(r_{0})=r_{0}^{q\alpha}\left(\frac{q\alpha}{r_{0}}v_{u}(r_{0})+(v_{u})_{-}(r_{0})\right)$,
* •
$0\geq(\phi_{u})_{+}(r_{0})=r_{0}^{q\alpha}\left(\frac{q\alpha}{r_{0}}v_{u}(r_{0})+(v_{u})_{+}(r_{0})\right)$.
Then,
* •
$(v_{u})_{-}(r_{0})\geq-\frac{q\alpha}{r_{0}}v_{u}(r_{0})$,
* •
$(v_{u})_{+}(r_{0})\leq-\frac{q\alpha}{r_{0}}v_{u}(r_{0})$.
Notice that if $v_{u}$ is an affine function then necessarily for every $q>0$
we have that
$v_{u}(r)=v_{u}(r_{0})\left(1-\frac{q\alpha}{r_{0}}(r-r_{0})\right)$. We will
denote this affine function by
$\tau_{u,q}(r)=v_{u}(r_{0}(q))\left(1-\frac{q\alpha}{r_{0}(q)}(r-r_{0}(q))\right),\quad\forall
r\in[0,l_{u}].$
For any $q>0$, the graph of the function $\tau_{u,q}$ is a supporting line of
the hypograph of $v_{u}$ and then
$v_{u}(r)\leq\tau_{u,q}(r)=v_{u}(r_{0})\left(1-\frac{q\alpha}{r_{0}}(r-r_{0})\right),\quad\forall
r\in[0,l_{u}].$
Thus, for any $p,q>0$
(3.1) $\displaystyle\rho_{K_{p}(g)}^{p}(u)$ $\displaystyle=$ $\displaystyle
p\int_{0}^{\infty}r^{p-1}g(ru)dr=p\int_{0}^{l_{u}}r^{p-1}v_{u}^{\frac{1}{\alpha}}(r)dr$
$\displaystyle\leq$ $\displaystyle
pv_{u}^{\frac{1}{\alpha}}(r_{0})\int_{0}^{l_{u}}r^{p-1}\left(1-\frac{q\alpha}{r_{0}}(r-r_{0})\right)^{\frac{1}{\alpha}}dr$
$\displaystyle\leq$ $\displaystyle
pg(r_{0}u)\int_{0}^{\left(1+\frac{1}{q\alpha}\right)r_{0}}r^{p-1}\left(1-\frac{q\alpha}{r_{0}}(r-r_{0})\right)^{\frac{1}{\alpha}}dr$
$\displaystyle=$ $\displaystyle
pg(r_{0}u)\left(1+q\alpha\right)^{\frac{1}{\alpha}}\left(1+\frac{1}{q\alpha}\right)^{p}r_{0}^{p}\int_{0}^{1}s^{p-1}\left(1-s\right)^{\frac{1}{\alpha}}ds$
$\displaystyle=$ $\displaystyle
pg(r_{0}u)\frac{\left(1+q\alpha\right)^{p+\frac{1}{\alpha}}}{(q\alpha)^{p}}r_{0}^{p}\beta\left(p,1+\frac{1}{\alpha}\right).$
Moreover, for any $q>0$, the previous inequality is an equality if and only if
$l_{u}=\left(1+\frac{1}{q\alpha}\right)r_{0}$ and $v_{u}(r)=\tau_{u,q}(r)$ for
every $0\leq r\leq l_{u}=\left(1+\frac{1}{q\alpha}\right)r_{0}$. That is, if
$v_{u}$ is an affine function such that $v_{u}(l_{u})=0$.
Consequently, for any $p,q>0$
$\frac{q\alpha}{\left(1+q\alpha\right)^{1+\frac{1}{p\alpha}}}{{p+\frac{1}{\alpha}}\choose{p}}^{\frac{1}{p}}\rho_{K_{p}(g)}(u)\leq
g(r_{0}u)^{\frac{1}{p}}r_{0},$
with equality if and only if $v_{u}$ is an affine function such that
$v_{u}(l_{u})=0$.
On the one hand, since $v_{u}$ is concave on $[0,l_{u}]$ and
$\|g\|_{\infty}=g(0)=1$, we have that
$\displaystyle g^{\alpha}\left(g^{\frac{1}{p}}(r_{0}u)r_{0}u\right)$
$\displaystyle=$ $\displaystyle
v_{u}\left(g^{\frac{1}{p}}(r_{0}u)r_{0}\right)\geq
g^{\frac{1}{p}}(r_{0}u)v_{u}(r_{0})+\left(1-g^{\frac{1}{p}}(r_{0}u)\right)v_{u}(0)$
$\displaystyle=$ $\displaystyle
g^{\frac{1}{p}}(r_{0}u)g^{\alpha}(r_{0}u)+1-g^{\frac{1}{p}}(r_{0}u)$
$\displaystyle=$ $\displaystyle
1-\left(g^{\alpha}(r_{0}u)\right)^{\frac{1}{p\alpha}}\left(1-g^{\alpha}(r_{0}u)\right),$
with equality if and only if $v_{u}$ is affine on $[0,r_{0}]$ and then
$v_{u}(r)=\tau_{u,q}(r)$ for every $0\leq r\leq r_{0}$. On the other hand,
since $v_{u}$ is concave on $[0,l_{u}]$, we have that $(v_{u})_{-}$ is
decreasing on $[0,l_{u}]$ and then
$\displaystyle v_{u}(r_{0})$ $\displaystyle=$ $\displaystyle
v_{u}(0)+\int_{0}^{r_{0}}(v_{u})_{-}(r)dr\geq
v_{u}(0)+(v_{u})_{-}(r_{0})r_{0}\geq v_{u}(0)-q\alpha v_{u}(r_{0})$
$\displaystyle=$ $\displaystyle 1-q\alpha v_{u}(r_{0})$
and then
$g^{\alpha}(r_{0}u)=v_{u}(r_{0})\geq\frac{1}{1+q\alpha},$
with equality if and only if $(v_{u})_{-}(r)=-\frac{q\alpha}{r_{0}}v_{u}(r_{0})$
for every $0\leq r<r_{0}$, in which case $v_{u}(r)=\tau_{u,q}(r)$ for every
$0\leq r<r_{0}$. Since the function $h(x)=1-x^{\frac{1}{p\alpha}}(1-x)$ is
decreasing in $\left[0,\frac{1}{1+p\alpha}\right]$ and is increasing in
$\left[\frac{1}{1+p\alpha},1\right]$, we have that if
$\frac{1}{1+p\alpha}\leq\frac{1}{1+q\alpha}$ (which happens whenever $0<q\leq
p$)
$\displaystyle g^{\alpha}\left(g^{\frac{1}{p}}(r_{0}u)r_{0}u\right)$
$\displaystyle\geq$ $\displaystyle
1-\left(g^{\alpha}(r_{0}u)\right)^{\frac{1}{p\alpha}}\left(1-g^{\alpha}(r_{0}u)\right)=h(g^{\alpha}(r_{0}u))\geq
h\left(\frac{1}{1+q\alpha}\right)$ $\displaystyle=$ $\displaystyle
1-\left(\frac{1}{1+q\alpha}\right)^{\frac{1}{p\alpha}}\left(1-\frac{1}{1+q\alpha}\right)$
$\displaystyle=$ $\displaystyle
1-\frac{q\alpha}{(1+q\alpha)^{1+\frac{1}{p\alpha}}}.$
Besides, there is equality if and only if $v_{u}(r)=\tau_{u,q}(r)$ for every
$0\leq r\leq r_{0}$ and
$g^{\alpha}(r_{0}u)=v_{u}(r_{0})=\frac{1}{1+q\alpha},$
which also happens if and only if $v_{u}(r)=\tau_{u,q}(r)$ for every $0\leq
r<r_{0}$.
Therefore, if $0<q\leq p$, $g^{\frac{1}{p}}(r_{0}u)r_{0}u\in
L_{1-\frac{q\alpha}{(1+q\alpha)^{1+\frac{1}{p\alpha}}}}(g)$ and, since $0\in
L_{1-\frac{q\alpha}{(1+q\alpha)^{1+\frac{1}{p\alpha}}}}(g)$ as
$\|g\|_{\infty}=g(0)=1$ and
$L_{1-\frac{q\alpha}{(1+q\alpha)^{1+\frac{1}{p\alpha}}}}(g)$ is convex, we
have that
$g^{\frac{1}{p}}(r_{0}u)r_{0}\leq\rho_{L_{1-\frac{q\alpha}{(1+q\alpha)^{1+\frac{1}{p\alpha}}}}}(u),$
with equality if and only if $g^{\frac{1}{p}}(r_{0}u)r_{0}u\in\partial
L_{1-\frac{q\alpha}{(1+q\alpha)^{1+\frac{1}{p\alpha}}}}$, which happens if and
only if $v_{u}(r)=\tau_{u,q}(r)$ for every $0\leq r\leq r_{0}$. Thus, if
$0<q\leq p$
$\frac{q\alpha}{\left(1+q\alpha\right)^{1+\frac{1}{p\alpha}}}{{p+\frac{1}{\alpha}}\choose{p}}^{\frac{1}{p}}\rho_{K_{p}(g)}(u)\leq\rho_{L_{1-\frac{q\alpha}{(1+q\alpha)^{1+\frac{1}{p\alpha}}}}}(u),$
with equality if and only $v_{u}(r)=\tau_{u,q}(r)$ for every $0\leq r\leq
l_{u}=\left(1+\frac{1}{q\alpha}\right)r_{0}$, i.e., if $v_{u}$ is an affine
function such that $v_{u}(l_{u})=0$. Since this happens for every $u\in
S^{n-1}$,
$\frac{q\alpha}{\left(1+q\alpha\right)^{1+\frac{1}{p\alpha}}}{{p+\frac{1}{\alpha}}\choose{p}}^{\frac{1}{p}}K_{p}(g)\subseteq
L_{1-\frac{q\alpha}{(1+q\alpha)^{1+\frac{1}{p\alpha}}}}(g)$
and, for any $0<q\leq p$, there is equality if and only if for every
direction $u\in S^{n-1}$ $v_{u}$ is an affine function such that
$v_{u}(l_{u})=0$. That is, if
$g(x)=\|g\|_{\infty}\left(1-\|x\|_{K}\right)^{\frac{1}{\alpha}}$ for every
$x\in K$. Since the function $h_{1}(x)=\frac{x}{(1+x)^{1+\frac{1}{p\alpha}}}$
is continuous and increasing in $[0,p\alpha]$, it attains every value in
$\left[0,\frac{p\alpha}{(1+p\alpha)^{1+\frac{1}{p\alpha}}}\right]$. Therefore,
we have that for every
$t\in\left[0,\frac{p\alpha}{(1+p\alpha)^{1+\frac{1}{p\alpha}}}\right]$
$t{{p+\frac{1}{\alpha}}\choose{p}}^{\frac{1}{p}}K_{p}(g)\subseteq L_{1-t}(g)$
and, for any
$t\in\left[0,\frac{p\alpha}{(1+p\alpha)^{1+\frac{1}{p\alpha}}}\right]$, there
is equality if and only if
$g(x)=\|g\|_{\infty}\left(1-\|x\|_{K}\right)^{\frac{1}{\alpha}}$.
Assume now that for some $u\in S^{n-1}$ there exists $r\in[0,\rho_{K}(u)]$
such that
$g(ru)\neq\|g\|_{\infty}\left(1-\|ru\|_{K}\right)^{\frac{1}{\alpha}}$.
Therefore, $v_{u}$ is not an affine function.
We first notice that the function $r_{0}(q)$ is continuous on
$q\in(0,\infty)$. Indeed, let $(q_{k})_{k=1}^{\infty}\subseteq(0,\infty)$ be a
sequence converging to some $q\in(0,\infty)$. Let $(q_{k_{i}})_{i=1}^{\infty}$
be any convergent subsequence of $(q_{k})_{k=1}^{\infty}$ such that
$r_{0}(q_{k_{i}})$ converges to some $\overline{r}\in[0,l_{u}]$, which exists
since $(r_{0}(q_{k}))_{k=1}^{\infty}\subseteq(0,l_{u})$. Since for every
$i\in\mathbb{N}$, we have, by the definition of $r_{0}(q_{k_{i}})$, that
$r_{0}(q)^{q_{k_{i}}\alpha}v_{u}(r_{0}(q))\leq
r_{0}(q_{k_{i}})^{q_{k_{i}}\alpha}v_{u}(r_{0}(q_{k_{i}})),$
taking limits as $i$ tends to $\infty$ we obtain that
$r_{0}(q)^{q\alpha}v_{u}(r_{0}(q))\leq\overline{r}^{q\alpha}v_{u}(\overline{r}).$
Therefore, by the definition of $r_{0}(q)$, $\overline{r}=r_{0}(q)$. Hence
$\liminf_{k\to\infty}r_{0}(q_{k})=\limsup_{k\to\infty}r_{0}(q_{k})=r_{0}(q)$
and then $r_{0}(q_{k})$ converges to $r_{0}(q)$. Thus $r_{0}(q)$ is continuous
on $q\in(0,\infty)$. Besides, if $(q_{k})_{k=1}^{\infty}\subseteq(0,\infty)$
is a sequence converging to $0$ and for some subsequence $r_{0}(q_{k_{i}})$
converges to $l_{u}$ we would have that for every $r\in[0,l_{u}]$
$v_{u}(r)\leq\tau_{u,q_{k_{i}}}(r)=v_{u}(r_{0}(q_{k_{i}}))\left(1-\frac{q_{k_{i}}\alpha}{r_{0}(q_{k_{i}})}(r-r_{0}(q_{k_{i}}))\right),$
leading to $v_{u}(r)\leq 0$, which is a contradiction. Therefore, for any
$p>0$ we have that $s:=\sup\\{r_{0}(q)\,:\,q\in(0,p]\\}<l_{u}$ and then, for
every $p>0$ and every $0<q\leq p$ we have that
$\frac{q\alpha}{r_{0}(q)}v_{u}(r_{0}(q))\leq-(v_{u})_{+}(r_{0}(q))\leq-(v_{u})_{+}(s)$
and then $\frac{q\alpha}{r_{0}(q)}v_{u}(r_{0}(q))$ is bounded in $q\in(0,p]$.
Assume that there is no $\varepsilon>0$ such that for every $0<q\leq p$ we
have that
$\varepsilon<p\int_{0}^{\left(1+\frac{1}{q\alpha}\right)r_{0}(q)}r^{p-1}\tau_{u,q}^{\frac{1}{\alpha}}(r)dr-p\int_{0}^{l_{u}}r^{p-1}v_{u}^{\frac{1}{\alpha}}(r)dr.$
Then, we can find a sequence $(q_{k})_{k=1}^{\infty}\subseteq(0,p]$ and, if
necessary, extract from it further subsequences which we denote in the same
way, such that
$\displaystyle\lim_{k\to\infty}p\int_{0}^{\infty}r^{p-1}\left(\tau_{u,q_{k}}^{\frac{1}{\alpha}}(r)\chi_{\left[0,\left(1+\frac{1}{q_{k}\alpha}\right)r_{0}(q_{k})\right]}(r)-v_{u}^{\frac{1}{\alpha}}(r)\chi_{[0,l_{u}]}(r)\right)dr=0,$
$(q_{k})_{k=1}^{\infty}$ converges to some $q\in[0,p]$, $r_{0}(q_{k})$
converges to some $\overline{r}\in[0,l_{u}]$, and
$\frac{q_{k}\alpha}{r_{0}(q_{k})}v_{u}(r_{0}(q_{k}))$ converges to some
$\lambda\in[0,\infty)$. Since for every $r\in[0,\infty)$ we have that for
every $k\in\mathbb{N}$
$\displaystyle v_{u}(r)\chi_{[0,l_{u}]}(r)$ $\displaystyle\leq$
$\displaystyle\tau_{u,q_{k}}(r)\chi_{\left[0,\left(1+\frac{1}{q_{k}\alpha}\right)r_{0}(q_{k})\right]}(r)$
$\displaystyle=$ $\displaystyle
v_{u}(r_{0}(q_{k}))\left(1-\frac{q_{k}\alpha}{r_{0}(q_{k})}(r-r_{0}(q_{k}))\right)\chi_{\left[0,\left(1+\frac{1}{q_{k}\alpha}\right)r_{0}(q_{k})\right]}(r),$
taking limits as $k\to\infty$ we obtain that for almost every $r\in[0,\infty)$
$v_{u}(r)\chi_{[0,l_{u}]}(r)\leq\left(v_{u}(\overline{r})-\lambda(r-\overline{r})\right)\chi_{\left[0,\left(1+\frac{1}{q\alpha}\right)\overline{r}\right]}(r)$
and then, by Fatou’s lemma,
$\displaystyle
p\int_{0}^{\infty}r^{p-1}\left(\left(v_{u}(\overline{r})-\lambda(r-\overline{r})\right)^{\frac{1}{\alpha}}\chi_{\left[0,\left(1+\frac{1}{q\alpha}\right)\overline{r}\right]}(r)-v_{u}(r)^{\frac{1}{\alpha}}\chi_{[0,l_{u}]}(r)\right)dr$
$\displaystyle\leq\lim_{k\to\infty}p\int_{0}^{\infty}r^{p-1}\left(\tau_{u,q_{k}}^{\frac{1}{\alpha}}(r)\chi_{\left[0,\left(1+\frac{1}{q_{k}\alpha}\right)r_{0}(q_{k})\right]}(r)-v_{u}^{\frac{1}{\alpha}}(r)\chi_{[0,l_{u}]}(r)\right)dr=0.$
Since the integrand in the first integral is non-negative, by continuity of
the functions involved we have that for every $r\in[0,\infty)$
$v_{u}(r)\chi_{[0,l_{u}]}(r)=\left(v_{u}(\overline{r})-\lambda(r-\overline{r})\right)\chi_{\left[0,\left(1+\frac{1}{q\alpha}\right)\overline{r}\right]}(r)$
and then $v_{u}$ is an affine function. Therefore, if $v_{u}$ is not linear,
there exists $\overline{\varepsilon}>0$ such that for every $0<q\leq p$
$p\int_{0}^{l_{u}}r^{p-1}v_{u}^{\frac{1}{\alpha}}(r)dr+\overline{\varepsilon}\leq
pg(r_{0}u)\frac{\left(1+q\alpha\right)^{p+\frac{1}{\alpha}}}{(q\alpha)^{p}}r_{0}^{p}\beta\left(p,1+\frac{1}{\alpha}\right)$
and then there exists $\varepsilon>0$ such that for every $0<q\leq p$
$\displaystyle\frac{q\alpha}{\left(1+q\alpha\right)^{1+\frac{1}{p\alpha}}}{{p+\frac{1}{\alpha}}\choose{p}}^{\frac{1}{p}}\left(\rho_{K_{p}(g)}(u)+\varepsilon\right)$
$\displaystyle\leq$
$\displaystyle\frac{q\alpha}{\left(1+q\alpha\right)^{1+\frac{1}{p\alpha}}}{{p+\frac{1}{\alpha}}\choose{p}}^{\frac{1}{p}}\left(\rho_{K_{p}(g)}^{p}(u)+\overline{\varepsilon}\right)^{\frac{1}{p}}$
$\displaystyle\leq$ $\displaystyle g(r_{0}u)^{\frac{1}{p}}r_{0}.$
Proceeding now as in the proof of the inequality, we obtain the result. ∎
## 4. Brunn-Minkowski inequality for $\theta$-convolution bodies
In this section we are going to prove Theorem 1.1.
###### Proof of Theorem 1.1.
Let $K,L\subseteq\mathbb{R}^{n}$ be a pair of convex bodies. By (2.1) we can
assume, without loss of generality, that
$M(K,L)=\max_{z\in\mathbb{R}^{n}}|K\cap(z-L)|=|K\cap(-L)|.$
Then, the function $g_{K,L}:K+L\to[0,\infty)$ given by
$g_{K,L}(z)=|K\cap(z-L)|$, which is a continuous $\frac{1}{n}$-concave
function, verifies that $\|g_{K,L}\|_{\infty}=g_{K,L}(0)$ and $g_{K,L}(z)=0$
for every $z\in\partial(K+L)$.
By Theorem 1.2 with $p=n$, we have that for every $0\leq t\leq\frac{1}{4}$
$t{{2n}\choose{n}}^{\frac{1}{n}}K_{n}(g_{K,L})\subseteq L_{1-t}(g_{K,L}).$
Equivalently, taking $t=1-\theta^{\frac{1}{n}}$ we have that if
$\left(\frac{3}{4}\right)^{n}\leq\theta<1$
${{2n}\choose{n}}^{\frac{1}{n}}K_{n}(g_{K,L})\subseteq\frac{L_{\theta^{\frac{1}{n}}}(g_{K,L})}{1-\theta^{\frac{1}{n}}}=\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}}.$
Taking volumes, and taking into account that
$|K_{n}(g_{K,L})|=\int_{\mathbb{R}^{n}}\frac{g_{K,L}(x)}{g_{K,L}(0)}dx=\int_{\mathbb{R}^{n}}\frac{g_{K,L}(x)}{\|g_{K,L}\|_{\infty}}dx=\frac{|K||L|}{M(K,L)}$
we obtain that for every $\left(\frac{3}{4}\right)^{n}\leq\theta<1$
$\displaystyle\left|\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}}\right|^{\frac{1}{n}}$
$\displaystyle\geq$
$\displaystyle{{2n}\choose{n}}^{\frac{1}{n}}|K_{n}(g_{K,L})|^{\frac{1}{n}}={{2n}\choose{n}}^{\frac{1}{n}}\frac{|K|^{\frac{1}{n}}|L|^{\frac{1}{n}}}{M(K,L)^{\frac{1}{n}}}\geq{{2n}\choose{n}}^{\frac{1}{n}}\frac{|K|^{\frac{1}{n}}|L|^{\frac{1}{n}}}{\min\\{|K|^{\frac{1}{n}},|L|^{\frac{1}{n}}\\}}$
$\displaystyle=$
$\displaystyle{{2n}\choose{n}}^{\frac{1}{n}}\max\\{|K|^{\frac{1}{n}},|L|^{\frac{1}{n}}\\}\geq\frac{1}{2}{{2n}\choose{n}}^{\frac{1}{n}}(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}).$
Assume that there is equality for some
$\left(\frac{3}{4}\right)^{n}\leq\theta_{0}<1$. Then, by the equality cases in
Theorem 1.2, we have that
$g_{K,L}(x)=M(K,L)\left(1-\|x\|_{K+L}\right)^{n}\quad\forall x\in K+L.$
Otherwise, we have that for every $0\leq t\leq\frac{1}{4}$
$t{{2n}\choose{n}}^{\frac{1}{n}}K_{n}(g_{K,L})\subsetneq L_{1-t}(g_{K,L}),$
or, equivalently, taking $t=1-\theta^{\frac{1}{n}}$ we have that for every
$\left(\frac{3}{4}\right)^{n}\leq\theta<1$
${{2n}\choose{n}}^{\frac{1}{n}}K_{n}(g_{K,L})\subsetneq\frac{L_{\theta^{\frac{1}{n}}}(g_{K,L})}{1-\theta^{\frac{1}{n}}}=\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}},$
and then, in particular,
$\left|\frac{K+_{\theta_{0}}L}{1-\theta_{0}^{\frac{1}{n}}}\right|^{\frac{1}{n}}>{{2n}\choose{n}}^{\frac{1}{n}}|K_{n}(g_{K,L})|^{\frac{1}{n}}\geq\frac{1}{2}{{2n}\choose{n}}^{\frac{1}{n}}(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}),$
which contradicts the equality at $\theta_{0}$.
Therefore, if there is equality for some
$\left(\frac{3}{4}\right)^{n}\leq\theta_{0}<1$, we have that for every
$0\leq\theta<1$
$K+_{\theta}L=(1-\theta^{\frac{1}{n}})(K+L)$
and then, as mentioned in Section 2.3, $K=-L$ is a simplex.
For any $0\leq\theta\leq\left(\frac{3}{4}\right)^{n}$ we have that
$0\leq\theta^{\frac{1}{n}}\leq\frac{3}{4}$ and
$\theta^{\frac{1}{n}}=\left(\frac{4}{3}\theta^{\frac{1}{n}}\right)\frac{3}{4}+\left(1-\frac{4}{3}\theta^{\frac{1}{n}}\right)0.$
Since $g_{K,L}^{\frac{1}{n}}$ is concave on $K+L$, we have that
$\displaystyle K+_{\theta}L$ $\displaystyle=$ $\displaystyle
L_{\theta^{\frac{1}{n}}}(g_{K,L})\supseteq\frac{4}{3}\theta^{\frac{1}{n}}L_{\frac{3}{4}}(g_{K,L})+\left(1-\frac{4}{3}\theta^{\frac{1}{n}}\right)L_{0}(g_{K,L})$
$\displaystyle\supseteq$
$\displaystyle\frac{1}{3}\theta^{\frac{1}{n}}{{2n}\choose{n}}^{\frac{1}{n}}K_{n}(g_{K,L})+\left(1-\frac{4}{3}\theta^{\frac{1}{n}}\right)(K+L),$
where the last inclusion relation is a consequence of Theorem 1.2. Taking
volumes and using the Brunn-Minkowski inequality, we obtain that
$\displaystyle|K+_{\theta}L|^{\frac{1}{n}}$ $\displaystyle\geq$
$\displaystyle\frac{1}{3}\theta^{\frac{1}{n}}{{2n}\choose{n}}^{\frac{1}{n}}|K_{n}(g_{K,L})|^{\frac{1}{n}}+\left(1-\frac{4}{3}\theta^{\frac{1}{n}}\right)|K+L|^{\frac{1}{n}}$
$\displaystyle\geq$
$\displaystyle\frac{1}{3}\theta^{\frac{1}{n}}{{2n}\choose{n}}^{\frac{1}{n}}\frac{|K|^{\frac{1}{n}}|L|^{\frac{1}{n}}}{M(K,L)^{\frac{1}{n}}}+\left(1-\frac{4}{3}\theta^{\frac{1}{n}}\right)\left(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}\right)$
$\displaystyle\geq$
$\displaystyle\frac{1}{3}\theta^{\frac{1}{n}}{{2n}\choose{n}}^{\frac{1}{n}}\max\\{|K|^{\frac{1}{n}},|L|^{\frac{1}{n}}\\}+\left(1-\frac{4}{3}\theta^{\frac{1}{n}}\right)\left(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}\right)$
$\displaystyle\geq$
$\displaystyle\frac{1}{6}\theta^{\frac{1}{n}}{{2n}\choose{n}}^{\frac{1}{n}}\left(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}\right)+\left(1-\frac{4}{3}\theta^{\frac{1}{n}}\right)\left(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}\right)$
$\displaystyle=$
$\displaystyle\left(1-\left(\frac{4}{3}-\frac{1}{6}{{2n}\choose{n}}^{\frac{1}{n}}\right)\theta^{\frac{1}{n}}\right)\left(|K|^{\frac{1}{n}}+|L|^{\frac{1}{n}}\right).$
∎
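As an illustrative numerical sanity check of the equality case (our own addition, not part of the proof): for $n=1$, $K=[0,1]$ and $L=[-1,0]$, we have $K=-L$ (a one-dimensional simplex), $g_{K,L}(z)=\max(0,1-|z|)$, and the superlevel set $K+_{\theta}L=L_{\theta^{1/n}}(g_{K,L})=L_{\theta}(g_{K,L})$ should coincide with $(1-\theta^{1/n})(K+L)=(1-\theta)[-1,1]$, an interval of length $2(1-\theta)$.

```python
import numpy as np

# n = 1, K = [0, 1], L = [-1, 0], so K = -L and K + L = [-1, 1].
# g(z) = |K ∩ (z - L)| = |[0, 1] ∩ [z, z + 1]| = max(0, 1 - |z|).
z = np.linspace(-1.0, 1.0, 200001)
g = np.maximum(0.0, 1.0 - np.abs(z))

for theta in (0.2, 0.5, 0.8):
    level = z[g >= theta * g.max()]            # superlevel set {g >= theta * max g}
    length = level.max() - level.min()
    # expected: the interval (1 - theta) * (K + L), of length 2 * (1 - theta)
    assert abs(length - 2.0 * (1.0 - theta)) < 1e-3
```

The check is grid-based, so the comparison is only up to the grid spacing; it merely illustrates that for a simplex the $\theta$-convolution body scales linearly with $1-\theta^{1/n}$, as the equality case asserts.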
Finally, let us point out that, as a consequence of the estimates obtained in
the proof of Theorem 1.1, we can obtain another proof of inequality (2.3):
###### Corollary 4.1.
Let $K,L\subseteq\mathbb{R}^{n}$ be a pair of convex bodies such that
$M(K,L)=|K\cap(-L)|$. Then
$|C_{1}(K,L)|^{\frac{1}{n}}\geq\frac{1}{n}{{2n}\choose{n}}^{\frac{1}{n}}\frac{|K|^{\frac{1}{n}}|L|^{\frac{1}{n}}}{M(K,L)^{\frac{1}{n}}},$
with equality if and only if $K=-L$ is a simplex.
###### Proof.
We have seen in the previous proof that for every
$\left(\frac{3}{4}\right)^{n}\leq\theta<1$
$\left|\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}}\right|^{\frac{1}{n}}\geq{{2n}\choose{n}}^{\frac{1}{n}}\frac{|K|^{\frac{1}{n}}|L|^{\frac{1}{n}}}{M(K,L)^{\frac{1}{n}}}.$
Taking the limit as $\theta\to 1^{-}$ and taking into account that
$\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}}$ is an increasing family of
convex bodies in $\theta$ such that
$\lim_{\theta\to
1^{-}}\frac{K+_{\theta}L}{1-\theta^{\frac{1}{n}}}=\lim_{\theta\to
1^{-}}\frac{1-\theta}{1-\theta^{\frac{1}{n}}}\frac{K+_{\theta}L}{1-\theta}=nC_{1}(K,L),$
we obtain the result. Moreover, if $K=-L$ is a simplex, the inequality above
becomes Zhang’s inequality (2.2). On the other hand, if there is equality, by
the equality case in Theorem 1.2, then necessarily
$g_{K,L}=M(K,L)(1-\|x\|_{K+L})^{n}$. Therefore, for every $\theta\in[0,1]$ we
have that
$K+_{\theta}L=(1-\theta^{\frac{1}{n}})(K+L)$
and then $K=-L$ is a simplex. ∎
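The limit computation used above is a one-line application of L'Hôpital's rule:

```latex
\lim_{\theta\to 1^{-}}\frac{1-\theta}{1-\theta^{\frac{1}{n}}}
 =\lim_{\theta\to 1^{-}}\frac{-1}{-\frac{1}{n}\,\theta^{\frac{1}{n}-1}}=n.
```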
# Differentially Private ADMM-Based Distributed Discrete Optimal Transport for
Resource Allocation
Jason Hughes and Juntao Chen The authors are with the Department of Computer
and Information Sciences, Fordham University, New York, NY, 10023 USA. E-mail:
<EMAIL_ADDRESS>. This work was supported in part by the
National Science Foundation under Grant ECCS-2138956, and in part by a Faculty
Research Grant from the Fordham Office of Research.
###### Abstract
Optimal transport (OT) is a framework that can guide the design of efficient
resource allocation strategies in a network of multiple sources and targets.
To ease the computational complexity of large-scale transport design, we first
develop a distributed algorithm based on the alternating direction method of
multipliers (ADMM). However, such a distributed algorithm is vulnerable to
sensitive information leakage when an attacker intercepts the transport
decisions communicated between nodes during the distributed ADMM updates. To
this end, we propose a privacy-preserving distributed mechanism based on
output variable perturbation by adding appropriate randomness to each node’s
decision before it is shared with other corresponding nodes at each update
instance. We show that the developed scheme is differentially private, which
prevents the adversary from inferring a node’s confidential information even
when knowing the transport decisions. Finally, we corroborate the effectiveness
the devised algorithm through case studies.
## I Introduction
The optimal transport (OT) paradigm can be leveraged to guide the most
efficient allocation of a limited amount of resources from a set of sources to
a set of targets by considering their heterogeneous preferences [1, 2]. The
standard OT framework computes the transport strategy in a centralized manner,
which requires the source and target nodes to send their information to a
centralized transport planner. This centralized computation mechanism is not
scalable when the transport network includes a large number of participants.
Thus, it is imperative to design a computationally efficient scheme that
applies to large-scale transport design.
To this end, a distributed algorithm based on the alternating direction method
of multipliers (ADMM) can be used. In the distributed
computation scheme, each node communicates directly with the connected nodes
regarding the transport decisions and reaches a consensus through iterative
negotiations. Under this paradigm, the central planner does not necessarily
need to coordinate the resource matching. The distributed OT design eliminates
the necessity of a centralized communication network where each node reports
its preference information to the central planner. Instead, the
communication occurs between each pair of connected source and target nodes
enabled by a peer-to-peer network. Thus, the ADMM-based distributed algorithm
does not require sharing all the nodes’ information over the network.
However, the distributed OT algorithm still faces adversarial threats [3].
Specifically, the nodes need to communicate their computed resource transport
preferences with the connected nodes at each update step in the algorithm.
This information could be intercepted by an adversary during its transmission
over the communication network (e.g., through an eavesdropping attack). The
attacker can then use it to infer the private information at each
participating node (e.g., node’s utility parameters used for the design of
transport plan).
The privacy concerns of the distributed OT motivate us to develop an efficient
privacy-preserving mechanism that can protect the nodes’ sensitive utility
information. To do this, we resort to the powerful differential privacy
technique [4]. Specifically, we develop an output variable perturbation-based
differentially private distributed OT scheme. In this algorithm, instead of
sharing the authentic transport strategies directly between connected source
and target nodes, each node perturbs their transport decisions by adding a
random noise drawn from an appropriate distribution with specified parameters
at each step. The proposed algorithm prevents leakage of sensitive information
of participants in the network even if the transport strategies shared between
nodes during updates are captured by the adversary.
The contributions of this paper are presented as follows.
1. We develop a distributed OT design framework based on the alternating direction method of multipliers to compute the OT strategies efficiently.
2. We incorporate privacy considerations into the distributed OT and propose a differentially private distributed OT algorithm based on an output variable perturbation mechanism.
3. We demonstrate the effectiveness of the developed algorithm through case studies and characterize the trade-off between a node’s privacy and transport utility.
Related Works. Differential privacy has been applied to many fields,
especially in artificial intelligence and machine learning. Differential
privacy has been studied with specific application to ADMM-based distributed
algorithms for both learning and optimization in [5, 6]. Specifically,
perturbation-based ADMM algorithms were developed to improve privacy in
classification learning problems [7, 8]. Differential privacy has also been
leveraged to investigate privacy issues in empirical risk minimization [9],
support vector machines [10] and deep learning [11]. Additionally,
differential privacy has been applied to tackle the privacy issues in various
societal applications, including fog computing [12], private data trading
through contracts [13], federated learning in the Internet of things [14], and
vehicular networks [15]. In this work, we address the privacy concerns in the
ADMM-based distributed OT algorithm based on differential privacy and develop
a protection scheme that has a theoretical guarantee to maintain the privacy
of the information at each participating transport node.
## II Discrete Optimal Transport over Networks and Distributed Algorithm
This section presents the framework of discrete optimal transport over a
network and then develops a distributed algorithm to compute the optimal
transport plan.
### II-A Discrete Optimal Transport
We denote by $\mathcal{X}:=\\{1,...,|\mathcal{X}|\\}$ a set of
destination/target nodes that receive the resources, and
$\mathcal{Y}:=\\{|\mathcal{X}|+1,...,|\mathcal{X}|+|\mathcal{Y}|\\}$ a set of
origin/source nodes that distribute resources to the targets over a transport
network. Additionally, we define $\mathcal{P}=\mathcal{X}\cup\mathcal{Y}$ as
the set of all nodes. Each source node $y\in\mathcal{Y}$ is connected to a
number of target nodes denoted by $\mathcal{X}_{y}$, representing that $y$ can
choose to allocate its resources to a specific group of destinations
$\mathcal{X}_{y}$. Similarly, each target node $x\in\mathcal{X}$ can receive
resources from multiple source nodes, and this set of resource suppliers to
target $x$ is denoted by $\mathcal{Y}_{x}$. It can be seen that the resources
are transported over a bipartite network, where one side of the network
consists of all source nodes and the other includes all destination nodes.
This bipartite network is not necessarily complete because of constrained
matching policies between participants. We further denote by $\mathcal{E}$ the
set of all feasible transport paths in the network, i.e.,
$\mathcal{E}:=\\{\\{x,y\\}|x\in\mathcal{X}_{y},y\in\mathcal{Y}\\}$. Here,
$\mathcal{E}$ also refers to the set of all edges in the established bipartite
graph for resource transportation.
We next denote by $\pi_{xy}\in\mathbb{R}_{+}$ the amount of resources
transported from the origin node $y\in\mathcal{Y}$ to the destination node
$x\in\mathcal{X}$, where $\mathbb{R}_{+}$ is the set of nonnegative real
numbers. Let $\Pi:=\\{\pi_{xy}\\}_{x\in\mathcal{X}_{y},y\in\mathcal{Y}}$ be
the designed transport plan. Then, the centralized optimal transport problem
can be formulated as follows:
$\displaystyle\max_{\Pi}\ \sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}$
$\displaystyle
t_{xy}(\pi_{xy})+\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy})$
(1) $\displaystyle\mathrm{s.t.}$
$\displaystyle\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}\leq\bar{p}_{x},\
\forall x\in\mathcal{X},$
$\displaystyle\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy}\leq\bar{q}_{y},\
\forall y\in\mathcal{Y},$ $\displaystyle\pi_{xy}\geq 0,\
\forall\\{x,y\\}\in\mathcal{E},$
where $t_{xy}:\mathbb{R}_{+}\rightarrow\mathbb{R}$ and
$s_{xy}:\mathbb{R}_{+}\rightarrow\mathbb{R}$ are utility functions for target
node $x$ and source node $y$, respectively. Furthermore,
$\bar{p}_{x}\geq\underline{p}_{x}\geq 0$, $\forall x\in\mathcal{X}$ and
$\bar{q}_{y}\geq\underline{q}_{y}\geq 0$, $\forall y\in\mathcal{Y}$. The
constraints
$\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}\leq\bar{p}_{x}$ and
$\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy}\leq\bar{q}_{y}$
capture the limitations on the amount of requested and transferred resources
at the target $x$ and source $y$, respectively.
We have the following assumption on the utility functions.
###### Assumption 1.
The utility functions $t_{xy}$ and $s_{xy}$ are concave and monotonically
increasing in $\pi_{xy}$, $\forall x\in\mathcal{X},\forall y\in\mathcal{Y}$.
Moreover, they are continuously differentiable with $t^{\prime}_{xy}\leq\rho$
and $s^{\prime}_{xy}\leq\rho$, where $\rho$ is a positive constant.
A rich class of functions satisfies the conditions in Assumption 1. For
example, the utility functions $t_{xy}$ and $s_{xy}$ can be linear in
$\pi_{xy}$, indicating a linear growth of benefits in the amount of
transferred and consumed resources.
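To make the formulation concrete, here is a small sketch (our own toy instance with made-up numbers, not from the paper) that solves (1) with linear utilities as a linear program: with $t_{xy}(\pi_{xy})=\delta_{xy}\pi_{xy}$ and $s_{xy}(\pi_{xy})=\gamma_{xy}\pi_{xy}$, the objective reduces to maximizing $\sum_{x,y}w_{xy}\pi_{xy}$ with $w_{xy}=\delta_{xy}+\gamma_{xy}$.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: two targets (rows), two sources (columns).
w = np.array([[3.5, 3.0],    # combined per-unit utilities w_xy = delta_xy + gamma_xy
              [3.0, 4.0]])
p_hi = [5.0, 5.0]            # upper caps on what each target may receive
q_hi = [4.0, 4.0]            # upper caps on what each source may send

# Flatten pi_xy row-major; maximizing sum(w * pi) == minimizing -w . pi.
A_ub = np.vstack([
    np.kron(np.eye(2), np.ones(2)),   # row sums:    sum_y pi_xy <= p_hi[x]
    np.kron(np.ones(2), np.eye(2)),   # column sums: sum_x pi_xy <= q_hi[y]
])
res = linprog(c=-w.ravel(), A_ub=A_ub, b_ub=p_hi + q_hi, bounds=(0, None))
pi = res.x.reshape(2, 2)
total_utility = -res.fun
```

For simplicity, the lower caps $\underline{p}_{x},\underline{q}_{y}$ are taken to be zero here; nonzero lower bounds would add the mirrored rows $-\sum_{y}\pi_{xy}\leq-\underline{p}_{x}$ and $-\sum_{x}\pi_{xy}\leq-\underline{q}_{y}$ to `A_ub`.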
### II-B Distributed Optimal Transport
Next, we establish a distributed algorithm for computing the optimal transport
strategy in (1). Our first step is to reformulate the optimization problem by
introducing ancillary variables $\pi_{xy,t}$ and $\pi_{xy,s}$. The additional
subscripts $t$ and $s$ indicate that the corresponding parameters belong to
the target node or the source node, respectively. We then set
$\pi_{xy}=\pi_{xy,t}$ and $\pi_{xy}=\pi_{xy,s}$, indicating that the solutions
proposed by the targets and sources are consistent. This reformulation
facilitates the design of a distributed algorithm which allows us to iterate
through the process in obtaining the optimal transport plan. To this end, the
reformulated optimal transport problem is presented as follows:
$\displaystyle\min_{\Pi_{t}\in\mathcal{F}_{t},\Pi_{s}\in\mathcal{F}_{s},\Pi}$
$\displaystyle-\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}t_{xy}(\pi_{xy,t})-\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy,s})$
(2) $\displaystyle\mathrm{s.t.}$ $\displaystyle\pi_{xy,t}=\pi_{xy},\
\forall\\{x,y\\}\in\mathcal{E},$ $\displaystyle\pi_{xy,s}=\pi_{xy},\
\forall\\{x,y\\}\in\mathcal{E},$
where $\Pi_{t}:=\\{\pi_{xy,t}\\}_{x\in\mathcal{X}_{y},y\in\mathcal{Y}}$,
$\Pi_{s}:=\\{\pi_{xy,s}\\}_{x\in\mathcal{X},y\in\mathcal{Y}_{x}}$,
$\mathcal{F}_{t}:=\\{\Pi_{t}|\pi_{xy,t}\geq
0,\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy,t}\leq\bar{p}_{x},\
\\{x,y\\}\in\mathcal{E}\\}$, and $\mathcal{F}_{s}:=\\{\Pi_{s}|\pi_{xy,s}\geq
0,\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy,s}\leq\bar{q}_{y},\
\\{x,y\\}\in\mathcal{E}\\}$.
We resort to the alternating direction method of multipliers (ADMM) [16] to
develop a distributed computational algorithm. First, let $\alpha_{xy,s}$ and
$\alpha_{xy,t}$ be the Lagrangian multipliers associated with the constraint
$\pi_{xy,s}=\pi_{xy}$ and $\pi_{xy,t}=\pi_{xy}$, respectively. The Lagrangian
function associated with the optimization problem (2) can then be written as
follows:
$\begin{split}&L\left(\Pi_{t},\Pi_{s},\Pi,\alpha_{xy,t},\alpha_{xy,s}\right)=-\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}t_{xy}(\pi_{xy,t})\\\
&-\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy,s})+\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy,t}(\pi_{xy,t}-\pi_{xy})\\\
&+\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}\alpha_{xy,s}(\pi_{xy}-\pi_{xy,s})+\frac{\eta}{2}\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}(\pi_{xy,t}-\pi_{xy})^{2}\\\
&+\frac{\eta}{2}\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}(\pi_{xy}-\pi_{xy,s})^{2},\end{split}$
(3)
where $\eta>0$ is a positive scalar constant controlling the convergence rate
in the algorithm designed below.
Note that in (3), the last two terms
$\frac{\eta}{2}\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}(\pi_{xy,t}-\pi_{xy})^{2}$
and
$\frac{\eta}{2}\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}(\pi_{xy}-\pi_{xy,s})^{2}$,
acting as penalization, are quadratic. Hence, the Lagrangian function $L$ is
strictly convex, ensuring the existence of a unique optimal solution.
We next apply ADMM to the minimization problem in (2). The designed
distributed algorithm is presented in the following proposition.
###### Proposition 1.
The iterative steps of applying ADMM to problem (2) are summarized as follows:
$\begin{split}\Pi_{x,t}(k+1)&\in\arg\min_{\Pi_{x,t}\in\mathcal{F}_{x,t}}-\sum_{y\in\mathcal{Y}_{x}}t_{xy}(\pi_{xy,t})\\\
&+\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy,t}(k)\pi_{xy,t}+\frac{\eta}{2}\sum_{y\in\mathcal{Y}_{x}}(\pi_{xy,t}-\pi_{xy}(k))^{2},\end{split}$
(4) $\displaystyle\Pi_{y,s}(k+$ $\displaystyle
1)\in\arg\min_{\Pi_{y,s}\in\mathcal{F}_{y,s}}-\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy,s})$
(5)
$\displaystyle-\sum_{x\in\mathcal{X}_{y}}\alpha_{xy,s}(k)\pi_{xy,s}+\frac{\eta}{2}\sum_{x\in\mathcal{X}_{y}}(\pi_{xy}(k)-\pi_{xy,s})^{2},$
$\begin{split}\pi_{xy}(&k+1)=\arg\min_{\pi_{xy}}-\alpha_{xy,t}(k)\pi_{xy}+\alpha_{xy,s}(k)\pi_{xy}\\\
&+\frac{\eta}{2}(\pi_{xy,t}(k+1)-\pi_{xy})^{2}+\frac{\eta}{2}(\pi_{xy}-\pi_{xy,s}(k+1))^{2},\end{split}$
(6)
$\begin{split}\alpha_{xy,t}(k+1)=\alpha_{xy,t}(k)+\eta(\pi_{xy,t}(k+1)-\pi_{xy}(k+1)),\end{split}$
(7)
$\begin{split}\alpha_{xy,s}(k+1)=\alpha_{xy,s}(k)+\eta(\pi_{xy}(k+1)-\pi_{xy,s}(k+1)),\end{split}$
(8)
where $\Pi_{\tilde{x},t}:=\\{\pi_{xy,t}\\}_{y\in\mathcal{Y}_{x},x=\tilde{x}}$
represents the solution at target node $\tilde{x}\in\mathcal{X}$, and
$\Pi_{\tilde{y},s}:=\\{\pi_{xy,s}\\}_{x\in\mathcal{X}_{y},y=\tilde{y}}$
represents the proposed solution at source node $\tilde{y}\in\mathcal{Y}$. In
addition, $\mathcal{F}_{x,t}:=\\{\Pi_{x,t}|\pi_{xy,t}\geq
0,y\in\mathcal{Y}_{x},\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy,t}\leq\bar{p}_{x}\\}$,
and $\mathcal{F}_{y,s}:=\\{\Pi_{y,s}|\pi_{xy,s}\geq
0,x\in\mathcal{X}_{y},\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy,s}\leq\bar{q}_{y}\\}$.
###### Proof.
See Appendix -A. ∎
We can simplify steps (4)-(8) down to four steps, and the results are
summarized below.
###### Proposition 2.
The iterations (4)-(8) can be simplified as
$\begin{split}\Pi_{x,t}(k+1)&\in\arg\min_{\Pi_{x,t}\in\mathcal{F}_{x,t}}-\sum_{y\in\mathcal{Y}_{x}}t_{xy}(\pi_{xy,t})\\\
&+\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy}(k)\pi_{xy,t}+\frac{\eta}{2}\sum_{y\in\mathcal{Y}_{x}}\left(\pi_{xy,t}-\pi_{xy}(k)\right)^{2},\end{split}$
(9)
$\begin{split}\Pi_{y,s}(k+&1)\in\arg\min_{\Pi_{y,s}\in\mathcal{F}_{y,s}}-\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy,s})\\\
&-\sum_{x\in\mathcal{X}_{y}}\alpha_{xy}(k)\pi_{xy,s}+\frac{\eta}{2}\sum_{x\in\mathcal{X}_{y}}\left(\pi_{xy}(k)-\pi_{xy,s}\right)^{2},\end{split}$
(10)
$\begin{split}\pi_{xy}(k+1)=\frac{1}{2}\left(\pi_{xy,t}(k+1)+\pi_{xy,s}(k+1)\right),\end{split}$
(11)
$\begin{split}\alpha_{xy}(k+1)=\alpha_{xy}(k)+\frac{\eta}{2}\left(\pi_{xy,t}(k+1)-\pi_{xy,s}(k+1)\right).\end{split}$
(12)
###### Proof.
The simplification can be obtained straightforwardly by first characterizing
the solution to (6) and then substituting it into (7) and (8). ∎
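The substitution argument can be spelled out (our expansion of the one-line proof above). The first-order condition of the unconstrained quadratic problem (6) gives

```latex
-\alpha_{xy,t}(k)+\alpha_{xy,s}(k)
 -\eta\bigl(\pi_{xy,t}(k+1)-\pi_{xy}\bigr)
 +\eta\bigl(\pi_{xy}-\pi_{xy,s}(k+1)\bigr)=0
\;\Longrightarrow\;
\pi_{xy}(k+1)=\frac{\pi_{xy,t}(k+1)+\pi_{xy,s}(k+1)}{2}
 +\frac{\alpha_{xy,t}(k)-\alpha_{xy,s}(k)}{2\eta},
```

and substituting this into the dual updates (7) and (8) yields, for both multipliers,

```latex
\alpha_{xy,t}(k+1)=\alpha_{xy,s}(k+1)
 =\frac{\alpha_{xy,t}(k)+\alpha_{xy,s}(k)}{2}
 +\frac{\eta}{2}\bigl(\pi_{xy,t}(k+1)-\pi_{xy,s}(k+1)\bigr),
```

so the two multipliers coincide after one iteration; writing $\alpha_{xy}$ for the common value gives exactly (11) and (12).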
For convenience, we summarize the distributed OT scheme in Algorithm 1.
## III Differentially Private Distributed Optimal Transport
In this section, we first present the privacy concerns in the developed
distributed OT in Section II. We then develop a differentially private
distributed OT algorithm that preserves nodes’ privacy explicitly during
decision updates.
### III-A Privacy Concerns in the Distributed OT
In the previous distributed OT algorithm, the intermediate results are shared
between connected nodes during updates. This sharing mechanism raises privacy
concerns, as an adversary that can access these results (e.g., through an
eavesdropping attack) has the ability to infer the participants’ private
information. Specifically, the adversary could leverage the compromised
information $\Pi_{x,t}(k)$ and $\Pi_{y,s}(k)$ at each update step, $k$, to
infer the node’s private information including the sensitive preference
parameters in the utility functions $t_{xy}$ and $s_{xy}$. We denote the set
of private preference information at node $p$ by $D_{p}$, $p\in\mathcal{P}$.
Algorithm 1 Distributed OT Algorithm
1:while $\Pi_{x,t}$ and $\Pi_{y,s}$ not converging do
2: Compute $\Pi_{x,t}(k+1)$ using (9), for all $x\in\mathcal{X}$
3: Compute $\Pi_{y,s}(k+1)$ using (10), for all $y\in\mathcal{Y}$
4: Compute $\pi_{xy}(k+1)$ using (11), for all $\\{x,y\\}\in\mathcal{E}$
5: Compute $\alpha_{xy}(k+1)$ using (12), for all $\\{x,y\\}\in\mathcal{E}$
6:end while
7:return $\pi_{xy}(k+1)$, for all $\\{x,y\\}\in\mathcal{E}$
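As a sketch (our own minimal implementation with illustrative data and an assumed complete bipartite network, not the authors' code), Algorithm 1 can be implemented for linear utilities $t_{xy}(\pi)=\delta_{xy}\pi$ and $s_{xy}(\pi)=\gamma_{xy}\pi$: each local subproblem (9)-(10) then reduces to a Euclidean projection of a closed-form point onto the node's feasible set.

```python
import numpy as np

def project(z, lo, hi):
    """Euclidean projection of z onto {pi >= 0, lo <= sum(pi) <= hi}."""
    p = np.clip(z, 0.0, None)
    if lo <= p.sum() <= hi:
        return p
    target = lo if p.sum() < lo else hi
    # Project onto {pi >= 0, sum(pi) = target}: shift by t then clip;
    # bisect on t since the clipped sum is nondecreasing in t.
    a = -np.abs(z).max() - target - 1.0
    b = np.abs(z).max() + target + 1.0
    for _ in range(100):
        m = 0.5 * (a + b)
        if np.clip(z + m, 0.0, None).sum() > target:
            b = m
        else:
            a = m
    return np.clip(z + 0.5 * (a + b), 0.0, None)

def distributed_ot(delta, gamma, p_lo, p_hi, q_lo, q_hi, eta=1.0, iters=5000):
    """ADMM iterations (9)-(12); pi[x, y] = flow from source y to target x."""
    nx, ny = delta.shape
    pi = np.zeros((nx, ny))
    alpha = np.zeros((nx, ny))
    for _ in range(iters):
        # (9): each target's subproblem, closed form plus projection
        pi_t = np.vstack([project(pi[x] + (delta[x] - alpha[x]) / eta,
                                  p_lo[x], p_hi[x]) for x in range(nx)])
        # (10): each source's subproblem along its column
        pi_s = np.column_stack([project(pi[:, y] + (gamma[:, y] + alpha[:, y]) / eta,
                                        q_lo[y], q_hi[y]) for y in range(ny)])
        pi = 0.5 * (pi_t + pi_s)               # (11): consensus update
        alpha += 0.5 * eta * (pi_t - pi_s)     # (12): dual update
    return pi, np.abs(pi_t - pi_s).max()

# Toy run (made-up numbers): two targets, two sources.
delta = np.array([[2.0, 1.0], [1.0, 3.0]])
gamma = np.array([[1.5, 2.0], [2.0, 1.0]])
pi, residual = distributed_ot(delta, gamma,
                              p_lo=[0.0, 0.0], p_hi=[5.0, 5.0],
                              q_lo=[0.0, 0.0], q_hi=[4.0, 4.0])
```

The consensus residual $\max_{x,y}|\pi_{xy,t}-\pi_{xy,s}|$ serves as the stopping criterion of the while-loop in Algorithm 1; for concave (here linear) utilities and the convex feasible sets above, the iterates converge to an optimizer of (1).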
We next use an example to further illustrate a node’s private information set.
Specifically, we consider utility functions admitting a linear form for both
the sender and receiver: $t_{xy}(\pi_{xy})=\delta_{xy}\pi_{xy}$ and
$s_{xy}(\pi_{xy})=\gamma_{xy}\pi_{xy}$, where
$\delta_{xy},\gamma_{xy}\in\mathbb{R}_{+}$. Then, for a target node
$x\in\mathcal{X}$, we have set $D_{x}=\\{\delta_{xy}:\forall
y\in\mathcal{Y}_{x}\\}$. Similarly, for a source node $y\in\mathcal{Y}$, we
have set $D_{y}=\\{\gamma_{xy}:\forall x\in\mathcal{X}_{y}\\}$. The
information contained in $D_{p}$ is crucial for developing optimal transport
plans. Leakage of such private information is undesired in many resource
allocation scenarios, especially those with societal impacts. For example, in
the distribution of scarce vaccine resources, these preference parameters
could indicate the severity of epidemics in different neighborhoods (modeled
by nodes). It is obvious that each participant does not want to leak this
piece of information to other unauthorized parties.
To this end, we aim to protect the privacy of each node in the transport
network using differential privacy [4]. Specifically, we propose to add
randomness to the transport decisions communicated between each pair of
source-target nodes during updates, preventing the adversary from learning the
sensitive utility parameters of nodes simply based on the transport decisions.
To achieve this goal, first, let $D_{p}$ and $D_{p}^{\prime}$ be two
information/data sets that differ by one data point (utility parameter). In
other words, their Hamming distance is equal to 1, denoted by
$H(D_{p},D_{p}^{\prime})=1$. Here,
$H(D_{p},D_{p}^{\prime})=\sum_{i=1}^{|D_{p}|}\mathbf{1}\\{d_{i}\neq
d_{i}^{\prime}\\}$, where $d_{i}$ and $d_{i}^{\prime}$ denote the
$i^{\mathrm{th}}$ data point in the information set $D_{p}$ and
$D_{p}^{\prime}$, respectively. Recall that the data points in these sets
refer to the nodes’ utility parameters which we aim to protect from leakage
under the condition that the adversary intercepts the transport plans. The
formal definition of differential privacy is presented below.
###### Definition 1 ($\beta_{p}(k)$-Differential Privacy).
Consider the transport network $\mathcal{G}=\\{\mathcal{P},\mathcal{E}\\}$,
where $\mathcal{P}$ is composed of both source nodes and target nodes, and
$\mathcal{E}$ is a set of edges connecting the nodes. At each node
$p\in\mathcal{P}$, there is an information set $D_{p}$ which is used to
compute the resource transport plan. Let $R$ be a randomized counterpart of
Algorithm 1. Further, let
$\beta(k)=\left(\beta_{1}(k),\beta_{2}(k),...,\beta_{|\mathcal{P}|}(k)\right)\in\mathbb{R}_{+}^{|\mathcal{P}|}$,
where $\beta_{p}(k)\in\mathbb{R}_{+}$ is the privacy parameter of node $p$ at
iteration $k$. Consider the outputs $\Pi_{x,t}(k)$ and $\Pi_{y,s}(k)$ at
iteration $k$ of Algorithm 1. Let $D_{p}^{\prime}$ be any information set such
that $H(D_{p}^{\prime},D_{p})=1$ and $\widetilde{\Pi}_{x,t}(k)$ and
$\widetilde{\Pi}_{y,s}(k)$ be the corresponding outputs of Algorithm 1 while
using the information set $D^{\prime}_{p}$. The algorithm $R$ is
$\beta_{p}(k)$-differentially private for any $D^{\prime}_{p}$ for all nodes
$p\in\mathcal{P}$ and for all possible sets of outcome solutions $S$, if the
following condition is satisfied at every iteration $k$:
$\displaystyle\mathrm{Pr}[\Pi_{p}(k)\in
S]\leq\exp{(\beta_{p}(k))}\cdot\mathrm{Pr}[\widetilde{\Pi}_{p}(k)\in S],$ (13)
where $\Pi_{p}(k)=\begin{cases}\Pi_{p,t}(k),\ \mathrm{if}\ p\in\mathcal{X},\\\
\Pi_{p,s}(k),\ \mathrm{if}\ p\in\mathcal{Y},\end{cases}$ and
$\widetilde{\Pi}_{p}(k)=\begin{cases}\widetilde{\Pi}_{p,t}(k),\ \mathrm{if}\
p\in\mathcal{X},\\\ \widetilde{\Pi}_{p,s}(k),\ \mathrm{if}\
p\in\mathcal{Y}.\end{cases}$
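The adjacency notion $H(D_{p},D_{p}^{\prime})=1$ in Definition 1 can be illustrated with toy parameter values (ours, not from the paper):

```python
def hamming(D, D_prime):
    """Number of positions at which two equal-length ordered parameter lists differ."""
    return sum(1 for d, dp in zip(D, D_prime) if d != dp)

# Preference parameters delta_xy of one target node x with three connected sources;
# the adjacent set changes exactly one utility coefficient.
D_x       = [1.0, 2.5, 0.7]
D_x_prime = [1.0, 2.0, 0.7]
```

Differential privacy then requires that, at every iteration, the distributions of the published decisions under `D_x` and under any such adjacent `D_x_prime` be close in the multiplicative sense of (13).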
### III-B Output Variable Perturbation
In order to ensure that the sensitive preference information at each node
remains private when transport plans are published over the network, we
develop a differentially private algorithm based on output variable
perturbation. This algorithm involves adding random noise to the output
decision variables $\Pi_{x,t}(k+1)$ and $\Pi_{y,s}(k+1)$ during updates. More
specifically, the random noise vectors,
$\epsilon_{x}(k+1)\in\mathbb{R}^{|\mathcal{Y}_{x}|}$ and
$\epsilon_{y}(k+1)\in\mathbb{R}^{|\mathcal{X}_{y}|}$ are added to the
variables $\Pi_{x,t}(k+1)$ and $\Pi_{y,s}(k+1)$ obtained by (9) and (10),
respectively.
Recall that $p\in\mathcal{P}=\mathcal{X}\cup\mathcal{Y}$ and thus $p=x$,
$\forall x\in\mathcal{X}$, and $p=y$, $\forall y\in\mathcal{Y}$. The random
noise vector $\epsilon_{p}(k)$ is generated according to a distribution with
density proportional to $e^{-\xi_{p}(k)||\epsilon||}$. Here,
$\xi_{p}(k)=\frac{\rho}{\eta}\beta_{p}(k)$, where $\beta_{p}$ is a privacy
term at each node $p$. Thus, the proposed solutions at target node $x$ and
source node $y$ at step $k+1$ admit
$\begin{split}\Pi_{x,t}^{*}(k+1)=\Pi_{x,t}(k+1)+\epsilon_{x}(k+1),\\\
\Pi_{y,s}^{*}(k+1)=\Pi_{y,s}(k+1)+\epsilon_{y}(k+1),\end{split}$ (14)
where $\Pi_{x,t}^{*}$ and $\Pi_{y,s}^{*}$ are perturbed solutions of
$\Pi_{x,t}$ and $\Pi_{y,s}$, respectively. The distributed OT algorithm with
output perturbation includes the following steps:
$\begin{split}\Pi_{x,t}(k+1)\in\arg\min_{\Pi_{x}^{t}\in\mathcal{F}_{x}^{t}}-\sum_{y\in\mathcal{Y}_{x}}t_{xy}(\pi_{xy,t})\qquad\qquad\\\
+\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy}(k)\pi_{xy,t}+\frac{\eta}{2}\sum_{y\in\mathcal{Y}_{x}}\left(\pi_{xy,t}-\pi_{xy}(k)\right)^{2},\end{split}$
(15)
$\begin{split}\Pi_{x,t}^{*}(k+1)=\Pi_{x,t}(k+1)+\epsilon_{x}(k+1),\end{split}$
(16)
$\displaystyle\Pi_{y,s}(k+1)\in\arg\min_{\Pi_{y,s}\in\mathcal{F}_{y,s}}-\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy,s})$
(17)
$\displaystyle-\sum_{x\in\mathcal{X}_{y}}\alpha_{xy}(k)\pi_{xy,s}+\frac{\eta}{2}\sum_{x\in\mathcal{X}_{y}}\left(\pi_{xy}(k)-\pi_{xy,s}\right)^{2},$
$\begin{split}\Pi_{y,s}^{*}(k+1)=\Pi_{y,s}(k+1)+\epsilon_{y}(k+1),\end{split}$
(18)
$\begin{split}\pi_{xy}^{*}(k+1)=\frac{1}{2}\left(\pi_{xy,t}^{*}(k+1)+\pi_{xy,s}^{*}(k+1)\right),\end{split}$
(19)
$\begin{split}\alpha_{xy}(k+1)=\alpha_{xy}(k)+\frac{\eta}{2}\left(\pi_{xy,t}^{*}(k+1)-\pi_{xy,s}^{*}(k+1)\right).\end{split}$
(20)
Figure 1: Illustration of the differentially private distributed OT scheme.
The information exchanged between nodes is susceptible to be intercepted by
the adversary (e.g., by eavesdropping attack to the wireless channel). Hence,
an appropriate random noise is added to the outputs at each update step.
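The noise distribution with density proportional to $e^{-\xi||\epsilon||}$ can be sampled by drawing the norm and the direction separately: the norm follows a Gamma$(d,1/\xi)$ distribution in $d$ dimensions, and the direction is uniform on the unit sphere. The sketch below illustrates the perturbation steps (16) and (18); function names are ours, not the paper's, and $\xi=(\eta/\rho)\beta$ follows the choice made in the proof of Theorem 1.

```python
import math
import random

def sample_noise(dim, xi, rng=random):
    """Draw a noise vector in R^dim with density proportional to
    exp(-xi * ||eps||): the norm follows a Gamma(dim, 1/xi) distribution
    and the direction is uniform on the unit sphere."""
    radius = rng.gammavariate(dim, 1.0 / xi)
    direction = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(g * g for g in direction))
    return [radius * g / norm for g in direction]

def perturb_output(plan, beta, eta, rho, rng=random):
    """Output perturbation as in (16)/(18), with xi = (eta/rho) * beta."""
    xi = (eta / rho) * beta
    eps = sample_noise(len(plan), xi, rng)
    return [p + e for p, e in zip(plan, eps)]
```

A larger $\beta$ yields a larger $\xi$ and hence smaller noise, trading privacy for accuracy.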
As a result of the perturbation in (16) and (18), $\Pi_{x,t}^{*}(k)$ and
$\Pi_{y,s}^{*}(k)$ are randomized. Specifically, within each iteration, the
node perturbs the output variable $\Pi_{x,t}(k)$ or $\Pi_{y,s}(k)$
respectively in order to obtain $\Pi_{x,t}^{*}(k)$ or $\Pi_{y,s}^{*}(k)$. The
proposed scheme is further illustrated in Fig. 1. It is important to note that
the information sets at each node, i.e., $D_{p}$ containing the sensitive
utility parameters, remain untouched and unperturbed. Unlike the distributed
algorithm in Section II-B, the transport strategy under random output
perturbation does not converge to a deterministic value. Instead, the
algorithm converges approximately, oscillating around the optimal solution
within a bounded interval. The magnitude of the oscillation is directly
related to the differential privacy parameter $\beta_{p}$ chosen by each node
$p\in\mathcal{P}$. As $\beta_{p}$ becomes larger, $\forall p\in\mathcal{P}$,
the differentially private algorithm tends toward the solution yielded by
Algorithm 1. We will test the convergence of the proposed algorithm using
case studies. For convenience, the differentially private distributed OT
algorithm based on the output variable perturbation is summarized in Algorithm
2.
Algorithm 2 Differentially Private Distributed OT Algorithm With Output
Variable Perturbation
1:for $k=0,1,2,...$ do
2: for $x\in\mathcal{X}$ do
3: Compute $\Pi_{x,t}(k+1)$ using (15)
4: Compute $\Pi_{x,t}^{*}(k+1)$ using (16)
5: end for
6: for $y\in\mathcal{Y}$ do
7: Compute $\Pi_{y,s}(k+1)$ using (17)
8: Compute $\Pi_{y,s}^{*}(k+1)$ using (18)
9: end for
10: Compute $\pi_{xy}^{*}(k+1)$ using (19), for all $\\{x,y\\}\in\mathcal{E}$
11: Compute $\alpha_{xy}(k+1)$ using (20), for all $\\{x,y\\}\in\mathcal{E}$
12:end for
13:return $\pi_{xy}^{*}(k+1)$, for all $\\{x,y\\}\in\mathcal{E}$
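For concreteness, lines 10 and 11 of Algorithm 2 amount to the following elementary per-edge update (a minimal sketch; the function name is ours): the consensus variable averages the two perturbed proposals, and the multiplier moves along half of their disagreement, scaled by $\eta$.

```python
def consensus_and_dual_step(pi_t_star, pi_s_star, alpha, eta):
    """Steps (19)-(20) of Algorithm 2 for a single edge {x, y}."""
    pi_star = 0.5 * (pi_t_star + pi_s_star)            # consensus update (19)
    alpha_next = alpha + 0.5 * eta * (pi_t_star - pi_s_star)  # dual update (20)
    return pi_star, alpha_next
```

When the two proposals agree, the multiplier is left unchanged, which is the fixed-point condition the negotiation drives toward.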
We now present Theorem 1, which theoretically guarantees the
privacy-preserving property of Algorithm 2.
###### Theorem 1.
The proposed Algorithm 2 is $\beta$-differentially private with $\beta_{p}(k)$
for node $p$ at iteration $k$. Let $Q(\Pi_{x,t}^{*}|D_{x})$ and
$Q(\Pi_{x,t}^{*}|D_{x}^{\prime})$ be the probability density functions for
$\Pi_{x,t}^{*}$ given the information sets $D_{x}$ and $D_{x}^{\prime}$ such
that $H(D_{x},D_{x}^{\prime})=1$. The ratio of probability density of
$\Pi_{x,t}^{*}$ is bounded:
$\frac{Q(\Pi_{x,t}^{*}(k)|D_{x})}{Q(\Pi_{x,t}^{*}(k)|D_{x}^{\prime})}\leq
e^{\beta_{x}(k)}.$ (21)
It follows similarly for the probability density on the source side,
$\Pi_{y,s}^{*}$, i.e.,
$\frac{Q(\Pi_{y,s}^{*}(k)|D_{y})}{Q(\Pi_{y,s}^{*}(k)|D_{y}^{\prime})}\leq
e^{\beta_{y}(k)}.$ (22)
Note that (21) and (22) directly imply
$\frac{\mathrm{Pr}(\Pi_{x,t}^{*}(k)|D_{x})}{\mathrm{Pr}(\Pi_{x,t}^{*}(k)|D_{x}^{\prime})}\leq
e^{\beta_{x}(k)}$ and
$\frac{\mathrm{Pr}(\Pi_{y,s}^{*}(k)|D_{y})}{\mathrm{Pr}(\Pi_{y,s}^{*}(k)|D_{y}^{\prime})}\leq
e^{\beta_{y}(k)}$, respectively.
###### Proof.
We first show the bounded ratio in (21). We have
$\frac{Q(\Pi_{x,t}^{*}(k)|D_{x})}{Q(\Pi_{x,t}^{*}(k)|D_{x}^{\prime})}=\frac{F_{x}(\epsilon_{x}(k))}{F_{x}(\epsilon_{x}^{\prime}(k))}=\frac{e^{-\xi_{x}(k)||\epsilon_{x}(k)||}}{e^{-\xi_{x}(k)||\epsilon_{x}^{\prime}(k)||}}$.
Our goal is to find a $\xi_{x}(k)$ such that the following inequality holds:
$\xi_{x}(k)(||\epsilon_{x}(k)||-||\epsilon_{x}^{\prime}(k)||)\leq\beta_{p}(k)$.
Let $W=\arg\min_{\Pi_{x,t}}f_{x}(k|D_{x})$ and
$W^{\prime}=\arg\min_{\Pi_{x,t}}f_{x}(k|D_{x}^{\prime})$, where $f_{x}(k)$ is
the objective function for the target node $x\in\mathcal{X}$ at iteration $k$,
shown in (15). Also, let $g$ and $h$ be defined at each node $x\in\mathcal{X}$
such that $g(\Pi_{x,t}^{*}(k))=f_{x}(k|D_{x})$ and
$h(\Pi_{x,t}^{*}(k))=f_{x}(k|D_{x}^{\prime})-f_{x}(k|D_{x})$.
Therefore,
$h(\Pi_{x,t}^{*}(k))=-\tilde{t}_{xy}(\pi_{xy,t})+t_{xy}(\pi_{xy,t}),$ where
$\tilde{t}_{xy}$ refers to the altered utility function due to the difference
between $D_{x}^{\prime}$ and $D_{x}$. Assumption 1 implies that
$f_{x}(k|D_{p})=g(\Pi_{x,t}^{*}(k))$ and
$f_{x}(k|D_{x}^{\prime})=g(\Pi_{x,t}^{*}(k))+h(\Pi_{x,t}^{*}(k))$ are both
convex. We differentiate $h(\Pi_{x,t}^{*}(k))$ with respect to
$\Pi_{x,t}^{*}(k)$ and get:
$\nabla
h(\Pi_{x,t}^{*}(k))=-\tilde{t}^{\prime}_{xy}(\pi_{xy,t})+t^{\prime}_{xy}(\pi_{xy,t}).$
Assumption 1 further implies that $0\leq t^{\prime}_{xy}\leq\rho$. Thus,
$||\nabla h(\Pi_{x,t}^{*})||\leq\rho$. From the definitions of $W$ and
$W^{\prime}$, we have $\nabla g(W)=\nabla g(W^{\prime})+\nabla
h(W^{\prime})=0$. Based on Lemma 14 in [17] and knowing that $g(\cdot)$ is
$\eta$-strongly convex, the following inequality holds: $\langle\nabla
g(W)-\nabla g(W^{\prime}),W-W^{\prime}\rangle\geq\eta||W-W^{\prime}||^{2}.$ Thus, by
the Cauchy-Schwarz inequality, we obtain
$\begin{split}||W-W^{\prime}||\cdot||\nabla
h(W^{\prime})||\geq(W-W^{\prime})^{T}\nabla h(W^{\prime})=\\\ \langle\nabla
g(W)-\nabla g(W^{\prime}),W-W^{\prime}\rangle\geq\eta||W-W^{\prime}||^{2}.\end{split}$
Dividing both sides by $\eta||W-W^{\prime}||$ yields
$||W-W^{\prime}||\leq\frac{1}{\eta}||\nabla
h(W^{\prime})||\leq\frac{\rho}{\eta}.$ From (16), we have
$||W-W^{\prime}||=||\epsilon_{x}(k)-\epsilon^{\prime}_{x}(k)||\leq\frac{1}{\eta}||\nabla
h(W^{\prime})||.$ Thus, we obtain
$\xi_{x}(k)(||\epsilon_{x}(k)||-||\epsilon^{\prime}_{x}(k)||)\leq\xi_{x}(k)(||\epsilon_{x}(k)-\epsilon^{\prime}_{x}(k)||)\leq\frac{\rho}{\eta}\xi_{x}(k).$
By choosing $\xi_{x}(k)=\frac{\eta}{\rho}\beta_{p}(k)$, the inequality
$\xi_{x}(k)(||\epsilon_{x}(k)-\epsilon^{\prime}_{x}(k)||)\leq\beta_{p}(k)$
holds. Thus, the output variable perturbation is $\beta_{p}$-differentially
private for target node $x\in\mathcal{X}$. The proof follows identically for
the perturbed output variable $\Pi_{y,s}^{*}(k)$ at the source node
$y\in\mathcal{Y}$ and is hence omitted. ∎
In summary, the proposed Algorithm 2 guarantees the privacy of all
participating nodes during their decision sharing.
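The bound (21) can be sanity-checked numerically in one dimension, where the noise density $\propto e^{-\xi|\epsilon|}$ is a Laplace density. The sketch below is purely illustrative: the helper name and the values of $w$, $w'$ (placed at the worst-case sensitivity distance $\rho/\eta$ apart) are ours. It scans observed outputs $v$ and records the largest density ratio, which should never exceed $e^{\beta}$.

```python
import math

def laplace_density_ratio(v, w, w_prime, xi):
    """Ratio of one-dimensional output densities exp(-xi * |v - center|)
    for unperturbed minimizers w (under D_x) and w' (under D_x')."""
    return math.exp(-xi * abs(v - w)) / math.exp(-xi * abs(v - w_prime))

beta, eta, rho = 1.0, 1.0, 2.0
xi = (eta / rho) * beta        # the choice made in the proof of Theorem 1
w = 0.3
w_prime = w + rho / eta        # worst-case shift: ||W - W'|| <= rho / eta

# Scan observed outputs v on a grid; the ratio never exceeds e^beta.
worst = max(laplace_density_ratio(v / 100.0, w, w_prime, xi)
            for v in range(-500, 501))
```

Here the maximum ratio is attained for any $v\leq w$, where it equals $e^{\xi\,|w-w'|}=e^{\beta}$ exactly.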
## IV Numerical Case Studies
In this section, we corroborate the effectiveness of the developed
differentially private algorithm and show how the added privacy impacts the
transport plan and its efficiency.
We construct a transport network with four source nodes and thirty target
nodes in which every source node is connected to all target nodes, i.e., the
network is complete. The upper bounds at the target nodes $\bar{p}_{x}$ are
kept small (smaller than 5), while the upper bounds at the source nodes
$\bar{q}_{y}$ are relatively larger (between 20 and 40). This selection
ensures that the resources at the origins can be transported to heterogeneous
target nodes. Additionally, we consider linear utility functions
$t_{xy}(\pi_{xy})=\delta_{xy}\pi_{xy}$, and
$s_{xy}(\pi_{xy})=\gamma_{xy}\pi_{xy},\forall\\{x,y\\}\in\mathcal{E}$. The
utility parameters $\delta_{xy}$ and $\gamma_{xy}$ are randomly chosen
integers between 1 and 5 for each connected pair,
$\forall\\{x,y\\}\in\mathcal{E}$.
In the following study, we investigate the impact of privacy parameter
$\beta_{p}$ on the transport utility. According to the definition, a smaller
$\beta_{p}$ yields a higher level of privacy. We compare the results for two
sets of $\beta_{p}$: a small value, $\beta_{p}=1$, $\forall p\in\mathcal{P}$,
and a large value, $\beta_{p}=1000$. Furthermore, we select $\eta=1$ and
$\rho=2$.
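The experimental setup described above can be reproduced with a short data-generation sketch. The ranges for the bounds and utility parameters follow the text; the exact sampling distributions and the fixed seed are our assumptions, since the paper does not specify them.

```python
import random

random.seed(1)  # fixed seed for reproducibility; the paper specifies none

n_sources, n_targets = 4, 30
sources = list(range(n_sources))
targets = list(range(n_targets))

# Complete bipartite edge set E: every source connects to every target.
edges = [(x, y) for x in targets for y in sources]

# Capacity bounds: small demand caps at targets (below 5), larger supply
# caps at sources (between 20 and 40), as stated in the case study.
p_bar = {x: random.uniform(1.0, 5.0) for x in targets}
q_bar = {y: random.uniform(20.0, 40.0) for y in sources}

# Linear utility parameters: random integers in {1, ..., 5} per edge.
delta = {e: random.randint(1, 5) for e in edges}
gamma = {e: random.randint(1, 5) for e in edges}
```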
(a) Social Utility
(b) Transport Plan
(c) Privacy and Transport Efficiency Tradeoff
Figure 2: (a) shows the performance of the proposed algorithms. (b) depicts
the optimal transport plans designed by the central planner (CP) and the
solution given by the distributed differentially private (DP) algorithm. (c)
shows that increasing the privacy level (smaller $\beta_{p}$) decreases the
transport utility, reflecting the trade-off between privacy and transport
efficiency.
We leverage Algorithms 1 and 2 to compute the transport plans. The results are
shown in Fig. 2. First, we observe that in Fig. 2(a), the trajectory of
the transport plan yielded by the differentially private algorithm converges
approximately to a certain value. The oscillation at the tail is due to the
random noise added to the decision at each output perturbation step. We can
also see that when $\beta_{p}$ is small, the resulting social utility (i.e.,
transport efficiency), which is an aggregation of the utilities of all
participating nodes, is relatively small. In comparison, when $\beta_{p}$ is
large, the social utility is close to the one returned by Algorithm 1 where
differential privacy is not incorporated. Fig. 2(c) further shows this
phenomenon and reveals the inherent trade-off between the amount of added
privacy and the transport efficiency. Fig. 2(b) illustrates how the privacy
factor affects the transport plan. The decreased optimality due to the privacy
promotion indicates that the resource allocation is no longer taking full
advantage of how much source nodes can provide or how much target nodes can
request. For example, the target node 12 can request at most 5 units of
resources, and does so when privacy is not added to the algorithm. When
privacy is enforced, it requests and receives only 4.2 units of resources, and
hence the social utility is decreased.
## V Conclusion
This paper has developed a differentially private distributed optimal
transport algorithm with a theoretical guarantee of achieved privacy. The
algorithm protects the sensitive information at each node by perturbing the
output of the transport schemes shared between connected nodes during updates.
Under the designed mechanism, even if the transport decision is intercepted
during its transmission, the adversary still cannot discover the underlying
sensitive information used in the transport strategy design. The privacy level
for each node can be determined appropriately by considering its trade-off
with the resulting transport efficiency. Future work includes extending the
current model-based distributed optimal transport framework to data-driven
learning-based optimal transport while considering data privacy in the
learning process.
## References
* [1] J. Hughes and J. Chen, “Fair and distributed dynamic optimal transport for resource allocation over networks,” in _55th Annual Conference on Information Sciences and Systems (CISS)_ , 2021.
* [2] R. Zhang and Q. Zhu, “Consensus-based distributed discrete optimal transport for decentralized resource matching,” _IEEE Transactions on Signal and Information Processing over Networks_ , vol. 5, no. 3, pp. 511–524, 2019.
* [3] J. Hughes and J. Chen, “Resilient and distributed discrete optimal transport with deceptive adversary: A game-theoretic approach,” _IEEE Control Systems Letters_ , pp. 1166–1171, 2022.
* [4] C. Dwork, A. Roth _et al._ , “The algorithmic foundations of differential privacy.” _Foundations and Trends in Theoretical Computer Science_ , vol. 9, no. 3-4, pp. 211–407, 2014.
* [5] Z. Huang, R. Hu, Y. Guo, E. Chan-Tin, and Y. Gong, “DP-ADMM: ADMM-based distributed learning with differential privacy,” _IEEE Transactions on Information Forensics and Security_ , vol. 15, p. 1002–1012, 2020.
* [6] C. Zhang, M. Ahmad, and Y. Wang, “ADMM based privacy-preserving decentralized optimization,” _IEEE Transactions on Information Forensics and Security_ , vol. 14, no. 3, pp. 565–580, 2019.
* [7] T. Zhang and Q. Zhu, “Dynamic differential privacy for ADMM-based distributed classification learning,” _IEEE Transactions on Information Forensics and Security_ , vol. 12, no. 1, pp. 172–187, 2017.
* [8] X. Zhang, M. M. Khalili, and M. Liu, “Improving the privacy and accuracy of ADMM-based distributed algorithms,” in _International Conference on Machine Learning_. PMLR, 2018, pp. 5796–5805.
* [9] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate, “Differentially private empirical risk minimization,” _Journal of Machine Learning Research_ , vol. 12, no. 29, pp. 1069–1109, 2011.
* [10] Y. Zhang, Z. Hao, and S. Wang, “A differential privacy support vector machine classifier based on dual variable perturbation,” _IEEE Access_ , vol. 7, pp. 98 238–98 251, 2019.
* [11] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” _Proceedings of the ACM SIGSAC Conference on Computer and Communications Security_ , 2016.
* [12] M. Du, K. Wang, X. Liu, S. Guo, and Y. Zhang, “A differential privacy-based query model for sustainable fog data centers,” _IEEE Transactions on Sustainable Computing_ , vol. 4, no. 2, pp. 145–155, 2017.
* [13] M. M. Khalili, X. Zhang, and M. Liu, “Designing contracts for trading private and heterogeneous data using a biased differentially private algorithm,” _IEEE Access_ , vol. 9, pp. 70 732–70 745, 2021.
* [14] Y. Zhao, J. Zhao, M. Yang, T. Wang, N. Wang, L. Lyu, D. Niyato, and K.-Y. Lam, “Local differential privacy-based federated learning for Internet of things,” _IEEE Internet of Things Journal_ , vol. 8, no. 11, pp. 8836–8853, 2020.
* [15] T. Zhang and Q. Zhu, “Distributed privacy-preserving collaborative intrusion detection systems for VANETs,” _IEEE Transactions on Signal and Information Processing over Networks_ , vol. 4, no. 1, pp. 148–161, 2018.
* [16] S. Boyd, N. Parikh, and E. Chu, _Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers_. Now Publishers, 2011.
* [17] S. Shalev-Shwartz, “Online learning: Theory, algorithms, and applications,” _PhD Dissertation_ , 2007.
### -A Proof of Proposition 1
###### Proof.
Let $\vec{x}=[\vec{\Pi}_{x,t}^{T},\vec{\Pi}^{T}]^{T}$,
$\vec{y}=[\vec{\Pi}^{T},\vec{\Pi}_{y,s}^{T}]^{T}$, and
$\alpha=[\\{\alpha_{xy,t}\\}^{T},\\{\alpha_{xy,s}\\}^{T}]^{T}$, where $\vec{}$
denotes the vectorization operator. We note that these vectors are all
$2|\mathcal{E}|\times 1$, where $|\mathcal{E}|$ denotes the number of
connections between targets and sources. Now we can write the constraints in
matrix form such that $A\vec{x}=\vec{y}$ where
$A=[\textbf{I},\textbf{0},\textbf{I},\textbf{0}]$. Here I and 0 denote the
identity and zero matrices respectively, both of which are
$|\mathcal{E}|\times|\mathcal{E}|$. Next, we note that
$\vec{x}\in\mathcal{F}_{\vec{x},t}$ and $\vec{y}\in\mathcal{F}_{\vec{y},s}$,
where $\mathcal{F}_{\vec{x},t}=\\{\vec{x}|\pi_{xy,t}\geq
0,\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy,t}\leq\bar{p}_{x},\\{x,y\\}\in\mathcal{E}\\},\
\mathcal{F}_{\vec{y},s}:=\\{\vec{y}|\pi_{xy,s}\geq
0,\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy,s}\leq\bar{q}_{y},\\{x,y\\}\in\mathcal{E}\\}.$
In turn we can solve the minimization in (2) with the iterations: 1)
$\vec{x}(k+1)\in\arg\min_{\vec{x}\in\mathcal{F}_{x,t}}L(\vec{x},\vec{y}(k),\alpha(k));$
2)
$\vec{y}(k+1)\in\arg\min_{\vec{y}\in\mathcal{F}_{y,s}}L(\vec{x}(k),\vec{y},\alpha(k));$
3) $\alpha(k+1)=\alpha(k)+\eta(A\vec{x}(k+1)-\vec{y}(k+1)),$ whose convergence
is proven in [16]. Because there is no coupling among
$\Pi_{x,t},\Pi_{y,s},\pi_{xy},\alpha_{xy,t},$ and $\alpha_{xy,s}$, the above
iterations can be decomposed into (4)-(8). ∎
# Security Investment Over Networks with Bounded Rational Agents: Analysis and
Distributed Algorithm
Jason Hughes and Juntao Chen The authors are with the Department of Computer
and Information Sciences, Fordham University, New York, NY, 10023 USA. E-mail:
<EMAIL_ADDRESS>
###### Abstract
This paper considers the security investment problem over a network in which
the resource owners aim to allocate their constrained security resources to
heterogeneous targets strategically. Investing in each target makes it less
vulnerable, thus lowering its probability of a successful attack. However,
humans tend to perceive such probabilities inaccurately, yielding bounded
rational behaviors, a phenomenon frequently observed in their decision-making
when facing uncertainties. We capture this human nature through the lens of
cumulative prospect theory and establish a behavioral resource allocation
framework to account for the human’s misperception in security investment. We
analyze how this misperception behavior affects the resource allocation plan
by comparing it with the accurate perception counterpart. The network can
become highly complex with a large number of participating agents. To this
end, we further develop a fully distributed algorithm to compute the
behavioral security investment strategy efficiently. Finally, we corroborate
our results and illustrate the impacts of humans’ bounded rationality on the
resource allocation scheme using case studies.
## I Introduction
An efficient allocation of limited security resources to protect targeted
assets from malicious attacks is a critical problem faced by security
professionals. This problem becomes increasingly challenging as the modern
systems adopted in our society become more complex. For example, the societal
cyber-physical systems, such as industrial control systems and power grids,
consist of heterogeneous components including sensors, controllers, and
actuators, which are required to be jointly secured to achieve a desired
performance. Thus, it is important for the system operator to allocate the
available security resources strategically to enhance the holistic security.
Previous studies have investigated how a centralized system operator can
maximally reduce the vulnerability of assets from adversaries through security
investment [1, 2]. However, such a centralized paradigm, i.e., using a single-
source model, is insufficient to capture the emerging scenarios where multiple
resource owners111The resource owner also refers to the system
operator/planner, and they are used interchangeably in the paper. participate
in securing the targets collaboratively. To this end, this paper aims to
develop a framework and investigate the security investment problem over a
network with multiple sources (security resource investors) and a variety of
targets (valuable assets to be protected).
To develop an effective security investment scheme, it is necessary to
understand how the risks of targets change over the investment strategy. A
larger security investment amount will lower the probability of a successful
attack. However, human’s perception of such probabilistic events is
subjective, a commonly observed behavior when facing uncertainties.
Psychological studies have shown that humans often misperceive probabilities
on gains and losses by over-weighting low probabilities and under-weighting
high probabilities, which leads to bounded rational behavior, a subject
receiving significant attention in prospect theory [3, 4]. Such behavioral
misperception plays an essential role in the focused security investment
problem where the resource owners need to evaluate the likelihood of
successful compromise of the targets under a given resource allocation scheme.
To this end, we incorporate this bounded rational consideration into our model
by developing a new behavioral decision-making framework for security
investment over networks. Under this paradigm, the resource owner perceives
a target with a relatively low chance of being compromised as more
vulnerable than it is, and a target with a high probability of being attacked
as less vulnerable than it is. We analyze the impact of attack success
misperception on the optimal resource allocation strategy and identify that
the bounded rational operator will prefer to secure those targets with
higher values. In other words, the investors tend to pay more attention to
higher-valued assets as they become more behavioral, yielding a discriminative
distribution scheme compared with the one without behavioral consideration.
Additionally, when more and more participating agents/nodes (resource owners
and targets) are introduced into the network, the computational complexity of
the resource allocation problem increases drastically [5]. Solving the
security investment problem with a massive number of networked sources and
targets in a centralized manner may be impractical or extremely
computationally expensive. In addition, the centralized approach requires the
planner to have complete information on the source and target agents,
including their utility parameters, supply and demand upper bounds, degree of
misperception on attack success, and value of targets. Thus, it does not
preserve a high level of privacy for participants. To this end, we propose a
distributed algorithm based on the alternating direction method of multipliers
(ADMM) [6], where the central resource transport planner is not needed and
each node solves its own simpler optimization problem and communicates its
decisions with the connected nodes in the network. Each pair of source and
target nodes will then negotiate to reach a consensus on how many security
resources should be transported. The proposed distributed algorithm converges
to the same optimal solution obtained under the centralized optimization
paradigm.
The contributions of this paper are summarized as follows:
1. 1.
We develop a bounded rational security investment framework over a network.
The model captures the decision-makers’ misperception of security resources’
effectiveness on protecting the targets, and it facilitates the analysis of
behavioral impacts on the resource allocation plan.
2. 2.
We discover a sequential water-filling nature of the optimal security resource
allocation over the targets and identify that the transport planners become
more discriminative by investing in a smaller set of higher-valued targets as
they tend to be more behavioral.
3. 3.
We further develop a distributed algorithm based on ADMM to compute the
optimal resource investment strategy for large-scale networks. We also
corroborate our algorithm and analytical results using case studies.
Related Works: Optimal security investment for defending assets has been
studied extensively in the literature [7, 8, 9, 10]. Previous studies have also
considered the behavioral impacts on security investment. For example, the
authors in [11] have studied the interplay between the strategic defender and
the bounded rational adversary through a game-theoretic framework. [12] has
investigated the optimal investment strategies under a misperceived security
risk model based on prospect theory, in which the authors have focused on a
single decision maker investing on heterogeneous assets. Our work pays
attention to the resource transport in a multi-source multi-target framework.
Prospect theory has also been used to guide the optimal resource
allocation/decision-making in various applications, including water
infrastructures [13], communication networks [14], and the Internet of things
[15]. Our work is also related to the decision-making of resource allocation
over large-scale networks, in which efficient algorithms for computing the
optimal schemes have been proposed in different contexts, including in
consideration of efficiency [5, 16], fairness [17], and security and
resiliency [18, 19].
The rest of this paper is organized as follows. In Section II, we formulate
the security investment problem over a network that considers human
misperception on attack success rate. We characterize the optimal security
investment strategy and analyze how the behavioral consideration affects such
solutions in Sections III and IV. We further develop a distributed algorithm
to compute the behavioral security investment strategy in Section V and
corroborate our findings in Section VI.
## II Problem Formulation
In this section, we establish a framework for security investment over a
network with behavioral participants.
### II-A Security Resource Transport Network
In a network, we denote by $\mathcal{X}:=\\{1,...,|\mathcal{X}|\\}$ the set of
destinations/targets that receive the security resources, and
$\mathcal{Y}:=\\{1,...,|\mathcal{Y}|\\}$ the set of origins/sources that
distribute security resources to the targets. Specifically, each source node
$y\in\mathcal{Y}$ is connected to a number of target nodes denoted by
$\mathcal{X}_{y}$, representing that $y$ has choices in allocating its
resources to a specific group of destinations $\mathcal{X}_{y}$ in the
network. Similarly, it is possible that each target node $x\in\mathcal{X}$
receives resources from multiple source nodes, and this set of suppliers to
node $x$ is denoted by $\mathcal{Y}_{x}$. Note that $\mathcal{X}_{y}$,
$\forall y$ and $\mathcal{Y}_{x}$, $\forall x$ are nonempty. It is
straightforward to see that the security resources are transported over a
bipartite network, where one side of the network consists of all source nodes
and the other includes all destination nodes. This bipartite graph may not be
complete due to constrained matching policies between participants. For
convenience, we denote by $\mathcal{E}$ the set including all feasible
transport paths in the network, i.e.,
$\mathcal{E}:=\\{(x,y)|x\in\mathcal{X}_{y},y\in\mathcal{Y}\\}$. Note that
$\mathcal{E}$ also refers to the set of all edges in the established bipartite
graph for security resource transportation.
We next denote by $\pi_{xy}\in\mathbb{R}_{+}$ the amount of security resources
transported from the origin node $y\in\mathcal{Y}$ to the destination node
$x\in\mathcal{X}$, where $\mathbb{R}_{+}$ is the set of nonnegative real
numbers. Let $\Pi:=\\{\pi_{xy}\\}_{x\in\mathcal{X}_{y},y\in\mathcal{Y}}$ be
the designed resource transport plan. Furthermore, the security resources at
each source node $y\in\mathcal{Y}$ are upper bounded by
$\bar{q}_{y}\in\mathbb{R}_{+}$, i.e.,
$\sum_{x\in\mathcal{X}_{y}}\pi_{xy}\leq\bar{q}_{y}$.
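The bipartite transport network of this subsection can be represented with a small helper class; the class and method names below are ours, introduced only for illustration.

```python
class TransportNetwork:
    """Minimal container for the bipartite transport network: the edge set E
    and the per-source resource bounds q_bar."""

    def __init__(self, edges, q_bar):
        self.edges = set(edges)      # set of (x, y) pairs, i.e. E
        self.q_bar = dict(q_bar)     # source y -> resource bound q_bar_y

    def targets_of(self, y):
        """X_y: targets reachable from source y."""
        return {x for (x, yy) in self.edges if yy == y}

    def sources_of(self, x):
        """Y_x: sources supplying target x."""
        return {y for (xx, y) in self.edges if xx == x}

    def feasible(self, plan):
        """Check pi >= 0 and sum_x pi_xy <= q_bar_y for every source y."""
        if any(v < 0 for v in plan.values()):
            return False
        for y, cap in self.q_bar.items():
            sent = sum(v for (x, yy), v in plan.items() if yy == y)
            if sent > cap + 1e-9:
                return False
        return True
```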
### II-B Bounded Rational Security Investment
Each target node in the network faces threats and could be compromised by an
attacker. If target node $x\in\mathcal{X}$ is attacked, the induced loss is
$U_{x}>0$. The attacker’s probability of successfully compromising the target
node is related to the amount of security resources received. For each target
node $x\in\mathcal{X}$, defined by
$p_{x}:\mathbb{R}_{+}^{|\mathcal{Y}_{x}|}\rightarrow[0,1]$ a function that
maps the received security resources $\Pi_{x}$ to a successful attack
probability. It is natural to see that such probability should be related to
the aggregated resource received by node $x$ captured by
$\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}$. Thus, with a slightly abuse of notation,
$p_{x}(\Pi_{x})$ can be expressed by
$p_{x}(\sum_{y\in\mathcal{Y}_{x}}\pi_{xy})$, where the later one shows more
explicitly the relationship between the successful attack probability and the
total received resources at node $x\in\mathcal{X}$.
Each target node $x\in\mathcal{X}$ minimizes its cost $U_{x}p_{x}(\Pi_{x})$.
To this end, the central planner aims to minimize the following aggregated
loss $L(\Pi)$ at all targets under attacks:
$L(\Pi)=\sum_{x\in\mathcal{X}}U_{x}p_{x}(\Pi_{x}).$ (1)
It has been shown that humans tend to misperceive probabilities by
over-weighting low probabilities and under-weighting high probabilities during
decision-making under uncertainties. For a true probability $p\in[0,1]$,
humans will perceive it as $w(p)\in[0,1]$, where $w$ is a probability
weighting function. One such commonly used weighting function is given by [20]
$w(p)=\exp({-(-\log(p))^{\gamma}}),\ p\in[0,1],$ (2)
where $\gamma\in(0,1]$ is a parameter capturing the degree of misperception.
When $\gamma$ is closer to 0, it leads to a larger distortion of the
probability function $p$. In comparison, when $\gamma=1$, $w(p)=p$, indicating
there is no probability misperception.
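The weighting function (2) is straightforward to evaluate. The sketch below implements it directly; note that $w$ has a fixed point at $p=e^{-1}$, so for $\gamma<1$ probabilities below $e^{-1}$ are over-weighted and probabilities above it are under-weighted, while $\gamma=1$ recovers $w(p)=p$.

```python
import math

def w(p, gamma_):
    """Probability weighting function (2): w(p) = exp(-(-log p)^gamma),
    with gamma in (0, 1]. Boundary values are handled by continuity."""
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** gamma_))
```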
Under the perceived probability, the target node $x$’s cost function becomes
$U_{x}w\left(p_{x}(\Pi_{x})\right)$. Thus, the cost function for the transport
planner under the perceived attack probability is
$\tilde{L}(\Pi)=\sum_{x\in\mathcal{X}}U_{x}w(p_{x}(\Pi_{x})).$ (3)
The security resource allocation strategies with bounded rational behavioral
consideration can be obtained by solving the following optimization problem:
$\displaystyle\mathrm{(OP-A):}\quad\min_{\Pi}$
$\displaystyle\sum_{x\in\mathcal{X}}U_{x}w(p_{x}(\Pi_{x}))$ (4)
$\displaystyle\mathrm{s.t.}$ $\displaystyle
0\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy}\leq\bar{q}_{y},\ \forall
y\in\mathcal{Y},$ $\displaystyle\pi_{xy}\geq 0,\
\forall\\{x,y\\}\in\mathcal{E}.$
Note that (OP-A) is solved for a distribution of resources across the source
nodes at a given time. If the amount of resources at each node changes, a new
optimal strategy can be obtained by solving (OP-A) repeatedly in a moving
horizon fashion. Extending (OP-A) to dynamic resource allocation over a period
of time is possible and is left as subsequent work.
## III Preliminary Analysis
The successful attack probability function $p_{x}$ should capture the fact
that a larger security investment lowers the likelihood of attack. In
addition, the marginal benefit of security resources decreases for each target
node. To this end, we have the following assumption.
###### Assumption 1.
The successful attack probability function $p_{x}(\Pi_{x})$ satisfies the
following: 1) $p_{x}(\Pi_{x})\in[0,1]$ with
$\lim_{||\zeta||_{1}\rightarrow\infty}p_{x}(\zeta)=0$, where $||\cdot||_{1}$
denotes the $l_{1}$ norm, and is twice differentiable, 2) $p_{x}(\Pi_{x})$ is
strictly monotonic decreasing and log-convex with respect to $\pi_{xy}$, for
$y\in\mathcal{Y}_{x}$, and 3) $\frac{\partial p_{x}}{\partial\pi_{xy}}/p_{x}$
is bounded with respect to $\pi_{xy}$, for $y\in\mathcal{Y}_{x}$.
There are a number of functions of interest that satisfy the properties in
Assumption 1. For example,
$p_{x}(\Pi_{x})=\exp({-\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}-r_{x}}),$ (5)
where $r_{x}>0$ represents the existing security investment at node $x$ before
resource transport.
As another example,
$p_{x}(\Pi_{x})=\frac{1}{\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}+r_{x}}$, where
$r_{x}>1$ has a similar meaning as in the previous case. Both examples
indicate that the security resources can effectively decrease the attack
likelihood.
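Both example functions can be checked numerically against the conditions of Assumption 1 (a quick sketch; the helper names are ours). For the exponential example (5), $p'/p=-1$ identically; for the second example, $p'/p=-1/(\sum\pi+r_{x})$, bounded in magnitude by $1/r_{x}$.

```python
import math

def p_exp(total_pi, r):
    """Example (5): p(total) = exp(-total - r); here p'/p = -1, bounded."""
    return math.exp(-total_pi - r)

def p_inv(total_pi, r):
    """Second example: p(total) = 1 / (total + r) with r > 1;
    here p'/p = -1 / (total + r), bounded by 1 / r in magnitude."""
    return 1.0 / (total_pi + r)
```

Log-convexity is immediate in both cases: $\log p$ is linear in the first example and $-\log(\sum\pi+r_{x})$, a convex function, in the second.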
###### Lemma 1.
Under Assumption 1, the perceived probability of successful attack at node
$x$, $w(p_{x}(\Pi_{x}))$, is strictly convex in $\pi_{xy}$, $\forall
x\in\mathcal{X},\ y\in\mathcal{Y}_{x}$.
###### Proof.
See Appendix -A. ∎
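To make (OP-A) concrete, the sketch below solves a toy single-source instance by projected gradient descent, using the exponential attack probability (5) so that the objective becomes $\sum_{x}U_{x}\exp(-(\pi_{x}+r_{x})^{\gamma})$. This is only an illustration under our own simplifications (one source, our function names and instance data); the paper's distributed algorithm is developed in Section V.

```python
import math

def project_capped_simplex(v, q):
    """Euclidean projection of v onto {z >= 0, sum(z) <= q}."""
    clipped = [max(0.0, x) for x in v]
    if sum(clipped) <= q:
        return clipped
    # Budget active: project onto {z >= 0, sum(z) = q} by sort-and-threshold.
    u = sorted(v, reverse=True)
    cumulative, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumulative += ui
        t = (cumulative - q) / i
        if ui - t > 0.0:
            theta = t
    return [max(0.0, x - theta) for x in v]

def solve_op_a(U, r, q_bar, gamma_, step=0.05, iters=2000):
    """Projected gradient descent for (OP-A) with a single source and the
    exponential attack probability (5): minimize
    sum_x U_x * exp(-((pi_x + r_x) ** gamma_)) over the capped simplex."""
    pi = [0.0] * len(U)
    for _ in range(iters):
        grad = [-U[x] * gamma_ * (pi[x] + r[x]) ** (gamma_ - 1.0)
                * math.exp(-((pi[x] + r[x]) ** gamma_)) for x in range(len(U))]
        pi = project_capped_simplex(
            [p - step * g for p, g in zip(pi, grad)], q_bar)
    return pi
```

Since every marginal cost is negative (Lemma 2), the optimal plan exhausts the budget, and the higher-valued target ends up with the larger share.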
## IV Analysis of Bounded Rational Security Investment Strategies
This section characterizes the bounded rational security investment strategies
and analyzes the impacts of behavioral considerations on such decision-making
outcomes.
### IV-A Security Resource Allocation Preferences
We first have the following assumption to facilitate the analysis.
###### Assumption 2.
Assume that the values of induced loss due to successful attack are ordered as
follows: $U_{1}>U_{2}>...>U_{|\mathcal{X}|}>0$. Furthermore, every target node
admits the same successful attack probability function, i.e., the $p_{x}$,
$\forall x\in\mathcal{X}$, share a common form.
We next have the following result on the marginal cost associated with the
target nodes.
###### Lemma 2.
The following inequality holds for each pair of target nodes $i\in\mathcal{X}$
and $j\in\mathcal{X}$ with $i<j$,
$U_{i}\frac{\partial w(p_{i}(\Pi_{i}))}{\partial\pi_{iy}}<U_{j}\frac{\partial
w(p_{j}(\Pi_{j}))}{\partial\pi_{jy}},\ \forall
y\in\mathcal{Y}_{i}\cap\mathcal{Y}_{j}.$ (6)
Moreover, the marginal cost $U_{i}\frac{\partial
w(p_{i}(\Pi_{i}))}{\partial\pi_{iy}}$ is negative and increases continuously
to 0 in $\pi_{iy}$, $\forall i\in\mathcal{X}$.
###### Proof.
First, we have, $\forall x\in\mathcal{X}$ and $\forall y\in\mathcal{Y}_{x}$,
$\displaystyle U_{x}\frac{\partial w(p_{x}(\Pi_{x}))}{\partial\pi_{xy}}=$
$\displaystyle U_{x}\gamma(-\log(p_{x}(\Pi_{x})))^{(\gamma-1)}$ (7)
$\displaystyle\cdot\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\Big{/}p_{x}(\Pi_{x})\cdot
w(p_{x}(\Pi_{x})).$
Based on Assumption 1, $\frac{\partial p_{x}(\Pi_{x})}{\partial\pi_{xy}}$ is
negative. In addition, $-\log(p_{x}(\Pi_{x}))$, $p_{x}(\Pi_{x})$ and
$w(p_{x}(\Pi_{x}))$ are all positive. Thus, $U_{x}\frac{\partial
w(p_{x}(\Pi_{x}))}{\partial\pi_{xy}}<0$. Lemma 1 shows that
$\frac{\partial}{\partial\pi_{xy}}(\frac{\partial
w(p_{x}(\Pi_{x}))}{\partial\pi_{xy}})>0$, indicating that the marginal cost is
monotonically increasing. In addition,
$\lim_{||\Pi_{x}||_{1}\rightarrow\infty}\left|U_{x}\frac{\partial
w(p(\Pi_{x}))}{\partial\pi_{xy}}\right|=\lim_{||\Pi_{x}||_{1}\rightarrow\infty}\left|\gamma
U_{x}(-\log(p_{x}(\Pi_{x})))^{\gamma-1}w(p_{x}(\Pi_{x}))\right|\left|\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\big{/}p_{x}(\Pi_{x})\right|.$ From
Assumption 1, we know that $p_{x}(\Pi_{x})\rightarrow 0$, and thus
$w(p_{x}(\Pi_{x}))\rightarrow 0$ and $-\log(p_{x}(\Pi_{x}))\rightarrow\infty$,
as $||\Pi_{x}||_{1}\rightarrow\infty$. Since $\frac{\partial
p_{x}}{\partial\pi_{xy}}/p_{x}$ is bounded,
$\lim_{||\Pi_{x}||_{1}\rightarrow\infty}|U_{x}\cdot\frac{\partial
w(p(\Pi_{x}))}{\partial\pi_{xy}}|=0$. Finally, to show the inequality between
the marginals, we note that based on Assumption 2, $U_{i}>U_{j}$ and thus
$\begin{split}&U_{i}(-\log(p(\Pi_{i})))^{\gamma-1}w(p(\Pi_{i}))\frac{\partial
p_{i}(\Pi_{i})}{\partial\pi_{iy}}\Big{/}p_{i}(\Pi_{i})\\\
&<U_{j}(-\log(p(\Pi_{i})))^{\gamma-1}w(p(\Pi_{i}))\frac{\partial
p_{j}(\Pi_{i})}{\partial\pi_{jy}}\Big{/}p_{j}(\Pi_{i}),\end{split}$
$\forall\Pi_{x}\in\mathbb{R}_{+}^{|\mathcal{Y}_{x}|},\forall x\in\mathcal{X}$
which yields (6) because $\frac{\partial p_{j}(\Pi_{i})}{\partial\pi_{jy}}$ is
negative. ∎
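The properties established in Lemma 2 can be checked numerically. The sketch below uses the exponential probability function (5) with a single aggregated resource variable and the assumed Prelec form $w(p)=\exp(-(-\log p)^{\gamma})$; the closed-form derivative follows from these two choices, and all parameter values are illustrative.

```python
import math

def marginal_cost(U, pi, r=1.5, gamma=0.6):
    """U * d w(p(pi)) / d pi for p(pi) = exp(-(pi + r)) and the assumed
    Prelec w; in closed form: -U*gamma*(pi+r)**(gamma-1)*exp(-(pi+r)**gamma)."""
    return -U * gamma * (pi + r) ** (gamma - 1) * math.exp(-((pi + r) ** gamma))

grid = [0.1 * k for k in range(100)]
vals = [marginal_cost(12.0, pi) for pi in grid]
assert all(v < 0 for v in vals)                    # negative everywhere
assert all(a < b for a, b in zip(vals, vals[1:]))  # increasing toward 0
assert abs(marginal_cost(12.0, 1e6)) < 1e-6        # vanishes in the limit
# Higher-valued target has the more negative marginal cost, as in (6).
assert marginal_cost(12.0, 0.5) < marginal_cost(9.0, 0.5)
```

The last assertion is the pairwise ordering of Lemma 2 for $U_{i}=12$ and $U_{j}=9$ at a common investment level.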
The following proposition characterizes the total amount of security resources
received by the target nodes from sources under some general assumptions.
###### Proposition 1.
Under Assumption 2 and a complete resource transport network, the optimal
strategy $\\{\Pi_{x}^{*}\\}_{x\in\mathcal{X}}$ satisfies the following
inequality
$||\Pi_{1}^{*}||_{1}\geq||\Pi_{2}^{*}||_{1}\geq...\geq||\Pi_{|\mathcal{X}|}^{*}||_{1}$,
where the $l_{1}$ norm $||\Pi_{x}||_{1}=\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}$
denotes the total amount of security resources received by target node
$x\in\mathcal{X}$.
###### Proof.
As each source can transfer resources to every target, and all source nodes
supply resources of the same quality, we can equivalently aggregate all the
source nodes into a single super node with capacity
$\sum_{y\in\mathcal{Y}}\bar{q}_{y}$ managed by a central planner. Thus, the
transport network can be seen as consisting of a single source connected to a
set of targets, i.e., $\mathcal{Y}=\\{1\\}$, $\mathcal{Y}_{x}=\mathcal{Y}$, and
$\Pi_{x}=\pi_{x1}$, $\forall x\in\mathcal{X}$. Based on the KKT condition, for
each pair of target nodes $i\in\mathcal{X}$ and $j\in\mathcal{X}$ receiving
nonzero security resources from the source node, we have $U_{i}\frac{\partial
w(p_{i}(\Pi_{i}))}{\partial\pi_{i1}}|_{\pi_{i1}=\pi_{i1}^{*}}=U_{j}\frac{\partial
w(p_{j}(\Pi_{j}))}{\partial\pi_{j1}}|_{\pi_{j1}=\pi_{j1}^{*}}$. Assumptions 1
and 2 indicate that $\gamma
U_{i}(-\log(p_{i}(\pi_{i1}^{*})))^{\gamma-1}w(p_{i}(\pi_{i1}^{*}))\frac{\partial
p_{i}(\pi_{i1}^{*})}{\partial\pi_{i1}}/p_{i}(\pi_{i1}^{*})=\gamma
U_{j}(-\log(p_{j}(\pi_{j1}^{*})))^{\gamma-1}w(p_{j}(\pi_{j1}^{*}))\frac{\partial
p_{j}(\pi_{j1}^{*})}{\partial\pi_{j1}}/p_{j}(\pi_{j1}^{*})$, which yields
$\displaystyle(-\log(p_{i}(\pi_{i1}^{*})))^{\gamma-1}w(p_{i}(\pi_{i1}^{*}))\frac{\partial
p_{i}(\pi_{i1}^{*})}{\partial\pi_{i1}}\frac{1}{p_{i}(\pi_{i1}^{*})}$
$\displaystyle=\frac{U_{j}}{U_{i}}(-\log(p_{j}(\pi_{j1}^{*})))^{\gamma-1}w(p_{j}(\pi_{j1}^{*}))\frac{\partial
p_{j}(\pi_{j1}^{*})}{\partial\pi_{j1}}\frac{1}{p_{j}(\pi_{j1}^{*})}$
$\displaystyle>(-\log(p_{j}(\pi_{j1}^{*})))^{\gamma-1}w(p_{j}(\pi_{j1}^{*}))\frac{\partial
p_{j}(\pi_{j1}^{*})}{\partial\pi_{j1}}\frac{1}{p_{j}(\pi_{j1}^{*})}.$
The inequality in the last step follows since $U_{j}<U_{i}$ for $j>i$ and the
marginal cost of a target node with respect to the received security resources
is negative. By Lemma 2, the marginal cost is continuously increasing, so we
conclude $\pi_{i1}^{*}>\pi_{j1}^{*}$, where $\pi_{i1}^{*}$ is the total amount
of resources received by target node $i$. Equivalently, target node $i$
receives more security resources than target node $j$ at the optimal
solution, for $i<j\in\mathcal{X}$. ∎
Remark: Proposition 1 indicates that, under some quite general conditions,
target node $i$ (higher-valued) receives more resources than target node $j$
(lower-valued) under the optimal allocation plan, $\forall i<j\in\mathcal{X}$.
This result is consistent with the objective of the system planner in
minimizing the aggregated expected loss of assets.
### IV-B Sequential Water-Filling of Security Investment
The transport network is still considered to be complete. Thus, it is
equivalent to combine all source nodes and regard them as a super source node
with capacity $\sum_{y\in\mathcal{Y}}\bar{q}_{y}$. For convenience, we denote
by $\tilde{\pi}_{i}$ the total amount of resources that target node $i$
received from the super source node, i.e.,
$\tilde{\pi}_{i}=\sum_{y\in\mathcal{Y}_{i}}\pi_{iy}$ in the original
framework. With a slight abuse of notation, $p_{x}$ can be seen as a
single-variable function of $\tilde{\pi}_{i}$. For all target nodes $i\in\mathcal{X}$
and $j\in\mathcal{X}$ with $i<j$, we define $\tilde{\pi}_{i}^{j*}$ as a
quantity that satisfies
$U_{i}\frac{\partial
w(p_{i}(\tilde{\pi}_{i}))}{\partial\tilde{\pi}_{i}}\Bigg{|}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}=U_{j}\frac{\partial
w(p_{j}(\tilde{\pi}_{j}))}{\partial\tilde{\pi}_{j}}\Bigg{|}_{\tilde{\pi}_{j}=0}.$
(8)
###### Proposition 2.
Under Assumption 2 and a complete transport network, the resources received by
target node $j$ from the super source node, $\tilde{\pi}_{j}$, will be nonzero
at the optimal solution if and only if
$\sum_{y\in\mathcal{Y}}\bar{q}_{y}>\sum_{i=1}^{j-1}\tilde{\pi}_{i}^{j*}$,
where $\tilde{\pi}_{i}^{j*}$ is defined in (8).
###### Proof.
Suppose that $\tilde{\pi}_{j}^{*}>0$ for some target node $j$. Also suppose by
contradiction that
$\sum_{y\in\mathcal{Y}}\bar{q}_{y}\leq\sum_{i=1}^{j-1}\tilde{\pi}_{i}^{j*}$.
Then, $\exists m\in\\{1,...,j-1\\}$ such that
$\tilde{\pi}_{m}^{*}<\tilde{\pi}_{m}^{j*}$. This indicates that it is
infeasible to allocate $\tilde{\pi}_{m}^{j*}$ or more resources to node $m$
without exceeding the upper bound. By definition of $\tilde{\pi}_{m}^{j*}$,
$\displaystyle U_{m}\frac{\partial
w(p_{m}(\tilde{\pi}_{m}))}{\partial\tilde{\pi}_{m}}\bigg{\rvert}_{\tilde{\pi}_{m}=\tilde{\pi}_{m}^{j*}}<U_{j}\frac{\partial
w(p_{j}(\tilde{\pi}_{j}))}{\partial\tilde{\pi}_{j}}\bigg{\rvert}_{\tilde{\pi}_{j}=0}$
$\displaystyle<U_{j}\frac{\partial
w(p_{j}(\tilde{\pi}_{j}))}{\partial\tilde{\pi}_{j}}\bigg{\rvert}_{\tilde{\pi}_{j}=\tilde{\pi}_{j}^{*}},$
which yields a contradiction, since the marginals must coincide at the optimal
solution. Thus, $\tilde{\pi}_{j}^{*}>0$ leads to
$\sum_{y\in\mathcal{Y}}\bar{q}_{y}>\sum_{i=1}^{j-1}\tilde{\pi}_{i}^{j*}$ under
the optimal resource allocation.
To prove the other direction, we first suppose that
$\sum_{y\in\mathcal{Y}}\bar{q}_{y}>\sum_{i=1}^{j-1}\tilde{\pi}_{i}^{j*}$ and
suppose by contradiction that $\tilde{\pi}_{j}^{*}=0$. Then we have
$\tilde{\pi}_{k}^{*}=0,\forall k>j$, and thus
$\sum_{k=1,2,...,j-1}\tilde{\pi}_{k}=\sum_{y\in\mathcal{Y}}\bar{q}_{y}$ and
$\exists i\in\\{1,...,j-1\\}$ such that
$\tilde{\pi}_{i}>\tilde{\pi}_{i}^{j*}$. We now show that transferring a
sufficiently small amount of resource $\epsilon\in\mathbb{R}_{+}$ from target
$i$ to target $j$ leads to a net cost reduction in (OP-A), so that the
resource allocation cannot be optimal. Starting with the nonzero resource
allocation to the target nodes $\\{1,...,j-1\\}$, the total cost is
$\sum_{x\in\mathcal{X}}U_{x}w(p_{x}(\tilde{\pi}_{x})).$ From target $i$ that
has $\tilde{\pi}_{i}^{*}=||\Pi_{i}^{*}||_{1}>\tilde{\pi}_{i}^{j*}$, remove a
sufficiently small amount of resource $\epsilon$ and add a resource amount of
$\epsilon$ to target $j$. Denote the modified resource transport plan as
$\pi^{(\epsilon)}$. The total cost after perturbation becomes
$\displaystyle\tilde{L}(\pi^{(\epsilon)})=\sum_{z\in\mathcal{X}\setminus\\{i,j\\}}U_{z}w(p_{z}(\tilde{\pi}_{z}^{*}))+U_{i}w(p_{i}(\tilde{\pi}_{i}-\epsilon))$
$\displaystyle+U_{j}w(p_{j}(\epsilon)).$
We next define
$g(\epsilon)=U_{i}w(p_{i}(\tilde{\pi}_{i}-\epsilon))+U_{j}w(p_{j}(\epsilon))$.
Then,
$\tilde{L}(\pi^{*})=\sum_{z\in\mathcal{X}\setminus\\{i,j\\}}U_{z}w(p_{z}(\tilde{\pi}_{z}))+g(0),\
\tilde{L}(\pi^{(\epsilon)})=\sum_{z\in\mathcal{X}\setminus\\{i,j\\}}U_{z}w(p_{z}(\tilde{\pi}_{z}))+g(\epsilon).$
If $g(\epsilon)<g(0)$, then
$\tilde{L}(\pi^{(\epsilon)})<\tilde{L}(\pi^{*})$, which yields a
positive net cost reduction, meaning that the perturbed strategy strictly
improves on $\pi^{*}$. It is clear that
$\frac{dg}{d\epsilon}=-U_{i}\frac{\partial
w(p_{i}(\pi_{i}))}{\partial\tilde{\pi}_{i}}\bigg{\rvert}_{\pi_{i}=\tilde{\pi}_{i}-\epsilon}+U_{j}\frac{\partial
w(p_{j}(\pi_{j}))}{\partial\tilde{\pi}_{j}}\bigg{\rvert}_{\pi_{j}=\epsilon}.$
Based on $\tilde{\pi}_{i}^{*}>\tilde{\pi}_{i}^{j*}$ and Lemma 2,
$U_{i}\frac{\partial
w(p_{i}(\tilde{\pi}_{i}))}{\partial\tilde{\pi}_{i}}\bigg{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{*}}>U_{j}\frac{\partial
w(p_{j}(\tilde{\pi}_{j}))}{\partial\tilde{\pi}_{j}}\bigg{\rvert}_{\tilde{\pi}_{j}=0}.$
Thus, $\lim_{\epsilon\rightarrow 0}\frac{dg}{d\epsilon}$ is negative,
indicating that $g(\epsilon)$ is decreasing for a sufficiently small
$\epsilon$. Therefore, we obtain
$\tilde{L}(\pi^{(\epsilon)})<\tilde{L}(\pi^{*})$, which is a
contradiction. ∎
Remark: Proposition 2 implies that the super source node first allocates
$\tilde{\pi}_{1}^{2*}$ security resources to target node 1, and then starts to
transfer resources to both target nodes 1 and 2 while maintaining the same
marginal cost until reaching $\tilde{\pi}_{1}^{3*}$ and
$\tilde{\pi}_{2}^{3*}$, respectively. Afterward, in addition to target nodes 1
and 2, target node 3 starts to receive resources, and the marginal costs at
all nodes are kept the same during security resource investment. The resource
allocation scheme proceeds in this fashion until all resources are
transferred. This amounts to a sequential water-filling of security resource
transport over the network.
As the original transport network includes multiple source nodes, we need to
determine the strategy for each of them. The above discussion indicates that
the optimal resource allocation plan can be obtained sequentially, i.e., each
source node completes allocating its security resources to the targets in
sequential order. Specifically, source node 1 first transfers its resources
to target node 1. If $\bar{q}_{1}<\tilde{\pi}_{1}^{2*}$, then the subsequent
source nodes (nodes 2, 3, etc.) continue allocating resources to target node 1
until it has received $\tilde{\pi}_{1}^{2*}$ units of resources. If
$\bar{q}_{1}>\tilde{\pi}_{1}^{2*}$, then source node 1 first allocates
$\tilde{\pi}_{1}^{2*}$ units of resources to target node 1, and then starts
transferring its remaining resources to both targets 1 and 2 while
maintaining the same marginal cost at both nodes. After source node 1 completes
its resource transport, source node 2 starts to transfer its resources to the
appropriate targets in a similar manner. This process continues until all
source nodes have finished allocating their resources to the targets.
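The water-filling scheme above can be sketched as a bisection on the common marginal-benefit level of the aggregated super source, here for the exponential probability function (5) and the assumed Prelec form $w(p)=\exp(-(-\log p)^{\gamma})$; all parameter values are illustrative.

```python
import math

GAMMA, R = 0.6, 1.5  # illustrative behavioral and baseline parameters

def benefit(U, pi):
    """|marginal cost| U*|dw(p(pi))/dpi| for p(pi) = exp(-(pi + R)) and the
    assumed Prelec w; strictly decreasing in pi."""
    s = pi + R
    return U * GAMMA * s ** (GAMMA - 1) * math.exp(-s ** GAMMA)

def alloc_at_level(U, mu, hi=1e4):
    """Resources a node of value U absorbs before its marginal benefit drops to mu."""
    if benefit(U, 0.0) <= mu:
        return 0.0
    lo = 0.0
    for _ in range(200):                  # bisection on benefit(U, pi) = mu
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if benefit(U, mid) > mu else (lo, mid)
    return 0.5 * (lo + hi)

def water_fill(values, budget):
    """Equalize marginal benefits across funded targets (sequential water-filling)."""
    lo, hi = 1e-12, max(benefit(U, 0.0) for U in values)
    for _ in range(200):                  # bisection on the common level mu
        mu = 0.5 * (lo + hi)
        total = sum(alloc_at_level(U, mu) for U in values)
        lo, hi = (mu, hi) if total > budget else (lo, mu)
    return [alloc_at_level(U, 0.5 * (lo + hi)) for U in values]

pis = water_fill([12.0, 9.0, 5.0, 3.0, 2.0], budget=14.0)
assert abs(sum(pis) - 14.0) < 1e-3                # budget exhausted
assert all(a >= b for a, b in zip(pis, pis[1:]))  # higher value -> more resources
```

The resulting allocation is non-increasing in the node index, matching Proposition 1, and every funded node ends at the same marginal-benefit level.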
### IV-C Behavioral Impacts on Security Resource Allocation
The impact of incorporating the behavioral element to probability perception
is captured by the parameter $\gamma$. Clearly, there is no behavioral
consideration when $\gamma=1$, and the probabilities are perceived as their
actual values. When $\gamma\in(0,1)$, we have the following result on the
behavioral impacts on the resource allocation plan.
###### Proposition 3.
Under Assumptions 1 and 2, $p_{x}(0)<\frac{1}{e}$, $\forall x\in\mathcal{X}$,
and a complete transport network, ${d\tilde{\pi}_{i}^{j*}}/{d\gamma}<0$ for
$i<j$, $\forall i,j\in\mathcal{X}$, with $\tilde{\pi}_{i}^{j*}$ defined in
(8).
###### Proof.
Based on (2) and (8), we have
$\displaystyle
U_{i}\gamma(-\log(p_{i}(\tilde{\pi}_{i}^{j*})))^{(\gamma-1)}w(p_{i}(\tilde{\pi}_{i}^{j*}))\frac{\partial
p_{i}(\tilde{\pi}_{i})}{\partial\tilde{\pi}_{i}}\bigg{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}\frac{1}{p_{i}(\tilde{\pi}_{i}^{j*})}$
(9)
$\displaystyle=U_{j}\gamma(-\log(p_{j}(0)))^{(\gamma-1)}w(p_{j}(0))\frac{\partial
p_{j}(\tilde{\pi}_{j})}{\partial\tilde{\pi}_{j}}\bigg{\rvert}_{\tilde{\pi}_{j}=0}\frac{1}{p_{j}(0)}.$
Based on (9), we can characterize the sensitivity of the amount of security
resources transported to each target over the behavioral parameter $\gamma$.
Taking log of each side of (9) and differentiating with respect to $\gamma$
yield
$\displaystyle\frac{d\tilde{\pi}_{i}^{j*}}{d\gamma}=\frac{((-\log(p_{i}(\tilde{\pi}_{i}^{j*})))^{\gamma}-1)\log(-\log(p_{i}(\tilde{\pi}_{i}^{j*})))}{\Lambda_{i}^{j}}$
(10)
$\displaystyle-\frac{((-\log(p_{j}(0)))^{\gamma}-1)\log(-\log(p_{j}(0)))}{\Lambda_{i}^{j}},$
where
$\Lambda_{i}^{j}=(\gamma-1-\gamma(-\log(p_{i}(\tilde{\pi}_{i}^{j*})))^{\gamma})\frac{\partial
p_{i}(\tilde{\pi}_{i})}{\partial\tilde{\pi}_{i}}\bigg{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}\cdot\frac{1}{p_{i}(\tilde{\pi}_{i}^{j*})\log(p_{i}(\tilde{\pi}_{i}^{j*}))}+\frac{p_{i}(\tilde{\pi}_{i}^{j*})\frac{\partial^{2}p_{i}(\tilde{\pi}_{i})}{\partial\tilde{\pi}_{i}^{2}}\big{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}-\big{(}\frac{\partial
p_{i}(\tilde{\pi}_{i})}{\partial\tilde{\pi}_{i}}\big{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}\big{)}^{2}}{\frac{\partial
p_{i}(\tilde{\pi}_{i})}{\partial\tilde{\pi}_{i}}\big{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}\cdot
p_{i}(\tilde{\pi}_{i}^{j*})}.$ Under the assumption that
$p_{j}(0)<\frac{1}{e}$ and $p_{i}(\tilde{\pi}_{i}^{j*})<p_{j}(0)$ for
$\tilde{\pi}_{i}^{j*}>0$, we have
$-\log(p_{i}(\tilde{\pi}_{i}^{j*}))>-\log(p_{j}(0))>1$ and thus
$\log(-\log(p_{i}(\tilde{\pi}_{i}^{j*})))>\log(-\log(p_{j}(0)))>0$ and
$(-\log(p_{i}(\tilde{\pi}_{i}^{j*})))^{\gamma}-1>(-\log(p_{j}(0)))^{\gamma}-1$.
Hence, the numerator of (10) is positive. From Assumption 1, we have
$\frac{\partial
p_{i}(\tilde{\pi}_{i})}{\partial\tilde{\pi}_{i}}\bigg{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}<0$
and because $p_{i}(\tilde{\pi}_{i})$ is log-convex,
$p_{i}(\tilde{\pi}_{i}^{j*})\frac{\partial^{2}p_{i}(\tilde{\pi}_{i})}{\partial\tilde{\pi}_{i}^{2}}\big{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}\geq\big{(}\frac{\partial
p_{i}(\tilde{\pi}_{i})}{\partial\tilde{\pi}_{i}}\big{\rvert}_{\tilde{\pi}_{i}=\tilde{\pi}_{i}^{j*}}\big{)}^{2}$.
Thus, the denominator of (10) is negative, which yields
${d\tilde{\pi}_{i}^{j*}}/{d\gamma}<0$. ∎
Remark: The above analysis, together with Proposition 1 indicate that when the
behavioral misperception on the attack success probability is considered, the
sources will supply security resources to fewer target nodes than the optimal
strategy obtained under the non-behavioral counterpart. In other words, the
behavioral security resource owners prefer to secure higher-valued assets
while paying less attention to the relatively lower-valued targets, which
leads to a discriminative resource allocation scheme compared with the one
developed under the fully rational scenario.
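Proposition 3 can be illustrated numerically by solving (8) for the threshold $\tilde{\pi}_{1}^{2*}$ at several values of $\gamma$, again for the exponential probability function (5) with the assumed Prelec form $w(p)=\exp(-(-\log p)^{\gamma})$ and illustrative parameters; note that $p(0)=e^{-R}<1/e$ holds whenever $R>1$, as Proposition 3 requires.

```python
import math

U1, U2, R = 12.0, 9.0, 1.5   # illustrative values; R > 1 ensures p(0) < 1/e

def benefit(U, pi, gamma):
    """U * |dw(p(pi))/dpi| for p(pi) = exp(-(pi + R)) and the assumed Prelec w."""
    s = pi + R
    return U * gamma * s ** (gamma - 1) * math.exp(-s ** gamma)

def threshold(gamma, hi=1e3):
    """Solve (8): resources node 1 absorbs before node 2 becomes competitive,
    i.e., benefit(U1, pi, gamma) = benefit(U2, 0, gamma)."""
    target, lo = benefit(U2, 0.0, gamma), 0.0
    for _ in range(200):                 # bisection; benefit decreases in pi
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if benefit(U1, mid, gamma) > target else (lo, mid)
    return 0.5 * (lo + hi)

# Stronger misperception (smaller gamma) raises the threshold, so fewer
# targets get funded for a fixed budget -- consistent with Proposition 3.
assert threshold(0.5) > threshold(0.7) > threshold(0.9)
```

As $\gamma$ decreases, the threshold grows, so a fixed budget is exhausted on fewer, higher-valued targets.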
## V Distributed Algorithm for Bounded Rational Security Investment
This section aims to develop a distributed computational scheme to obtain the
behavioral security investment strategy.
In the established framework, the objective function can also incorporate the
preferences of the source nodes in the security resource transport design in
addition to the cost of the target nodes. The utility function of source node
$y$ on transferring $\pi_{xy}$ security resources to target node $x$ is
denoted by $s_{xy}:\mathbb{R}_{+}\rightarrow\mathbb{R}$. In addition, to
balance the security resource allocation, the planner imposes an upper bound
$\bar{p}_{x}\in\mathbb{R}_{+}$ on the security resources that each target node
$x\in\mathcal{X}$ can receive from its connected sources, i.e.,
$\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}\leq\bar{p}_{x}$. To this end, the system
planner aims to address:
$\displaystyle\mathrm{(OP-B):}\quad\min_{\Pi}$
$\displaystyle\sum_{x\in\mathcal{X}}U_{x}w(p_{x}(\Pi_{x}))-\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}\tau_{y}s_{xy}(\pi_{xy})$
$\displaystyle\mathrm{s.t.}$ $\displaystyle
0\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}\leq\bar{p}_{x},\ \forall
x\in\mathcal{X},$ $\displaystyle
0\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy}\leq\bar{q}_{y},\ \forall
y\in\mathcal{Y},$ $\displaystyle\pi_{xy}\geq 0,\
\forall\\{x,y\\}\in\mathcal{E},$
where $\tau_{y}\in\mathbb{R}_{+}$ is a positive weighting factor balancing the
loss of the targets and the utility of the sources under a given security
allocation strategy. It is straightforward to observe that as
$\tau_{y}\rightarrow 0,\ \forall y$, the solution to (OP-B) converges to
that of (OP-A), provided that the targets have no constraint on the maximum
received security resources.
As the resource transport network becomes complex with a large number of
participating nodes, a centralized scheme to compute the optimal solution to
(OP-B) can be computationally expensive. In addition, the centralized
optimization paradigm requires the planner to collect heterogeneous
information from all source and target nodes, including their utility
parameters, supply and demand upper bounds, degree of misperception on attack
success, and value of targets, which does not preserve a desirable level of
privacy. Due to the above two concerns, it is necessary to devise a
distributed and privacy-preserving scheme to obtain the behavioral resource
allocation strategy over a large-scale network.
To facilitate the development of such an algorithm, we first introduce two
ancillary variables $\pi_{xy}^{t}$ and $\pi_{xy}^{s}$. The superscripts $t$
and $s$ indicate that the corresponding parameter belongs to a target and
source node, respectively. We then set $\pi_{xy}=\pi_{xy}^{t}$ and
$\pi_{xy}=\pi_{xy}^{s}$, indicating that the solutions proposed by the targets
and sources are consistent. This reformulation facilitates the design of a
distributed algorithm which allows us to iterate to obtain the optimal
strategy. To this end, the reformulated problem is presented as follows:
$\displaystyle\min_{\Pi^{t}\in\mathcal{F}^{t},\Pi^{s}\in\mathcal{F}^{s},\Pi}$
$\displaystyle\sum_{x\in\mathcal{X}}U_{x}w(p_{x}(\Pi_{x}^{t}))-\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}\tau_{y}s_{xy}(\pi_{xy}^{s})$
(11) $\displaystyle\mathrm{s.t.}$ $\displaystyle\pi_{xy}^{t}=\pi_{xy},\
\forall\\{x,y\\}\in\mathcal{E},$ $\displaystyle\pi_{xy}^{s}=\pi_{xy},\
\forall\\{x,y\\}\in\mathcal{E},$
where $\Pi^{t}:=\\{\pi_{xy}^{t}\\}_{x\in\mathcal{X}_{y},y\in\mathcal{Y}}$,
$\Pi^{s}:=\\{\pi_{xy}^{s}\\}_{x\in\mathcal{X},y\in\mathcal{Y}_{x}}$,
$\mathcal{F}^{t}:=\\{\Pi^{t}|\pi_{xy}^{t}\geq
0,\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}^{t}\leq\bar{p}_{x},\
\\{x,y\\}\in\mathcal{E}\\}$, and $\mathcal{F}^{s}:=\\{\Pi^{s}|\pi_{xy}^{s}\geq
0,\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy}^{s}\leq\bar{q}_{y},\
\\{x,y\\}\in\mathcal{E}\\}$.
We resort to the alternating direction method of multipliers (ADMM) [6] to
develop a distributed computational algorithm. First, let $\alpha_{xy}^{s}$ and
$\alpha_{xy}^{t}$ be the Lagrangian multipliers associated with the constraint
$\pi_{xy}^{s}=\pi_{xy}$ and $\pi_{xy}^{t}=\pi_{xy}$, respectively. The
Lagrangian function associated with the optimization problem (11) can then be
written as follows:
$\displaystyle\mathcal{L}$
$\displaystyle(\Pi^{t},\Pi^{s},\Pi,\alpha_{xy}^{t},\alpha_{xy}^{s})$ (12)
$\displaystyle=\sum_{x\in\mathcal{X}}U_{x}w(p_{x}(\Pi_{x}^{t}))-\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}\tau_{y}s_{xy}(\pi_{xy}^{s})$
$\displaystyle+\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy}^{t}\left(\pi_{xy}^{t}-\pi_{xy}\right)+\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}\alpha_{xy}^{s}\left(\pi_{xy}-\pi_{xy}^{s}\right)$
$\displaystyle+\frac{\eta}{2}\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}\left(\pi_{xy}^{t}-\pi_{xy}\right)^{2}+\frac{\eta}{2}\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}\left(\pi_{xy}-\pi_{xy}^{s}\right)^{2},$
where $\eta$ is a positive constant controlling the convergence. We have the
following result on the distributed algorithm.
###### Proposition 4.
The iterative steps of ADMM to solve (OP-B) are summarized as follows:
$\begin{split}\Pi_{x}^{t}(k+1)&\in\arg\min_{\Pi_{x}^{t}\in\mathcal{F}_{x}^{t}}U_{x}w(p_{x}(\Pi_{x}^{t}))\\\
&+\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy}^{t}(k)\pi_{xy}^{t}+\frac{\eta}{2}\sum_{y\in\mathcal{Y}_{x}}(\pi_{xy}^{t}-\pi_{xy}(k))^{2},\end{split}$
(13) $\displaystyle\Pi_{y}^{s}(k+1)$
$\displaystyle\in\arg\min_{\Pi_{y}^{s}\in\mathcal{F}_{y}^{s}}-\sum_{x\in\mathcal{X}_{y}}\tau_{y}s_{xy}(\pi_{xy}^{s})$
(14)
$\displaystyle-\sum_{x\in\mathcal{X}_{y}}\alpha_{xy}^{s}(k)\pi_{xy}^{s}+\frac{\eta}{2}\sum_{x\in\mathcal{X}_{y}}(\pi_{xy}(k)-\pi_{xy}^{s})^{2},$
$\begin{split}\pi_{xy}(&k+1)=\arg\min_{\pi_{xy}}-\alpha_{xy}^{t}(k)\pi_{xy}+\alpha_{xy}^{s}(k)\pi_{xy}\\\
&+\frac{\eta}{2}(\pi_{xy}^{t}(k+1)-\pi_{xy})^{2}+\frac{\eta}{2}(\pi_{xy}-\pi_{xy}^{s}(k+1))^{2},\end{split}$
(15)
$\begin{split}\alpha_{xy}^{t}(k+1)=\alpha_{xy}^{t}(k)+\eta(\pi_{xy}^{t}(k+1)-\pi_{xy}(k+1)),\end{split}$
(16)
$\begin{split}\alpha_{xy}^{s}(k+1)=\alpha_{xy}^{s}(k)+\eta(\pi_{xy}(k+1)-\pi_{xy}^{s}(k+1)),\end{split}$
(17)
where
$\Pi_{\tilde{x}}^{t}:=\\{\pi_{xy}^{t}\\}_{y\in\mathcal{Y}_{x},x=\tilde{x}}$
represents the solution at target node $\tilde{x}\in\mathcal{X}$, and
$\Pi_{\tilde{y}}^{s}:=\\{\pi_{xy}^{s}\\}_{x\in\mathcal{X}_{y},y=\tilde{y}}$
represents the proposed solution at source node $\tilde{y}\in\mathcal{Y}$. In
addition, $\mathcal{F}_{x}^{t}:=\\{\Pi_{x}^{t}|\pi_{xy}^{t}\geq
0,y\in\mathcal{Y}_{x},\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}^{t}\leq\bar{p}_{x}\\}$,
and $\mathcal{F}_{y}^{s}:=\\{\Pi_{y}^{s}|\pi_{xy}^{s}\geq
0,x\in\mathcal{X}_{y},\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy}^{s}\leq\bar{q}_{y}\\}$.
###### Proof.
The proof follows similarly to the proof of Proposition 1 in [17]. ∎
The iterations in (13)-(17) can be further simplified to four iterations.
###### Proposition 5.
The iterations (13)-(17) can be simplified as follows:
$\begin{split}\Pi_{x}^{t}(k+1)&\in\arg\min_{\Pi_{x}^{t}\in\mathcal{F}_{x}^{t}}U_{x}w(p_{x}(\Pi_{x}^{t}))\\\
&+\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy}(k)\pi_{xy}^{t}+\frac{\eta}{2}\sum_{y\in\mathcal{Y}_{x}}(\pi_{xy}^{t}-\pi_{xy}(k))^{2},\end{split}$
(18) $\displaystyle\Pi_{y}^{s}(k+1)$
$\displaystyle\in\arg\min_{\Pi_{y}^{s}\in\mathcal{F}_{y}^{s}}-\sum_{x\in\mathcal{X}_{y}}\tau_{y}s_{xy}(\pi_{xy}^{s})$
(19)
$\displaystyle-\sum_{x\in\mathcal{X}_{y}}\alpha_{xy}(k)\pi_{xy}^{s}+\frac{\eta}{2}\sum_{x\in\mathcal{X}_{y}}(\pi_{xy}(k)-\pi_{xy}^{s})^{2},$
$\begin{split}\pi_{xy}(k+1)=\frac{1}{2}\left(\pi_{xy}^{t}(k+1)+\pi_{xy}^{s}(k+1)\right),\end{split}$
(20)
$\begin{split}\alpha_{xy}(k+1)=\alpha_{xy}(k)+\frac{\eta}{2}\left(\pi_{xy}^{t}(k+1)-\pi_{xy}^{s}(k+1)\right).\end{split}$
(21)
###### Proof.
The proof follows similarly to the proof of Proposition 2 in [17]. ∎
We summarize the results into the following Algorithm 1.
Algorithm 1 Distributed Algorithm
1:while $\Pi_{x}^{t}$ and $\Pi_{y}^{s}$ not converging do
2: Compute $\Pi_{x}^{t}(k+1)$ using (18), for all $x\in\mathcal{X}$
3: Compute $\Pi_{y}^{s}(k+1)$ using (19), for all $y\in\mathcal{Y}$
4: Compute $\pi_{xy}(k+1)$ using (20), for all $\\{x,y\\}\in\mathcal{E}$
5: Compute $\alpha_{xy}(k+1)$ using (21), for all $\\{x,y\\}\in\mathcal{E}$
6:end while
7:return $\pi_{xy}(k+1)$, for all $\\{x,y\\}\in\mathcal{E}$
Remark: The developed algorithm can be interpreted as a negotiation process
between each pair of connected resource owner and target node on the security
resource allocation scheme, which does not require a central planner. The
final outcome indicates that the negotiation reaches a consensus.
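A minimal sketch of the simplified iterations (18)-(21) on a toy instance (one source, two targets, linear source utilities, exponential $p$ from (5), and the assumed Prelec form $w(p)=\exp(-(-\log p)^{\gamma})$) is given below. The target subproblem is solved by ternary search; the source subproblem reduces to a Euclidean projection because its quadratic term is isotropic. All parameter values are hypothetical.

```python
import math

GAMMA, R, ETA, TAU = 0.6, 1.5, 1.0, 0.1   # illustrative parameters
U = [12.0, 9.0]                           # target values (Assumption 2)
C = [1.0, 1.0]                            # linear source utility s_xy = C*pi
QBAR, PBAR = 4.0, 4.0                     # source capacity / per-target cap

def perceived_loss(Ux, pi):
    """Ux * w(p(pi)) with p(pi) = exp(-(pi + R)) and the assumed Prelec w."""
    return Ux * math.exp(-((pi + R) ** GAMMA))

def argmin_ternary(f, lo, hi, iters=100):
    """Minimize a convex 1-D function on [lo, hi] by ternary search."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2): hi = m2
        else: lo = m1
    return 0.5 * (lo + hi)

def project_capped_simplex(v, cap):
    """Euclidean projection onto {x >= 0, sum(x) <= cap}."""
    w = [max(vi, 0.0) for vi in v]
    if sum(w) <= cap:
        return w
    u, css, theta = sorted(v, reverse=True), 0.0, 0.0
    for i, ui in enumerate(u, 1):         # standard sorted simplex projection
        css += ui
        t = (css - cap) / i
        if ui - t > 0: theta = t
    return [max(vi - theta, 0.0) for vi in v]

pi, alpha = [0.0, 0.0], [0.0, 0.0]
for _ in range(400):
    # (18): each target solves its own 1-D subproblem
    pi_t = [argmin_ternary(lambda z, x=x: perceived_loss(U[x], z)
            + alpha[x] * z + ETA / 2 * (z - pi[x]) ** 2, 0.0, PBAR)
            for x in range(2)]
    # (19): the source subproblem projects its unconstrained optimum
    pi_s = project_capped_simplex(
        [pi[x] + (TAU * C[x] + alpha[x]) / ETA for x in range(2)], QBAR)
    # (20)-(21): consensus averaging and dual update
    pi = [(pi_t[x] + pi_s[x]) / 2 for x in range(2)]
    alpha = [alpha[x] + ETA / 2 * (pi_t[x] - pi_s[x]) for x in range(2)]

assert all(abs(pi_t[x] - pi_s[x]) < 0.05 for x in range(2))  # consensus reached
assert abs(sum(pi) - QBAR) < 0.1   # capacity binds (linear source utility)
assert pi[0] > pi[1]               # higher-valued target receives more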
## VI Case Studies
In this section, we corroborate the developed results by focusing on the
impacts of the behavioral consideration on the security investment decision-
making. We investigate a transport network consisting of two source nodes
(security resource owners) and five target nodes (assets to protect). We
define the loss parameter at each target as $U_{1}=12$, $U_{2}=9$, $U_{3}=5$,
$U_{4}=3$, and $U_{5}=2$. Additionally, we use Prelec’s probability weighting
function defined in (2) and the probability function shown in (5). We set the
upper bound of security resources to $\bar{q}_{1}=10$ units and
$\bar{q}_{2}=4$ units, meaning the maximum amount of security resources the
sources can invest. The utility functions $s_{xy}$ are taken to be linear.
### VI-A Impact of Behavioral Consideration
We first examine how incorporating the behavioral considerations will impact
the transportation of security resources from the sources to targets. This
involves looking at how the parameter $\gamma$ affects the outcome of the
resource allocation. We expect that as the parameter $\gamma$ goes to one, the
amount of resources received at each target should be the same as when
misperception is not considered, i.e., the objective function would be
$U_{x}(p_{x}(\Pi_{x}))$. Fig. 1 corroborates this result. We can also observe
that the target node with a larger value $U_{x}$ receives more resources,
indicating that the system planner prefers to secure more valuable targets
under a constrained budget. The relationship between the amount of received
resources at targets follows from the order of node’s value $U_{x}$ in
Assumption 2. Additionally, as the behavioral parameter $\gamma$ goes to 1,
the aggregated loss at the targets converges to the value obtained when
misperception is not considered, as shown in Fig. 1. It can also be seen that
the bounded rational security investment strategy is not as efficient as the
one under accurate perception.
Figure 1: Impact of behavioral misperception on the security resource
allocation plan. (a): Impact of $\gamma$ on the amount of resources received
at each target node. The solution converges to the one without misperception
as $\gamma$ goes to 1. (b): Aggregated loss of the targets with varying
$\gamma$. A larger degree of misperception yields a less efficient resource
transport strategy.
### VI-B Performance of Distributed Algorithm
We next show the performance of the proposed distributed algorithm in
Algorithm 1. We use Algorithm 1 to solve the optimization problem in (OP-B).
Fig. 2 shows that the distributed algorithm can efficiently converge to the
centralized optimal solution. We also examine how the parameter $\tau_{y}$
influences the outcome of the transport plan. In the case study, $\tau_{y}$ is
set to be the same at every source node, i.e., $\tau_{y}=\tau,\ \forall
y\in\mathcal{Y}$. We leverage the developed distributed algorithm to compute
the optimal strategy for various $\tau\in[0,1]$, and Fig. 2 depicts the
results. We can observe that when $\tau$ goes to zero in (OP-B), the
aggregated loss of target nodes under the obtained strategy coincides with the
one to (OP-A). This result makes sense as the utility term $s_{xy}$ no longer
plays a role in (OP-B) when $\tau=0$. Another observation is that when $\tau<0.5$,
the loss of targets under the solution to (OP-B) is smaller than its (OP-A)
counterpart. This is because the system planner pays more attention
to minimizing the risks of the targets when $\tau$ is small. As $\tau$ increases,
the system planner cares more about maximizing the utility of the resource owners,
which yields a larger loss of targets, as shown in Fig. 2.
Figure 2: (a): Effectiveness of the distributed algorithm 1. The distributed
algorithm converges to the centralized optimal solution to problem (OP-B).
(b): Impact of weighting constant $\tau$ on the aggregated loss of the
targets. The solution degenerates to the one to problem (OP-A) as $\tau$ goes
to 0.
## VII Conclusion
This paper has developed a behavioral framework for security investments over
a network consisting of multiple source nodes and heterogeneous target nodes.
The behavioral element captures human misperception of the successful attack
probabilities at targets under a given level of security investment.
analysis has shown that the bounded-rational optimal resource allocation
admits a sequential water-filling nature. In addition, we have discovered that
fewer targets will receive security resources under the behavioral paradigm
compared with the non-behavioral setting, revealing the sub-optimal feature of
the strategy due to the behavioral misperception. We have further developed an
efficient distributed algorithm with a convergence guarantee to compute the
resource allocation plan, which is advantageous when the transport network
becomes large and complex. The case studies have corroborated that the
system planner favors the higher-valued targets in bounded rational security
investment, often resulting in lower-valued targets receiving smaller amounts
of resources or none at all. Future work includes extending the current
framework to an adversarial setting and developing resilient security
investment strategies. Another direction is to investigate dynamic resource
allocation over a time horizon with bounded rational agents.
## References
* [1] W. T. Yue, M. Çakanyıldırım, Y. U. Ryu, and D. Liu, “Network externalities, layered protection and IT security risk management,” _Decision Support Systems_ , vol. 44, no. 1, pp. 1–16, 2007.
* [2] Z. Su and Q. Xu, “Security-aware resource allocation for mobile social big data: A matching-coalitional game solution,” _IEEE Transactions on Big Data_ , vol. 7, no. 4, pp. 632–642, 2021.
* [3] D. Kahneman and A. Tversky, “Prospect theory: An analysis of decision under risk,” _Econometrica_ , vol. 47, no. 2, pp. 263–291, 1979.
* [4] A. Tversky and D. Kahneman, “Advances in prospect theory: Cumulative representation of uncertainty,” _Journal of Risk and Uncertainty_ , vol. 5, no. 4, pp. 297–323, 1992.
* [5] R. Zhang and Q. Zhu, “Consensus-based distributed discrete optimal transport for decentralized resource matching,” _IEEE Transactions on Signal and Information Processing over Networks_ , vol. 5, no. 3, pp. 511–524, 2019.
* [6] S. Boyd, N. Parikh, and E. Chu, _Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers_. Now Publishers, 2011.
* [7] M. Azaiez and V. M. Bier, “Optimal resource allocation for security in reliability systems,” _European Journal of Operational Research_ , vol. 181, no. 2, pp. 773–786, 2007.
* [8] M. M. Khalili, X. Zhang, and M. Liu, “Resource pooling for shared fate: Incentivizing effort in interdependent security games through cross-investments,” _IEEE Transactions on Control of Network Systems_ , 2020.
* [9] L. Huang and Q. Zhu, “Adaptive strategic cyber defense for advanced persistent threats in critical infrastructure networks,” _ACM SIGMETRICS Performance Evaluation Review_ , vol. 46, no. 2, pp. 52–56, 2019.
* [10] W. Xing, X. Zhao, T. Basar, and W. Xia, “Security investment in cyber-physical systems: Stochastic games with asymmetric information and resource constrained players,” _IEEE Transactions on Automatic Control_ , vol. 67, no. 10, pp. 5384–5391, 2022.
* [11] R. Yang, C. Kiekintveld, F. Ordóñez, M. Tambe, and R. John, “Improving resource allocation strategies against human adversaries in security games: An extended study,” _Artificial Intelligence_ , vol. 195, pp. 440–469, 2013\.
* [12] M. Abdallah, P. Naghizadeh, T. Cason, S. Bagchi, and S. Sundaram, “Protecting assets with heterogeneous valuations under behavioral probability weighting,” in _IEEE Conference on Decision and Control_ , 2019.
* [13] H. He, A. Chen, M. Yin, Z. Ma, J. You, X. Xie, Z. Wang, and Q. An, “Optimal allocation model of water resources based on the prospect theory,” _Water_ , vol. 11, no. 6, p. 1289, 2019.
* [14] P. Vamvakas, E. E. Tsiropoulou, and S. Papavassiliou, “Exploiting prospect theory and risk-awareness to protect uav-assisted network operation,” _EURASIP Journal on Wireless Communications and Networking_ , vol. 2019, no. 1, pp. 1–20, 2019.
* [15] W. Tian, M. Du, X. Ji, G. Liu, Y. Dai, and Z. Han, “Honeypot detection strategy against advanced persistent threats in industrial Internet of things: a prospect theoretic game,” _IEEE Internet of Things Journal_ , vol. 8, no. 24, pp. 17 372–17 381, 2021.
* [16] Z. Xiong, S. Feng, D. Niyato, P. Wang, and Z. Han, “Optimal pricing-based edge computing resource management in mobile blockchain,” in _IEEE International Conference on Communications (ICC)_ , 2018, pp. 1–6.
* [17] J. Hughes and J. Chen, “Fair and distributed dynamic optimal transport for resource allocation over networks,” in _55th Annual Conference on Information Sciences and Systems (CISS)_ , 2021.
* [18] ——, “Resilient and distributed discrete optimal transport with deceptive adversary: A game-theoretic approach,” in _IEEE Control System Letters_ , 2022, pp. 1166–1171.
* [19] M. J. Farooq and Q. Zhu, “Adaptive and resilient revenue maximizing dynamic resource allocation and pricing for cloud-enabled iot systems,” in _American Control Conference (ACC)_ , 2018, pp. 5292–5297.
* [20] D. Prelec, “The probability weighting function,” _Econometrica_ , vol. 66, no. 3, pp. 497–527, 1998.
### -A Proof of Lemma 1
Here, we show that the second derivative of $w(p_{x}(\Pi_{x}))$ with respect
to $\pi_{xy}$, $\forall y\in\mathcal{Y}_{x}$, is positive. Using the
probability function in (2), we have
$\displaystyle\frac{\partial^{2}w(p_{x}(\Pi_{x}))}{\partial\pi_{xy}^{2}}=-\gamma(\gamma-1)(-\log(p_{x}(\Pi_{x})))^{\gamma-2}w(p_{x}(\Pi_{x}))$
$\displaystyle\cdot\left(\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\Big{/}p_{x}(\Pi_{x})\right)^{2}+\gamma(-\log(p_{x}(\Pi_{x})))^{\gamma-1}w(p_{x}(\Pi_{x}))$
$\displaystyle\cdot\frac{p_{x}(\Pi_{x})\cdot\frac{\partial^{2}p_{x}(\Pi_{x})}{\partial\pi_{xy}^{2}}-\big{(}\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\big{)}^{2}}{(p_{x}(\Pi_{x}))^{2}}+\Big{(}\gamma(-\log(p_{x}(\Pi_{x})))^{\gamma-1}$
$\displaystyle\cdot\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\Big{/}p_{x}(\Pi_{x})\Big{)}^{2}w(p_{x}(\Pi_{x})).$
The first and third terms are positive by Assumption 1. The second term may or
may not be positive, depending on the sign of
$(p_{x}(\Pi_{x})\cdot\frac{\partial^{2}p_{x}(\Pi_{x})}{\partial\pi_{xy}^{2}}-\big{(}\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\big{)}^{2})/(p_{x}(\Pi_{x}))^{2}$. If the
second term is positive, then the second derivative is positive and thus the
function is convex. If the term is negative, we need to show that:
$\displaystyle\Big{[}-\gamma(\gamma-1)(-\log(p_{x}(\Pi_{x})))^{\gamma-2}\left(\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\Big{/}p_{x}(\Pi_{x})\right)^{2}$
$\displaystyle+\Big{(}\gamma(-\log(p_{x}(\Pi_{x})))^{\gamma-1}\cdot\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\Big{/}p_{x}(\Pi_{x})\Big{)}^{2}\Big{]}$
$\displaystyle\cdot
w(p_{x}(\Pi_{x}))>\gamma(-\log(p_{x}(\Pi_{x})))^{\gamma-1}w(p_{x}(\Pi_{x}))$
$\displaystyle\cdot\frac{p_{x}(\Pi_{x})\cdot\frac{\partial^{2}p_{x}(\Pi_{x})}{\partial\pi_{xy}^{2}}-\big{(}\frac{\partial
p_{x}(\Pi_{x})}{\partial\pi_{xy}}\big{)}^{2}}{(p_{x}(\Pi_{x}))^{2}}.$
After cancelling the common factor $w(p_{x}(\Pi_{x}))$ and some algebraic
manipulation, it is straightforward to verify that this inequality holds. Thus,
the objective function is strictly convex.
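The convexity claim can also be sanity-checked numerically. The sketch below is an illustration, not part of the proof: it evaluates a Prelec-type weighting function $w(p)=\exp(-(-\log p)^{\gamma})$, consistent with the form used above, and verifies that a finite-difference second derivative of $w$ in $p$ is positive at a sample point. The values $\gamma=0.5$ and $p=0.9$ are assumed for illustration; convexity of the full composition with $p_{x}(\Pi_{x})$ additionally requires Assumption 1.

```python
import math

def w(p, gamma=0.5):
    """Prelec-type probability weighting function (assumed form)."""
    return math.exp(-(-math.log(p)) ** gamma)

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# For p > 1/e the Prelec function is convex, so the finite-difference
# second derivative should be positive at the sample point below.
assert second_derivative(w, 0.9) > 0.0
```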
# The Thermodynamic Origins of Chiral Twist in Monolayer Assemblies of Hard
Rod-like Colloids
Yawei Liu <EMAIL_ADDRESS> ARC Centre of Excellence in Exciton
Science, School of Chemistry, University of Sydney, Sydney, New South Wales
2006, Australia Beijing Key Laboratory of Ionic Liquids Clean Process, CAS
Key Laboratory of Green Process and Engineering, State Key Laboratory of
Multiphase Complex Systems, Institute of Process Engineering, Chinese Academy
of Sciences, Beijing, 100190, China Jared A. Wood ARC Centre of Excellence
in Exciton Science, School of Chemistry, University of Sydney, Sydney, New
South Wales 2006, Australia The University of Sydney Nano Institute,
University of Sydney, Sydney, New South Wales 2006, Australia Achille
Giacometti Dipartimento di Scienze Molecolari e Nanosistemi, Università Ca’
Foscari di Venezia, Campus Scientifico, Edificio Alfa, via Torino 155, 30170
Venezia Mestre, Italy European Centre for Living Technology (ECLT), Ca’
Bottacin, 3911 Dorsoduro Calle Crosera, 30123 Venice, Italy Asaph
Widmer-Cooper <EMAIL_ADDRESS> ARC Centre of Excellence in Exciton
Science, School of Chemistry, University of Sydney, Sydney, New South Wales
2006, Australia The University of Sydney Nano Institute, University of
Sydney, Sydney, New South Wales 2006, Australia
###### Abstract
The propagation of chirality across scales is a common but poorly understood
phenomenon in soft matter. In this work, we use computer simulations to study
chiral monolayer assemblies formed by hard rod-like colloidal particles in the
presence of non-adsorbing polymer and characterize the thermodynamic driving
forces responsible for the twisting. Simulations show that straight (achiral)
rods assemble into monolayers with a spontaneous twist that is either left- or
right-handed, while helical (chiral) rods lead to assemblies with preferential
chiral features that depend on their handedness and curliness. The onset of
chirality in these monolayers can be traced back to small clusters formed at
the initial stage of the self-assembly. In these microscopic monolayers,
entropy drives twisting in ways that differ from the assumptions on which
existing continuum theory is built. Depending on the geometry of the
constituent rods, the preferred chiral twist can be driven by entropy gain of
the polymers, or of the rods, or both. In addition, the variation of the
polymer entropy with twist depends on changes in both the surface area and the
volume of the monolayer. Rod fluctuations perpendicular to the monolayer also
play an important role in stabilising the twisting.
## I Introduction
Colloidal suspensions composed of anisotropic particles can undergo self-
assembly that involves the propagation of chirality from the single-particle
level to the macroscopic level, and so have emerged as a versatile platform
for understanding this common phenomenon in soft matter Barry _et al._
(2006); Tombolato _et al._ (2006); Greco and Ferrarini (2015); Dussi and
Dijkstra (2016); Sharma _et al._ (2020). Cholesteric liquid crystals are a
well-known example, but there are many others. This includes the behaviour of
colloidal suspensions of DNA, viruses, peptides, polysaccharides and various
synthetic nanoparticles Siavashpouri _et al._ (2017); Miller _et al._
(2019); Nyström _et al._ (2018a, b); Aggeli _et al._ (2001); Lv _et al._
(2022); Fernández-Rico _et al._ (2020). While discussion continues about the
physical levers that can be used to control the phenomenon Harris _et al._
(1999); Sharma _et al._ (2009); Morrow _et al._ (2017); Wang _et al._
(2017); Yeom _et al._ (2015); Sun _et al._ (2018); Chiappini _et al._
(2019), its potential application in areas including optics, catalysis, and
sensing is already being explored Hentschel _et al._ (2017); Li _et al._
(2019); Hao _et al._ (2020), and will likely accelerate in light of advances
in the synthesis of anisotropic and chiral nanoparticles Glotzer and Solomon
(2007); Lee _et al._ (2018); González-Rubio _et al._ (2020); Fernández-Rico
_et al._ (2020). It is therefore important to have a better understanding of
the forces which control the propagation of chirality in these systems, and
which can even drive changes in surface topology Khanra _et al._ (2022).
A typical example of such a colloidal suspension is a mixture of rod-like
particles and non-adsorbing polymers in a good solvent. In these rod-polymer
mixtures, the polymers, which behave as random coils with a radius of gyration
$r_{g}$, can provide an effective attraction between the rods via depletion
forces Asakura and Oosawa (1954), and thus drive the rods to assemble into
diverse ordered structures Lekkerkerker and Stroobants (1994); Siavashpouri
_et al._ (2019). For example, two-dimensional colloidal membranes can form in
a suspension of filamentous viruses and dextran Barry and Dogic (2010). These
colloidal membranes are liquid-like monolayer assemblies, and often have a
round-shaped edge in which the constituent viruses are twisted and exhibit a
chiral distribution of their orientations Gibaud _et al._ (2012, 2017).
This chiral twist is characteristic of these nearly two-dimensional systems
and is very different from the more common cholesteric twist observed in bulk
(i.e., three-dimensional) chiral assembly Straley (1976). The former is
commonly known as “double twist” to distinguish it from the cholesteric
(single) twist. The double twist cannot be spatially uniform in the bulk and
always occurs with other deformations, with typical examples being the twist-
bend and splay-twist textures Selinger (2018). While the driving mechanism for
the cholesteric twist is relatively well understood, it remains elusive for
the double twist in colloidal membranes and has so far defied a complete
explanation, notwithstanding recent attempts. For instance, an entropically-
motivated continuum theory has been developed to explain the experimental
behaviour of these colloidal membranes Kang _et al._ (2016a). Briefly, the
entropy, manifested through the viruses as Frank elastic energy for the twist
distortion and through the polymers as an effective surface tension for the
excluded volume, drives the chiral twist of the membranes. This description
further assumes that the membranes are incompressible in the continuum limit.
To test the generality of this theory, and to serve as an important
complementary tool to interpret experimental results, it would be useful to
study such membranes using a particle-based simulation approach. This would be
especially useful for analyzing small clusters formed at the onset of the
self-assembly process where continuum descriptions often break down. To our
knowledge, however, existing simulation studies of twisted membranes have been
limited to the case of achiral rods which lack intrinsic chirality Gibaud _et
al._ (2012). In this work, we therefore study membranes formed by both achiral
rods and chiral rod-like helices Frezza _et al._ (2013) using Langevin
dynamics (LD) simulations and characterise the thermodynamic driving forces
responsible for the propagation of chirality in these systems.
## II Model and Method
### II.1 Models for rod-polymer suspensions
The rod-polymer suspensions were described using a continuous potential model
that approximates the well-known Asakura-Oosawa-Vrij (AO) model Asakura and
Oosawa (1954); Liu and Widmer-Cooper (2019): (achiral) straight rods,
described as hard spherocylinders, were represented by a rigid linear chain of
length $L$ consisting of overlapping hard spheres of diameter $D$ (Fig. 1a);
chiral rods, described as hard helices, were modeled as a set of hard spheres
having diameter $D$ evenly arranged along a helical line of contour length
$L$, pitch $p$ and radius $r$ (Fig. 1b); and the non-adsorbing polymers were
modelled as spheres with diameter $d=2r_{g}$ that are freely interpenetrable
to each other but experience a hard repulsion from the rod spheres. For
simplicity, we set the diameter of polymer spheres $d=D$. In our simulations,
the hard-core potential between rod-rod (rr) and between rod-polymer (rp)
sphere pairs was replaced by a continuous pseudo-hard-core potential, i.e.,
$U^{\alpha\beta}(r)=50(50/49)^{49}\epsilon[(\sigma/r)^{50}-(\sigma/r)^{49}]$
($\alpha\beta\in\\{\text{rr, rp}\\}$) truncated and shifted at
$r^{\alpha\beta}_{cut}=(50/49)\sigma$, where $r$ is the centre-to-centre
distance between the spheres, $\epsilon$ is the energy parameter, and $\sigma$
is the distance parameter with $\sigma=D$. In addition, for all rods used in
this work, the distance between consecutive spheres is $0.5D$, which is
sufficient to remove artifacts associated with surface roughness (see Appendix A).
While an implicit polymer model for (achiral) straight rods such as that in
Refs. Savenko and Dijkstra (2006); Cherstvy (2008); Patti and Dijkstra (2009)
can allow us to simulate large systems, the corresponding model for helical
rods is lacking and developing an accurate implicit polymer model for hard
helices, especially in the case of large polymers, could be quite challenging
Wood _et al._ (2021).
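The pseudo-hard-core potential above is simple to reproduce. The following is a minimal NumPy sketch (the function name and vectorised form are ours); note that the truncated-and-shifted potential has its minimum of $-\epsilon$ exactly at $r_{cut}=(50/49)\sigma$, so the shifted form is purely repulsive:

```python
import numpy as np

def pseudo_hard_core(r, epsilon=1.0, sigma=1.0):
    """Continuous pseudo-hard-core potential for rod-rod and rod-polymer
    sphere pairs, truncated and shifted at its minimum r_cut = (50/49) sigma
    so that it is purely repulsive."""
    r = np.asarray(r, dtype=float)
    r_cut = (50.0 / 49.0) * sigma
    prefac = 50.0 * (50.0 / 49.0) ** 49 * epsilon
    u = prefac * ((sigma / r) ** 50 - (sigma / r) ** 49)
    u_shift = -epsilon  # value of the untruncated potential at r_cut
    return np.where(r < r_cut, u - u_shift, 0.0)
```

With $\epsilon=\sigma=1$ this gives $U(\sigma)=\epsilon$ and $U(r)=0$ for $r\geq r_{cut}$, mimicking a hard core of diameter close to $D$.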
### II.2 Langevin dynamics simulation details
All LD simulations were carried out using LAMMPS Plimpton (1995) at a
dimensionless temperature $k_{B}T/\epsilon=1$ (where $k_{B}$ is the Boltzmann
constant and $T$ is the temperature). In the simulations, rod and polymer
spheres are subjected to three forces: the conservative force $f^{C}$ computed
via the pairwise interactions (i.e., the pseudo-hard-core potential); the
friction force $f^{F}=-(m/\gamma)v$ with $m$ the mass, $\gamma$ the damping
factor, and $v$ the velocity of the sphere; and the random force
$f^{R}\propto\sqrt{k_{B}Tm/(\Delta t\gamma)}$ with $\Delta t$ the time step.
All simulations were performed in a box with periodic boundary conditions. The
velocity-Verlet algorithm was used to integrate the equations of motion with a
time step $\Delta t=0.001\tau$ where $\tau=D\sqrt{m/(k_{B}T)}$, and the
damping factor was set to be $\gamma=1\tau$. In all simulations, we set the
masses of one polymer and one rod $m_{p}=m_{r}=m=1$. Simulations of large
monolayers were performed in an isothermal-isobaric ($NPT$) ensemble. A
Berendsen barostat with a time constant of $1\tau$ was applied. Most
simulations were initialised with $N_{r}=480$ rods in a single hexagonally
packed layer surrounded by $N_{p}=40000$ polymer spheres in a box with initial
dimensions $44\times 44\times 21D^{3}$. Initial configurations with different
chiral twists were also used to confirm that only one handedness was stable.
At least $10$ independent simulations with different initial configurations
were performed for each rod shape, and all simulations were run for at least
$5\times 10^{6}$ steps to collect enough configurations at the equilibrium
state.
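The three per-sphere forces can be sketched as follows (our own illustration, not the LAMMPS implementation; the function name is ours, and Gaussian noise is assumed for the random force, whose magnitude follows the scaling given in the text):

```python
import numpy as np

def langevin_forces(v, rng, m=1.0, gamma=1.0, kT=1.0, dt=0.001):
    """Friction and random forces of the Langevin thermostat as
    parameterised in the text: f_F = -(m/gamma) v, with gamma a damping
    time, and |f_R| ~ sqrt(kT m / (dt gamma))."""
    f_friction = -(m / gamma) * v
    f_random = np.sqrt(kT * m / (dt * gamma)) * rng.standard_normal(v.shape)
    return f_friction, f_random
```

The conservative force comes from the pairwise pseudo-hard-core potential and is added separately by the integrator.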
### II.3 Free energy calculations
Simulations used for measuring changes in the free energy
($\Delta\Omega_{total}$) as a function of the twist ($\langle\psi_{i}\rangle$,
see its definition in next section) were performed in a semi-grand canonical
($\mu_{p}VT$) ensemble with $N_{r}=2-61$ rods. During the simulations, $1000$
GCMC insertion and deletion moves were performed every $1000$ LD steps to
maintain the chemical potential of the polymers ($\mu_{p}$). Simulations were
initialised with $N_{r}$ rods in a single hexagonally packed layer surrounded
by $\sim 2000$ polymer spheres in a box with dimensions $15\times 15\times
15D^{3}$. The values of $\Delta\Omega_{total}$ as a function of
$\langle\psi_{i}\rangle$ were evaluated by means of the umbrella sampling (US)
method Torrie and Valleau (1977). We imposed a harmonic spring biasing
potential given by
$U=0.5k[\langle\psi_{i}\rangle-\langle\psi_{i}\rangle_{0}]^{2}$ on the system
using the Colvars package Fiorin _et al._ (2013). Here, $k$ is the spring
constant, $\langle\psi_{i}\rangle_{0}$ is the desired twist, and
$\langle\psi_{i}\rangle$ is the actual twist in the monolayer. Under the
biasing potential, the monolayer is forced to stay in a pseudo-equilibrium
state with $\langle\psi_{i}\rangle$ fluctuating around
$\langle\psi_{i}\rangle_{0}$. Different twisted states can be described by a
series of values with
$\langle\psi_{i}\rangle_{0}\in(\langle\psi_{i}\rangle_{min},\langle\psi_{i}\rangle_{max})$.
In our simulations, $k=1\,k_{B}T/\mathrm{deg}^{2}$,
$\langle\psi_{i}\rangle_{min}=-24^{\circ}$,
$\langle\psi_{i}\rangle_{max}=24^{\circ}$, and the increment of
$\langle\psi_{i}\rangle_{0}$ was $1^{\circ}$ or $2^{\circ}$. For each given
$\langle\psi_{i}\rangle_{0}$, the system was equilibrated for $2\times 10^{6}$
steps followed by another $2\times 10^{6}$ steps production run in which data
was accumulated every $1000$ LD steps. Finally, the WHAM algorithm Grossfield
was used to calculate the free energy change $\Delta\Omega_{total}$ as a
function of $\langle\psi_{i}\rangle$. For each monolayer, $10$ independent
simulations were carried out to obtain good statistics. Meanwhile, to prevent
the disassembly of small monolayers, an additional spring force was imposed on
each rod to pull it back whenever the distance from the centre of the rod to
the centre of the monolayer exceeded a critical value $r_{c}$, where
$r_{c}=0.75D$ for $N_{r}=2$, $r_{c}=1.5D$ for $N_{r}=7$, $r_{c}=3.0D$ for
$N_{r}=19$, $r_{c}=4.5D$ for $N_{r}=37$ and $r_{c}=6.0D$ for $N_{r}=61$. These
critical values are larger than the equilibrium radii of the respective stable
monolayers.
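The biasing protocol amounts to restraining the collective variable $\langle\psi_{i}\rangle$ with a harmonic potential. A minimal sketch, with the window spacing used in the text (function and variable names are ours):

```python
def umbrella_bias(psi, psi0, k=1.0):
    """Harmonic biasing potential U = 0.5 k (psi - psi0)^2, with k in
    k_B T / deg^2 and angles in degrees, restraining the average twist."""
    return 0.5 * k * (psi - psi0) ** 2

# Umbrella windows spanning the twist range sampled in the text
# (the 2-degree increment is shown; a 1-degree increment was also used).
windows = list(range(-24, 25, 2))
```

Each window produces a biased histogram of $\langle\psi_{i}\rangle$, which WHAM then combines into the unbiased free energy profile.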
### II.4 Excluded volume calculations
During the production stage of US simulations, we sampled configurations every
$5000$ steps and computed the excluded volume (i.e., $\Delta V_{exc}$) for
polymer spheres due to rods and the corresponding contributions from the
volume and the surface area of the monolayer (i.e., $\Delta V_{exc}^{bulk}$
and $\Delta V_{exc}^{surf}$). For a given configuration, all rod spheres that
had at least one polymer neighbour within $1.5D$ from their centre were
classified as surface rod-spheres, and the rest were classified as bulk rod-
spheres. To compute the excluded volumes, the whole system was divided into
many small cubic bins with an edge length of $l=0.5D$. We confirmed that using
a smaller value of $l$ (e.g., $l=0.25D$, see Appendix B) gave similar
results. A bin was occupied by rods if there was at least one rod-sphere whose
centre was less than $1.0D$ (corresponding to the polymer diameter) from the
bin’s centre, and the volume of the bin contributed to $\Delta V_{exc}^{surf}$
if all rod-spheres occupying this bin were surface rod-spheres, otherwise it
contributed to $\Delta V_{exc}^{bulk}$. The final value of the excluded volume
at a given $\langle\psi_{i}\rangle$ was averaged over all configurations
collected at the corresponding $\langle\psi_{i}\rangle_{0}$. For these
calculations, the Freud Python package Ramasubramani _et al._ (2020) was used
to analyse the simulation data.
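The grid-based excluded-volume estimate of this section can be sketched as below (our own brute-force implementation for small systems, not the paper's code; it omits periodic boundaries and the bulk/surface split for brevity):

```python
import numpy as np

def excluded_volume(rod_centres, box, l=0.5, reach=1.0):
    """Grid-based estimate of the volume excluded to polymer centres:
    a cubic bin of edge l counts as excluded if its centre lies within
    `reach` (the polymer diameter, 1.0D here) of any rod-sphere centre."""
    n = np.ceil(np.asarray(box) / l).astype(int)
    axes = [(np.arange(ni) + 0.5) * l for ni in n]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    d2 = ((grid[:, None, :] - np.asarray(rod_centres)[None, :, :]) ** 2).sum(-1)
    excluded = (d2 <= reach**2).any(axis=1)
    return excluded.sum() * l**3
```

For a single rod sphere the estimate approaches the volume of a sphere of radius `reach` as $l$ shrinks, which is a convenient correctness check.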
### II.5 Suppressing perpendicular fluctuations
In the simulations for monolayers without rod fluctuations perpendicular to
the monolayer, the centres of mass of all rods were constrained on a common
plane via harmonic spring forces using a spring constant of
$1000k_{B}T/D^{2}$.
## III Results and discussion
Figure 1: Spontaneous twist in monolayers of rods. (a) An achiral straight rod
of length $L$ and diameter $D$. (b) A chiral rod (left-handed helix) with
contour length $L$, diameter $D$, pitch $p$ and radius $r$. $\theta$
quantifies the inclination angle of the rod Frezza _et al._ (2014). (c-d)
Side-view and top-view of (c) left-handed ($\mathcal{L}$) and (d) right-handed
($\mathcal{R}$) monolayers composed of straight rods with the color indicating
the normalised tilt angle ($\psi_{i}/|\psi_{i}^{max}|$) between the rod axis
$\hat{\textbf{u}}_{i}$ and the nematic director $\hat{\textbf{n}}$.
### III.1 Spontaneous twist in monolayers
We first considered monolayers composed of (achiral) straight rods with length
of $L=10D$ (Fig. 1a). Figure 1c, d show equilibrium configurations obtained
from simulations with $N_{r}=480$ rods surrounded by $N_{p}=40000$ polymer
spheres at the pressure $P=1.2k_{B}T/D^{3}$. The rods are parallel to the
normal axis (i.e., the nematic director $\hat{\textbf{n}}$) at the centre, but
tilt with increasing magnitude around the radial axis away from the centre. In
multiple independent simulations started with an untwisted configuration, the
monolayer shows nearly equal probability to end up showing left-handed
($\mathcal{L}$) or right-handed ($\mathcal{R}$) twist. Such monolayers, with
roughly square edge profiles, are also predicted by the continuum theory and
have been observed in experiments for small colloidal membranes Kang _et al._
(2016a).
To evaluate the degree of twist in the monolayers, we used the average tilt
angle of rods with respect to the nematic director, defined as
$\langle\psi_{i}\rangle=\left\langle\dfrac{\hat{\textbf{r}}_{i}\cdot(\hat{\textbf{n}}\times\hat{\textbf{u}}_{i})}{|\hat{\textbf{r}}_{i}\cdot(\hat{\textbf{n}}\times\hat{\textbf{u}}_{i})|}\cos^{-1}(|\hat{\textbf{n}}\cdot\hat{\textbf{u}}_{i}|)\right\rangle,$
(1)
where $\hat{\textbf{r}}_{i}$ is the unit vector connecting the center-of-mass of
the monolayer and the center-of-mass of rod $i$, $\hat{\textbf{u}}_{i}$ is the
unit vector along the long axis of the rod, and $\langle\dots\rangle$
indicates an average over all rods in the monolayer and all configurations
collected at the equilibrium state. $\langle\psi_{i}\rangle$ is negative for
the $\mathcal{L}$ twist and positive for the $\mathcal{R}$ twist (Fig. 1c, d).
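Eq. (1) can be evaluated directly from rod positions and orientations, for example with the following NumPy sketch (our own implementation, not code from the paper; it assumes unit-vector inputs and ignores the degenerate case where the sign factor vanishes):

```python
import numpy as np

def average_twist(r_hat, u_hat, n_hat):
    """Average signed tilt angle <psi_i> of Eq. (1), in degrees.
    r_hat: (N, 3) unit vectors from the monolayer centre of mass to each rod,
    u_hat: (N, 3) unit vectors along the rod long axes,
    n_hat: (3,)  nematic director (unit vector)."""
    cross = np.cross(n_hat, u_hat)                       # n x u_i
    sign = np.sign(np.einsum("ij,ij->i", r_hat, cross))  # sign of r_i.(n x u_i)
    angle = np.degrees(np.arccos(np.abs(u_hat @ n_hat))) # unsigned tilt angle
    return float(np.mean(sign * angle))
```

A rod tilted by $10^{\circ}$ toward $+\hat{y}$ at position $+\hat{x}$, with $\hat{\textbf{n}}=\hat{z}$, yields $\langle\psi_{i}\rangle=-10^{\circ}$, i.e., a left-handed contribution.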
### III.2 Phase diagram of chirality in monolayers
We then studied monolayers of left-handed hard helices with $L=10$ and varying
$r$ and $p$ (Fig. 1b). A summary of the results obtained from simulations with
$N_{r}=480$ at $P=1.2k_{B}T/D^{3}$ is reported in Fig. 2. In the phase diagram
(Fig. 2a), we can identify the values of $r$ and $p$ that give rise to a
chiral twist, whose handedness with respect to that of the constituent rods is
(i) the same (e.g., $r=0.1$, $p=2$), (ii) the opposite (e.g., $r=0.1$,
$p=12$), or (iii) mixed with either $\mathcal{R}$ or $\mathcal{L}$ (e.g.,
$r=0.1$, $p=22$) (Fig. 2b).
Figure 2: Phase diagram of chirality in monolayers. (a) The phase diagram
shows the chirality of monolayer assemblies of hard rods with $L=10$ and
varying $r$ and $p$ at $P=1.2$. All symbols are colored according to
$\langle\psi_{i}\rangle$ (Eq. 1) of the corresponding monolayer. The
handedness of the monolayer with respect to that of the constituent hard rods
can be the same (blue square), opposite (red square), or mixed with either
$\mathcal{R}$ or $\mathcal{L}$ (bi-colored square). Note that all symbols with
$r=0$ or $p=\infty$ represent the case of straight rods. Lines (i) and (ii)
indicate the approximate phase boundaries for monolayers. Lines (iii-v),
provided for comparison, indicate the phase boundaries between the same and
opposite regions for bulk cholesteric phases. Line (iii) is given by the
critical inclination angle $\theta=45^{\circ}$ Tombolato _et al._ (2006);
Cherstvy (2008), while Lines (iv-v) were obtained using density functional
theory at (iv) low and (v) high volume fractions, respectively Belli _et al._
(2014). (b) Typical snapshots of stable monolayers obtained from simulations
using different left-handed helices.
In Fig. 2a, Line (v) is the phase boundary between same and opposite regimes
for the corresponding cholesteric phases at high volume fractions obtained
using density functional theory Belli _et al._ (2014). We can see that, in
comparison, the corresponding phase boundary for monolayers [i.e., Line (i)]
is shifted toward larger values of $p$ at $r>0.1$. Such shifting is also
observed in cholesteric phases when the packing density of helices increases
[compare Lines (iii)/(iv) to Line (v) in Fig. 2a]. Thus the difference between
Line (i) and Line (v) is likely due to the higher rod packing fraction in the
monolayers (0.6-0.7) compared to in the bulk cholesteric phases (0.35-0.5)
Belli _et al._ (2014).
For most helices, however, their monolayer assemblies and cholesteric phases
have the same handedness, supporting the experimental observation of
consistent chirality between the two for rod-shaped viruses Gibaud _et al._
(2017). For weakly curled helices in the mixed regime, the monolayers can be
either $\mathcal{R}$ or $\mathcal{L}$, while the cholesteric phases may only
exhibit weak opposite handedness Belli _et al._ (2014). As will be elaborated
later, the driving mechanism of this chiral monolayer assembly is very
different from the bulk cholesteric chiral assembly that was originally
predicted by Straley Straley (1976) and recently confirmed by density
functional theory Frezza _et al._ (2014); Belli _et al._ (2014) and
numerical simulations Cinacchi _et al._ (2017).
From the phase diagram, we also can see that the degree of twist (i.e.,
$\langle\psi_{i}\rangle$) is a non-monotonic function of the intrinsic pitch
of the rods (i.e., $p$), which is consistent with the behaviour of bulk
cholesteric phases formed by hard helices Frezza _et al._ (2014); Cinacchi
_et al._ (2017). Starting at $p=\infty$ (i.e., straight rods), the magnitude
of $\langle\psi_{i}\rangle$ increases as $p$ decreases, and reaches a maximum
for moderately curled helices (e.g., $r=0.1$, $p=12$ and $r=0.3$, $p=16$) in
the opposite regime, before decreasing to $0$ for helices at the phase
boundary between same and opposite regimes (e.g., $r=0.1$, $p=4$) (Fig. 2b).
In the same regime, $|\langle\psi_{i}\rangle|$ is small, but our results at
$r=0.1$ show that here again $|\langle\psi_{i}\rangle|$ first increases and
then decreases as $p$ decreases (Fig. 2a).
### III.3 Thermodynamic origins of chiral twist
To study the thermodynamic origins of chiral twist in these monolayer
assemblies, we considered a monolayer of $N_{r}$ rods in a sea of polymer
spheres at fixed volume $V$ and temperature $T$. This system was kept in
osmotic equilibrium with a large reservoir containing the pure polymer
solution at fixed fugacity $z_{p}=\exp(\mu_{p}/k_{B}T)$ where $\mu_{p}$ is the
polymer chemical potential. The grand potential of the system can be written
as $\Omega_{\text{total}}(N_{r},V,T,\mu_{p})=F_{r}-z_{p}(V-V_{exc})k_{B}T$,
where $F_{r}$ is the Helmholtz energy of the rods and $V_{exc}$ is the volume
excluded to the polymers by the hard rods Lekkerkerker and Stroobants (1994);
Bolhuis _et al._ (1997). The second term on the right is the free energy of
the polymers $\Omega_{p}$. $V_{exc}$ can be further divided into a bulk term
and a surface term associated with the volume and surface area of the monolayer,
respectively. Thus, we obtain the change in free energy expressed as
$\displaystyle\Delta\Omega_{total}$ $\displaystyle=\Delta
F_{r}+\Delta\Omega_{p}$ (2) $\displaystyle=\Delta F_{r}+z_{p}k_{B}T\Delta
V_{exc}$ $\displaystyle=\Delta F_{r}+z_{p}k_{B}T(\Delta V_{exc}^{bulk}+\Delta
V_{exc}^{surf}).$
Both $\Delta F_{r}$ and $\Delta V_{exc}$ depend on the twisting state of rods
in the monolayer. We measured $\Delta\Omega_{total}$ as a function of
$\langle\psi_{i}\rangle$ in a semi-grand canonical ($\mu_{p}VT$) ensemble with
fixed $N_{r}$ at $z_{p}=1.2$ (corresponding to $P=1.2k_{B}T/D^{3}$ in the
previous simulations). The twist was constrained using the US approach, while
$\Delta V_{exc}^{bulk}$ and $\Delta V_{exc}^{surf}$ were numerically
calculated.
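Once $\Delta F_{r}$, $\Delta V_{exc}^{bulk}$ and $\Delta V_{exc}^{surf}$ are measured, the decomposition in Eq. 2 is simple bookkeeping; a minimal sketch (function name and example inputs are ours, with $z_{p}=1.2$ and $k_{B}T=1$ as in the text):

```python
def grand_potential_change(dF_r, dV_exc_bulk, dV_exc_surf, z_p=1.2, kT=1.0):
    """Eq. (2): dOmega_total = dF_r + z_p kT (dV_exc_bulk + dV_exc_surf).
    Returns the total change and the polymer contribution dOmega_p."""
    dOmega_p = z_p * kT * (dV_exc_bulk + dV_exc_surf)
    return dF_r + dOmega_p, dOmega_p
```

A negative $\Delta\Omega_{p}$ (net decrease in excluded volume) corresponds to a polymer entropy gain that favours twisting.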
We performed a series of US simulations to calculate the changes in free
energy as a function of the twist for monolayers formed by $N_{r}=2-61$ rods
with varying $p$ (see Appendix C). The average twist of stable monolayers
monotonically increases as the monolayer size increases, which is
qualitatively consistent with the theoretical description for small colloidal
membranes in which the tile angles of rods at the edge have yet to reach the
limiting value of $90^{\circ}$ Kang _et al._ (2016a). More importantly, we
found that simulating tens of rods is sufficient to capture the chiral
behaviour exhibited by the large monolayers shown in Fig. 2, indicating that
the chirality of these monolayers is determined already during the onset of
the self-assembly process.
Figure 3a shows $\Delta\Omega_{total}$ vs. $\langle\psi_{i}\rangle$ for three
typical monolayers made up of $37$ straight rods or left-handed helices. For
the monolayer of straight rods (i.e., $r=0$), the two identical minima at
$\langle\psi_{i}\rangle>0$ and $\langle\psi_{i}\rangle<0$ in the curve of
$\Delta\Omega_{total}$ indicate the stable twist is equally likely to be
$\mathcal{R}$ or $\mathcal{L}$. For the monolayer of helices, only one local
minimum appears in the curve of $\Delta\Omega_{total}$ and is located at
$\langle\psi_{i}\rangle>0$ for the left-handed moderately curled helices
(i.e., $r=0.1$, $p=12$) and at $\langle\psi_{i}\rangle<0$ for the left-handed
highly curled helices (i.e., $r=0.1$, $p=2$), consistent with the behaviour of
the larger monolayers summarised in Fig. 2a.
The decomposition of $\Delta\Omega_{total}$ in Eq. 2 reveals that chiral twist
in these monolayers is stabilised by different driving forces depending on the
rod shape. As shown in Fig. 3a (i), the twist in monolayers of straight rods
is driven by the entropy gain of the polymers with respect to the untwisted
state (i.e., the decrease in $\Delta\Omega_{p}$), but further twisting beyond
the equilibrium state is prevented by the rapidly increasing entropy loss
of the polymers at larger $|\langle\psi_{i}\rangle|$. The rod entropy in this
case shows an almost opposite dependence on $\langle\psi_{i}\rangle$, but the
polymer entropy dominates and stabilises the twist in the monolayer. For the
monolayer of moderately curled helices [Fig. 3a (ii)], the polymer entropy
also dominates and leads to a single stable twist, but now entropy gain from
the rods also contributes. In sharp contrast, for the monolayer of highly
curled helices [Fig. 3a (iii)], the single stable twist is entirely driven by
the rod entropy, competing against the entropy loss of the polymers.
The polymer entropy is related to changes in the volume excluded to polymers
($\Delta V_{exc}$), which is determined by both the volume and the surface
area of the monolayer. Figure 3b shows the change in $\Delta V_{exc}$ and its
volume/surface components when twisting the three monolayers discussed in the
previous paragraph. This reveals that not only the surface area but also the
volume of the monolayers changes significantly during the twisting process.
Especially for monolayers of straight rods [Fig. 3b (i)], the decrease in the
volume acts as the major driving force for twisting.
Figure 3: The thermodynamic origins of chiral twist in monolayers of different
rods. (a) The changes in free energy ($\Delta\Omega_{total}$, $\Delta F_{r}$
and $\Delta\Omega_{p}$) as a function of the twist ($\langle\psi_{i}\rangle$)
for monolayers formed by $N_{r}=37$ (i) straight rods ($r=0.0$, $p=\infty$),
(ii) left-handed moderately curled helices ($r=0.1$, $p=12$), and (iii) left-
handed highly curled helices ($r=0.1$, $p=2$). (b) The corresponding changes
of excluded volume ($\Delta V_{exc}$, $\Delta V_{exc}^{bulk}$ and $\Delta
V_{exc}^{surf}$).
### III.4 Comparison with continuum theory
Having shown that different entropy components can drive rod monolayers to
twist, we now compare our results with the continuum theory developed to
describe such colloidal membranes Kang _et al._ (2016a). The continuum theory
is based on a relatively simple physical picture (see Appendix D): that the
twist is driven mainly by the entropy gained by the polymers when the membrane
surface area is minimised at constant membrane volume. In this model, the
polymer entropy is invariant under chirality inversion and does not contribute
to the preference of the handedness, regardless of the chirality of the rods.
The preferred handedness is instead attributed to an entropy term in the Frank
elastic energy of the rods, whose magnitude depends on the preferred twist
wavenumber that implicitly contains the chiral features of the rods.
In contrast, our simulation results reveal the existence of more complex
thermodynamic behaviour. First, while the polymer entropy often drives
twisting, it can also oppose twisting, with the rod entropy instead driving
twisting in those cases [e.g., Fig. 3a (iii)]. Second, the polymer entropy is
asymmetric under chirality inversion for monolayers of helical particles and
contributes to the preference of the handedness in these cases [e.g., Fig. 3a
(ii)]. This indicates that, at best, the Frank elastic energy in the continuum
theory can depend on polymer concentration. Third, the constant-volume
assumption in the continuum theory clearly breaks down, at least for the small
assemblies considered here, indicating that the variation of the polymer
entropy involves contributions from not only the surface area but also the
volume of the monolayer (Fig. 3b).
Recent theoretical and experimental work Chaturvedi and Kamien (2020); Kamien
and Machon (2020); Miller _et al._ (2020) has also concluded that the volume
change upon twist plays a crucial role in determining the geometry and
stability of colloidal membranes of rod-like particles. The geometric
frustration between double-twist and splay causes the twisted monolayer to
have a hyperbolic edge (i.e., "splay-twist" texture Chaturvedi and Kamien
(2020)), and the splay of rods away from the monolayer midplane leads to a
local volume expansion, which is most significant at the top and the bottom of
the monolayer edge. Based on this geometric argument, using a combination of
experiments and theory in which the variation of rod density is considered,
Miller et al. Miller _et al._ (2020) demonstrated that for the colloidal
rafts in the membranes composed of rigid rods of different lengths, the splay
deformation causes expansion and compression of the inner and outer raft
edges, respectively, and that their competition results in spontaneous twist
even for achiral systems and a non-monotonic dependence of the stable twist on
the raft size. As for our simulated monolayers, which are assembled
from monodisperse rods in non-adsorbing polymers, the diffuse interfacial
region exhibits a clear decline of the rod density away from the midplane,
especially at the edges (see Fig. E.1a in Appendix E), but we did not observe
an obvious hyperboloid-like shape. Only when the perpendicular fluctuations of
the rods are suppressed and their centers are confined at the 2D midplane
(which is the same as the theoretical model in Ref. Miller _et al._ (2020)),
does our monolayer exhibit a hyperboloid-like shape (see Fig. E.1b in Appendix
E). Moreover, in our systems, the total volumes of small monolayers (which are
made up of $37$ rods in our US simulations) are always decreasing during the
initial twisting process (see Fig. E.2 Appendix E), suggesting that the volume
expansion due to the splay deformation does not dominate, at least in these
small monolayers. All these results suggest that the volume change due to
twist is important to the stable texture in various colloidal membranes of
rod-like particles and thus should not be ignored in theoretical models.
Figure 4: The role of perpendicular fluctuations. (a) The changes in free
energy ($\Delta\Omega_{total}$, $\Delta F_{r}$ and $\Delta\Omega_{p}$) as a
function of the twist ($\langle\psi_{i}\rangle$) for monolayers formed by
$N_{r}=37$ (i) straight rods ($r=0.0$, $p=\infty$), (ii) left-handed
moderately curled helices ($r=0.1$, $p=12$), and (iii) left-handed highly
curled helices ($r=0.1$, $p=2$) when the fluctuations perpendicular to the
plane of the monolayer are artificially suppressed. (b) Typical snapshots of
large equilibrated monolayers formed by $N_{r}=480$ rods (corresponding to
those in a), obtained from simulations without rod fluctuations perpendicular
to the monolayer.
### III.5 The role of perpendicular fluctuations
Finally, we considered a special entropy contribution from rods related to
their fluctuations perpendicular to the monolayer. Figure 4a shows
$\Delta\Omega_{total}$ vs. $\langle\psi_{i}\rangle$ for the three example
monolayers when the centres-of-mass of all rods are constrained to the
midplane of the monolayer. The rod fluctuations out of the plane are expected
to produce surface roughness and so increase the volume excluded to the
polymers. We found that suppressing the fluctuations resulted in more entropy
gain for the polymers upon twisting for monolayers of straight rods and
moderately curled helices [see larger changes of $\Delta\Omega_{p}$ in Fig. 4a
(i) and (ii) compared to that in Fig. 3a (i) and (ii)]. This stabilises the
twisted states for these monolayers, and even adds a new metastable twisted
state for the monolayer of moderately curled helices [Fig. 4a (ii)]. In
contrast, for the monolayer of highly curled helices, the polymer entropy
increases dramatically upon twisting when the fluctuations are suppressed,
causing the original weakly-stable twisted state to disappear [Fig. 4a (iii)].
These results are consistent with unconstrained simulations of large
monolayers (Fig. 4b), and clearly show that rod fluctuations perpendicular to
the monolayer have important effects on the stability of the chiral twists
that depend on the shape of the individual rods.
We note that such contributions from rod fluctuations perpendicular to the
monolayer are either ignored in continuum models of colloidal membranes or
only taken into account in a simplistic manner (which is not curliness-
dependent) Kang _et al._ (2016a) (see Appendix D). Our simulation results,
however, indicate that these fluctuations can play a crucial role in the
stability of the chiral twist, and thus may need to be accurately described in
order to predict the stable chiral twist.
## IV Conclusions
In summary, we have used a simple model to characterise spontaneous chiral
twist in monolayers assembled from either achiral or chiral rods in non-
adsorbing polymer solutions, and thus to reveal the thermodynamic driving
forces responsible for the chirality propagation from single particles to
their assemblies. Note that the chiral twist discussed in this work is the
double twist, which is distinct from the cholesteric (single) twist Selinger
(2018).
Depending on the geometry of the constituent rods, their monolayer assemblies
exhibit a broad range of chiral behaviour, including variations in handedness
and twist magnitude. Compared to the constituent rods, the (achiral) straight
rods and weakly curled helices form monolayers with either $\mathcal{R}$ or
$\mathcal{L}$ twist (i.e., the mixed regime), moderately curled helices form
monolayers with the opposite handedness (i.e., the opposite regime), and
highly curled helices form monolayers with the same handedness (i.e., the same
regime). Moreover, the degree of twist in the monolayers is a non-monotonic
function of the intrinsic pitch of the helices, with the most twisted
monolayers forming from moderately curled helices [Fig. 2a].
The thermodynamic forces responsible for spontaneous chiral twist also vary
dramatically between different particle shapes. In the mixed and the opposite
regimes, the twist in monolayers is mainly driven by the polymer entropy [Fig.
3a (i)]. As the rods become more curled, the rod entropy also contributes to
the twist, and only the twisted state with the opposite handedness remains
stable [Fig. 3a (ii)]. For even more curled helices, only one weakly twisted
state with the same handedness is stable, and is entirely driven by the rod
entropy [Fig. 3a (iii)].
Our simulation results also indicate important contributions from the volume
change upon twist and the rod fluctuations perpendicular to the monolayer that
have so far been ignored in continuum theories. Our preliminary results,
obtained from Monte Carlo (MC) simulations, for rods held together by explicit
attraction rather than polymer depletion indicate a similar complexity (see
Appendix F). Overall, we find increasing deviations from current continuum
theory as the attractive forces holding the rods together become weaker,
regardless of whether they are due to direct energetic or indirect entropic
effects (see Appendix F). All these results contribute to our understanding of
chirality transmission across scales when chiral objects assemble into larger
aggregates.
While our simulations were based on a simplified model for rod-like colloids
(i.e., hard spherocylinders and helices), they clearly show that twisted
colloidal membranes can also be formed by helical rods, which could be an
interesting behaviour to investigate in future experiments by using similar
natural and synthetic particles Barry _et al._ (2006); Zhu _et al._ (2014);
Feng _et al._ (2017); Tao _et al._ (2019). Our current work also offers a
helpful reference for understanding the behaviour in more complex systems. It
would be very useful to consider models which are closer to the chiral rods
(e.g. fd-virus and DNA origami rods) used in experiments of colloidal
membranes. For example, using a “straight and helically-decorated” model
Tombolato _et al._ (2006); Wensink and Ferreiro-Córdova (2017) would allow us
to compare the computational and experimental results more directly.
Meanwhile, recent experiments have shown that the shape fluctuation of chiral
rods also dramatically affects their assembled structures Tortora _et al._
(2020), thus it would be interesting to consider the flexibility of rod-like
particles in future work.
###### Acknowledgements.
This work was supported by the Australian Research Council under Grant
CE170100026 and by the MIUR PRIN-COFIN2017 Soft Adaptive Networks grant
2017Z55KCW. Computational resources were provided by the Sydney Informatics
Hub, a Core Research Facility of the University of Sydney.
## References
* Barry _et al._ (2006) E. Barry, Z. Hensel, Z. Dogic, M. Shribak, and R. Oldenbourg, Phys. Rev. Lett. 96, 018305 (2006).
* Tombolato _et al._ (2006) F. Tombolato, A. Ferrarini, and E. Grelet, Phys. Rev. Lett. 96, 258302 (2006).
* Greco and Ferrarini (2015) C. Greco and A. Ferrarini, Phys. Rev. Lett. 115, 147801 (2015).
* Dussi and Dijkstra (2016) S. Dussi and M. Dijkstra, Nat. Commun. 7, 11175 (2016).
* Sharma _et al._ (2020) A. Sharma, J. P. Wojciechowski, Y. Liu, T. Pelras, C. M. Wallace, M. Müllner, A. Widmer-Cooper, P. Thordarson, and G. Lakhwani, Cell Reports Phys. Sci. 1, 100148 (2020).
* Siavashpouri _et al._ (2017) M. Siavashpouri, C. H. Wachauf, M. J. Zakhary, F. Praetorius, H. Dietz, and Z. Dogic, Nat. Mater. 16, 849 (2017).
* Miller _et al._ (2019) J. M. Miller, C. Joshi, P. Sharma, A. Baskaran, A. Baskaran, G. M. Grason, M. F. Hagan, and Z. Dogic, Proc. Natl. Acad. Sci. 116, 15792 (2019).
* Nyström _et al._ (2018a) G. Nyström, M. Arcari, and R. Mezzenga, Nat. Nanotechnol. 13, 330 (2018a).
* Nyström _et al._ (2018b) G. Nyström, M. Arcari, J. Adamcik, I. Usov, and R. Mezzenga, ACS Nano 12, 5141 (2018b).
* Aggeli _et al._ (2001) A. Aggeli, I. A. Nyrkova, M. Bell, R. Harding, L. Carrick, T. C. B. McLeish, A. N. Semenov, and N. Boden, Proc. Natl. Acad. Sci. 98, 11857 (2001).
* Lv _et al._ (2022) J. Lv, X. Gao, B. Han, Y. Zhu, K. Hou, and Z. Tang, Nat. Rev. Chem. 6, 125 (2022).
* Fernández-Rico _et al._ (2020) C. Fernández-Rico, M. Chiappini, T. Yanagishima, H. de Sousa, D. G. A. L. Aarts, M. Dijkstra, and R. P. A. Dullens, Science 369, 950 (2020).
* Harris _et al._ (1999) A. B. Harris, R. D. Kamien, and T. C. Lubensky, Rev. Mod. Phys. 71, 1745 (1999).
* Sharma _et al._ (2009) V. Sharma, M. Crne, J. O. Park, and M. Srinivasarao, Science 325, 449 (2009).
* Morrow _et al._ (2017) S. M. Morrow, A. J. Bissette, and S. P. Fletcher, Nat. Nanotechnol. 12, 410 (2017).
* Wang _et al._ (2017) P.-P. Wang, S.-J. Yu, A. O. Govorov, and M. Ouyang, Nat. Commun. 8, 14312 (2017).
* Yeom _et al._ (2015) J. Yeom, B. Yeom, H. Chan, K. W. Smith, S. Dominguez-Medina, J. H. Bahng, G. Zhao, W. S. Chang, S. J. Chang, A. Chuvilin, D. Melnikau, A. L. Rogach, P. Zhang, S. Link, P. Král, and N. A. Kotov, Nat. Mater. 14, 66 (2015).
* Sun _et al._ (2018) J. Sun, Y. Li, F. Yan, C. Liu, Y. Sang, F. Tian, Q. Feng, P. Duan, L. Zhang, X. Shi, B. Ding, and M. Liu, Nat. Commun. 9, 2599 (2018).
* Chiappini _et al._ (2019) M. Chiappini, T. Drwenski, R. van Roij, and M. Dijkstra, Phys. Rev. Lett. 123, 068001 (2019).
* Hentschel _et al._ (2017) M. Hentschel, M. Schäferling, X. Duan, H. Giessen, and N. Liu, Sci. Adv. 3, 1 (2017).
* Li _et al._ (2019) S. Li, J. Liu, N. S. Ramesar, H. Heinz, L. Xu, C. Xu, and N. A. Kotov, Nat. Commun. 10, 4826 (2019).
* Hao _et al._ (2020) C. Hao, L. Xu, H. Kuang, and C. Xu, Adv. Mater. 32, 1 (2020).
* Glotzer and Solomon (2007) S. C. Glotzer and M. J. Solomon, Nat. Mater. 6, 557 (2007).
* Lee _et al._ (2018) H. E. Lee, H. Y. Ahn, J. Mun, Y. Y. Lee, M. Kim, N. H. Cho, K. Chang, W. S. Kim, J. Rho, and K. T. Nam, Nature 556, 360 (2018).
* González-Rubio _et al._ (2020) G. González-Rubio, J. Mosquera, V. Kumar, A. Pedrazo-Tardajos, P. Llombart, D. M. Solís, I. Lobato, E. G. Noya, A. Guerrero-Martínez, J. M. Taboada, F. Obelleiro, L. G. MacDowell, S. Bals, and L. M. Liz-Marzán, Science 368, 1472 (2020).
* Khanra _et al._ (2022) A. Khanra, L. L. Jia, N. P. Mitchell, A. Balchunas, R. A. Pelcovits, T. R. Powers, Z. Dogic, and P. Sharma, Proc. Natl. Acad. Sci. 119, e2204453119 (2022).
* Asakura and Oosawa (1954) S. Asakura and F. Oosawa, J. Chem. Phys. 22, 1255 (1954).
* Lekkerkerker and Stroobants (1994) H. N. W. Lekkerkerker and A. Stroobants, Nuovo Cim. D 16, 949 (1994).
* Siavashpouri _et al._ (2019) M. Siavashpouri, P. Sharma, J. Fung, M. F. Hagan, and Z. Dogic, Soft Matter 15, 7033 (2019).
* Barry and Dogic (2010) E. Barry and Z. Dogic, Proc. Natl. Acad. Sci. 107, 10348 (2010).
* Gibaud _et al._ (2012) T. Gibaud, E. Barry, M. J. Zakhary, M. Henglin, A. Ward, Y. Yang, C. Berciu, R. Oldenbourg, M. F. Hagan, D. Nicastro, R. B. Meyer, and Z. Dogic, Nature 481, 348 (2012).
* Gibaud _et al._ (2017) T. Gibaud, C. N. Kaplan, P. Sharma, M. J. Zakhary, A. Ward, R. Oldenbourg, R. B. Meyer, R. D. Kamien, T. R. Powers, and Z. Dogic, Proc. Natl. Acad. Sci. 114, E3376 (2017).
* Straley (1976) J. P. Straley, Phys. Rev. A 14, 1835 (1976).
* Selinger (2018) J. V. Selinger, Liq. Cryst. Rev. 6, 129 (2018).
* Kang _et al._ (2016a) L. Kang, T. Gibaud, Z. Dogic, and T. C. Lubensky, Soft Matter 12, 386 (2016a).
* Frezza _et al._ (2013) E. Frezza, A. Ferrarini, H. B. Kolli, A. Giacometti, and G. Cinacchi, J. Chem. Phys. 138, 164906 (2013).
* Liu and Widmer-Cooper (2019) Y. Liu and A. Widmer-Cooper, J. Chem. Phys. 150, 244508 (2019).
* Savenko and Dijkstra (2006) S. V. Savenko and M. Dijkstra, J. Chem. Phys. 124, 234902 (2006).
* Cherstvy (2008) A. G. Cherstvy, J. Phys. Chem. B 112, 12585 (2008).
* Patti and Dijkstra (2009) A. Patti and M. Dijkstra, Phys. Rev. Lett. 102, 128301 (2009).
* Wood _et al._ (2021) J. A. Wood, Y. Liu, and A. Widmer-Cooper, J. Chem. Phys. 154, 244505 (2021).
* Plimpton (1995) S. Plimpton, J. Comput. Phys. 117, 1 (1995).
* Torrie and Valleau (1977) G. Torrie and J. Valleau, J. Comput. Phys. 23, 187 (1977).
* Fiorin _et al._ (2013) G. Fiorin, M. L. Klein, and J. Hénin, Mol. Phys. 111, 3345 (2013).
* (45) A. Grossfield, “WHAM: the weighted histogram analysis method,” Version 2.0.9.1.
* Ramasubramani _et al._ (2020) V. Ramasubramani, B. D. Dice, E. S. Harper, M. P. Spellings, J. A. Anderson, and S. C. Glotzer, Comput. Phys. Commun. 254, 107275 (2020).
* Frezza _et al._ (2014) E. Frezza, A. Ferrarini, H. Bindu Kolli, A. Giacometti, and G. Cinacchi, Phys. Chem. Chem. Phys. 16, 16225 (2014).
* Belli _et al._ (2014) S. Belli, S. Dussi, M. Dijkstra, and R. van Roij, Phys. Rev. E 90, 020503(R) (2014).
* Cinacchi _et al._ (2017) G. Cinacchi, A. Ferrarini, A. Giacometti, and H. B. Kolli, J. Chem. Phys. 147, 224903 (2017).
* Bolhuis _et al._ (1997) P. G. Bolhuis, A. Stroobants, D. Frenkel, and H. N. W. Lekkerkerker, J. Chem. Phys. 107, 1551 (1997).
* Chaturvedi and Kamien (2020) N. Chaturvedi and R. D. Kamien, Proc. R. Soc. A Math. Phys. Eng. Sci. 476, 20190824 (2020).
* Kamien and Machon (2020) R. D. Kamien and T. Machon, Proc. Natl. Acad. Sci. 117, 24102 (2020).
* Miller _et al._ (2020) J. M. Miller, D. Hall, J. Robaszewski, P. Sharma, M. F. Hagan, G. M. Grason, and Z. Dogic, Sci. Adv. 6, eaba2331 (2020).
* Zhu _et al._ (2014) Y. Zhu, J. He, C. Shang, X. Miao, J. Huang, Z. Liu, H. Chen, and Y. Han, J. Am. Chem. Soc. 136, 12746 (2014).
* Feng _et al._ (2017) W. Feng, J.-Y. Kim, X. Wang, H. A. Calcaterra, Z. Qu, L. Meshi, and N. A. Kotov, Sci. Adv. 3, e1601159 (2017).
* Tao _et al._ (2019) X. Tao, H. Li, B. Yu, X. Wu, Y. Lu, Y. Wang, and H. Chen, Nanoscale 11, 19729 (2019).
* Wensink and Ferreiro-Córdova (2017) H. H. Wensink and C. Ferreiro-Córdova, Soft Matter 13, 3885 (2017).
* Tortora _et al._ (2020) M. M. C. Tortora, G. Mishra, D. Prešern, and J. P. K. Doye, Sci. Adv. 6, eaaw8331 (2020).
* Meijer and Frenkel (1994) E. J. Meijer and D. Frenkel, J. Chem. Phys. 100, 6873 (1994).
* Shirts and Chodera (2008) M. R. Shirts and J. D. Chodera, J. Chem. Phys. 129, 124105 (2008), 0801.1426 .
* Kang _et al._ (2016b) L. Kang, T. Gibaud, Z. Dogic, and T. C. Lubensky, Soft Matter 12, 386 (2016b).
## Appendix A Effect of the number of spheres in one rod
For all results reported in the main text, the contour length is $L=10$ and
$21$ spheres are evenly distributed along each rod (i.e. the distance between
consecutive spheres is $0.5D$). We confirmed that this model is smooth enough
to remove side effects associated with grooves between overlapping spheres.
Fig. A.1 shows that using more spheres in each rod yields similar results.
Figure A.1: The total free energy ($\Delta\Omega_{total}$) and its
decomposition ($\Delta F_{r}$ and $\Delta\Omega_{p}$) as a function of the
degree of the twist ($\langle\psi_{i}\rangle$) for a monolayer formed by
$N_{r}=37$ straight rods with (a) $21$ and (b) $41$ spheres in each rod.
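The bead-chain construction used here can be sketched as follows. The arc-length parameterisation and the $r=0$ straight-rod limit below are illustrative assumptions, since the exact parameterisation is not spelled out in the text.

```python
import numpy as np

def helix_beads(L=10.0, r=0.1, p=12.0, n_beads=21):
    """Bead centres spaced evenly in arc length along a helix.

    L: contour length, r: helix radius, p: pitch (rise per turn),
    n_beads: spheres per rod. A straight rod is the r = 0 limit.
    """
    if r == 0.0:
        z = np.linspace(0.0, L, n_beads)
        return np.column_stack([np.zeros_like(z), np.zeros_like(z), z])
    c = p / (2.0 * np.pi)                     # rise per radian of turn
    speed = np.sqrt(r * r + c * c)            # |dx/dt| is constant for a helix
    t = np.linspace(0.0, L / speed, n_beads)  # equal arc-length steps
    return np.column_stack([r * np.cos(t), r * np.sin(t), c * t])

# the chord-length sum approximates the contour length L = 10
beads = helix_beads()
arc = np.linalg.norm(np.diff(beads, axis=0), axis=1).sum()
```

With $21$ beads over $L=10$, consecutive beads are $0.5D$ apart in arc length, matching the spacing quoted above.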
## Appendix B Effect of the bin size to determine the excluded volume
To compute the excluded volumes during the umbrella sampling simulations, the
whole system was divided into many small cubic bins with an edge length of
$l=0.5D$. We confirmed that using a smaller value of $l$ (e.g., $l=0.25D$,
Fig. B.1) gave similar results.
Figure B.1: The change in the total excluded volume ($\Delta V_{exc}$) as a
function of $\langle\psi_{i}\rangle$ and its decomposition into bulk and
surface contributions ($\Delta V_{exc}^{bulk}$ and $\Delta V_{exc}^{surf}$)
for monolayers of $37$ straight rods. In the calculations, the edge length of
the cubic volumetric bins was $0.5D$ in (a) and $0.25D$ in (b).
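A minimal voxel estimate of the excluded volume along these lines might look as follows. The exclusion criterion (a bin counts as excluded when its centre lies within a fixed reach of any bead centre) is an assumption for illustration, not necessarily the exact rule used in the simulations.

```python
import numpy as np

def excluded_volume(bead_xyz, box, l=0.5, reach=0.75):
    """Voxel estimate of the volume excluded to polymer centres.

    bead_xyz: (N, 3) sphere centres of all rods; box: cubic box edge;
    l: bin edge length; reach: exclusion radius around each bead,
    e.g. (D + sigma_p)/2 for rod diameter D and polymer diameter
    sigma_p (this criterion is an assumption of the sketch).
    """
    n = int(round(box / l))
    ax = (np.arange(n) + 0.5) * l              # bin-centre coordinates
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    centres = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    # a bin is excluded if its centre lies within `reach` of any bead
    d2 = ((centres[:, None, :] - bead_xyz[None, :, :]) ** 2).sum(-1)
    excluded = (d2 < reach * reach).any(axis=1)
    return excluded.sum() * l ** 3

# single bead in a small box, evaluated at the two bin sizes of Fig. B.1
bead = np.array([[2.0, 2.0, 2.0]])
v_coarse = excluded_volume(bead, box=4.0, l=0.5)
v_fine = excluded_volume(bead, box=4.0, l=0.25)
```

For a single small bead the estimate is strongly grid-dependent; the convergence check between $l=0.5D$ and $l=0.25D$ reported in Fig. B.1 concerns full monolayers, whose excluded regions span many bins.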
## Appendix C Free energy changes for monolayers composed of different
numbers of rods
Figure C.1: The total free energy change ($\Delta\Omega_{total}$) as a
function of the twist ($\langle\psi_{i}\rangle$) for monolayers formed by
$N_{r}=2-61$ rods with $L=10$, $r=0.1$ and varying $p$.
We performed a series of umbrella sampling simulations to calculate the change
in free energy as a function of the twist for monolayers formed by
$N_{r}=2-61$ rods with $L=10$, $r=0.1$ and varying $p$. The obtained results
shown in Fig. C.1 suggest that when $N_{r}\geq 19$, the free energy minima in
these curves are consistent with the phase behaviours reported in Fig. 2a of
the main text.
Figure C.2: The change in Gibbs free energy ($\Delta G$) as a function of the
twist ($\langle\psi_{i}\rangle$) for monolayers formed by $N_{r}=61$ straight
rods (Blue), $N_{r}=120$ (Orange), $N_{r}=240$ (Green), and $N_{r}=480$ (Red).
Shaded areas indicate one standard error of the mean.
We also examined the free energy of twisting for larger monolayers. For
computational efficiency, this work was performed using Monte Carlo (MC)
simulations of the standard hard AO interaction model Wood _et al._ (2021).
The same rod length and polymer size were used as in the rest of the paper,
with natural twisting observed at a polymer packing fraction of $0.467$ and
pressure of $1.2$. Staged umbrella sampling Meijer and Frenkel (1994) with the
same bias as in the main text was then used to determine the change in Gibbs
free energy as a function of the average tilt angle for several different
monolayer sizes, with the free energy profiles assembled using the MBAR
technique Shirts and Chodera (2008). The results are shown in Fig. C.2 and are
consistent with the results presented in Fig. C.1 (at bottom right) for
$N_{r}=61$.
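For reference, the histogram-reweighting step can be illustrated with plain WHAM (the method of Ref. (45) used for the main-text profiles; this appendix itself uses staged sampling with MBAR). The double-well profile and harmonic bias windows below are synthetic stand-ins, not simulation data.

```python
import numpy as np

def wham(hists, biases, n_iter=2000, tol=1e-10):
    """Recover an unbiased free-energy profile F(psi), in kT, from
    umbrella-sampling histograms via the WHAM equations.

    hists:  (K, B) sample counts per window k and bin b
    biases: (K, B) bias potential U_k(psi_b) in units of kT
    """
    N = hists.sum(axis=1)                  # samples per window
    f = np.zeros(hists.shape[0])           # window free-energy shifts
    for _ in range(n_iter):
        # WHAM estimate of the unbiased probability per bin
        denom = (N[:, None] * np.exp(f[:, None] - biases)).sum(axis=0)
        P = hists.sum(axis=0) / denom
        # self-consistent update of the window shifts f_k
        f_new = -np.log((P[None, :] * np.exp(-biases)).sum(axis=1))
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    F = -np.log(np.where(P > 0, P, np.nan))
    return F - np.nanmin(F)

# synthetic check: sample a known double-well F through harmonic windows
rng = np.random.default_rng(0)
psi = np.linspace(-1.0, 1.0, 81)
F_true = 5.0 * (psi ** 2 - 0.5) ** 2
centers = np.linspace(-1.0, 1.0, 11)
biases = 25.0 * (psi[None, :] - centers[:, None]) ** 2
hists = np.empty((11, 81))
for k in range(11):
    p = np.exp(-(F_true + biases[k]))
    hists[k] = rng.multinomial(20000, p / p.sum())
F_est = wham(hists, biases)
```

With adequate window overlap, the recovered profile matches the input free energy up to an additive constant and statistical noise.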
There are two notable changes as the size of the monolayer increases. First,
the magnitude of the preferred average twist increases with the size of the
cluster, despite the fraction of rods at the edge of the cluster shrinking
with increasing cluster size. This indicates that the local twist at the edge
has yet to reach its limiting value of 90°. Once that occurs, we expect the
average twist to reach a maximum value and then to gradually decrease. Second,
the depth of the free energy minimum, and consequently the free energy barrier
to reversing twist direction, increases with size. This increase is roughly
linear with the number of rods and monolayer area.
## Appendix D Brief review of the continuum theory for colloidal membranes
In the continuum theory Kang _et al._ (2016b), the colloidal membrane is
treated as a continuum medium composed of rods at constant density, and the
membrane has a fixed number of rods and a constant volume. In the membrane, a
rod at $\textbf{x}$ (i.e. a given position in the membrane plane) is tilted by
$\theta(\textbf{x})$ with respect to the normal director. The membrane half-
thickness can be written as
$h(\textbf{x})=t\cos\theta(\textbf{x})+b(\textbf{x})$ where $t$ is the half-
length of the rod and $b(\textbf{x})$ is the height fluctuation amplitude of
the rod. Assuming a circularly-symmetric membrane of radius $R$ and using
cylindrical coordinates, one can have $h(r)$, $b(r)$, and $\theta(r)$ that
only depend on the radial coordinate.
The free energy associated with the twist distortion of the rods is described
by the Frank elastic free energy. Using the one-constant approximation, the
free energy is given by
$F_{\text{Frank}}=2\pi K\int_{0}^{R}drh\left[r(\partial_{r}\theta)^{2}+\sin
2\theta\partial_{r}\theta+\frac{\sin^{2}\theta}{r}-2qr\partial_{r}\theta-q\sin
2\theta\right],$ (3)
where $K$ is the Frank elastic constant and $q$ is the preferred twist
wavenumber associated with the intrinsic chirality of the constituent rods.
The free energy of the polymers is related to the volume excluded to them by
the rods. For polymers small compared to the dimensions of the membrane, this
excluded volume is approximately $V_{0}+aA$, where $V_{0}$ is the volume of
the membrane, $A$ is the surface area of the membrane, and $a$ is the polymer
characteristic radius. $V_{0}$ is assumed to be constant, so the corresponding
free energy is similar to an effective surface tension, which is expressed as
$F_{\text{polymer}}=4\pi
nak_{B}T\left[\int_{0}^{R}drr\sqrt{1+(\partial_{r}h)^{2}}+Rh(R)\right],$ (4)
where $n$ is the polymer concentration.
If the rod fluctuations perpendicular to the membrane plane are ignored,
then $h\approx t\cos\theta$, and the profile of the membrane can be obtained
by minimizing the total free energy over $h(r)$ using a volume-conserving
Lagrange multiplier $\lambda$, i.e.,
$F=F_{\text{Frank}}+F_{\text{polymer}}+\lambda\left[V_{0}-4\pi\int_{0}^{R}drrh\right].$ (5)
In this continuum theory, the key points for the twist are:
* •
The preferred handedness in membranes composed of chiral rods is entirely
determined by the non-zero $q$ through the free energy of the rods. For
example, when $q>0$, twisted membranes with $\partial_{r}\theta>0$ have lower
energy than those with $\partial_{r}\theta<0$.
* •
The polymer entropy drives the twist via minimizing the excluded volume, but
it is invariant under the chirality inversion $\theta\rightarrow-\theta$, and
thus does not contribute to the preferred handedness regardless of the
intrinsic chirality of the rods (i.e. $q$).
* •
The excluded volume only depends on the surface area of the membrane since the
volume of the membrane is assumed to be constant.
The rod fluctuations perpendicular to the membrane plane have complicated,
non-linear effects on the free energy, and so are very difficult to accurately
account for in the theory. The continuum theory in Ref. Kang _et al._ (2016b)
only considered fluctuations of single rods and thus ignored their
interactions and correlated motion. The corresponding free energy was
calculated in the small rod angle and small fluctuation amplitude limit, and
the rods were assumed to be packed hexagonally and to maintain a constant
perpendicular distance $\xi$ between nearest-neighbors. Under these
assumptions, the rod fluctuation free energy is given by
$F_{\text{fluctuation}}=\frac{8\pi^{2}nak_{B}T}{\sqrt{3}\xi^{2}}\int_{0}^{R}drr\cos\theta\left[h-t\cos\theta-(2\pi
na)^{-1/2}\right].$ (6)
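Equation (3) can be evaluated numerically for a trial profile. The sketch below uses a linear twist profile $\theta(r)=\theta_R\,r/R$ with $h=t\cos\theta$ and illustrative constants ($K$, $t$, $R$ are not fitted values); it reproduces the handedness selection discussed above, where for $q>0$ a profile with $\partial_r\theta>0$ has lower Frank energy than its mirror image.

```python
import numpy as np

def frank_energy(theta_R, q, K=1.0, t=1.0, R=5.0, n=2000):
    """Frank elastic energy of Eq. (3) for the trial profile
    theta(r) = theta_R * r / R, with h = t*cos(theta) (fluctuations
    ignored). Trapezoidal quadrature; constants are illustrative.
    """
    r = np.linspace(1e-6, R, n)   # avoid r = 0 in the sin^2(theta)/r term
    theta = theta_R * r / R
    dtheta = theta_R / R          # d(theta)/dr, constant for this profile
    h = t * np.cos(theta)
    f = h * (r * dtheta ** 2
             + np.sin(2 * theta) * dtheta
             + np.sin(theta) ** 2 / r
             - 2 * q * r * dtheta
             - q * np.sin(2 * theta))
    return 2 * np.pi * K * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

# for q > 0 the two chiral terms favour d(theta)/dr > 0 over its mirror
dF = frank_energy(+0.5, q=0.2) - frank_energy(-0.5, q=0.2)
```

For $q=0$ the energy is invariant under $\theta\rightarrow-\theta$, consistent with the first bullet point above.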
## Appendix E Density profiles and volume changes of assembled monolayers
Figure E.1: The snapshots and the density profile for (a) a free monolayer
suspended in nonadsorbing polymers (b) a confined monolayer with centers of
all rods fixed at the 2D midplane. The dashed lines are density contours at
$\rho=0.5$, $1.0$, and $1.5$ $D^{-3}$. The slightly concave contour lines in
(b) reflect the hyperboloid-like shape. All monolayers are made up
of (achiral) straight rods. Figure E.2: The changes of excluded volume for the
two monolayers made up of $37$ straight rods. (a) A free monolayer suspended
in nonadsorbing polymers. (b) A confined monolayer with centers of all rods
fixed at the 2D midplane.
## Appendix F Monolayers held together by explicit attraction
We observed similar twisting behavior in the absence of polymer depletion when
the rods interacted with each other via an explicit attractive potential. In
this model, the rods consisted of $40$ spheres over a length $L=10$ with the
spheres in different rods interacting with each other via a square well
potential of width $w=1$ and depth $1/20^{2}=0.0025\epsilon_{0}$; temperatures
are reported in reduced units, $T=k_{B}T/\epsilon_{0}$. There was no polymer
in the model, with rod-rod attraction coming only from the square-well
interactions. All
work was performed using monolayers of 61 rods. Using the same MC simulation
and umbrella sampling approach described in section C, this time in the $NVT$
ensemble, we calculated the Helmholtz free energy change ($\Delta F$) along
with the change in the mean potential energy ($\Delta U$) as a function of the
average twist. Using this, we also determined the entropy contribution to
$\Delta F$ using $-T\Delta S=\Delta F-\Delta U$.
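The decomposition $-T\Delta S=\Delta F-\Delta U$ is a one-line array operation; the numbers below are illustrative (in $kT$ units), not simulation output.

```python
import numpy as np

# Free-energy decomposition used above: -T*dS = dF - dU.
twist = np.array([0.0, 5.0, 10.0, 15.0])   # average twist (degrees)
dF = np.array([0.0, -1.2, -1.8, -1.1])     # Helmholtz free energy change
dU = np.array([0.0, -2.0, -2.4, -0.6])     # mean potential energy change
minus_TdS = dF - dU                         # entropic contribution
# in this made-up example, energy drives the initial twist (dU < dF < 0)
# while the rod entropy opposes it, as at the lower temperatures in Fig. F.1
```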
Figure F.1: The change in free energy $\Delta F$ (Blue) and its decomposition
into $-T\Delta S$ (Orange) and $\Delta U$ (Green) as a function of twist for
monolayers formed by $N_{r}=61$ square-well rods at temperatures of a)
$T=0.1$, b) $T=0.12$, and c) $T=0.16$. d) Shows $\Delta U$ normalized by the
number of rods for all rods in the monolayer (Green), for rods located at the
edge of the monolayer (Red), and for rods located in the interior (Purple).
Shaded areas indicate one standard error of the mean.
The energy changes obtained as a function of the twist angle at three
different temperatures are shown in Fig. F.1 (a-c). At the lower temperatures
($T=0.1$ and $T=0.12$), the initial twisting is driven entirely by the
potential energy of the rods, with the rod entropy only becoming a significant
driver at higher angles. This is similar to what we observed for straight rods
when the twisting was driven by depletion-attraction [Fig. 3a (i) in the main
text]. In that case, the rod free energy $\Delta F_{r}$, which is analogous to
$-T\Delta S$ in this model, opposes twisting at small angles and only drives
it at larger angles. Similarly, the force holding the monolayer together in
both models ($\Delta\Omega_{p}$ and $\Delta U$) favors small twists but
opposes larger ones.
Breaking down the potential energy into contributions from (i) rods in the
interior of the monolayer and (ii) those at the edge, provides insight into
whether the initial twisting is driven by minimization of the surface energy
of the monolayer. This breakdown is shown for $T=0.12$ in Fig. F.1 (d).
Surprisingly, we find very little difference in the potential energy change
per rod between the different subgroups until past the optimum twist angle,
counter-intuitively indicating that the edge of the monolayer, where the
twisting is the greatest and there are fewer interactions per rod, is not more
energetically advantaged or disadvantaged by the twisting than the rest of the
cluster.
At higher temperature [$T=0.16$, Fig. F.1 (c)], the driving forces change,
with the rod entropy now driving the twist at all angles and the potential
energy always opposing the twist. This indicates that twisting becomes
increasingly driven by the rod entropy as the monolayer density decreases.
Consistent with this, we observe a similar change for depletion-driven
twisting as the polymer fugacity decreases, i.e. with the rod entropy $F_{r}$
increasingly favoring twisting at small twists (e.g., $z_{p}=0.8$) in the
depletion-driven case of straight rods [see Fig. F.2 compared to Fig. 3a in
the main text].
Figure F.2: The change in total free energy ($\Delta\Omega_{total}$) and its
decomposition ($\Delta F_{r}$ and $\Delta\Omega_{p}$) as a function of the
twist ($\langle\psi_{i}\rangle$) for a monolayer formed by $N_{r}=37$ straight
rods at a polymer fugacity of (a) $z_{p}=1.0$ and (b) $z_{p}=0.8$.
# Dynamic and Distributed Optimization for the Allocation of Aerial Swarm
Vehicles
Jason Hughes, Dominic Larkin, Charles O’Donnell, Christopher Korpela The
authors are with the Robotics Research Center, Department of Electrical
Engineering and Computer Science, United States Military Academy, West Point,
NY, 10996 USA. E-mail: {jason.hughes, dominic.larkin, charles.o’donnell,
<EMAIL_ADDRESS>
###### Abstract
Optimal transport (OT) is a framework that can guide the design of efficient
resource allocation strategies in a network of multiple sources and targets.
This paper applies discrete OT to a swarm of UAVs in a novel way to achieve
appropriate task allocation and execution. Drone swarm deployments already
operate in multiple domains where sensors are used to gain knowledge of an
environment [1]. Use cases such as chemical and radiation detection, and
thermal and RGB imaging create a specific need for an algorithm that considers
parameters on both the UAV and waypoint side and allows for updating the
matching scheme as the swarm gains information from the environment.
Additionally, the need for a centralized planner can be removed by using a
distributed algorithm that can dynamically update based on changes in the
swarm network or parameters. To this end, we develop a dynamic and distributed
OT algorithm that matches a UAV to the optimal waypoint based on one parameter
at the UAV and another parameter at the waypoint. We show the convergence and
allocation of the algorithm through a case study and test the algorithm’s
effectiveness against a greedy assignment algorithm in simulation.
###### Index Terms:
Discrete Optimal Transport, Resource Matching, Swarming
## I Introduction
Optimal transport (OT) is a centralized framework that is often leveraged for
resource allocation from a set of source nodes to a set of target nodes [2].
OT has been used for many efficient resource allocation and matching problems
such as allocating raw materials for consumption, matching employees to tasks
in a corporate environment, and allocating limited power units in areas struck
by natural disasters.
While optimal transport is used for many problems, it has yet to be adapted
for mapping members of a UAV swarm to a series of waypoints in a dynamic and
distributed manner. To this end, the main contribution of this paper is a
dynamic and distributed optimal transport algorithm for the efficient mapping
of UAVs to waypoints. We consider this research as an initial development that
will lead to more complex matching problems with a heterogeneous swarm.
Leveraging the discrete optimal transport formulation has many advantages.
First, consider a network where both the UAV and the waypoint each have a
parameter to be accounted for in the optimization algorithm, as shown in Fig.
1. In this network, the distance between the UAV and the waypoint is the agent
side parameter, and the importance of visiting a waypoint is the waypoint side
parameter. With the OT algorithm, these parameters exist at the connection
between the nodes rather than at the nodes. Secondly, a dynamic algorithm is
formed to update the matching scheme when parameters in the network change.
The network parameters only change when a UAV visits a waypoint. This
constraint avoids the possibility of visiting the same waypoint twice and
prevents the algorithm from becoming stuck in a local minimum. The algorithm
also must account for when a UAV has to land for a battery swap. Finally,
consider an algorithm for application to a task where information is gained
only at a waypoint. Examples of these tasks include chemical sensing,
radiation sensing, or surveying an area.
A problem to be considered is that, as the network gains more nodes, the
computational complexity grows exponentially. As illustrated in Fig. 1, there
is a relatively small number of waypoints (40) and four UAVs, but there are
120 connections resulting in 240 parameters. This problem
becomes more profound when the allocation occurs sequentially over time, as it
does for our matching problem. In this case, parameters are added at each
iteration, adding complexity. Since centralized control of the swarm is
undesirable, we developed a distributed algorithm using the Alternating
Direction Method of Multipliers (ADMM). This algorithm allows each UAV to
solve its own optimization problem and then communicate its results with the
other swarm members.
Figure 1: A Network $\mathcal{G}$ with four UAV nodes and forty waypoint
nodes, and a full connection scheme where every UAV is connected to every
waypoint node.
Another consideration is that the network changes as time progresses. This
change could lead to the undesirable behavior of swarm members visiting the
same waypoint more than once. There is also the problem of battery life: at
some point, UAVs may need to drop out of the swarm for a battery swap or due
to a malfunction. Lastly, as a UAV visits a waypoint and takes a sensor reading,
information is gained about the waypoint side of the network. The network
parameters change with this additional information, and a new optimal matching
now exists. These requirements lead to developing a dynamic distributed
algorithm that calculates an optimal solution any time the parameters and
network structure update.
Furthermore, consider the information that needs exchanging at each step. The
UAVs communicate by sending Wi-Fi packets when the algorithm runs on hardware.
This creates a need to keep the amount of data being transferred as small as
possible to support the scalability of the swarm. Once a waypoint is reached,
the proposed algorithm only needs to share three integers with the other
agents in the swarm, thus limiting the amount of data transferred.
The contributions of this paper are summarized as follows. First, a
centralized optimal transport algorithm is leveraged for the UAV to waypoints
matching problem. Second, a distributed algorithm is formed using ADMM, so
there is no need for a centralized control station. Third, a dynamic
distributed algorithm is created that can automatically calculate the optimal
solution to the matching problem when parameters are updated. Lastly, we
corroborate our results with a preliminary study in MATLAB and show the
algorithm’s effectiveness in simulation.
Related Work. Swarming UAVs have been widely studied [3], [4], [5], [6]. The
centralized and distributed optimal transport algorithm that we leverage is
developed in [7]. A dynamic and distributed optimal transport algorithm is
shown in [8], and a secure distributed OT algorithm is developed in [8].
Optimal transport has been used for swarm guidance in [9], [10] but has yet to
be used dynamically. A dynamic OT algorithm is developed in [11] but it is not
distributed. Radioactivity sensing swarms refer to waypoint allocation but do
not use an OT based algorithm [12],[13]. Distributed resource matching and
allocation algorithms are studied extensively in [14], [15], [16] and
specifically optimal transport in [2].
## II Algorithm Formulation
In this section we set up the discrete optimal transport problem with the UAV
to waypoint matching problem.
### II-A Discrete Optimal Transport
Start by considering each UAV as a "target" node represented by
$x\in\mathcal{X}$ and each waypoint as a "source" node represented by
$y\in\mathcal{Y}$, and define $\mathcal{N}=\mathcal{X}\cup\mathcal{Y}$ as
the set of all nodes. Each source node is connected to some or all target
nodes, the set of targets that are connected to source $y$ is denoted by
$\mathcal{X}_{y}$, alternatively the set of source nodes connected to target
$x$ is denoted by $\mathcal{Y}_{x}$. For convenience, we denote the set of
edges connecting target and source nodes as
$\mathcal{E}:=\\{\\{x,y\\}|x\in\mathcal{X}_{y},y\in\mathcal{Y}_{x}\\}$. We
denote the network made up of the nodes and edges as
$\mathcal{G}=\\{\mathcal{N},\mathcal{E}\\}$. With this notation, we use the
optimal transport formulation from [7] that considers utility from both the
target and source sides. We have the following discrete optimal transport
formulation:
$\displaystyle\max_{\Pi}\ \sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}$
$\displaystyle
d_{xy}(\pi_{xy})+\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy})$
(1) $\displaystyle\mathrm{s.t.}$
$\displaystyle\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy}\leq\bar{p}_{x},\
\forall x\in\mathcal{X},$
$\displaystyle\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy}\leq\bar{q}_{y},\
\forall y\in\mathcal{Y},$ $\displaystyle\pi_{xy}\geq 0,\
\forall\\{x,y\\}\in\mathcal{E},$
where $d_{xy}:\mathbb{R}_{+}\rightarrow\mathbb{R}$ and
$s_{xy}:\mathbb{R}_{+}\rightarrow\mathbb{R}$ are utility functions for target
node $x$ and source node $y$, respectively; they capture the chosen utilities
on the UAV (target) and waypoint (source) sides. The upper and lower bounds
are denoted by $\bar{p}_{x}$, $\bar{q}_{y}$ and $\underline{p}_{x}$,
$\underline{q}_{y}$, respectively. These bounds mean that target $x$ will take
in no more than $\bar{p}_{x}$ and no less than $\underline{p}_{x}$; the source
nodes are bounded analogously.
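Under linear utilities of the form used later in Sec. III-A, problem (1) is a linear program. The following is a minimal sketch with SciPy; the specific bounds (each UAV assigned exactly one unit of flow, each waypoint capped at one) and the random parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_uav, n_wp = 3, 10                                       # |X| targets, |Y| sources
gamma = rng.integers(1, 11, (n_uav, n_wp)).astype(float)  # target-side utility coefficients
delta = rng.integers(1, 11, (n_uav, n_wp)).astype(float)  # source-side utility coefficients

# Variables pi_xy flattened row-major; linprog minimizes, so negate the utility.
c = -(gamma + delta).ravel()

# Row sums (per UAV) pinned to 1; column sums (per waypoint) at most 1.
A_ub, b_ub = [], []
for x in range(n_uav):
    row = np.zeros(n_uav * n_wp); row[x * n_wp:(x + 1) * n_wp] = 1.0
    A_ub.append(row);  b_ub.append(1.0)    # sum_y pi_xy <= 1
    A_ub.append(-row); b_ub.append(-1.0)   # sum_y pi_xy >= 1
for y in range(n_wp):
    col = np.zeros(n_uav * n_wp); col[y::n_wp] = 1.0
    A_ub.append(col); b_ub.append(1.0)     # sum_x pi_xy <= 1

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
pi = res.x.reshape(n_uav, n_wp)
print(np.round(pi, 3))   # typically a 0/1 assignment (transportation polytopes have integral vertices)
```

This centralized solve is what the distributed algorithm of Sec. II-B reproduces without a central planner.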
### II-B Distributed Discrete Optimal Transport
To form a distributed algorithm such that each UAV can solve its own problem,
we first make adjustments to the variables. First, the ancillary variables
$\pi_{xy,d}$ and $\pi_{xy,s}$ are introduced, where subscripts $d$ and $s$
indicate the relation to destination or source nodes, respectively. We then
set $\pi_{xy}=\pi_{xy,d}$ and $\pi_{xy,s}=\pi_{xy}$; doing so ensures that the
solutions proposed by the target and source nodes are consistent. Then, the
objective functions are negated to obtain a convex optimization problem under
the following assumption:
###### Assumption 1.
The utility functions $d_{xy}$ and $s_{xy}$ must be concave and monotonically
increasing.
With this assumption and the introduction of ancillary variables, we can
reformulate (1) to the following:
$\displaystyle\min_{\Pi_{t}\in\mathcal{F}_{t},\Pi_{s}\in\mathcal{F}_{s},\Pi}$
$\displaystyle-\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}d_{xy}(\pi_{xy,d})-\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy,s})$
(2) $\displaystyle\mathrm{s.t.}$ $\displaystyle\pi_{xy,d}=\pi_{xy},\
\forall\\{x,y\\}\in\mathcal{E},$ $\displaystyle\pi_{xy,s}=\pi_{xy},\
\forall\\{x,y\\}\in\mathcal{E},$
where $\Pi_{t}:=\\{\pi_{xy,d}\\}_{x\in\mathcal{X}_{y},y\in\mathcal{Y}}$,
$\Pi_{s}:=\\{\pi_{xy,s}\\}_{x\in\mathcal{X},y\in\mathcal{Y}_{x}}$,
$\mathcal{F}_{t}:=\\{\Pi_{t}|\pi_{xy,d}\geq
0,\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy,d}\leq\bar{p}_{x},\
\\{x,y\\}\in\mathcal{E}\\}$, and $\mathcal{F}_{s}:=\\{\Pi_{s}|\pi_{xy,s}\geq
0,\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy,s}\leq\bar{q}_{y},\
\\{x,y\\}\in\mathcal{E}\\}$.
With the convex optimization problem in (2) we can apply the Alternating
Direction Method of Multipliers (ADMM) to form a distributed algorithm [17].
We first form a Lagrangian with $\alpha_{xy,d}$ and $\alpha_{xy,s}$ as
Lagrangian multipliers for the constraints $\pi_{xy,d}=\pi_{xy}$ and
$\pi_{xy}=\pi_{xy,s}$.
$\begin{split}&L\left(\Pi_{t},\Pi_{s},\Pi,\alpha_{xy,d},\alpha_{xy,s}\right)=-\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}d_{xy}(\pi_{xy,d})\\\
&-\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy,s})+\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy,d}(\pi_{xy,d}-\pi_{xy})\\\
&+\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}\alpha_{xy,s}(\pi_{xy}-\pi_{xy,s})+\frac{\eta}{2}\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}(\pi_{xy,d}-\pi_{xy})^{2}\\\
&+\frac{\eta}{2}\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}(\pi_{xy}-\pi_{xy,s})^{2},\end{split}$
(3)
where $\eta>0$ is a positive scalar controlling the convergence rate of the
following algorithm. Note that in (3), the last two terms
$\frac{\eta}{2}\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}_{x}}(\pi_{xy,d}-\pi_{xy})^{2}$
and
$\frac{\eta}{2}\sum_{y\in\mathcal{Y}}\sum_{x\in\mathcal{X}_{y}}(\pi_{xy}-\pi_{xy,s})^{2}$,
acting as penalization, are quadratic. Hence, the Lagrangian function $L$ is
strictly convex, ensuring the existence of a unique optimal solution.
Next, we apply ADMM to the minimization problem in (2) with the Lagrangian to
form the distributed algorithm:
###### Proposition 1.
The simplified iterative steps of applying ADMM to problem (2) are summarized
as follows (the simplification follows [7]):
$\begin{split}\Pi_{x,d}(k+1)&\in\arg\min_{\Pi_{x,d}\in\mathcal{F}_{x,d}}-\sum_{y\in\mathcal{Y}_{x}}d_{xy}(\pi_{xy,d})\\\
&+\sum_{y\in\mathcal{Y}_{x}}\alpha_{xy}(k)\pi_{xy,d}+\frac{\eta}{2}\sum_{y\in\mathcal{Y}_{x}}\left(\pi_{xy,d}-\pi_{xy}(k)\right)^{2},\end{split}$
(4)
$\begin{split}\Pi_{y,s}(k+&1)\in\arg\min_{\Pi_{y,s}\in\mathcal{F}_{y,s}}-\sum_{x\in\mathcal{X}_{y}}s_{xy}(\pi_{xy,s})\\\
&-\sum_{x\in\mathcal{X}_{y}}\alpha_{xy}(k)\pi_{xy,s}+\frac{\eta}{2}\sum_{x\in\mathcal{X}_{y}}\left(\pi_{xy}(k)-\pi_{xy,s}\right)^{2},\end{split}$
(5)
$\begin{split}\pi_{xy}(k+1)=\frac{1}{2}\left(\pi_{xy,d}(k+1)+\pi_{xy,s}(k+1)\right),\end{split}$
(6)
$\begin{split}\alpha_{xy}(k+1)=\alpha_{xy}(k)+\frac{\eta}{2}\left(\pi_{xy,d}(k+1)-\pi_{xy,s}(k+1)\right),\end{split}$
(7)
where $\Pi_{\tilde{x},d}:=\\{\pi_{xy,d}\\}_{y\in\mathcal{Y}_{x},x=\tilde{x}}$
represents the proposed solution at target node $\tilde{x}\in\mathcal{X}$, and
$\Pi_{\tilde{y},s}:=\\{\pi_{xy,s}\\}_{x\in\mathcal{X}_{y},y=\tilde{y}}$
represents the proposed solution at source node $\tilde{y}\in\mathcal{Y}$. In
addition, $\mathcal{F}_{x,d}:=\\{\Pi_{x,d}|\pi_{xy,d}\geq
0,y\in\mathcal{Y}_{x},\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy,d}\leq\bar{p}_{x}\\}$,
and $\mathcal{F}_{y,s}:=\\{\Pi_{y,s}|\pi_{xy,s}\geq
0,x\in\mathcal{X}_{y},\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy,s}\leq\bar{q}_{y}\\}$.
For convenience, we provide these steps in the form of an algorithm:
Algorithm 1 Distributed OT Algorithm
1:while $\Pi_{x,d}$ and $\Pi_{y,s}$ not converging do
2: Compute $\Pi_{x,d}(k+1)$ using (4), for all $x\in\mathcal{X}$
3: Compute $\Pi_{y,s}(k+1)$ using (5), for all $y\in\mathcal{Y}$
4: Compute $\pi_{xy}(k+1)$ using (6), for all $\\{x,y\\}\in\mathcal{E}$
5: Compute $\alpha_{xy}(k+1)$ using (7), for all $\\{x,y\\}\in\mathcal{E}$
6:end while
7:return $\pi_{xy}(k+1)$, for all $\\{x,y\\}\in\mathcal{E}$
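Algorithm 1 can be sketched compactly for the linear utilities and unit bounds adopted later in Sec. III-A (each UAV's row must sum to 1, each waypoint's column to at most 1): the subproblems (4) and (5) then reduce to Euclidean projections of a gradient-shifted point onto a simplex or a capped simplex. This is an illustrative implementation under those assumptions, not the authors' code; parameter values are placeholders.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto {x >= 0, sum(x) = 1} (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def proj_cap(v):
    """Euclidean projection onto {x >= 0, sum(x) <= 1}."""
    w = np.maximum(v, 0.0)
    return w if w.sum() <= 1.0 else proj_simplex(v)

def admm_ot(gamma, delta, eta=10.0, iters=1000):
    nx, ny = gamma.shape
    pi = np.zeros((nx, ny)); alpha = np.zeros((nx, ny))
    for _ in range(iters):
        # (4): each UAV x minimizes -gamma.pi_d + alpha.pi_d + eta/2 (pi_d - pi)^2,
        # i.e. projects pi + (gamma - alpha)/eta onto its row simplex.
        pi_d = np.array([proj_simplex(pi[x] + (gamma[x] - alpha[x]) / eta)
                         for x in range(nx)])
        # (5): each waypoint y minimizes -delta.pi_s - alpha.pi_s + eta/2 (pi - pi_s)^2,
        # i.e. projects pi + (delta + alpha)/eta onto its capped column set.
        pi_s = np.array([proj_cap(pi[:, y] + (delta[:, y] + alpha[:, y]) / eta)
                         for y in range(ny)]).T
        pi = 0.5 * (pi_d + pi_s)                        # (6)
        alpha = alpha + 0.5 * eta * (pi_d - pi_s)       # (7)
    return pi, pi_d, pi_s

rng = np.random.default_rng(1)
gamma = rng.integers(1, 11, (3, 10)).astype(float)
delta = rng.integers(1, 11, (3, 10)).astype(float)
pi, pi_d, pi_s = admm_ot(gamma, delta)
print(np.round(pi, 2))
```

In a deployed swarm, the step for (4) would run on each UAV while the steps for (5)-(7) would be computed from the shared iterates, which is what makes the scheme distributed.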
## III Dynamic Distributed Optimal Transport
In this section, we adapt the above distributed OT framework for applications
to the UAV waypoint matching problem. For this application, consider the UAVs
as the source and the set of waypoints as targets where each waypoint is a
target node.
### III-A Parameter Adjustments
To fit Algorithm 1 to the matching problem the constraints must be defined
more rigidly. First, consider that each waypoint should only have one UAV at
it at any given time, thus the constraint
$\underline{q}_{y}\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy,s}\leq\bar{q}_{y}$
becomes $0\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy,s}\leq 1$. On the source side,
every UAV must go to a waypoint, so
$\underline{p}_{x}\leq\sum_{y\in\mathcal{Y}_{x}}\pi_{xy,d}\leq\bar{p}_{x}$
becomes $\sum_{y\in\mathcal{Y}_{x}}\pi_{xy,d}=1$.
We also consider the network under linear parameters, making
$d_{xy}(\pi_{xy,d})=\gamma_{xy}\pi_{xy,d}$ and
$s_{xy}(\pi_{xy,s})=\delta_{xy}\pi_{xy,s}$. These parameters satisfy
Assumption 1 and are more applicable to the parameters we see in a UAV to
waypoint matching problem.
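The paper does not specify how the linear coefficients are derived from the raw quantities (distance, importance); one hypothetical construction that rewards nearness while keeping $\gamma_{xy}>0$, so that $d_{xy}(\pi)=\gamma_{xy}\pi$ is linear, concave, and monotonically increasing as required by Assumption 1, could look as follows. The positions, scaling, and uniform initial importance are all illustrative assumptions.

```python
import numpy as np

# Hypothetical mapping from geometry to the linear coefficients of Sec. III-A.
uav_pos = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
wp_pos = np.array([[5.0, 5.0], [20.0, 0.0], [0.0, 25.0], [15.0, 15.0]])

# Pairwise UAV-to-waypoint distances, shape (n_uav, n_wp).
dist = np.linalg.norm(uav_pos[:, None, :] - wp_pos[None, :, :], axis=-1)
gamma = dist.max() - dist + 1.0          # nearer waypoint -> larger positive coefficient
delta = np.full_like(gamma, 5.0)         # uniform "importance" before any sensor readings
print(np.round(gamma, 1))
```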
### III-B Dynamic Optimal Transport
The network structure and parameters need to be updated often; for example,
every time a UAV reaches a waypoint, the network parameters must be updated so
that the waypoint is not visited again by any agent in the swarm.
Also, the parameters of the nodes need to be updated based on the information
gained at the waypoint. Other events like a UAV needing to land for a battery
replacement can also affect the network. We develop a dynamic algorithm by
adding in a time step to account for this. With the dynamic algorithm, the UAV
can constantly be iterating, and when parameters are updated, it will converge
to a new optimal solution without having to start a new set of iterations.
Once the algorithm has converged, the matching scheme can be captured, and the
UAV will have its next waypoint. We introduce a new set
$t\in\mathcal{T}=\\{1,2,...,T\\}$ that indicates a time $t$. From the
modifications in subsections III-A and III-B, we obtain the following
iterative steps:
$\begin{split}\Pi_{x,d}(k+1)&\in\arg\min_{\Pi_{x,d}\in\mathcal{F}_{x,d}}-\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_{x}}\gamma_{xy}\pi_{xy,d}^{t}+\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_{x}}\\\
&\alpha_{xy}(k)\pi_{xy,d}^{t}+\frac{\eta}{2}\sum_{t\in\mathcal{T}}\sum_{y\in\mathcal{Y}_{x}}\left(\pi_{xy,d}^{t}-\pi_{xy}^{t}(k)\right)^{2},\end{split}$
(8)
$\begin{split}\Pi_{y,s}(k+&1)\in\arg\min_{\Pi_{y,s}\in\mathcal{F}_{y,s}}-\sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_{y}}\delta_{xy}\pi_{xy,s}^{t}-\sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_{y}}\\\
&\alpha_{xy}(k)\pi_{xy,s}^{t}+\frac{\eta}{2}\sum_{t\in\mathcal{T}}\sum_{x\in\mathcal{X}_{y}}\left(\pi_{xy}^{t}(k)-\pi_{xy,s}^{t}\right)^{2}\end{split}$
(9)
$\begin{split}\pi_{xy}^{t}(k+1)=\frac{1}{2}\left(\pi_{xy,d}^{t}(k+1)+\pi_{xy,s}^{t}(k+1)\right),\end{split}$
(10)
$\begin{split}\alpha_{xy}(k+1)=\alpha_{xy}(k)+\frac{\eta}{2}\left(\pi_{xy,d}^{t}(k+1)-\pi_{xy,s}^{t}(k+1)\right),\end{split}$
(11)
where
$\Pi_{\tilde{x},d}:=\\{\pi_{xy,d}\\}_{y\in\mathcal{Y}_{x},x=\tilde{x},t\in\mathcal{T}}$
represents the proposed solution at target node $\tilde{x}\in\mathcal{X}$, and
$\Pi_{\tilde{y},s}:=\\{\pi_{xy,s}\\}_{x\in\mathcal{X}_{y},y=\tilde{y},t\in\mathcal{T}}$
represents the proposed solution at source node $\tilde{y}\in\mathcal{Y}$. In
addition, $\mathcal{F}_{x,d}:=\\{\Pi_{x,d}|\pi_{xy,d}\geq
0,y\in\mathcal{Y}_{x},\sum_{y\in\mathcal{Y}_{x}}\pi_{xy,d}^{t}=1\\}$, and
$\mathcal{F}_{y,s}:=\\{\Pi_{y,s}|\pi_{xy,s}\geq
0,x\in\mathcal{X}_{y},0\leq\sum_{x\in\mathcal{X}_{y}}\pi_{xy,s}\leq 1\\}$.
Figure 2: The distributed algorithm converges to the same solution as the
centralized algorithm, as shown by the aggregated utility parameters in the
network.
Once the iterative steps have converged, the UAV will communicate which
waypoint it is heading to. When it has reached its calculated waypoint, it can
take a reading from its sensors or perform a required task and communicate the
information from that point to the other UAVs. The other agents can then
update the parameters $\delta_{xy}$ connecting to the waypoint node. Once the
update is made, the iterative steps of the algorithm will begin to converge
again. We also note that the algorithm never stops iterating; this ensures
that a new optimal solution is found even if something out of the ordinary
occurs, such as a UAV dropping out. The affected UAV can communicate this, and
the other UAVs can update their parameters and converge to a new solution
rather than waiting until they arrive at their waypoints to discover that
another UAV has dropped out. Also, the agents will most likely reach their
waypoints asynchronously. With a dynamic algorithm, the UAVs will not need to
wait for the other agents to reach their destinations to calculate their next
waypoint. Instead, they can calculate the optimal waypoint for themselves at
that time with the given information. For convenience, this is summarized in
Algorithm 2.
Algorithm 2 Dynamic Distributed OT Algorithm for UAVs
1:while UAV is in the air do
2: Compute $\Pi_{x,d}(k+1)$ using (8), for all $x\in\mathcal{X}$
3: Compute $\Pi_{y,s}(k+1)$ using (9), for all $y\in\mathcal{Y}$
4: Compute $\pi_{xy}(k+1)$ using (10), for all $\\{x,y\\}\in\mathcal{E}$
5: Compute $\alpha_{xy}(k+1)$ using (11), for all $\\{x,y\\}\in\mathcal{E}$
6: if Convergence is reached then
7: Interpret waypoint from $\pi_{xy}(k)$
8: Go to waypoint
9: Communicate next waypoint to swarm
10: end if
11: if At waypoint then
12: Update $\gamma_{xy}$ and $\delta_{xy}$
13: Update network structure
14: Communicate information at waypoint
15: end if
16:end while
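The network update in lines 12-13 of Algorithm 2 can be sketched as dropping the visited waypoint's column from the parameter matrices and warm-starting the iteration from the current $(\pi,\alpha)$. The bookkeeping below is a hypothetical illustration (the paper does not prescribe this exact mechanism); `new_importance` stands in for the $\delta_{xy}$ update after a sensor reading.

```python
import numpy as np

def update_network(gamma, delta, alpha, pi, visited_wp, new_importance=None):
    """Remove a visited waypoint from the network and warm-start the ADMM iterates.

    Dropping the column (rather than zeroing it) guarantees no waypoint is
    visited twice; the remaining duals and iterates carry over so the
    iteration resumes near the previous solution.
    """
    keep = np.ones(gamma.shape[1], dtype=bool)
    keep[visited_wp] = False
    gamma, delta = gamma[:, keep], delta[:, keep]
    if new_importance is not None:          # sensor reading rescales waypoint importance
        delta = delta * new_importance[keep]
    return gamma, delta, alpha[:, keep], pi[:, keep]

gamma = np.arange(1.0, 13.0).reshape(3, 4)
delta = np.ones((3, 4))
alpha = np.zeros((3, 4)); pi = np.zeros((3, 4))
gamma, delta, alpha, pi = update_network(gamma, delta, alpha, pi, visited_wp=2)
print(gamma.shape)   # (3, 3)
```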
## IV Preliminary Case Study
In this section, we corroborate our algorithm by showing its convergence and
how it considers the parameters of the UAVs and waypoints. For example,
consider a swarm with three UAVs and ten waypoints and randomly generated
linear parameters between 1 and 10 for both $\gamma_{xy}$ and $\delta_{xy}$:
$\displaystyle\delta_{xy}=\begin{bmatrix}5&4&2&6&3&7&2&10&9&1\\\
8&2&4&5&9&5&2&4&9&2\\\ 1&1&4&7&1&6&9&7&1&9\end{bmatrix}$
$\displaystyle\gamma_{xy}=\begin{bmatrix}5&9&5&2&4&9&2&5&7&9\\\
7&1&6&9&7&1&9&10&4&1\\\ 3&7&2&10&9&1&1&6&7&8\end{bmatrix}$
We first show that the distributed algorithm converges to the solution of the
centralized algorithm. The convergence factor is set relatively high,
$\eta=10$; this allows the algorithm to converge more quickly, saving time by
reducing the number of iterations. The convergence of the distributed
algorithm in Alg. 1 to the centralized algorithm in (2) is shown in Fig. 2.
Once the algorithm has converged, it outputs a 3-by-10 matrix of zeros and
ones. The indices of the ones in the matrix indicate which UAV should go to
which waypoint. In this case, UAV 1 will be sent to waypoint 8, UAV 2 will be
sent to waypoint 5, and UAV 3 will be sent to waypoint 10.
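Reading the assignment off the converged matrix amounts to taking the row-wise argmax (the indices of the ones). A minimal sketch with a constructed one-hot matrix (not the actual converged values from the study):

```python
import numpy as np

# Illustrative converged matrix (3 UAVs x 10 waypoints).
pi = np.zeros((3, 10))
pi[0, 7] = pi[1, 4] = pi[2, 9] = 1.0

assignment = pi.argmax(axis=1) + 1   # 1-indexed waypoint per UAV
print(assignment)                    # [ 8  5 10]
```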
We can corroborate the output of the algorithm by examining the aggregated
utility of the parameters $d_{xy}$ and $s_{xy}$.
$\delta_{xy}+\gamma_{xy}=\begin{bmatrix}6&10&9&14&6&12&5&17&14&3\\\
13&9&13&15&17&15&4&7&10&8\\\ 11&5&5&15&3&9&10&10&7&16\end{bmatrix}$
As this example is relatively simple, we can easily see that the algorithm
sends each UAV to the waypoint with the highest utility, as we expect it to.
### IV-A Dynamic Algorithm
Next, we verify the results of the online algorithm. In this study, the
network is updated to exclude the waypoints that have already been visited;
this ensures that no waypoint is visited twice by any of the UAVs. The
parameters $\gamma_{xy}$ and $\delta_{xy}$ are also updated to capture the new
distances and the importance of the surrounding waypoints; for this study, the
updated parameters are randomly generated. These updates to the network are
made after every 250 iterations, but updates to the parameters or network
structure can be made at any iteration. The algorithm converges to the new
optimal solution when the parameters are updated, as shown in Fig. 3.
Figure 3: The online algorithm converges to the new optimal solution even when
parameters are adjusted while iterating.
The online algorithm is advantageous for this application because the network
will update often. Every time a UAV reaches a waypoint, it will communicate
sensor readings and location to the other agents, which requires the network
to update so that the other UAVs can calculate their optimal waypoints. As
shown in Fig. 3, the algorithm converges in about 50 iterations, so updates
can be made often and a new optimal solution can be computed with relative
ease.
## V Simulation
(a) Waypoint Map
(b) Greedy Allocation
(c) Optimal Transport Algorithm
Figure 4: (a) shows the locations of the twelve waypoints, where blue
indicates a waypoint with no chemical present and orange indicates a waypoint
with chemical; (b) shows the paths of the UAVs under the greedy algorithm;
(c) shows the paths of the UAVs using Alg. 2.
In this section we compare the algorithm outlined in Alg. 2 to a greedy
assignment algorithm in simulation.
chemical sensor that will take a reading at multiple waypoints to find where
the chemical is strongest. We assume that the UAVs are altitude deconflicted
in this exercise.
### V-A Setup
Consider an environment with three UAVs and twelve waypoints to be scanned,
three of which are considered to have a chemical present. The actual chemical
sensor records a numerical reading; in simulation, for simplicity, we report a
binary reading indicating whether or not the chemical is present.
The waypoints correspond to an area with a field and a few small buildings and
the area around the buildings is the area where the chemical is considered
present, as shown in Fig. 4(a).
We compare our algorithm to a greedy assignment algorithm. The greedy
algorithm assigns four waypoints to each UAV before they start, based solely
on distance: the closest waypoint not already taken by another UAV is assigned
to that agent. While this algorithm does consider distance, it does not do so
at run time, so a UAV may be assigned the closest waypoint without
consideration of the other UAVs. The algorithm is greedy because the agent
that comes online first chooses the waypoints that are best for itself without
considering the other UAVs; thus a waypoint may be assigned to one UAV even if
it is far closer to another UAV.
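The greedy baseline can be sketched as each UAV, in a fixed order, claiming its nearest unclaimed waypoints up front. The positions below are randomly generated placeholders, not the simulation's actual map:

```python
import numpy as np

def greedy_assign(uav_pos, wp_pos, per_uav):
    """Each UAV, in fixed order, takes its `per_uav` nearest unclaimed waypoints."""
    taken = set()
    routes = []
    for u in uav_pos:
        d = np.linalg.norm(wp_pos - u, axis=1)           # distances to all waypoints
        order = [i for i in np.argsort(d) if i not in taken][:per_uav]
        taken.update(order)
        routes.append(order)
    return routes

rng = np.random.default_rng(2)
uav_pos = rng.uniform(0, 100, (3, 2))
wp_pos = rng.uniform(0, 100, (12, 2))
routes = greedy_assign(uav_pos, wp_pos, per_uav=4)
print(routes)
```

Because the first UAV claims its four nearest points regardless of the others, a waypoint can end up assigned to a UAV that is much farther from it than a later-ordered UAV, which is exactly the weakness the OT allocation avoids.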
For the optimal transport algorithm in Alg. 2, the UAVs need to be assigned
their first waypoint since they all take off at the same location. The first
waypoint is assigned such that UAV one goes to waypoint one, UAV two goes to
waypoint two, and so on. We do this since the distances to the waypoints will
be the same and there is nothing known about the chemical sensor readings at
the waypoints.
### V-B Results
The allocation schemes of the greedy and the dynamic distributed optimal
transport algorithms are shown in Fig. 4(b) and Fig. 4(c), respectively. Table
I shows the distances each UAV traveled between waypoints in meters.
UAV | Greedy | OT
---|---|---
1 | 255m | 108m
2 | 122m | 59m
3 | 49m | 179m
Total | 426m | 346m
TABLE I: Distances traveled
The greedy algorithm has a total distance traveled of 426 meters, while the
optimal transport algorithm has a total distance traveled of 346 meters. The
smaller travel distance is essential for preserving battery life. We also
highlight that the distances are more varied under the greedy algorithm than
under the optimal transport algorithm: the greedy algorithm has UAV one
traveling over double the distance of the other UAVs, using more of its
battery life to scan the same number of waypoints as the other agents. The
smaller total distance of the OT algorithm also means that the scan of the
waypoints will be completed more quickly, saving time. We note that the
algorithm does take time to converge in practice: in simulation it takes
around two seconds, and this could be longer on the constrained hardware of a
UAV. These metrics show the effectiveness of the algorithm, which provides a
better way to allocate the waypoints to each agent in a swarm.
## VI Conclusion & Future Work
In this paper we developed a dynamic and distributed algorithm for the
efficient matching of UAV swarm agents to waypoints. The algorithm is capable
of considering a parameter on both the agent and waypoint sides and can
converge to a new optimal matching scheme when parameters in the network
change, such as when a UAV must land for a new battery or when parameters are
updated based on sensor readings. The simulation demonstrated the
effectiveness of the developed algorithm, which found a more efficient way of
navigating the waypoints than a greedy assignment algorithm by cutting down
the total distance the UAVs needed to travel, thus saving battery life and
shortening mission time.
Future work includes developing a similar algorithm for a more complex use
case, such as a heterogeneous swarm of robots. We also plan to reduce the time
to convergence, making the algorithm better suited for actual hardware. We
also intend to extend the security measures of the swarm, in particular
investigating communication within the swarm using an MPU 5 radio or
equivalent, extending or combining the simulation with other swarm missions,
and building effective yet flexible templates that allow mission commanders to
supply their specific parameters.
## VII Acknowledgements
This research was sponsored by the Army Research Laboratory and was
accomplished under Cooperative Agreement Number W911NF-21-2-0281. The views
and conclusions contained in this document are those of the authors and should
not be interpreted as representing the official policies, either expressed or
implied, of the Army Research Laboratory or the U.S. Government. The U.S.
Government is authorized to reproduce and distribute reprints for Government
purposes notwithstanding any copyright notation herein.
## References
* [1] A. Sanders, “Drone swarms,” _Master's thesis_, 2017.
* [2] A. Galichon, _Optimal Transport Methods in Economics_. Princeton University Press, 2018.
* [3] R. Arnold, K. Carey, B. Abruzzo, and C. Korpela, “What is a robot swarm: a definition for swarming robotics,” in _2019 IEEE 10th Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON)_. IEEE, 2019, pp. 0074–0081.
* [4] R. Arnold, E. Mezzacappa, M. Jablonski, J. Jablonski, and B. Abruzzo, “Performance comparison of decentralized undirected swarms versus centralized directed swarms at different levels of quality of knowledge,” in _2021 IEEE International Symposium on Technologies for Homeland Security (HST)_. IEEE, 2021, pp. 1–9.
* [5] T. Brick, M. Lanham, A. Kopeikin, C. Korpela, and R. Morales, “Zero to swarm: Integrating suas swarming into a multi-disciplinary engineering program,” in _2018 International Conference on Unmanned Aircraft Systems (ICUAS)_ , 2018, pp. 308–314.
* [6] E. Şahin, “Swarm robotics: From sources of inspiration to domains of application,” in _Swarm Robotics_ , E. Şahin and W. M. Spears, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 10–20.
* [7] R. Zhang and Q. Zhu, “Consensus-based distributed discrete optimal transport for decentralized resource matching,” _IEEE Transactions on Signal and Information Processing over Networks_ , vol. 5, no. 3, pp. 511–524, 2019.
* [8] J. Hughes and J. Chen, “Fair and distributed dynamic optimal transport for resource allocation over networks,” in _55th Annual Conference on Information Sciences and Systems (CISS)_ , 2021.
* [9] B. Açıkmeşe and D. S. Bayard, “Probabilistic swarm guidance for collaborative autonomous agents,” in _2014 American Control Conference_ , 2014, pp. 477–482.
* [10] V. Krishnan and S. Martínez, “Distributed optimal transport for the deployment of swarms,” in _2018 IEEE Conference on Decision and Control (CDC)_ , 2018, pp. 4583–4588.
* [11] Q. Wang and X. Mao, “Dynamic task allocation method of swarm robots based on optimal mass transport theory,” _Symmetry_ , vol. 12, no. 10, 2020. [Online]. Available: https://www.mdpi.com/2073-8994/12/10/1682
* [12] B. Savidge, A. Kopeikin, R. Arnold, and D. Larkin, “Uas swarm shares survey data to expedite coordinated mapping of radiation hotspots,” in _2019 IEEE International Symposium on Technologies for Homeland Security (HST)_ , 2019, pp. 1–7.
* [13] A. Kopeikin, S. Heider, D. Larkin, C. Korpela, R. Morales, and J. E. Bluman, _Unmanned Aircraft System Swarm for Radiological and Imagery Data Collection_. [Online]. Available: https://arc.aiaa.org/doi/abs/10.2514/6.2019-2286
* [14] M. Ghorbanzadeh, A. Abdelhadi, and C. Clancy, _Distributed Resource Allocation_. Cham: Springer International Publishing, 2017, pp. 61–91. [Online]. Available: https://doi.org/10.1007/978-3-319-46267-7_4
* [15] D. Niu and B. Li, “An efficient distributed algorithm for resource allocation in large-scale coupled systems,” in _2013 Proceedings IEEE INFOCOM_ , 2013, pp. 1501–1509.
* [16] D. A. Schmidt, C. Shi, R. A. Berry, M. L. Honig, and W. Utschick, “Distributed resource allocation schemes,” _IEEE Signal Processing Magazine_ , vol. 26, no. 5, pp. 53–63, 2009.
* [17] S. Boyd, N. Parikh, and E. Chu, _Distributed optimization and statistical learning via the alternating direction method of multipliers_. Now Publishers Inc, 2011.
|
# Reinforcement Learning for Multi-Truck Vehicle Routing Problems
Randall Correll QC Ware Corp., Palo Alto, CA USA Sean J. Weinberg QC Ware
Corp., Palo Alto, CA USA Fabio Sanches QC Ware Corp., Palo Alto, CA USA
Takanori Ide AISIN CORPORATION, Tokyo Research Center, Chiyoda-ku, Tokyo,
Japan Takafumi Suzuki Aisin Technical Center of America, San Jose, CA USA
###### Abstract
Vehicle routing problems and other combinatorial optimization problems have
been approximately solved by reinforcement learning agents with policies based
on encoder-decoder models with attention mechanisms. These techniques are of
substantial interest but still cannot solve the complex routing problems that
arise in a realistic setting which can have many trucks and complex
requirements. With the aim of making reinforcement learning a viable technique
for supply chain optimization using classical computing today and quantum
computing in the future, we develop new extensions to encoder-decoder models
for vehicle routing that allow for complex supply chains. We make two major
generalizations. First, our model allows for routing problems with multiple
trucks. Second, we move away from the simple requirement of having a truck
deliver items from nodes to one special depot node, and instead allow for a
complex tensor demand structure. We show how our model, even if trained only
for a small number of trucks, can be embedded into a large supply chain to
yield viable solutions.
## I Introduction
In the setting of commercial operations, computational problems of substantial
theoretical and practical difficulty regularly arise. Even a small improvement
on the quality of solutions to such problems can translate to a very
substantial benefit. One such problem, heavily studied in operations research,
is the vehicle routing problem [1, 2].
Vehicle routing problems are NP-hard combinatorial optimization problems for
which there are numerous heuristic algorithms which yield approximate
solutions. Relatively recently, there has been interest in solving such
routing problems using reinforcement learning (RL) [3]. In this context, a
truck driving between nodes can be thought of as an agent performing actions
(selecting its next node to drive to) in the environment of the supply chain.
This approach has been successful for a wide range of combinatorial
optimization problems, especially when using models with an encoder-decoder attention mechanism [4].
In a sense, the use of reinforcement learning for such problems is very
natural. RL is appropriate in contexts where decisions must be made whose complex consequences can only be learned through experience. Games like Go
naturally fit that description, and indeed they have been well-addressed with
RL methodology [5]. More easily overlooked is that the routing of a truck
through a supply chain is in many ways similar to a game like Chess or Go. The
“moves” in this game are the selections of where to drive, what to pick up,
and what to drop off. The consequences of these moves are complicated and can
have long-term effects that are best learned through experience.
Unfortunately, reinforcement learning approaches for combinatorial
optimization are not readily deployed in a commercial setting because of
simplifications that they make. For the vehicle routing problem in particular,
past work has largely focused on the case of a single truck with a simple
task: bring “demand” from nodes to a depot. In reality, supply chains involve
numerous trucks, and requirements are more complex than simply bringing
material to one depot node, and the current machine learning models are not
equipped to handle such complex situations.
In this work, we take steps toward developing RL agents that can obtain good
solutions in complex supply chains. We use as our model a real commercial
supply chain of Aisin Corporation, a Japanese automotive manufacturing
company. We build on the work of [4] by adding new techniques allowing for
multiple trucks and for far more general requirements for trucks. While our
model is specially designed with Aisin Corporation’s vehicle routing problems
in mind, our techniques for applying RL to a very general class of routing
problems apply widely.
## II Vehicle Routing Problems
This section describes the particular vehicle routing problem that we apply a
reinforcement learning method to solve as well as the full-scale logistical
problem that it is based on.
Broadly speaking, vehicle routing problems (VRPs) [1, 2] are combinatorial
optimization problems where one or more trucks must carry material between
locations to accomplish certain requirements subject to constraints like
capacity and time limitations. There are numerous variants of VRPs so “vehicle
routing problem” is an umbrella term rather than a specific computational
problem.
The three subsections below describe VRPs of gradually increasing complexity.
Section II.1 reviews standard vehicle routing problems for which there are
already many solution methods including reinforcement learning. Section II.2
describes a more general form of VRP that we use as the environment for our
reinforcement learning agent. Unlike the basic VRP, the general VRP allows for
multiple trucks and all-to-all connectivity for demand structure. Finally,
section II.3 describes a real routing problem encountered by Aisin
Corporation. This real routing problem is quite similar to the general VRP of
section II.2, but includes additional constraints that are difficult to model
on a GPU.
### II.1 Basic Vehicle Routing Problems
In this section, we review the capacitated vehicle routing problem with
split-deliveries (SDVRP). We use the term “basic VRP” to refer to this problem
because it can be thought of as the base-model from which we generalize.
For the SDVRP, we are given a graph where nodes $z_{0},z_{1},\ldots,z_{n}$ are locations and weighted directed edges are the times needed to drive between the locations. One of the nodes, $z_{0}$, is special and is called the depot. For every non-depot node $i$, there is a certain nonnegative real number
$d_{i}$ called the demand for node $i$. A truck, which starts at the depot,
must drive between the nodes, pick up demand, and return it to the depot. The
truck has limited capacity $1$ so it may need to make multiple trips to the
depot.
To be precise, an instance of the basic VRP is specified by:
1.
A graph $G$ with $n+1$ nodes $z_{0},z_{1},\ldots,z_{n}$.
2.
An $(n+1)\times(n+1)$ matrix $T$ with non-negative entries and $T_{ii}=0$ for all
$i\in\\{0,1,\ldots,n\\}$ called the _time matrix_.
3.
A non-negative number $d_{i}$ assigned to each node $z_{i}$ except for the
depot ($i\neq 0$). These numbers are called _initial demands._
The nodes of the graph can be abstract, but in many cases they are explicitly
given as coordinates for locations that trucks might drive to. The time matrix
entry $T_{ij}$ is supposed to be the time it takes a truck to drive from node $i$ to node $j$. That is why we require $T_{ii}=0$. The initial demand $d_{i}$ is supposed to be the amount of material that must be carried by a truck from the node $z_{i}$ to the depot node $z_{0}$. (“Amount of material” is intentionally vague: in a practical application, demand can be quantified by geometrical volume, by weight, or even by monetary value. For our purposes, we will use geometrical volume as the standard meaning of “amount of material”, which makes it sensible that trucks have a limited carrying capacity.)
A candidate solution to this VRP is given by a route: a list of integers
$\xi=\xi_{1},\xi_{2},\ldots,\xi_{k}$ where $k$ is some positive integer
(called the route length) and each $\xi_{j}$ is an element of
$\\{0,1,\ldots,n\\}$. We require that $\xi_{1}=0$ and $\xi_{k}=0$ so that the
truck starts and ends at the depot. Given such a sequence, there are two
questions:
* •
What is the total driving time for $\xi$?
* •
Is $\xi$ a demand-satisfying route?
Here, the driving time for the route $\xi$ is defined as
$\text{time}(\xi)=\sum_{t=1}^{k-1}T_{\xi_{t}\xi_{t+1}}.$ (1)
Meanwhile, the question of whether or not the route is demand-satisfying is
intuitive but not mathematically elegant to describe. In short, a route is
demand satisfying if it will result in a truck carrying all of the initial
demand to the depot. The truck which starts at $\xi_{1}$ and follows the route
has a _capacity_ which we always take to be 1. This is the amount of demand
that the truck can store. The amount of demand the truck is carrying at a
given time is called _on-board demand_ to distinguish it from _off-board_
demand which is the dynamical demand waiting to be picked up at each node. As the truck navigates its route, it picks up as much demand as possible at each stop, converting off-board demand to on-board demand. (Actually, we might decide not to pick up the full demand at a given node by optimizing a pickup-selection algorithm, but we do not consider this case here.) The on-board demand is never allowed to exceed 1. When the truck returns to the depot, the on-board demand is reset to 0, and the route can continue until all off-board demand is zero and the truck is at the depot.
The optimization goal of this basic vehicle routing problem is to find, among
all demand-satisfying routes, the one with minimal driving time.
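The route-evaluation procedure above (the driving time of equation (1) plus the demand-satisfying check) can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code; the name `evaluate_route` and the tolerance used are our own choices.

```python
def evaluate_route(T, demand, route, capacity=1.0):
    """Evaluate a candidate SDVRP route.

    T       -- (n+1)x(n+1) list-of-lists time matrix with T[i][i] == 0
    demand  -- dict mapping non-depot node index to initial off-board demand
    route   -- list of node indices, starting and ending at the depot (0)
    Returns (total driving time, whether the route is demand-satisfying).
    """
    assert route[0] == 0 and route[-1] == 0
    off_board = dict(demand)          # remaining off-board demand per node
    on_board = 0.0                    # demand currently on the truck
    total_time = 0.0
    for prev, node in zip(route, route[1:]):
        total_time += T[prev][node]   # equation (1): sum of leg times
        if node == 0:
            on_board = 0.0            # a depot visit empties the truck
        else:
            # pick up as much demand as the remaining capacity allows
            pickup = min(off_board.get(node, 0.0), capacity - on_board)
            off_board[node] = off_board.get(node, 0.0) - pickup
            on_board += pickup
    satisfied = all(v <= 1e-9 for v in off_board.values())
    return total_time, satisfied
```

For instance, with two demand nodes of 0.6 each and unit capacity, the route $0,1,0,2,0$ is demand-satisfying while $0,1,2,0$ is not, since the second pickup would exceed capacity.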
The usage of reinforcement learning for this split-delivery vehicle routing
problem is well-established in the literature [4]. In fact, the model we use
is a generalization of that of Kool _et al._
### II.2 Generalized Vehicle Routing Problems
We now turn to a generalization of the basic vehicle routing problem which is
inspired by the realistic supply chain optimization problem of Aisin
Corporation that we explain in detail in section II.3. The purpose of the
“general VRP” is to find a middle ground between the overly simple basic VRP
and the enormously complex Aisin Corporation VRP. This middle ground has major
features of the full-scale supply chain logistics problem, but does not
include some messy constraints that are difficult to simulate efficiently for
training.
There are two ways that the general VRP is more complex than the basic
variant:
1.
There are multiple trucks.
2.
There is no special depot. Instead, demand is required to be moved between
specific nodes with various constraints. We refer to this as a _tensor demand
structure_ and explain it in detail below.
The use of multiple trucks is an obvious challenge for any routing algorithm
as optimal routes can involve subtle collaboration between trucks. Less
obvious is the surprisingly complex issue of tensor demand structure, where
any node can be a depot. For both these new ingredients, we have developed
modifications to the encoder and decoder models of [4].
#### Tensor Demand Structure
The basic VRP discussed in section II.1 has the property that all demand must
be taken to the same destination node (the depot). This is built into the
mathematical description of the problem because the off-board demand has a
_vector demand structure_. This means that the demand at a given time step $t$
is given by a vector
$d^{t}=\left(d_{1}^{t},d_{2}^{t},\ldots,d_{n}^{t}\right).$
In a realistic supply chain, we do not have the luxury of a single delivery
destination. Goods from a given node may be split into groups which need to be
taken to various delivery destination nodes. To accommodate such a situation,
we introduce the concept of a _tensor demand structure_. We begin with a
rank-2 tensor.
#### Rank-2 Demand
Assume that there is only one truck and consider a graph with $n$ nodes
$z_{1},\ldots,z_{n}$. (There is no longer any need for a special $z_{0}$
node). We introduce an $n\times n$ matrix $D^{t=0}$ with non-negative entries.
The meaning of the entry $D_{ij}^{0}$ is, intuitively, the initial amount of
demand that is located at node $i$ and must be shipped to node $j$. $D$ is
referred to as rank-2 off-board demand or as a matrix demand structure.
When the truck arrives at node $i$ at time $t$ with this sort of demand
structure, an issue arises: a _pickup selection_ decision must be made. There
are $n$ different types of demand that can be picked up from node $i$:
$\left(D_{i1}^{t-1},D_{i2}^{t-1},\ldots,D_{in}^{t-1}\right).$ There is not an
obvious way to perform pickup selection. In principle, a reinforcement
learning agent could learn optimal pickup selection, but this is beyond the
scope of our work. We assume that some reasonable selection algorithm is used.
After pickup selection, we obtain a new matrix $D^{t}$ by reducing $D^{t-1}$
by the amount of demand picked up by the truck from node $i$. Another issue
now appears: with demand on the truck, we have to remember which parts of the
on-board demand must go to which destination nodes. This can be dealt with by
promoting on-board demand at time $t$ to a vector
$E^{t}=\left(E_{1}^{t},E_{2}^{t},\ldots,E_{n}^{t}\right)$. The meaning of
$E_{i}^{t}$ is that, after the operations at time step $t$ (including pickup),
$E_{i}^{t}$ is the amount of demand on the truck which must be delivered to
node $i$.
Now that there is on-board demand in the truck, we need to revisit what
happens when the truck first arrives at a given node $i$. Before pickup
selection or any other operation, the first step is now to completely drop off
demand $E_{i}^{t-1}$. Mathematically, this simply means setting $E_{i}^{t}=0$.
If we wish, we can also keep track of the overall total demand satisfied after
each time step, in which case we would iteratively define a sequence $S$ by
$\displaystyle S^{0}=0,\qquad S^{t}=S^{t-1}+E_{\xi_{t}}^{t-1},$
where $\xi_{t}$ refers to the node visited at time step $t$.
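The drop-off, pickup-selection, and demand-satisfied updates just described can be sketched in Python. This is an illustrative sketch only: the paper leaves the pickup-selection algorithm open, so the greedy index-order selection below, and the name `visit_node`, are our own stand-ins.

```python
def visit_node(D, E, i, capacity=1.0):
    """One truck visit under a rank-2 (matrix) demand structure.

    D -- off-board demand matrix, D[i][j] = demand at node i bound for node j
    E -- on-board demand vector, E[j] = demand on the truck bound for node j
    i -- index of the node the truck has just arrived at
    Returns the amount of demand satisfied by this visit (the increment
    S^t - S^{t-1}).
    """
    satisfied = E[i]                  # first, fully drop off demand bound for i
    E[i] = 0.0
    # naive pickup selection: scan destinations in index order, loading as
    # much as the remaining capacity allows
    free = capacity - sum(E)
    for j in range(len(E)):
        take = min(D[i][j], free)
        D[i][j] -= take
        E[j] += take
        free -= take
    return satisfied
```

With $D^{0}$ holding demand only at node 0 (bound for nodes 1 and 2), visiting nodes 0, 1, 2 in order picks everything up and then satisfies it leg by leg.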
#### Arbitrary Rank Demand
The ideas of a demand matrix $D_{ij}^{t}$ can readily be generalized to
higher-rank tensors. The reason for doing this is that in a practical supply
chain (including the one our study is based on), there are delivery
requirements along the lines of “move this box from node 3 to node 7, and then
from node 7 to node 5”. Such multi-leg requirements may sound odd, but they
can arise for numerous practical reasons. There may be capacity limitations at
node 5, and node 7 may be a storage warehouse. Or perhaps the box needs to
have an operation performed on it before its final delivery. Another important
reason for multi-leg delivery requirements is that a cargo container may need
to be sent somewhere else after delivery.
Whatever the reason, promoting the matrix and vector structure of $D$ and $E$ to higher-rank tensors allows us to encode the data that we need for
this new situation. An initial demand tensor $D_{ijk}^{0}$ can be interpreted
as “there is initially demand $D_{ijk}^{0}$ located at node $i$ which needs to
first travel to node $j$ and then travel to node $k$.”
Unfortunately, with higher-rank tensor structure like this, the operations
that are performed when a truck arrives at a node become even more
complicated. Consider first an empty truck arriving at node $i$ at time $t$.
It starts by performing pickup selection to decide what off-board demand to pick up:
any part of $D_{ijk}^{t-1}$ is fine as long as the first index is $i$. After
the pickup, the off-board demand is correspondingly reduced. However, the
loaded demand is now material with instructions like “go to node $j$, then go
to node $k$” so we must introduce a matrix on-board demand $E_{jk}^{t}$ to
track this. However, now that the truck has this on-board demand, when it
later drives to node $j$, the on-board $E_{jk}$ will be dropped off. This
demand is _not satisfied_ because it hasn’t reached its final destination of
node $k$. We are therefore forced to introduce a rank-2 matrix off-board
demand structure when this demand is dropped off! When that matrix off-board
demand is later picked up, it is converted to rank-1 vector on-board demand.
In conclusion, rank-$r$ off-board demand will automatically require tracking
off-board demands with ranks $2$ through $r$ as well as on-board demands with
ranks $1$ through $r-1$. These can be separately tracked by a collection of
tensors like
$D^{r\,t};\quad D^{r-1\,t},\;E^{r-1\,t};\quad\ldots;\quad D^{2\,t},\;E^{2\,t};\quad E^{1\,t}$
or, alternatively, we can use “diagonal entries” like $D_{ijj}$ instead of
$D_{ij}$. Regardless of the organizational approach, there is no question that
bookkeeping is one of the major issues that arise when dealing with this more
realistic version of a vehicle routing problem.
As with the cases above, we can introduce a “total demand satisfied” sequence
$S^{t}$ which accumulates only when demand is sent to its final destination.
We do not accumulate $S$ when rank-2 on-board demand arrives at a node, but we
do accumulate it when rank-1 on-board demand arrives because that node is the
final destination for that material.
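One convenient way to do this bookkeeping is to key demands by their remaining itineraries, so that the rank is simply the length of a tuple. The sketch below is our own illustration (the name `arrive` and the dict representation are not from the paper), and it simplifies by letting a truck immediately re-pick demand it just dropped off.

```python
def arrive(off_board, on_board, node, capacity=1.0):
    """Process a truck arrival under arbitrary-rank demand.

    Demands are dicts keyed by remaining itineraries (tuples of nodes):
      off_board[(i, j, ..., k)] -- demand waiting at i that must visit j,...,k
      on_board[(j, ..., k)]     -- demand on the truck that must visit j,...,k
    Returns the demand satisfied (on-board demand whose final stop is `node`).
    """
    satisfied = 0.0
    # 1. drop off all on-board demand whose next stop is this node
    for itin in [it for it in on_board if it[0] == node]:
        amount = on_board.pop(itin)
        if itin[1:]:  # not the final stop: becomes lower-rank off-board demand
            off_board[itin] = off_board.get(itin, 0.0) + amount
        else:
            satisfied += amount
    # 2. greedy pickup of off-board demand located at this node
    free = capacity - sum(on_board.values())
    for itin in [it for it in off_board if it[0] == node]:
        take = min(off_board[itin], free)
        off_board[itin] -= take
        if off_board[itin] <= 1e-12:
            del off_board[itin]
        rest = itin[1:]
        on_board[rest] = on_board.get(rest, 0.0) + take
        free -= take
    return satisfied
```

Starting from rank-3 off-board demand $(3,7,4)$, arrivals at nodes 3, 7, and 4 reproduce the cascade described above: pickup to rank-2 on-board, drop-off to rank-2 off-board, pickup to rank-1 on-board, and finally satisfaction.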
#### Multiple Trucks
The next complexity to consider is the involvement of multiple trucks. This is
intuitive and easy to describe mathematically, but it adds immense difficulty
for optimization.
The main observation to make about the mathematical structure is that there is
an on-board demand for every truck, but there is only one off-board demand.
Thus, we need a new index $m$, which ranges from $1$ to the number of trucks
$N$, added to on-board demand. For instance:
$E_{ij}^{t,m}$
for rank-2 on-board demand. In this notation, we are dropping the $r=2$ symbol
as it is implied by the fact that there are two lower indices ($i$ and $j$).
In fact, because the notation involves so many indices, we will occasionally
also drop the reference to time as well. To help clarify, we use notation like
$E_{ij}^{m=2}$
to refer to rank 2 on-board demand for truck 2 in cases where we leave time
$t$ implicit. In other words, we explicitly write “$m=$” to indicate that we
are referring to the truck number, which ranges from $1$ to $N$.
In general, different trucks may have different capacities
$C_{1},\ldots,C_{N}$, but throughout our work we assume that all trucks have
capacity $1$. There is very little difficulty in adding varying capacities to
our models if needed.
The introduction of multiple trucks adds great subtleties to the problem. The
optimal solution to an instance with many trucks may involve highly
collaborative relationships between trucks.
Figure 1: Illustration of rank-2 cyclic off-board demand. At top, a package at
node 2 is waiting to be picked up. The package is to be sent to node 5 and
then returned. This contributes to $D^{\text{cyclic}}_{2\,5}$. After being
picked up by truck #1 (middle), the truck need not immediately travel to node
5. There is now a contribution to rank-2 on-board demand (and the cyclic
demand is gone). After dropping off the box at node 5, the empty box is still
required to go to node 2, contributing to rank-2 _direct_ demand while waiting
to be picked up.
#### Cyclic Demand Structure
The mathematical structure above is sufficient to describe another closely
related scenario with very little modification. In some commercial logistics
problems, trucks are required not only to deliver material but to then _return
empty containers back to the origin location_. This constraint is particularly
common for operations that occur in a repeating manner: trucks may need to
make regular (e.g. daily or weekly) shipments of identical items, and the same
specialized containers may need to be used. We allow for this constraint in
our general VRP.
Mathematically, we regard an empty box as no different from a full box, as it occupies the same amount of space in a truck. (Here we are using volume as the metric limited by truck capacity; we do not consider models constrained by both weight and volume.) The most straightforward way to enforce a box-return constraint is to simply increase the rank of tensor demand by one. However,
there is a more memory-efficient approach: we introduce a concept of _cyclic
off-board demand_. Meanwhile, we use the term _direct off-board demand_ to
refer to the sort of demand described earlier.
As an example of cyclic off-board demand, consider figure 1 in which material
initially at node 2 must be brought to node 5 before being returned to node 2.
We encode this demand as a term in a rank-2 tensor:
$D^{\text{cyclic}}_{2\,5}$. When a truck stops at node 2 to pick up this
cyclic demand, something different happens from the case of picking up direct
demand: the demand is converted to rank-2 on-board demand rather than rank 1.
Specifically, such a pickup of $D^{\text{cyclic}}_{2\,5}$ contributes to
$E_{5\,2}$ because the demand must be brought to node $5$ and then it must
later be brought to node $2$. It’s important to understand that once this
demand is dropped off at node 5, it contributes to _direct_ off-board demand
$D^{\text{direct}}_{5\,2}$ as discussed above.
To summarize the flow of demand in this example, we can write the following
table which shows how relevant components of demands are nonzero at various
moments of time:
Component | Description of most recent event.
---|---
$D_{2\,5}^{\text{cyclic}}$ | Initial material is at node 2; must be brought to node 5, then node 2
$E_{5\,2}^{m=1}$ | Box picked up from node 2 by truck 1; must go to node 5, then node 2
$D^{\text{direct}}_{5\,2}$ | Box dropped off and emptied at node 5 by truck 1, must be returned to node 2
At this point, the situation can continue. Suppose that truck 3 now picks up
the empty box. Then the route for the box would end as follows:
Component | Description of most recent event.
---|---
$E_{2}^{m=3}$ | Empty box picked up from node 5 by truck 3; next stop: node 2
0 | Empty box returned to node 2 by truck 3, requirements fulfilled
Note that our notation may cause confusion initially. When truck #3 picks up
the empty box from node 5, the demand tensor component that gets a
contribution is $E_{2}^{m=3}$ which makes no reference to node 5. This is
because node 5 is no longer relevant. The fact that the box is currently
located at node 5 is handled by tracking truck locations, which is a separate
matter from tracking demand flow.
The general behavior of cyclic and direct demand, at rank 3 and beyond, is intuitive but requires even more bookkeeping. The following table shows a sequence of
events for cyclic initial demand starting at node 3 that must go to node 7 and
then node 4 before returning to node 3.
Component | Description of most recent event.
---|---
$D_{3\,7\,4}^{\text{cyclic}}$ | Initial material is at node 3; must be brought to node 7, then node 4, then node 3
$E_{7\,4\,3}^{m=2}$ | Picked up from node 3 by truck 2, next stop node 7
$D_{7\,4\,3}^{\text{direct}}$ | Dropped off at node 7 by truck 2; must be brought to node 4, then node 3
$E_{4\,3}^{m=1}$ | Picked up from node 7 by truck 1, next stop node 4
$D_{4\,3}^{\text{direct}}$ | Dropped off at node 4 by truck 1; must be brought to node 3
$E_{3}^{m=3}$ | Picked up from node 4 by truck 3, next stop node 3
0 | Dropped off at node 3 by truck 3, requirements fulfilled
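The pickup conversion for cyclic demand can be written down compactly if demands are stored in dicts keyed by itinerary tuples: picking up rotates the origin node to the end of the itinerary, turning rank-$r$ cyclic off-board demand into rank-$r$ on-board demand. This is our own illustrative sketch (capacity limits are omitted for brevity).

```python
def pickup_cyclic(cyclic, on_board, node):
    """Pick up cyclic off-board demand at `node`.

    cyclic[(i, j, ..., k)]  -- demand at i that must visit j, ..., k and then
                               return to i (the return leg is implicit)
    on_board[(j, ..., k, i)] -- on-board demand with the return leg made
                                explicit
    E.g. cyclic itinerary (2, 5) becomes on-board itinerary (5, 2), matching
    the table above: go to node 5, then return to node 2.
    """
    for itin in [it for it in cyclic if it[0] == node]:
        amount = cyclic.pop(itin)
        new_itin = itin[1:] + (itin[0],)   # append the return-to-origin leg
        on_board[new_itin] = on_board.get(new_itin, 0.0) + amount
```

Subsequent drop-offs then follow the ordinary direct-demand rules, since the return leg is now an explicit part of the itinerary.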
#### Restricted Driving Windows
Typical VRPs involve minimizing driving time to accomplish the goal of
fulfilling all deliveries. However, in a commercial setting there is a
limitation on the time in which trucks can drive. As a result, we may need to
consider routes that fail to satisfy all demand.
Suppose that all trucks are only allowed to drive during an overall period of
time $T_{\mathrm{max}}>0$. The trucks drive simultaneously during this time.
We are not necessarily guaranteed that it is possible to fully satisfy demand
within that constraint, and the optimization goal is no longer obvious because
we need to make some decision about the relative importance of maximizing
demand satisfied and minimizing driving time.
### II.3 Aisin Corporation Vehicle Routing Problem
The goal of this work is to develop a new technique for solving a supply chain
logistics problem that arises in the operations of Aisin Corporation. In this
section, we explain the remaining complexities of that Aisin Corporation VRP
which we refer to as the AVRP for brevity. Note, however, that we do not
consider AVRP to be a well-defined computational problem; it is better thought
of as a problem instance. In fact, when we discuss the AVRP below, we always
mean a specific instance which has 21 nodes and a specific demand structure
that we now explain.
The general VRP from section II.2 is carefully designed to accommodate most of the key details of the AVRP. As a result, an instance of the
general VRP can be found to model the AVRP. Our approach is therefore to train
a machine learning model (see section III) that can solve the general VRP and
to then apply it to instances that approximate the AVRP.
Perhaps the most substantial AVRP complexity is the presence of individual
boxes. Demand is not an ambiguous real number but is composed of discrete
boxes that occupy a certain volume and have certain routing requirements. We
can use an index $a$ which we call box number:
$a\in\\{1,\ldots,\text{number of boxes}\\}$
to list all of the boxes. Given a box number $a$, there is a specific routing
requirement
$R_{a}=\left(R_{a}^{1},R_{a}^{2},\ldots,R_{a}^{r_{a}}\right)$ (2)
where each $R_{a}^{k}$ is a node. The meaning of this is that box $a$ must
start at the node $R_{a}^{1}$ and then it must visit the node $R_{a}^{2}$ and
so on. Moreover, for the AVRP, after visiting the final node $R_{a}^{r_{a}}$,
_all boxes must then be returned to the starting node_.
The difficulty of the AVRP is somewhat reduced by the following facts:
* •
All boxes either have $r_{a}=2$ or $r_{a}=3$.
* •
Although there are $\sim 330{,}000$ boxes for the AVRP, there are actually “only”
107 unique paths for boxes.
Figure 2: Depiction of the time matrix for the AVRP. Figure 3: Connectivity
graph for the AVRP of section II.3. Lines indicate pairs of the 21 nodes that
trucks drive between in the routes determined by Aisin Corporation logistics
experts. 142 trucks deliver approximately 340,000 boxes containing $\sim
15,000$ unique parts. Among these parts, there are 107 unique routing
requirements.
Given these observations, a general VRP instance that models the AVRP must
have initial off-board rank-2 and rank-3 cyclic demand and no initial direct
demand. We sometimes refer to the simplification of using continuous
quantities for demand, rather than discrete boxes, as the _box soup
simplification_.
The AVRP has 21 nodes and it has driving time windows: $T_{\mathrm{max}}$ is
equal to 16 hours, providing a pair of 8-hour shifts for each truck. Driving
times between nodes are illustrated in figure 2. The scale of the initial
demand is substantial: Aisin Corporation logistics experts currently use 142 trucks to deliver parts following the routes illustrated in figure 3.
## III Policy Neural Networks
There are a variety of ways to solve routing optimization with reinforcement
learning [6, 7]. Our specific model is an encoder and decoder directly
inspired by that of Kool _et al._ [4]. Their model is quite general: after
training with REINFORCE [8], it performs well for numerous routing problems
including the traveling salesman problem, basic vehicle routing problems, and
the orienteering problem. However, it does not account for multiple trucks.
Moreover, there is no obvious way to incorporate a tensor demand structure
into their model without a substantially new approach.
### III.1 RL Structure for Routing Problems
Before describing our neural networks, we first explain how a routing problem
like the general VRP fits into the framework of reinforcement learning and
also how an encoder and decoder furnish a policy.
Reinforcement learning concerns a Markov decision process (MDP) where an
agent, presented with _state_ of the _environment_ , performs an _action_ of
its choice which results in a new environment state and gives the agent some
_reward_. The new state as well as the reward are both stochastic: there is a
probability of a given new state and reward value given the original state and
the agent’s action. The goal of the agent is not to maximize the reward for a
given action but to maximize the long term reward (a concept known as the
_return_).
The agent does not know the probability of getting a certain new state and reward, but can learn through experience the best actions to take in given
states. The agent learns a _policy_ which is a probability distribution over
possible actions to take in a given state. In other words, given a state $s$
and a potential action $a$, the agent computes a quantity
$\pi(a\,|\,s)\in[0,1]$ such that $\sum_{a^{\prime}}\pi(a^{\prime}\,|\,s)=1$.
$\pi$, interpreted as a conditional probability distribution, is known as a
policy, and the agent samples from $\pi$ to select an action. Finding the
optimal policy (the one that maximizes the expectation value of return) is the
goal of the agent’s learning process.
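As a concrete illustration (ours, not the paper's model), a policy distribution over actions is often produced by applying a softmax to real-valued scores and then sampling from the result:

```python
import math
import random


def sample_action(scores, rng=random):
    """Sample an action from a policy given unnormalized scores.

    A softmax turns arbitrary real-valued scores into a distribution
    pi(a|s) with sum_a pi(a|s) = 1; the agent then samples an action from it.
    Returns (sampled action index, list of probabilities).
    """
    m = max(scores)                         # subtract max for numerical safety
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    probs = [e / z for e in exps]
    # inverse-CDF sampling from the categorical distribution
    r, acc = rng.random(), 0.0
    for a, p in enumerate(probs):
        acc += p
        if r < acc:
            return a, probs
    return len(probs) - 1, probs
```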
Consider the case of a basic vehicle routing problem with one truck. When the
truck is at a given node $\xi_{t-1}$ at time $t-1$, we want to use an RL agent
to determine the next node $\xi_{t}$ for the truck to drive to.
The most obvious approach is to make the state include the following
information:
* •
The route so far up to a given moment: $(\xi_{0},\xi_{1},\ldots,\xi_{t-1})$,
* •
The current remaining truck capacity,
* •
The remaining demand to be picked up at each node,
* •
The time matrix. (The time matrix is static throughout the episode, but is conveniently included in the state data, as we may want to train over states with various time matrices.)
Given this, the probability of going to node $z$ in a given state can be
written as
$\pi\left(z\,|\,(\xi_{0},\ldots,\xi_{t-1}),D_{t-1}\right).$
where $D_{t-1}$ is meant to include all of the demand information at time
$t-1$.
Assuming that we use the same policy $\pi$ for every step of the route, the
probability for selecting an entire route $\xi_{0},\ldots,\xi_{k}$ is
$\pi(\xi\,|\,D_{0})=\prod_{t=1}^{k}\pi\left(\xi_{t}\,|\,(\xi_{0},\ldots,\xi_{t-1}),D_{t-1}\right).$
(3)
This formula leads to an important observation that we will make use of when
discussing the learning algorithm below. Rather than thinking of the route as
consisting of $k$ actions, we can alternatively think of it as one single
action which has probability given by equation (3). Although this may seem
unnecessarily complicated, it is convenient for the REINFORCE algorithm, which involves the logarithm of the probability: taking the logarithm converts the product in equation (3) to a summation.
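Concretely, the log-probability of a whole route is the sum of the per-step log-probabilities (an illustrative sketch; the per-step probabilities would come from the trained policy):

```python
import math


def route_log_prob(step_probs):
    """Log-probability of a full route, as used by REINFORCE.

    step_probs -- list of the per-step policy probabilities
                  pi(xi_t | (xi_0, ..., xi_{t-1}), D_{t-1})
    Summing logs is the numerically stable form of the product in
    equation (3).
    """
    return sum(math.log(p) for p in step_probs)
```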
#### RL Structure With Multiple Trucks
How does the discussion above need to change when we consider a vehicle
routing problem (or another routing problem) with multiple trucks? There are
two natural approaches:
* •
Use a different policy for each truck, and have multiple agents interact
simultaneously with the environment, learning to work together.
* •
Use the same shared policy for each truck.
The first approach is arguably a more powerful technique, allowing different
trucks to use different strategies in the same situation. However, the second
approach is simpler to implement and still has the potential to approach
optimality for vehicle routing problems. We use the second approach throughout
this work.
The concept is that every truck has the same policy and views every other
truck as a part of the environment. At the start of an episode, all trucks are
at initial nodes and we start with the first truck ($m=1$). That truck is
presented with the environment which can include, in principle, all of the
information about the locations of the other trucks as well as all of the
initial demand. The $m=1$ truck uses the (shared) policy to select its next
node. We then go to the next truck $m=2$ and do the same.
After all $N$ trucks have assignments, the next step has an important
difference. All trucks now drive to their selected nodes, but they may not all
arrive at the next node at the same time. This is controlled by a time matrix
$T_{ij}$ giving driving times between nodes. Suppose that truck $m=3$ happens
to arrive at its node first. In that case, we proceed by using the policy for
truck 3 to give it a new node. When we do this, every truck other than truck 3
is treated as a part of the environment. After assigning a new node for truck
3, the environment is updated to the state where whichever next truck arrives
at its destination node at the earliest time.
In this example, we refer to the truck $m=3$ as the _active_ truck while the
other trucks with $m\neq 3$ are called _passive_ trucks. One point of
potential confusion here is that there are two usages of “time”. The first,
which we will sometimes refer to as _physical_ time, is the time measured by
the time matrix. The order in which active trucks are selected is determined
by the order of the physical times for arrivals. The other “time” is a
discrete index which enumerates arrival nodes for trucks. It’s important to
understand that if we say that an episode has a “route”
$(\xi_{0},\ldots,\xi_{k})$, then the nodes $\xi_{t}$ are not all for the same
trucks. We thus need to separately keep track of the active trucks at each
time step $(m_{0},\ldots,m_{k})$.
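This event-driven scheme, in which whichever truck has the earliest physical arrival time acts next, can be sketched with a priority queue. This is our own illustration, not the paper's code; `policy` is any callable standing in for the trained (shared) policy, and each truck's starting node is treated as an arrival at physical time 0.

```python
import heapq


def simulate_dispatch(T, start_nodes, policy, num_steps):
    """Event-driven dispatch of multiple trucks sharing one policy.

    T           -- time matrix, T[i][j] = driving time from node i to node j
    start_nodes -- initial node of each truck (truck index m = 0..N-1)
    policy      -- callable (truck, node, positions) -> next node
    Each arrival is one discrete time step t, yielding the route (xi_t)
    and the list of active trucks (m_t).
    """
    positions = list(start_nodes)
    # heap of (physical arrival time, truck index); ties broken by index
    events = [(0.0, m) for m in range(len(start_nodes))]
    heapq.heapify(events)
    route, active = [], []
    for _ in range(num_steps):
        now, m = heapq.heappop(events)      # earliest-arriving truck is active
        node = positions[m]                  # node it has just arrived at
        route.append(node)
        active.append(m)
        nxt = policy(m, node, positions)     # shared policy picks the next node
        positions[m] = nxt                   # all other trucks are environment
        heapq.heappush(events, (now + T[node][nxt], m))
    return route, active
```

Note that the discrete step index and the physical time deliberately differ: consecutive entries of `route` generally belong to different trucks, which is why `active` must be tracked alongside it.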
### III.2 Encoder Without Tensor Demand
We begin by discussing a modification of the model of Kool _et al._ which
accounts for multiple trucks. This model accounts for many of the complexities
in the problem, but does not have any way to deal with tensor demand structure
described in section II.2. Tensor demand structure requires a modification to
the attention mechanism, and we describe two approaches to this later.
The encoder-decoder model we use follows the basic idea of the transformer
network [9]. A sequence of input data is converted to a sequence of vectors
with dimension $d$ by an encoder. Then, a decoder acts on the encoded sequence
and some additional information (called context) to determine a probability
distribution over the inputs. We can then select one of the inputs by sampling
from that distribution.
#### Encoder Overview
The encoder is essentially identical to the encoder of Kool _et al._ , so we
do not cover it in great detail here. The input data is a sequence of node
coordinates $\mathbf{x}_{1},\ldots,\mathbf{x}_{n}$ with each $\mathbf{x}_{i}$
a two-dimensional point. (We use two dimensions in our analysis, making nodes
points on a plane; however, any dimension can be used as an input dimension.)
The input sequence should contain information not only about the locations of
the points but also about the initial demand structure. Initially, even if
there is high-rank tensor demand, we can compute a myopic rank-2 demand
$D_{ij}$ giving the material that is located at node $i$ and needs to go to
node $j$ as its first stop. For example, if we have cyclic rank-3 initial
demand $D^{\mathrm{cyclic}}_{ijk}$, then we can sum over the third index to
obtain a contribution to $D_{ij}$. At this point, define
$\displaystyle\delta^{\mathrm{out,init}}_{i}$ $\displaystyle=\sum_{j}D_{ij},$
(4) $\displaystyle\delta^{\mathrm{in,init}}_{i}$
$\displaystyle=\sum_{j}D_{ji}.$ (5)
These quantities are “myopic outgoing and ingoing demands” for every node. We
can concatenate the initial coordinates with these demands:
$\overline{\mathbf{x}}_{i}=\mathbf{x}_{i}\oplus\left(\delta_{i}^{\text{in,init}},\delta_{i}^{\text{out,init}}\right)$
(6)
where $\oplus$ denotes direct sum (which is equivalent to concatenation in
this context). We use these extended vectors
$(\overline{\mathbf{x}}_{1},\ldots,\overline{\mathbf{x}}_{n})$ as the sequence
of inputs for the encoder.
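For illustration, the reduction to myopic demands in equations (4)-(6) can be sketched in numpy; the node count and tensor values here are hypothetical, not taken from any real instance:

```python
import numpy as np

# Hypothetical instance: n = 4 nodes with a rank-3 cyclic demand tensor.
rng = np.random.default_rng(0)
n = 4
D3 = rng.uniform(0.0, 1.0, size=(n, n, n))

# Reduce to a myopic rank-2 demand: sum over the third index, giving the
# material located at node i whose first stop is node j.
D = D3.sum(axis=2)

# Equations (4)-(5): myopic outgoing and ingoing demand per node.
delta_out = D.sum(axis=1)
delta_in = D.sum(axis=0)

# Equation (6): concatenate node coordinates with the two demand scalars.
x = rng.uniform(0.0, 1.0, size=(n, 2))
x_bar = np.concatenate([x, delta_in[:, None], delta_out[:, None]], axis=1)
```

Each row of `x_bar` is then one input vector $\overline{\mathbf{x}}_{i}\in\mathbf{R}^{4}$ for the encoder.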
The first encoding step is a learned linear map with bias from the initial
input space $\mathbf{R}^{4}$ to an encoding space $\mathbf{R}^{d}$. (In our
experiments, we typically take $d=128$.) We denote this by
$\mathbf{h}^{0}_{i}=W^{\mathrm{init}}\overline{\mathbf{x}}_{i}+\mathbf{b}^{\mathrm{init}}$
(7)
where $W^{\mathrm{init}}$ is a $(d,4)$ matrix and $\mathbf{b}^{\mathrm{init}}$
is a $d$-dimensional vector. We emphasize that the same mapping is applied to
every input entry in the sequence: $W^{\mathrm{init}}$ and
$\mathbf{b}^{\mathrm{init}}$ do not depend on $i$.
After this initial encoding, we proceed with a sequence of three encoder
layers. Each encoder layer can be thought of as a two step process: an
attention layer (equation (8)) followed by a feedforward layer (equation (9)).
$\tilde{\mathbf{h}}^{l-1}_{i}=\mathrm{BN}\left(\mathbf{h}^{l-1}_{i}+\mathrm{MHA}\left(\mathbf{h}^{l-1}_{i}\right)\right),$
(8)
$\mathbf{h}^{l}_{i}=\mathrm{BN}\left(\tilde{\mathbf{h}}^{l-1}_{i}+\mathrm{FF}\left(\tilde{\mathbf{h}}^{l-1}_{i}\right)\right).$
(9)
In these equations, $\mathrm{BN}$ is a batch normalization layer [10],
$\mathrm{FF}$ is a feedforward network, and MHA is a multi-head attention
layer which we describe in detail below. The feedforward layers have a single
hidden dimension $d_{\mathrm{ff}}$ and consist of a linear layer with bias
mapping $\mathbf{R}^{d}\to\mathbf{R}^{d_{\mathrm{ff}}}$ followed by a ReLU
activation function, a dropout layer, and finally a linear map with bias back
to $\mathbf{R}^{d}$.
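The two sublayers of equations (8)-(9) can be sketched as follows. This is a simplified stand-in, not our actual implementation: the learned batch-norm scale and shift parameters and the dropout layer are omitted, and any attention function can be plugged in for MHA.

```python
import numpy as np

def batch_norm(h, eps=1e-5):
    # Normalize each feature over the node axis (a stand-in for BN [10];
    # learned scale and shift parameters are omitted in this sketch).
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

def feed_forward(h, W1, b1, W2, b2):
    # Linear -> ReLU -> Linear; the dropout layer is omitted here.
    return np.maximum(h @ W1 + b1, 0.0) @ W2 + b2

def encoder_layer(h, mha, ff_params):
    # Equations (8)-(9): attention sublayer, then feedforward sublayer,
    # each wrapped in a residual connection plus batch normalization.
    h_tilde = batch_norm(h + mha(h))
    return batch_norm(h_tilde + feed_forward(h_tilde, *ff_params))
```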
#### Multi-Head Attention
The function MHA is a multi-head attention mechanism identical to that of [4].
Below we consider two major generalizations of this attention mechanism to
deal with tensor demand structure, and this subsection establishes the
groundwork needed for those generalizations.
We begin by using prior embedded nodes
$\mathbf{h}_{1},\ldots,\mathbf{h}_{n}\in\mathbf{R}^{d}$ to compute vectors
known as queries, keys, and values. In a single-head attention mechanism, we
compute a single query, key, and value vector for each encoded node
$\mathbf{h}_{i}$:
$\displaystyle\mathbf{q}_{i}$
$\displaystyle=M^{\mathrm{query}}\,\mathbf{h}_{i}\in\mathbf{R}^{\alpha},$ (10)
$\displaystyle\mathbf{k}_{i}$
$\displaystyle=M^{\mathrm{key}}\,\mathbf{h}_{i}\in\mathbf{R}^{\alpha},$ (11)
$\displaystyle\mathbf{v}_{i}$
$\displaystyle=M^{\mathrm{value}}\,\mathbf{h}_{i}\in\mathbf{R}^{d}.$ (12)
Here, each $M$ is a matrix mapping $\mathbf{R}^{d}$ to either $\mathbf{R}^{d}$
or $\mathbf{R}^{\alpha}$, where $\alpha$ is a new hyperparameter.
For the multi-head attention mechanism, we have a positive integer
$n_{\mathrm{heads}}$ which we typically take to be 6. For each
$s\in\\{1,\ldots n_{\mathrm{heads}}\\}$, we have a different query, key, and
value map and thus different queries, keys, and values computed:
$\displaystyle\mathbf{q}_{s\,i}$
$\displaystyle=M_{s}^{\mathrm{query}}\,\mathbf{h}_{i}\in\mathbf{R}^{\alpha},$
(13) $\displaystyle\mathbf{k}_{s\,i}$
$\displaystyle=M_{s}^{\mathrm{key}}\,\mathbf{h}_{i}\in\mathbf{R}^{\alpha},$
(14) $\displaystyle\mathbf{v}_{s\,i}$
$\displaystyle=M_{s}^{\mathrm{value}}\,\mathbf{h}_{i}\in\mathbf{R}^{\beta}.$
(15)
The linear maps $M$ are learned during training. Note that like other aspects
of the encoder that we have discussed, $i$ behaves as a batch index, allowing
sequences to have arbitrary length.
The next step is to compute a _compatibility_ for every query-key pair (for
each head). We define
$u_{s\,ia}=\frac{1}{\sqrt{\alpha}}\mathbf{q}_{s\,i}\cdot\mathbf{k}_{s\,a}$
(16)
with $\cdot$ denoting a standard dot product. After this, a softmax version of
compatibility is computed
$\rho_{s\,ia}=\frac{\exp(u_{s\,ia})}{\sum_{b}\exp(u_{s\,ib})}\in\mathbf{R}$
(17)
which is used to weight a sum over values:
$\mathbf{h}_{s\,i}^{\prime}=\sum_{a}\rho_{s\,ia}\mathbf{v}_{s\,a}.$ (18)
The only remaining step is to merge the data from the $n_{\mathrm{heads}}$
attention heads. To do this, we simply concatenate the output from each head,
$\mathbf{h}_{1\,i}^{\prime}\oplus\ldots\oplus\mathbf{h}_{n_{\mathrm{heads}}\,i}^{\prime}$
and use a learned linear map with bias on this vector, from dimension
$n_{\mathrm{heads}}\beta$ to dimension $d$ to recover a vector
$\mathbf{h}^{\prime}_{i}\in\mathbf{R}^{d}$.
We use the symbol MHA to denote the function which starts with encoded vectors
$\mathbf{h}_{i}\in\mathbf{R}^{d}$, and returns the output sequence
$\mathbf{h}^{\prime}_{i}\in\mathbf{R}^{d}$. MHA computes queries, keys, and
values following equations (13)-(15), then computes compatibility and new
encoded vectors for each head, and finally maps to the original encoding
dimension.
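A minimal numpy sketch of the full MHA computation of equations (13)-(18); the parameter shapes and values here are illustrative stand-ins for quantities that would be learned during training:

```python
import numpy as np

def softmax(u, axis=-1):
    u = u - u.max(axis=axis, keepdims=True)  # numerical stabilization
    e = np.exp(u)
    return e / e.sum(axis=axis, keepdims=True)

def mha(h, Mq, Mk, Mv, Wo, bo):
    # h: (n, d); Mq, Mk: (heads, alpha, d); Mv: (heads, beta, d);
    # Wo: (heads * beta, d); bo: (d,).
    q = np.einsum('sad,nd->sna', Mq, h)                    # eq (13)
    k = np.einsum('sad,nd->sna', Mk, h)                    # eq (14)
    v = np.einsum('sbd,nd->snb', Mv, h)                    # eq (15)
    alpha = q.shape[-1]
    u = np.einsum('sia,sja->sij', q, k) / np.sqrt(alpha)   # eq (16)
    rho = softmax(u, axis=-1)                              # eq (17)
    hp = np.einsum('sij,sjb->sib', rho, v)                 # eq (18)
    # Concatenate heads and map back to dimension d.
    hp = hp.transpose(1, 0, 2).reshape(h.shape[0], -1)
    return hp @ Wo + bo
```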
### III.3 Decoder Without Tensor Demand
Unlike our discussion of the encoder, this section deviates meaningfully from
the model of [4] because we include new techniques for using multiple trucks.
After encoding, through $l$ layers, we obtain a tuple of $n$ vectors
$\left(\mathbf{h}^{l}_{i}\right)_{i=1}^{n}$ which are interpreted as encoded
nodes for the graph. To obtain the probability of selecting a given node as a
next action, we “decode” the nodes along with some additional information
called context.
At the moment of decoding, we are given
* •
The encoded nodes $\left(\mathbf{h}^{l}_{i}\right)_{i=1}^{n}$ which were
encoded once at the start of the episode,
* •
the active truck $m_{t-1}$,
* •
the current on-board demands $E^{m}_{i_{1},\ldots,i_{r}}$,
* •
the current off-board demands
$D_{i_{1},\ldots,i_{r}},D^{\mathrm{cyclic}}_{i_{1},\ldots,i_{r}}$,
* •
the next expected nodes for each passive truck and the time until those trucks
arrive, and
* •
the remaining capacities for all of the trucks.
This is an enormous amount of information, and we do not attempt to convey all
of it to the agent without loss.
#### Information Added to Nodes
As a first step, we consider a myopic form of off-board and on-board demand.
We define
$\epsilon^{m}_{i}=\sum_{i_{2},\ldots,i_{r}}E^{m}_{i,i_{2},\ldots,i_{r}}$ (19)
and
$\delta_{i}^{\mathrm{out}}=\sum_{i_{2},\ldots,i_{r}}D^{\mathrm{total}}_{i,i_{2},\ldots,i_{r}},$
(20)
where $D^{\mathrm{total}}$ is the sum of cyclic and direct demand. In words,
$\epsilon^{m}_{i}$ is the total material on truck $m$ which needs to be
dropped off at node $i$ for its next stop, and $\delta_{i}^{\mathrm{out}}$ is
the total material at node $i$ that is waiting to be picked up. We will also
define an ingoing myopic off-board demand
$\delta_{i}^{\mathrm{in}}=\sum_{i_{1},i_{3},\ldots,i_{r}}D^{\mathrm{total}}_{i_{1},i,i_{3},\ldots,i_{r}},$
(21)
but we won’t need it for the time being.
The quantities $\epsilon$ and $\delta$ do not fully encode on-board and off-
board demand but instead only give the most immediately relevant aspects of
the demand structure. Below we will give a technique that can encode the full
tensor structure at the cost of substantial memory consumption.
Now, let $m_{\star}$ be the active truck index. We then modify the encoded
nodes as follows
$\overline{\mathbf{h}}_{i}=\mathbf{h}_{i}\oplus\left(\delta_{i}^{\mathrm{out}},\epsilon_{i}^{m_{\star}},\epsilon_{i}^{1},\epsilon_{i}^{2},\ldots,\epsilon_{i}^{m_{\star}-1},\epsilon_{i}^{m_{\star}+1},\ldots,\epsilon_{i}^{N}\right)$
(22)
where $\oplus$ is concatenation.
It’s worth stopping to observe some key principles for these new encoded
nodes. First, note that everything on the right hand side of equation (22)
strictly corresponds to the appropriate node: the index $i$ consistently
appears on both sides. This is one of the main reasons that it is difficult to
convey a tensor demand structure with attention models: tensor demand
involves more than one node at a time. Our usage of myopic demands avoids the
difficulty at the cost of presenting the agent with incomplete environment
information.
Another important principle of equation (22) is the ordering of components.
The original encoded vectors have dimension $d$. After the $d^{\mathrm{th}}$
component, we have a slot for the off-board demand. Then we have a slot for
the on-board _active truck’s_ demand. Finally we have $N-1$ slots for the on-
board demands of passive trucks. Note that very little care is taken with
regard to the order of the passive trucks; this is intentional as all passive
trucks should be treated on equal footing.
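The node augmentation of equation (22) amounts to a concatenation with a particular column order, sketched here (array shapes are illustrative):

```python
import numpy as np

def augment_nodes(h, delta_out, eps, m_star):
    # Equation (22): append myopic off-board demand, then the active
    # truck's on-board demand, then the passive trucks' on-board demands
    # (the order of the passive trucks is deliberately arbitrary).
    # h: (n, d); delta_out: (n,); eps: (N, n), one row per truck.
    N = eps.shape[0]
    passive = [m for m in range(N) if m != m_star]
    cols = [delta_out, eps[m_star]] + [eps[m] for m in passive]
    return np.concatenate([h, np.stack(cols, axis=1)], axis=1)
```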
#### Additional Contextual Information
There is still additional information that we need to convey: the remaining
truck capacities and the locations of trucks. To do so, we follow the spirit
of [4] and introduce one special _context node_. To define it, let $m_{\star}$
be the active truck index and put
$\mathbf{C}=\left(C_{m_{\star}},C_{1},C_{2},\ldots,C_{m_{\star}-1},C_{m_{\star}+1},\ldots,C_{N}\right)$
(23)
where $C_{k}$ is the capacity remaining for truck $k$. We will use
$\mathbf{C}$ as a part of the context vector, but we need to include also
information about truck locations. Let $z_{1},\ldots,z_{N}$ denote the nodes
of all trucks in the sense that $z_{m_{\star}}$ is the current node of the
active truck from which it is about to depart and, for the passive trucks,
$z_{1},\ldots,z_{m_{\star}-1},z_{m_{\star}+1},\ldots,z_{N}$ are the next
scheduled nodes that the passive trucks are _currently heading toward_. Ideally, we
would like to append to the context node
$\left(h_{z_{m_{\star}}},h_{z_{1}},h_{z_{2}},\ldots,h_{z_{m_{\star}-1}},h_{z_{m_{\star}+1}},\ldots,h_{z_{N}}\right)$
but this would be extremely expensive: the context vector would have $dN$
dimensions just from these components. To avoid this, we take the view that
the passive trucks are less important than the active truck, and use a single-
layer feedforward neural network $f$, consisting of a linear layer, a ReLU
nonlinearity, and then one more linear layer, to reduce the dimension of the
passive truck embeddings. From this we can define
$\mathbf{H}=\left(h_{z_{m_{\star}}},f\left(h_{z_{1}}\right),f\left(h_{z_{2}}\right),\ldots,f\left(h_{z_{m_{\star}-1}}\right),f\left(h_{z_{m_{\star}+1}}\right),\ldots,f\left(h_{z_{N}}\right)\right).$
(24)
There is one more ingredient for the context node. An important piece of
information is the time until each passive truck will arrive at its next
scheduled destination. Let $t_{m}$ denote the time until truck $m$ will arrive
at its next stop. We emphasize that $t_{m}$ measures the remaining physical
time (see section III.1) until the truck arrives. It is not the absolute
physical arrival time of the scheduled event for truck $m$ but rather the
difference between that absolute time and the physical time of the current
event for the current active truck. In particular, note that we can have
$t_{m}=0$ if $m=m_{\star}$ or in the event where truck $m$ has the same
arrival time as the active truck.
Now we define
$\mathbf{T}=\left(t_{1},\ldots,t_{m_{\star}-1},t_{m_{\star}+1},\ldots,t_{N}\right).$ (25)
Note that the last $N-1$ components of $\mathbf{H}$ (equation (24)) correspond
to the same trucks with the same order as the components of $\mathbf{T}$. This
consistency is what matters. With different events, the same components may
correspond to different trucks, but for any given event, the components of
$\mathbf{C},\mathbf{H},$ and $\mathbf{T}$ line up in the same way.
We finally define the _context node_ :
$\mathbf{h}_{\mathrm{ctx}}=\mathbf{H}\oplus\mathbf{T}\oplus\mathbf{C}.$ (26)
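The assembly of the context node in equations (23)-(26) can be sketched as follows, with `f` standing in for the small feedforward network that reduces passive truck embeddings:

```python
import numpy as np

def context_node(h_nodes, z, m_star, t_arrive, C, f):
    # Equations (23)-(26). z[m] is the node truck m occupies (active) or
    # is heading toward (passive); f is any callable reducing a d-vector.
    N = len(z)
    passive = [m for m in range(N) if m != m_star]
    # Equation (24): active truck's full embedding, reduced passive ones.
    H = np.concatenate([h_nodes[z[m_star]]]
                       + [f(h_nodes[z[m]]) for m in passive])
    T = np.array([t_arrive[m] for m in passive])            # eq (25)
    Cvec = np.array([C[m_star]] + [C[m] for m in passive])  # eq (23)
    return np.concatenate([H, T, Cvec])                     # eq (26)
```

Note that the components of $\mathbf{H}$, $\mathbf{T}$, and $\mathbf{C}$ line up: the same `passive` ordering is used for all three.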
#### Decoder Structure
We now take the modified nodes of equation (22) and put them through a single
layer with a structure identical to the encoder layers of equations (8) and (9)
except that the nodes have a greater dimension due to the modifications in
equation (22) and one additional modification explained momentarily. We denote
the output as $\left(\overline{\mathbf{h}}^{1}_{i}\right)_{i=1}^{n}$.
The additional modification is that, following [4], we adjust the computation
of keys and values in equations (14) and (15) by adding _source terms_ :
$\displaystyle k_{s\,i}$
$\displaystyle=M_{s}^{\mathrm{key}}\,\mathbf{h}_{i}+\mathbf{u}_{s}^{\mathrm{key,out}}\,\delta_{i}^{\mathrm{out}}+\mathbf{u}_{s}^{\mathrm{key,in}}\,\delta_{i}^{\mathrm{in}},$
(27) $\displaystyle v_{s\,i}$
$\displaystyle=M_{s}^{\mathrm{value}}\,\mathbf{h}_{i}+\mathbf{u}_{s}^{\mathrm{val,out}}\,\delta_{i}^{\mathrm{out}}+\mathbf{u}_{s}^{\mathrm{val,in}}\,\delta_{i}^{\mathrm{in}}.$
(28)
Here, $s$ is an index for attention heads, and various $\mathbf{u}_{s}$ are
learned vectors with the same dimension as the left hand side of the equations
they appear in; for example, $\mathbf{u}_{s}^{\mathrm{key,out}}$ is a vector
with the same dimension as keys which is $\alpha$ as specified in equation
(14). Note $\delta_{i}^{\mathrm{out}}$, defined in equation (20), is a real
number that re-scales the vector $\mathbf{u}_{s}^{\mathrm{key,out}}$. This
modification of keys and values helps to convey the current demand situation
to the attention mechanism directly.
We now proceed with a second attention layer, this one with the same number of
heads $n_{\mathrm{heads}}$ but defined on $n+1$ nodes: the $n$ nodes
$\left(\overline{\mathbf{h}}^{1}_{i}\right)_{i=1}^{n}$ from the first decoder
layer, and the context node of equation (26). However, for this attention layer,
following [4], we only make a single query from the context node and have keys
for all of the other nodes. In other words, for each head we compute one query
(for the context node), $n$ keys, and $n$ values (for the other nodes). In
equations (16)-(18), the index $i$ only takes on a single value. We use sources
$\delta_{i}^{\mathrm{out}},\delta_{i}^{\mathrm{in}}$ just as in equations (27)
and (28). Moreover, for this layer we only compute MHA as described at the end
of section III.2. This is a pure attention layer (as opposed to an encoder
layer, which uses equations (8) and (9)).
The output of this second layer is one new context vector
$\mathbf{h}_{\mathrm{ctx}}^{\prime}$ with dimension $d_{\mathrm{ctx}}$. This
vector is then used for a final (third) layer much like the second layer
although we only use one head. Once again, a query $q_{\mathrm{ctx}}$ is
computed only for $\mathbf{h}_{\mathrm{ctx}}^{\prime}$ and keys are computed
for the encoded nodes $\overline{\mathbf{h}}^{1}_{i}$.
For this last layer, we obtain compatibility in the usual way except that we
regulate with $\tanh$ and we allow for _masking_ :
$u_{i}=\begin{cases}A\tanh(q_{\text{ctx}}\cdot k_{i})&\text{if node $i$ is
allowed}\\\ -\infty&\text{otherwise}\end{cases}$ (29)
where $A$ is a hyperparameter that we take to be 10. The idea is that we can
block certain nodes for the active truck to drive to if we know, for some
reason, that doing so is a poor choice. We describe the specific rules we use
for masking in section [[FILL]].
Finally, the $u_{i}$ are converted to probabilities with a softmax layer, and
these probabilities are interpreted as the values of the policy: the
probabilities of selecting node $i$ for the active truck’s next destination:
$\pi(i)=\frac{e^{u_{i}}}{\sum_{j=1}^{n}e^{u_{j}}}.$ (30)
There is no need for values to be computed in the final attention layer.
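Equations (29)-(30) together can be sketched as a small function; `allowed` is a hypothetical boolean mask over nodes, and the key vectors are illustrative:

```python
import numpy as np

def policy(q_ctx, keys, allowed, A=10.0):
    # Equations (29)-(30): tanh-regulated compatibilities, -inf for
    # masked nodes, then a softmax over the remaining nodes.
    u = np.where(allowed, A * np.tanh(keys @ q_ctx), -np.inf)
    u = u - u[allowed].max()   # stabilize the softmax
    e = np.exp(u)
    return e / e.sum()
```

Masked nodes receive exactly zero probability, since $e^{-\infty}=0$.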
### III.4 Incorporating Tensor Demand Structure
In the prior sections, we have not fully incorporated tensor demand structure
described in section II.2. We have been able to incorporate aspects of the
demand structure by reducing to quantities like $\delta^{\mathrm{out,in}}$ and
$\epsilon^{m}_{i}$; these quantities fit into the framework of [4] because
they can be associated with a single node. Consider for example equation (22)
where we append to the encoded nodes. The left and right hand sides of equation
(22) have the same $i$ index in a consistent manner. A similar observation can
be made for equations (27) and (28).
The fact that a quantity like a demand matrix $D_{ij}$ cannot be conveyed to
an encoder or decoder demonstrates a limitation of the attention model of [4].
To be certain, their model is very general and very powerful, providing
excellent solutions to a wide range of combinatorial optimization problems.
However, the difficulty arises in optimization problems where the problem
structure unavoidably involves data related to subsets of nodes.
As an example of a combinatorial optimization problem beyond the scope of [4],
we could consider a variant of the traveling salesman problem where we are
given a tensor $R_{ijk}$ and the goal is to navigate through the graph such
that we collect a reward $R_{ijk}$ when the agent moves from node $i$ to node
$j$ and next to node $k$. If the goal is to find the tour that maximizes the
sum of these rewards, or perhaps to find the path that does so within a time
constraint (given travel times between nodes), then there is no way around the
fact that a rank 3 tensor is central to the problem. This is a situation where
the attention model of [4] would require a meaningful modification.
In this section, we describe two different modifications of [4] which
incorporate tensor demand structure. The first, _dynamical masking_ , is a
fairly simple modification that does not result in a substantial computational
overhead. However, it only accounts for rank-2 quantities. The second
technique, which we refer to as a _tensor attention mechanism_ , is general
and powerful but can be extremely memory consuming.
#### Dynamical Masking
The step of the attention mechanism that involves more than one node is the
dot product evaluation between keys and queries. If we want to incorporate a
demand tensor $D_{ij}$, a natural idea would be to replace the dot product
with
$\frac{1}{\sqrt{\alpha}}G_{ij}\,\mathbf{q}_{i}\cdot\mathbf{k}_{j}$ (31)
where $G$ is some tensor determined by $D$. This approach can exaggerate
query-key compatibility in cases where $D_{ij}$ is large and suppress
compatibility when the demand is small.
There are a few reasonable choices for $G$. The first is $G_{ij}=1$ which
reduces to a basic dot product compatibility. Next is $G_{ij}=M_{ij}$ where
$M$ is the mask defined as $M_{ij}=1$ when $D_{ij}>0$ and $M_{ij}=-\infty$
otherwise. Both of these are within the methodology of [4]. A third and more
novel choice of $G$ is $G_{ij}=\log D_{ij}$. This last form has several
virtues: it reduces to a mask in the sense that it approaches
$-\infty$ as $D_{ij}\to 0^{+}$. Moreover, it can exaggerate compatibility when
$D_{ij}$ is large. A simple additional adjustment is to use
$G_{ij}=AD_{ij}+B\log D_{ij}$
which is more sensitive to changes in $D_{ij}$ for larger values.
Rather than having to pick from these various choices, we can in fact choose
all of them by taking advantage of the multiple heads. In other words, for a
given head $s\in\\{1,\ldots,n_{\mathrm{heads}}\\}$, we can put
$G^{s}_{ij}=A^{s}_{\mathrm{basic}}+A^{s}_{\mathrm{mask}}M_{ij}+A^{s}_{\mathrm{log}}\log
D_{ij}+A^{s}_{\mathrm{lin}}D_{ij}.$ (32)
In principle, even more terms can be used, and a more thorough investigation
of various models would be sensible.
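A sketch of equation (32) for a single head, under the simplifying assumption that the mask term is folded in as a constant shift on components with positive demand and $-\infty$ on components with zero demand:

```python
import numpy as np

def dynamical_mask(D, A_basic, A_mask, A_log, A_lin, eps=1e-12):
    # Equation (32): constant, mask, log-demand, and linear-demand terms.
    # Components with zero demand are sent to -inf, as in the pure mask
    # choice; eps guards the logarithm before the mask is applied.
    G = A_basic + A_mask + A_log * np.log(np.maximum(D, eps)) + A_lin * D
    return np.where(D > 0, G, -np.inf)
```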
#### Tensor Attention Mechanism
The drawback of dynamical masking is that it specializes to rank-2 tensors. To
move to arbitrary rank, we could add terms that sum over certain indices, like
$\sum_{k}D_{ijk}$, but this will not give access to the full structure of the tensor.
There is, however, a way to modify the attention mechanism at a more
fundamental level. In equations (13)-(15), we obtain queries, keys, and values
by mapping embedded nodes $\mathbf{h}_{i}$ by means of “attention
vector maps” $M^{\mathrm{query,key,value}}$. These maps create a
correspondence: for each node there is a query and so on. Demand structure can
then be presented to the decoder through equations (27) and (28). However, we
would like to modify, e.g., equation (27) to get something like
$\mathbf{k}_{i}\overset{??}{=}M^{\mathrm{key}}\mathbf{h}_{i}+\mathbf{u}D_{ijk}.$
This equation is intentionally nonsensical, but it does reveal what we need.
The indices $ijk$ must match on both sides. Thus, in the rank 3 case, rather
than requiring that a single node corresponds to a key, _we instead define a
key $K_{ijk}$ for each 3-tuple of nodes $(i,j,k)$_. We do this as follows:
$K_{ijk}=M^{\mathrm{key}}\left(\mathbf{h}_{i}\oplus\mathbf{h}_{j}\oplus\mathbf{h}_{k}\right)+\mathbf{u}^{\mathrm{key}}D_{ijk}$
(33)
where $M^{\mathrm{key}}$ is a learned linear map (without bias) from
$\mathbf{R}^{3d}$ to $\mathbf{R}^{\alpha}$ and $\mathbf{u}^{\mathrm{key}}$ is
a learned vector with dimension $\alpha$. As before, $\oplus$ denotes
concatenation or, equivalently, direct sum.
Note that $M^{\mathrm{key}}$ acts on the full vector
$\mathbf{h}_{i}\oplus\mathbf{h}_{j}\oplus\mathbf{h}_{k}$. The operation that
maps $(\mathbf{h}_{i})_{i=1}^{n}$ to the 3-index object
$\left(\mathbf{h}^{\oplus 3}\right)_{i,j,k=1}^{n}$ is just a reorganization,
but it is, unfortunately, an expensive one. In practice, an implementation of
this object would have $b\cdot n^{3}\cdot 3d$ components, where $n$ and $d$
are, respectively, the number of nodes and the encoding dimension, and $b$ is
whatever batch size is used in the implementation (for, e.g., parallelization). The problem
is the $n^{3}$ factor, which leads to a substantial memory cost for this
technique as the number of nodes grows.
Equation (33) is generalized as follows
$\displaystyle K^{s}_{a_{1}\ldots a_{r}}$
$\displaystyle=M_{s}^{\mathrm{key}}\left(\mathbf{h}_{a_{1}}\oplus\ldots\oplus\mathbf{h}_{a_{r}}\right)+\mathbf{u}^{\mathrm{key}}_{s}D_{a_{1}\ldots
a_{r}},$ (34) $\displaystyle V^{s}_{a_{1}\ldots a_{r}}$
$\displaystyle=M_{s}^{\mathrm{value}}\left(\mathbf{h}_{a_{1}}\oplus\ldots\oplus\mathbf{h}_{a_{r}}\right)+\mathbf{u}^{\mathrm{value}}_{s}D_{a_{1}\ldots
a_{r}}.$ (35)
In these equations, we refer to the tensor $D_{a_{1}\ldots a_{r}}$ as a
_source_. We can generalize to multiple sources in the obvious way: replacing
$\mathbf{u}_{s}D_{a_{1}\ldots a_{r}}$ by
$\sum_{k=1}^{n_{\mathrm{sources}}}\mathbf{u}_{k,s}\,D^{(k)}_{a_{1}\ldots
a_{r}}.$ (36)
Note that we are not constructing a tensor query. Our method is to keep using
equation (13) for the construction of queries. The concept is that each node
queries a sequence of $r$ other nodes, yielding a compatibility
$u^{s}_{i,a_{1}\ldots a_{r}}=q^{s}_{i}\cdot K^{s}_{a_{1}\ldots a_{r}}$ (37)
from which we find attention weights
$\rho^{s}_{i,a_{1}\ldots a_{r}}=\frac{\exp(u^{s}_{ia_{1}\ldots
a_{r}})}{\sum_{b_{1}\ldots b_{r}}\exp(u^{s}_{ib_{1}\ldots b_{r}})}$ (38)
and finally we obtain new nodes through a value sum:
$h_{i,s}^{\prime}=\sum_{a_{1}\ldots a_{r}}\rho^{s}_{ia_{1}\ldots
a_{r}}V^{s}_{a_{1}\ldots a_{r}}.$ (39)
As before, a final feedforward layer must be used to convert the concatenated
outputs for each head to a single new output node $h_{i}^{\prime}$.
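A sketch of equation (33) that makes the $n^{3}$ memory cost explicit; shapes are illustrative, and the maps would be learned in practice:

```python
import numpy as np

def tensor_keys(h, M_key, u_key, D):
    # Equation (33): one key per 3-tuple of nodes. h: (n, d);
    # M_key: (alpha, 3d); u_key: (alpha,); D: (n, n, n) demand source.
    n, d = h.shape
    # The O(n^3) reorganization: h_i (+) h_j (+) h_k for every triple.
    hi = np.broadcast_to(h[:, None, None, :], (n, n, n, d))
    hj = np.broadcast_to(h[None, :, None, :], (n, n, n, d))
    hk = np.broadcast_to(h[None, None, :, :], (n, n, n, d))
    triples = np.concatenate([hi, hj, hk], axis=-1)   # (n, n, n, 3d)
    return triples @ M_key.T + D[..., None] * u_key   # (n, n, n, alpha)
```

Even at this toy scale the concatenated object dominates the memory footprint, which is exactly the $n^{3}$ problem discussed above.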
## IV Training Methodology
Figure 4: Example of a training curve showing the objective function (equation
40) during training.
In this section, we go into some detail for our training techniques. One
aspect of this is our reinforcement learning algorithm which, keeping in line
with [4], is a variant of REINFORCE [8]. This section also elaborates on our
methodology for environment simulation and generation.
#### Objective Function
Reinforcement learning requires a reward definition. For our purposes, the
entire route can be regarded as a single action, and thus we only need to
define a reward $R(\xi)$ for a full route $\xi$. Equivalently, we can define an
objective function $F(\xi)=-R(\xi)$. This approach is natural because of our
use of REINFORCE.
Typical vehicle routing problems can use total driving time as an objective
function to minimize. However, our routing problem has a time constraint
$T_{\mathrm{max}}$ and we are not guaranteed that all demand will be
fulfilled. To address this, we define _demand coverage_ as the percentage
$\eta$ of initial demand that is eventually fulfilled. This quantity requires
care to calculate: when a truck arrives at a node $i$, all rank 1 on-board
demand with index $i$ is dropped off as a final stop. We can keep track of the
accumulated total of such demand as it is dropped off. This total, divided by
the sum of all components of initial demand, is the demand coverage $\eta$.
We also use a parameter $T$ to denote the physical time of the last truck when
it finishes its route. It’s very important to note that $T$ is not the average
time for all trucks.
We then introduce the objective function
$F(\xi)=-B_{\mathrm{coverage}}\eta+T/T_{\mathrm{max}}$ (40)
where $B_{\mathrm{coverage}}$ is a hyperparameter. Note that we write
$\xi$ as the input to $F$ to emphasize that $\eta$ and $T$ are determined by
the route (and also by the initial demand, time matrix, etc.).
$B_{\mathrm{coverage}}$ is meant to be greater than 1, and a typical value is 10. The idea is that
demand coverage takes first priority, and until coverage approaches its
maximum value of 1, we do not try to reduce $T$ below the terminal time limit.
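Equation (40) is a one-liner; the coverage and time values below are hypothetical:

```python
def objective(eta, T, T_max, B_coverage=10.0):
    # Equation (40): demand coverage eta dominates (B_coverage > 1);
    # the last truck's finishing time T is a secondary, normalized term.
    return -B_coverage * eta + T / T_max
```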
#### Environment Simulation
Training the model of section III for the general VRP requires a simulation of
the environment that is able to take advantage of GPU parallelization. We
developed such a simulation using PyTorch. There are several challenges that
our simulation overcomes that we highlight in this section.
Consider multiple episodes for vehicle routing problems which run in parallel.
The truck routes can be stored as a tensor $\xi_{b,t}$. Here, $b$ is a batch
index and $t$ is a time index. The value of $\xi_{b,t}$ is a node providing
the departure node at (indexed, not physical) time $t$ for batch entry $b$.
However, if there are multiple trucks, then it’s not clear which truck
$\xi_{b,t}$ refers to. Thus, we need to track a separate tensor $A_{b,t}$
which specifies which truck is the active truck at time $t$ for batch entry
$b$. This approach allows us to perform operations in parallel over a batch
with a CUDA implementation.
One of the pitfalls of this organizational approach is that routes are likely
to terminate for different batch entries with different $t$. As a result, we
must accept that $\xi_{b,t}$ will have a tail of repeating entries for most
values of $b$.
Demand structure must be tracked with great care for the general VRP. We use
tensors to separately track each rank of on-board demand, cyclic off-board
demand, and direct off-board demand. For example, we use a tensor
$E_{b,m,ijk}$ to track rank-3 on-board demand. The indices refer to,
respectively, the batch entry, the truck number, and the three tensor indices.
#### Environment Generation
Generating episodes is particularly challenging because of the need to
efficiently create examples of demand structures. Episodes start with zero on-
board demand and zero direct off-board demand, but have initial cyclic off-
board demand with ranks 2 and 3. Not just any tensor
$D^{\mathrm{cyclic}}_{ij},D^{\mathrm{cyclic}}_{ijk}$ are acceptable. For
example, a component like $D^{\mathrm{cyclic}}_{232}$ must be excluded. To
create instances, we start with a mask tensor with components which are 1 only
for allowed tensor components. We give the tensors random values within an
allowed range and mask away the disallowed components. We then randomly mask
further components with some given probability. This is meant to make
instances more representative of real data, where we don’t have all-to-all
connectivity.
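One plausible sketch of this generation procedure, assuming “allowed” means all three indices are distinct (so components like $D^{\mathrm{cyclic}}_{232}$ are excluded); the value range and drop probability are illustrative:

```python
import numpy as np

def random_cyclic_demand(n, keep_prob=0.5, seed=0):
    # Random values, a mask that zeroes disallowed components (those
    # with any repeated index), then a further random mask to mimic
    # the sparse connectivity of real data.
    rng = np.random.default_rng(seed)
    D = rng.uniform(0.0, 1.0, size=(n, n, n))
    i, j, k = np.indices((n, n, n))
    D *= (i != j) & (j != k) & (i != k)          # structural mask
    D *= rng.random((n, n, n)) < keep_prob       # sparsity mask
    return D
```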
#### REINFORCE Implementation
Following [4] we implemented a variant of REINFORCE [8].
The REINFORCE algorithm is a policy-gradient reinforcement learning algorithm.
While many RL algorithms are based on the idea of first trying to estimate
(from experience) the “value” of various actions in a given state, and then
taking actions with higher estimated value, policy-gradient algorithms
circumvent the intermediate step of estimating values. Instead, we work
directly with a parameterized policy, varying parameters to optimize the
return from an episode.
The REINFORCE algorithm, following [11], is given as follows.

REINFORCE
Inputs: parameterized policy $\pi$; initial parameter $\theta$
while desired performance not achieved do
  Using $\pi(\theta)$, generate an episode $(s_{0},a_{0},r_{1},\ldots,r_{T})$
  for $t=0,1,2,\ldots,T-1$ do
    $G\leftarrow r_{t+1}+\gamma r_{t+2}+\ldots+\gamma^{T-t-1}r_{T}$
    $\nabla J\leftarrow\gamma^{t}G\,\nabla_{\theta}\log\pi(a_{t}\,|\,s_{t},\theta)$
    $\theta\leftarrow\text{Ascent}(\theta,\nabla J)$
  end for
end while
In this algorithm, $\gamma\in(0,1)$ is a fixed discount factor. “Ascent”
refers to any gradient-based ascent optimization step. This could be gradient
ascent on $J(\theta)$, or it could be replaced by other optimization algorithms
like Adam [12].
REINFORCE can learn much faster when a _baseline_ is added. This essentially
means that some function $b$ of states (but not of actions) is constructed
with each episode and the return $G$ in the algorithm is replaced by $G-b(s)$.
This algorithm still converges to the optimal policy theoretically and, with a
well-chosen baseline, does so much faster. This is easiest to understand when
$b(s)$ is taken to be an estimate of the return after state $s$ based on data
from recent previous episodes. In this case, $G-b(s)$ being positive indicates
that this episode was better than expected and thus it’s sensible to increase
the probability of following this sequence of actions. Meanwhile, if $G-b(s)$
is negative, the return is less than what is considered a reasonable par, and
the gradient ascent would reverse and reduce the probability of taking these
actions. The REINFORCE algorithm works regardless of whether or not such a
baseline is used, but the learning time is dramatically reduced with a good
baseline.
For our purposes, we take a variant of REINFORCE adapted to our vehicle routing
problem as follows. First, for the purposes of the algorithm, we take the
entire episode to be defined by a single action. In other words, the episode
is just $a_{0},r_{1}$. The action $a_{0}$ is the entire route specification
$a_{0}=(\xi_{1},\xi_{2},\ldots,\xi_{k})$ where each $\xi_{i}$ is a node. This
odd-sounding choice is sensible because the sequential structure of the decoder
makes the logarithm of the policy equal to a sum over logarithms of the
probabilities of each action in an episode. The reward $r_{1}$ is simply the
negation of the route length (or time), $-L(\xi)$, which is computed by summing
the appropriate distances between nodes based on a metric or on known travel times.
The second important aspect of our variant is the baseline methodology. This
idea is adapted from [4]. We maintain a “baseline agent”
which uses the same parameterized policy but does not constantly update its
parameter $\theta$. Instead, the baseline agent uses an outdated parameter
$\theta_{\text{BL}}$ which is occasionally updated to match the primary
agent’s $\theta$, but only when the agent substantially and consistently
outperforms the baseline.
Our REINFORCE variant is given in algorithm IV. Note that this algorithm is
broken up into epochs and batches.
REINFORCE variant for VRP
Input: Parameterized policy $\pi$
Input: Integers num_epochs, batch_size, batches_per_epoch
Input: Initial parameter $\theta$
$\theta_{\text{BL}}\leftarrow\theta$
for $e=1,\ldots,$ num_epochs do
    for $b=1,\ldots,$ batches_per_epoch do
        $\xi\leftarrow$ (batch_size many episodes from $\pi(\theta)$)
        $\xi_{\text{BL}}\leftarrow$ (batch_size many episodes from $\pi(\theta_{\text{BL}})$)
        $\nabla J\leftarrow\texttt{batch\_mean}\left(\left(L(\xi)-L(\xi_{\text{BL}})\right)\nabla_{\theta}\log\left(\sum_{i=1}^{k}\pi(\xi^{i},\theta)\right)\right)$
        $\theta\leftarrow\textrm{descent}(\theta,\nabla J(\theta))$
    end for
    if baseline_test() then
        $\theta_{\text{BL}}\leftarrow\theta$
    end if
end for
One confusing part of this algorithm may be the summation
$\sum_{i=1}^{k}\pi(\xi^{i},\theta)$. To clarify, this is a sum over the
probabilities computed by the encoder/decoder network at each stage of the
route. $k$ refers to the number of steps in the route and the index $i$ runs
over steps in the route, not over batch entries. The entire computation is
performed for each batch entry and averaged over.
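In code, the per-episode term reduces to a sum of per-step log-probabilities, using the factorization of the logarithm of the policy noted earlier; the probabilities and route lengths below are illustrative, and autodiff of the resulting scalar would yield the gradient estimate in the algorithm.

```python
import math

def episode_log_prob(step_probs):
    # log pi(xi) = sum_i log pi(xi^i, theta): the policy factorizes over
    # the k decoding steps of the route.
    return sum(math.log(p) for p in step_probs)

def batch_objective(batch_lengths, baseline_lengths, batch_step_probs):
    """Mean over the batch of (L(xi) - L(xi_BL)) * log pi(xi)."""
    terms = [
        (L - L_bl) * episode_log_prob(probs)
        for L, L_bl, probs in zip(batch_lengths, baseline_lengths, batch_step_probs)
    ]
    return sum(terms) / len(terms)

# Two episodes, each a 2-step route; per-step probabilities are made up.
obj = batch_objective([10.0, 12.0], [11.0, 11.0], [[0.5, 0.5], [0.25, 0.5]])
```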
The baseline_test() subroutine returns true when the policy $\pi(\theta)$
substantially outperformed the baseline policy $\pi(\theta_{\text{BL}})$ in
recent episodes. More specifically, after each epoch we compute the percentage
of episodes in which the policy outperformed the baseline policy. If this percentage
exceeds 50% for 10 _consecutive epochs_ then we update the baseline
parameters. Moreover, if the percentage exceeds 70% for any epoch, we update
the parameters. There is certainly room for experimentation with different
methods here (like the one-sided T-test used in [4]), but our methods were
satisfactory.
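The baseline-update criterion can be sketched directly from the two thresholds stated above; the function signature and the tracking of per-epoch win rates are illustrative assumptions.

```python
def baseline_test(win_rates, streak_len=10, streak_thresh=0.5, instant_thresh=0.7):
    """Return True when the policy has recently outperformed the baseline.

    win_rates: per-epoch fraction of episodes in which the policy beat the
    baseline. Update if the latest epoch exceeds instant_thresh, or if the
    last streak_len epochs all exceed streak_thresh.
    """
    if not win_rates:
        return False
    if win_rates[-1] > instant_thresh:
        return True
    recent = win_rates[-streak_len:]
    return len(recent) == streak_len and all(r > streak_thresh for r in recent)

assert baseline_test([0.4, 0.75]) is True   # a single epoch above 70%
assert baseline_test([0.55] * 10) is True   # 10 consecutive epochs above 50%
assert baseline_test([0.55] * 9) is False   # streak too short
```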
## V Supply Chain Management Workflow
The ultimate goal of our work is to find a way to apply reinforcement learning
techniques for combinatorial optimization problems in a realistic commercially
valuable setting. Training an agent through the methods of sections III and IV
yields approximate solutions to the general VRP described in section II.2.
This section explains how we can use such a trained agent to obtain
approximate solutions to the full AVRP of section II.3.
There are two difficulties to overcome:
1. 1.
The scale of the AVRP is too large, with 21 nodes and with so much demand that
$\sim 150$ trucks may be necessary to fulfill requirements in the 16 hour time
window.
2. 2.
The general VRP lacks the discrete box structure of the full VRP.
To deal with the first difficulty, we decompose the graph into pieces that can
be treated by agents trained for a smaller number of trucks and nodes as
described in section V.1. The second difficulty is handled by processing truck
routes obtained through the general VRP into a full-scale simulation of the
AVRP (see section V.2).
### V.1 Node Subset Search
Consider an instance of the general VRP with $n$ nodes and demand structure
$D^{\mathrm{init}}$. Given the demand structure, the time matrix, and the
driving window $T_{\mathrm{max}}$, we can estimate the number of trucks $N$
that will be necessary to fulfill all requirements.
Suppose that we have an algorithm to find solutions to general VRPs with a
smaller number of nodes and trucks and with smaller demand structure. Our
algorithm works for $n^{\prime}<n$ nodes and $N^{\prime}<N$ trucks.
This situation arises in our context naturally: we can train, for instance, an
agent to solve general VRP instances with 10 nodes and three trucks, a scale
smaller than the AVRP with its 21 nodes and over 100 trucks.
We should be able to use our smaller-scale algorithm to solve the larger
problem by applying it repeatedly to different subsets of nodes to gradually
fulfill all demand requirements. This raises the question of how to find good
subsets.
We begin by looking at the demand structure $D^{\mathrm{init}}$. For
simplicity, assume that this consists only of rank-3 cyclic demand (other
cases are very similar, and we describe the necessary modifications below).
From $D^{\mathrm{init}}$ we can identify the nonzero components. These are
tuples of nodes
$(i_{1},j_{1},k_{1}),\ (i_{2},j_{2},k_{2}),\ \ldots,\ (i_{u},j_{u},k_{u})$
where $u$ is some integer (which happens to be 107 for the AVRP). We can begin
our node search by uniformly randomly selecting one of these triples of nodes
from the list. (In cases where demand also includes rank 2 or another rank, we
include tuples of appropriate length in the list for nonzero demand cases and
we allow such tuples to be selected as well.) After drawing a tuple from the
list, we remove it from the list. Suppose that we select the tuple
$(i_{3},j_{3},k_{3})$. We define a starting subset of nodes as
$A=\\{i_{3},j_{3},k_{3}\\}$. If $3<n^{\prime}$, we continue to draw nodes.
Suppose that we next draw $(i_{5},j_{5},k_{5})$. We now consider the set
$A=\\{i_{3},j_{3},k_{3},i_{5},j_{5},k_{5}\\}$. If any node repeats (for
instance, if $i_{3}=j_{5}$), that entry is only counted once in the set. There
will now be between 4 and 6 elements in $A$. If $|A|<n^{\prime}$, we continue
and otherwise we stop. Continuing in this way, we can either eventually run
out of tuples or we can reach $|A|\geq n^{\prime}$. If we reach
$|A|=n^{\prime}$, we stop and use $A$ as our first guess of a node subset. If
$|A|$ exceeds $n^{\prime}$, we remove the most recently added tuple. If we
run out of tuples, we stop.
After this process, we might have $|A|<n^{\prime}$. In this case, we simply
randomly add additional nodes outside of $|A|$ until reaching
$|A|=n^{\prime}$.
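The subset-drawing procedure above can be sketched as follows; the handling of an over-full candidate (skip the offending tuple and keep drawing) and the padding step follow the text, while the function signature and the example tuples are illustrative assumptions.

```python
import random

def draw_node_subset(tuples, n_prime, n_total, rng=random):
    """Build a node subset of size n_prime from nonzero-demand tuples.

    tuples: node tuples (e.g. (i, j, k)) with nonzero demand; n_total is the
    number of nodes in the full graph, used for random padding if needed.
    """
    pool = list(tuples)
    rng.shuffle(pool)  # draw tuples uniformly at random without replacement
    A = set()
    while pool and len(A) < n_prime:
        t = pool.pop()
        candidate = A | set(t)  # repeated nodes are counted once
        if len(candidate) > n_prime:
            continue  # drop the most recently drawn tuple and keep going
        A = candidate
    # If tuples ran out before reaching n_prime, pad with random extra nodes.
    outside = [v for v in range(n_total) if v not in A]
    rng.shuffle(outside)
    while len(A) < n_prime and outside:
        A.add(outside.pop())
    return A

demand_tuples = [(0, 1, 2), (2, 3, 4), (5, 6, 7)]
A = draw_node_subset(demand_tuples, n_prime=5, n_total=10)
assert len(A) == 5
```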
At this point, we have obtained a random node subset $A$. We repeat this
process to obtain $k_{\text{node draws}}$ different random subsets,
and we apply our algorithm $k_{\text{subset attempts}}$ times for each of the
$k_{\text{node draws}}$ subsets. We compute the mean demand fulfilled for the
$k_{\text{subset attempts}}$ trials, and we select the node subset with the
highest mean demand coverage.
### V.2 Execution Loop
With a technique for selecting node subsets, we are now in a position to
describe the execution procedure. This begins with the initial demand
structure $D^{\mathrm{init}}$ which is determined by the AVRP’s requirements.
Next, a node subset is selected through the node search technique of section
V.1. We identify the portion of $D^{\mathrm{init}}$ that is supported by the
subset and we map it onto a demand structure $D^{\prime}$ for $n^{\prime}$
nodes. To regulate the policy input, we clip $D^{\prime}$ at some maximum
value $C$. We then apply the trained agent $k_{\text{execution trials}}$ times
and select the trial with the highest percentage of covered demand. This trial
has a specific routing for $N^{\prime}$ trucks and corresponds to a certain
demand fulfillment. The route as well as the on-board and off-board demand at
each step is saved and the initial demand $D^{\mathrm{init}}$ is modified: it
is reduced by the amount of demand satisfied by the route.
This process is repeated, each time first performing node selection and then
finding the best route out of $k_{\text{execution trials}}$ attempts. We
repeat until all demand is satisfied. The total number of iterations of this
procedure multiplied by $N^{\prime}$ will be the total number of trucks needed
for our solution.
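The outer loop just described can be sketched generically; `solve_subset` and `select_subset` are stand-ins for the trained agent and the node-subset search, and the toy demand below is illustrative.

```python
def execution_loop(total_demand, solve_subset, select_subset, trials=5):
    """Greedy outer loop: select a subset, run the agent several times, keep
    the best trial, subtract the demand it satisfies, and repeat until all
    demand is fulfilled.

    solve_subset(subset) -> (route, satisfied_demand_dict)
    select_subset(remaining_demand) -> node subset
    """
    routes, remaining = [], dict(total_demand)
    while any(v > 0 for v in remaining.values()):
        subset = select_subset(remaining)
        # Best of `trials` attempts, ranked by total demand covered.
        route, satisfied = max(
            (solve_subset(subset) for _ in range(trials)),
            key=lambda rs: sum(rs[1].values()),
        )
        routes.append(route)
        for key, amount in satisfied.items():
            remaining[key] = max(0, remaining[key] - amount)
    return routes, remaining

# Toy instance: 3 units of demand, each pass satisfies 2.
demand = {("a", "b"): 3}
routes, left = execution_loop(
    demand,
    solve_subset=lambda s: (["a", "b"], {("a", "b"): 2}),
    select_subset=lambda d: {"a", "b"},
)
assert left[("a", "b")] == 0 and len(routes) == 2
```

The total truck count of the paper's procedure is then the number of iterations times $N^{\prime}$.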
### V.3 Full Scale Simulation
The result of the execution loop yields approximate solutions to the general
VRP. We obtain a collection of routes for various trucks $x_{m\,\tau}$ where
$m$ ranges over trucks and $\tau$ over time steps for each truck.666Here, we
break our earlier convention of using a unified time index $t$ and instead
use $\tau$ to range over the time steps of individual trucks.
However, we still need a way to convert such routes to candidate solutions to
the AVRP of section II.3.
Our strategy is to interpret the routes $x$ as “suggested routes” and to
attempt to use them in a _full-scale_ supply chain simulation. We make use of
a modified variant of the simulation of [13]. Every box is individually
tracked and has rank-2 or rank-3 requirements. Unlike [13], we require that
demand is cyclic: boxes must be returned to their origin node. Each box has a
specific individual volume. The simulation is designed to be as similar as
possible to the actual commercial routing problem of Aisin Corporation.
Trucks follow along the routes $x_{m,\tau}$ that they are assigned from the
execution algorithm and they pick up boxes as is appropriate for their route.
Boxes are picked up according to the algorithm described in [13] and we
carefully ensure that trucks only drive within the allowed drive time windows
(two 8 hour shifts). The result of full-scale simulation is that, rather than
having a list of suggested truck routes, we obtain a precise statement about
what each truck in the supply chain is doing at all times, including exactly
which boxes must be picked up and dropped off at various nodes.
## VI Results
### VI.1 Training Parameters
When training the general VRP agent, we used an encoding dimension $d=128$
and three encoder layers and 8 attention heads for all layers except for the
final policy output layer. For feedforward layers we used 64 dimensional
hidden layers. We used dynamical masking as described in section II.2, as we
found that memory limitations were prohibitive for the full tensor demand
structure. For dynamical masking in the decoder, we used one source $D_{ij}$
given by
$D_{ij}=\sum_{k}D^{\mathrm{total}}_{ijk}+D^{\mathrm{total}}_{ij}$
where $D^{\mathrm{total}}$ is the sum of cyclic and direct off-board demand.
We also used dynamical masking in the first encoding layer with the same
$D_{ij}$ (although for the encoder we are using initial demand while for the
decoder we are using the demand at the moment of decoding).
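The masking source is a simple tensor contraction; a minimal sketch with plain nested lists and an illustrative 2-node demand instance:

```python
def masking_source(D3, D2):
    """D_ij = sum_k D3[i][j][k] + D2[i][j]: collapse cyclic rank-3 demand and
    direct rank-2 demand into the single matrix used for dynamical masking."""
    n = len(D2)
    return [[sum(D3[i][j]) + D2[i][j] for j in range(n)] for i in range(n)]

# Tiny 2-node example with made-up demand values.
D3 = [[[1, 0], [2, 1]],
      [[0, 0], [0, 3]]]
D2 = [[0, 5], [1, 0]]
assert masking_source(D3, D2) == [[1, 8], [1, 3]]
```

For the encoder this contraction would be applied to the initial demand; for the decoder, to the demand at the moment of decoding.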
Training was conducted in batches of size 256 with the Adam optimization
method. Using an initial learning rate of $2^{-23}$, we trained for 100
epochs, each consisting of 64 batches. The learning rate was taken to decay by a
factor of .965 for each epoch until reaching a final rate of $2^{-15}$ at
which point it was held constant. An example training curve is shown in figure
4.
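The schedule shape (multiplicative per-epoch decay, then held constant at a floor) can be sketched as follows; the endpoint values here are illustrative placeholders rather than the rates used in the experiments.

```python
def lr_schedule(lr_init, lr_floor, decay, num_epochs):
    """Per-epoch learning rates: multiply by `decay` each epoch, then hold
    constant once the floor is reached."""
    rates, lr = [], lr_init
    for _ in range(num_epochs):
        rates.append(lr)
        lr = max(lr * decay, lr_floor)
    return rates

rates = lr_schedule(lr_init=1e-3, lr_floor=1e-4, decay=0.965, num_epochs=100)
assert rates[0] == 1e-3
assert rates[-1] == 1e-4  # floor reached and held
assert all(a >= b for a, b in zip(rates, rates[1:]))
```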
### VI.2 Execution Performance
Figure 5: Typical demand satisfied by trucks as we assign trucks. Unlike the
algorithm used in [13], we maintain strong demand coverage consistently as we
add more trucks. However, the newly implemented box-return constraint cancels
this benefit. Figure 6: Remaining demand as we iterate through trucks. This is
the remaining demand, not the demand share on individual trucks. Figure 7: The
truck-routing connectivity graph obtained by assigning teams of 3 trucks using
the methods of section V. Note that this figure only shows connectivity
without regard for the demand flow along each edge, routing orientations, or
specific timing details.
The methods described in section V were used with truck groups of size
$N^{\prime}=3$. The estimated demand satisfied by each truck during this
process is shown in figure 5 and the estimated remaining demand as truck
groups are iteratively assigned is shown in figure 6. Upon full-scale
simulation (section V.3), we obtain the routing graph shown in figure 7.
## VII Conclusion
While there is much additional work to do in exploring algorithmic and
workflow improvements, these first results demonstrate that this approach can
successfully train a general VRP agent to solve the logistics problem. This is
especially noteworthy in that the general VRP agent addresses multiple trucks
and a distributed network of pickup and delivery nodes beyond the spoke and
hub model. This generality extends beyond the needs of the AVRP and can
address a wide variety of logistics situations.
This reinforcement learning approach is computationally demanding, especially
considering multiple trucks and the tensor demand structure, but the
introduction of the attention mechanism and the decomposition of the fleet
into sub-teams of trucks reduced the computational burden while still achieving
solutions. The box return constraint especially added to the computational
burden of finding solutions, just as it adds greater complexity to real-life
operations. This suggests that the design of future logistics approaches might
try to mitigate or replace the box return requirement with a different
approach that would increase efficiency and reduce cost.
We were only able to study a few training approaches during the course of this
project. Further work on alternative training approaches would likely find
more efficient and more effective methods. While the model
incorporates many key features of Aisin data, the overall workflow was not
optimized for Aisin data. So exploring problem structure unique to the Aisin
data might yield better training results.
And finally, the use of teams of trucks was necessary to make the problem
computationally tractable, and was also successful in yielding solutions, but
we have only explored a small sub-space of how teams of trucks can be used
iteratively to decompose the actions of the entire fleet of trucks. So more
work in this area would likely improve results.
Overall, the results are quite promising using classical computing today, and
they may also benefit from quantum computing in the future. And given the
multiple parts of the training algorithm, exploring how to better implement
each part in future studies could yield significant improvement in the overall
solution quality for supply chain logistics.
## References
* Dantzig and Ramser [1959] George B Dantzig and John H Ramser. The truck dispatching problem. _Management science_ , 6(1):80–91, 1959.
* Toth and Vigo [2002] Paolo Toth and Daniele Vigo. _The vehicle routing problem_. SIAM, 2002.
* Bello et al. [2016] Irwan Bello, Hieu Pham, Quoc V Le, Mohammad Norouzi, and Samy Bengio. Neural combinatorial optimization with reinforcement learning. _arXiv preprint arXiv:1611.09940_ , 2016.
* Kool et al. [2018] Wouter Kool, Herke Van Hoof, and Max Welling. Attention, learn to solve routing problems! _arXiv preprint arXiv:1803.08475_ , 2018.
* Silver et al. [2017] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. _nature_ , 550(7676):354–359, 2017.
* Mazyavkina et al. [2020] Nina Mazyavkina, Sergei Sviridov, Sergei Ivanov, and Evgeny Burnaev. Reinforcement learning for combinatorial optimization: A survey. _CoRR_ , abs/2003.03600, 2020. URL https://arxiv.org/abs/2003.03600.
* Vinyals et al. [2015] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, _Advances in Neural Information Processing Systems_ , volume 28. Curran Associates, Inc., 2015. URL https://proceedings.neurips.cc/paper/2015/file/29921001f2f04bd3baee84a12e98098f-Paper.pdf.
* Williams [1992] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. _Machine Learning_ , 8(3):229–256, 1992. doi: 10.1007/BF00992696. URL https://doi.org/10.1007/BF00992696.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_ , 30, 2017.
* Ioffe and Szegedy [2015] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _International conference on machine learning_ , pages 448–456. PMLR, 2015.
* Sutton and Barto [2018] Richard S. Sutton and Andrew G. Barto. _Reinforcement Learning: An Introduction_. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd.html.
* Kingma and Ba [2015] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, _3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings_ , 2015. URL http://arxiv.org/abs/1412.6980.
* [13] To appear: Toward supply chain logistics with quantum optimization algorithms.
# The ReBB model at 8 TeV: Odderon exchange is not a probability, but a certainty
Presented at “Diffraction and Low-$x$ 2022”, Corigliano Calabro, Italy, Sept. 2022.
István Szanyi${}^{1,~{}2,~{}3,~{}{\dagger}}$ Tamás
Csörgő${}^{2,~{}3,~{}{\ddagger}}$ 1Eötvös University, H - 1117 Budapest,
Pázmány P. s. 1/A, Hungary;
2Wigner FK, H-1525 Budapest 114, POB 49, Hungary;
3MATE Institute of Technology, Károly Róbert Campus, H-3200 Gyöngyös, Mátrai
út 36, Hungary;
###### Abstract
The Real Extended Bialas-Bzdak (ReBB) model study is extended to the 8 TeV
$pp$ TOTEM elastic differential cross section data. The analysis shows that
the ReBB model describes the $pp$ and $p\bar{p}$ differential cross section
data in the limited $0.37\leq-t\leq 1.2$ GeV2 and $1.96\leq\sqrt{s}\leq 8$ TeV
kinematic region, in a statistically acceptable manner. In this kinematic
region a greater than 30 $\sigma$ model-dependent Odderon signal is observed
by comparing the $pp$ and ReBB extrapolated $p\bar{p}$ differential cross
sections. Thus, in practical terms, within the framework of the ReBB model,
Odderon exchange is not a probability, but a certainty.
## 1 Introduction
In a recent paper [1], published in July 2021, we showed that the Real
Extended $p=(q,d)$ version of the Bialas-Bzdak (ReBB) model developed in Ref.
[2] based on the original papers, Refs. [3, 4], and later improvements, Refs.
[5, 6], describes in a statistically acceptable manner the proton-proton
($pp$) and proton-antiproton ($p\bar{p}$) scattering data in the kinematic
range of $0.546\leq\sqrt{s}\leq 7$ TeV and $0.37\leq-t\leq 1.2$ GeV2. With
these results at hand, we reported an at least 7.08 $\sigma$, discovery level
Odderon effect111The ReBB model is based on R. J. Glauber’s multiple
diffraction theory, so it operates directly on the level of the elastic
scattering amplitude of $pp$ and $p\bar{p}$ collisions. We obtain the C-even
(Pomeron) and C-odd (Odderon) components of the elastic scattering amplitude
as the average and the difference of elastic proton-antiproton and proton-
proton amplitudes. For the details see Appendix C of Ref. [1]. by comparing
the $pp$ and $p\bar{p}$ differential cross sections at the same energies
utilizing model-dependent extrapolations of the differential cross-sections of
elastic $pp$ scattering to $\sqrt{s}$ $=$ 1.96 TeV and elastic $p\bar{p}$
scattering up to the lowest measured energy at LHC, 2.76 TeV. Extrapolating
the $p\bar{p}$ scattering up to 7 TeV, the statistical significance of Odderon
exchange increased to greater than 10 $\sigma$, however, in Ref. [1] this
significance was not quantified more precisely, due to numerical limitations
of CERN’s ROOT, MS Excel, Wolfram Mathematica, and similar data analysis
software tools.
Based on our recently published paper [7], we present here the results of the
extension of the ReBB analysis to the new 8 TeV $pp$ differential cross
section data of TOTEM, Ref. [8]. In this Ref. [7] we also more precisely
quantified the high significance of the Odderon observation by introducing an
analytical approximation scheme (see the Appendix of Ref. [7]).
## 2 ReBB model and Odderon exchange at 8 TeV
As an extension to Ref. [1], in Fig. 1 we show the comparison of the $pp$
differential cross section calculated from the ReBB model — using the energy
calibration of the fit parameters done in Ref. [1] — with the final 8 TeV $pp$
differential cross section data measured by TOTEM and published recently in
Ref. [8]. One can see that the energy-calibrated model, in its validity range,
$0.37\leq-t\leq 1.2$ GeV2, describes the data in a statistically acceptable
manner, with a confidence level of 0.2 %.
The ReBB model thus describes the data at 8 TeV in a limited kinematic region
which is suitable to perform a search for Odderon exchange. As detailed and
utilized recently in Refs. [1, 9, 10] a possible difference between $pp$ and
$p\bar{p}$ measurable quantities at the TeV energy scale theoretically can be
attributed only to the effect of a $t$-channel C-odd Odderon exchange.
The comparison of the $p\bar{p}$ differential cross section calculated from
the ReBB model — using the energy calibration of the fit parameters done in
Ref. [1] — with the 8 TeV $pp$ differential cross section data measured by
TOTEM [8] is shown in Fig. 2, which indicates a difference between the $pp$
and $p\bar{p}$ differential cross sections with a probability of essentially
1, corresponding to a $CL=1-1.111\times 10^{-74}$, i.e., an Odderon
observation with a statistical significance $\geq$18.28 $\sigma$ (for the
details of the significance calculation see the Appendix of Ref. [7].).
The fits and the model-data comparisons are done by utilizing the $\chi^{2}$
definition developed by the PHENIX collaboration. This method is equivalent
with the diagonalization of the covariance matrix if the experimental errors
are separated into three different types: point-to-point fluctuating
uncorrelated statistical and systematic errors (type A), point-to-point
varying and 100% correlated systematic errors (type B), and point-independent,
overall correlated systematic uncertainties (type C). In our study, the
available experimental errors of the analysed data can be and are categorized
into these three types: horizontal and vertical $t$-dependent statistical
errors (type A), horizontal and vertical $t$-dependent systematic errors (type
B), and overall normalization uncertainties (type C).
The PHENIX method is validated by evaluating the $\chi^{2}$ from a full
covariance matrix fit of the $\sqrt{s}$ = 13 TeV TOTEM differential cross-
section data using the Lévy expansion method of Ref. [11]. The PHENIX method
and the fit with the full covariance matrix result in the same minimum within
one standard deviation of the fit parameters. Thus the PHENIX method is a
reasonable choice at energies, where the full covariance matrixes are not
published. The exact form of the $\chi^{2}$ definition 222The $\chi^{2}$
parameters $\epsilon_{B}$ and $\epsilon_{C}$ were considered as fit parameters
in Ref. [1], decreasing NDF. However $\epsilon_{B}$ and $\epsilon_{C}$, in
fact, have a known central value (0) and a known standard deviation (1), hence
they must be considered not only as fit parameters, but also new data points.
Thus in the end they do not affect the NDF. This was done in Ref. [7], but
this correction does not affect the conclusions drawn in Ref. [1]. used in
this analysis with correlation parameters, $\epsilon_{B}$ and $\epsilon_{C}$
resulting from such a classification of measurement errors can be found in
Ref. [1].
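The structure of such a $\chi^{2}$ (coherent shifts of the data by nuisance parameters, penalized as unit-Gaussian "data points") can be sketched as follows; this is a schematic illustration of the A/B/C error classification only, and the exact definition used in the analysis is the one given in Ref. [1].

```python
def phenix_chi2(data, model, sig_a, sig_b, sig_c, eps_b, eps_c):
    """Schematic PHENIX-style chi^2.

    Type-B and type-C errors shift the data coherently via the correlation
    parameters eps_b and eps_c, each penalized by a unit-Gaussian term;
    type-A (uncorrelated) errors stay in the denominator.
    """
    chi2 = eps_b ** 2 + eps_c ** 2  # the two extra 'data points'
    for d, m, sa, sb in zip(data, model, sig_a, sig_b):
        shifted = d + eps_b * sb + eps_c * sig_c * d  # type C is an overall fraction
        chi2 += (shifted - m) ** 2 / sa ** 2
    return chi2

# With eps_b = eps_c = 0 this reduces to a plain uncorrelated chi^2.
val = phenix_chi2([1.0, 2.0], [1.0, 1.5], [0.5, 0.5], [0.1, 0.1], 0.04, 0.0, 0.0)
assert val == (0.5 / 0.5) ** 2
```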
Figure 1: Comparison of the $pp$ differential cross sections from Ref. [7].
The ReBB model calculations for $pp$ are based on Ref. [1] and they agree, at
0.2 % CL, with the recently published TOTEM $pp$ data at $\sqrt{s}=$ 8 TeV
[8].
Figure 2: Comparison of the $p\bar{p}$ differential cross sections from Ref. [7]. The ReBB model calculations for $p\bar{p}$ are based on Ref. [1] and they disagree, at $1.1\times 10^{-72}$ % CL, with the recently published TOTEM $pp$ data at $\sqrt{s}=$ 8 TeV [8].
$\sqrt{s}$ (TeV) | $\chi^{2}$ | $NDF$ | CL | significance ($\sigma$)
---|---|---|---|---
1.96 | 24.283 | 14 | 0.0423 | 2.0
2.76 | 100.347 | 22 | 5.6093$\times 10^{-12}$ | 6.8
7 | 2811.46 | 58 | $<$ 7.2853$\times 10^{-312}$ | $>$ 37.7
8 | 426.553 | 25 | 1.1111$\times 10^{-74}$ | $\geq$ 18.2
Table 1: Summary on Odderon signal observation significances in the ReBB model
analysis from Ref. [7]. The significances higher than 8 $\sigma$ were
calculated by utilizing an analytical approximation scheme, detailed in the
Appendix of the same paper [7].
## 3 ReBB model and Odderon at the TeV energy range
Table 1 summarises all the Odderon signal observation significances in our
ReBB model analysis. The dataset at 7 TeV carries the largest, dominant
Odderon signal, greater than 37.75 $\sigma$. The existence of a significant
Odderon signal is confirmed with the new TOTEM data at 8 TeV, which provides
an also clear-cut, greater than 18.28 $\sigma$ Odderon signal. The
significance of the Odderon signal in the $\sqrt{s}=2.76$ TeV TOTEM data is
$6.8$ $\sigma$. Within the framework of the ReBB model, no statistically
significant Odderon signal is observed from the comparison of the
$\sqrt{s}=1.96$ TeV D0 data with ReBB model extrapolated elastic $pp$
differential cross-sections.
Given that the datasets are independent measurements, we can evaluate their
combined significances step by step, by adding the individual $\chi^{2}$ and
the individual $NDF$ values. Another option for combining the significances is
Stouffer’s method ($i.e.$ by summing the significances and dividing the sum by
the square root of the number of summed significances) as used by TOTEM in
Ref. [10]. As it is detailed in Ref. [7] in Table 2, independently which
method is used, the combination of the results at the two lowest energies,
$i.e.$ 1.96 and 2.76 TeV, gives greater than 6 $\sigma$ significance for the
Odderon exchange, while the combination of the results at $\sqrt{s}=$ 1.96,
2.76, 7 and 8 TeV gives a greater than 30 $\sigma$ significance.
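Stouffer's rule is simple enough to check the quoted combinations directly; a minimal sketch using the per-energy significance lower bounds from Table 1:

```python
import math

def stouffer(significances):
    """Stouffer's method: sum the per-dataset significances (in sigma) and
    divide by the square root of their number."""
    return sum(significances) / math.sqrt(len(significances))

# Lower bounds from Table 1 at 1.96, 2.76, 7, and 8 TeV.
assert stouffer([2.0, 6.8, 37.7, 18.2]) > 30   # combined: greater than 30 sigma

# The two lowest energies alone already exceed 6 sigma.
assert stouffer([2.0, 6.8]) > 6
```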
Fig. 3 shows the total cross-section (with systematic error band) obtained
from the optical theorem using the ReBB model amplitude of Odderon exchange,
as evaluated from the log-linear excitation functions of the model from Ref.
[1]. The result indicates that the total cross-section of the Odderon exchange
is sharply increasing in the few TeV energy range, but it is two orders of
magnitude smaller than the contribution of the Pomeron exchange that is
dominant at the same energy scale, as detailed in Ref. [1].
Figure 3: The Odderon total cross section, determined from the ReBB model in
Ref. [1], indicates a threshold effect for Odderon exchange. The Odderon
contribution to the total cross-section starts to be statistically significant
around 1 TeV.
## 4 Summary
The Real Extended Bialas-Bzdak (ReBB) model describes all the available $pp$
and $p\bar{p}$ differential cross-section data in the kinematic range of
$0.546\leq\sqrt{s}\leq 7$ TeV and $0.37\leq-t\leq 1.2$ GeV2 in a statistically
acceptable manner. The statistical significance of Odderon exchange is greater
than 30 $\sigma$ when the results obtained from $\sqrt{s}=$1.96, 2.76, 7, and
8 TeV are combined. Thus, within the framework of the ReBB model, Odderon
exchange is not a probability, but a certainty at the TeV energy scale.
## 5 Acknowledgments
We thank A. Papa and his team for their kind hospitality and for organizing an
inspiring and useful meeting. Our research has been supported by the Hungarian
NKFIH grant K133046 and the ÚNKP-22-3 New National Excellence Program.
## References
* [1] T. Csörgő and I. Szanyi. Observation of Odderon effects at LHC energies: a real extended Bialas–Bzdak model study. Eur. Phys. J. C, 81(7):611, 2021.
* [2] F. Nemes, T. Csörgő, and M. Csanád. Excitation function of elastic pp scattering from a unitarily extended Bialas–Bzdak model. Int. J. Mod. Phys., A30(14):1550076, 2015.
* [3] A. Bialas and A. Bzdak. Wounded quarks and diquarks in heavy ion collisions. Phys. Lett. B, 649:263–268, 2007. [Erratum: Phys.Lett.B 773, 681–681 (2017)].
* [4] A. Bialas and A. Bzdak. Constituent quark and diquark properties from small angle proton-proton elastic scattering at high energies. Acta Phys. Polon. B, 38:159–168, 2007.
* [5] F. Nemes and T. Csörgő. Detailed Analysis of $p^{+}p$ Elastic Scattering Data in the Quark-Diquark Model of Bialas and Bzdak from $\sqrt{s}=23.5$ GeV to 7 TeV. Int. J. Mod. Phys., A27:1250175, 2012.
* [6] T. Csörgő and F. Nemes. Elastic scattering of protons from $\sqrt{s}=23.5$ GeV to 7 TeV from a generalized Bialas-Bzdak model. Int. J. Mod. Phys., A29:1450019, 2014.
* [7] I. Szanyi and T. Csörgő. The ReBB model and its H(x) scaling version at 8 TeV: Odderon exchange is a certainty. Eur. Phys. J. C, 82(9):827, 2022.
* [8] G. Antchev et al. Characterisation of the dip-bump structure observed in proton–proton elastic scattering at $\sqrt{s}$ = 8 TeV. Eur. Phys. J. C, 82(3):263, 2022.
* [9] T. Csörgő, T. Novák, R. Pasechnik, A. Ster, and I. Szanyi. Evidence of Odderon-exchange from scaling properties of elastic scattering at TeV energies. Eur. Phys. J. C, 81(2):180, 2021.
* [10] V. M. Abazov et al. Odderon Exchange from Elastic Scattering Differences between $pp$ and $p\bar{p}$ Data at 1.96 TeV and from pp Forward Scattering Measurements. Phys. Rev. Lett., 127(6):062003, 2021.
* [11] T. Csörgő, R. Pasechnik, and A. Ster. Odderon and proton substructure from a model-independent Lévy imaging of elastic $pp$ and $p\bar{p}$ collisions. Eur. Phys. J., C79(1):62, 2019.
# Trust and Time Preference: Measuring a Causal Effect in a Random-Assignment
Experiment
Linas Nasvytis
Columbia University, University of Oxford
###### Abstract
Large amounts of evidence suggest that trust levels in a country are an
important determinant of its macroeconomic growth. In this paper, we
investigate one channel through which trust might support economic
performance: through the levels of patience, also known as time preference in
the economics literature. Following Gabaix and Laibson (2017), we first argue
that time preference can be modelled as optimal Bayesian inference based on
noisy signals about the future, so that it is affected by the perceived
certainty of future outcomes. Drawing on neuroscience literature, we argue
that the mechanism linking trust and patience could be facilitated by the
neurotransmitter oxytocin. On the one hand, it is a neural correlate of
trusting behavior. On the other, it has an impact on the brain’s encoding of
prediction error, and could therefore increase the perceived certainty of a
neural representation of a future event. The relationship between trust and
time preference is tested experimentally using the Trust Game. While the paper
does not find a significant effect of trust on time preference or the levels
of certainty, it proposes an experimental design that can successfully
manipulate people’s short-term levels of trust for experimental purposes.
## 1 Introduction
Trust is a fundamental, yet also an elusive concept in economic research. As
noted by Kenneth Arrow, ”virtually every economic transaction has an element
of trust” (Arrow, 1973), which helps explain why the last 30 years have
witnessed a growing interest of research that examines the various ways
through which trust affects economic outcomes.
On the one hand, macroeconomic studies have linked trust to economic
performance of countries, starting with Knack and Keefer (1997). While trust
is a multidimensional variable, many studies have focused on the impact of
generalized trust – the levels of trust that people have in anonymous
individuals within a society (Ho, 2021). On the other hand, the introduction
of the Trust Game in experimental economics by Berg et al. (1995) has led to
an increasing interest in studying the impact of trust on individual
decision-making. Part of the appeal of the Trust Game lies in the simplicity of
measuring trust, which allows researchers to apply the game across different
cultures (Alós-Ferrer and Farolfi, 2019). The game is played by two agents,
Participant A and Participant B. At the beginning, Participant A is endowed
with some amount of money. She can choose to send any fraction of that amount
to Participant B. Before B receives the amount, it is tripled. After receiving
the tripled amount, Participant B decides how much (if any) she wants to
return back to Participant $\mathrm{A}$. While social preferences might play a
role as a confound (Fehr and Schmidt, 1999), the amount sent by Participant
$\mathrm{A}$ is widely used as a measure of trust in an anonymous individual,
while the amount of money returned measures the level of reciprocity (Berg et
al., 1995).
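The payoff structure of the Trust Game described above can be sketched in a few lines of code; the function name and the worked numbers below are our own illustration, not part of the original experimental design.

```python
# Minimal sketch of one Trust Game round: A sends part of an endowment,
# the transfer is tripled before B receives it, and B returns some amount.

def trust_game_round(endowment, sent, returned, multiplier=3):
    """Compute final payoffs (payoff_a, payoff_b) for one round."""
    assert 0 <= sent <= endowment
    received_by_b = multiplier * sent
    assert 0 <= returned <= received_by_b
    payoff_a = endowment - sent + returned
    payoff_b = received_by_b - returned
    return payoff_a, payoff_b

# Example: A sends $5 of a $10 endowment; B returns $7 of the $15 received.
print(trust_game_round(10, 5, 7))  # (12, 8)
```

Note that A profits from sending money only if B returns more than was sent, which is why the amount sent is read as a measure of trust.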
This paper seeks to combine both approaches to studying trust. It is primarily
motivated by recent macroeconomic evidence that trust might be linked to
levels of patience across countries (Falk et al., 2018). We conjecture that
patience might be one of the channels through which trust affects economic
growth. In particular, we hypothesize that higher levels of generalized trust
might reduce the level of present bias. To understand why that might be the
case, it is important to look at time preference as a consequence of noisy
mental encoding of future outcomes. Specifically, we follow the time
discounting model of Gabaix and Laibson (2017). The authors argue that people
discount the future because it contains more noise and uncertainty than the
present. We believe that trust might play a role in time preference, because
trust is fundamentally a belief about future outcomes. Being more trusting
means being more certain about the trustworthiness of others, and thus more
certain about their future actions and the resulting outcomes. We present
evidence from biology and neuroscience on the hormone oxytocin, which supports
the idea that trust might be linked to the perceived certainty of future
outcomes.
To explore the potential relationship between trust and present bias, we
conduct a random-assignment experiment, in which we manipulate subjects’
short-term levels of trust, and then measure their levels of time preference.
Levels of short-term trust are manipulated using the Trust Game. All subjects
are randomly assigned to one of two treatment groups: High Trust or Low Trust.
In each round, subjects are playing against a computer, which they think is
another participant. In the High Trust group, the algorithm uses strategies of
highly trusting and trustworthy individuals. In the Low Trust group, subjects
are playing against an algorithm who is using strategies of highly untrusting
and untrustworthy individuals. All strategies of the algorithm are derived
from real player data in the Trust Game from Fiedler and Haruvy (2017).
Importantly, deception was used only because the experiment could not be
conducted in a classroom setting due to the COVID-19 crisis. Under normal
conditions that allow live gameplay between subjects, the treatment could be
implemented without any deception.
The experiment tests three main hypotheses. First, subjects in the High Trust
group will report significantly higher levels of trust than participants in
the Low Trust group. Second, participants in the High Trust group will exhibit
a lower level of present bias. Third, High Trust subjects will exhibit greater
certainty about future outcomes.
Our results indicate that trust has no significant effect on time preference
or perceived certainty about future outcomes, but that it is possible to
manipulate people’s short-term levels of trust for experimental purposes. The
latter finding opens ample opportunities for researchers to study the causal
effects of trust under experimental conditions.
The next section of this paper will provide an overview of relevant
literature. The third and fourth sections provide details on experimental
design and econometric specifications. The fifth section presents the main
findings. The sixth and final section discusses potential problems of internal
validity in the experiment, and the implications of our results.
## 2 Literature Review
### 2.1 Trust and Time Preference: Why It Matters
To understand why studying the relationship between trust and time preference
might be important, we need to start by reviewing literature on the
relationship between trust and economic growth. Growing empirical evidence
suggests that higher levels of trust are linked to greater macroeconomic
performance of countries. Knack and Keefer (1997) provide evidence for a
positive relationship between trust and economic growth for a sample of 29
market economies, while Zak and Knack (2001) expand the sample to 41
countries, and find the same result. More recently, using data from Global
Preference Survey (GPS) across 76 countries, Falk et al. (2018) provide
evidence for the same positive link between trust and economic growth.
However, the causal connection between trust and economic growth could flow
either way, both ways simultaneously, or even be due to a third confound. That
is why researchers have tried to establish a causal effect of trust on
economic performance. Knack and Keefer (1997) establish this positive causal
relationship by using two instrumental variables for trust - the highest
”ethno linguistic” group in a society, and the fraction of law students as a
percentage of all postsecondary students. However, the authors are open to
admit that other confounds might be present in both of these instruments.
More recently, Algan and Cahuc (2010) investigate the causal effect of trust
on economic growth by focusing on the levels of inherited trust of US
immigrants. For instance, by comparing trust levels of Americans with German
or Italian origin, whose forebears migrated to the US between 1950-1980, the
authors can detect differences in trust of these two origin countries between
1950-1980. Once they obtain the levels of inherited trust at different points
in time, they can estimate the effect of a change in inherited trust levels on
the change in income per capita of the countries of origin, while controlling
for potential confounds. The authors find a significantly positive causal
effect of trust on economic growth. Lastly, Bartling et al. (2018) examine the
causal effects of trust in an experimental setting. The authors adopt a
principal-agent game with multiple equilibria, and find that trust indeed has
a causal effect on the equilibrium levels of efficiency in the game.
While the current literature establishes evidence for a causal effect of trust
on economic performance, it cannot easily identify the precise channels
through which trust could affect economic growth. Knack and Keefer (1997)
conjecture that higher trust could increase the efficiency of contracts,
reduce reliance on formal credit institutions, and improve the quality of
economic policies conducted by governmental institutions. Zak and Knack (2001)
build a general equilibrium model that shows how higher trust could lead to
increased investment owing to the greater reliability of social, economic and
institutional environments. However, very few empirical studies have tried to
establish the validity of any of these conjectures. Even among experimental
studies, only Bartling et al. (2018) find that the institutional environment
appears to be key to whether trust has causal effects and to the persistence
of these effects over time. In simple terms, our current understanding of the
causal effect of trust on economic growth is similar to the way most people
understand physics: we know it works, but we are not entirely sure how.
The first contribution of this paper is the empirical investigation of one
possible channel through which trust affects economic growth – patience.
Several recent studies have found evidence for a positive relationship between
trust and patience. In a sample of 76 countries, Chen (2013) measures patience
as the propensity to save, and finds that individuals who think others are
generally trustworthy are on average $23\%$ more likely to have saved that
year. An even more interesting finding comes from the aforementioned study by
Falk et al. (2018). The authors observe that the relationship between patience
and income per capita is much stronger than the link between trust and income
per capita – in both magnitude and statistical significance. They find that
patience can explain around $40\%$ of the variation in income levels between
countries. Crucially, the authors report that once trust and patience are
included in a joint regression on income levels, trust loses significance.
While the authors do not discuss this finding in detail, it is reasonable to
conclude that trust and patience work through similar channels to affect
income levels. More specifically, trust might work through patience to affect
economic growth, which is precisely the idea that this paper investigates.
Preliminary support for the causal effect of trust on patience comes from a
study by Jachimowicz et al. (2017), which appears to be the only paper on this
topic. The authors conducted an online $2\times 2$ experiment to establish a
causal link between community trust and time discounting among low income
individuals. The design involved manipulating levels of felt income
(low/high), and levels of felt community trust (low/high). The results suggest
that low-income individuals with higher community trust discount the future
less heavily than individuals with lower community trust. While the results
are encouraging, it is important to note that the study focuses on trust in
local community, rather than generalized trust in anonymous individuals. In
addition, the authors only focus on low income individuals, which presents a
shortcoming for the extrapolation of their result to the whole population.
### 2.2 Certainty: The Potential Link Between Trust and Time Preference
The second question we need to ask is whether there is reason to believe that
generalized trust might have a causal effect on time preference. To understand
why that might be the case, we turn to economic research that investigates why
time preference exists in the first place. The
lion's share of the literature on time preference, ranging from the
exponential discounting model introduced by Samuelson (1937) to the quasi-
hyperbolic discounting proposed by Laibson (1997), tries to answer the
question of how we discount the future, but not necessarily why. The very fact
that we speak of impatience as an economic preference hints that we can take
it as given and analyze its implications for decision making. What all these
models have in common is the assumption that people make decisions based on a
correct representation of the rewards they receive at different dates, with
complete and coherent preferences.111This particular sentence is inspired by
the lecture notes of Michael Woodford's "Cognitive Mechanisms and Economic
Behavior" class, taught in Fall 2019 at Columbia University.
More recently, researchers have been exploring the idea that time preference
might not be a preference after all, but rather a cognitive illusion.
Specifically, present bias might be the result of noisy mental encoding of
information about future rewards. In other words, time preference could be a
function of how clear we perceive the future to be compared to the present.
Gabaix and Laibson (2017) propose a model of a perfectly patient Bayesian
decision-maker, who receives noisy, unbiased signals about future events.
Assuming that the noisiness of the future increases with distance from the
present, the agent will act as if she has time preferences, even though she is
simply optimizing based on posterior Bayesian beliefs about the future. In
this model, the discount factor is simply a function of the signal-to-noise
ratio associated with the future outcome: the larger the distance between the
present and the future reward, the noisier we perceive the future reward to
be. Noisiness increases the uncertainty about the future outcome, and
therefore the level of present bias. Using time preference experiments, Khaw
et al. (2017) present empirical support for the idea that outcomes further in
the future are indeed encoded with greater mental imprecision.
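The shrinkage logic behind this "as-if" discounting can be illustrated with a small numerical sketch: a Bayesian agent weights a noisy signal about a future reward by its signal-to-noise ratio, and that weight behaves like a discount factor. The linear noise growth and the variance values below are illustrative assumptions of ours, not parameters from Gabaix and Laibson (2017).

```python
# Sketch: the Bayesian shrinkage weight on an unbiased noisy signal acts
# like a discount factor. Noise variance is assumed to grow linearly with
# the horizon t (an illustrative assumption).

def as_if_discount(prior_var, noise_var):
    """Posterior weight on the signal = implied 'as-if' discount factor."""
    return prior_var / (prior_var + noise_var)

prior_var = 1.0
for t in [0, 1, 4, 8]:          # horizon in arbitrary time units
    noise_var = 0.5 * t         # noisier the further away the outcome is
    print(t, round(as_if_discount(prior_var, noise_var), 3))
```

As the horizon grows, the weight falls toward zero, so the perfectly patient agent behaves exactly as if she discounted the future.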
Evidence from neuroscience suggests that the key link between trust and
certainty could be the neuropeptide hormone oxytocin (OT). The literature on
the relation between oxytocin and trust has evolved over the last 15 years. An
experimental study using the Trust Game by Kosfeld et al. (2005) found
evidence for a causal link between oxytocin and trust, which has since been
disputed due to problems of replication (see the review by Alós-Ferrer and
Farolfi (2019)). What remains relatively convincing is the evidence that being
exposed to the trustworthy behavior of others in the Trust Game is indeed
linked to higher levels of oxytocin (Zak et al., 2005).
The reason oxytocin matters is that evidence suggests it could be linked to
our certainty about the future. Owen et al. (2013) find that oxytocin enhances
cortical information transfer while simultaneously lowering background
activity, thus improving the clarity of signals in the brain and reducing the
background noise of neurons. In the words of the authors, oxytocin increases
the "signal-to-noise ratio" of information transfer, a phrase we have already
encountered in the noisy mental discounting model of Gabaix and Laibson
(2017). Moreover, a review of studies on oxytocin by Eskander et al. (2020)
concludes that "what is now more widely accepted is that oxytocin has an
impact on the brain's encoding of prediction error and therefore its ability
to modify preexisting beliefs". If oxytocin indeed decreases the prediction
error of outcomes, then it is reasonable to consider the idea that oxytocin
could increase our certainty about future outcomes, and therefore affect the
level of our time preference.
The idea that trust could be linked to certainty about the future is visible
in the very definition of trust as the belief in the trustworthiness of
others. As argued by Ho (2021), it is likely that people make the decision on
whether to trust someone, based on their prediction about the future
trustworthiness of that person. At its very core, trust concerns the certainty
about the behavior of others in the future. The second contribution of this
experiment is the examination of a potential link between trust and certainty
about future outcomes, in relation to time preference. To the best of our
knowledge, no research has yet studied the potential link between trust and
certainty.
### 2.3 Experimental Methods on the Causal Effects of Trust
The third contribution of this paper concerns experimental methods. So far,
very few studies have tried to investigate the causal effect of trust in an
experimental setting. We conjecture that one of the underlying reasons for
that might be a lack of a reliable experimental method to manipulate people’s
short-term trust beliefs. Bartling et al. (2018) have adopted a principal-
agent game with multiple equilibria to study the conditions under which trust
has causal effects on the equilibrium levels of efficiency. However, the
experimental game seems to be specifically designed to test these findings,
which makes it challenging to apply the same game to analyze causal effects of
trust on other outcomes more generally.
Jachimowicz et al. (2017) conduct an online $2\times 2$ experiment to
establish a causal link between community trust and time discounting among low
income individuals. The authors manipulate levels of community trust by
increasing the salience of this construct in the minds of respondents. More
specifically, they ask participants to either list 2 (low salience) or 10
(high salience) examples where community trust was justified. As mentioned
before, the shortcoming of this method is twofold: first, the authors
manipulate levels of community trust, which is less widely measured than
generalized trust; second, manipulating people's levels of trust through a
questionnaire might introduce confounds – for example, people could run out of
10 examples justifying trust in anonymous individuals.
This paper contributes to experimental literature by introducing a successful
method to manipulate people’s levels of generalized trust using the Trust
Game. While in this particular experiment the method involves deceiving the
subjects into thinking they are playing against a real participant rather than
an algorithm, it is crucial to emphasize that there is no inherent need for
deception, if such an experiment were conducted in a lab setting. In our case,
deception was used completely out of the need to conduct the experiment online
due to COVID-19 health crisis. This trust-manipulation procedure could be used
in any experiment that examines the causal effect of trust on economic
outcomes, or even in fields like psychology or sociology.
## 3 Experimental Design
### 3.1 Overview
The effect of trust on time preference was examined in an online experiment.
While the initial design called for conducting the experiment in a classroom
at Columbia University, due to the COVID-19 health crisis it was conducted
online, using the survey platform Qualtrics.
The main procedural steps were as follows. First, subjects were given general
instructions about the experiment. Participants were told that all their
answers would be anonymous. They were informed that they would be playing a
game with other participants in the experiment, which would be followed by a
series of questions. They were also instructed about the chance to win a
monetary reward, ranging from $\$25$ to $\$40$. It was noted that the
probability of winning the reward depended on their cumulative payoffs in the
game. Second, the trust-manipulation procedure was administered using the
Trust Game. Third, subjects answered 12 time preference questions. Fourth,
subjects answered 5 questions about their levels of generalized trust. Fifth,
subjects answered 4 questions concerning their levels of certainty about
future outcomes. Sixth, subjects responded to a series of demographic
questions. Finally, subjects were provided a debriefing form about the
experiment. The total duration of this procedure was around 10-15 minutes.
The experiment was conducted in seven different online sessions, which took
place on the same day, April 21, 2020. Each session was separated by two-hour
intervals, the first one starting at 10:00 am, the last one starting at 10:00
pm. Every subject could only register and participate in one of the sessions.
Such a design allowed us to maximize the number of participants by
accommodating different time zones, while preserving the idea of live gameplay
in every session.
### 3.2 Participants
102 participants completed the experiment. They were recruited using social
media. Two days before the experiment, we shared a registration form on
Facebook for everyone who would like to participate in the experiment. The
form described the date and time of the experiment, and asked every
participant to choose one of seven time slots that they would like to
participate in. The choices were 10:00-10:15 am, 12:00-12:15 pm, 4:00-4:15 pm,
6:00-6:15 pm, 8:00-8:15 pm, 10:00-10:15 pm, all in EST time. A day before the
experiment, every subject was sent a reminder email. They were notified that
they would be receiving the link to the experiment one minute before the
beginning of their time slot. For example, if the subject had registered for
10:00-10:15 am session, they would receive the link at 9:59 am. The subjects
were instructed to start the experiment right after receiving the link.
### 3.3 Trust belief-manipulation procedure
Participants' short-term levels of trust were manipulated using the Trust
Game. The setting of the game is as follows. When subjects were playing as
Participant A, they would be endowed with $\$10$, and could choose to send
Participant B any integer amount $x$ $(0\leq x\leq\$10)$, with the hope that B
would return some amount $y$ $(0\leq y\leq 3x)$. When playing as Participant
B, subjects would receive some tripled integer amount and needed to choose how
much (if any) they would like to return to Participant A.
Each participant played the game for 11 rounds. In each round, they were
playing against a computer, which they thought was another participant. In the
High Trust group, the algorithm used the strategies of highly trusting and
trustworthy individuals, derived from Trust Game data in Fiedler and Haruvy
(2017). In the Low Trust group, the algorithm played the strategies of highly
untrusting and untrustworthy individuals from the same dataset. In odd rounds
(6 in total), subjects played as Participant A, while in even rounds (5 in
total), they played as Participant B. Before the game began, subjects also
played one practice round as Participant B. In this round, the algorithm sent
$\$5$ (tripled to $\$15$) in both treatment groups. This amount is the median
sent by Participant A in the experimental data from Fiedler and Haruvy (2017),
and was used to avoid priming effects.
### 3.4 The strategy of the computer as Participant A
For the role of Participant A, the High Trust strategy of the algorithm
involved sending $\$10$, $\$9$, $\$8$ or $\$7$, where $\$10$ is the maximum
amount that can be sent in the game. These amounts were sent at their
respective relative frequencies observed in the data from Fiedler and Haruvy
(2017). In the Low Trust strategy, the amount sent was either $\$3$, $\$2$,
$\$1$ or $\$0$, again following their relative frequencies in the data. We
chose these values for two reasons. The
median amount sent in the data from Fiedler and Haruvy (2017) was $\$5$. We
decided to exclude the median value and the two values right next to it, in
order to create a greater contrast between the two treatments. Four different
values in each treatment should also preserve the diversity of algorithm’s
strategies, which is needed to simulate live gameplay. A detailed account of
these strategies can be found in Appendix Table 1.
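The sampling rule just described can be sketched as follows. The frequency weights below are placeholders (the actual relative frequencies come from Fiedler and Haruvy (2017) and are listed in Appendix Table 1), and the function and variable names are our own.

```python
# Sketch of the algorithm's Participant-A strategy: draw an amount from
# the treatment-specific support at its relative frequency. The weights
# here are illustrative placeholders, not the real frequencies.
import random

STRATEGIES = {
    "high_trust": {"amounts": [10, 9, 8, 7], "weights": [0.4, 0.3, 0.2, 0.1]},
    "low_trust":  {"amounts": [3, 2, 1, 0],  "weights": [0.1, 0.2, 0.3, 0.4]},
}

def computer_sends(treatment, rng=random):
    """Sample the amount the computer sends as Participant A."""
    s = STRATEGIES[treatment]
    return rng.choices(s["amounts"], weights=s["weights"], k=1)[0]

print(computer_sends("high_trust"))  # one of 7, 8, 9, 10
```

Sampling from four values, rather than always sending the same amount, keeps the algorithm's behavior varied enough to resemble live gameplay.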
### 3.5 The strategy of the computer as Participant B
The guiding principle for the algorithm of computer playing as Participant B
is very simple: Whatever amount a real subject sends to the computer, the
expected return will be (weakly) larger if she is in the High Trust treatment,
and (weakly) smaller if she is in the Low Trust treatment. More specifically,
for every integer amount $x$ sent by a real Participant A $(0\leq x\leq\$10)$,
one of three amounts would be randomly returned. These three amounts depend on
the treatment. Suppose the subject is playing as Participant A and sends
$\$5$. In the High Trust treatment, she would be returned $\$7$, $\$8$ or
$\$10$ with equal probability, whereas in the Low Trust treatment, she would
be returned $\$2$, $\$1$ or $\$0$ with equal probability. The three candidate
return amounts are not arbitrary: they are drawn from the top $25\%$ most
(least) reciprocal gameplays in the sample of Fiedler and Haruvy (2017). A
detailed account of these strategies can be found in Appendix Table 2.
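The return rule can be sketched as follows. Only the $x=5$ example given in the text is filled in; the full mapping for every $x$ lives in Appendix Table 2, so the table below is a placeholder illustrating the rule, not the actual data.

```python
# Sketch of the algorithm's Participant-B strategy: for a given amount x
# sent by the real subject, return one of three treatment-specific
# amounts with equal probability. Only the x = 5 example from the text
# is populated here.
import random

RETURN_TABLE = {
    "high_trust": {5: [7, 8, 10]},   # placeholder; full table in Appendix Table 2
    "low_trust":  {5: [2, 1, 0]},
}

def computer_returns(treatment, x, rng=random):
    """Sample the amount the computer returns as Participant B."""
    return rng.choice(RETURN_TABLE[treatment][x])

print(computer_returns("high_trust", 5))  # one of 7, 8, 10
```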
### 3.6 Note on deception
We should note that the treatment procedure involved deceiving the subjects
into thinking they were playing against a real participant, when in fact they
were playing against a computer. That is because, in order to change subjects'
short-term beliefs about trusting anonymous individuals, it is necessary that
they think they are interacting with one. McCabe et al. (2001) have shown that
increased activity in the brain areas usually associated with trusting
behaviour – the medial prefrontal cortex (mPFC) and the temporoparietal
junction (TPJ) – occurs during the Trust Game only when subjects think they
are playing against another person, rather than a computer.
The treatment procedure was carefully designed to abide by the rules that
justify deception in economic experiments, set out in Cooper (2014). Cooper
notes that, among other things, deception is justified when (1) the study
would be prohibitively difficult to conduct without deception, and (2)
subjects are adequately debriefed after the fact about the presence of
deception. With regard to the first point, deception in this experiment was
used only because of the need to conduct the experiment online due to the
COVID-19 health emergency. If the experiment were conducted in a lab setting,
it would be perfectly possible to use a similar treatment without any
deception. For example, all subjects could play the Trust Game for 3 rounds,
after which they would be divided into three groups: the top $25\%$ most
trusting, the top $25\%$ least trusting, and the middle $50\%$. The middle
$50\%$ of participants would then be randomly assigned to play with either the
very trusting or the very untrusting subjects for a number of rounds. This
treatment should produce a very similar outcome to the one used in this
experiment. Crucially, such a method requires live gameplay, which is
extremely difficult to achieve if subjects are conducting an online experiment
at different times, as was the case here. With regard to the second point, at
the end of the experiment, all subjects were provided a debriefing about how
and why deception was used.
### 3.7 Time Preference Questionnaire
After completing 11 rounds of the Trust Game, all subjects were given a time
preference questionnaire. Every subject answered 12 questions of the
following form:
Would you rather be paid:
1. 1.
$\$p$ today
2. 2.
$\$m$ in $t$ days
Three values of $m$ ($\$25$, $\$30$, $\$40$) and four values of $t$ (1 week, 2
weeks, 4 weeks, 8 weeks) were used. Values of $p$ were calculated based on
discount rates observed in previous experiments with identical time periods in
Ifcher and Zarghamee (2011). All 12 pairs of binary choices are presented in
Table 1.
$p$ (present value of future payment) as a function of future value $m$ and time distance $t$ |
---|---
 | $t$ (time period from today)
$m$ (future value) | 1 week | 2 weeks | 4 weeks | 8 weeks
$\$25$ | $\$21$ | $\$21$ | $\$19$ | $\$19$
$\$30$ | $\$26$ | $\$26$ | $\$23$ | $\$23$
$\$40$ | $\$34$ | $\$34$ | $\$31$ | $\$31$
Table 1: Present ($p$) and future values ($m$) of payments for different time
distances ($t$). Note: data on discount rates, from which the present values
of future payments were derived, were adopted from Ifcher and Zarghamee (2011).
It is important to note that for every value of the future payment $m$, the
present value $p$ is identical for $t\in\{\text{1 week, 2 weeks}\}$ and for
$t\in\{\text{4 weeks, 8 weeks}\}$. This was used to determine the pair of $p$
and $m$ that makes a subject indifferent within each of the two time frames.
The point of indifference was used to determine the discount rate for each
subject. To avoid order effects, the sequence of all 12 questions, and the
order of answers within each question, were randomized for each participant.
In each of the 12 questions, the binary choices were presented in a way that
maximizes the variance of responses among subjects with different degrees of
time preference. In other words, if all subjects were presented with two
choices that overestimate their time preference, say $\$40$ today or $\$20$
tomorrow, we would not find much difference between the two treatment groups.
A similar situation would arise if subjects were presented with two choices
that underestimate their time preference, say $\$40$ today or $\$39$ in 8
weeks. That is why we derived discount rates for each $m$ and $t$ using data
from Ifcher and Zarghamee (2011).
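The mapping from an indifference pair to a discount rate can be sketched numerically. The exponential form $p = m\,e^{-rt}$ below is our own illustrative assumption; the paper does not specify which discounting functional form was used.

```python
# Sketch: solve p = m * exp(-r * t) for the implied weekly discount rate r,
# given an indifference pair (p today vs. m in t weeks). The exponential
# form is an illustrative assumption.
import math

def implied_weekly_rate(p, m, t_weeks):
    """Weekly discount rate at which $p today equals $m in t_weeks weeks."""
    return math.log(m / p) / t_weeks

# Example from Table 1: a subject indifferent between $21 today
# and $25 in 1 week.
print(round(implied_weekly_rate(21, 25, 1), 4))
```

A lower implied rate for a given pair corresponds to a more patient subject, which is the quantity the two treatment groups are compared on.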
### 3.8 Equalizing transaction costs
At the very beginning of the questionnaire, participants were informed about
the reward-claim process, in case they were chosen as the winner. The process
was designed to equalize transaction costs and uncertainty associated with the
payment. For example, if subjects believed that taking the reward at a later
time would require some additional effort, then relative transaction costs of
taking the money today would be lower, potentially affecting their responses
to time preference questions. That is why they were instructed that the online
payment would be made automatically, regardless of the date of the payment.
### 3.9 Potential impact of COVID-19 on time preference
The potential impact of the COVID-19 crisis on time preference was also taken
into account. In particular, we considered two potential effects. First, the
crisis is a health emergency that forces people across countries to stay at
home. If we compare it to crises caused by natural disasters, evidence from
Callen (2015) suggests that people should become more patient in response.
Intuitively, when people are confined to their homes, they might be more
willing to postpone present consumption, because there is less choice of what
to consume. However, if we consider the COVID-19 situation as an economic
crisis, evidence from Jetter et al. (2020) suggests that worsening economic
conditions (e.g. higher unemployment) make people less patient. Because these
two effects have opposing directions, we assumed that on average they would
cancel each other out.
### 3.10 Trust Level Questionnaire
After completing the time preference questionnaire, subjects were asked five
questions to test the success of the trust-manipulation procedure. Each
question tested a different aspect of trust in anonymous individuals. Subjects
had to respond on a sliding scale ranging from $-50$ to $50$. The five
questions are as follows:
1. 1.
How much do you agree with the following statement: In general, you can trust
people.
2. 2.
How much do you agree with the following statement: Nowadays, you can’t rely
on anybody.
3. 3.
How much do you agree with the following statement: When dealing with
strangers, it’s better to be cautious before trusting them.
4. 4.
How much do you trust strangers you meet for the first time?
5. 5.
Imagine you lost your wallet with your money, identification or address in
your city/area and it was found by a stranger. How likely do you think your
wallet would be returned to you?
Questions 1-4 were adopted from Naef and Schupp (2009), while Question 5 was
adopted from Helliwell and Wang (2010). To avoid order effects, the sequence
of these five questions was randomized. In Questions 1-3, five labels were
provided as cues on the slider scale: disagree strongly (-50), disagree
somewhat (-25), neutral (0), agree somewhat (25), agree strongly (50). In the
analysis of the results, the responses to Questions 2 and 3 were multiplied by $-1$. In
Question 4, four labels were provided as guidelines: I don’t trust them at all
(-50), I trust them very little (-25), I trust them quite a bit (25), I trust
them a lot (50). In Question 5, four labels were again provided as cues: Not
likely at all (-50), Not very likely (-25), Fairly likely (25), Very likely
(50).
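The reverse-coding step can be sketched as follows. The reverse-coding of Questions 2 and 3 is stated in the text; the averaging into a single composite score is our own assumption for illustration.

```python
# Sketch of scoring the five trust questions: slider responses run from
# -50 to 50, Questions 2 and 3 are reverse-coded (multiplied by -1), and
# the adjusted values are averaged into a composite score (the averaging
# step is an assumption).

REVERSE_CODED = {2, 3}  # question numbers whose scale is inverted

def trust_score(responses):
    """responses: dict mapping question number (1-5) to slider value."""
    adjusted = [(-v if q in REVERSE_CODED else v) for q, v in responses.items()]
    return sum(adjusted) / len(adjusted)

print(trust_score({1: 30, 2: -20, 3: -10, 4: 15, 5: 25}))  # 20.0
```

After reverse-coding, a higher value on every question consistently means more trust, so the composite is directly comparable across treatment groups.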
We should note that in surveys such as the General Social Survey (GSS) or the
World Values Survey (WVS), trust is measured by a person's binary agreement
with the statement: "Generally speaking, would you say that most people can be
trusted or that you can't be too careful in dealing with people?" However,
Naef and
Schupp (2009) present evidence that trust in strangers can be more accurately
measured by breaking down this statement into several questions.
### 3.11 Certainty questionnaire
Lastly, subjects were given four questions designed to measure their level of
certainty about future outcomes. We conjectured that, due to the COVID-19
crisis, any question concerning people’s certainty about the upcoming year
might reflect numerous confounds, including political beliefs about how well
the crisis is being handled by national governments. Therefore, subjects were
asked two sets of questions about the more distant future, formulated in the
same manner:
1. Do you agree with the following statement: "In $t$ years, I will be better off than I am right now"
2. How certain are you about your response?
### 3.12 Demographic questionnaire
At the end of the survey, subjects were asked a series of demographic
questions, which concerned their age, ethnicity, gender, education level,
college major, and practice of religion. They were also provided the
opportunity to leave an email address, in order to be considered for the
lottery.
### 3.13 Debriefing form
After completing the experiment, subjects were provided with a debriefing
form. The form detailed what was measured in each stage, and emphasized that
the subjects were playing against a computer in the trust game part.
## 4 Econometric Specifications
### 4.1 Measuring the Success of the Treatment procedure
In analyzing the effect of playing the Trust Game on subjects’ trust levels,
we consider the fixed effects regression model of the form:
$Trust_{i}=\beta H_{i}+\alpha_{i}+\sum_{K}^{N}\gamma_{K}I_{K}(k)+\epsilon_{i}$
(1)
where $Trust_{i}$ is the measured level of trust in question
$i\in\{1,\dots,5\}$. Regression analysis is possible here because, in each of
the five questions, trust was measured as a continuous variable, with
${Trust_{i}}\in[-50,50]$. $H_{i}$ is the dummy for being in the High Trust
treatment for question $i$. The question-specific intercept is denoted by
$\alpha_{i}$, and the question-specific error term by $\epsilon_{i}$. The model
also includes demographic controls, where $I_{K}(k)$ is an indicator function
that takes the value 1 if the subject belongs to demographic category $K$. The
number of demographic categories is given by the interval $[K,N]$ (in all
regressions, the demographic categories are gender, age, ethnicity, education,
college major, and religious practice).
We chose a fixed effects regression model for the following reasons. As
explained in the experimental design section, each of the five questions in the
trust level questionnaire measures a different aspect of trust in anonymous
individuals. In addition, they are formulated in rather different ways:
Questions (1)-(3) ask subjects to state their level of agreement with a given
statement, Question (4) is a direct question, while Question (5) requires
subjects to estimate a probability. This led us to expect that each of these
questions might have a different average response, which we confirmed by
testing the difference in mean responses. Therefore, we want to allow each
question its own intercept when estimating the average effect of the High Trust
treatment on trust levels across these five questions. That is precisely the
purpose of a fixed-effects regression model. We use OLS with robust standard
errors to estimate Equation 1.
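The structure of Equation (1) — stacked question-level responses, a treatment dummy, and question fixed effects as dummies — can be sketched with plain OLS on synthetic data. This is an illustration under stated assumptions, not the authors' estimation code: the data are simulated, demographic controls are omitted, and standard errors (the paper uses robust ones) are not computed here.

```python
import numpy as np

# Hedged sketch of Equation (1) on synthetic data: 100 subjects x 5 questions
# in long format, with question fixed effects as dummy columns and a
# subject-level treatment dummy H. Demographic controls and robust standard
# errors, which the paper uses, are omitted for brevity.

rng = np.random.default_rng(0)
n_subj, n_q = 100, 5
H = rng.integers(0, 2, n_subj)               # treatment assignment per subject
alpha = np.array([15.0, 20.0, -15.0, 5.0, 7.0])  # question-specific intercepts
beta_true = 6.0                              # treatment effect used to simulate

# Long format: one row per (subject, question) pair
h_long = np.repeat(H, n_q)
q_long = np.tile(np.arange(n_q), n_subj)
y = beta_true * h_long + alpha[q_long] + rng.normal(0, 10, n_subj * n_q)

# Design matrix: treatment dummy + five question dummies (no global intercept,
# so each question keeps its own intercept -- the fixed effects)
X = np.column_stack([h_long] + [(q_long == j).astype(float) for j in range(n_q)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_hat = coef[0]  # estimate of the treatment effect
```

In practice one would use a regression package (e.g. statsmodels with a heteroskedasticity-robust covariance) rather than raw least squares, but the design matrix above is the essence of the fixed-effects specification.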
### 4.2 Measuring the Effect of Trust on Time Preference
To measure the effect of High Trust treatment on subjects’ time preference, we
used a regression model of the following form:
$D=\beta H+\sum_{K}^{N}\gamma_{K}I_{K}(k)+\epsilon$ (2)
where $H$ is the dummy for being in the High Trust treatment and $D$ is the
discount rate. The model also includes demographic controls, where $I_{K}(k)$
is an indicator function that takes the value 1 if the subject belongs to
demographic category $K$. The number of demographic categories is given by the
interval $[K,N]$. We use OLS with robust standard errors to estimate
Equation 2.
Our approach to estimating time preference is a standard exponential
discounting model (Frederick et al., 2002), originally introduced by Samuelson
(1937). It follows a literature of similar methods, more recently applied by
Benjamin et al. (2016), Reuben et al. (2015), and Burks et al. (2009). Using
our time preference questionnaire, we can determine the amount $x$ at which the
subject is indifferent between receiving $x$ now and receiving $p$ after $t$
weeks, where $p\in\{\$25,\$30,\$40\}$. Indifference implies that:
$u(x)=D^{t}u(p)$ (3)
If we assume that utility is approximately linear, then taking the log of both
sides yields:
$\log x-\log p=t\log D$ (4)
Each subject was given 12 questions to measure their time preference. These
questions formed 6 blocks, which enabled us to estimate 6 values of the
discount rate for the same individual. In each block, subjects were asked to
choose between receiving an amount today and an amount in the future. The two
questions in a block offer the same combination of amounts and differ only in
the time distance $t$. For every block, $x$ is the amount at which the
individual switched to the present-day payment and $t$ is the time delay in
weeks.
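Under linear utility, Equation (4) can be solved directly for the weekly discount factor: $\log x - \log p = t\log D$ gives $D = (x/p)^{1/t}$. The sketch below illustrates this; the function name and the example amounts are our own, not taken from the questionnaire.

```python
# Hedged sketch: recovering a per-block discount factor from the switching
# point, following Equation (4) with approximately linear utility.
# x: present amount at which the subject switched; p: delayed amount;
# t: delay in weeks. Function name and example values are illustrative.

def weekly_discount_factor(x, p, t):
    """Solve log x - log p = t log D for D, i.e. D = (x / p) ** (1 / t)."""
    return (x / p) ** (1.0 / t)


# Example: indifferent between $20 now and $25 in 4 weeks
D = weekly_discount_factor(20, 25, 4)  # (0.8) ** 0.25, roughly 0.946
```

Averaging the six block-level values of $D$ per subject yields the dependent variable used in columns 1-3 of Table 4, while the six separate values feed columns 4-6.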
### 4.3 Measuring the Effect of Trust on Certainty about Future Outcomes
To estimate the effect of High Trust treatment on subjects’ certainty about
future outcomes, we used a regression of the following form:
$Certainty=\beta H+\sum_{K}^{N}\gamma_{K}I_{K}(k)+\epsilon$ (5)
where $H$ is the dummy for being in the High Trust treatment and $Certainty$ is
a measure of subjects’ certainty about future outcomes, with
$Certainty\in[0,100]$. The model also includes the same demographic controls
used in the other regression models. We use OLS with robust standard errors to
estimate Equation 5.
## 5 Results
### 5.1 Demographic Characteristics of Subjects
Table 2: Demographic statistics
Table 2 presents demographic characteristics of the subjects in the experiment.
We find a significant difference between the two treatment groups in the values
of two demographic characteristics. Therefore, even though the assignment to a
treatment group was completely random during the experiment, we cannot conclude
that it is completely random ex post. First, the number of students who are
pursuing or have pursued a Bachelor’s degree in Economics is significantly
different (a two-sided t-test yields $p=0.0043$): 3 subjects in the Low Trust
treatment, and 14 subjects in the High Trust treatment. Moreover, the number of
students who are pursuing or have pursued a Bachelor’s degree in Psychology is
significantly different (a two-sided t-test yields $p=0.0233$): 7 subjects in
the High Trust treatment, and 1 subject in the Low Trust treatment. These two
groups matter primarily because they might have had prior exposure to the Trust
game, which might have had an effect on the success of the treatment. However,
we have controlled for these variables in our regression models, so they should
not affect the validity of our results. As a precaution, in the analysis that
follows, we have also included a regression with two interaction variables: BA
in Economics × High Trust treatment, and BA in Psychology × High Trust
treatment.
### 5.2 High Trust Treatment Increases Trust Levels
We find evidence that the High Trust treatment significantly increases trust
levels; the results are presented in Table 3 below. This effect is stable
regardless of specification. In column 1, we find that the effect of the High
Trust treatment on trust levels holds without controlling for demographic
variables. On average, playing against the High Trust algorithm instead of the
Low Trust algorithm during the trust game increases trust levels by around
$4.5$ points on a scale from $-50$ to 50. The result is significant at the
$5\%$ level. In columns 2-4, controls are added for gender, age, ethnicity,
education, religious practice, and college major. When demographic controls are
added, the treatment becomes significant at the $1\%$ level, increasing trust
levels by around 6 points.
Table 3: Estimates from equation 1
In column 3, the analysis of column 2 is repeated, adding the interaction
variables BA in Economics × High Trust treatment and BA in Psychology × High
Trust treatment. These variables are included because the number of
participants who are pursuing or have pursued a Bachelor’s degree in Economics
or Psychology differs significantly between the two treatment groups. We can
see a slight increase in both the coefficient and the standard error of the
High Trust treatment dummy. This effect might be due to the interaction term BA
in Psychology × High Trust treatment being significant at the $1\%$ level.
Finally, in column 4, the regression of column 3 is repeated, but the
observations of four subjects are excluded. These four subjects had explicitly
indicated, in the feedback form at the end of the experiment, their awareness
of playing against a computer in the treatment procedure. As we have discussed
earlier, evidence suggests that people evoke their trust beliefs in the Trust
game more strongly if they believe they are playing against a real individual.
With the restricted sample, the effect of the treatment increases further. We
can also see that the effect of the treatment is significant on every
individual question in the trust questionnaire, except for Question 3.
We also find a significant difference between mean responses to each of the
five questions, taking both treatment groups together. In Questions (1)-(5),
the mean trust level is respectively $15.8,20.2,-15.7,5.0,7.3$.
In summary, the first finding of this experiment is that High Trust treatment
significantly increases trust levels, compared to the Low Trust treatment.
This effect is robust to a number of econometric-specification checks.
### 5.3 There is No Effect of High Trust treatment on Time Preference
We find no significant evidence that the High Trust treatment affects time
preference. Out of the six regressions presented in Table 4, we find a
significantly negative effect of the High Trust treatment on time preference in
only one, which does not control for demographic variables. Once demographic
variables are included, the significance disappears. Even though statistically
insignificant, the results are somewhat surprising: in all six regressions, the
coefficient of the High Trust treatment is negative.
Table 4: Estimates from equation 2
In columns 1-3, time preference is measured as the average of the six discount
rates we calculated for every individual. Column 1 presents regression results
that do not control for demographic variables. The effect of the High Trust
treatment is negative, but insignificant at the $5\%$ level. In column 2, the
analysis of column 1 is repeated, adding demographic controls. The effect
remains insignificant and negative. The model in column 3 adds two interaction
variables, BA in Economics × High Trust treatment and BA in Psychology × High
Trust treatment. The effect of the High Trust treatment on time preference
remains insignificant. However, the interaction term BA in Psychology × High
Trust treatment is significant at the $5\%$ level.
In columns 4-6, time preference for each individual is measured as six
different discount rates, calculated for each combination of monetary amount
and time horizon. For this reason, the number of observations increases from
$N=102$ to $N=612$. Column 4 presents regression results that do not control
for demographic variables. The effect of the High Trust treatment is negative
and significant at the $5\%$ level. There are two reasons why we do not draw
any conclusions from this result. First, once demographic controls are included
in columns 5-6, the significance disappears. Evidence suggests that time
preference tends to vary with demographic characteristics (Harrison et al.,
2002), and we have already shown that our random assignment process did not
produce completely random samples from a demographic point of view. For this
reason, including demographic variables should yield more accurate results.
Moreover, it is important to look at the adjusted R-squared value in column 4,
which is $0.005$. Both in absolute terms and compared to the regressions that
include demographic variables in columns 5 and 6, this value is very small:
the High Trust treatment explains around $0.5\%$ of the variation in time
preference, whereas once demographic variables are included, the model explains
over $10\%$ of the same variation. In column 5, the analysis of column 4 is
repeated, adding demographic controls. The effect remains insignificant and
negative. Column 6 adds two interaction variables, BA in Economics × High Trust
treatment and BA in Psychology × High Trust treatment. The effect of the High
Trust treatment on time preference remains insignificant.
### 5.4 There is No Effect of High Trust Treatment on Certainty about the
Future
We find no significant evidence that the High Trust treatment affects certainty
about future outcomes, as presented in Table 5. Even though the results are
insignificant, it is somewhat surprising that both coefficients are negative.
In both columns, levels of certainty are measured as individual responses to
each of the two certainty questions. For every subject, we therefore have 2
observations, leading to $N=204$. In column 1, the regression does not include
demographic variables. Adding demographic controls in column 2 does not make
the results significant, but it raises the adjusted R-squared from $0.000$ to
$0.125$. As we will discuss in the next section, even if this result were
significant, it would be difficult to draw strong conclusions due to the
potential impact of COVID-19 on the levels of certainty.
Table 5: Estimates from equation 5
## 6 Discussion
Our research presents two main findings. First, trust has no significant effect
on time preference or levels of certainty. Second, it is possible to manipulate
people’s short-term levels of trust for experimental purposes. The first
finding sheds light on the relationship between trust and patience. Given that
both variables seem to have an interconnected relationship with economic
growth, it helps us understand how trust and patience affect each other, which
has not been studied much in the literature. More specifically, if trust has
little causal effect on patience, it is likely that patience has an effect on
trust, or that both are related to a third confound. The second finding opens
the door for further investigations into the causal effects of trust on
economic outcomes, a topic that is integral to our understanding of the
determinants of economic growth, yet has been studied in very few experiments.
We would like to turn to several important shortcomings of this experiment.
### 6.1 Internal Validity of the Experiment
#### 6.1.1 Measuring time preference
First of all, there are potential shortcomings in analysing the level of time
preference in this experiment. The main problem here is a possible confounding
variable - the effect of COVID-19 health crisis on subjects’ time preference.
As we have discussed in the section on experimental design, time preference
questions were constructed in a way that would maximize the difference of
answers from subjects with distinct levels of discounting. However, the choice
design used discount rates that people exhibit in normal conditions, rather
than in a situation of a severe health and economic crisis. We have conjectured
two potential effects of the COVID-19 emergency on people’s discounting rates:
individuals might become more patient, because there are fewer opportunities
to spend the reward today compared to the future, or they might become less
patient, because worsening economic conditions usually push people to consume
more today and save less for tomorrow. Because these mechanisms have
contrasting effects on time preference, we have assumed that on average, they
will cancel out. However, one of them might have been stronger than the other,
or unequally distributed across the two treatment groups.
Moreover, because participants were recruited through a personal social media
account, our sample likely suffers from selection bias. Most of the subjects
were either direct friends of the author, friends of friends, or other students
at Columbia University. This is problematic not only because the sample is
unrepresentative. Feedback after the experiment suggests that a few friends
might have chosen to take a lower monetary reward today over a higher one in
the future, not out of a personal preference, but because they thought the
experiment was funded by the author himself and they wanted to minimize his
expenses. While that is a very considerate gesture from one perspective, it
might have had a problematic effect on the time preference data. If subjects in
the High Trust treatment were more likely to behave in this way (increased
trust could trigger other prosocial preferences), this could be one of many
explanations for why the coefficient of the High Trust treatment on time
preference was negative, even though insignificant.
Lastly, there is one validity issue that concerns the time preference survey
itself. The results of the experiment indicate that the mean discount rate for
the Low Trust treatment was slightly higher than that of the High Trust
treatment. However, the opposite is true for the variance of responses:
subjects in the High Trust treatment had a higher variance in their binary
responses. It is true that discount rates are slightly higher in the Low Trust
group because its subjects chose one type of payment more often. At the same
time, this might reflect another trend: Low Trust subjects might have been more
consistent in their choices because they were actually less patient and wanted
to get through the 12 time preference questions as quickly as possible. For the
purpose of speed, a rule of thumb of choosing the same response in each
question might have been quite useful, yet it might also be falsely interpreted
as consistency associated with a higher level of patience. Feedback from one
participant suggests this might have been the case for some subjects.
#### 6.1.2 Measuring certainty
Even if we had obtained significant results about the effects of the treatment
on people’s certainty about future outcomes, it would be very difficult to draw
strong conclusions from them. The main reason is again the COVID-19 situation,
which might have had a profound impact on levels of certainty about the future.
Moreover, certainty levels at the moment might reflect political biases as much
as personal beliefs, since, at least in the United States, some of the
divergence in how people perceive this emergency reflects the level of
political support for the current administration of the US government (see, for
example, D. Roberts, "Partisanship is the strongest predictor of coronavirus
response," Vox, 31 March 2020).
Moreover, the experiment could be strengthened by a more rigorous procedure for
testing the certainty levels themselves. It would be interesting to test two
interrelated variables: the accuracy of predictions of an outcome, as well as
the associated sense of confidence in those predictions. Meyniel et al. (2015)
introduced an experimental design for testing precisely these two variables. In
their experiment, subjects estimate transition probabilities between two visual
or auditory stimuli in a changing environment, and report their mean estimate
and their confidence in it (for the suggestion of this study, I am indebted to
Arthur Prat-Carrabin from Columbia University’s Cognition and Decision Lab). We
therefore invite researchers to explore the relationship between trust and
certainty further.
#### 6.1.3 Treatment: trust manipulation procedure
There are also several potential issues concerning the internal validity of the
treatment procedure. The most important problem is that some subjects were
aware they were playing with a computer instead of a real person. As we have
discussed before, evidence suggests that trust beliefs are only active during
the Trust game when subjects think they are playing against another person
rather than a computer. Therefore, the true effect of the High Trust treatment
on trust levels is potentially even stronger than what we measured. What is
important to emphasize again is that in a lab setting, there would be no
inherent need to use deception; it was used only because the experiment had to
be conducted online.
### 6.2 Implications: Moving Forward
We would like to end by emphasizing three main takeaways. First of all, most of
the internal validity problems discussed above could be solved without
increasing the budget of the experiment or its sample size. Instead, it would
suffice to conduct the experiment in a lab with anonymous subjects, at a time
when the current public health crisis is largely resolved. For this reason, we
believe it would be worthwhile to replicate the experiment in the future, when
such conditions can be met. Second, the success of the trust-manipulation
treatment procedure in this experiment opens ample opportunities for further
research in the economics of trust. Understanding the causal relationship
between trust and other economic outcomes might shed light on the precise
mechanisms by which trust affects economic growth, which are not yet clear.
More broadly, the treatment could be useful for any experiment that examines
the causal effects of trust, even outside the field of economics, in areas like
psychology or sociology. As we have discussed before, the treatment outlined in
this experiment requires no deception to change people’s short-term trust
beliefs, provided it is conducted in a lab setting.
Above all, we hope that this paper will serve as an invitation for others to
investigate the numerous interesting links between trust, certainty, time
preference, and economic growth.
## 7 Appendix
Table 6: Strategies of the Algorithm as Participant A
Table 7: Strategies of the Algorithm as Participant B
## References
* Algan and Cahuc (2010) Algan, Y., and Cahuc, P. (2010). Inherited trust and growth. _American Economic Review_ , _100_(5), 2060–92.
* Alós-Ferrer and Farolfi (2019) Alós-Ferrer, C., and Farolfi, F. (2019). Trust games and beyond. _Frontiers in neuroscience_ , 887.
* Arrow (1973) Arrow, K. J. (1973). Information and economic behavior.
* Bartling et al. (2018) Bartling, B., Fehr, E., Huffman, D., and Netzer, N. (2018). The causal effect of trust.
* Benjamin et al. (2016) Benjamin, D. J., Choi, J. J., and Fisher, G. (2016). Religious identity and economic behavior. _Review of Economics and Statistics_ , _98_(4), 617–637.
* Berg et al. (1995) Berg, J., Dickhaut, J., and McCabe, K. (1995). Trust, reciprocity, and social history. _Games and economic behavior_ , _10_(1), 122–142.
* Burks et al. (2009) Burks, S. V., Carpenter, J. P., Goette, L., and Rustichini, A. (2009). Cognitive skills affect economic preferences, strategic behavior, and job attachment. _Proceedings of the National Academy of Sciences_ , _106_(19), 7745–7750.
* Callen (2015) Callen, M. (2015). Catastrophes and time preference: Evidence from the indian ocean earthquake. _Journal of Economic Behavior & Organization_, _118_ , 199–214.
* Chen (2013) Chen, M. K. (2013). The effect of language on economic behavior: Evidence from savings rates, health behaviors, and retirement assets. _American Economic Review_ , _103_(2), 690–731.
* Cooper (2014) Cooper, D. J. (2014). A note on deception in economic experiments. _Journal of Wine Economics_ , _9_(2), 111–114.
* Eskander et al. (2020) Eskander, E., Sanders, N., and Nam, C. S. (2020). Neural correlates and mechanisms of trust. In _Neuroergonomics_ (pp. 451–461). Springer.
* Falk et al. (2018) Falk, A., Becker, A., Dohmen, T., Enke, B., Huffman, D., and Sunde, U. (2018). Global evidence on economic preferences. _The Quarterly Journal of Economics_ , _133_(4), 1645–1692.
* Fehr and Schmidt (1999) Fehr, E., and Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. _The quarterly journal of economics_ , _114_(3), 817–868.
* Fiedler and Haruvy (2017) Fiedler, M., and Haruvy, E. (2017). The effect of third party intervention in the trust game. _Journal of Behavioral and Experimental Economics_ , _67_ , 65–74.
* Gabaix and Laibson (2017) Gabaix, X., and Laibson, D. (2017). _Myopia and discounting_ (Tech. Rep.). National bureau of economic research.
* Harrison et al. (2002) Harrison, D. A., Price, K. H., Gavin, J. H., and Florey, A. T. (2002). Time, teams, and task performance: Changing effects of surface-and deep-level diversity on group functioning. _Academy of management journal_ , _45_(5), 1029–1045.
* Helliwell and Wang (2010) Helliwell, J. F., and Wang, S. (2010). _Trust and well-being_ (Tech. Rep.). National Bureau of Economic Research.
* Ho (2021) Ho, B. (2021). _Why trust matters: An economist’s guide to the ties that bind us_. Columbia University Press.
* Ifcher and Zarghamee (2011) Ifcher, J., and Zarghamee, H. (2011). Happiness and time preference: The effect of positive affect in a random-assignment experiment. _American Economic Review_ , _101_(7), 3109–29.
* Jachimowicz et al. (2017) Jachimowicz, J. M., Chafik, S., Munrat, S., Prabhu, J. C., and Weber, E. U. (2017). Community trust reduces myopic decisions of low-income individuals. _Proceedings of the National Academy of Sciences_ , _114_(21), 5401–5406.
* Jetter et al. (2020) Jetter, M., Magnusson, L. M., and Roth, S. (2020). Becoming sensitive: Males’ risk and time preferences after the 2008 financial crisis. _European Economic Review_ , _128_ , 103512.
* Khaw et al. (2017) Khaw, M. W., Li, Z., and Woodford, M. (2017). _Risk aversion as a perceptual bias_ (Tech. Rep.). National Bureau of Economic Research.
* Knack and Keefer (1997) Knack, S., and Keefer, P. (1997). Does social capital have an economic payoff? a cross-country investigation. _The Quarterly journal of economics_ , _112_(4), 1251–1288.
* Kosfeld et al. (2005) Kosfeld, M., Heinrichs, M., Zak, P. J., Fischbacher, U., and Fehr, E. (2005). Oxytocin increases trust in humans. _Nature_ , _435_(7042), 673–676.
* Laibson (1997) Laibson, D. (1997). Golden eggs and hyperbolic discounting. _The Quarterly Journal of Economics_ , _112_(2), 443–478.
* McCabe et al. (2001) McCabe, D. L., Treviño, L. K., and Butterfield, K. D. (2001). Cheating in academic institutions: A decade of research. _Ethics & Behavior_, _11_(3), 219–232.
* Meyniel et al. (2015) Meyniel, F., Sigman, M., and Mainen, Z. F. (2015). Confidence as bayesian probability: From neural origins to behavior. _Neuron_ , _88_(1), 78–92.
* Naef and Schupp (2009) Naef, M., and Schupp, J. (2009). Measuring trust: Experiments and surveys in contrast and combination.
* Owen et al. (2013) Owen, S. F., Tuncdemir, S. N., Bader, P. L., Tirko, N. N., Fishell, G., and Tsien, R. W. (2013). Oxytocin enhances hippocampal spike transmission by modulating fast-spiking interneurons. _Nature_ , _500_(7463), 458–462.
* Reuben et al. (2015) Reuben, E., Sapienza, P., and Zingales, L. (2015). Procrastination and impatience. _Journal of Behavioral and Experimental Economics_ , _58_ , 63–76. https://doi.org/10.1016/j.socec.2015.07.005
* Samuelson (1937) Samuelson, P. A. (1937). A note on measurement of utility. _The review of economic studies_ , _4_(2), 155–161.
* Zak and Knack (2001) Zak, P. J., and Knack, S. (2001). Trust and growth. _The economic journal_ , _111_(470), 295–321.
* Zak et al. (2005) Zak, P. J., Kurzban, R., and Matzner, W. T. (2005). Oxytocin is associated with human trustworthiness. _Hormones and behavior_ , _48_(5), 522–527.
# Self-Emphasizing Network for Continuous Sign Language Recognition
Lianyu Hu, Liqing Gao, Zekang Liu, Wei Feng (corresponding author)
###### Abstract
Hand and face play an important role in expressing sign language. Their
features are usually especially leveraged to improve system performance.
However, to effectively extract visual representations and capture
trajectories for hands and face, previous methods always come at high
computations with increased training complexity. They usually employ extra
heavy pose-estimation networks to locate human body keypoints or rely on
additional pre-extracted heatmaps for supervision. To relieve this problem, we
propose a self-emphasizing network (SEN) to emphasize informative spatial
regions in a self-motivated way, with few extra computations and without
additional expensive supervision. Specifically, SEN first employs a
lightweight subnetwork to incorporate local spatial-temporal features to
identify informative regions, and then dynamically augment original features
via attention maps. It’s also observed that not all frames contribute equally
to recognition. We present a temporal self-emphasizing module to adaptively
emphasize those discriminative frames and suppress redundant ones. A
comprehensive comparison with previous methods equipped with hand and face
features demonstrates the superiority of our method, even though they always
require huge computations and rely on expensive extra supervision. Remarkably,
with few extra computations, SEN achieves new state-of-the-art accuracy on
four large-scale datasets, PHOENIX14, PHOENIX14-T, CSL-Daily, and CSL.
Visualizations verify the effects of SEN on emphasizing informative spatial
and temporal features. Code is available at
https://github.com/hulianyuyy/SEN_CSLR
## Introduction
Sign language is one of the most commonly used communication tools for the
deaf community in their daily life. It mainly conveys information through both
manual components (hand/arm gestures) and non-manual components (facial
expressions, head movements, and body postures) (Dreuw et al. 2007; Ong and
Ranganath 2005). However, mastering this language is rather difficult and
time-consuming for hearing people, thus hindering direct communication between
the two groups. To relieve this problem, isolated sign language recognition
tries to classify a video segment into an independent gloss (a gloss is the
atomic lexical unit used to annotate sign languages). Continuous sign language
recognition (CSLR) goes further by sequentially translating image streams into
a series of glosses to express a complete sentence, and is thus more promising
for bridging the communication gap.
Figure 1: Visualization of class activation maps with Grad-CAM (Selvaraju et
al. 2017) for VAC (Min et al. 2021) (baseline). Top: Original frames. Bottom:
activation maps. It’s observed that without extra supervision, it fails to
locate discriminative face and hand regions precisely.
In sign language, the left hand, right hand, and face play the most important
roles in expressing glosses. Mostly, they convey information through
horizontal/vertical hand movements, finger activities, and static gestures,
assisted by facial expressions and mouth shapes to holistically deliver
messages (Dreuw et al. 2007; Ong and Ranganath 2005). As a result, the hands
and face are always especially leveraged and incorporated in sign language
systems. In isolated sign language recognition, early methods (Freeman and
gestures and motion of both hands. Recent methods either choose to build a
pure pose-based system (Tunga, Nuthalapati, and Wachs 2021; Hu et al. 2021)
based on detected keypoints for both hands and face, or construct appearance-
based systems (Hu, Zhou, and Li 2021; Boukhayma, Bem, and Torr 2019) with
cropped patches for hands and face as collaborative inputs. In CSLR, CNN-LSTM-
HMM (Koller et al. 2019) builds a multi-stream (hands and face) Hidden-Markov-
Model (HMM) to integrate multiple visual inputs to boost recognition accuracy.
STMC (Zhou et al. 2020) explicitly inserts a pose-estimation network and uses
the detected regions (hand and face) as multiple cues to perform recognition.
More recently, C2SLR (Zuo and Mak 2022) leverages the pre-extracted pose
keypoints heatmaps as additional supervision to guide models to focus on hand
and face areas.
Although incorporating hand and face features has proven effective for
improving the recognition performance of sign language systems, previous
methods usually come at the cost of heavy computation and increased training
complexity, relying on additional pose-estimation networks or extra expensive
supervision (e.g., heatmaps). However, without these supervision signals, we find that current methods
(Min et al. 2021; Hao, Min, and Chen 2021; Cheng et al. 2020) in CSLR fail to
precisely locate the hand and face regions (Fig. 1), thus unable to
effectively leverage these features. To more effectively excavate these key
cues but avoid introducing huge computations or relying on expensive
supervision, we propose a self-emphasizing network (SEN) to explicitly
emphasize informative spatial regions in a self-motivated way. Specifically,
SEN first employs a lightweight subnetwork to incorporate local spatial-
temporal features to identify informative regions, and then dynamically
emphasizes or suppresses input features via attention maps.
It’s also observed that not all frames contribute equally to recognition. For
example, frames with hand/arm movements of the signer are usually more
important than those transitional frames. We present a temporal self-
emphasizing module to emphasize those discriminative frames and suppress
redundant ones dynamically. Remarkably, SEN yields new state-of-the-art
accuracy on four large-scale CSLR datasets, especially outperforming
previous methods equipped with hand and face features, even though those
methods incur heavy computation and rely on expensive supervision.
Visualizations verify the effects of SEN in emphasizing spatial and temporal features.
## Related Work
### Continuous Sign Language Recognition
Sign language recognition methods can be roughly categorized into isolated
sign language recognition (Tunga, Nuthalapati, and Wachs 2021; Hu et al. 2021;
Hu, Zhou, and Li 2021) and continuous sign language recognition (Pu, Zhou, and
Li 2019; Cheng et al. 2020; Cui, Liu, and Zhang 2019; Niu and Mak 2020; Min et
al. 2021) (CSLR), and we focus on the latter in this paper. CSLR tries to
translate image frames into corresponding glosses in a weakly-supervised way:
only sentence-level label is provided. Early methods in CSLR usually depend on
hand-crafted features (Gao et al. 2004; Freeman and Roth 1995) to provide
visual information, especially body gestures, hands, and face, or rely on HMM-
based systems (Koller et al. 2016; Han, Awad, and Sutherland 2009; Koller,
Zargaran, and Ney 2017; Koller, Forster, and Ney 2015) to perform temporal
modeling and then translate sentences step by step. The HMM-based methods
typically first employ a feature extractor to capture visual representations
and then adopt an HMM to perform long-term temporal modeling. The recent
success of convolutional neural networks (CNNs) and recurrent neural networks
brings huge progress for CSLR. The widely-used CTC loss (Graves et al. 2006)
enables end-to-end training for recent methods by aligning target glosses with
inputs.
Especially, recent methods pay close attention to hands and face. For
example, CNN-LSTM-HMM (Koller et al. 2019) employs a multi-stream HMM
(including hands and face) to integrate multiple visual inputs to improve
recognition accuracy. STMC (Zhou et al. 2020) utilizes a pose-estimation
network to estimate human body keypoints and then sends cropped patches
(including hands and face) for integration. More recently, C2SLR (Zuo and Mak
2022) leverages the pre-extracted pose keypoints as supervision to guide the
model. Despite high accuracy, they incur huge additional computation and
training complexity.
Practically, recent methods (Pu, Zhou, and Li 2019; Pu et al. 2020; Cheng et
al. 2020; Cui, Liu, and Zhang 2019; Niu and Mak 2020; Min et al. 2021) usually
first employ a feature extractor to capture frame-wise visual representations
for each frame, and then adopt 1D CNN and BiLSTM to perform short-term and
long-term temporal modeling, respectively. However, several methods (Pu, Zhou,
and Li 2019; Cui, Liu, and Zhang 2019) found that under such conditions the
feature extractor is not well trained, and proposed an iterative training
strategy to refine it, at the cost of much more computation. More recent
methods try to directly enhance the feature extractor by adding visual
alignment losses (Min et al. 2021) or adopt pseudo label (Cheng et al. 2020;
Hao, Min, and Chen 2021) for supervision. We propose the self-emphasizing
network to emphasize informative spatial features, which can be viewed as
enhancing the feature extractor in a self-motivated way.
### Spatial Attention
Spatial attention has been proven to be effective in many fields including
image classification (Cao et al. 2019; Hu et al. 2018; Woo et al. 2018; Hu,
Shen, and Sun 2018), scene segmentation (Fu et al. 2019) and video
classification (Wang et al. 2018). SENet (Hu, Shen, and Sun 2018), CBAM (Woo
et al. 2018), SKNet (Li et al. 2019) and ECA-Net (Wang et al. 2020) devise
lightweight channel attention modules for image classification. The widely
used self-attention operator (Wang et al. 2018) employs dot-product feature
similarities to build attention maps and aggregate long-term dependencies.
However, the computational complexity of the self-attention operator is
quadratic in the number of incorporated pixels, incurring a heavy burden for
video-based tasks (Wang et al. 2018). Instead of feature similarities, our SEN employs a
learnable subnetwork to aggregate local spatial-temporal representations and
generates spatial attention maps for each frame, much more lightweight than
self-attention operators. Some works also propose to leverage external
supervision to guide the spatial attention module. For example, GALA (Linsley
et al. 2018) collects click maps from games to supervise the spatial attention
for image classification. A relation-guided spatial attention module (Li et
al. 2020) is designed to explore the discriminative regions globally for
Video-Based Person Re-Identification. MGAN (Pang et al. 2019) introduces an
attention network to emphasize visible pedestrian regions by modulating full
body features. In contrast to external supervision, our self-emphasizing
network strengthens informative spatial regions in a self-motivated way, thus
greatly lowering required computations and training complexity.
## Method
### Framework Overview
As shown in fig. 2, the backbone of CSLR models consists of a feature
extractor (a 2D CNN; we only consider 2D-CNN feature extractors here, because
recent findings (Adaloglou et al. 2021; Zuo and Mak 2022) show that 3D CNNs
cannot provide gloss boundaries as precise as those of 2D CNNs, leading to
lower accuracy), a 1D CNN, a BiLSTM, and a classifier (a fully connected layer) to
perform prediction. Given a sign language video with $T$ input frames
$x=\\{x_{t}\\}_{t=1}^{T}\in\mathcal{R}^{T\times 3\times H_{0}\times W_{0}}$, a
CSLR model aims to translate the input video into a series of glosses
$y=\\{y_{i}\\}_{i=1}^{N}$ to express a sentence, with $N$ denoting the length
of the label sequence. Specifically, the feature extractor first processes
input frames into frame-wise features
$v=\\{v_{t}\\}_{t=1}^{T}\in\mathcal{R}^{T\times d}$. Then the 1D CNN and
BiLSTM perform short-term and long-term temporal modeling based on these
extracted visual representations, respectively. Finally, the classifier
employs widely-used CTC loss to predict the probability of target gloss
sequence $p(y|x)$.
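The pipeline described above can be sketched in PyTorch. This is a minimal illustration, not the paper's exact configuration: the frame CNN is a stand-in for ResNet18, and the layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class CSLRBackbone(nn.Module):
    # Sketch of the standard CSLR pipeline: a 2D CNN extracts frame-wise
    # features, a 1D CNN and a BiLSTM perform short- and long-term temporal
    # modeling, and a classifier produces per-frame gloss logits for CTC.
    def __init__(self, num_glosses, d=512, hidden=1024):
        super().__init__()
        self.frame_cnn = nn.Sequential(  # stand-in for ResNet18
            nn.Conv2d(3, d, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.tconv = nn.Conv1d(d, d, kernel_size=5, padding=2)  # 1D CNN
        self.bilstm = nn.LSTM(d, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_glosses + 1)  # +1: CTC blank

    def forward(self, x):  # x: (B, T, 3, H, W)
        B, T = x.shape[:2]
        v = self.frame_cnn(x.flatten(0, 1)).view(B, T, -1)  # frame-wise features
        v = self.tconv(v.transpose(1, 2)).transpose(1, 2)   # short-term modeling
        h, _ = self.bilstm(v)                               # long-term modeling
        return self.classifier(h)                           # per-frame gloss logits
```

The resulting logits would then be passed through a log-softmax and fed to `nn.CTCLoss` to compute $p(y|x)$ against the gloss labels.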
To emphasize the informative spatial and temporal features for CSLR models, we
present a spatial self-emphasizing module (SSEM) and a temporal self-
emphasizing module (TSEM). Specifically, we incorporate them into the feature
extractor to operate on each frame. Fig. 2 shows an example of a common
feature extractor consisting of multiple stages with several blocks in each.
We sequentially place the SSEM and TSEM before the $3\times 3$ spatial
convolution in each block to emphasize informative spatial and temporal
features, respectively. When designing the architecture, efficiency is our
core consideration, to avoid heavy computational burdens like previous methods
(Zhou et al. 2020; Zuo and Mak 2022) based on heavy pose-estimation networks
or expensive heatmaps. We next introduce our SSEM and TSEM, respectively.
Figure 2: An overview of our SEN. It first employs a feature extractor (2D
CNN) to capture frame-wise features, and then adopts a 1D CNN and a BiLSTM to
perform short-term and long-term temporal modeling, respectively, followed by
a classifier to predict sentences. We place our proposed spatial self-
emphasizing module (SSEM) and temporal self-emphasizing module (TSEM) into
each block of the feature extractor to emphasize the spatial and temporal
features, respectively.
### Spatial Self-Emphasizing Module (SSEM)
From fig. 1, we argue that current CSLR models fail to effectively leverage
informative spatial features, e.g., hands and face. We try to enhance the
capacity of the feature extractor of CSLR models to incorporate such
discriminative features without affecting its original spatial modeling
ability. Practically, our SSEM is designed to first leverage the closely
correlated local spatial-temporal features to identify the informative regions
for each frame, and then augment original representations in the form of
attention maps.
As shown in fig. 3, SSEM first projects the input features
$s=\\{s_{t}\\}_{t=1}^{T}\in\mathcal{R}^{T\times C\times H\times W}$ into
$s_{r}\in\mathcal{R}^{T\times C/r\times H\times W}$ to decrease the
computational costs brought by SSEM, with $r$ the reduction factor as 16 by
default.
Figure 3: Illustration for our spatial self-emphasizing module (SSEM).
The frame-wise features $s$ in the feature extractor are independently
extracted for each frame by 2D convolutions, failing to incorporate local
spatial-temporal features to distinguish the informative spatial regions.
Besides, as the signer moves his/her arms and hands to express glosses,
the informative regions in adjacent frames are often misaligned. Thus, we
devise a multi-scale architecture to perceive spatial-temporal features in a
large neighborhood to help identify informative regions.
Instead of a large spatial-temporal convolution kernel, we employ $N$ parallel
factorized branches with group-wise convolutions of progressive dilation rates
to lower computations and increase the model capacity. As shown in fig. 3,
these $N$ branches own the same spatial-temporal kernel size $K_{t}\times
K_{s}\times K_{s}$, with different spatial dilation rates $[1\cdots N]$.
Features from different branches are multiplied with learnable factors
$\\{\sigma_{1}\dots\sigma_{N}\\}$ to control the importance of each
branch via gradient-based backpropagation, and are then added to mix
information from different receptive fields. This multi-scale architecture is
expressed as:
$s_{m}=\sum_{i=1}^{N}{\sigma_{i}\times{\rm Conv}_{i}(s_{r})}$ (1)
where the group-wise convolution ${\rm Conv}_{i}$ at different levels captures
spatial-temporal features from different receptive fields, with dilation rate
$(1,i,i)$.
Especially, as the channels are downsized by $r$ times in SSEM and we employ
group-wise convolutions with small spatial-temporal kernels to capture multi-
scale features, the overall architecture is rather lightweight with few
(<0.1%) extra computations compared to the original model, as demonstrated in
our ablative experiments.
Next, $s_{m}$ is sent into a $1\times 1\times 1$ convolution to project
channels back into $C$, and then passed through a sigmoid activation function
to generate attention maps $M_{s}\in\mathcal{R}^{T\times C\times H\times W}$
with values ranging between $[0,1]$ as:
$M_{s}={\rm Sigmoid}({\rm Conv}_{1\times 1\times 1}(s_{m}))$ (2)
Finally, the attention maps $M_{s}$ are used to emphasize informative spatial
regions for input features. To avoid hurting original representations and
degrading accuracy, we propose to emphasize input features via a residual way
as:
$u=(M_{s}-0.5\times\mathds{1})\odot s+s$ (3)
where $\odot$ denotes element-wise multiplication and $u$ is the output.
In specific, we first subtract $0.5\times\mathds{1}$ from the attention maps
$M_{s}$, with $\mathds{1}\in\mathcal{R}^{T\times C\times H\times W}$ denoting
an all-one matrix, to change the range of values in $M_{s}$ into $[-0.5,0.5]$.
Then we element-wisely multiply the resulting attention maps with input
features $s$ to dynamically emphasize the informative regions and suppress
unnecessary areas. Here, values in $M_{s}$ larger than 0.5 strengthen the
corresponding input features, while smaller values weaken them. Finally, we
add the modulated features to the input features $s$ to
emphasize or suppress certain spatial features, but avoid hurting original
representations.
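Putting Eqs. (1)-(3) together, SSEM can be sketched in PyTorch as follows. This is a hedged reconstruction from the text: the padding scheme, the tensor layout (`Conv3d` expects channels before time), and the initialization of the branch weights $\sigma_i$ are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class SSEM(nn.Module):
    # Spatial self-emphasizing module: N parallel group-wise 3D convolutions
    # with progressive spatial dilation rates build per-frame attention maps,
    # which are applied residually to the input features (Eqs. 1-3).
    def __init__(self, channels, r=16, N=3, Kt=9, Ks=3):
        super().__init__()
        c = channels // r  # channel reduction keeps the module lightweight
        self.reduce = nn.Conv3d(channels, c, kernel_size=1)
        self.branches = nn.ModuleList([
            nn.Conv3d(c, c, (Kt, Ks, Ks),
                      padding=(Kt // 2, i * (Ks // 2), i * (Ks // 2)),
                      dilation=(1, i, i), groups=c)
            for i in range(1, N + 1)])
        self.sigma = nn.Parameter(torch.ones(N))  # learnable branch weights
        self.expand = nn.Conv3d(c, channels, kernel_size=1)

    def forward(self, s):  # s: (B, C, T, H, W)
        s_r = self.reduce(s)
        s_m = sum(w * b(s_r) for w, b in zip(self.sigma, self.branches))  # Eq. 1
        M = torch.sigmoid(self.expand(s_m))                               # Eq. 2
        return (M - 0.5) * s + s                                          # Eq. 3
```

Setting `groups=c` makes each branch a depthwise ("group-wise") convolution, which matches the paper's stated goal of keeping the extra FLOPs negligible.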
Figure 4: Illustration for our temporal self-emphasizing module (TSEM).
### Temporal Self-Emphasizing Module
We argue that not all frames in a video contribute equally to recognition,
where some frames are more discriminative than others. For example, frames in
which the signer moves his/her arms to express a sign are usually more
important than those transitional frames or idle frames with meaningless
contents. However, the feature extractor only employs 2D spatial convolutions
to capture spatial features for each frame, equally treating frames without
considering their temporal correlations. We propose a temporal self-
emphasizing module (TSEM) to adaptively emphasize discriminative frames and
suppress redundant ones.
As shown in fig. 4, input features $u\in\mathcal{R}^{T\times C\times H\times
W}$ first undergo a global average pooling layer to eliminate the spatial
dimension, i.e., $H$ and $W$. Then these features pass through a convolution
with kernel size of 1 to reduce channels by $r$ times into
$u_{r}\in\mathcal{R}^{T\times C/r}$ as:
$u_{r}={\rm Conv}_{K=1}({\rm AvgPool}(u))$ (4)
where $K$ denotes the kernel size. To better exploit local temporal movements
for identifying discriminative frames, we leverage a temporal difference
operator to incorporate motion information between adjacent frames.
Specifically, we calculate the difference between adjacent frames of $u_{r}$
as approximate motion information, and then concatenate it with the
appearance features $u_{r}$ as:
$u_{m}={\rm Concat}([u_{r},u_{r}(t+1)-u_{r}])$ (5)
Next, we send $u_{m}$ into a 1D temporal convolution with kernel size of
$P_{t}$ to capture the short-term temporal information. As the size of $u_{m}$
is rather small, we here employ a normal temporal convolution instead of a
multi-scale architecture. The features then undergo a convolution with kernel
size of 1 to project channels back into $C$, and pass through a sigmoid
activation function to generate attention maps $M_{t}\in\mathcal{R}^{T\times
C}$ as:
$M_{t}={\rm Sigmoid}({\rm Conv}_{K=1}(u_{m}))$ (6)
Finally, we employ $M_{t}$ to emphasize the discriminative features of input
$u$ in a residual way as:
$o=(M_{t}-0.5\times\mathds{1})\odot u+u$ (7)
where $\odot$ denotes element-wise multiplication and $o$ is the output.
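Analogously, Eqs. (4)-(7) suggest the following sketch of TSEM. This is again a hedged reconstruction; in particular, the handling of the last frame in the temporal difference (zero-padded here) is our assumption.

```python
import torch
import torch.nn as nn

class TSEM(nn.Module):
    # Temporal self-emphasizing module: spatially pooled per-frame features,
    # concatenated with adjacent-frame differences, drive a temporal attention
    # map that is applied residually to the input (Eqs. 4-7).
    def __init__(self, channels, r=16, Pt=9):
        super().__init__()
        c = channels // r
        self.reduce = nn.Conv1d(channels, c, kernel_size=1)          # Eq. 4
        self.temporal = nn.Conv1d(2 * c, 2 * c, Pt, padding=Pt // 2)
        self.expand = nn.Conv1d(2 * c, channels, kernel_size=1)      # Eq. 6

    def forward(self, u):  # u: (B, C, T, H, W)
        B, C, T, H, W = u.shape
        u_r = self.reduce(u.mean(dim=(3, 4)))                        # (B, C/r, T)
        # Approximate motion: adjacent-frame differences, zero-padded at the end.
        diff = torch.cat([u_r[:, :, 1:] - u_r[:, :, :-1],
                          torch.zeros_like(u_r[:, :, :1])], dim=2)
        u_m = torch.cat([u_r, diff], dim=1)                          # Eq. 5
        M = torch.sigmoid(self.expand(self.temporal(u_m)))           # (B, C, T)
        return (M.view(B, C, T, 1, 1) - 0.5) * u + u                 # Eq. 7
```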
## Experiments
### Experimental Setup
#### Datasets.
PHOENIX14 (Koller, Forster, and Ney 2015) and PHOENIX14-T (Camgoz et al. 2018)
are both recorded from a German weather forecast broadcast in front of a clean
background with a resolution of 210 $\times$ 260. They contain 6841/8247
sentences with a vocabulary of 1295/1085 signs, divided into 5672/7096
training samples, 540/519 development (Dev) samples, and 629/642 testing
(Test) samples.
CSL-Daily (Zhou et al. 2021) is recorded indoors with 20654 sentences, divided
into 18401 training samples, 1077 development (Dev) samples and 1176 testing
(Test) samples.
CSL (Huang et al. 2018) is collected in a laboratory environment from fifty
signers, with a vocabulary size of 178 and 100 sentences. It contains 25000
videos, divided into training and testing sets at a ratio of 8:2.
#### Training details.
We adopt ResNet18 (He et al. 2016) as the 2D CNN with ImageNet (Deng et al.
2009) pretrained weights. We place SSEM and TSEM before the second convolution
in each block. The 1D CNN consists of a sequence of {K5, P2, K5, P2} layers
where $K$ and $P$ denotes a 1D convolutional layer and a pooling layer with
kernel size of 5 and 2, respectively. We then adopt a two-layer BiLSTM with
1024 hidden states and a fully connected layer for prediction. We train our
model for 80 epochs with initial learning rate 0.0001 decayed by 5 after 40
and 60 epochs. Adam optimizer is adopted with weight decay 0.001 and batch
size 2. All frames are first resized to 256$\times$256 and then randomly
cropped to 224$\times$224, with 50% horizontal flip and $\pm$20% random
temporal scaling during training. During inference, a central 224$\times$224
crop is simply selected. We use VE and VA losses from VAC (Min et al. 2021)
for extra supervision.
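The $\pm$20% random temporal scaling can be implemented, for instance, by resampling frame indices. The exact resampling scheme is not specified in the paper, so the nearest-index interpolation below is just one plausible reading.

```python
import random

def temporal_scale(frames, max_ratio=0.2):
    # Randomly stretch or shrink a frame sequence by up to +/- max_ratio
    # via nearest-index resampling (one interpretation of the paper's
    # "random temporal scaling" augmentation).
    T = len(frames)
    new_T = max(1, round(T * random.uniform(1 - max_ratio, 1 + max_ratio)))
    idx = [min(T - 1, int(i * T / new_T)) for i in range(new_T)]
    return [frames[i] for i in idx]
```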
#### Evaluation Metric.
We use Word Error Rate (WER) as the evaluation metric, defined as the minimum
number of substitution, insertion, and deletion operations required to convert
the predicted sentence into the reference sentence, normalized by the
reference length, as:
$\rm WER=\frac{\\#sub+\\#ins+\\#del}{\\#reference}.$ (8)
Note that a lower WER indicates better accuracy.
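Eq. (8) is the standard word-level edit distance; a minimal reference implementation (not the official evaluation script of the benchmarks) could be:

```python
def wer(reference, hypothesis):
    """Word error rate: (#sub + #ins + #del) / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance via dynamic programming over one rolling row.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,               # deletion
                      d[j - 1] + 1,           # insertion
                      prev_diag + (r != h))   # substitution (or match)
            prev_diag, d[j] = d[j], cur
    return d[-1] / len(ref)
```

For example, `wer("heute regen nord", "heute nord")` counts one deletion over three reference glosses, giving 1/3.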
Configurations | FLOPs | Dev(%) | Test(%)
---|---|---|---
- | 3.64G | 21.2 | 22.3
$K_{t}$=9, $K_{s}$=3, $N$=1 | +0.4M | 20.5 | 22.0
$K_{t}$=9, $K_{s}$=3, $N$=2 | +0.6M | 20.2 | 21.8
$K_{t}$=9, $K_{s}$=3, $N$=3 | +0.8M | 19.9 | 21.4
$K_{t}$=9, $K_{s}$=3, $N$=4 | +1.0M | 20.2 | 21.7
$K_{t}$=7, $K_{s}$=3, $N$=3 | +0.7M | 20.2 | 21.6
$K_{t}$=11, $K_{s}$=3, $N$=3 | +1.0M | 20.3 | 21.8
$K_{t}$=9, $K_{s}$=7, $N$=1 | +2.9M | 20.5 | 22.0
Table 1: Ablations for the multi-scale architecture of SSEM on the PHOENIX14 dataset.
Configurations | Dev(%) | Test(%)
---|---|---
- | 21.2 | 22.3
$M_{s}\odot s$ | 22.3 | 23.4
$M_{s}\odot s+s$ | 20.6 | 21.7
$(M_{s}-0.5\times\mathds{1})\odot s$ | 20.2 | 21.5
$(M_{s}-0.5\times\mathds{1})\odot s+s$ | 19.9 | 21.4
Table 2: Ablations for the implementations of SSEM to augment input features on the PHOENIX14 dataset.
Configurations | Dev(%) | Test(%)
---|---|---
- | 19.9 | 21.4
$u_{r}$ | 19.8 | 21.2
${\rm Concat}([u_{r},u_{r}(t+1)-u_{r}])$ | 19.5 | 21.0
$P_{t}$ = 7 | 19.6 | 21.2
$P_{t}$ = 9 | 19.5 | 21.0
$P_{t}$ = 11 | 19.7 | 21.3
Table 3: Ablations for TSEM on the PHOENIX14 dataset.
Configurations | Dev(%) | Test(%)
---|---|---
- | 21.2 | 22.3
SSEM | 19.9 | 21.4
TSEM | 20.5 | 21.7
SSEM + TSEM | 19.8 | 21.4
TSEM + SSEM | 19.6 | 21.2
Parallelled | 19.5 | 21.0
Table 4: Ablations for the effectiveness of SSEM and TSEM on the PHOENIX14 dataset.
Methods | Dev(%) | Test(%)
---|---|---
- | 21.2 | 22.3
w/ SENet (Hu, Shen, and Sun 2018) | 20.7 | 21.6
w/ CBAM (Woo et al. 2018) | 20.5 | 21.3
CNN+HMM+LSTM (Koller et al. 2019) | 26.0 | 26.0
STMC (Zhou et al. 2020) | 21.1 | 20.7
C2SLR (Zuo and Mak 2022) | 20.5 | 20.4
SEN | 19.5 | 21.0
Table 5: Comparison with other methods of channel attention or hand and face
features on the PHOENIX14 dataset.
Figure 5: Visualizations of class
activation maps by Grad-CAM (Selvaraju et al. 2017). Top: raw frames; Middle:
class activation maps of our baseline; Bottom: class activation maps of our
SEN. Our baseline usually focuses on nowhere or only attends to a single hand
or face. Our SEN could generally focus on the human body (light yellow areas)
and pays special attention to informative regions like hands and face (dark
red areas).
### Ablation Study
We perform ablation studies on the PHOENIX14 dataset and report on both
development (Dev) and testing (Test) sets.
#### Effects of the multi-scale architecture of SSEM.
Tab. 1 ablates the implementations for the multi-scale architecture of SSEM.
Our baseline achieves 21.2% and 22.3% WER on the Dev and Test sets. When
fixing $K_{t}$=9, $K_{s}$=3 and varying the number of branches to expand the
spatial receptive field, we observe that a larger $N$ consistently brings
better performance up to $N$=3; beyond that, no further gain is obtained. We
set $N$ to 3 by default and test the effects of $K_{t}$. One can see that either
increasing $K_{t}$ to 11 or decreasing $K_{t}$ to 7 achieves worse
performance. We thus adopt $K_{t}$ as 9 by default. Notably, one can find SSEM
brings few extra computations compared to our baseline. For example, the best-
performing SSEM with $K_{t}$=9, $K_{s}$=3 and $N$=3 adds only 0.8M (<0.1%)
extra FLOPs, which is negligible compared to the 3.64G FLOPs of our baseline
model. Finally, we compare our proposed multi-scale architecture with a normal
implementation that requires more computations. The receptive field of SSEM with
$K_{t}$=9, $K_{s}$=3 and $N$=3 is identical to a normal convolution with
$K_{t}$=9 and $K_{s}$=7. As shown in the bottom of tab. 1, a normal
convolution not only brings more computations than SSEM, but also performs
worse, verifying the effectiveness of our architecture.
#### Implementations of SSEM to augment input features.
Tab. 2 ablates the implementations of SSEM to augment original features. It's
first observed that directly multiplying the attention maps $M_{s}$ with input
features $s$ severely degrades performance, which we attribute to destroying
the input feature distribution. Implemented in a residual way by adding $s$,
$M_{s}\odot s+s$ notably relieves this phenomenon and achieves +0.6% &
+0.6% on the Dev and Test sets. Further, we first subtract
$0.5\times\mathds{1}$ from the attention maps $M_{s}$ to emphasize or suppress
certain positions, and then element-wisely multiply it with $s$. This
implementation brings a +1.0% & +0.8% performance boost. Finally, we update this
implementation in a residual way by adding input features $s$ as
$(M_{s}-0.5\times\mathds{1})\odot s+s$, achieving notable performance boost by
+1.3% & +0.9%.
#### Study on TSEM.
Tab. 3 ablates the configurations for TSEM. We here adopt SSEM as our baseline
and ablate the configurations for TSEM. It’s first noticed that combining
motion information by concatenating $u_{r}(t+1)-u_{r}$ with $u_{r}$ slightly
outperforms only using $u_{r}$ to capture short-term temporal dependencies,
verifying the effectiveness of local motion information. Next, when varying
$P_{t}$, it’s observed $P_{t}$=9 achieves the best performance among
$P_{t}$=[7,9,11], which is adopted by default in the following.
#### Study on the effectiveness of SSEM and TSEM.
Tab. 4 studies how to combine SSEM with TSEM. We first notice that using only
SSEM or TSEM already brings a notable performance boost, by +1.3% & +0.9%
and +0.7% & +0.6% on the Dev and Test sets, respectively. When further
combining SSEM with TSEM by sequentially placing SSEM before TSEM (SSEM+TSEM),
placing TSEM before SSEM (TSEM+SSEM), or paralleling SSEM and TSEM, we observe
that the parallel configuration performs best, with a +1.7% & +1.3%
performance boost on the Dev and Test sets, respectively, and adopt it as the
default setting.
#### Comparison with other methods.
We compare our SEN with related well-known channel attention methods like
SENet (Hu, Shen, and Sun 2018) and CBAM (Woo et al. 2018), and previous CSLR
methods equipped with hand and face features by extra pose-estimation networks
or pre-extracted heatmaps. In the upper part of tab. 5, one can see SEN
largely outperforms these channel attention methods, for its superior ability
to emphasize informative hand and face features. In the bottom part of tab. 5,
it’s observed SEN greatly surpasses previous CSLR methods equipped with hand
and face features, even though they employ extra heavy networks or expensive
supervision. These results verify the effectiveness of our SEN in leveraging
hand and face features.
### Visualizations
#### Visualization for SSEM.
We sample a few frames for expressing a gloss and plot the class activation
maps for our baseline and SEN with Grad-CAM (Selvaraju et al. 2017) in fig. 5.
The activation maps generated by our baseline usually focus on nowhere or only
attend to a single hand or face, failing to fully focus on the informative
regions (e.g., hands and face). Instead, our SEN could generally focus on the
human body (light yellow areas), and pays special attention to those
discriminative regions like hands and face (dark red areas). These
visualizations show that, without additional expensive supervision, our SEN
can still effectively leverage informative spatial features in a self-
motivated way.
Methods | Backbone | PHOENIX14 Dev del/ins | PHOENIX14 Dev WER(%) | PHOENIX14 Test del/ins | PHOENIX14 Test WER(%) | PHOENIX14-T Dev WER(%) | PHOENIX14-T Test WER(%)
---|---|---|---|---|---|---|---
Align-iOpt (Pu, Zhou, and Li 2019) | 3D-ResNet | 12.6/2 | 37.1 | 13.0/2.5 | 36.7 | - | -
Re-Sign (Koller, Zargaran, and Ney 2017) | GoogLeNet | - | 27.1 | - | 26.8 | - | -
SFL (Niu and Mak 2020) | ResNet18 | 7.9/6.5 | 26.2 | 7.5/6.3 | 26.8 | 25.1 | 26.1
FCN (Cheng et al. 2020) | Custom | - | 23.7 | - | 23.9 | 23.3 | 25.1
CMA (Pu et al. 2020) | GoogLeNet | 7.3/2.7 | 21.3 | 7.3/2.4 | 21.9 | - | -
VAC (Min et al. 2021) | ResNet18 | 7.9/2.5 | 21.2 | 8.4/2.6 | 22.3 | - | -
SMKD (Hao, Min, and Chen 2021) | ResNet18 | 6.8/2.5 | 20.8 | 6.3/2.3 | 21.0 | 20.8 | 22.4
SLT∗ (Camgoz et al. 2018) | GoogLeNet | - | - | - | - | 24.5 | 24.6
CNN+LSTM+HMM∗ (Koller et al. 2019) | GoogLeNet | - | 26.0 | - | 26.0 | 22.1 | 24.1
DNF∗ (Cui, Liu, and Zhang 2019) | GoogLeNet | 7.3/3.3 | 23.1 | 6.7/3.3 | 22.9 | - | -
STMC∗ (Zhou et al. 2020) | VGG11 | 7.7/3.4 | 21.1 | 7.4/2.6 | 20.7 | 19.6 | 21.0
C2SLR∗ (Zuo and Mak 2022) | ResNet18 | - | 20.5 | - | 20.4 | 20.2 | 20.4
Baseline | ResNet18 | 7.9/2.5 | 21.2 | 8.4/2.6 | 22.3 | 21.1 | 22.8
SEN (Ours) | ResNet18 | 5.8/2.6 | 19.5 | 7.3/4.0 | 21.0 | 19.3 | 20.7
Table 6: Comparison with state-of-the-art methods on the PHOENIX14 and
PHOENIX14-T datasets. $*$ indicates extra clues such as face or hand features
are included by additional networks or pre-extracted heatmaps.
#### Visualization for TSEM.
We visualize the temporal attention maps of TSEM in fig. 6. We sample several
frames corresponding to the output gloss 'nord' as an example. The darker the
color, the higher the weight. One can find that TSEM tends to allocate higher
weights to frames with rapid movements (the latter two frames in the first
line; the middle three frames in the second line), and assigns lower weights
to static frames with few body movements. This observation is consistent with
human perception, as humans always pay more attention to moving objects in the
visual field to capture key movements. Such frames can also be considered to
convey more important patterns for expressing a sign.
Figure 6: Visualizations of temporal attention maps for TSEM. One can find
that TSEM highlights frames with rapid movements and suppresses static
frames.
### Comparison with State-of-the-Art Methods
PHOENIX14 and PHOENIX14-T. Tab. 6 shows a comprehensive comparison between our
SEN and other state-of-the-art methods. We notice that, with few extra
computations, SEN outperforms other state-of-the-art methods on both
datasets. Especially, SEN outperforms previous CSLR methods equipped with hand
and face features acquired by heavy pose-estimation networks or pre-extracted
heatmaps (marked with *), without additional expensive supervision.
CSL-Daily. CSL-Daily is a recently released large-scale dataset with the
largest vocabulary size (2k) among commonly-used CSLR datasets, covering daily
contents. Tab. 7 shows that our SEN achieves new state-of-the-art accuracy
on this challenging dataset by a large margin, demonstrating that it
generalizes well to real-world scenarios.
CSL. As shown in tab. 8, our SEN achieves superior accuracy (0.8% WER) on
this well-examined dataset, outperforming existing CSLR methods.
Methods | Dev(%) | Test(%)
---|---|---
LS-HAN (Huang et al. 2018) | 39.0 | 39.4
TIN-Iterative (Cui, Liu, and Zhang 2019) | 32.8 | 32.4
Joint-SLRT (Camgoz et al. 2020) | 33.1 | 32.0
FCN (Cheng et al. 2020) | 33.2 | 32.5
BN-TIN (Zhou et al. 2021) | 33.6 | 33.1
Baseline | 32.8 | 32.3
SEN(Ours) | 31.1 | 30.7
Table 7: Comparison with state-of-the-art methods on the CSL-Daily dataset (Zhou et al. 2021).
Methods | WER(%)
---|---
SubUNet (Cihan Camgoz et al. 2017) | 11.0
SF-Net (Yang et al. 2019) | 3.8
FCN (Cheng et al. 2020) | 3.0
STMC (Zhou et al. 2020) | 2.1
VAC (Min et al. 2021) | 1.6
C2SLR (Zuo and Mak 2022) | 0.9
Baseline | 3.5
SEN(Ours) | 0.8
Table 8: Comparison with state-of-the-art methods on the CSL dataset (Huang et
al. 2018).
## Conclusion
This paper proposes a self-motivated architecture, coined as SEN, to
adaptively emphasize informative spatial and temporal features. Without extra
expensive supervision, SEN outperforms existing CSLR methods on four CSLR
datasets. Visualizations confirm the effectiveness of SEN in leveraging
discriminative hand and face features.
## References
* Adaloglou et al. (2021) Adaloglou, N.; Chatzis, T.; Papastratis, I.; Stergioulas, A.; Papadopoulos, G. T.; Zacharopoulou, V.; Xydopoulos, G. J.; Atzakas, K.; Papazachariou, D.; and Daras, P. 2021. A comprehensive study on deep learning-based methods for sign language recognition. _IEEE Transactions on Multimedia_ , 24: 1750–1762.
# Topological Frenkel Exciton-Polaritons in One-Dimensional Lattices of
Strongly Coupled Cavities
J. Andrés Rojas-Sánchez, Yesenia A. García Jomaso, Brenda Vargas, David Ley
Dominguez, César L. Ordoñez-Romero, Hugo A. Lara-García, Arturo Camacho-
Guardian, and Giuseppe Pirruccio
Instituto de Física, Universidad Nacional Autónoma de México, Apartado Postal
20-364, Ciudad de México C.P. 01000, Mexico
###### Abstract
Frenkel polaritons, hybrid light-matter quasiparticles, hold promise for the
design of new optoelectronic devices. However, their technological
implementation is hindered by their sensitivity to imperfections. Topology has
emerged as a way to circumvent defects and fabrication limitations. Here, we
propose a lattice of cavities that realizes the one-dimensional Su-Schrieffer-
Heeger (SSH) model for topological Frenkel polaritons. By engineering the
configuration of the cavities we demonstrate that the SSH topological and
trivial phases can be accessed, which we unravel with complementary classical
and quantum theories. We demonstrate the robustness of the polariton edge
states against defects and against the broadening of the photon and exciton
lines. Our findings propose a realistic yet simple experimental setup to
realize topological polaritons at room temperature.
## I Introduction
Frenkel excitons have emerged as a successful platform to realize strongly
hybridized phases of light and matter at room temperature Lidzey _et al._
(1998); Kéna-Cohen and Forrest (2010). Experimental breakthroughs have
demonstrated the ability to produce many-body phases such as Bose-Einstein
condensation Cookson _et al._ (2017); Plumhof _et al._ (2014); Scafirimuto
_et al._ (2018); Betzold _et al._ (2019), superfluidity Lerario _et al._
(2017), and a variety of effects resulting from exciton-polariton Daskalakis
_et al._ (2014); Yagafarov _et al._ (2020) and plasmon-exciton-polariton
interactions Vasa _et al._ (2008); Väkeväinen _et al._ (2014); Ramezani _et
al._ (2019); Wang _et al._ (2019); Zakharko _et al._ (2018); De Giorgi _et
al._ (2018) including lasing Wei _et al._ (2019); Ballarini _et al._ (2014);
Mazza _et al._ (2013), polariton parametric emission Zhao _et al._ (2022)
and oscillation Wu _et al._ (2021); Kuznetsov _et al._ (2020), among others.
The flexibility of these systems permits the polaritonic control of internal
energy levels Eizner _et al._ (2019); Stranius _et al._ (2018), has opened up
the field of polaritonic chemistry Keeling and Kéna-Cohen (2020);
Sánchez-Barquilla _et al._ (2022); Liu _et al._ (2020); Du and Yuen-Zhou
(2022), and has encouraged studies beyond the quasiparticle picture of
polaritons García Jomaso _et al._ (2022). The ultimate control of strongly coupled
light-matter excitations, paired up with the emerging field of topological
photonics, paved the way to the advent of topological polaritonics Ozawa _et
al._ (2019); Lu _et al._ (2014); Karzig _et al._ (2015). This may boost
technological applications in quantum optical circuits Blanco-Redondo (2019),
non-linear light Smirnova _et al._ (2020), chiral and topological lasers
Harari _et al._ (2018); Bandres _et al._ (2018); Jimenez _et al._ (2017),
and in general where high fabrication precision is challenging to be reached.
The advances in topological photonics and polaritonics, include breakthrough
experiments and theories in the context of cavity- and circuit-QED systems
Schmidt and Koch (2013); Qi _et al._ (2018); Mei _et al._ (2015); Owens _et
al._ (2018); Cho _et al._ (2008), ring resonator arrays Lin _et al._ (2018);
Hafezi _et al._ (2013); Mittal _et al._ (2014, 2016), photonic crystals Wang
_et al._ (2020); Lu _et al._ (2016); Wu and Hu (2015); Skirlo _et al._
(2015); Raghu and Haldane (2008); Malkova _et al._ (2009); Poshakinskiy _et
al._ (2014), microwaves Kuhl and Stöckmann (1998); Hu _et al._ (2015); Cheng
_et al._ (2016); Anderson _et al._ (2016), and metamaterials Poli _et al._
(2015); Rosenthal _et al._ (2018); Zhao _et al._ (2018); Xin _et al._
(2020); Krishnamoorthy _et al._ (2012) in which many intriguing topological
phases have been realized exploiting the light-matter coupling. Perhaps, the
canonical one-dimensional (1D) model with non-trivial topological properties
is the so-called Su-Schrieffer-Heeger (SSH) chain. SSH models have already
been realized and studied in many systems including plasmonic chains Bleckmann
_et al._ (2017); Kruk _et al._ (2017); Ling _et al._ (2015); Poddubny _et
al._ (2014); Slobozhanyuk _et al._ (2015), waveguide QED Kim _et al._
(2021), radiative heat transfer Ott and Biehs (2020) and polaritons Solnyshkov
_et al._ (2016); St-Jean _et al._ (2017); Parto _et al._ (2018); Kozin _et
al._ (2018); Downing _et al._ (2019); Su _et al._ (2021); Dusel _et al._
(2021). Topological edge states provide an efficient way to create localized
polaritonic modes which are protected by their bulk environment. Room
temperature topological systems are of particular interest because their
robustness against fabrication imperfections may lead to next-generation
polariton-based technologies.
Figure 1: Schematic representation of the array of $2N$ stacked cavities
filled with an organic material and its analogy to the 1D SSH model. Each
cavity supports Frenkel exciton-polaritons. The different couplings of the
SSH model are realized by alternating the widths of the two mirrors separating
each active layer. (a) Trivial and (b) topological configuration.
Here, we theoretically propose a room-temperature setup for the realization of
the SSH model with Frenkel exciton-polaritons in a one-dimensional lattice of
stacked nanocavities. We demonstrate that by alternating the width of the
mirrors it is possible to obtain both trivial and non-trivial topological
polariton phases. For this, we employ a dual approach based on the transfer
matrix method (TMM) combined with a tight-binding model for a chain of
exciton-polaritons. The TMM, for the appropriate configuration unveils the
emergence of polariton edge states with localized electric field around the
edges of the array. Concomitantly, the reflectance spectrum of the stack is
found to closely resemble that of an isolated cavity. The correspondence of
these branches with the topologically-protected states of the SSH model is
demonstrated by means of the tight binding formalism for exciton-polaritons.
Our twofold approach is general and provides a comprehensive tool that enables
a deep understanding of the fundamental aspects of stacks of strongly coupled
cavities. In addition, we discuss the experimental implementation of our
proposal and its robustness against typical fabrication limitations. Even
though we mainly deal with Frenkel polaritons, our formalism is applicable to
exciton-polaritons in inorganic materials. This is demonstrated in the last
section where we consider homogeneously broadened excitons lacking vibronic
coupling.
Our proposal can be implemented in a wide family of organic polaritons at room
temperature and provides therefore a valuable guide for future experiments and
theories. An additional value of our proposal stems from the facile and cheap
fabrication process combined with a simple and scalable design.
## II System
Our system consists of an array of $2N$ stacked nanocavities, as illustrated
in Fig. 1. All cavities are loaded with a polymer matrix mixed with a highly
concentrated organic molecule. In Fig. 2 we show the imaginary part
$\kappa(\omega)$ of the refractive index for a generic organic molecule. It is
formed by a principal electron transition, associated to the zero-phonon
exciton line, strongly coupled to a vibronic sideband. In a typical organic
molecule at room temperature, the vibronic shoulder is slightly detuned from
the mean peak yet overlaps with it giving rise to a continuum of material
excitations that cannot be disentangled. The cavity length, $L_{c}$, common to
all cavities, is such that the fundamental optical mode is zero-detuned from
the mean exciton energy at normal incidence. The width of the metallic mirrors
alternates, as depicted in Fig. 1. Two configurations are possible.
Figure 2: Imaginary part of the refractive index. The solid red curve represents
the spectrum of a typical organic molecule with a principal peak at
$\omega_{X}\approx 2.32\,\text{eV}$ and a second vibron mode around
$\omega_{S}\approx 2.5\,\text{eV}$. The dashed blue curve represents an ideal
exciton with an oscillator strength of $2\Omega=0.33\,\text{eV}$ peaked at
$\omega_{X}=2.32\,\text{eV}$ and an exciton linewidth of
$\gamma_{X}=0.025\,\text{eV}$.
The trivial configuration, shown in Fig. 1(top), consists of an array of
cavities where the odd mirrors have a width of $L_{M,odd}$, whereas the
even mirrors’ width equals $L_{M,even}$, with $L_{M,odd}>L_{M,even}$. The
topological array is obtained by switching the order of the mirrors.
Intuitively, we expect that cavities separated by a thin mirror couple more
efficiently than those distanced by a thicker mirror. Thus, the trivial
configuration allows for the effective coupling of the cavities by pairs, as
in the trivial phase of the SSH model. On the other hand, for the topological
configuration, only the internal cavities of the stack couple efficiently,
whereas the two cavities at the edges of the array appear as isolated, as in
the topological phase of the SSH model.
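This pairing intuition can be checked with a minimal photon-only SSH chain, sketched below (the hopping amplitudes are illustrative values, not fitted to actual mirror transmissions): terminating the chain with weak bonds produces two mid-gap edge eigenvalues, whereas terminating it with strong bonds does not.

```python
import numpy as np

def ssh_spectrum(n_cells, t_first, t_second):
    """Eigenvalues of an open SSH chain of 2*n_cells sites whose
    nearest-neighbor bonds alternate t_first, t_second, t_first, ..."""
    size = 2 * n_cells
    H = np.zeros((size, size))
    for i in range(size - 1):
        t = t_first if i % 2 == 0 else t_second
        H[i, i + 1] = H[i + 1, i] = -t
    return np.linalg.eigvalsh(H)

# strong bonds at both ends: gapped spectrum, no mid-gap states (trivial)
trivial = ssh_spectrum(10, 0.15, 0.09)
# weak bonds at both ends: two eigenvalues pinned near zero energy (topological)
topological = ssh_spectrum(10, 0.09, 0.15)
```

The full exciton-polariton version of this model is developed in Sec. IV.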
Our proposal is completely general and independent of the specific organic
molecule employed. However, to highlight the experimental feasibility of our
setup, we illustrate our results for a concrete dye-doped polymer. Erythrosine
B (ErB) has already proven to be a suitable molecule for the realization of
exciton-polaritons at room temperature García Jomaso _et al._ (2022).
## III SSH Polaritons: A transfer matrix-based approach
We start our theoretical study employing the transfer matrix method. This is a
simple yet powerful tool to study light propagation in multilayer systems with
ideal planar and parallel interfaces Yeh (2005).
Our setup, illustrated in Fig. 1, consists of $4N+3$ layers: $2N+1$ silver
(Ag) mirrors, $2N$ active layers, and $2$ semi-infinite dielectric media at
the ends of the lattice. The specific active layer used in the following
corresponds to poly-vinyl alcohol (PVA) mixed with ErB. Its complex refractive
index, $\tilde{n}_{\text{ErB}}(\omega)$, is obtained from experimental
measurements García Jomaso _et al._ (2022), and its imaginary part,
$\kappa_{\text{ErB}}(\omega)$, is shown in Fig. 2.
$\kappa_{\text{ErB}}(\omega)$ presents a mean exciton at $\omega_{X}\approx
2.32\text{eV}$ and a secondary peak at $\omega_{S}\approx 2.5\text{eV}.$ Here
we take $\hbar=1.$
Without loss of generality, we consider both outer media to be air with
$n_{\text{Air}}=1$, noticing that the introduction of a substrate in our
formalism is straightforward. The Ag complex refractive index,
$\tilde{n}_{\text{Ag}}(\omega),$ is taken from the experimental reported
values Palik (1985).
The length of all the active layers is fixed to $L_{c}=140\text{nm},$ the
width of all odd mirrors is $L_{\text{odd}}$, while for all even mirrors it is
$L_{\text{even}}$. Plane waves propagating in each layer indexed by $l$, are
described by an electric field $E_{l}(z)=A_{l}e^{ik_{l}z}+B_{l}e^{-ik_{l}z}$.
Here, $l=0$ denotes the first medium (air); for the mirrors and the active
layers we have $0<l<4N+2$, where for odd $l$ light propagates in a mirror,
while for even $l$ it propagates in an active layer. Finally, $A_{l}$ and
$B_{l}$ are the amplitude coefficients for the incoming/outgoing electric
fields in each medium. We write the amplitudes in vectorial form
$\mathbf{v}_{l}=[A_{l},B_{l}]^{T}$ and connect the coefficients via Maxwell
equations and the appropriate boundary conditions. For $s$-polarized waves and
light propagating from the $l-$th to the $l+1$-th medium we obtain
$\mathcal{D}_{l}\mathbf{v}_{l}=\mathcal{D}_{l+1}\mathbf{v}_{l+1}.$ Here, the
dynamical matrix $\mathcal{D}_{l}$ is given by,
$\displaystyle\mathcal{D}_{l}=\begin{bmatrix}1&&1\\\
n_{l}\cos\theta_{l}&&-n_{l}\cos\theta_{l}\end{bmatrix}.$ (1)
Through medium $l$, the phase changes by
$\displaystyle\mathcal{P}_{l}=\begin{bmatrix}e^{i\phi_{l}}&&0\\\
0&&e^{-i\phi_{l}}\end{bmatrix},$ (2)
where $\phi_{l}=k^{l}_{z}L_{l}$, $L_{l}$ is the length of the medium, and
$k^{l}_{z}$ is the perpendicular component of the wave vector of the electric
field, given by
$\displaystyle k^{l}_{z}=\frac{\omega}{c}n_{l}\cos\theta_{l}$ (3)
with $\theta_{l}$ the angle of incidence of the light field in the $l-$th
medium measured from the $z$-axis, i.e., normal to the stack.
It is convenient to introduce
$\mathcal{M}_{l}=\mathcal{D}_{l}\mathcal{P}_{l}\mathcal{D}_{l}^{-1},$
to write the total transfer matrix $\mathcal{T}$ as follows
$\displaystyle\mathcal{T}=\mathcal{D}_{0}^{-1}\left(\prod_{l=1}^{4N+1}\mathcal{M}_{l}\right)\mathcal{D}^{\prime}_{0},$
(4)
with $\mathcal{D}_{0}$ and $\mathcal{D}^{\prime}_{0}$ the interface matrices
for the air at the two ends of the array.
Finally, the reflectance can be calculated as
$\displaystyle R=\left|\frac{\mathcal{T}(2,1)}{\mathcal{T}(1,1)}\right|^{2}.$
(5)
Figure 3: $s$-polarized reflectance for a single cavity with
$L_{c}=140\,\text{nm}$, $L_{1}=40\,\text{nm}$, and $L_{3}$ semi-infinite. The
dashed line indicates the energy of the bare exciton peak centered around
$\omega_{X}=2.32\,\text{eV}$, while the solid red line corresponds to the bare
cavity photon dispersion.
Single cavity.- Before we explore the reflectance for the two configurations
shown in Fig. 1, let us recall the reflectance spectrum of a single cavity,
whose Frenkel polaritons were studied experimentally in Ref. García Jomaso
_et al._ (2022). The reflectance spectrum for a single cavity of length
$L_{c}=140\,\text{nm}$ features two polariton branches. The lower polariton
arises as a well-defined quasiparticle, whereas the upper polariton emerges as
an ill-defined polariton at small angles and only becomes a quasiparticle at
large angles. As discussed in Ref. García Jomaso _et al._ (2022), the blurring
of the upper polariton is an inherent feature of Frenkel polaritons and
dramatically influences the quasiparticle character of the polaritons. The
separation between the polariton branches at normal incidence is estimated to
be around $2\Omega=0.33\,\text{eV}$.
Trivial.- Let us start by discussing the trivial configuration. We take
$N=10$, that is, 20 cavities; the width of all odd mirrors is
$L_{\text{odd}}=30\,\text{nm}$, while for all even mirrors it is
$L_{\text{even}}=40\,\text{nm}$. In Fig. 4 we show the reflectance for this
configuration as a function of $\omega$ and
$k_{||}=\frac{\omega}{c}\sin\theta$. Below the energy of the bare exciton, two
polariton bands appear, separated by a bandgap that is maximal at normal
incidence and closes for large $k_{||}$ values. In the limit of infinite $N$,
both bands form a continuum; for finite $N$, however, a slight discreteness of
the lowest polariton band is expected.
Figure 4: $s$-polarized reflectance for the trivial configuration where four
polariton bands appear with two bandgaps above and below the bare exciton
energy.
Above the bare exciton energy, two upper polariton bands arise. As a
consequence of the vibronic shoulder of the exciton absorption, at normal
incidence only one of these bands is clearly resolved. For large $k_{||}$, the
two upper polariton bands are clearly distinguishable and exhibit a bandgap.
We note two facts: (i) the bandgaps opened by stacking the cavities are
smaller than the splitting between the two upper and lower polariton bands;
(ii) these bandgaps lie at the spectral position of the polaritons for a
single cavity, shown in Fig. 3.
Topological.- We now turn our attention to the topological configuration
illustrated in Fig. 1(b). The reflectance spectrum obtained from the TMM is
shown in Fig. 5 and exhibits striking features compared to the trivial
configuration. In this case, the reflectance minima are located inside the
bandgaps found for the trivial configuration and closely resemble the upper
and lower polaritons of the single cavity displayed in Fig. 3. In accordance
with our previous discussion, only the lower polariton remains well defined
for all incident angles; the upper polariton state visibly blurs at normal
incidence.
Figure 5: $s$-polarized reflectance for the topological configuration.
Reflectance minima arise within the two bandgaps observed for the trivial
configuration.
The spatial distribution of the normalized electric field intensity,
$|E(z)/E_{Max}|^{2}$, is shown in Fig. 7 (c) as a function of $z$ (blue curve)
for the topological configuration. The electric field peaks in the odd
cavities, whereas it drops significantly and essentially vanishes inside the
even cavities. The intensity of the electric field in the odd cavities decays
exponentially, which further hints at the topological character of our setup.
The TMM strongly suggests that our setup is analogous to the SSH model for
exciton-polaritons. However, to explicitly unveil the link with the SSH model,
in the following sections we develop a tight-binding model for the exciton-
polaritons and contrast it to the TMM.
## IV SSH Polaritons: An effective tight-binding model approach
The following Hamiltonian describes a set of $2N$ coupled cavities that can be
arranged either in the trivial or topological configuration, as illustrated in
Fig. 6(top),
$\displaystyle\hat{H}=\sum_{i=1}^{N}\left[\omega_{c}(\theta)\left(\hat{a}_{i}^{\dagger}\hat{a}_{i}+\hat{b}_{i}^{\dagger}\hat{b}_{i}\right)+\omega_{X}\left(\hat{x}_{i,A}^{\dagger}\hat{x}_{i,A}+\hat{x}_{i,B}^{\dagger}\hat{x}_{i,B}\right)\right]$
$\displaystyle+\Omega\sum_{i=1}^{N}\left(\hat{a}_{i}^{\dagger}\hat{x}_{i,A}+\hat{b}_{i}^{\dagger}\hat{x}_{i,B}+\text{h.c.}\right)$
$\displaystyle-v\sum_{i=1}^{N}\left(\hat{a}_{i}^{\dagger}\hat{b}_{i}+\hat{b}_{i}^{\dagger}\hat{a}_{i}\right)-w\sum_{i=1}^{N-1}\left(\hat{a}_{i+1}^{\dagger}\hat{b}_{i}+\hat{b}_{i}^{\dagger}\hat{a}_{i+1}\right),$ (6)
Figure 6: (a) Eigenvalues for the trivial configuration at normal incidence:
four polariton bands corresponding to the two branches of the lower and upper
polaritons. Inset shows a cartoon of the notation employed. (b) Eigenvalues
for the topological configuration, inside the energy gaps of the polariton
bands two edge states per branch appear. We take $2N=20$ cavities giving $40$
eigenvalues.
Here, $\hat{a}_{i}^{\dagger}$ and $\hat{b}_{i}^{\dagger}$ create a cavity
photon on sites $A$ and $B$, respectively, with energy $\omega_{c}(\theta)$,
which depends on the incident angle $\theta$ and is given by the solid red
line in Fig. 3. On the other hand, $\hat{x}^{\dagger}_{i,A}$ and
$\hat{x}^{\dagger}_{i,B}$ create excitons with site index $i$ in the cavities
$A$ and $B$, respectively, with energy $\omega_{X}$.
Excitons and photons couple with a strength $\Omega$ only if all site indices
are equal. Adjacent cavities couple through the tunneling of photons, where
the tunneling amplitude is given either by $v$ or $w$, depending on the
configuration, as illustrated in Fig. 6 (top).
We now study the tight-binding model for the exciton-polaritons within the SSH
model. For reasons that will become clear later, we take hopping coefficients
$v=0.15$ and $w=0.09$ for the trivial configuration, whereas for the
topological one we simply swap these coefficients, i.e., $w=0.15$ and $v=0.09$.
The SSH model for exciton-polaritons is a simple quadratic Hamiltonian that
can straightforwardly be diagonalized. For consistency with the TMM we take
$N=10$ corresponding to 20 cavities.
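The diagonalization can be sketched numerically as follows (a single-excitation matrix in the basis of photon and exciton modes; the parameter values mirror those quoted above, and for the topological ordering the weak bonds sit at the ends of the chain):

```python
import numpy as np

def polariton_ssh(n_cells, wc, wx, rabi, v, w):
    """Single-excitation matrix of the exciton-polariton SSH Hamiltonian
    for 2*n_cells cavities, basis (photon_1, exciton_1, photon_2, ...)."""
    m = 2 * n_cells                              # number of cavities
    H = np.zeros((2 * m, 2 * m))
    for i in range(m):
        p, x = 2 * i, 2 * i + 1
        H[p, p], H[x, x] = wc, wx
        H[p, x] = H[x, p] = rabi                 # photon-exciton coupling, same site
    for i in range(m - 1):
        t = v if i % 2 == 0 else w               # alternating photon hopping
        H[2 * i, 2 * (i + 1)] = H[2 * (i + 1), 2 * i] = -t
    return np.linalg.eigvalsh(H)

# topological ordering at resonant normal incidence (energies in eV)
E = polariton_ssh(10, wc=2.32, wx=2.32, rabi=0.165, v=0.09, w=0.15)
# among the 40 eigenvalues, two edge states sit near each single-cavity
# polariton energy, 2.32 - 0.165 eV and 2.32 + 0.165 eV
```

Swapping $v$ and $w$ reproduces the trivial spectrum of Fig. 6(a), with no states inside the gaps.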
Figure 7: (a,b) Eigenvalues of the SSH model in Eq. (6) as a function of
the in-plane momentum $k_{||}$, illustrated by the red dots. The energies of
the SSH model are plotted on top of the reflectance spectrum for the (a)
trivial and (b) topological configuration. The white dots are the eigenvalues
of Eq. (6) corresponding to edge states and lie at the energies of the bare
lower and upper polaritons. (c) Normalized electric field intensity as a
function of $z$ (blue curve) and amplitude of the wavefunction obtained from
the SSH model (red curve). The gray vertical lines give the positions of the
centers of the cavities.
Trivial.- We start discussing the trivial configuration. For clarity, we show
in Fig. 6(a) the eigenvalues of the Hamiltonian in Eq. (6), considering
first normal incidence and resonant conditions,
$\omega_{c}(\theta=0)=\omega_{X}$. In this case we observe that the lower and
upper polariton states split, leading to four polariton bands: two above and
two below the bare exciton energy. The lower and upper
polaritons each give rise to two bands separated by a gap, marked by the pink area. The
Rabi coupling leads to the avoided crossing (yellow area) that separates the
lower from the upper polariton bands. These four polariton bands display a
difference in their bandwidths. Specifically, the lowest polariton band is
broader than the second polariton band. This feature, also visible in the TMM
calculations, can hardly be explained within the TMM. Conversely, the tight-
binding model provides a very intuitive physical explanation for it. When
photons hybridize with excitons to form polaritons, the photon tunneling
between adjacent cavities depends on the polaritons' Hopfield coefficients,
i.e., the coupling efficiency of polaritons living in adjacent cavities
depends on their photonic/excitonic components. Thus, one expects that
polaritons with a large excitonic component exhibit weak tunneling, leading to
narrow polariton bands. This is the case for the two bands located around the
bare exciton energy. On the other hand, polaritons with a large photonic
component exhibit a broad bandwidth, a consequence of dispersion and enhanced
hopping.
Formally, these arguments can be made precise by studying the hopping terms of the
Hamiltonian. For instance, the tunneling between photons $A$ and $B$ with same
site index, $i$, in the polariton basis
($\hat{L}_{i,\alpha},\hat{U}_{i,\alpha})$ with $\alpha=A,B$ is
$\displaystyle-v\left(\hat{a}_{i}^{\dagger}\hat{b}_{i}+\text{h.c}\right)=-v\left(\mathcal{S}_{i,A}\mathcal{S}_{i,B}\hat{L}^{\dagger}_{i,A}\hat{L}_{i,B}\right.$
$\displaystyle+\mathcal{C}_{i,A}\mathcal{C}_{i,B}\hat{U}_{i,A}^{\dagger}\hat{U}_{i,B}+\mathcal{S}_{i,A}\mathcal{C}_{i,B}\hat{L}^{\dagger}_{i,A}\hat{U}_{i,B}$
$\displaystyle\left.+\mathcal{C}_{i,A}\mathcal{S}_{i,B}\hat{U}^{\dagger}_{i,A}\hat{L}_{i,B}+\text{h.c}\right).$ (7)
Here, the photon operator written in the basis of the lower and upper polaritons in
terms of the standard Hopfield coefficients is
$\hat{a}_{i}/\hat{b}_{i}=\mathcal{S}_{i,\alpha}\hat{L}_{i,\alpha}+\mathcal{C}_{i,\alpha}\hat{U}_{i,\alpha},$
with $\mathcal{S}_{i,\alpha}^{2}+\mathcal{C}^{2}_{i,\alpha}=1,$ where
$\mathcal{C}_{i,\alpha}^{2}=\frac{1}{2}\left(1+\frac{\omega_{c}(\theta)-\omega_{X}}{\sqrt{(\omega_{c}(\theta)-\omega_{X})^{2}+4\Omega^{2}}}\right).$
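As a quick numerical sanity check of this expression (the helper name is ours; $\Omega=0.165$ eV comes from the estimated splitting $2\Omega=0.33$ eV):

```python
import numpy as np

def photon_fractions(wc, wx, rabi):
    """Squared Hopfield coefficients: C^2 for the upper-polariton and
    S^2 = 1 - C^2 for the lower-polariton photon content."""
    d = wc - wx
    C2 = 0.5 * (1 + d / np.sqrt(d ** 2 + 4 * rabi ** 2))
    return C2, 1.0 - C2

C2, S2 = photon_fractions(2.32, 2.32, 0.165)    # resonance: C2 = S2 = 1/2
C2_det, _ = photon_fractions(3.0, 2.32, 0.165)  # blue-detuned cavity: UP mostly photonic
```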
Equation (7) stresses that adjacent-cavity polaritons can only couple
via their photonic component. Thus, states with very small photonic components
have suppressed tunnelling and tend to localize within the corresponding
cavities. Such localization corresponds to the band flattening observed for
the two bands that are close to the bare exciton energy (Fig. 6). In contrast,
for states with a large photonic component, the tunnelling of polaritons
becomes essentially the bare photon hopping, $v$, which makes the two
polariton bands far detuned from the exciton both dispersive and broad.
We remark that the understanding of the narrowing of the bands close to the
bare exciton line cannot straightforwardly be read from the TMM. This
discussion naturally arises from the tight-binding formalism, which provides
deeper insight into the physical setup.
We can further understand the tight-binding model and its equivalence to the
experimental proposal if we vary the incident angle $\theta$. In Fig. 7 (a) we
plot the eigenvalues of the Hamiltonian (red dots) in Eq. (6) as a
function of $k_{||}.$ For clarity in our comparison we show in the background
the reflectance spectrum obtained from the TMM. The remarkable agreement
between the TMM and the SSH model for exciton-polaritons supports our
experimental proposal. Furthermore, we also observe the closing of the lower
bandgap as the lower polariton becomes more excitonic at larger incident
angles, in clear agreement with our previous discussion based on the Hopfield
coefficients.
Topological.- The topological configuration at normal incidence yields the
eigenvalues in Fig. 6 (b). In addition to the four polariton bands, we observe
the appearance of two edge states located in the middle of the polariton gaps
(pink area) whose energy lies exactly at the energy of the polaritons
sustained by the single cavity:
$\displaystyle\omega_{\text{LP/UP}}(\theta)=\frac{1}{2}\left(\omega_{c}(\theta)+\omega_{X}\mp\sqrt{(\omega_{c}-\omega_{X})^{2}+4\Omega^{2}}\right).$
(8)
Since we consider resonant conditions, the edge states in Fig. 6 (b) are
separated by exactly $2\Omega.$
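The single-cavity polariton energies of Eq. (8) are easily evaluated numerically. The sketch below uses illustrative values (the resonant case $\omega_{c}=\omega_{X}$ and the numbers are assumptions, not fitted parameters) and confirms that at resonance the lower and upper polaritons are split by exactly $2\Omega$:

```python
import math

def polariton_energies(omega_c, omega_x, coupling):
    """Lower/upper polariton energies of a single cavity, Eq. (8)."""
    mean = 0.5 * (omega_c + omega_x)
    half_split = 0.5 * math.sqrt((omega_c - omega_x) ** 2 + 4 * coupling ** 2)
    return mean - half_split, mean + half_split

# Resonant cavity (omega_c = omega_X): the two edge states sit at the
# single-cavity polariton energies, separated by exactly 2*Omega.
lp, up = polariton_energies(2.32, 2.32, 0.165)  # illustrative values in eV
print(round(up - lp, 3))  # 0.33
```

For a detuned cavity the same function reproduces the familiar avoided-crossing splitting $\sqrt{(\omega_{c}-\omega_{X})^{2}+4\Omega^{2}}$.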
For varying angle of incidence, we observe in Fig. 7 (b) the persistence of
these edge states which remain confined within the corresponding bandgaps. For
clarity, we have highlighted the energy of these states with white markers
whereas the red dots correspond to the bulk states. In accordance with our
previous analysis, also for the topological configuration the bandgap
associated with the lower polariton bands closes for large angles. This is
consequence of the large excitonic component of the polaritons. On the other
hand, the gap separating the two upper bands slightly increases at large
angles as polaritons become predominantly photonic.
Finally, in Fig. 7 (c) we show the distribution of the wavefunction for the
edge state along the cavities. The wavefunction is only non-zero for the $A$
cavities whereas it vanishes for the $B$ cavities. The amplitude of the
wavefunction in the $A$ cavities decays exponentially with the site index $i.$
At each site, the polariton retains maximal coupling between excitons and
photons, that is, both Hopfield coefficients equal 1/2.
distribution of the amplitude of the wavefunction predicted by the tight-
binding model for exciton-polaritons agrees remarkably well with the electric
field intensity obtained with the TMM. It captures both the vanishing of the
light intensity for the $B$ cavities and the exponential decay observed in the
$A$ cavities as a function of $z$. Note that the electric field is a
continuous function of $z$, whereas the wavefunction is discrete in the site
index $i.$
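The sublattice structure of the edge state can be reproduced with a minimal tight-binding sketch. A bare SSH chain (the hoppings $v$, $w$ below are illustrative numbers, not the polariton parameters of the text) already shows the two features described above: a mid-gap state with vanishing weight on the $B$ sites and an amplitude that decays by a factor $v/w$ per unit cell on the $A$ sites.

```python
import numpy as np

def ssh_chain(n_cells, v, w):
    """Open SSH chain: intra-cell hopping v, inter-cell hopping w."""
    n = 2 * n_cells
    h = np.zeros((n, n))
    for i in range(n_cells):
        h[2 * i, 2 * i + 1] = h[2 * i + 1, 2 * i] = v            # A_i -- B_i
        if i + 1 < n_cells:
            h[2 * i + 1, 2 * i + 2] = h[2 * i + 2, 2 * i + 1] = w  # B_i -- A_{i+1}
    return h

v, w = 0.3, 1.0                                   # topological: |v| < |w|
vals, vecs = np.linalg.eigh(ssh_chain(20, v, w))
idx = np.argsort(np.abs(vals))[:2]                # the two mid-gap states
edge = vecs[:, idx[np.argmax(np.abs(vecs[0, idx]))]]  # left-edge dominant one
left = edge[: len(edge) // 2]                     # left half of the chain
print(np.abs(left[1::2]).max() < 1e-4)            # True: no weight on B sites
print(round(abs(left[2] / left[0]), 6))           # 0.3, i.e. the ratio v/w
```

In a finite chain the left and right edge states hybridize weakly, which is why the code selects the left-edge-dominant combination before reading off the decay.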
The SSH model for exciton-polaritons has added remarkable physical insight to
the polariton physics predicted by the TMM. However, features that extend
beyond the single-particle description of polaritons lie outside the realm of
the SSH model and hence cannot be captured by it. For instance, the breakdown
of the upper polariton at normal incidence, which washes out the bandgap of
the upper bands, cannot be obtained within our SSH model for polaritons. This
underlines the need for a dual approach: on the one hand, the TMM, free of any
fitting parameters, is a powerful tool that yields the reflectance spectrum
that should be observed experimentally, but it does not link directly to the
SSH model. On the other hand, the tight-binding model allows us to connect the
phenomenology of the TMM with the SSH model and provides deep physical
insight, yet it fails to capture the full complexity of the system. By
combining the two approaches we obtain a complete picture of SSH
exciton-polaritons, both from a pragmatic experimental point of view and in
terms of a fundamental understanding of the model.
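To make the TMM side of this dual approach concrete, the following minimal sketch implements the characteristic-matrix form of the transfer matrix method at normal incidence. It is a simplified illustration: the full calculation in the text treats s-polarized light at oblique incidence, and the indices and thicknesses below are placeholders, not the parameters of our structure.

```python
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic matrix of a homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / lam          # optical phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(layers, lam, n_in=1.0, n_out=1.0):
    """Intensity reflectance of a stack of (refractive index, thickness) layers."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        m = m @ layer_matrix(n, d, lam)
    b = m[0, 0] + m[0, 1] * n_out
    c = m[1, 0] + m[1, 1] * n_out
    r = (n_in * b - c) / (n_in * b + c)
    return abs(r) ** 2

# Sanity check: a bare air/glass interface reflects 4% of the light.
print(round(reflectance([], 500.0, n_in=1.0, n_out=1.5), 4))  # 0.04
```

Absorbing media are handled by passing a complex refractive index, which is how an excitonic response of the active layer would enter such a calculation.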
## V Experimental Considerations: Robustness to Fabrication Imperfections
In practice, several experimental considerations may limit the realization of
the SSH array of polaritons and require discussion: first, incoherent
processes such as photon leakage, non-radiative losses of the excitons, and
coupling to additional excitonic modes; second, the ability to produce mirrors
with uniform widths and cavities with different lengths; finally, limitations
in realizing a large number of cavities, that is, finite-size effects.
In our TMM formalism, we have included the experimentally measured refractive
index of the active layer, which accounts for all of the incoherent matter
processes. Photon leakage, on the other hand, is naturally included and
appears as a broadening of the photonic lines. The results discussed above
demonstrate that the SSH model for exciton-polaritons is very robust against
these effects. The topological features of the lower polariton bands are well
defined at all incident angles. For the upper polaritons, the breakdown of the
quasiparticle picture leads to a blurred region where the edge modes are
hardly visible; however, as the incident angle increases these states become
well defined. In both cases, we have found very good agreement with the SSH
polaritons treated at the single-particle level.
To study the effect of fabrication imperfections, we add an independent random
error to the widths of all mirrors and cavities. Experimentally, we estimate
that the mirrors and cavities can be realized within an error of circa
$4\,\text{nm}$; we therefore add a fabrication error of $5\%$ to the mirrors,
that is, we take
$L^{i}_{M,\text{even/odd}}=L_{M,\text{even/odd}}^{0}+\delta
L^{i}_{M,\text{even/odd}},$ where $L_{M,\text{even/odd}}^{0}$ is the mirror
length discussed previously and $\delta L^{i}_{M,\text{even/odd}}$ is a random
number drawn independently for each site $i.$ We also consider an error in the
cavity lengths, $L_{c}^{i}=L_{c}^{0}+\delta L_{i},$ with
$L_{c}^{0}=140\,\text{nm}$ and $\delta L_{i}$ drawn from the range
$(-3.5,3.5)\,\text{nm},$ again independently for each cavity.
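This disorder prescription can be sketched in a few lines. The cavity length of 140 nm and the $\pm 3.5$ nm cavity error follow the text; the mirror width of 80 nm and the fixed random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)  # fixed seed for reproducibility

def disordered_lengths(nominal, n_sites, max_error):
    """Nominal width plus an independent uniform error of +-max_error per site."""
    return nominal + rng.uniform(-max_error, max_error, size=n_sites)

cavities = disordered_lengths(140.0, 20, 3.5)        # cavity lengths in nm
mirrors = disordered_lengths(80.0, 21, 0.05 * 80.0)  # 5% error, assumed 80 nm width
print(cavities.min() >= 136.5 and cavities.max() <= 143.5)  # True
```

Feeding these disordered lengths into the TMM stack, instead of the uniform ones, produces the imperfect-lattice spectra discussed below.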
The reflectance spectrum including these fabrication imperfections for an
array of 20 cavities is shown in Fig. 8(a) for the topological configuration.
We obtain a reflectance that closely resembles the uniform case of Fig. 5.
This confirms that our proposal is indeed robust against imperfections of the
fabrication procedure that produce mirrors and cavities with small errors in
their widths.
Finally, we study the reflectance spectrum for a set of eight cavities, that
is, $N=4$ dimers; here we also retain the fabrication imperfections discussed
above together with the experimental absorption spectrum of the organic
molecules. The reflectance is shown in Fig. 8(b): we clearly observe the edge
mode of the lower band, whereas the edge mode of the upper band becomes
distinguishable at large angles. Note that Fig. 8(b) captures all the relevant
features of Fig. 5 and Fig. 7(b).
Figure 8: Reflectance spectrum in the presence of imperfections for (a) $N=10$
and (b) $N=4$, that is, for $20$ and $8$ cavities, respectively, for the
topological configuration.
Our findings allow us to conclude that our experimental proposal is robust to
the underlying complexity of the exciton spectrum, inherent fabrication
errors, and a limited number of cavities. It therefore stands as a promising
platform to study the SSH model for organic polaritons at room temperature.
## VI Ideal Excitons
Let us turn our attention to the study of ideal excitons. Here, the absorption
spectrum of the active layer is single peaked and the incoherent processes
coming from the vibronic coupling are removed. This scenario is more commonly
found in inorganic materials. The imaginary part of the refractive index is
shown in Fig. 1 as the dashed blue curve; it consists of a single narrow peak
centered around $\omega\approx 2.32\,\text{eV}$ with an oscillator strength of
$2\Omega=0.33\,\text{eV}$ and a small exciton broadening of
$\gamma_{X}=0.025\,\text{eV}$.
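A single-peaked excitonic response of this kind corresponds to a Lorentz oscillator in the dielectric function. The sketch below shows one common parametrization; only $\omega_{X}$ and $\gamma_{X}$ follow the text, while the background permittivity and the oscillator strength $f$ are illustrative assumptions.

```python
import numpy as np

def refractive_index(omega, omega_x=2.32, gamma_x=0.025, f=0.02, eps_b=2.2):
    """Complex n(omega) for a single Lorentz oscillator (illustrative model)."""
    eps = eps_b + f * omega_x**2 / (omega_x**2 - omega**2 - 1j * gamma_x * omega)
    return np.sqrt(eps)

omega = np.linspace(2.0, 2.6, 6001)      # photon energy in eV
kappa = refractive_index(omega).imag     # extinction coefficient Im n
# The absorption peak sits at the exciton line, with width set by gamma_x.
print(abs(omega[np.argmax(kappa)] - 2.32) < 0.02)  # True
```

Replacing the measured organic refractive index by such a model in the TMM reproduces the ideal-exciton spectra of this section.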
Figure 9: s-polarized reflectance for the one-dimensional lattice of cavities
containing a material with ideal excitonic response. (a) Trivial and (b)
topological configuration.
In Fig. 9 we show the $s$-polarized reflectance for the trivial and
topological configurations of the one-dimensional lattice, where the organic
material has been replaced by one with a Lorentzian excitonic response. Figure
9(a) corresponds to the trivial configuration and closely resembles Fig. 7(a).
However, in this case, the upper branches are well-defined even at normal
incidence and the four polariton branches are clearly visible. As expected, in
the absence of the matter incoherent processes, the two edge states existing
in the topological configuration appear well-defined for all incident angles,
as shown in Fig. 9(b).
## VII Perspectives and Conclusions
Frenkel polaritons offer a tunable platform to realize topological phases of
light and matter. The ability to produce topological states at ambient
conditions is a prerequisite for delivering on their promise for technological
applications such as integrated quantum optical circuits, non-linear light
sources, and chiral and topological lasers.
In this article, we have studied a one-dimensional lattice of nanocavities
filled with a dye-doped polymer strongly coupled to light. By using two
complementary approaches, we have demonstrated the direct analogy between the
polariton band structure of the lattice and the one-dimensional SSH model.
First, we calculated the propagation of the light field across the lattice
using the transfer matrix method. The spectra strongly depend on the
configuration of the lattice: in the trivial phase we observed four polariton
bands, two lower bands below the exciton energy separated by a bandgap and two
upper bands above the exciton energy also separated by a bandgap; conversely,
in the topological phase we obtained two polariton states whose dispersion
falls within the bandgaps and whose electric field intensity localizes around
the edge cavity, decaying exponentially into the lattice.
We complemented our analysis with an effective tight-binding model, which
allows us to link the reflectance spectra with the SSH model. By combining
these approaches we obtained a comprehensive understanding both from the
experimentally relevant picture and from the elementary building blocks of
single-particle polariton topological physics. Our work provides a valuable
benchmark for future theories on lattices of Frenkel polaritons and for
realistic experimental implementations of the SSH model for Frenkel polaritons
at ambient conditions.
## VIII Acknowledgments
G. P. acknowledges financial support from Grants UNAM DGAPA PAPIIT No.
IN104522 and CONACyT projects 1564464 and 1098652. H. L. G. acknowledges
financial support from Grant UNAM DGAPA PAPIIT No. IA103621. A. C. G.
acknowledges financial support from Grant UNAM DGAPA PAPIIT No. IN108620. C.
L. O-R acknowledges financial support from Grant UNAM DGAPA PAPIIT IG100521.
S. Kuberski
# Intermediate window observable for the hadronic vacuum polarization
contribution to the muon $g-2$ from O$(a)$ improved Wilson quarks
M. Cè, A. Gérardin, G. von Hippel, R. J. Hudspith, H. B. Meyer, K. Miura,
D. Mohler, K. Ottnad, S. Paul, A. Risch, T. San José, H. Wittig
###### Abstract
Following the publication of the new measurement of the anomalous magnetic
moment of the muon, the discrepancy between experiment and the theory
prediction from the $g-2$ theory initiative has increased to $4.2\,\sigma$.
Recent lattice QCD calculations predict values for the hadronic vacuum
polarization contribution that are larger than the data-driven estimates,
bringing the Standard Model prediction closer to the experimental measurement.
Euclidean time windows in the time-momentum representation of the hadronic
vacuum polarization contribution to the muon $g-2$ can help clarify the
discrepancy between the phenomenological and lattice predictions.
We present our calculation of the intermediate distance window contribution
using $N_{\mathrm{f}}=2+1$ flavors of O$(a)$ improved Wilson quarks. We employ
ensembles at six lattice spacings below $0.1\,$fm and pion masses down to the
physical value. We present a detailed study of the continuum limit, using two
discretizations of the vector current and two independent sets of improvement
coefficients. Our result at the physical point displays a tension of
$3.9\,\sigma$ with a recent evaluation of the intermediate window based on the
data-driven method.
## 1 Introduction
Given a long-standing tension between experimental findings and Standard Model
expectations, the anomalous magnetic moment of the muon, $a_{\mu}$, is
considered to be an excellent probe for physics beyond the Standard Model at
the high precision frontier. The combination of the first results of the
Fermilab Muon $g-2$ Experiment [1] with the final result of the E821
experiment at BNL [2] yields a $4.2\,\sigma$ discrepancy with the theoretical
estimate in the 2020 White Paper [3]. The uncertainty of this theory
prediction is dominated by the uncertainty of the leading-order hadronic
vacuum polarization (HVP) contribution, $a_{\mu}^{\rm hvp}$. The White Paper
average for $a_{\mu}^{\rm hvp}$ with an error of $0.6\%$ is based on
evaluations of a dispersion integral involving hadronic cross section data in
Refs. [4, 5, 6, 7, 8, 9]. Given the foreseen reduction of the experimental
uncertainties by upcoming results, the precision of the theory prediction has
to improve accordingly in the near future to scrutinize the discrepancy.
Lattice QCD offers the natural framework for an ab-initio computation of
hadronic contributions to $a_{\mu}$ and can therefore provide an independent
alternative to the traditional data-driven evaluations. Until recently, the
uncertainty of lattice evaluations was too large to have an impact on global
averages of $a_{\mu}^{\rm hvp}$. Thanks to a number of recent algorithmic and
conceptual improvements, the evaluation of $a_{\mu}^{\rm hvp}$ to sub-percent
precision is in reach for a number of groups and a first result with $0.8\%$
precision has been published by the BMW collaboration [10]. This result is in
$2.1\,\sigma$ tension with the White Paper average and reduces the tension
with the experimental average to $1.5\,\sigma$. To be able to quote a reliable
theory prediction for $a_{\mu}$, this tension between data-driven and lattice
estimates has to be understood. Further precise lattice computations are
urgently needed.
Time windows in the time-momentum representation (TMR) of $a_{\mu}^{\rm hvp}$
have been introduced in Ref. [11], where a window at intermediate distance has
been identified as being ideally suited for a lattice evaluation. It is
therefore a good testing ground to compare different lattice calculations at
high precision. Furthermore, the evaluation of the same quantity with data-
driven methods helps to shed light on the current discrepancies within theory
predictions for $a_{\mu}^{\rm hvp}$. In these proceedings, we summarize the
findings of our work in Ref. [12] and discuss their implications.
## 2 Lattice setup
We work with $2+1$ dynamical flavors of $\mathrm{O}(a)$ improved Wilson
fermions and a tree-level improved Lüscher-Weisz gauge action in the isospin
limit of QCD on ensembles by the Coordinated Lattice Simulations (CLS)
initiative [13]. Our set of 24 ensembles covers six lattice spacings in the
range [0.039 - 0.099] fm. The pion masses are found to be between 130 MeV and
420 MeV. On each chiral trajectory, the sum of the bare quark masses is held
constant, leading to a constant $\mathrm{O}(a)$ improved bare coupling
$\tilde{g}_{0}$. We employ open boundaries in the temporal direction to
alleviate the freezing of the topological charge [14], especially on the
finest ensembles. An overview of the ensembles used in this work can be found
on the left panel of Fig. 1.
Figure 1: Left: Overview of the ensembles used in this work. Two labels for
one circle indicate two ensembles with identical parameters but different
volumes. Right: TMR integrand for the isovector contribution to $a_{\mu}^{\rm
hvp}$ (black crosses) at physical pion mass together with the short (SD),
intermediate (win) and long-distance (LD) contributions.
We compute the intermediate window contribution $a_{\mu}^{\mathrm{win}}$ to
$a_{\mu}^{\mathrm{hvp}}$ in the TMR [15],
$\displaystyle
a_{\mu}^{\mathrm{win}}\equiv\left(\frac{\alpha}{\pi}\right)^{2}\int_{0}^{\infty}dt\,\widetilde{K}(t)\,G(t)\,[\Theta(t,t_{0},\Delta)-\Theta(t,t_{1},\Delta)]\,,$
(1)
from the spatially summed, zero-momentum correlation function $G(t)$ of the
electromagnetic current, with the known QED weight function $\widetilde{K}(t)$
[16] and the smoothed step function $\Theta$ [11],
$\displaystyle G(t)=-\frac{a^{3}}{3}\sum_{k=1}^{3}\sum_{\vec{x}}\left\langle
j_{k}^{\rm em}(t,\vec{x})\,j_{k}^{\rm
em}(0)\right\rangle,\qquad\Theta(t,t^{\prime},\Delta)\equiv{\textstyle\frac{1}{2}}\left(1+\tanh[(t-t^{\prime})/\Delta]\right)\,,$
(2)
with $t_{0}=0.4\,{\rm fm}$, $t_{1}=1.0\,{\rm fm}$ and $\Delta=0.15\,{\rm fm}$.
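The smoothed window weight entering Eq. (1) is straightforward to evaluate. The sketch below reproduces $\Theta$ of Eq. (2) and the intermediate-window weight with the parameter values quoted above; the lattice correlator $G(t)$ and the QED kernel $\widetilde{K}(t)$ are not reproduced here.

```python
import numpy as np

def theta(t, t_ref, delta):
    """Smoothed step function of Eq. (2)."""
    return 0.5 * (1.0 + np.tanh((t - t_ref) / delta))

def window_weight(t, t0=0.4, t1=1.0, delta=0.15):
    """Intermediate-window weight Theta(t,t0,D) - Theta(t,t1,D)."""
    return theta(t, t0, delta) - theta(t, t1, delta)

t = np.linspace(0.0, 3.0, 301)   # Euclidean time in fm
w = window_weight(t)
# The weight is close to 1 inside [t0, t1] and close to 0 outside,
# suppressing both the short-distance and the long-distance regions.
print(round(float(window_weight(np.array([0.7]))[0]), 3))  # 0.964
```

Multiplying this weight into the TMR integrand $\widetilde{K}(t)\,G(t)$ and integrating yields $a_{\mu}^{\mathrm{win}}$.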
On the right panel of Fig. 1 we illustrate the integrand of Eq. 1 (blue
diamonds) together with the corresponding integrand for $a_{\mu}^{\rm hvp}$
(black crosses), as well as the short- and long-distance contributions to the
isovector part of $a_{\mu}^{\rm hvp}$. The noisy long-distance tail of the
integrand, as well as the short-distance region, which is the source of
potentially large cutoff effects, are suppressed in
$a_{\mu}^{\mathrm{win}}$. Furthermore, the sizable finite-volume effects on
$a_{\mu}^{\rm hvp}$ affect mostly the long-distance tail and are therefore
reduced in the case of $a_{\mu}^{\mathrm{win}}$. We find relative statistical
uncertainties at the few per-mil level.
We employ two discretizations of the vector current: the local and the point-
split version. While only the former needs to be renormalized, both currents
have to be $\mathrm{O}(a)$ improved. We utilize two sets of improvement
coefficients and renormalization constants, set 1 based on Ref. [17] and set 2
based on Refs. [18, 19]. Both sets remove $\mathrm{O}(a)$ cutoff effects but
higher-order lattice artifacts differ between the two, providing us with
insight into our ability to perform reliable continuum extrapolations. Before
extrapolating our results to the continuum limit and interpolating them to
physical quark masses, we correct the isovector contribution for finite-size
effects. As in Ref. [20], we employ two procedures: at long distances
$t>(m_{\pi}L/4)^{2}/m_{\pi}$, where only a few states contribute significantly
to the finite-volume isovector correlation function, we compute the difference
between finite and infinite-volume correlation function via the Meyer-
Lellouch-Lüscher formalism [21, 22, 23] and a Gounaris-Sakurai parametrization
[24] of the time-like pion form factor. At short distances that are more
relevant for the intermediate window, we employ the method by Hansen and
Patella [25, 26] based on a monopole parametrization of the electromagnetic
pion form factor in the space-like region [27]. The resulting finite-size
corrections are of the same order as the statistical errors on each ensemble.
We extrapolate the isovector, isoscalar (without charm content) and the charm
contribution separately to the physical point according to the following
functional form
$a_{\mu}^{\mathrm{win,}f}(X_{a},X_{\pi},X_{K})=a_{\mu}^{\mathrm{win,}f}(0,X_{\pi}^{\exp},X_{K}^{\exp})+\beta_{2}\,X_{a}^{2}+\beta_{3}\,X_{a}^{3}+\delta\,X_{a}^{2}X_{\pi}+\epsilon\,X_{a}^{2}\log
X_{a}\\\ +\gamma_{0}\left(X_{K}-X_{K}^{\rm
phys}\right)+\gamma_{1}\,\left(X_{\pi}-X_{\pi}^{\exp}\right)+\gamma_{2}\left(f_{\rm
ch}(X_{\pi})-f_{\rm ch}(X_{\pi}^{\exp})\right)\,,$ (3)
where $f$ denotes the flavor/isospin component and $X_{a}=a/\sqrt{t_{0}}$
parametrizes the lattice spacing. The dimensionless variables $X_{\pi}\propto
m_{\pi}^{2}$ and $X_{K}\propto m_{K}^{2}+\frac{1}{2}m_{\pi}^{2}$ are employed
for the interpolation to physical quark masses, and higher order effects in
$X_{\pi}$ are described via one of the functions $f_{\rm
ch}(X_{\pi})\in\\{0;~{}\log(X_{\pi});~{}X_{\pi}^{2};~{}1/X_{\pi};~{}X_{\pi}\log(X_{\pi})\\}$.
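The structure of this ansatz can be sketched directly in code. Only the functional form follows Eq. 3; all parameter values below are placeholders, not our fit results.

```python
import numpy as np

def fit_ansatz(x_a, x_pi, x_k, p, f_ch=lambda x: 0.0 * x):
    """Chiral-continuum ansatz of Eq. (3); p holds the fit parameters."""
    return (p["a0"]                                   # value at the physical point
            + p["beta2"] * x_a**2 + p["beta3"] * x_a**3
            + p["delta"] * x_a**2 * x_pi
            + p["epsilon"] * x_a**2 * np.log(x_a)
            + p["gamma0"] * (x_k - p["xk_phys"])
            + p["gamma1"] * (x_pi - p["xpi_exp"])
            + p["gamma2"] * (f_ch(x_pi) - f_ch(p["xpi_exp"])))

# Placeholder parameters: in the continuum limit at physical masses the
# ansatz reduces to a0 by construction.
p = dict(a0=186.3, beta2=-50.0, beta3=0.0, delta=0.0, epsilon=0.0,
         gamma0=10.0, gamma1=-5.0, gamma2=0.0, xk_phys=1.0, xpi_exp=0.1)
print(round(fit_ansatz(1e-12, 0.1, 1.0, p), 6))  # 186.3
```

Switching between the allowed choices of $f_{\rm ch}$ amounts to passing a different callable, which is how the fit variations below can be organized.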
We are not able to determine all of the parameters in Eq. 3 in a single fit.
Instead, we test variations of the fit form by setting some of the parameters
$\beta_{3}$, $\delta$ and $\epsilon$ to zero, by varying the functional form
$f_{\rm ch}$ and by performing cuts in the pion mass and/or the lattice
spacing. Our final estimates for the central value and for the statistical and
systematic uncertainties of the observable $a_{\mu}^{\mathrm{win,}f}$ are
determined from a model average [28] of the fit results and their respective
fit qualities.
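The exact information criterion of Ref. [28] is not reproduced here, but the generic structure of such a model average can be sketched with AIC-style weights; all numbers below are placeholders.

```python
import numpy as np

def model_average(results, chi2, n_params):
    """AIC-style weighted average over fit variations (schematic sketch)."""
    results, chi2, n_params = map(np.asarray, (results, chi2, n_params))
    aic = chi2 + 2.0 * n_params               # one common choice of criterion
    weights = np.exp(-0.5 * (aic - aic.min()))
    weights /= weights.sum()
    mean = np.sum(weights * results)
    syst = np.sqrt(np.sum(weights * (results - mean) ** 2))  # model spread
    return mean, syst, weights

fits = [186.1, 186.4, 186.6]              # placeholder fit results (x 1e-10)
mean, syst, w = model_average(fits, chi2=[10.0, 9.0, 14.0], n_params=[4, 5, 5])
print(186.0 < mean < 186.6, syst > 0.0)   # True True
```

Poor fits are exponentially suppressed in the average, while the weighted spread of the surviving models feeds into the systematic error.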
## 3 Results
On the left panel of Fig. 2 we illustrate the continuum extrapolation of the
dominant isovector contribution to $a_{\mu}^{\mathrm{win}}$ at the
$\mathrm{SU}(3)_{\rm f}$ symmetric point, i.e., on the ensembles where
$m_{\pi}=m_{K}\sim 420\,\mathrm{MeV}$. We show four sets of data based on the
two discretization prescriptions of the vector current and the two sets of
improvement and renormalization procedures. Whereas the cutoff effects differ
substantially between the four data sets, we achieve consistent independent
extrapolations to the continuum limit. The universality of the continuum limit
therefore provides a strong check of our extrapolations. We note in passing
that the data based on set 1 may be extrapolated with a single term $\propto
a^{2}$ over the full range of resolutions. Despite our good control over the
continuum limit, the variation of the ansatz for the continuum extrapolation
contributes dominantly to the systematic uncertainty of our final result.
Figure 2: Left: Study of the continuum extrapolation of
$a_{\mu}^{\mathrm{win,I1}}$ at the $\mathrm{SU}(3)_{\rm f}$ symmetric point.
The black and green data points correspond to the two sets of improvement
coefficients. Right: Exemplary chiral-continuum extrapolation of
$a_{\mu}^{\mathrm{win,I1}}$. Each color indicates one value of the bare
coupling. The curves show the fit function evaluated at the corresponding
lattice spacing. Data are shifted to physical $X_{K}$. Figures taken from
[12].
On the right panel of Fig. 2 we illustrate a typical chiral-continuum fit
using $f_{\rm ch}=1/\tilde{y}$ with $\tilde{y}=m_{\pi}^{2}/(8\pi f_{\pi}^{2})$
to our data for $a_{\mu}^{\mathrm{win,I1}}$, where the dependence on $X_{K}$
has been projected out in the plot. The data is well described over the full
range of pion masses and, most importantly, constrained by the ensembles close
to physical quark masses. Performing variations in the fit form and excluding
data at large pion masses does not lead to a significant variation of the
result at the physical point. After taking the model average of our fits, we
find
$\displaystyle a_{\mu}^{\mathrm{win,I1}}$ $\displaystyle=(186.30\pm
0.75_{\mathrm{stat}}\pm 1.08_{\mathrm{syst}})\times 10^{-10}\,.$ (4)
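One member of the family of chiral-continuum fits can be sketched as a linear least-squares problem; the specific ansatz below (a constant, a $1/\tilde{y}$ chiral term, and an $a^{2}$ cutoff term) is only one of the forms that are varied in the analysis, and all data points and the physical value of $\tilde{y}$ are made-up placeholders.

```python
import numpy as np

def chiral_continuum_fit(y_tilde, a2, a_mu):
    """Linear least-squares fit of a simple chiral-continuum ansatz,
        a_mu = p0 + p1 / y_tilde + p2 * a2,
    one of several forms varied in the model average (the actual fits also
    vary the chiral function f_ch and the cutoff terms)."""
    X = np.column_stack([np.ones_like(y_tilde), 1.0 / y_tilde, a2])
    p, *_ = np.linalg.lstsq(X, a_mu, rcond=None)
    return p

# Made-up data points (y_tilde, a^2 in fm^2, a_mu in units of 1e-10):
y_tilde = np.array([0.06, 0.10, 0.15, 0.06, 0.10])
a2      = np.array([0.006, 0.006, 0.006, 0.003, 0.003])
a_mu    = np.array([188.0, 184.5, 182.6, 187.2, 183.7])
p = chiral_continuum_fit(y_tilde, a2, a_mu)

y_phys = 0.043            # placeholder for tilde-y at the physical point
a_mu_phys = p[0] + p[1] / y_phys   # continuum limit: a^2 -> 0
```

Evaluating the fitted function at the physical $\tilde{y}$ and $a^{2}=0$ is what the curves in the right panel of Fig. 2 show at each lattice spacing.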
An example for a chiral-continuum extrapolation of the data for the isoscalar
contribution excluding the charm quark is shown on the left panel of Fig. 3.
Although the noisy quark-disconnected contribution enters for ensembles away
from the $\mathrm{SU}(3)_{\rm f}$ symmetric point, we obtain precise data
thanks to the suppression of long-distance contributions. We restrict the
model average to fits based on functions $f_{\rm ch}$ that are not singular in
the chiral limit and arrive at
$\displaystyle a_{\mu}^{\mathrm{win,I0},c\!\!\!/}$ $\displaystyle=(47.41\pm 0.23_{\mathrm{stat}}\pm 0.29_{\mathrm{syst}})\times 10^{-10}\,.$ (5)
Figure 3: Chiral-continuum extrapolation of contributions to
$a_{\mu}^{\mathrm{win}}$. Each color indicates one value of the bare coupling.
The curves show the fit function evaluated at the corresponding lattice
spacing. Data are shifted to physical $X_{K}$. Left: Isoscalar contribution.
Right: Charm-connected contribution extrapolated in
$X_{\pi}=\Phi_{2}=8t_{0}m_{\pi}^{2}$. Figures taken from [12].
The charm-connected contribution is calculated in the partially quenched setup
on our $2+1$ flavor configurations. We compute the vector current at three
values of the quark mass close to the charm quark mass and perform an
interpolation to the point where the mass of the ground-state ${\rm c\bar{s}}$
pseudoscalar meson matches the physical $D_{\rm s}$ meson mass. We employ a
massive renormalization scheme. Due to large cutoff effects in the local-local
discretization of the correlation function, we take only the local-conserved
one into account in our fits. Since the strange quark mass is not held
constant along our chiral trajectory, we have to perform a mild chiral
extrapolation of the charm-connected contribution. (Fixing the charm quark mass via the quark-connected contribution to the $\eta_{\mathrm{c}}$ meson or via the flavor-averaged combination $m_{\bar{D}}=\frac{2}{3}m_{D}+\frac{1}{3}m_{D_{\mathrm{s}}}$, as in Ref. [29], could significantly reduce the pion mass dependence, as both masses are approximately constant on our chiral trajectory, where $2am_{\mathrm{l}}+am_{\mathrm{s}}$ is held constant. For both choices, no visible dependence of the charm quark mass on the light quark masses has been found on our chiral trajectory in Ref. [30].) After performing the model average, we obtain
$\displaystyle a_{\mu}^{\mathrm{win,c}}=(2.89\pm 0.03_{\mathrm{stat}}\pm
0.03_{\mathrm{syst}}\pm 0.13_{\rm scale})\times 10^{-10}\,.$ (6)
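The interpolation to the physical $D_{\rm s}$ point can be sketched as follows; the interpolation order and all lattice values are illustrative assumptions, with only the physical $D_{\rm s}$ mass being a known external input.

```python
import numpy as np

# a_mu^{win,c} evaluated at three heavy quark masses near the charm,
# parametrized by the resulting ground-state c-sbar pseudoscalar mass
# (all lattice values below are made up for illustration):
m_cs = np.array([1900.0, 1970.0, 2040.0])   # MeV
a_mu = np.array([3.05, 2.90, 2.77])         # units of 1e-10

m_Ds_phys = 1968.35                          # physical D_s mass in MeV
coeffs = np.polyfit(m_cs, a_mu, deg=2)       # quadratic through 3 points
a_mu_phys = np.polyval(coeffs, m_Ds_phys)    # interpolated value
```

Because the three mass points bracket the physical $D_{\rm s}$ mass, this is an interpolation rather than an extrapolation, which keeps the associated systematic small.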
As detailed in Appendix D of Ref. [12], we estimate the effect of neglecting
charm quarks in the sea to be well below our uncertainties. We furthermore
neglect the bottom quark contribution to $a_{\mu}^{\mathrm{win}}$ that is
expected to be much smaller than our current uncertainty [31].
We work in the isospin-symmetric setup of QCD. In order to compare our
computation with Nature at the sub-percent level, the effects of the non-
degeneracy of the up- and down-quark masses and QED have to be taken into
account. We have performed a computation of $a_{\mu}^{\mathrm{win}}$ in
QCD+QED using the technique of Monte Carlo reweighting [32, 33, 34, 35, 36]
combined with a leading-order perturbative expansion of QCD+QED around
isosymmetric QCD in terms of the electromagnetic coupling $e^{2}$ as well as
the shifts in the bare quark masses $\Delta m_{u},\Delta m_{d},\Delta m_{s}$
[36, 37, 38, 39, 40]. A detailed description of our setup can be found in
Refs. [41, 37, 38]. Since the renormalization procedure differs from the one
used in the isosymmetric QCD calculation, we compute the relative correction
due to isospin breaking in the QCD+QED setup. So far, we have performed our
computation on five ensembles at three resolutions and pion masses in the
range $215$-$352\,\mathrm{MeV}$. The results are displayed on the left panel
of Fig. 4. Without performing an explicit extrapolation to the physical point,
we estimate the correction to be $(0.3\pm 0.1)\%$ of the isosymmetric
contribution. We currently neglect the effect of quark-disconnected diagrams
as well as isospin-breaking effects in sea-quark contributions. Furthermore,
an investigation of finite-volume effects on the correction is in progress. We
double the uncertainty of our estimate to account for these unknown systematic
effects before including the correction in our final result.
Combining the results of Eqs. 4, 5 and 6, we find
$\displaystyle a_{\mu}^{\mathrm{win,iso}}=a_{\mu}^{\mathrm{win,I1}}+a_{\mu}^{\mathrm{win,I0},c\!\!\!/}+a_{\mu}^{\mathrm{win,c}}$
$\displaystyle=(236.60\pm 0.79_{\mathrm{stat}}\pm 1.13_{\mathrm{syst}}\pm
0.05_{\rm Q})\times 10^{-10}\,,$ (7)
where an additional uncertainty due to the quenching of the charm quark is
included. Our final result, after including our estimate of isospin-breaking
corrections, is
$\displaystyle a_{\mu}^{\mathrm{win}}=(237.30\pm 0.79_{\mathrm{stat}}\pm
1.13_{\mathrm{syst}}\pm 0.05_{\rm Q}\pm 0.47_{\rm IB})\times 10^{-10}\,.$ (8)
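As a quick numerical cross-check, combining Eqs. (4)-(6) with errors added in quadrature (a simplification; the full analysis may treat correlations between the contributions differently) reproduces the quoted totals, including the $(0.3\pm 0.1)\%$ isospin-breaking shift with doubled uncertainty:

```python
import math

# Central values and (stat, syst) errors from Eqs. (4)-(6), in units of 1e-10:
contributions = [
    (186.30, 0.75, 1.08),   # a_mu^{win,I1}
    (47.41,  0.23, 0.29),   # a_mu^{win,I0} without charm
    (2.89,   0.03, 0.03),   # a_mu^{win,c}
]
total = sum(c[0] for c in contributions)                  # -> 236.60
stat  = math.sqrt(sum(c[1] ** 2 for c in contributions))  # -> ~0.79
syst  = math.sqrt(sum(c[2] ** 2 for c in contributions))  # -> ~1.13

# Isospin-breaking correction of (0.3 +- 0.1)%, uncertainty doubled (Eq. 8):
total_ib = total * 1.003                                  # -> ~237.31
err_ib   = total * 2 * 0.001                              # -> ~0.47
```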
## 4 Comparison of lattice results
Figure 4: Left: Overview of isospin-breaking effects on
$a_{\mu}^{\mathrm{win}}$. Right: Comparison of our result for
$a_{\mu}^{\mathrm{win}}$ including isospin-breaking corrections with the
estimates by the ETM [42, 43], BMW [10] and RBC/UKQCD [11] collaborations. The
estimate based on the data-driven method of Ref. [44] is shown in red.
To compare our results with the findings of other collaborations, we collect
them in Fig. 5 in the flavor decomposition instead of the isospin
decomposition that we have discussed before. (Note that the sum $a_{\mu}^{\mathrm{win,I1}}+a_{\mu}^{\mathrm{win,I0},c\!\!\!/}$ is very well compatible with $a_{\mu}^{\mathrm{win,ud}}+a_{\mu}^{\mathrm{win,s}}+a_{\mu}^{\mathrm{win,disc}}$ in our work, providing an additional cross-check of our chiral-continuum extrapolations.) Since the writing of Ref. [12], three additional sets of
results have appeared. The calculation in Ref. [43] provides results for all
flavor components for the intermediate and the short-distance windows. The
results of Refs. [45, 46] for the light-connected contribution have so far
only been presented at workshops. This light-connected contribution dominates
$a_{\mu}^{\mathrm{win}}$, contributing about $87\%$ to the total.
Figure 5: Comparison of our results [12] (in units of $10^{-10}$) with other
lattice calculations [11, 49, 10, 50, 42, 51, 52, 43, 45, 46] in isosymmetric
QCD. The four panels on the left show compilations of the individual quark-
disconnected, charm, strange and light quark contributions. The result for
$a_{\mu}^{\mathrm{win}}$ in the isosymmetric case is shown in the rightmost
panel. Our results are represented by green circles and vertical bands.
Results that so far have only been presented at workshops are indicated by
open symbols.
Let us first consider the subleading contributions
$a_{\mu}^{\mathrm{win,disc}}$, $a_{\mu}^{\mathrm{win,c}}$ and
$a_{\mu}^{\mathrm{win,s}}$. Here, the results of the different collaborations
broadly agree, apart from slight tensions in $a_{\mu}^{\mathrm{win,s}}$. These
tensions are not large enough to have a significant impact on
$a_{\mu}^{\mathrm{win}}$. The results in Refs. [43, 45] shift the discussion
concerning the status of $a_{\mu}^{\mathrm{win,ud}}$ considerably. The results
labeled RBC/UKQCD 18 [11] and ETMC 21 [42] based on domain wall fermions and
Wilson twisted-mass fermions, respectively, deviate from the bulk of the
results for $a_{\mu}^{\mathrm{win,ud}}$. In both cases, the extrapolation to
the continuum limit is quite long and based on a small number of lattice
spacings. In ETMC 21, ensembles with pion masses larger than 220 MeV have been
used to compute $a_{\mu}^{\mathrm{win}}$. The new result ETMC 22 employs three
ensembles around the physical pion mass and therefore, no chiral extrapolation
is necessary. With respect to RBC/UKQCD 18, data at a third, finer lattice
spacing (about 0.073 fm) at physical pion mass as well as a second
discretization of the vector current has been added to the analysis in
RBC/UKQCD 22. If one takes into account these two updates, the agreement for
$a_{\mu}^{\mathrm{win,ud}}$ between the different groups, working with a wide
variety of fermion actions and strategies to approach the physical point, is
excellent. (We note that the comparison presented here contains an inherent ambiguity regarding the definition of the physical point in isosymmetric QCD; see the contributions [47, 48] to this conference.)
Based on the current status displayed in Fig. 5, there is little room for a
significant shift in the value for $a_{\mu}^{\mathrm{win}}$ from lattice QCD.
This is particularly important when our result, corrected for quark-connected
isospin-breaking and electromagnetic effects, and the result of Ref. [10] are
compared with a recent data-driven evaluation of the same quantity, see the
right panel of Fig. 4. A significant tension of $3.9\,\sigma$ is found between our result and the result of Ref. [44]. The absolute deviation between our
result for $a_{\mu}^{\mathrm{win}}$ and the prediction in Ref. [44] is about
half of the size of the deviation between the White Paper average for
$a_{\mu}^{\rm hvp}$ and the lattice evaluation of the BMW collaboration.
Before a solid statement regarding the Standard Model prediction for
$a_{\mu}^{\rm hvp}$ can be made, this discrepancy between data-driven and
lattice evaluations has to be understood.
## 5 Outlook
The foreseen reduction of the experimental uncertainties for $a_{\mu}$
requires a corresponding improvement of the precision of the SM prediction for
$a_{\mu}^{\rm hvp}$. We aim to contribute to this task by providing a
determination of $a_{\mu}^{\rm hvp}$ to sub-percent precision in the near
future. The first precise lattice result in Ref. [10] has opened up new
questions due to a significant deviation from the well-established dispersive
evaluations. As a consequence, time windows in the Euclidean time integral of
the TMR are considered to be an ideal testbed to scrutinize the validity of
lattice results. For the intermediate-distance window, the cross-check of
lattice results has been very successful. However, the comparison with a data-
driven evaluation of this quantity points to an even more significant tension
than in the case of $a_{\mu}^{\rm hvp}$. A similar deviation has been found
for the closely related hadronic running of the electromagnetic coupling in
Ref. [20].
The investigation of other time windows than the one considered in this work
may help to shed light on the origin of the aforementioned discrepancies, see
also the recent suggestions in Refs. [44, 53]. The computation of the short-
distance contribution to $a_{\mu}^{\rm hvp}$ may help to probe the continuum
extrapolation of lattice results that makes up a significant fraction of the
systematic uncertainty of recent studies. To reach our goal of a sub-percent
precision calculation of $a_{\mu}^{\mathrm{hvp}}$, the main task is to
decrease the statistical uncertainty of our calculation, especially at close-
to-physical quark masses. Noise reduction techniques in the computation of the
vector correlation function, as well as dedicated spectroscopy studies [54,
55, 56] will help us to achieve this goal.
## Acknowledgments
Calculations for this project have been performed on the HPC clusters Clover
and HIMster-II at Helmholtz Institute Mainz and Mogon-II at Johannes
Gutenberg-Universität (JGU) Mainz, on the HPC systems JUQUEEN, JUWELS and
JUWELS Booster at Jülich Supercomputing Centre (JSC), and on the GCS
Supercomputers HAZEL HEN and HAWK at Höchstleistungsrechenzentrum Stuttgart
(HLRS). The authors gratefully acknowledge the support of the Gauss Centre for
Supercomputing (GCS) and the John von Neumann-Institut für Computing (NIC) for
project CHMZ21, CHMZ23 and HINTSPEC at JSC and project GCS-HQCD at HLRS. This
work has been supported by Deutsche Forschungsgemeinschaft (German Research
Foundation, DFG) through project HI 2048/1-2 (project No. 399400745) and
through the Cluster of Excellence “Precision Physics, Fundamental Interactions
and Structure of Matter” (PRISMA+ EXC 2118/1), funded within the German
Excellence strategy (Project ID 39083149). D.M. acknowledges funding by the
Heisenberg Programme of the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) – project number 454605793. A.G. received funding from
the Excellence Initiative of Aix-Marseille University - A*MIDEX, a French
_Investissements d’Avenir_ programme, AMX-18-ACE-005 and from the French
National Research Agency under the contract ANR-20-CE31-0016. We are grateful
to our colleagues in the CLS initiative for sharing ensembles. Parts of the
statistical data analysis have been performed using the $\Gamma$-method in the
implementation of the pyerrors package [57, 58, 59].
## References
* [1] Muon g-2 collaboration, B. Abi et al., _Phys. Rev. Lett._ 126 (2021) 141801 [2104.03281].
* [2] Muon g-2 collaboration, G. W. Bennett et al., _Phys. Rev._ D73 (2006) 072003 [hep-ex/0602035].
* [3] T. Aoyama et al., _Phys. Rept._ 887 (2020) 1 [2006.04822].
* [4] M. Davier, A. Hoecker et al., _Eur. Phys. J._ C77 (2017) 827 [1706.09436].
* [5] A. Keshavarzi, D. Nomura et al., _Phys. Rev._ D97 (2018) 114025 [1802.02995].
* [6] G. Colangelo, M. Hoferichter et al., _JHEP_ 02 (2019) 006 [1810.00007].
* [7] M. Hoferichter, B.-L. Hoid et al., _JHEP_ 08 (2019) 137 [1907.01556].
* [8] M. Davier, A. Hoecker et al., _Eur. Phys. J._ C80 (2020) 241 [1908.00921].
* [9] A. Keshavarzi, D. Nomura et al., _Phys. Rev._ D101 (2020) 014029 [1911.00367].
* [10] S. Borsanyi et al., _Nature_ 593 (2021) 51 [2002.12347].
* [11] RBC, UKQCD collaboration, T. Blum, P. A. Boyle et al., _Phys. Rev. Lett._ 121 (2018) 022003 [1801.07224].
* [12] M. Cè et al., 2206.06582.
* [13] M. Bruno et al., _JHEP_ 02 (2015) 043 [1411.3982].
* [14] M. Lüscher and S. Schaefer, _JHEP_ 07 (2011) 036 [1105.4749].
* [15] D. Bernecker and H. B. Meyer, _Eur. Phys. J._ A47 (2011) 148 [1107.4388].
* [16] M. Della Morte, A. Francis et al., _JHEP_ 10 (2017) 020 [1705.01775].
* [17] A. Gérardin, T. Harris et al., _Phys. Rev._ D99 (2019) 014519 [1811.08209].
* [18] ALPHA collaboration, J. Heitger and F. Joswig, _Eur. Phys. J. C_ 81 (2021) 254 [2010.09539].
* [19] P. Fritzsch, _JHEP_ 06 (2018) 015 [1805.07401].
* [20] M. Cè, A. Gérardin et al., _JHEP_ 08 (2022) 220 [2203.08676].
* [21] H. B. Meyer, _Phys. Rev. Lett._ 107 (2011) 072002 [1105.1892].
* [22] M. Lüscher, _Nucl. Phys._ B364 (1991) 237.
* [23] M. Lüscher, _Nucl. Phys._ B354 (1991) 531.
* [24] G. J. Gounaris and J. J. Sakurai, _Phys. Rev. Lett._ 21 (1968) 244.
* [25] M. T. Hansen and A. Patella, _Phys. Rev. Lett._ 123 (2019) 172001 [1904.10010].
* [26] M. T. Hansen and A. Patella, _JHEP_ 10 (2020) 029 [2004.03935].
* [27] QCDSF/UKQCD collaboration, D. Brömmel et al., _Eur. Phys. J. C_ 51 (2007) 335 [hep-lat/0608021].
* [28] W. I. Jay and E. T. Neil, _Phys. Rev. D_ 103 (2021) 114502 [2008.01069].
* [29] E.-H. Chao, R. J. Hudspith et al., _Eur. Phys. J. C_ 82 (2022) 664 [2204.08844].
* [30] ALPHA collaboration, J. Heitger, F. Joswig et al., _JHEP_ 05 (2021) 288 [2101.02694].
* [31] HPQCD collaboration, B. Colquhoun, R. J. Dowdall et al., _Phys. Rev. D_ 91 (2015) 074514 [1408.5768].
* [32] A. Ferrenberg and R. Swendsen, _Phys. Rev. Lett._ 61 (1988) 2635.
* [33] A. Duncan, E. Eichten et al., _Phys. Rev. D_ 71 (2005) 094509 [hep-lat/0405014].
* [34] A. Hasenfratz, R. Hoffmann et al., _Phys. Rev. D_ 78 (2008) 014515 [0805.2369].
* [35] J. Finkenrath, F. Knechtli et al., _Nucl. Phys. B_ 877 (2013) 441 [1306.3962].
* [36] RM123 collaboration, G. M. de Divitiis, R. Frezzotti et al., _Phys. Rev._ D87 (2013) 114505 [1303.4896].
* [37] A. Risch and H. Wittig, _PoS_ LATTICE2021 (2022) 106 [2112.00878].
* [38] A. Risch and H. Wittig, _PoS_ LATTICE2019 (2019) 296 [1911.04230].
* [39] A. Risch and H. Wittig, _PoS_ LATTICE2018 (2018) 059 [1811.00895].
* [40] A. Risch and H. Wittig, _EPJ Web Conf._ 175 (2018) 14019 [1710.06801].
* [41] A. Risch, _Isospin breaking effects in hadronic matrix elements on the lattice_ , Ph.D. thesis, Mainz U., 2021. 10.25358/openscience-6314.
* [42] D. Giusti and S. Simula, _PoS_ LATTICE2021 (2022) 189 [2111.15329].
* [43] C. Alexandrou et al., 2206.15084.
* [44] G. Colangelo, A. X. El-Khadra et al., _Phys. Lett. B_ 833 (2022) 137313 [2205.12963].
* [45] RBC/UKQCD collaboration, C. Lehner, 2022, Fifth Plenary Workshop of the Muon g-2 Theory Initiative, https://indico.ph.ed.ac.uk/event/112/contributions/1660/.
* [46] Fermilab Lattice/HPQCD/MILC collaboration, S. Gottlieb, 2022, First LatticeNET workshop on challenges in Lattice field theory, https://www.benasque.org/2022lattice_workshop/talks_contr/158_Gottlieb_gm2_LatticeNET.pdf.
* [47] A. Portelli, The 39th International Symposium on Lattice Field Theory, https://indico.hiskp.uni-bonn.de/event/40/contributions/542/.
* [48] N. Tantalo, The 39th International Symposium on Lattice Field Theory, https://indico.hiskp.uni-bonn.de/event/40/contributions/847/.
* [49] C. Aubin, T. Blum et al., _Phys. Rev. D_ 101 (2020) 014503 [1905.09307].
* [50] C. Lehner and A. S. Meyer, _Phys. Rev. D_ 101 (2020) 074515 [2003.04177].
* [51] $\chi$QCD collaboration, G. Wang, T. Draper et al., 2204.01280.
* [52] C. Aubin, T. Blum et al., _Phys. Rev. D_ 106 (2022) 054503 [2204.12256].
* [53] D. Boito, M. Golterman et al., 2210.13677.
* [54] C. Andersen, J. Bulava et al., _Nucl. Phys. B_ 939 (2019) 145 [1808.05007].
* [55] A. Gérardin, M. Cè et al., _Phys. Rev. D_ 100 (2019) 014510 [1904.03120].
* [56] S. Paul, A. D. Hanlon et al., _PoS_ LATTICE2021 (2022) 551 [2112.07385].
* [57] ALPHA collaboration, U. Wolff, _Comput. Phys. Commun._ 156 (2004) 143 [hep-lat/0306017].
* [58] A. Ramos, _Comput. Phys. Commun._ 238 (2019) 19 [1809.01289].
* [59] F. Joswig, S. Kuberski et al., 2209.14371.
# High-Fidelity Guided Image Synthesis with Latent Diffusion Models
Jaskirat Singh1 Stephen Gould1,2 Liang Zheng1,2
1The Australian National University 2Australian Centre for Robotic Vision
{jaskirat.singh, stephen.gould<EMAIL_ADDRESS>
###### Abstract
Controllable image synthesis with user scribbles has gained huge public
interest with the recent advent of text-conditioned latent diffusion models.
The user scribbles control the color composition while the text prompt
provides control over the overall image semantics. However, we note that prior
works in this direction suffer from an intrinsic domain shift problem wherein
the generated outputs often lack details and resemble simplistic
representations of the target domain. In this paper, we propose a novel guided
image synthesis framework, which addresses this problem by modelling the
output image as the solution of a constrained optimization problem. We show
that while computing an exact solution to the optimization is infeasible, an
approximation of the same can be achieved while just requiring a single pass
of the reverse diffusion process. Additionally, we show that by simply
defining a cross-attention based correspondence between the input text tokens
and the user stroke-painting, the user is also able to control the semantics
of different painted regions without requiring any conditional training or
finetuning. Human user study results show that the proposed approach
outperforms the previous state-of-the-art by over 85.32% on the overall user
satisfaction scores. Project page for our paper is available at
https://1jsingh.github.io/gradop.
Figure 1: _Overview_. We propose a novel stroke based guided image synthesis
framework which _(Left)_ resolves the intrinsic domain shift problem in prior
works (b), wherein the final images lack details and often resemble simplistic
representations of the target domain (e) (generated using only text-
conditioning). Iteratively reperforming the guided synthesis with the
generated outputs (c) seems to improve realism but it is expensive and the
generated outputs might lose faithfulness with the reference (a) with each
iteration. _(Right)_ Additionally, the user is also able to specify the
semantics of different painted regions without requiring any additional
training or finetuning.
## 1 Introduction
Guided image synthesis with user scribbles has gained widespread public
attention with the recent advent of large-scale language-image (LLI) models
[30, 25, 32, 28, 42]. A novice user can gain significant control over the
final image contents by combining text-based conditioning with unsupervised
guidance from a reference image (usually a coarse stroke painting). The text
prompt controls the overall image semantics, while the provided coarse stroke
painting allows the user to define the color composition in the output scene.
Existing methods typically attempt to achieve this through one of two means. The
first category leverages conditional training using semantic segmentation maps
[30, 8, 41]. However, the conditional training itself is quite time-consuming
and requires a large scale collection of dense semantic segmentation labels
across diverse data modalities. The second category, typically leverages an
inversion based approach for mapping the input stroke painting to the target
data manifold without requiring any paired annotations. For instance, a
popular solution by [23, 16, 37] introduces the painting based generative
prior by considering a noisy version of the original image as the start of the
reverse diffusion process. However, the use of an inversion based approach
causes an intrinsic domain shift problem if the domain gap between the
provided stroke painting and the target domain is too high. In particular, we
observe that the resulting outputs often lack details and resemble simplistic
representations of the target domain. For instance, in Fig. 1, we notice that
while the target domain consists of _realistic photos_ of a landscape, the
generated outputs resemble simple pictorial arts which are not very realistic.
Iteratively reperforming the guided synthesis with the generated outputs [4]
seems to improve realism, but it is costly, some blurry details still persist (refer to Fig. 4), and the generated outputs tend to lose faithfulness to the reference with each successive iteration.
To address this, we propose a diffusion-based guided image synthesis framework
which models the output image as the solution of a constrained optimization
problem (Sec. 3). Given a reference painting $y$, the constrained optimization
is posed so as to find a solution $x$ with two constraints: 1) upon painting
$x$ with an autonomous painting function we should recover a painting similar
to reference $y$, and, 2) the output $x$ should lie in the target data
subspace defined by the text prompt (_i.e_., if the prompt says _“photo”_ then
we want the output images to be realistic photos instead of cartoon-like
representations of the same concept). Subsequently, we show that while the
computation of an exact solution for this optimization is infeasible, a
practical approximation of the same can be achieved through simple gradient
descent.
Finally, while the proposed optimization allows the user to generate image
outputs with high realism and faithfulness (with reference $y$), the fine-
grain semantics of different painting regions are inferred implicitly by the
diffusion model. Such inference is typically dependent on the generative
priors learned by the diffusion model, and might not accurately reflect the
user’s intent in drawing a particular region. For instance, in Fig. 1, we see
that the light blue regions can be inferred as blue-green grass instead of a
river. To address this, we show that by simply defining a cross-attention
based correspondence between the input text tokens and user stroke-painting,
the user can control semantics of different painted regions without requiring
semantic-segmentation based conditional training or finetuning.
## 2 Related Work
GAN-based methods have been extensively explored for performing guided image
synthesis from coarse user scribbles. [44, 1, 2, 15, 29, 3, 39] use GAN inversion to project user scribbles onto the manifold of real images. While adequate for small-scale edits, these methods fail to generate highly photorealistic outputs when the given stroke image lies too far from the real-image manifold. Conditional GANs [26, 45, 19, 7, 14, 22, 38] learn to
directly generate realistic outputs based on user-editable semantic
segmentation maps. In another work, Singh _et al_. [36] propose an image
synthesis framework which leverages autonomous painting agents [34, 35, 20,
46] for inferring photorealistic outputs from rudimentary user scribbles.
Despite its efficacy, this requires the creation of a new dataset and
conditional training for each target domain, which is expensive.
Guided image synthesis with LLI models [30, 25, 32, 28, 42, 43, 6] has gained
widespread attention [9, 21, 16, 33, 18, 13, 31] due to their ability to
perform high quality image generation from diverse target modalities. Of
particular interest are works wherein the guidance is provided using a coarse
stroke painting and the model learns to generate outputs conditioned on both
text and painting. Current works in this direction typically 1) use semantic
segmentation based conditional training [30, 8, 41] which is expensive, or, 2)
adopt an inversion-based approach for mapping the input stroke painting to the
target data manifold without requiring paired annotations. For instance, Meng
_et al_. [23, 16] propose a guided image synthesis framework, wherein the
generative prior is introduced by simply considering a noisy version of the
original sketch input as the start of the reverse diffusion process. Choi _et
al_. [5] propose an iterative conditioning strategy wherein the intermediate
diffusion outputs are successively refined to move towards the reference
image. While effective, the use of an inversion-like approach causes an implicit domain shift problem, wherein the output images, though faithful to the provided reference, show blurry or less textured details. Iteratively
reperforming guided synthesis with generated outputs [4] seems to improve
realism but it is costly. In contrast, we show that it is possible to perform
highly photorealistic image synthesis while just requiring a single reverse
diffusion pass.
Figure 2: _Method Overview._ (a) Given a reference painting $y$ and text
prompt $\tau_{text}$, we first formulate the guided synthesis output as the
solution $x^{\star}$ of a constrained optimization problem with 2 properties:
1) $x^{\star}$ lies in the subspace $\mathcal{S}_{\tau_{text}}$ of outputs
conditioned only on the text, and, 2) upon painting $x$ we should recover
reference painting $y$. While computing an exact solution of this optimization
is infeasible, we show that an approximation can be obtained by solving the
unconstrained optimization in (b). Here we first use gradient descent to
compute a point $x^{\star}$ close to a random sample
$x_{\tau_{text}}\in\mathcal{S}_{\tau_{text}}$, while still minimizing the
painting loss $\mathcal{L}(f(x),y)$. This $x^{\star}$ is usually non-
photorealistic due to gradient descent. We therefore use the diffusion based
inversion from [37] to map it back to target domain
$\mathcal{S}_{\tau_{text}}$.
Cross-attention control. Recently, Hertz _et al._ [10] propose a prompt-to-
prompt image editing approach with text-conditioned diffusion models. By
constraining the cross-attention features of all non-targeted text tokens to
remain the same, they show that by only modifying the text prompt, it is
possible to perform diverse image editing without changing the underlying
structure of the original input image. In contrast, we use cross-attention
control for from-scratch synthesis and show that by simply defining a cross-
attention based correspondence between input text tokens and the user stroke-
painting, it is possible to control and define the fine-grain semantics of
different painted regions.
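The idea of tying text tokens to painted regions can be sketched as a bias on the cross-attention scores; the additive-bias mechanism and the strength `w` below are our illustrative choices for exposition, not necessarily the exact scheme used in the paper.

```python
import numpy as np

def masked_cross_attention(sim, token_masks, w=5.0):
    """Bias cross-attention scores so that each text token attends to its
    user-assigned painted region (sketch of the idea, not the exact method).

    sim         : (n_pixels, n_tokens) raw attention scores Q K^T / sqrt(d)
    token_masks : dict token_index -> boolean (n_pixels,) region mask
    w           : additive bias strength inside the assigned region
    """
    sim = sim.copy()
    for tok, mask in token_masks.items():
        sim[mask, tok] += w    # encourage attention inside the region
        sim[~mask, tok] -= w   # discourage it elsewhere
    # softmax over the token axis
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

With such a correspondence, a token like "river" is steered to attend to the light-blue strokes, so the model resolves that region as a river rather than as blue-green grass.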
## 3 Our Method
Let $f:\mathcal{D}_{real}\rightarrow\mathcal{D}_{paint}$ be a function mapping
a real input image $x$ to its painted image $f(x)$. Then given a colored
stroke image $y$ and input text prompt $\tau_{text}$, we formulate the
computation of output image $x^{\star}$ as the solution to the following
constrained optimization problem,
$\displaystyle x^{\star}=\ $ $\displaystyle\text{argmin}_{x}\
\mathcal{L}\left(f(x),y\right)$ (1) $\displaystyle\text{subject to}\
x\in\mathcal{S}_{\tau_{text}}$ (2)
where $\mathcal{L}\left(f(x),y\right)$ represents a distance measure between
the painted output $f(x)$ of image $x$ and the target painting $y$, while
$\mathcal{S}_{\tau_{text}}$ represents the subspace of output images
conditioned only on the text input.
In other words, by additionally conditioning on a stroke image $y$, we wish to
find a solution $x^{\star}$ such that 1) the distance between the painted
image of $x$ and reference painting $y$ is minimized, while at the same time
ensuring 2) the final solution lies in the subspace of images conditioned only
on the text prompt $\tau_{text}$. For instance, if the text says _“a realistic
photo of a tree”_ then the use of stroke-based guidance should still produce a
_“realistic photo”_ , wherein the composition of the tree regions is
controlled by the painting $y$.
### 3.1 GradOP: Obtaining an Approximate Solution
The optimization problem in Eq. 1 can be reformulated as an unconstrained
optimization problem as,
$\displaystyle x^{\star}=\text{argmin}_{x}\
\mathcal{L}\left(f(x),y\right)+\gamma\ d(x,\mathcal{S}_{\tau_{text}}),$ (3)
where $d(x,\mathcal{S}_{\tau_{text}})$ represents a distance measure between
$x$ and subspace $\mathcal{S}_{\tau_{text}}$, and $\gamma$ is a
hyperparameter.
A cursory glance at the above formulation makes it evident that computing an exact solution is infeasible without first generating a sufficiently large sample of the $\mathcal{S}_{\tau_{text}}$ subspace, which would be quite time-consuming.
To address this, we propose to obtain an approximate solution by estimating
$d(x,\mathcal{S}_{\tau_{text}})$ through the distance of $x$ from a single
random sample $x_{\tau_{text}}\in\mathcal{S}_{\tau_{text}}$. Thus, we can
approximate the optimization problem as follows,
$\displaystyle x^{\star}=\text{argmin}_{x}\
\mathcal{L}\left(f(x),y\right)+\gamma\ d(x,x_{\tau_{text}}).$ (4)
Assuming a latent diffusion model with decoder $\mathcal{D}$, we can rewrite the above optimization in latent space as,
$\displaystyle z^{\star}=\text{argmin}_{z}\
\mathcal{L}\left(f(\mathcal{D}(z)),y\right)+\gamma\
\|z-z_{\tau_{text}}\|_{2}.$ (5)
where the image output $x^{\star}$ can be computed as
$x^{\star}=\mathcal{D}(z^{\star})$.
In order to solve the above optimization problem, we first use the diffusion
model to sample $x_{\tau_{text}}\in\mathcal{S}_{\tau_{text}}$. Initializing
$z=\mathcal{E}(x_{\tau_{text}})$, where $\mathcal{E}$ represents the encoder,
we solve the above optimization using gradient descent (assuming $f$ and
$\mathcal{L}$ are differentiable). Finally, we note that the solution
$x^{\star}=\mathcal{D}(z^{\star})$ to the above approximation of the
optimization problem might be non-photorealistic due to gradient descent. We
therefore use the diffusion based inversion approach from [16] in order to map
it to the target image manifold. Please refer to Alg. 1 for the detailed implementation.
Algorithm 1 GradOP: Solution Approximation
Input: Stroke Painting $y$, text prompt $\tau_{text}$
Output: Output image $x$ conditioned on both $\tau_{text},y$
Require: Differentiable painting function $f$, differentiable distance measure
$\mathcal{L}$, hyperparameter $\gamma,t_{0}$.
1:Sample $x_{\tau_{text}}\in\mathcal{S}_{\tau_{text}}$;
2:Initialize $z=z_{\tau_{text}}=\mathcal{E}(x_{\tau_{text}})$;
3:for $0\leq i\leq M$ do
4: $\mathcal{L}_{total}=\mathcal{L}\left(f(\mathcal{D}(z)),y\right)+\gamma\
\|z-z_{{\tau}_{text}}\|_{2}$;
5: $z=z-\lambda\nabla_{z}\mathcal{L}_{total}$;
6:end for
7:$z_{t_{0}}=\textsc{ForwardDiff}(z^{\star}=z,0\rightarrow t_{0})$;
8:$z=\textsc{ReverseDiff}(z_{t_{0}},t_{0}\rightarrow 0)$;
9:return $x_{out}=\mathcal{D}(z)$.
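The optimization loop of Alg. 1 (lines 3-6) can be sketched on a toy problem. The snippet below is a minimal NumPy illustration assuming an identity decoder, a moving-average painting function, and a squared anchor penalty; the actual method uses a latent decoder $\mathcal{D}$, a Gaussian-kernel $f$, autodiff gradients, and the plain L2 norm of Eq. 5.

```python
import numpy as np

K = 5  # width of the toy moving-average "painting" kernel

def blur(x):
    """Toy painting function f: a moving-average blur, standing in for the
    Gaussian-kernel convolution used in the paper."""
    kernel = np.ones(K) / K
    return np.convolve(x, kernel, mode="same")

def gradop_toy(y, z_anchor, gamma=0.1, lr=0.5, n_steps=200):
    """Schematic GradOP inner loop (lines 3-6 of Alg. 1) on a toy problem
    where the decoder D is the identity, so gradients are analytic.
    Minimizes ||f(z) - y||^2 + gamma * ||z - z_anchor||^2; we square the
    anchor penalty for a smooth toy gradient, whereas Eq. 5 uses the plain
    L2 norm."""
    kernel = np.ones(K) / K
    z = z_anchor.copy()
    for _ in range(n_steps):
        r = blur(z) - y
        # gradient of ||f(z) - y||^2 for linear f is 2 f^T (f(z) - y); the
        # adjoint of "convolve with kernel" is "correlate with kernel", and
        # pre-flipping the kernel turns np.convolve into correlation
        # (a no-op here, since the kernel is symmetric)
        grad_fit = 2.0 * np.convolve(r, kernel[::-1], mode="same")
        grad_anchor = 2.0 * gamma * (z - z_anchor)
        z = z - lr * (grad_fit + grad_anchor)
    return z
```

In the full algorithm, the optimized latent is then re-noised and denoised (lines 7-8) to map the result back onto the image manifold, which this toy necessarily omits.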
### 3.2 GradOP+ : Improving Sampling Efficiency
While the guided synthesis solution in Alg. 1 produces high-quality results,
each output image first requires sampling a text-only conditioned output
$x_{\tau_{text}}\in\mathcal{S}_{\tau_{text}}$. To address this, we
propose a modified guided image synthesis approach which allows for equally
high quality outputs while requiring just a single reverse diffusion pass for
each output. Our key insight is that a lot of information in $z^{\star}$ is
discarded during the forward diffusion from $z^{\star}\rightarrow z_{t_{0}}$.
Thus, instead of performing the optimization to first compute $z^{\star}$, we
would like to directly optimize the intermediate latent states $z_{t}$ by
injecting the optimization gradients within the reverse diffusion process
itself (refer Fig. 3).
In particular, at any timestep $t$ during the reverse diffusion, we wish to
introduce optimization gradients in order to solve the following optimization
problem,
$\displaystyle z^{\star}_{t}=\text{argmin}_{z}\
\mathcal{L}\left(f(\mathcal{D}(z)),y\right)+\gamma\ \|z-z_{t}\|_{2}.$ (6)
However, the introduction of gradients will cause $z^{\star}_{t}$ to no
longer conform to the expected latent distribution at timestep $t$. We therefore
pass it through the forward diffusion process in order to map it back to the
expected latent variable distribution. Please refer to Alg. 2 for the detailed
implementation.
Figure 3: _GradOP+ Overview._ At any timestep $t$, the optimization in Eq. 6
($z_{t}\rightarrow z^{\star}_{t}$) reduces the painting recovery loss, while
the forward diffusion step $z^{\star}_{t}\rightarrow\tilde{z}_{t}$ maps it
back to the expected latent distribution. By iteratively performing this
optimization, _GradOP+_ modifies the reverse sampling trajectory to lead to
output $x_{out}=\mathcal{D}(z_{0})$ which is also faithful to the target
painting $y$.
Algorithm 2 GradOP+ : Improving Sampling Efficiency
Input: Stroke Painting $y$, text prompt $\tau_{text}$
Output: Output image $x$ conditioned on both $\tau_{text},y$
Require: Differentiable painting function $f$, distance measure $\mathcal{L}$,
hyperparameter $\gamma,t_{0},t_{start},t_{end}$.
1:Sample $z_{T}\sim\mathcal{N}(0,\bm{I})$;
2:for $0\leq t<T$ do
3: $z_{t}=\textsc{ReverseDiff}(z_{t+1},t+1\rightarrow t)$;
4: if $t_{start}\leq t\leq t_{end}$ then
5: Initialize $z=z_{t}$;
6: for $0\leq i\leq M$ do
7: $\mathcal{L}_{total}=\mathcal{L}\left(f(\mathcal{D}(z)),y\right)+\gamma\
\|z-z_{t}\|_{2}$;
8: $z=z-\lambda\nabla_{z}\mathcal{L}_{total}$;
9: end for
10: $z_{t}=\textsc{ForwardDiff}(z^{\star}_{t}=z,0\rightarrow t)$
11: end if
12:end for
13:return $x_{out}=\mathcal{D}(z_{0})$.
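The control flow of Alg. 2 can likewise be sketched with toy dynamics. Below, the "reverse diffusion" step merely contracts the latent toward a domain prototype `mu`, the Eq. 6 subproblem is solved in closed form (possible only because the toy decoder is the identity and the anchor penalty is squared), and the forward-diffusion step is plain Gaussian re-noising; none of these stand-ins is the paper's actual sampler.

```python
import numpy as np

def gradop_plus_toy(y_lat, mu, gamma=1.0, T=50, t_start=0, t_end=40, seed=0):
    """Schematic GradOP+ loop (Alg. 2) on a toy 'diffusion' whose reverse
    step contracts the latent toward a domain prototype mu.  Inside the
    guidance window [t_start, t_end], each latent is nudged toward the
    painting latent y_lat (closed-form minimizer of the toy Eq. 6), then
    re-noised by a toy forward step to return to the expected noise level."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=np.shape(mu))                  # z_T ~ N(0, I)
    for t in reversed(range(T)):
        z = z + 0.2 * (mu - z)                         # toy ReverseDiff step
        if t_start <= t <= t_end:
            # closed-form argmin of ||z' - y_lat||^2 + gamma * ||z' - z_t||^2
            z_star = (y_lat + gamma * z) / (1.0 + gamma)
            sigma_t = 0.05 * t / T                     # toy noise schedule
            z = z_star + sigma_t * rng.normal(size=z.shape)  # toy ForwardDiff
    return z
```

Even in this toy, the guided trajectory ends much closer to the painting latent than an unguided run, while still being shaped by the (toy) reverse dynamics at every step.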
### 3.3 Controlling Semantics of Painted Regions
Finally, while the above approximate guided image synthesis algorithm allows
for the generation of image outputs with high _faithfulness_ and _realism_,
the semantics of different painted regions are inferred only implicitly.
Such inference is typically based on the cross-attention priors (learned by
the diffusion model) between the provided text tokens and the input painting
throughout the reverse diffusion process. For instance, in the first example
from Fig. 5, we note that for different outputs, the blue region can be
inferred as a river, waterfall, or a valley. Also note that some painting
regions might be entirely omitted (_e.g_. the brown strokes for the hut), if
the model does not understand that the corresponding strokes indicate a
distinct semantic entity _e.g_. a hut, small castle _etc_. Moreover, as shown
in Fig. 5 such discrepancies persist even if the corresponding text tokens
(_e.g_. a hut) are added to the textual prompt.
Our key observation is that, when a region is generated faithfully, the
average attention maps across different cross-attention layers show high
overlap with the target object segmentation during the initial-to-intermediate
parts of the reverse diffusion process. In our experiments, we found the
converse to hold as well: by constraining the cross-attention map corresponding to a
target semantic label to have a high overlap with the desired painting region,
it is possible to control the semantics of different painting regions without
the need for segmentation based conditional training.
In particular, given the binary masks corresponding to different painting
regions $\{\mathcal{B}_{1},\dots,\mathcal{B}_{N}\}$ and the corresponding
semantic labels $\{u_{1},\dots,u_{N}\}$, we first modify the input text
tokens as follows,
$\displaystyle\tau_{modified}=\tau+\left\{\textsc{CLIP}(u_{i})\mid
i\in[1,N]\right\},$ (7)
where $\tau$ is the set of CLIP [27] tokens for input text prompt.
At any timestep $t\in[0,T]$ during the reverse diffusion process, we then
enforce semantic control by modifying the cross-attention map
$\mathcal{A}^{i}_{t}$ corresponding to label $u_{i}$ as follows,
$\displaystyle\tilde{\mathcal{A}^{i}_{t}}=w_{i}\left[(1-\kappa_{t})\
\mathcal{A}^{i}_{t}+\kappa_{t}\ \frac{\mathcal{B}_{i}\
}{\|\mathcal{B}_{i}\|_{F}}\ \|\mathcal{A}_{t}^{i}\|_{F}\right]$ (8)
where $\|.\|_{F}$ represents the Frobenius norm, $\kappa_{t}=t/T\in[0,1]$
helps regulate the overlap between the cross-attention output
$\tilde{\mathcal{A}^{i}_{t}}$ and the desired painting region
$\mathcal{B}_{i}$ during the reverse diffusion process, and the weights
$w_{i},\ i\in[1,N]$ let the user control the relative importance of
expressing different semantic concepts in the final image.
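A minimal sketch of the edit in Eq. 8 is given below, assuming a single 2-D attention map and ignoring the multi-head, multi-layer bookkeeping of the real diffusion U-Net.

```python
import numpy as np

def semantic_attention_edit(A_i, B_i, t, T, w_i=1.0):
    """Sketch of the cross-attention edit in Eq. 8: blend the attention map
    A_i for semantic label u_i with the Frobenius-normalized region mask
    B_i, rescaled to preserve the Frobenius norm of A_i.  The blend weight
    kappa_t = t / T decays to 0 over the reverse diffusion, so the
    constraint is strongest early on.  In the real model, this edit is
    applied inside the cross-attention layers of the diffusion U-Net."""
    kappa = t / T
    fro = np.linalg.norm                      # Frobenius norm on 2-D arrays
    B_dir = B_i / max(fro(B_i), 1e-8)         # unit-norm mask "direction"
    return w_i * ((1.0 - kappa) * A_i + kappa * B_dir * fro(A_i))
```

At $t=T$ the map is fully replaced by the (rescaled) mask; at $t=0$ it is left untouched, letting the model refine details freely late in sampling.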
Figure 4: _Qualitative comparisons._ We compare the performance of our
approach with prior works [23, 4, 5] based on their _faithfulness_ to the
provided reference, and the _realism_ with respect to the target domain
(generated by conditioning only on the text prompt). Please note that for our
results, we show the _GradOP_ (Alg. 1) outputs in rows 1-2 and the _GradOP+_
(Alg. 2) outputs in rows 3-4.
## 4 Experiments
_Implementation Details._ We use publicly available text-conditioned latent
diffusion models [40, 30] for implementing the proposed approach in Sec. 3.
The constrained optimization is performed using gradient descent with the Adam
[17] optimizer and number of gradient steps $N_{grad}\in[20,60]$ (please refer
Sec. 5.2 for detailed analysis). While several formulations of the distance
measure $\mathcal{L}$ and painting function $f$ are possible (refer supp.
material for details), we find that simply instantiating $\mathcal{L}$ as the
mean squared error and $f$ as a convolution with a Gaussian kernel gives the
fastest inference-time performance
with our method. For consistency reasons, we use the non-differentiable
painting function from SDEdit [23] while reporting quantitative results (refer
Sec. 4.1).
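As an illustration of this choice, a minimal Gaussian-blur painting function and MSE recovery loss might look as follows; the kernel size and sigma are our guesses rather than values from the paper, and a real implementation would run per-channel inside the autodiff framework.

```python
import numpy as np

def gaussian_painting(x, size=9, sigma=2.0):
    """A minimal differentiable painting function f: a separable Gaussian
    blur of a grayscale image.  The paper only states that f is a
    convolution with a Gaussian kernel; kernel size and sigma here are
    illustrative guesses."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    # separable blur: filter rows, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def recovery_loss(x, y):
    """Painting-recovery loss L(f(x), y), instantiated as mean squared error."""
    return float(np.mean((gaussian_painting(x) - y) ** 2))
```

Because the blur is a fixed linear operator, its gradients are cheap, which is presumably why this simple choice gives the fastest inference-time performance.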
### 4.1 Stroke Guided Image Synthesis
_Evaluation metrics._ Given an input stroke painting, we compare the
performance of our approach with prior works in guided image synthesis when no
paired data is available. The performance of the final outputs is measured in
terms of both _faithfulness_ of the generated image with the target stroke
painting as well as the _realism_ of the final output distribution. In
particular, given an input painting $y$ and output real image prediction $x$,
we define faithfulness $\mathcal{F}(x,y)$ as,
$\displaystyle\mathcal{F}(x,y)=\mathcal{L}_{2}(f(x),y)$ (9)
where $f(.)$ is the painting function. Thus an output image $x$ is said to
have high faithfulness with the given painting $y$ if upon painting the final
output $x$ we get a painting $\tilde{y}=f(x)$ which is similar to the original
target painting $y$.
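This definition translates directly into code; the mean-squared normalization of the L2 distance below is our assumption, since the paper only specifies an L2 distance.

```python
import numpy as np

def faithfulness(x, y, f):
    """Eq. 9: re-paint the output image x with the painting function f and
    measure the L2 distance to the target painting y (lower is better).
    The concrete painting function f is whatever defined the task; the
    mean-squared normalization is an illustrative choice."""
    return float(np.mean((f(x) - y) ** 2))
```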
Similarly, given a set of output data samples $\mathcal{S}({y,\tau_{text}})$
conditioned on both painting $y$ and text $\tau_{text}$, and,
$\mathcal{S}({\tau_{text}})$ conditioned only on the text, the _realism_
$\mathcal{R}$ is defined as,
$\displaystyle\mathcal{R}(\mathcal{S}({y,\tau_{text}}))=FID\left(\mathcal{S}({y,\tau_{text}}),\mathcal{S}({\tau_{text}})\right)$
(10)
where $FID$ represents the Fréchet inception distance [11].
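The realism metric reduces to a Fréchet distance between Gaussian feature statistics. For diagonal covariances it has the simple closed form below; actual FID applies the full-covariance version to Inception-v3 features of the two image sets, so this is only a sketch of the computation.

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).  FID evaluates
    this (with full covariance matrices) on Inception-v3 feature statistics
    of the generated and reference sets."""
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))
```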
_Baselines._ We compare our approach with prior works on guided image
synthesis from stroke paintings with no paired data. In particular we show
comparisons with, _1) SDEdit_ [23] wherein the generative prior is introduced
by first passing the painting $y$ through the forward diffusion pass
$y\rightarrow y_{t_{0}}$ [16, 37], and then performing reverse diffusion
$y_{t_{0}}\rightarrow y_{0}$ to get the output image $x=y_{0}$111Please note
that due to space constraints, we primarily use the standard hyperparameter
value of $t_{0}=0.8$ in the main paper, and refer the reader to the supp.
material for detailed comparisons with changing $t_{0}\in[0,1]$.. _2) SDEdit +
Loopback_ [4] which reuses the last diffusion output to iteratively increase
the realism of the final output. _3) ILVR_ [5]222Please note that the original
ILVR [5] algorithm was proposed for iterative refinement with diffusion models
in pixel space. We adapt the ILVR implementation for inference with latent
diffusion models [30] for the purposes of this paper. Please refer to the
supp. material for further details., which uses an iterative refinement approach
for conditioning the output $x$ of the diffusion model with a guidance image
$y$. Unless otherwise specified, we use the GradOP+ algorithm (refer Alg. 2)
when reporting evaluation results.
| Method | $\mathcal{F}(x,y)\downarrow$ | $\mathcal{R}(.)\downarrow$ | Realism $\uparrow$ | Satisfaction $\uparrow$ |
|---|---|---|---|---|
| SDEdit [23] | 88.93 | 223.8 | 94.09% | 91.98% |
| Loopback [4] | 104.6 | 132.9 | 54.28% | 85.32% |
| ILVR [5] | 108.2 | 161.7 | 76.54% | 93.47% |
| Ours | 94.40 | 134.2 | N/A | N/A |
Table 1: _Quantitative Evaluations_. _(Left)_ Method comparison w.r.t
_faithfulness_ $\mathcal{F}$ to the reference painting and _realism_
$\mathcal{R}$ to the target domain. _(Right)_ User-study results, showing % of
inputs for which human subjects prefer our approach over prior works.
Figure 5: _Controlling semantics of different painted regions._ We compare
image generation outputs (Col 3-5) using the cross-attention control approach
from Sec. 3.3 with outputs (Col 6-8) generated by only modifying the input
text prompt. Note that for each semantic guide (Col 2), the text prompt
modification is performed by adding the corresponding semantic labels at the
end of the text prompt. For instance, the modified text prompt for examples in
row-1 would be “a fantasy landscape, trending on artstation showing a hut”.
_Qualitative Results._ Results are shown in Fig. 4. We observe that both
proposed approximate optimization methods (_i.e_. _GradOP_ in rows 1-2 and
_GradOP+_ in rows 3-4) lead to output images which are both highly
photorealistic and _faithful_ to the reference painting. In contrast,
while SDEdit [23] shows high faithfulness to the input painting, the final
outputs lack details and resemble more of a pictorial art rather than
realistic photos. Iteratively reperforming the guided synthesis with the
generated outputs (SDEdit + Loopback [4]) helps improve the realism of output
images, however, we find that this has two main disadvantages. First, the
iterative loop increases the effective time required for generating each data
sample (_e.g_. four reverse sampling steps instead of just one). Second, we
note that as the number of successive iterations increases, the final outputs
become less and less faithful to the original painting input. Finally, ILVR
[5] leads to more realistic outputs, however, the final outputs are not fully
faithful to the reference painting in terms of the overall color composition.
_Quantitative Results._ In addition to qualitative results we also
quantitatively evaluate the final outputs on the _faithfulness_
$\mathcal{F}(x,y)$ and _targeted-realism_ $\mathcal{R}(.)$ metrics defined
earlier. Additionally, similar to [23] we also perform a human user study
wherein the _realism_ and the overall satisfaction score (_faithfulness_ \+
_realism_) are evaluated by human subjects (please refer supp. material for
details). Results are shown in Tab. 1. We find that as expected, while SDEdit
[23] leads to the best faithfulness with the target painting, it exhibits very
poor performance in terms of the _realism_ score. SDEdit with loopback [4]
improves the realism score but the resulting images start losing faithfulness
with the given reference. In contrast, our approach leads to the best tradeoff
between faithfulness to the target image and realism with respect to the
target domain. These findings are also reflected in the user-study results
wherein our method is preferred by $>85.32\%$ of human subjects in terms of
the overall satisfaction score.
### 4.2 Controlling Semantics of Painted Regions
Results are shown in Fig. 5. We observe that in absence of semantic attention
control, the model tries to infer the semantics of different painting regions
in an implicit manner. For instance, the orange strokes in the sky region can
be inferred as the sun, moon, or even as a yellow tree. Similarly, the brown
strokes in the lower-left region (intended to draw a _hut_ or small _castle_)
are often inferred as muddy or rocky parts of the terrain. Moreover, such
disparity continues even after modifying the input prompt to describe the
intended semantic labels. For instance, in row-1 from Fig. 5, while changing
the text prompt to include the text “hut” leads to the emergence of “hut” like
structures, the inference is often done in a manner that is not intended by
the user.
In contrast, by ensuring a high overlap between the intended painting regions
and the cross-attention maps for the corresponding semantic labels (refer Sec.
3.3), we are able to generate outputs which follow the intended semantic guide
in a much more accurate manner. For instance, the user is able to explicitly
specify that the brown region on the ground describes a hut (row 1) or a
castle (rows 2-4). Similarly, the semantics of other regions can be
controlled, _e.g_. the blue region can be specified as a river or waterfall,
and the orange strokes in the sky as the moon or the sun.
## 5 Analysis
### 5.1 Variation in Target Domain
Figure 6: _Method Analysis._ Comparing guided image synthesis performance
across _(left)_ variation in target domain, and _(right)_ variation in number
of gradient descent steps $N_{grad}$ used for performing the proposed
optimization. Please zoom-in for best comparisons.
In this section, we analyse the generalizability of our approach across
different target domains (_e.g_. children's drawings, Disney scenes) and compare
the output performance with prior works. Results are shown in Fig. 6-a. We
observe that our approach is able to adapt the final image outputs reliably
across a range of target domains while still maintaining a high level of
faithfulness with the target image. In contrast, SDEdit [23] generates outputs
which lack details and thereby look very similar across a range of target
domains. SDEdit + Loopback [4] addresses this problem to some extent, but it
requires multiple reverse diffusion passes and the generated outputs tend to
lose their faithfulness to the provided reference with each iteration.
Figure 7: _Out-of-distribution performance._ Analysing _success_ (top) and
_failure_ (bottom) cases for out-of-distribution prompts.
### 5.2 Variation with Number of Gradient Steps
In this section, we analyse the variation in output performance as we change
the number of gradient descent steps $N_{grad}$ used to solve the
unconstrained optimization problem in Sec. 3. Results are shown in Fig. 6-b.
As expected, we find that for $N_{grad}=0$, the generated outputs are sampled
randomly from the subspace of outputs $(\mathcal{S}_{\tau_{text}})$
conditioned only on the text. As the number of gradient-descent steps
increases, the model converges to a subset of solutions within the target
subspace $\mathcal{S}_{\tau_{text}}$ which exhibit higher _faithfulness_ to
the provided reference. Please note that this behaviour is in contrast with
SDEdit [23], wherein an increase in _faithfulness_ to the reference is
accompanied by a decrease in the _realism_ of the generated outputs [23].
### 5.3 Out-of-Distribution Generalization
As shown in Secs. 4 and 5, we find that the proposed approach allows for a high
level of semantic control (both color composition and fine-grain semantics)
over the output image attributes, while still maintaining the _realism_ with
respect to the target domain. Thus a natural question arises: _Can we use the
proposed approach to generate realistic photos with out-of-distribution text
prompts?_
As shown in Fig. 7, we observe that both success and failure cases exist for
out-of-distribution prompts. For instance, while the model was able to
generate “ _realistic_ photos of cats with six legs” (note that for the same
inputs prior works either generate faithful but cartoon-like outputs, or,
simply generate regular cats), it shows poor performance while generating “a
photo of a rat chasing a lion”.
## 6 Conclusions
In this paper, we present a novel framework for performing guided image
synthesis with user scribbles, without the need for paired
annotation data. We point out that prior works in this direction [23, 4, 5]
typically adopt an inversion-like approach which leads to outputs which lack
details and are often simplistic representations of the target domain. To
address this, we propose a novel formulation which models the guided synthesis
output as the solution of a constrained optimization problem. While obtaining
an exact solution to this optimization is infeasible, we propose two methods
_GradOP_ and _GradOP+_ which try to obtain an approximate solution to the
constrained optimization in a sample-efficient manner. Additionally, we show
that by defining a cross-attention based correspondence between the input text
tokens and user painting, it is possible to control semantics of different
painted regions without the need for semantic segmentation based conditional
training.
## References
* [1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4432–4441, 2019.
* [2] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan++: How to edit the embedded images? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8296–8305, 2020.
* [3] Yuval Alaluf, Or Patashnik, and Daniel Cohen-Or. Restyle: A residual-based stylegan encoder via iterative refinement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2021.
* [4] AUTOMATIC1111. Stable-diffusion-webui. https://github.com/AUTOMATIC1111/stable-diffusion-webui, 2022.
* [5] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. arXiv preprint arXiv:2108.02938, 2021.
* [6] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems, 34:19822–19835, 2021.
* [7] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12873–12883, 2021.
* [8] Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. Make-a-scene: Scene-based text-to-image generation with human priors. arXiv preprint arXiv:2203.13131, 2022.
* [9] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. arXiv preprint arXiv:2208.01618, 2022.
* [10] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022.
* [11] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
* [12] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
* [13] Nisha Huang, Fan Tang, Weiming Dong, and Changsheng Xu. Draw your art dream: Diverse digital art synthesis with multimodal guided diffusion. In Proceedings of the 30th ACM International Conference on Multimedia, pages 1085–1094, 2022.
* [14] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134, 2017.
* [15] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. Advances in Neural Information Processing Systems, 33:12104–12114, 2020.
* [16] Gwanghyun Kim, Taesung Kwon, and Jong Chul Ye. Diffusionclip: Text-guided diffusion models for robust image manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2426–2435, 2022.
* [17] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [18] Gihyun Kwon and Jong Chul Ye. Diffusion-based image translation using disentangled style and content representation. arXiv preprint arXiv:2209.15264, 2022.
* [19] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, and Ping Luo. Maskgan: Towards diverse and interactive facial image manipulation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5549–5558, 2020.
* [20] Songhua Liu, Tianwei Lin, Dongliang He, Fu Li, Ruifeng Deng, Xin Li, Errui Ding, and Hao Wang. Paint transformer: Feed forward neural painting with stroke prediction. arXiv preprint arXiv:2108.03798, 2021.
* [21] Xihui Liu, Dong Huk Park, Samaneh Azadi, Gong Zhang, Arman Chopikyan, Yuxiao Hu, Humphrey Shi, Anna Rohrbach, and Trevor Darrell. More control for free! image synthesis with semantic diffusion guidance. arXiv preprint arXiv:2112.05744, 2021.
* [22] Xihui Liu, Guojun Yin, Jing Shao, Xiaogang Wang, et al. Learning to predict layout-to-image conditional convolutions for semantic image synthesis. Advances in Neural Information Processing Systems, 32, 2019.
* [23] Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations, 2022.
* [24] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017.
* [25] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.
* [26] Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2337–2346, 2019.
* [27] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
* [28] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
* [29] Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, and Daniel Cohen-Or. Encoding in style: a stylegan encoder for image-to-image translation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2021.
* [30] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models, 2021.
* [31] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. arXiv preprint arXiv:2208.12242, 2022.
* [32] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
* [33] Junyoung Seo, Gyuseong Lee, Seokju Cho, Jiyoung Lee, and Seungryong Kim. Midms: Matching interleaved diffusion models for exemplar-based image translation. arXiv preprint arXiv:2209.11047, 2022.
* [34] Jaskirat Singh, Cameron Smith, Jose Echevarria, and Liang Zheng. Intelli-paint: Towards developing more human-intelligible painting agents. In European Conference on Computer Vision, pages 685–701. Springer, 2022.
* [35] Jaskirat Singh and Liang Zheng. Combining semantic guidance and deep reinforcement learning for generating human level paintings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
* [36] Jaskirat Singh, Liang Zheng, Cameron Smith, and Jose Echevarria. Paint2pix: interactive painting based progressive image synthesis and editing. In European Conference on Computer Vision, pages 678–695. Springer, 2022.
* [37] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
* [38] Vadim Sushko, Edgar Schönfeld, Dan Zhang, Juergen Gall, Bernt Schiele, and Anna Khoreva. You only need adversarial supervision for semantic image synthesis. arXiv preprint arXiv:2012.04781, 2020.
* [39] Omer Tov, Yuval Alaluf, Yotam Nitzan, Or Patashnik, and Daniel Cohen-Or. Designing an encoder for stylegan image manipulation. arXiv preprint arXiv:2102.02766, 2021.
* [40] Patrick von Platen, Suraj Patil, Anton Lozhkov, Pedro Cuenca, Nathan Lambert, Kashif Rasul, Mishig Davaadorj, and Thomas Wolf. Diffusers: State-of-the-art diffusion models. https://github.com/huggingface/diffusers, 2022.
* [41] Weilun Wang, Jianmin Bao, Wengang Zhou, Dongdong Chen, Dong Chen, Lu Yuan, and Houqiang Li. Semantic image synthesis via diffusion models. arXiv preprint arXiv:2207.00050, 2022.
* [42] Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2022.
* [43] Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, and Tong Sun. Towards language-free training for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17907–17917, 2022.
* [44] Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In European conference on computer vision, pages 597–613. Springer, 2016.
* [45] Peihao Zhu, Rameen Abdal, Yipeng Qin, and Peter Wonka. Sean: Image synthesis with semantic region-adaptive normalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5104–5113, 2020.
* [46] Zhengxia Zou, Tianyang Shi, Shuang Qiu, Yi Yuan, and Zhenwei Shi. Stylized neural painting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15689–15698, 2021.
Supplementary Material
High-Fidelity Guided Image Synthesis with Latent Diffusion Models
## Appendix A Additional Results
In this section, we provide additional results which could not be included in
the main paper due to space constraints. In particular, we note that baseline
methods like SDEdit [23] can often be run using different values of the
hyperparameter $t_{0}$. We therefore provide additional results comparing the
performance of SDEdit at different $t_{0}\in[0,1]$ (refer Sec. A.1).
Additionally, we introduce some custom baselines (which could be used for
improving the realism of final image outputs) and show results comparing their
output performance with our approach (refer Sec. A.2).
### A.1 Additional Comparisons with SDEdit
Recall, given a stroke painting $y$, SDEdit [23] follows an inversion-based
approach for performing guided image synthesis. In particular, the generative
prior is introduced by first passing the painting $y$ through the forward
diffusion pass $y\rightarrow y_{t_{0}}$ [16, 37], and then performing reverse
diffusion $y_{t_{0}}\rightarrow y_{0}$ to get the output image $x=y_{0}$. Due
to space constraints, we primarily use the standard hyperparameter value of
$t_{0}=0.8$ in the main paper. In this section, we provide additional results
which comprehensively compare our approach with SDEdit [23] under changing
values of $t_{0}$.
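The SDEdit pipeline described above can be sketched schematically as follows; the linear noise schedule and the `denoise_step` callback are toy stand-ins, not the schedule or denoiser of the actual models.

```python
import numpy as np

def sdedit_toy(y, denoise_step, t0=0.8, T=50, seed=0):
    """Schematic SDEdit: diffuse the painting y to noise level t0
    (y -> y_{t0}), then run the reverse chain y_{t0} -> y_0 and return
    y_0 as the output image x.  Larger t0 discards more of y, trading
    faithfulness for realism."""
    rng = np.random.default_rng(seed)
    t_enc = int(t0 * T)
    sigma = t0                                 # toy: noise std grows linearly in t0
    # toy forward diffusion to level t0
    z = np.sqrt(1.0 - sigma ** 2) * y + sigma * rng.normal(size=np.shape(y))
    for t in reversed(range(t_enc)):           # toy reverse diffusion
        z = denoise_step(z, t)
    return z
```

This makes the $t_{0}$ tradeoff of Figs. 9-10 explicit: at $t_{0}=0$ the painting is returned unchanged, while at $t_{0}\to 1$ the output is sampled almost from scratch.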
_Qualitative Comparisons._ Results are shown in Fig. 9, 10. We observe that
for lower values of $t_{0}$, SDEdit generates outputs which, though highly
faithful to the reference painting, lack details and are only simplistic
renditions of the target domain. Increasing the value of the hyperparameter
$t_{0}$ helps improve realism but the outputs become less and less faithful
with the reference painting. In contrast, we find that the proposed approach
leads to outputs which are both _faithful_ to the reference painting as well
as exhibit high _realism_ _w.r.t_ the target domain (which is generated using
only text prompt conditioning).
_Quantitative Comparisons._ In addition to qualitative results, we also report
quantitative results by analysing the relationship between the _faithfulness_
$\mathcal{F}$ and _realism_ $\mathcal{R}$ metrics (refer Sec. 4.1 of main
paper), under changing values of the hyperparameter $t_{0}$. Results are shown
in Fig. 8. We observe that as compared to prior works, our method provides the
best tradeoff between generating _realistic_ outputs and maintaining
_faithfulness_ with the provided reference painting.
Figure 8: _Visualizing faithfulness-realism tradeoff._ We analyse the tradeoff
between faithfulness-realism distances for different methods (note that lower
is better for both metrics). We observe that as compared to prior works, our
method provides the best tradeoff between generating realistic outputs and
maintaining faithfulness with the provided reference painting.
Figure 9: _Additional comparisons._ We provide comprehensive comparisons with
SDEdit [23] under changing value of hyperparameter $t_{0}$. We find that
SDEdit [23] either generates faithful but cartoon-like outputs for low
$t_{0}$, or, generates realistic but unfaithful outputs at high $t_{0}$. In
contrast, our approach leads to outputs which are both realistic (_w.r.t_ the
target domain) as well as faithful (to the provided reference).
Figure 10: _Additional comparisons._ We provide comprehensive comparisons with
SDEdit [23] under changing value of hyperparameter $t_{0}$. We find that
SDEdit [23] either generates faithful but cartoon-like outputs for low
$t_{0}$, or, generates realistic but unfaithful outputs at high $t_{0}$. In
contrast, our approach leads to outputs which are both realistic (_w.r.t_ the
target domain) as well as faithful (to the provided reference).
### A.2 Comparison with Custom Baselines
Figure 11: _Comparison with Custom Baselines - AttnRW [10]._ We compare the
performance of our method with the Attention Reweighting (AttnRW) approach for
increasing realism _w.r.t_ the target domain. We find that increasing the
weight of cross attention maps corresponding to the domain-specific text
tokens (_e.g_. photo in above), leads to improved realism of the generated
outputs. However, we note that certain blurry details persist _e.g_. grass in
row 1-4. Also, the increase in realism is accompanied by some image artifacts
_e.g_. blotched regions in row 1-4, image-in-image artifacts in row 4-8 _etc_.
In contrast, our approach improves output realism in a more coherent manner.
Figure 12: _Comparison with Custom Baselines - CFG_ [12]. We analyse the
impact of increasing the classifier-free guidance scale $\alpha_{cfg}$ on
outputs generated using SDEdit [23] and our method. We find that while
increasing the value of $\alpha_{cfg}$ leads to an increase in the level of detail,
the final outputs still represent simplistic representations of the target
domain (row-3). Furthermore, as the value of $\alpha_{cfg}$ is increased, the
faithfulness with respect to the reference painting is compromised (_e.g_. red
regions in row-1).
In this section, we introduce some custom methods (as baselines) for
increasing the realism of outputs generated with SDEdit [23], and then compare
their performance with our approach. In particular, we show additional
comparisons with the following custom baselines:
* •
_Attention Re-weighting (AttnRW)_ [10] wherein the realism _w.r.t_ the target
domain is enhanced by increasing the attention weighting for the corresponding
domain specific text tokens (_e.g_. photo, painting _etc_.). For instance, if
the text prompt says _“a photo of a tree”_ , then we aim to increase the
realism of the generated outputs by increasing the weightage of the cross-
attention maps corresponding to the word _“photo”_ [10]. Results are shown
in Fig. 11. We observe that while increasing the weightage of domain specific
text tokens (_e.g_. photo, painting _etc_.) helps improve the realism of the
output images to some extent, the final images still lack details and certain
blurry regions still persist (_e.g_. grass in row-1). Furthermore, the
increase in realism is accompanied by some image artifacts _e.g_. blotched
image regions in row 1-4, image-in-image artifacts in row 4-8 _etc_. In
contrast, we find that our method provides a more practical approach for
increasing the output realism in a semantically coherent manner.
* •
_Increasing Classifier Guidance Scale_ [12], wherein we attempt to increase
the realism of the SDEdit [23] outputs by increasing the scale of classifier
free guidance used during the reverse diffusion process. Results are shown in
Fig. 12. We observe that while increasing the scale of classifier free
guidance improves the level of detail in the generated images, the final
outputs still resemble cartoon-like or simplistic representations of the
target domain. Furthermore, we note that our approach can also benefit from
an increased guidance scale to raise the level of fine-grained detail in the
output images (_e.g_. details of the castle, water reflections in Fig. 12).
## Appendix B Experiment Details
### B.1 Implementation Details
In this section, we provide further details for the implementation of our
approach as well as other baselines used while reporting results in the main
paper.
Ours. We use publicly available text-conditioned latent diffusion models [40,
30] for implementing the proposed approach in the main paper. The constrained
optimization is performed using gradient descent with the Adam [17] optimizer
and a number of gradient steps $N_{grad}\in[20,60]$. While several
formulations of the distance measure $\mathcal{L}$ and painting function $f$
are possible (refer Sec. C), we find that simply approximating $\mathcal{L}$
with the mean squared distance and $f$ with a convolution by a Gaussian kernel
gives the fastest inference-time performance with our method.
For consistency with prior works, we use the non-differentiable painting
function from SDEdit [23] while reporting quantitative results. All results
are reported using the DDIM sampling [37] with 50 inference steps for
performing the reverse diffusion process.
SDEdit[23]. We use the standard image-to-image pipeline from the open-source
_diffusers_ library [40] for reporting results for _SDEdit_ [23] with
different values of hyperparameter $t_{0}\in[0,1]$. Similar to our method, all
results are reported at $512\times 512$ resolution using DDIM sampling [37]
with 50 inference steps for performing the reverse diffusion process. Unless
otherwise specified, a classifier-free guidance scale [12] of $7.5$ is used
for all experiments.
SDEdit + Loopback[4]. We use the previously described SDEdit implementation
and iteratively reperform guided synthesis with the previous diffusion outputs
to improve realism of the generated outputs. In particular, we use
$N_{iter}=4$ iterations for the iterative process. Also, similar to [4], in
order to increase the realism of generated outputs with each iteration, the
hyperparameter $t_{0}$ is updated as,
$\displaystyle t_{0}^{n+1}\leftarrow\min\left(t_{0}^{n}\cdot k,\,1.0\right),\quad k\in[1.0,1.1]$ (11)
where $n\in[1,N_{iter}]$ is the iteration number. Unless otherwise specified,
we use the standard hyperparameter selection of $k=1.05$ and $t^{n=1}_{0}=0.8$
for our experiments.
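The schedule of Eq. (11) can be sketched directly; the defaults below follow the stated $k=1.05$ and $t_{0}^{n=1}=0.8$.

```python
def loopback_t0_schedule(t0_init=0.8, k=1.05, n_iter=4):
    """Return the t0 used at each loopback iteration per Eq. (11):
    t0^{n+1} = min(t0^n * k, 1.0)."""
    schedule, t0 = [], t0_init
    for _ in range(n_iter):
        schedule.append(t0)
        t0 = min(t0 * k, 1.0)
    return schedule
```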
ILVR [5]. The original ILVR [5] algorithm was proposed for iterative
refinement with diffusion models in pixel space. We adapt the ILVR
implementation for inference with latent diffusion models [30] for the
purposes of this paper. In particular, given a reference painting $y$, the
original ILVR algorithm modifies the diffusion output $x_{t}$ (in pixel space)
at any timestep $t$ during reverse diffusion process as,
$\displaystyle\tilde{x}_{t}=\phi_{N}(y_{t})+x_{t}-\phi_{N}(x_{t}),\quad
y_{t}\sim q(y_{t}\mid y)$ (12)
where $q(y_{t}\mid y)$ represents the forward diffusion process from
$y\rightarrow y_{t}$, and $\phi_{N}(\cdot)$ is a low-pass filter achieved by
scaling down the image by a factor of $N$ and then upsampling it back to the original
dimensions. Assuming a latent diffusion model with encoder $\mathcal{E}$ and
decoder $\mathcal{D}$, we simply adapt the above update in latent space as
follows,
$\displaystyle x_{t}=\mathcal{D}(z_{t})$ (13) $\displaystyle
z_{y}=\mathcal{E}(y),\ z_{y_{t}}\sim q(z_{y_{t}}\mid z_{y})$ (14)
$\displaystyle\tilde{x}_{t}=\phi_{N}(y_{t})+x_{t}-\phi_{N}(x_{t}),\quad
y_{t}=\mathcal{D}(z_{y_{t}})$ (15)
$\displaystyle\tilde{z}_{t}=\mathcal{E}(\tilde{x}_{t})$ (16)
where Eq. 13, 16 map the latent features $z_{t}$ to pixel space $x_{t}$, and
vice-versa. Eq. 14 computes $y_{t}$ from $y$ by first mapping $y$ to $z_{y}$,
computing the forward diffusion $z_{y}\rightarrow z_{y_{t}}$ and then
reverting back $z_{y_{t}}$ to $y_{t}$. Finally, Eq. 15 is simply the original
update rule from the ILVR algorithm [5]. A hyperparameter value of $N=4$ is
used while reporting results.
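The filtering step of Eqs. 12 and 15 can be sketched compactly in numpy; this is illustrative only, with a block-average down/upsample standing in for whatever interpolation the actual $\phi_{N}$ uses.

```python
import numpy as np

def phi_N(x, N=4):
    """Low-pass filter: average over NxN blocks, then upsample back."""
    h, w, c = x.shape
    small = x.reshape(h // N, N, w // N, N, c).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, N, axis=0), N, axis=1)

def ilvr_update(x_t, y_t, N=4):
    """Eq. (15): keep the high frequencies of x_t but replace its low
    frequencies with those of the reference y_t."""
    return phi_N(y_t, N) + x_t - phi_N(x_t, N)
```

Since `phi_N` is linear and idempotent, the updated image exactly inherits the low-frequency content of the reference while retaining the diffusion output's high frequencies.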
### B.2 Quantitative Experiments
Data Collection. Since there is no predefined dataset for guided image
synthesis with user-scribbles and text prompts, we create our own dataset for
reporting quantitative results. In particular, we first collect a set of 100
stroke painting and text prompt pairs from diverse data modalities with the
help of actual human users. We then augment the collected data using a prompt-
engineering approach to increase the diversity of the collected data pairs. In
particular, the text prompt for each data-pair is modified in order to replace
the domain specific text words (_e.g_. photo, painting) with pre-designed
target domain templates, while keeping the underlying content the same. During
prompt engineering, the target domain template is chosen randomly from _[
‘photo’,‘watercolor painting’, ‘Vincent Van Gogh painting’,‘children
drawing’,‘high resolution disney scene’, ‘high resolution anime scene’,
‘fantasy scene’,‘colored pencil sketch’]_. For each data pair, we then sample
four random guided image synthesis outputs for each baseline and our method.
The resulting dataset consists of 800 (painting, text-prompt) pairs and 3200
overall samples from diverse data modalities for final method evaluation.
Quantitative Metrics. In order to evaluate the performance of our approach, we
introduce two metrics for measuring the _faithfulness_ of the output _w.r.t_
the reference painting, and the _realism_ of the generated samples _w.r.t_ the
target domain (specified through text-only conditioning). In particular, given
an input painting $y$ and output real image prediction $x$, we define
faithfulness distance $\mathcal{F}(x,y)$ as,
$\displaystyle\mathcal{F}(x,y)=\mathcal{L}_{2}(f(x),y)$ (17)
where $f(.)$ is the painting function. Thus an output image $x$ is said to
have high faithfulness with the given painting $y$ if upon painting the final
output $x$ we get a painting $\tilde{y}=f(x)$ which is similar to the original
target painting $y$ (Fig. 13).
The painting function $f$ is implemented using the human stroke-simulation
algorithm from SDEdit [23]. In particular, given a $256\times 256$ input
image, the output painting is computed by first passing the image through a
median filter with kernel size $23$, and then performing color quantization to
reduce the number of colors to $20$ using an adaptive palette.
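An illustrative Pillow re-implementation of the painting function $f$ and the faithfulness distance $\mathcal{F}$ (not the authors' code; Pillow's default adaptive median-cut palette stands in for the exact quantizer):

```python
import numpy as np
from PIL import Image, ImageFilter

def painting_fn(image: Image.Image) -> Image.Image:
    """Approximate human stroke painting: median filter (size 23) followed
    by adaptive-palette quantization to 20 colors."""
    smoothed = image.filter(ImageFilter.MedianFilter(size=23))
    return smoothed.quantize(colors=20).convert("RGB")

def faithfulness(x: Image.Image, y: Image.Image) -> float:
    """F(x, y) = mean squared distance between the painted output f(x)
    and the reference painting y (Eq. 17)."""
    fx = np.asarray(painting_fn(x), dtype=np.float64)
    ya = np.asarray(y, dtype=np.float64)
    return float(np.mean((fx - ya) ** 2))
```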
Figure 13: Visualizing input painting $y$, output $x$ and painted
reconstruction $\tilde{y}=f(x)$. The goal is to generate an output $x$ which
is realistic and for which painting loss $\mathcal{L}_{2}(f(x),y)$ is
minimized.
Figure 14: _Visualizing the effect of GradOP on cross attention maps._ We
analyse the effect of our approach on the cross-attention maps generated
during the reverse diffusion process. We find that our method leads to cross-
attention outputs which help the model pay better attention to desired image
areas in the reference painting. For instance, in the first example, the
cross-attention features show high overlap with the desired _dog_ and _field_
regions. In contrast, the cross attention maps from SDEdit [23] reveal that
the model is not paying adequate attention to the desired image areas (_e.g_.
_field_ in row-1, _tree and forest_ in row-3) while generating the final
output.
Similarly, given a set of output data samples $\mathcal{S}({y,\tau_{text}})$
conditioned on both painting $y$ and text $\tau_{text}$, and,
$\mathcal{S}({\tau_{text}})$ conditioned only on the text, the _realism_
$\mathcal{R}$ is defined as,
$\displaystyle\mathcal{R}(\mathcal{S}({y,\tau_{text}}))=FID\left(\mathcal{S}({y,\tau_{text}}),\mathcal{S}({\tau_{text}})\right)$
(18)
where $FID$ represents the Fréchet inception distance [11].
Please note that while the above defined _realism_ distance measure
$\mathcal{R}$ captures the realism with respect to the target domain, we
expect the computed $FID$ scores to be higher than those expected of
unconditioned image outputs. This is because while the proposed method
generates outputs which seem realistic to human eyes, the variance of output
distribution is significantly lower than that of real images. The decreased
variance in output images occurs simply because the layout and color
composition are predominantly fixed as a result of additional conditioning on
the stroke painting $y$. In contrast, natural images or images conditioned
only on the text prompt have a much higher diversity in terms of scene layout
and the overall color composition. We therefore try to overcome the lack of
diversity in generated image outputs by performing random data augmentations
(random horizontal flip and random resized crop of size $448\times 448$ on a
$512\times 512$ image) while computing the final realism scores across
different methods. (Note that while this helps increase the diversity in
scene layout, the diversity in color composition is still lower than that of
real images or image outputs conditioned only on the text prompt.)
Human User Study. In addition to reporting quantitative results using the
above defined measures for _faithfulness_ and _realism_ , similar to [23], we
also perform a human user study wherein the _realism_ and the overall
satisfaction score (_faithfulness_ \+ _realism_) are evaluated by actual human
users. For the _realism_ scores, given an input text prompt (with target
domain $\tau_{domain}$ _e.g_. $\tau_{domain}=$ ‘ _photo_ ’) and sample images
conditioned only on the text prompt, the participants were shown a pair of
image generation outputs comparing our method with prior works. For each pair,
the human subject is then asked to select the output image which is more
realistic with respect to the target domain ($\tau_{domain}$). Similarly, for
computing the overall satisfaction scores, given an input stroke painting,
text prompt and sample images conditioned only on the text prompt, the
participants were shown a pair of image generation outputs comparing our
method with prior works. For each pair, the instruction is: “Given the input
painting and text prompt, how would you imagine this image to look like in
reality? Your selection should be based on how realistic and less blurry the
image is (please check level of details), consistency with the target domain
($\tau_{domain}$) and whether it is faithful with the reference painting in
terms of scene layout, color composition”. For each task (_e.g_. computing
overall satisfaction score), the collected data samples (discussed above) were
divided among 50 human participants, who were given an unlimited time in order
to ensure high quality of the final results. Additionally, in order to remove
data noise, we use a repeated comparison (control seed) for each user.
Responses of users who answer differently to this repeated seed are discarded
while reporting the final results.
## Appendix C Method Analysis: Continued
Figure 15: _Analysing role of painting guidance in semantic control._ We
analyse the effect of using an underlying reference painting as guidance in
controlling the semantics of different image areas using cross-attention based
correspondence approach presented in the main paper (refer Sec. 3.3 in main
paper). We find that additional guidance using reference stroke painting helps
the user gain much accurate control over the semantics of different image
regions (_e.g_. lake in row-3,4, mountains in row-3, rocks, forest in row-4
_etc_.).
### C.1 Effect of GradOP on Cross Attention Maps
As shown by Hertz _et al_. [10] and our results, the cross-attention maps
corresponding to different words in the input text prompt play a key role in
deciding the overall semantic contents of the final image output. In this
section, we try to analyse how the proposed approach leads to more realistic
image content generation by analysing the average cross-attention maps
generated while performing the reverse diffusion process with SDEdit [23] and
our method.
Results are shown in Fig. 14. We find that our method leads to cross-attention
outputs which help the model pay better attention to desired image areas in
the reference painting. For instance, in the first example, the cross-
attention features show high overlap with the desired _dog_ and _field_
regions. In contrast, the cross attention maps from SDEdit [23] reveal that
the model is not paying adequate attention to the some desired image areas
(_e.g_. _field_ in row-1, _forest_ in row-3) while generating the final
output.
### C.2 Semantic Control without Painting Guidance
Recall that in addition to performing _high-fidelity_ guided image synthesis,
we also show that by simply defining a cross attention based correspondence
between the input text tokens and the user painting, it is possible to control
the semantics of different image regions without the need for any semantic
segmentation based conditional training. In this section, we analyse whether
similar semantic control is possible without having additional guidance
through a stroke painting. In particular, we wish to analyse whether such
fine-grained control is only possible when providing additional guidance
through the reference stroke painting.
To answer this question, we compare the outputs generated through semantic
control with and without using a reference painting for the guided synthesis
process. Results are shown in Fig. 15. We observe that while it is feasible
to define the semantics of one or two parts of the image accurately
using _cross-attention_ correspondence, the performance decreases as the
number of semantic labels increases (_e.g_. lake in row-3,4, mountains in
row-3, rocks, forest in row-4 _etc_.). In contrast, we find that the use of a
reference painting results in much better control over the semantics of
different image regions. We believe that the same is because the use of a
reference painting sets up a generic semantic structure for the output image
which can then be easily refined by defining a cross-attention based
correspondence. For instance, in row-4 of Fig. 15, adding the blue strokes for
lake region sets up a semantic prior which constrains the inference of output
semantics to semantic categories like river, lake, sea, stream, blue-green
grass, blue pavement _etc_. The use of semantic correspondence then helps
refine these output semantics to what is actually desired by the user. In
contrast, without stroke guidance, the initial semantics for the lake region
could be much more diverse (_e.g_. sand, rocky terrain in row-4), and thereby more
challenging to refine through the proposed semantic correspondence strategy.
### C.3 Inference Time Analysis
We report a comparison of the average inference times required for each output
image in Tab. 2. All results are reported using the DDIM sampling [37] with 50
inference steps, on a single Nvidia RTX 3090 GPU.
Method | Inference Time (s) _w/o mixed precision_ | Inference Time (s) _with mixed precision_ [24]
---|---|---
SDEdit [23] | 6.32 s | 4.45 s
Loopback [4] | 27.2 s | 20.46 s
ILVR [5] | 8.24 s | 6.17 s
GradOP (Ours) | 20.1 s | 15.8 s
GradOP+ (Ours) | 12.3 s | 8.86 s
Table 2: _Inference time analysis_. Comparing inference time required for
generating each output image for different methods. All results are reported
with DDIM sampling and 50 inference steps.
### C.4 Variation in Painting Function
Please recall that a key requirement for solving the proposed constrained
optimization in Sec. 3 is to define a differentiable painting function $f$,
which provides a good approximation for _“how a human would paint a given
image with coarse user-scribbles”_. In this section, we therefore look at some
possible formulations for obtaining an approximation of the painting function
in a differentiable manner, and compare the corresponding output results.
Figure 16: Analysing performance for different differentiable approximations
of the painting function $f$. We find that while using a more accurate
painting function [23] (Col-2) leads to slightly more details (_e.g_. notice
the gradient of the grass regions in row-1, detailed shadows of the castle and
island in row-2), in practice simpler approximations (_e.g_. Gaussian Blur)
also produce highly realistic outputs while allowing for much faster
inference times.
_Painting Function Formulation._ In particular, we consider three main
formulations for constructing a differentiable painting function $f$, 1)
_Median Filter + Color Quantization_ , wherein we implement a differentiable
approximation of the human-stroke simulation algorithm in [23]. In particular,
given a reference painting $y$ and output $x$, we first pass $x$ through a
median filter of size 23. We then pass the output of the last step through a
differentiable color quantization function which maps the image pixels to
their nearest $rgb$ value in the painting $y$ (that is, we are performing
color quantization _w.r.t_ the palette of the reference painting.) 2) _Median
Filter_ wherein we use the median filter alone for approximating the painting
function, and 3) _Gaussian Blur_ wherein we approximate the painting function
through a convolution operation with a Gaussian kernel (size 31 and
$\sigma=7$).
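The third formulation can be sketched in numpy as a separable Gaussian blur (illustrative only; the padding behavior at the borders is an implementation choice here, and zero padding darkens the edges):

```python
import numpy as np

def gaussian_kernel(size=31, sigma=7.0):
    """Normalized 1-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def gaussian_paint(x, size=31, sigma=7.0):
    """Blur each channel of an (H, W, C) image with a separable Gaussian
    (two 1-D passes), matching the described size-31, sigma=7 kernel."""
    k = gaussian_kernel(size, sigma)
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"),
                              0, x.astype(np.float64))
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"),
                              1, out)
    return out
```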
Results are shown in Fig. 16. We observe that while the use of a more accurate
human-stroke simulation function from [23] allows for the generation of
slightly more detailed outputs (_e.g_. notice the gradient of the grass
regions in row-1, detailed shadows of the castle and island in row-2), it
increases the overall inference time required for the proposed gradient
descent optimization (40.7s on _GradOP+_). In contrast, we find that much
simpler approximations (_e.g_. Median Filter, Gaussian Blur) for the
painting function also produce highly realistic outputs while allowing for
much faster inference times (8.86s, 14.1s on _GradOP+_ for Gaussian Blur and
Median Filter respectively).
# Precession of magnetars: dynamical evolutions and modulations on polarized
electromagnetic waves
Yong Gao,1,2 Lijing Shao,2,3,4 Gregory Desvignes,3,5 David Ian Jones,6 Michael
Kramer,3,7 and Garvin Yim2,6
1Department of Astronomy, School of Physics, Peking University, Beijing
100871, China
2Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing
100871, China
3Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn,
Germany
4National Astronomical Observatories, Chinese Academy of Sciences, Beijing
100012, China
5Laboratoire d’Études Spatiales et d’Instrumentation en Astrophysique,
Observatoire de Paris, Université Paris-Sciences-et-Lettres,
Centre National de la Recherche Scientifique, Sorbonne Université, Université
de Paris, 5 place Jules Janssen, 92195 Meudon, France
6Mathematical Sciences and STAG Research Centre, University of Southampton,
Southampton SO17 1BJ, UK
7Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The
University of Manchester, Manchester M13 9PL, UK. E-mail: <EMAIL_ADDRESS> (LS)
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Magnetars are conjectured to be highly magnetized neutron stars (NSs). Strong
internal magnetic fields and elasticity in the crust may deform the stars and
lead to free precession. We study the precession dynamics of triaxially-
deformed NSs, incorporating the near-field and the far-field electromagnetic
torques. We obtain timing residuals for different NS geometries and torques.
We also investigate the polarized X-ray and radio signals from precessing
magnetars. The modulations on the Stokes parameters are obtained for thermal
X-rays emitted from the surface of magnetars. For radio signals, we apply the
simple rotating vector model (RVM) to give the modulations on the position
angle (PA) of the polarization. Our results are comprehensive, ready to be
used to search for magnetar precession with timing data and polarizations of
X-ray and radio emissions. Future observations of precessing magnetars will
give us valuable information on the geometry and the strength of the strong
magnetic fields, the emission geometry, as well as the equation of state (EoS)
of NSs.
###### keywords:
stars: magnetars – methods: analytical – X-rays: general – polarization
pubyear: 2022; pagerange: Precession of magnetars: dynamical evolutions and
modulations on polarized electromagnetic waves–A
## 1 Introduction
The free precession of neutron stars (NSs) has been studied since the
discovery of radio pulsars. The wobbling motion caused by the free precession
is closely related to the structure of NSs, such as the elasticity in the
crust (Ushomirsky et al., 2000; Cutler et al., 2003), the strength and the
geometry of the internal and external magnetic fields (Haskell et al., 2008;
Mastrano et al., 2013), the superfluid and superconducting states of the fluid
interior (Pines, 1974; Shaham, 1977; Sedrakian et al., 1998; Link, 2007;
Glampedakis et al., 2008), as well as the evolution of the magnetic
inclination angle (Mestel & Takhar, 1972; Goldreich, 1970; Melatos, 2000;
Lander & Jones, 2018, 2020).
Although many theoretical works have been devoted to this field, free
precession has not been firmly observed yet. The most probable evidence of free
precession comes from the radio pulsar PSR B1828$-$11, which shows highly
periodical variations in the pulse phase over a period of $\sim 500\,\rm d$,
accompanied by correlated changes in the beam width of pulses (Stairs et al.,
2000). The data can be well fitted by the free precession model with the
precession-modulated spin-down torque (Jones & Andersson, 2001; Link &
Epstein, 2001; Akgun et al., 2006). Meanwhile, radio observations started to
show that several pulsars, with harmonic features in their timing residuals,
including PSR B1828$-$11, undergo sudden sharp changes in pulse profiles
which, at least for PSR B1931+24, correlates with sharp changes in spin-down
torque (Kramer et al., 2006; Lyne et al., 2010; Shaw et al., 2022). Lyne et
al. (2010) argued that the radio emissions switch back and forth between two
different magnetospheric states, which leads to the harmonic timing residuals
and changes in the pulse shape. The sharpness of the transitions is regarded
as strong evidence that free precession is not a viable mechanism to explain
the harmonic timing residuals in PSR B1828$-$11. Further analysis by Stairs et
al. (2019) also disfavored the free precession scenario. Mode switching is an
appealing and reasonable explanation of the harmonic timing data and shape
variations, but the physics behind the sharp changes and the regular clocking
of the transitions needs further development. Jones (2012) provided an argument as
to why abrupt magnetospheric changes can occur in precessing stars. In this
regard, the free precession of PSR B1828$-$11 cannot be ruled out firmly yet
(Ashton et al., 2016, 2017).
Different from normal pulsars, magnetars are a class of young NSs with strong
magnetic fields, typically above $10^{14}\,\rm G$, and long spin periods,
typically $2$–$10\,\rm s$ (Kaspi & Beloborodov, 2017). The strong internal
magnetic fields may distort the star (Melatos, 1997, 2000). Due to the young
age and energetic processes, the star may also develop large elastic
deformation in the crust. Glitches or crust fracture may excite wobble angles
and set the deformed magnetars into free precession. Strong magnetic fields
also indicate large external electromagnetic torques, which include the far-
field torque dissipating the kinetic energy and the near-field torque
originating from the moment of inertia of the electromagnetic field itself
(Goldreich, 1970; Good & Ng, 1985; Melatos, 1997). The forced precession can
be significant for magnetars due to their strong electromagnetic torques, but
this effect would not affect the free precession if the object has even
stronger internal magnetic fields reaching $10^{16}\,\rm G$ (Makishima et al.,
2014). We will investigate both free and forced precession in this work.
Early observations of timing irregularities of magnetars motivated Melatos
(1997) to suggest that precession is common in anomalous X-ray pulsar (AXP) populations. However,
further observations from X-ray timing in the band $\sim 1$–$10\,\rm keV$ have
ruled out precession at an amplitude level above the root-mean-square
amplitude of timing noise (Kaspi et al., 1999; Kaspi et al., 2001). Small-
amplitude precession that is buried in the timing noise of magnetars is still
possible. Possible evidence of magnetar precession was found by the combined
timing analysis of the hard and soft X-rays from three magnetars, 4U 0142+61
(Makishima et al., 2014), 1E 1547.0$-$5408 (Makishima et al., 2021a), and SGR
1900+14 (Makishima et al., 2021b). Phase modulations of the hard X-rays are
observed in those magnetars, which may indicate large magnetic deformations on
the order of $\sim 10^{-4}$ (Makishima et al., 2014; Makishima et al., 2021a;
Makishima et al., 2021b). Recently, the precession of magnetars was also used
to interpret the possible periodicity found in fast radio bursts (FRBs; Levin
et al., 2020; Zanazzi & Lai, 2020; Wasserman et al., 2022).
All the above evidence of magnetar precession comes from timing residuals,
where the rotation phase is modulated by precession. During the precession,
the body itself also precesses around the deformation axis, which leads to a
swing of the emission region. The polarization directly maps the emission
geometry. Thus it is also promising to reveal precession from variations of
the polarization.
The soft component of the X-ray emission ($\sim 1$–$10\,\rm keV$) of magnetars
is usually interpreted as thermal emission from the magnetar surface, which is
reprocessed by the strongly magnetized atmosphere (Thompson et al., 2002;
Turolla et al., 2015). Numerous studies have been dedicated to investigating the
opacities and radiative transfer in strongly magnetized atmospheres, showing
that the surface emission could be highly polarized (Meszaros, 1992; Pavlov &
Zavlin, 2000; Ho & Lai, 2003; Lai & Ho, 2003; Taverna et al., 2015). Recently,
the IXPE observation of 4U 0142+61 gave the first ever measurement of
polarized emission from a magnetar in the soft X-ray band (Taverna et al.,
2022). The observations provided us with completely new information about the NS
surface and magnetosphere and showed some evidence of vacuum birefringence in
a strong magnetic field.
Radio emission from magnetars is only observed in transient magnetars after
energetic bursts, and the signal itself is also transient (Camilo et al.,
2006; Camilo et al., 2007; Levin et al., 2010; Eatough et al., 2013; Lower et
al., 2020). The emission is highly polarized. By applying the rotating vector
model (RVM; Radhakrishnan & Cooke, 1969), useful information was obtained on
the magnetic field geometry of magnetars (Kramer et al., 2007; Levin et al.,
2012; Lower et al., 2021).
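For reference, the RVM position-angle swing used in these analyses (Radhakrishnan & Cooke, 1969) can be sketched as follows; sign and angle conventions vary between authors, so this is one common form rather than the paper's exact parametrization:

```python
import numpy as np

def rvm_pa(phi, alpha, zeta, phi0=0.0, pa0=0.0):
    """Position angle of linear polarization in the rotating vector model.
    alpha: magnetic inclination; zeta: angle between spin axis and line of
    sight; phi: rotation phase (all in radians)."""
    num = np.sin(alpha) * np.sin(phi - phi0)
    den = (np.sin(zeta) * np.cos(alpha)
           - np.cos(zeta) * np.sin(alpha) * np.cos(phi - phi0))
    return pa0 + np.arctan2(num, den)
```

A standard consistency check: the swing is steepest at $\phi=\phi_{0}$, with slope $\sin\alpha/\sin(\zeta-\alpha)$.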
In this paper, we aim to systematically study the dynamics of precessing
magnetars and model the observational consequences in timing and polarization
of electromagnetic waves. The structure of the paper is organized as follows.
In Sec. 2, we discuss the possible deformation of magnetars. The free and
forced precession dynamics of general triaxially-deformed magnetars are
studied in Sec. 3. We give the phase modulations and timing residuals due to
precession in Sec. 4. The modulations on polarized X-ray and radio signals are
investigated in Sec. 5. We give discussions and conclusions in Sec. 6 and Sec.
7 respectively.
## 2 Deformation of magnetars
To make the body precess, NSs must have some deformation misaligned with the
rotational bulge. The strains in the solid crust and the strong internal
magnetic fields are usually considered as potential causes of the deformation.
In the following, we consider the possible sources of deformation. In the body
frame, we can write the moment of inertia tensor of a slowly rotating NS as
(Jones & Andersson, 2001; Wasserman et al., 2022)
$I_{ij}=I_{0}\left[\delta_{ij}+\epsilon_{\mathrm{rot}}\left(\frac{1}{3}\delta_{ij}-\hat{\boldsymbol{\omega}}_{i}\hat{\boldsymbol{\omega}}_{j}\right)+M_{ij}\right]\,.$
(1)
Here $I_{0}$ is the spherical part of the moment of inertia. The rotational
deformation is quadrupolar that is symmetric about the spin axis
$\boldsymbol{\hat{\omega}}$. We let $\epsilon_{\rm rot}$ denote the
ellipticity sourced from the centrifugal force, while the magnetic and elastic
deformations are not necessarily axisymmetric. Thus, we use the symmetric and
trace-free (STF) tensor $M_{ij}$ to describe the deformations sourced from the
internal magnetic field, elasticity in the crust, or the combination of both.
For magnetars, the spin period $P$ is on the order of several seconds and
$\epsilon_{\rm rot}$ can be approximated as the rotational energy over the
gravitational energy
$\epsilon_{\rm rot}\approx\frac{\omega^{2}R^{3}}{GM}=8.5\times
10^{-9}P_{5}^{-2}R_{6}^{3}/M_{1.4}\,,$ (2)
where $\omega=2\pi/P$ is the spin angular frequency, $P_{5}$ is the spin
period in units of $5\,\rm s$, $R_{6}$ is the NS radius $R$ in units of
$10^{6}\,\rm cm$, and $M_{1.4}$ is the NS mass $M$ in units of
$1.4\,M_{\odot}$. The centrifugal deformation is of no importance for free
precession (Glampedakis & Jones, 2010; Wasserman et al., 2022), which can be
understood as follows. The angular momentum for a freely-precessing NS can be
written as
$\displaystyle L_{i}$
$\displaystyle=I_{0}\left[\left(1-\frac{2\epsilon_{\mathrm{rot}}}{3}\right)\delta_{ij}\omega_{j}+M_{ij}\omega_{j}\right]=I_{0}^{\prime}\left(\delta_{ij}+M_{ij}^{\prime}\right)\omega_{j}\,,$
(3)
where Einstein summation is used, $I_{0}^{\prime}=I_{0}(1-2\epsilon_{\rm
rot}/3)$, and $M_{ij}^{\prime}=M_{ij}/(1-2\epsilon_{\rm rot}/3)$. Since the
rotational ellipticity $\epsilon_{\rm rot}$ is quite small, we can absorb the
rotational bulges into the spherical part and approximately rewrite the moment
of inertia tensor as
$I_{ij}\simeq I_{0}\left(\delta_{ij}+M_{ij}\right)\,.$ (4)
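As a quick numerical check of Eq. (2), the rotational ellipticity can be evaluated directly; the sketch below (in CGS units, with the fiducial values $P=5\,\rm s$, $R=10^{6}\,\rm cm$, and $M=1.4\,M_{\odot}$ from the text) reproduces the quoted $8.5\times 10^{-9}$ and illustrates why the rotational bulge can be absorbed as above:

```python
# Evaluate the rotational ellipticity of Eq. (2) in CGS units.
# Fiducial values (P = 5 s, R = 10 km, M = 1.4 Msun) follow the text;
# this is an illustrative sketch, not the paper's own code.
import math

G = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33   # solar mass [g]

def epsilon_rot(P, R=1.0e6, M=1.4 * M_SUN):
    """Rotational ellipticity ~ rotational energy over gravitational energy."""
    omega = 2.0 * math.pi / P
    return omega**2 * R**3 / (G * M)

eps = epsilon_rot(5.0)
print(f"epsilon_rot = {eps:.2e}")   # ~8.5e-9, negligible as stated
```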
We let $\boldsymbol{\hat{e}}_{1}$, $\boldsymbol{\hat{e}}_{2}$, and
$\boldsymbol{\hat{e}}_{3}$ denote the three unit eigenvectors along the
principal axes of moment of inertia tensor $I_{ij}$ with corresponding
eigenvalues $I_{1}\leq I_{2}\leq I_{3}$. The angular velocity is
$\boldsymbol{\omega}=\omega_{1}\boldsymbol{\hat{e}}_{1}+\omega_{2}\boldsymbol{\hat{e}}_{2}+\omega_{3}\boldsymbol{\hat{e}}_{3}$
and the angular momentum is
$\boldsymbol{L}=L_{1}\boldsymbol{\hat{e}}_{1}+L_{2}\boldsymbol{\hat{e}}_{2}+L_{3}\boldsymbol{\hat{e}}_{3}$.
To describe the motion of the body, we define
$\displaystyle\epsilon$ $\displaystyle\equiv\frac{I_{3}-I_{1}}{I_{1}},$
$\displaystyle\delta$
$\displaystyle\equiv\frac{I_{3}(I_{2}-I_{1})}{I_{1}(I_{3}-I_{2})},$
$\displaystyle\theta$ $\displaystyle\equiv\arccos\frac{L_{3}}{L}\,,$ (5)
where $\epsilon$ is the ellipticity, $\delta$ measures the deviation from
axisymmetry, and $\theta$ is the wobble angle between
$\boldsymbol{\hat{e}}_{3}$ and $\boldsymbol{L}$.
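The shape parameters of Eq. (5) are simple to evaluate numerically; the sketch below uses hypothetical principal moments chosen to give $\epsilon=10^{-7}$ and $\delta\simeq 1$, with the angular momentum tilted by $10^{\circ}$ from $\boldsymbol{\hat{e}}_{3}$:

```python
# Compute the ellipticity, triaxiality, and wobble angle of Eq. (5)
# from (hypothetical) principal moments of inertia and a body-frame
# angular momentum vector.
import numpy as np

def shape_parameters(I1, I2, I3, L_vec):
    """Return (epsilon, delta, theta) as defined in Eq. (5)."""
    eps = (I3 - I1) / I1
    delta = I3 * (I2 - I1) / (I1 * (I3 - I2))
    L = np.asarray(L_vec, dtype=float)
    theta = np.arccos(L[2] / np.linalg.norm(L))
    return eps, delta, theta

theta0 = np.radians(10.0)
eps, delta, theta = shape_parameters(
    1.0, 1.0 + 0.5e-7, 1.0 + 1.0e-7,
    [np.sin(theta0), 0.0, np.cos(theta0)])
print(eps, delta, np.degrees(theta))   # ~1e-7, ~1, 10 deg
```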
Before investigating the dynamics of free precession, we first give an estimate of $\epsilon$. The shear stresses of the crystallized solid crust
can prevent a small fraction of the hydrostatic rotational bulge from aligning
with the instantaneous spin axis. We denote the ellipticity sourced from
elastic deformation as $\epsilon_{\rm c}$. The upper limit of $\epsilon_{\rm
c}$ is approximated as (Ushomirsky et al., 2000; Haskell et al., 2006;
Johnson-McDaniel & Owen, 2013; Gittins et al., 2020; Morales & Horowitz, 2022)
$\epsilon_{\mathrm{c}}^{\max}\approx
10^{-6}\left(\frac{\sigma_{\mathrm{br}}}{10^{-1}}\right)\,,$ (6)
where the breaking strain $\sigma_{\rm br}\sim 0.1$ is taken from the
molecular dynamics simulations of crustal fracture in Horowitz & Kadau (2009).
The actual value of $\epsilon_{\rm c}$ depends on the evolution history of the
star and is hard to estimate. It may be much smaller than
$\epsilon_{\mathrm{c}}^{\max}$ since plastic processes may relieve the strain over long-term evolution.
The magnetic fields inside the star create deformation because non-radial
field gradients can support non-radial matter-density gradients in
hydromagnetic equilibrium. However, the strength and geometry of the internal
magnetic fields are still very uncertain. The magnetic ellipticity
$\epsilon_{\rm B}$ is on the order of the magnetic energy over the
gravitational energy
$\epsilon_{\mathrm{B}}=\frac{\kappa H\bar{B}R^{4}}{GM^{2}}\approx 1.93\times
10^{-6}\kappa\,R_{6}^{4}M_{1.4}^{-2}H_{15}\bar{B}_{15}\,,$ (7)
which is a crude estimation but consistent with more rigorous calculations
(see e.g., Haskell et al., 2008; Akgun & Wasserman, 2008; Lander & Jones,
2009; Ciolfi et al., 2010; Mastrano et al., 2013). Here $\bar{B}$ is the
volume average of the internal magnetic field, ${\bar{B}}_{15}$ represents the
magnetic field in units of $10^{15}\,\rm G$, $H=\bar{B}$ for a normal
conducting interior while $H\simeq 10^{15}\,\rm G$ if the core sustains
protons in the type II superconducting state (Wasserman, 2003; Cutler et al.,
2003). The parameter $\kappa$ can be positive or negative depending on the
relative strength between the poloidal and toroidal components of the internal
magnetic field. One immediately notices that magnetars, with their large internal magnetic fields, may have correspondingly large magnetic deformations.
Most earlier studies have been devoted to axisymmetric magnetic fields, regardless of whether the field is poloidal, toroidal, or “mixed”. The
star is deformed into a biaxial shape with the deformation axis along the
dipole field. We relax this axisymmetric assumption and study a more general
case with triaxial deformations and misalignment between the magnetic dipole
and deformation axes. On the one hand, the tilted poloidal-toroidal
configuration is more general and not physically forbidden (Lasky & Melatos,
2013; Wasserman et al., 2022). For instance, Lasky & Melatos (2013) obtained a tilted torus magnetic field from a magnetohydrodynamic (MHD) simulation; the configuration is stable, but the equilibrium state cannot be specified freely (Glampedakis & Lasky, 2016). On the other hand, multipolar magnetic fields may exist in the
interior of magnetars. Mastrano et al. (2013, 2015) found that the mixed odd
and even multipoles can create a deformation that is misaligned with the
magnetic dipole axis even if the magnetic field is axisymmetric. Moreover, the
mixture of elastic and magnetic deformations can produce a triaxial shape
(Wasserman, 2003; Glampedakis & Jones, 2010).
The relative strength between the poloidal and toroidal fields is also not
clear. There is no stable mixed poloidal-toroidal field in NSs for a
barotropic normal fluid (Lander & Jones, 2012). According to MHD simulations,
Braithwaite (2009) argued that an axisymmetric field is stable in stratified
fluid if the poloidal field is much weaker than the toroidal field. In this
case, the deformation is prolate. Lander (2013) presented the first self-
consistent superconducting NS equilibria with poloidal and mixed poloidal-
toroidal fields. The poloidal component was dominant in all the configurations
that Lander (2013) studied.
According to the above discussions on the deformations and the structures of
the internal magnetic fields, we give the following arguments and assumptions.
1. Generally, the deformed NS has a triaxial shape. The biaxial case is only a good approximation if the deformation is along a specific axis.
2. The external magnetic axis is not necessarily aligned with any of the deformation axes.
3. The ellipticity can be positive or negative depending on the magnetic field geometry and possible elastic deformations.
4. The precession of magnetically distorted NSs differs qualitatively from elastic-body precession, although the two have the same mathematical form. There are slow, non-rigid internal motions in addition to the uniform rotation (Mestel & Takhar, 1972; Lander & Jones, 2017). Although these non-rigid motions are important for the evolution of the magnetic inclination angle, we ignore them in this work since they are higher-order effects.
## 3 Dynamics of precession
### 3.1 Free precession
The Euler equation of a freely-precessing body is (Landau & Lifshitz, 1960)
$\dot{\boldsymbol{L}}+\boldsymbol{\omega}\times\boldsymbol{L}=0\,,$ (8)
where the dot denotes the derivative with respect to time $t$. Eq. (8) can be
solved analytically in terms of Jacobian elliptic functions (Landau &
Lifshitz, 1960; Wasserman, 2003; Akgun et al., 2006; Gao et al., 2020;
Wasserman et al., 2022). The angular momentum $\boldsymbol{L}$ and the kinetic
energy $E$ are conserved for free precession. Different branches of the
solutions are determined by the relation between $L^{2}$ and $2EI_{2}$. One
usually sets $\omega_{2}=0$ at the initial time $t=0$. Thus, the solutions are
also equivalently determined by the parameter
$m=\delta\tan^{2}\theta_{0}\,,$ (9)
with $\theta_{0}$ denoting the wobble angle $\theta$ at $t=0$.
When $m<1$ ($L^{2}>2EI_{2}$), the precession is around
$\boldsymbol{\hat{e}}_{3}$ and the components of the unit angular momentum
$\widehat{\boldsymbol{L}}\equiv\boldsymbol{L}/L$ are
$\displaystyle\hat{L}_{1}=\sin\theta_{0}\operatorname{cn}\left(\omega_{\rm
p}t,m\right)\,,$
$\displaystyle\hat{L}_{2}=\sin\theta_{0}\sqrt{1+\delta}\operatorname{sn}\left(\omega_{\rm
p}t,m\right)\,,$
$\displaystyle\hat{L}_{3}=\cos\theta_{0}\operatorname{dn}\left(\omega_{\rm
p}t,m\right)\,,$ (10)
where $\operatorname{cn}$, $\operatorname{sn}$, and $\operatorname{dn}$ are
Jacobi elliptic functions (see Appendix A for more details), and
$\displaystyle\omega_{\mathrm{p}}=\frac{\epsilon
L\cos\theta_{0}}{I_{3}\sqrt{1+\delta}}\,.$ (11)
The time evolution of the angular frequencies in the body frame is periodic
with a period
$T=\frac{4I_{3}}{\epsilon L\cos\theta_{0}}\sqrt{1+\delta}\,K(m)\,,$ (12)
where $K(m)$ is the complete elliptic integral of the first kind. One can
notice that $2\pi/\omega_{\rm p}$ is not equal to the period $T$ because the
Jacobi elliptic functions are not periodic in $2\pi$, but rather periodic in
$4K(m)$. In Fig. 1, we illustrate the geometry and the motion in the
corotating body frame. The angular momentum precesses around
$\hat{\boldsymbol{e}}_{3}$ with a period $T$. Following the definition in
Cutler & Jones (2001), we call $T$ the free precession period of the deformed
NS. The wobble angle $\theta$ nutates with a period $T/2$.
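For readers who wish to evaluate these solutions, the $m<1$ branch of Eqs. (9)–(12) can be sketched with SciPy, whose `ellipj(u, m)` routine uses the same parameter $m$ as Eq. (9); the geometry below is illustrative and the time units arbitrary:

```python
# Free-precession solution for m < 1, Eqs. (9)-(12), evaluated with
# SciPy's Jacobi elliptic functions. Illustrative geometry
# (theta_0 = 10 deg, delta = 1); arbitrary time units.
import numpy as np
from scipy.special import ellipj, ellipk

def L_hat(t, omega_p, theta0, delta):
    """Unit angular momentum in the body frame, Eq. (10)."""
    m = delta * np.tan(theta0)**2              # Eq. (9)
    sn, cn, dn, _ = ellipj(omega_p * t, m)
    return np.array([np.sin(theta0) * cn,
                     np.sin(theta0) * np.sqrt(1.0 + delta) * sn,
                     np.cos(theta0) * dn])

theta0, delta, omega_p = np.radians(10.0), 1.0, 1.0
m = delta * np.tan(theta0)**2
T = 4.0 * ellipk(m) / omega_p                  # body-frame period, Eq. (12)

t = np.linspace(0.0, T, 200)
L = L_hat(t, omega_p, theta0, delta)
print(np.allclose(np.linalg.norm(L, axis=0), 1.0))   # unit vector at all t
```

That $\hat{\boldsymbol{L}}$ stays normalized follows from $\operatorname{dn}^{2}=1-m\operatorname{sn}^{2}$ together with Eq. (9), and is a convenient sanity check; periodicity over $T=4K(m)/\omega_{\rm p}$ can be checked the same way.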
When $m=1$, the solution is unstable and the trajectories of the angular
momentum will decay exponentially to the intermediate axis
$\boldsymbol{\hat{e}}_{2}$. The detailed solution can be found in Landau &
Lifshitz (1960). We omit this special case.
Figure 1: The geometry and the motion of a deformed NS in the corotating body
frame. The NS precesses around $\hat{\boldsymbol{e}}_{3}$ with period $T$,
which is called the free precession period of the NS. The wobble angle
$\theta$ between $\hat{\boldsymbol{e}}_{3}$ and $\hat{\boldsymbol{L}}$ nutates
with a period of $T/2$. For the special biaxial case, the wobble angle
$\theta$ is fixed. An observer views the NS along a direction $\hat{\boldsymbol{k}}$ that is fixed in the inertial frame, at an inclination angle $\iota$, but rotates around $\hat{\boldsymbol{L}}$ in the body frame at the spin angular frequency $\boldsymbol{\omega}$. The direction of the
dipole moment $\hat{\boldsymbol{\mu}}$ is fixed in the body frame and is
described by the polar angle $\chi$ and the azimuthal angle $\eta$.
When $m>1$, the precession is around $\boldsymbol{\hat{e}}_{1}$ and the
solutions are given in Akgun et al. (2006) and Zanazzi & Lai (2015). We want to retain the definition of $\theta$ as the angle between $\hat{\boldsymbol{L}}$ and $\boldsymbol{\hat{e}}_{3}$, which allows the later calculations to be mapped onto the $m<1$ case directly. We therefore redefine the basis vectors,
$\displaystyle\boldsymbol{\hat{e}}_{1}$
$\displaystyle=\boldsymbol{\hat{e}}_{3}\,,$
$\displaystyle\boldsymbol{\hat{e}}_{2}$
$\displaystyle=-\boldsymbol{\hat{e}}_{2}\,,$
$\displaystyle\boldsymbol{\hat{e}}_{3}$
$\displaystyle=\boldsymbol{\hat{e}}_{1}\,.$ (13)
Then the solutions of $\hat{\boldsymbol{L}}$ can be represented in an
identical form to the $m<1$ case, except for $I_{1}\geq I_{2}\geq I_{3}$. Note
that $\epsilon<0$ and the precession direction is opposite to the $m<1$ case.
A biaxial body corresponds to the special cases of oblate deformation
$(\delta=0,\epsilon>0)$ or prolate deformation $(\delta=0,\epsilon<0)$. Eq.
(10) degenerates into the form
$\displaystyle\hat{L}_{1}$ $\displaystyle=\sin\theta_{0}\cos(\omega_{\rm
p}t)\,,$ $\displaystyle\hat{L}_{2}$
$\displaystyle=\sin\theta_{0}\sin(\omega_{\rm p}t)\,,$
$\displaystyle\hat{L}_{3}$ $\displaystyle=\cos\theta_{0}\,,$ (14)
where
$\omega_{\rm p}=\epsilon L\cos\theta_{0}/I_{3}\,,$ (15)
becomes the precession frequency, and the precession period is
$T=\frac{2\pi}{\lvert\omega_{\rm p}\rvert}=\frac{2\pi
I_{3}}{\lvert\epsilon\rvert L\cos\theta_{0}}\,.$ (16)
In later examples, we only study the $m<1$ branch; the cases for $m>1$ can be easily obtained by redefining $I_{1}\geq I_{2}\geq I_{3}$. Although the
precession period $T$ is different for distinct NS geometries, we can roughly
estimate the timescale of the precession as
$\tau_{\rm p}=\frac{P}{\epsilon}=1.58P_{5}\epsilon_{7}^{-1}\,\rm yr\,,$ (17)
where $\epsilon_{7}=\epsilon/10^{-7}$. The angular velocity in the body frame
is
$\boldsymbol{\omega}=L\left(\frac{\hat{L}_{1}}{I_{1}}\hat{\boldsymbol{e}}_{1}+\frac{\hat{L}_{2}}{I_{2}}\hat{\boldsymbol{e}}_{2}+\frac{\hat{L}_{3}}{I_{3}}\hat{\boldsymbol{e}}_{3}\right)=\frac{L}{I_{3}}\left(\frac{I_{3}\hat{L}_{1}}{I_{1}}\hat{\boldsymbol{e}}_{1}+\frac{I_{3}\hat{L}_{2}}{I_{2}}\hat{\boldsymbol{e}}_{2}+\hat{L}_{3}\hat{\boldsymbol{e}}_{3}\right)\,.$
(18)
It is obvious that the angle $\theta^{\prime}$ between the angular momentum
$\boldsymbol{L}$ and the angular velocity $\boldsymbol{\omega}$ is on the
order of $\theta^{\prime}\sim\epsilon\theta\ll 1$. We can approximate
$\boldsymbol{L}\parallel\boldsymbol{\omega}$ to the zeroth order of $\epsilon$
when we evaluate the geometry of the NS. We denote the unit vector of the
angular frequency as
$\hat{\boldsymbol{\omega}}\equiv\boldsymbol{\omega}/\omega_{0}$, where
$\omega_{0}$ is the magnitude of the angular frequency at $t=0$. For
simplicity, we also take the external magnetic field as a dipole field. The
dipole moment, $\boldsymbol{\mu}=\mu\hat{\boldsymbol{\mu}}$, is fixed in the
body frame and
$\hat{\boldsymbol{\mu}}=\hat{\mu}_{1}\hat{\boldsymbol{e}}_{1}+\hat{\mu}_{2}\hat{\boldsymbol{e}}_{2}+\hat{\mu}_{3}\hat{\boldsymbol{e}}_{3}=\left(\sin\chi\cos\eta\,,\sin\chi\sin\eta\,,\cos\chi\right)\,.$
(19)
Note that the angle $\eta$ is not necessarily zero in the general triaxial
case. Only in the biaxial case can one choose $\eta=0$ due to the axisymmetry
of the NS.
### 3.2 Forced precession
Figure 2: The relation between the spin period $P$ and the magnetic field at
the magnetic pole $B_{\rm p}$ for magnetars with measured period and period
derivative (green). Lines of constant $\tau_{\rm rad}$ (blue) and constant
$\tau_{\rm m}$ (brown) are also illustrated. The horizontal gray line
represents the Schwinger limit of the magnetic field, $B_{\rm c}=4.4\times
10^{13}\,\rm G$.
For magnetars, large magnetic fields also indicate large electromagnetic
torques. Generally, a rotating NS endowed with an external magnetic field has two kinds of electromagnetic torques acting on it. The first is the far-field torque (the so-called spin-down torque), which originates from the fact
that the electromagnetic emission carries away angular momentum (Deutsch,
1955; Davis & Goldstein, 1970). For a dipole field, we express the far-field
torque $\boldsymbol{N}_{\rm rad}$ as
$\boldsymbol{N}_{\rm
rad}=\frac{k_{1}\mu^{2}\omega^{3}}{c^{3}}\left[(\hat{\boldsymbol{\omega}}\cdot\hat{\boldsymbol{\mu}})\hat{\boldsymbol{\mu}}-k_{2}\hat{\boldsymbol{\omega}}\right]\,,$
(20)
where $k_{1}$ and $k_{2}$ are numerical constants on the order of unity.
For the simplest vacuum dipole, $k_{1}=2/3$, $k_{2}=1$ (Deutsch, 1955; Davis &
Goldstein, 1970), and the rotational energy dissipates at a rate
$\boldsymbol{N}_{\rm
rad}\cdot\boldsymbol{\boldsymbol{\omega}}=-\frac{2\mu^{2}\omega^{4}\sin^{2}\alpha}{3c^{3}}\,,$
(21)
where $\alpha$ is the magnetic inclination angle between
$\hat{\boldsymbol{\omega}}$ and $\hat{\boldsymbol{\mu}}$. There is no
dissipation for the vacuum dipole case when the angular velocity and the
dipole moment are aligned, namely $\sin\alpha=0$.
Although the vacuum magnetosphere torque predicts spin-down rates reasonably close to the observed values for pulsars, in reality the magnetosphere is filled with plasma. The charges and currents in the plasma inevitably modify
the structure of magnetosphere. There are no analytical expressions for the
far-field torque for a plasma-filled magnetosphere. Li et al. (2012) and
Philippov et al. (2014) analyzed the results of force-free MHD simulations and
found that the plasma-filled torque can be approximately parameterized by
taking $k_{1}\simeq 1$ and $k_{2}\simeq 2$ if the weak dependence on $R/R_{\rm
LC}$ is ignored, with $R_{\rm LC}$ being the radius of the light cylinder.
This parametrization of the far-field torque was also applied to study the
precession of pulsars (Arzamasskiy et al., 2015). In this case, the rotational
energy dissipates at a rate
$\boldsymbol{N}_{\rm
rad}\cdot\boldsymbol{\boldsymbol{\omega}}=-\frac{\mu^{2}\omega^{4}}{c^{3}}(1+\sin^{2}\alpha)\,.$
(22)
In contrast to the vacuum case, energy is still dissipated when $\alpha=0$. The plasma-filled torque has the same form as the vacuum torque with an additional component parallel to $\hat{\boldsymbol{\omega}}$.
The far-field torque not only dissipates the rotational energy but also changes the geometry of the star, such as the wobble angle and the magnetic inclination
angle. We define the spin-down timescale induced by the far-field torque as
$\tau_{\rm rad}=\frac{3c^{3}I_{0}}{2\mu^{2}\omega^{2}}=3.61\times
10^{5}M_{1.4}P_{5}^{2}B_{14}^{-2}\,\rm yr\,.$ (23)
The second kind of the electromagnetic torque is the near-field torque, which
arises from the inertia of the external magnetic field (Davis & Goldstein,
1970; Good & Ng, 1985; Melatos, 1997). The near-field torque is denoted by
$\boldsymbol{N}_{\rm m}$ and can be expressed as
$\boldsymbol{N}_{\rm
m}=\frac{k_{3}\omega^{2}\mu^{2}}{Rc^{2}}(\hat{\boldsymbol{\omega}}\cdot\hat{\boldsymbol{\mu}})(\hat{\boldsymbol{\omega}}\times\hat{\boldsymbol{\mu}})\,,$
(24)
where the external magnetic field is assumed to be a dipole field. Using
different methods, many authors have obtained slightly different values of
$k_{3}$ (Goldreich, 1970; Good & Ng, 1985; Melatos, 2000; Beskin et al., 2013;
Zanazzi & Lai, 2015). Here, we adopt the value $k_{3}=3/5$, which is
consistent with Melatos (1997) and Zanazzi & Lai (2015). This value can be
obtained by assuming a uniform internal magnetic field $\boldsymbol{B}_{\rm
p}$ rotating rigidly around the spin axis, and the electric field given by
$\boldsymbol{E}=-(\boldsymbol{v}/c)\times\boldsymbol{B}_{\rm p}$ for a
perfectly conducting fluid. Although an internal electromagnetic field is
assumed, the near-field torque in Eq. (24) only depends on the exterior
electromagnetic field of the NS (Beskin & Zheltoukhov, 2014; Zanazzi & Lai,
2015).
The near-field torque is perpendicular to $\boldsymbol{\omega}$ and scales as
$\omega^{2}$. It does not dissipate energy or angular momentum but affects the spin and the wobble angle of the precessing NS on a timescale of
$\tau_{\rm
m}=\frac{5RI_{0}c^{2}}{3\omega\mu^{2}}=16.8M_{1.4}R_{6}P_{5}B_{14}^{-2}\,\rm
yr\,.$ (25)
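For orientation, the three timescales of Eqs. (17), (23), and (25) can be compared directly through their scaling relations (a sketch; the numerical coefficients are those quoted in the text, and the fiducial field is $B_{\rm p}=5\times 10^{14}\,\rm G$ with $\epsilon=10^{-7}$):

```python
# Compare the free-precession, near-field, and far-field timescales of
# Eqs. (17), (25), and (23) for a fiducial magnetar, using the scaling
# coefficients quoted in the text.
def timescales(P5=1.0, B14=1.0, eps7=1.0, M14=1.0, R6=1.0):
    """Return (tau_p, tau_m, tau_rad) in years."""
    tau_p = 1.58 * P5 / eps7                    # Eq. (17)
    tau_m = 16.8 * M14 * R6 * P5 / B14**2       # Eq. (25)
    tau_rad = 3.61e5 * M14 * P5**2 / B14**2     # Eq. (23)
    return tau_p, tau_m, tau_rad

tau_p, tau_m, tau_rad = timescales(B14=5.0)     # B_p = 5e14 G, eps = 1e-7
print(tau_p, tau_m, tau_rad)
# tau_m (~0.7 yr) is comparable to tau_p (~1.6 yr), while
# tau_rad (~1.4e4 yr) is far longer -- the regime assumed in the text.
```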
In Fig. 2, we plot the relation between the dipole magnetic field at the
magnetic pole $B_{\rm p}$ and the rotation period $P$ for magnetars with
measured period and period derivative (Olausen & Kaspi, 2014; catalogue at http://www.physics.mcgill.ca/~pulsar/magnetar/main.html). We also plot
the contour lines for $\tau_{\rm m}$ and $\tau_{\rm rad}$. For typical
magnetars with $B_{\rm p}\sim 10^{14}$–$10^{15}\,\rm G$, $\tau_{\rm m}$ is on
the order of $0.1$–$10\,\rm yr$ and $\tau_{\rm rad}$ is on the order of
$10^{3}$–$10^{5}\,\rm yr$. Interestingly, $\tau_{\rm m}$ and the free precession timescale $\tau_{\rm p}$ can be comparable in some cases, and if so the free precession solution is affected substantially. Melatos
(1997, 2000) studied this effect and gave detailed numerical solutions for
different NS geometries. In our work, we adopt an analytical method developed
by Glampedakis & Jones (2010) and Zanazzi & Lai (2015) to study the precession
dynamics under the near-field torque. Since $\tau_{\rm rad}$ is much larger
than $\tau_{\rm m}$ and $\tau_{\rm p}$, we use a perturbative method to study
the forced precession under the far-field torque following Goldreich (1970),
Link & Epstein (2001), Wasserman (2003), and Wasserman et al. (2022).
#### 3.2.1 Near-field torque
Under the near-field torque, the Euler equation is
$\dot{\boldsymbol{L}}+\boldsymbol{\omega}\times\boldsymbol{L}=\frac{3\omega^{2}\mu^{2}}{5Rc^{2}}(\hat{\boldsymbol{\omega}}\cdot\hat{\boldsymbol{\mu}})(\hat{\boldsymbol{\omega}}\times\hat{\boldsymbol{\mu}})\,.$
(26)
The near-field torque arises from the inertia of the electromagnetic field,
which can actually be absorbed into the moment of inertia tensor
$\boldsymbol{I}$ of the star (Melatos, 2000; Glampedakis & Jones, 2010;
Zanazzi & Lai, 2015). Eq. (26) can be written as
$\dot{\boldsymbol{L}}+\boldsymbol{\omega}\times(\boldsymbol{L}+\boldsymbol{\omega}\cdot\boldsymbol{M})=0\,,$
(27)
by introducing the effective deformation tensor
$\boldsymbol{M}=-I_{0}\epsilon_{\rm m}(\hat{\boldsymbol{\mu}}\otimes\hat{\boldsymbol{\mu}})\,.$ (28)
Here, $\epsilon_{\rm m}$ is the effective ellipticity induced by the external
magnetic field and
$\epsilon_{\rm m}=\frac{3\mu^{2}}{5I_{0}Rc^{2}}=1.5\times
10^{-9}M_{1.4}^{-1}B_{14}^{2}R_{6}^{3}\,.$ (29)
Since $\epsilon_{\rm m}$ is quite small, we can introduce an effective moment
of inertia tensor $\boldsymbol{I}_{\rm eff}=\boldsymbol{I}+\boldsymbol{M}$ and
write the Euler equations as (Zanazzi & Lai, 2015)
$\dot{\boldsymbol{L}}_{\rm eff}+\boldsymbol{\omega}\times\boldsymbol{L}_{\rm
eff}=0\,,$ (30)
with the effective angular momentum $\boldsymbol{L}_{\rm
eff}=\boldsymbol{I}_{\rm eff}\cdot\boldsymbol{\omega}$. Thus, the forced
precession under the near-field torque is transformed into free precession by
introducing a new prolate deformation along the magnetic dipole axis with an
ellipticity $\epsilon_{\rm m}$. In principle, one can solve the forced-
precession problem in Eq. (26) numerically, but the transformation used here
gives analytical solutions and more insight into this problem. Therefore, we
give the analytical solutions of the free precession in the effective principal frame. In practice, one just needs to replace all the quantities in Sec. 3.1 with their effective counterparts.
In Sec. 3.1, we assumed $\omega_{2}=0$ at $t=0$ for simplicity. The phase of
the precession is just $\omega_{\rm p}t$. For consistency, one must be careful about the initial phase when solving the effective problem. For a
general triaxial star, the magnetic dipole moment does not necessarily lie in
the $\hat{\boldsymbol{e}}_{1}$-$\hat{\boldsymbol{e}}_{3}$ plane ($\eta\neq
0$). Thus, the effective deformation caused by the near-field torque makes
$\omega_{{\rm eff},2}\neq 0$ at the initial time. So, the precession phase of
the solutions should be $\omega_{\rm p,eff}\,t+\psi_{0}$, where
$\omega_{\mathrm{p,eff}}=\frac{\epsilon_{\rm eff}L_{\rm eff}\cos\theta_{0,\rm
eff}}{I_{3,\rm eff}\sqrt{1+\delta_{\rm eff}}}\,,$ (31)
and the initial phase
$\psi_{0}=-\arcsin\sqrt{\hat{L}_{1,\rm eff}^{2}+\frac{\hat{L}_{2,\rm eff}^{2}}{1+\delta_{\rm eff}}}\,.$ (32)
Only in the triaxial case with $\eta=0$ and in the special biaxial case can one take $\psi_{0}=0$. Here, the basis vectors, the eigenvalues of the moment of inertia tensor, the components of the angular velocity, and the related geometric parameters in the effective principal frame are denoted with a subscript “eff”, based on the corresponding quantities in the original principal frame. Generally, we find
that the effects of the near-field torque can be ignored only if
$\epsilon_{\rm m}\lesssim 0.1\epsilon$.
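The absorption of the near-field torque into an effective inertia tensor, Eqs. (27)–(29), can be verified numerically: building $\boldsymbol{I}_{\rm eff}$ for the geometry of Case I of Table 1 and diagonalizing it recovers the tabulated effective shape parameters (a sketch with $I_{0}$ set to unity):

```python
# Numerical check of Eqs. (27)-(29): absorb the near-field torque into an
# effective inertia tensor and recover the effective shape parameters of
# Case I in Table 1. Sketch with I0 = 1; the geometry (eps = 1e-7,
# theta_0 = 10 deg, chi = 45 deg, eta = 0, B_p = 5e14 G) follows the text.
import numpy as np

eps = 1.0e-7                     # intrinsic oblate deformation along e3
eps_m = 3.75e-8                  # Eq. (29) for B_p = 5e14 G
chi = np.radians(45.0)
mu_hat = np.array([np.sin(chi), 0.0, np.cos(chi)])   # Eq. (19), eta = 0

I_body = np.diag([1.0, 1.0, 1.0 + eps])
I_eff = I_body - eps_m * np.outer(mu_hat, mu_hat)    # prolate along mu_hat

vals, vecs = np.linalg.eigh(I_eff)                   # ascending eigenvalues
eps_eff = (vals[2] - vals[0]) / vals[0]              # Eq. (5)
delta_eff = vals[2] * (vals[1] - vals[0]) / (vals[0] * (vals[2] - vals[1]))

# Effective wobble angle: angle between L_hat (10 deg from the old e3,
# in the e1-e3 plane) and the new principal axis e3_eff.
L_hat = np.array([np.sin(np.radians(10.0)), 0.0, np.cos(np.radians(10.0))])
theta_eff = np.degrees(np.arccos(abs(L_hat @ vecs[:, 2])))

print(eps_eff, delta_eff, theta_eff)   # ~1.07e-7, ~0.261, ~20.3 deg
```

The amplification of the wobble angle from $10^{\circ}$ to about $20^{\circ}$ quoted in the text for the $B_{\rm p}=5\times 10^{14}\,\rm G$ case falls out of the diagonalization directly.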
#### 3.2.2 Far-field torque
Table 1: The intrinsic and effective parameters for the forced precession shown in Figs. 3 and 4.

Case | $P_{0}\,(\rm s)$ | $B\,(\rm G)$ | $\epsilon$ | $\delta$ | $\theta_{0}\,(\degr)$ | $\chi\,(\degr)$ | $\eta\,(\degr)$ | $\epsilon_{\rm eff}$ | $\delta_{\rm eff}$ | $\theta_{\rm eff,0}\,(\degr)$ | $\chi_{\rm eff}\,(\degr)$ | $\eta_{\rm eff}\,(\degr)$ | $T_{\rm eff}\,(\rm yr)$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
I | 5 | $5\times 10^{14}$ | $10^{-7}$ | 0 | 10 | 45 | 0 | $1.07\times 10^{-7}$ | 0.261 | 20.3 | 55.3 | 0 | 1.79
II | 5 | $5\times 10^{14}$ | $10^{-7}$ | 1 | 10 | 45 | 45 | $9.99\times 10^{-8}$ | 1.15 | 20.5 | 60.2 | 36.1 | 2.59
III | 5 | $10^{14}$ | $10^{-7}$ | 0 | 10 | 45 | 0 | $1.00\times 10^{-7}$ | 0.007 | 10.4 | 45.4 | 0 | 1.62
IV | 5 | $10^{14}$ | $10^{-7}$ | 1 | 10 | 45 | 45 | $9.96\times 10^{-8}$ | 1.01 | 10.3 | 45.6 | 45.6 | 2.31

(Columns 2–8 are the intrinsic parameters; columns 9–14 are the effective parameters.)
Figure 3: The fractional change of the angular frequency due to spin down for the biaxial case (upper) and the triaxial case (lower). For comparison, both the vacuum torque ($k_{1}=2/3,k_{2}=1$) and the plasma-filled torque ($k_{1}=1,k_{2}=2$) are illustrated. The parameters for the biaxial and triaxial cases are given in Cases I and II of Table 1, respectively.

Figure 4: Same as Fig. 3, but for a deformed magnetar with $B=10^{14}\,\rm G$. The parameters for the biaxial and triaxial cases are given in Cases III and IV of Table 1, respectively.
After absorbing the near-field torque into the effective moment of inertia
tensor, the Euler equation under the far-field torque can be written as
$\dot{\boldsymbol{L}}_{\rm eff}+\boldsymbol{\omega}\times\boldsymbol{L}_{\rm
eff}=\boldsymbol{N}_{\rm
rad}=\frac{k_{1}\mu^{2}\omega^{3}}{c^{3}}\left[(\hat{\boldsymbol{\omega}}\cdot\hat{\boldsymbol{\mu}})\hat{\boldsymbol{\mu}}-k_{2}\hat{\boldsymbol{\omega}}\right]\,.$
(33)
For simplicity, we omit the “eff” notation in later equations and only give
“effective” parameters in specific examples. The far-field torque can be
decomposed into parallel and perpendicular components with respect to the
angular momentum
$\displaystyle\boldsymbol{N}_{\rm rad}^{\parallel}=$
$\displaystyle\frac{k_{1}\mu^{2}\omega^{3}}{c^{3}}\left[(\hat{\boldsymbol{\omega}}\cdot\hat{\boldsymbol{\mu}})(\hat{\boldsymbol{L}}\cdot\hat{\boldsymbol{\mu}})-k_{2}(\hat{\boldsymbol{L}}\cdot\hat{\boldsymbol{\omega}})\right]\hat{\boldsymbol{L}}\,,$
(34) $\displaystyle\boldsymbol{N}_{\rm rad}^{\perp}=$
$\displaystyle\boldsymbol{N}_{\rm rad}-\boldsymbol{N}_{\rm
rad}^{\parallel}=\boldsymbol{N}_{\rm
rad}-(\hat{\boldsymbol{L}}\cdot\hat{\boldsymbol{N}})\hat{\boldsymbol{L}}\,.$
(35)
Taking the dot product between Eq. (33) and $\hat{\boldsymbol{L}}$, we obtain
$\dot{L}=\boldsymbol{N}_{\rm
rad}^{\parallel}\cdot\hat{\boldsymbol{L}}\simeq\frac{3k_{1}I_{0}\omega}{2\tau_{\rm
rad}}\left(\cos^{2}\alpha-k_{2}\right)\,,$ (36)
where $\hat{\boldsymbol{L}}$ and $\hat{\boldsymbol{\omega}}$ have been approximated as parallel on the right-hand side. Eq. (36) determines
the magnitude of the angular momentum. The angle $\alpha$ oscillates during
the precession, which produces variability in the spin-down rate. The
perpendicular Euler equation is
$L\dot{\hat{\boldsymbol{L}}}+\omega
L(\hat{\boldsymbol{\omega}}\times\hat{\boldsymbol{L}})=\boldsymbol{N}_{\rm
rad}^{\perp}\,,$ (37)
which determines the direction of the angular momentum. The second term on the
left-hand side arises from the precession of the angular momentum around
$\hat{\boldsymbol{e}}_{3}$ in the body frame. The right-hand side is the term
that originates from the far-field torque, which contributes to the secular
change of $\theta$ and $\alpha$.
In this work, we concentrate on the spin evolution on the precession
timescale. According to Eq. (37), the angles $\theta$ and $\alpha$ change secularly only at the level of $\sim\tau_{\rm p}/\tau_{\rm rad}\ll 1$ under the far-field torque. Thus, we can neglect the secular variation of $\alpha$ when
calculating the change of the angular momentum with Eq. (36).
To describe the spin evolution, we introduce
$\omega(t)=\omega_{0}(1+\ell(t))\,,$ (38)
where $\omega_{0}$ is the angular frequency at the initial time $t=0$, and
$\ell$ is the fractional change of the angular frequency due to spin down.
Since $\tau_{\rm p}\ll\tau_{\rm rad}$ and the spin-down rate is quite small,
we can set $\omega=\omega_{0}$ on the right-hand side of Eq. (36) and write
the solution of $\ell$ as
$\displaystyle\ell=-\frac{3k_{1}}{2\tau_{\rm
rad}}\left(k_{2}t-\int_{0}^{t}\cos^{2}\alpha{\rm d}t\right)\,.$ (39)
By introducing $\tau=\omega_{\rm p}t+\psi_{0}$, the magnetic inclination angle
$\alpha$ satisfies
$\cos\alpha=\hat{\mu}_{1}\sin\theta_{0}\operatorname{cn}\tau+\hat{\mu}_{2}\sin\theta_{0}(1+\delta)^{1/2}\operatorname{sn}\tau+\hat{\mu}_{3}\cos\theta_{0}\operatorname{dn}\tau\,,$
(40)
for the case of $m<1$, where the modulus $m$ has been omitted in the arguments of the Jacobi elliptic functions. Substituting Eq. (40) into Eq. (39), one obtains the fractional change of the angular frequency for different NS
geometries.
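This procedure is easy to carry out numerically; the sketch below integrates $\cos^{2}\alpha$ from Eq. (40) and evaluates Eq. (39), using illustrative geometry, plasma-filled torque coefficients, arbitrary time units, and the initial precession phase $\psi_{0}$ set to zero for simplicity:

```python
# Fractional spin-frequency change l(t) of Eq. (39) for a triaxial star,
# integrating cos^2(alpha) from Eq. (40) numerically. Illustrative
# geometry; time in units where omega_p = 2*pi; tau_rad >> precession
# period; initial phase psi_0 set to zero for simplicity.
import numpy as np
from scipy.special import ellipj
from scipy.integrate import cumulative_trapezoid

theta0, delta = np.radians(10.0), 1.0
chi, eta = np.radians(45.0), np.radians(45.0)
mu = np.array([np.sin(chi) * np.cos(eta),
               np.sin(chi) * np.sin(eta),
               np.cos(chi)])                      # Eq. (19)
m = delta * np.tan(theta0)**2                     # Eq. (9)

omega_p = 2.0 * np.pi
tau_rad = 1.0e4                                   # >> precession period
k1, k2 = 1.0, 2.0                                 # plasma-filled torque

t = np.linspace(0.0, 10.0, 4001)
sn, cn, dn, _ = ellipj(omega_p * t, m)
cos_alpha = (mu[0] * np.sin(theta0) * cn
             + mu[1] * np.sin(theta0) * np.sqrt(1.0 + delta) * sn
             + mu[2] * np.cos(theta0) * dn)       # Eq. (40)

integral = cumulative_trapezoid(cos_alpha**2, t, initial=0.0)
ell = -1.5 * k1 / tau_rad * (k2 * t - integral)   # Eq. (39)
print(ell[-1])   # negative: net spin down, modulated by precession
```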
In Fig. 3, we show the fractional change of the angular frequency due to spin-
down for both biaxial and triaxial cases with $B_{\rm p}=5\times 10^{14}\,\rm
G$ and $\epsilon=10^{-7}$. The spin-down rate oscillates since $\alpha$ varies
with the precession. The ellipticity induced by the near-field torque is
$\epsilon_{\rm m}=3.75\times 10^{-8}=0.375\epsilon$. Thus, the near-field
torque affects the precession substantially. The initial wobble angle for the
motion is amplified from $10^{\circ}$ to about $20^{\circ}$, which leads to
large variations of the spin-down rate. We also plot the spin-down for both the vacuum torque and the plasma-filled torque. The angular velocity decreases
faster for the plasma-filled torque because the radiation power is stronger.
From Eq. (39), we also notice that the second term on the right-hand side only
depends on the coefficient $k_{1}$.
We also give the case with $B_{\rm p}=10^{14}\,\rm G$ and $\epsilon=10^{-7}$
in Fig. 4. The ellipticity induced by the near-field torque is $\epsilon_{\rm
m}=1.5\times 10^{-9}=0.015\epsilon$, which is negligible. The effective wobble
angle is nearly the same as the free precession case and the variation of the
spin-down rate is much smaller compared to the cases in Fig. 3. If
$\epsilon\gtrsim 10^{-6}$ for the case with $B_{\rm p}=5\times 10^{14}\,\rm G$, the effects of the near-field torque can also be neglected.
For the biaxial case with $\epsilon_{\rm m}\ll\epsilon$ or the effective
biaxial case, the parameters $\delta$ and $\eta$ can be set to zero. The angle
$\alpha$ satisfies
$\cos\alpha=\sin\chi\sin\theta_{0}\cos\omega_{\rm
p}t+\cos\chi\cos\theta_{0}\,,$ (41)
and the integration of $\cos\alpha$ simplifies into
$\displaystyle\int_{0}^{t}\cos^{2}\alpha\,{\rm d}t=$ $\displaystyle\frac{1}{4\omega_{\rm p}}\left[\left(\hat{\mu}_{1}^{2}+2\hat{\mu}_{3}^{2}\right)\omega_{\rm p}t-\left(\hat{\mu}_{1}^{2}-2\hat{\mu}_{3}^{2}\right)\omega_{\rm p}t\cos 2\theta_{0}\right.$ $\displaystyle\left.+4\hat{\mu}_{1}\hat{\mu}_{3}\sin 2\theta_{0}\sin\omega_{\rm p}t+\hat{\mu}_{1}^{2}\sin^{2}\theta_{0}\sin 2\omega_{\rm p}t\right]\,,$ (42)
where the initial phase $\psi_{0}=0$. The biaxial case in Fig. 4 can be obtained by substituting Eq. (42) into Eq. (39) because $\epsilon_{\rm m}\ll\epsilon$.
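The closed form of Eq. (42) can be cross-checked against direct numerical quadrature of $\cos^{2}\alpha$ from Eq. (41) (a sketch with illustrative angles and arbitrary time units):

```python
# Cross-check the closed-form biaxial integral, Eq. (42), against direct
# quadrature of cos^2(alpha) from Eq. (41). Illustrative angles.
import numpy as np
from scipy.integrate import quad

theta0, chi = np.radians(10.0), np.radians(45.0)
mu1, mu3 = np.sin(chi), np.cos(chi)       # eta = 0 in the biaxial case
omega_p = 2.0 * np.pi

def cos2_alpha(t):
    """cos^2(alpha) for the biaxial case, Eq. (41)."""
    return (mu1 * np.sin(theta0) * np.cos(omega_p * t)
            + mu3 * np.cos(theta0))**2

def closed_form(t):
    """Right-hand side of Eq. (42), evaluated at time t."""
    wt = omega_p * t
    return (1.0 / (4.0 * omega_p)) * (
        (mu1**2 + 2.0 * mu3**2) * wt
        - (mu1**2 - 2.0 * mu3**2) * wt * np.cos(2.0 * theta0)
        + 4.0 * mu1 * mu3 * np.sin(2.0 * theta0) * np.sin(wt)
        + mu1**2 * np.sin(theta0)**2 * np.sin(2.0 * wt))

t_end = 2.7
numeric, _ = quad(cos2_alpha, 0.0, t_end)
print(abs(numeric - closed_form(t_end)))   # agreement to quadrature accuracy
```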
Our study is similar to that of Melatos (1997), but we give analytical solutions and consider different models of the far-field torque. Compared to Akgun et al. (2006) and Wasserman et al. (2022), we additionally consider the effects of the near-field torque.
### 3.3 Precession dynamics in the inertial frame
The calculations in Secs. 3.1 to 3.2 are performed in the body frame of the
NS. Before investigating the emissions from precessing magnetars, we give the
geometry and the motion of the NS in the inertial frame, which is related to
the body frame through a rotation matrix constructed from Euler angles $\phi$,
$\theta$, and $\psi$ (Landau & Lifshitz, 1960). We take the basis of the
inertial frame as $\hat{\boldsymbol{e}}_{\rm X}$, $\hat{\boldsymbol{e}}_{\rm
Y}$, and $\hat{\boldsymbol{e}}_{\rm Z}$ with $\hat{\boldsymbol{L}}$ parallel
to $\hat{\boldsymbol{e}}_{\rm Z}$. The Euler angles satisfy
$\displaystyle\cos\phi$ $\displaystyle=\hat{\boldsymbol{e}}_{\mathrm{X}}\cdot\hat{\boldsymbol{N}}\,,$ $\displaystyle\cos\theta$ $\displaystyle=\hat{\boldsymbol{e}}_{3}\cdot\hat{\boldsymbol{e}}_{\mathrm{Z}}\,,$ $\displaystyle\cos\psi$ $\displaystyle=\hat{\boldsymbol{e}}_{1}\cdot\hat{\boldsymbol{N}}\,,$ (43)
where $\hat{\boldsymbol{N}}=\hat{\boldsymbol{e}}_{\mathrm{Z}}\times\hat{\boldsymbol{e}}_{3}$.
The time evolution of the Euler angles is given by
$\displaystyle\cos\theta$ $\displaystyle=\hat{{L}}_{3}\,,$
$\displaystyle\tan\psi$ $\displaystyle=\hat{{L}}_{1}/\hat{{L}}_{2}\,,$
$\displaystyle\dot{\phi}$ $\displaystyle=L/I_{3}-\dot{\psi}/\hat{{L}}_{3}\,.$
(44)
Substituting the evolution of the angular momentum for different cases into
Eqs. (44), one obtains the specific expressions for Euler angles. For the
general triaxial case, the precession angle $\psi$ and the wobble angle
$\theta$ evolve with free precession period $T$, while the evolution of the
angle $\phi$ is not periodic. Thus, the motion for a triaxial NS in the
inertial frame is not periodic. We illustrate the geometry and the motion of
the NS in the inertial frame in Fig. 5.
The components of $\hat{\boldsymbol{\mu}}$ in the inertial frame are
$\displaystyle\hat{\mu}_{\rm X}$
$\displaystyle=\hat{\mu}_{1}(\cos\psi\cos\phi-\cos\theta\sin\phi\sin\psi)$
$\displaystyle\quad-\hat{\mu}_{2}(\sin\psi\cos\phi+\cos\theta\sin\phi\cos\psi)+\hat{\mu}_{3}\sin\theta\sin\phi\,,$
$\displaystyle\hat{\mu}_{\rm Y}$
$\displaystyle=\hat{\mu}_{1}(\cos\psi\sin\phi+\cos\theta\cos\phi\sin\psi)$
$\displaystyle\quad+\hat{\mu}_{2}(-\sin\psi\sin\phi+\cos\theta\cos\phi\cos\psi)-\hat{\mu}_{3}\sin\theta\cos\phi\,,$
$\displaystyle\hat{\mu}_{\rm Z}$
$\displaystyle=\hat{\mu}_{1}\sin\theta\sin\psi+\hat{\mu}_{2}\sin\theta\cos\psi+\hat{\mu}_{3}\cos\theta\,,$
(45)
where $\hat{\mu}_{1}$, $\hat{\mu}_{2}$, and $\hat{\mu}_{3}$ are the components
of $\hat{\boldsymbol{\mu}}$ in the body frame in Eq. (19). The polar angle
$\Theta$ and the azimuthal angle $\Phi$ of the magnetic dipole in the inertial
frame satisfy
$\displaystyle\Phi$ $\displaystyle=\arctan\left(\frac{\hat{\mu}_{\rm
Y}}{\hat{\mu}_{\rm X}}\right)\,,$ $\displaystyle\cos\Theta$
$\displaystyle=\hat{\mu}_{\rm Z}\,.$ (46)
We can treat $\Theta$ as the magnetic inclination angle $\alpha$, because the angle between $\hat{\boldsymbol{L}}$ and $\hat{\boldsymbol{\omega}}$ is of first order in $\epsilon$.
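The transformation in Eqs. (45)-(46) can be sketched numerically; the Euler angles and dipole orientation below are illustrative choices, not values from the paper:

```python
import numpy as np

def mu_inertial(mu_body, phi, theta, psi):
    # Eq. (45): rotate the unit dipole from the body frame to the inertial frame
    m1, m2, m3 = mu_body
    cphi, sphi = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cpsi, spsi = np.cos(psi), np.sin(psi)
    mX = (m1 * (cpsi * cphi - cth * sphi * spsi)
          - m2 * (spsi * cphi + cth * sphi * cpsi) + m3 * sth * sphi)
    mY = (m1 * (cpsi * sphi + cth * cphi * spsi)
          + m2 * (-spsi * sphi + cth * cphi * cpsi) - m3 * sth * cphi)
    mZ = m1 * sth * spsi + m2 * sth * cpsi + m3 * cth
    return np.array([mX, mY, mZ])

# Illustrative biaxial dipole (mu2 = 0) and Euler angles
mu = mu_inertial([np.sin(np.radians(45.0)), 0.0, np.cos(np.radians(45.0))],
                 phi=0.3, theta=np.radians(15.0), psi=0.1)
Theta, Phi = np.arccos(mu[2]), np.arctan2(mu[1], mu[0])   # Eq. (46)
print(np.isclose(np.linalg.norm(mu), 1.0))  # True: the rotation preserves unit norm
```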
The time evolution of $\hat{\boldsymbol{\mu}}$ is vital to determine the
emission properties. The variations of the angle $\alpha$ during free
precession lead to the swing of the emission regions and may modulate the beam
shape parameters, polarization, and flux of the emission. The precession also affects the rotational phase and times of arrival of the emission, which are closely related to the time evolution of $\Phi$. In the following
sections, we will first investigate the phase modulations and timing residuals
buried in the time evolution of $\Phi$, and then study the modulations of
polarized radio/X-ray signals due to the variations of $\alpha$ during
precession. Note that the calculations will be performed in the inertial
frame.
Figure 5: The geometry and the motion of a NS in the inertial frame. The NS
rotates around $\hat{\boldsymbol{L}}$ with angular frequency
$\boldsymbol{\omega}$. The NS itself rotates around $\hat{\boldsymbol{e}}_{3}$
with free precession period $T$, which is clockwise in the case of $m<1$ and
counterclockwise in the case of $m>1$. The wobble angle $\theta$ between
$\hat{\boldsymbol{e}}_{3}$ and $\hat{\boldsymbol{L}}$ nutates with period
$T/2$. For the special biaxial case, the wobble angle $\theta$ is fixed. An
observer in the $\hat{\boldsymbol{e}}_{\rm X}$-$\hat{\boldsymbol{e}}_{\rm Z}$
plane views the NS in the direction $\hat{\boldsymbol{k}}$ with an inclination
angle $\iota$. The magnetic dipole ${\boldsymbol{\mu}}$ is attached to the NS
and is described by the polar angle $\Theta$ and azimuthal angle $\Phi$.
## 4 Timing residuals
In this section, we investigate the timing residuals of precessing magnetars,
which may be used to search for precession from X-ray pulsations. The main
manifestation of magnetars occurs in the X-ray energy band. Some magnetars are
persistent X-ray sources with a luminosity $L_{\rm X}\sim
10^{34}$–$10^{35}\,\rm erg\,\rm s^{-1}$, while others are transients, which are
much dimmer in quiescence, $L_{\rm X}\lesssim 10^{32}\,\rm erg\,\rm s^{-1}$
(Turolla et al., 2015; Makishima, 2016; Kaspi & Beloborodov, 2017). Most
magnetars show clear X-ray pulsations due to the spin. The periods are
clustered in the range $P=2$–$12\,\rm s$. Most of them show a large spin-down
rate in the range of $\dot{P}=10^{-13}$–$10^{-10}\,\rm s\,s^{-1}$. The timing
of radio signals is also obtained for some transient magnetars.
The rotation phase and spin rate will be modulated since the emission
direction rotates around $\hat{\boldsymbol{e}}_{3}$ during the precession. For
simplicity, we assume that the emission is centered around the magnetic dipole
axis $\hat{\boldsymbol{\mu}}$. As shown in Fig. 5, the observer sees the
pulsation once the azimuthal angle of $\hat{\boldsymbol{\mu}}$ becomes
$\Phi=2\pi n,\quad(n=0,1,2,\dots)\,,$ (47)
which is equivalent to $\hat{\mu}_{\rm X}>0$ and $\hat{\mu}_{\rm Y}=0$. Using Eq. (45),
the Euler angle $\phi$ at this epoch can be written as (Jones & Andersson,
2001; Akgun et al., 2006)
$\phi=2\pi n+\frac{\pi}{2}+\arctan\phi_{1}\,,$ (48)
where
$\tan\phi_{1}=\frac{\hat{\mu}_{1}\cos\psi-\hat{\mu}_{2}\sin\psi}{\hat{\mu}_{2}\cos\theta\cos\psi-\hat{\mu}_{3}\sin\theta+\hat{\mu}_{1}\cos\theta\sin\psi}\,.$
(49)
To obtain the timing residual, we first study the effective free precession
case including the near-field torque. Integrating Eq. (44), the Euler angle $\phi$ is
$\phi(t)=\phi_{0}+\frac{L}{I_{3}}t+\frac{\sqrt{1+\delta}\omega_{\rm
p}}{\cos\theta_{0}}\int_{0}^{t}\frac{{\rm
d}t}{1+\delta\operatorname{sn}^{2}\tau}\,,$ (50)
where $\phi_{0}$ is the initial phase of $\phi$ and $\tau=\omega_{\rm
p}t+\psi_{0}$. Combining Eq. (48) and Eq. (50), we obtain the time of arrival
(TOA) $t_{\rm n}$ of the $n$-th pulse
$\frac{L}{I_{3}}t_{\rm n}=2\pi
n+\frac{\pi}{2}+\arctan\phi_{1}-\phi_{0}-\frac{\sqrt{1+\delta}\omega_{\rm
p}}{\cos\theta_{0}}\int_{0}^{t_{n}}\frac{{\rm
d}t}{1+\delta\operatorname{sn}^{2}\tau}\,.$ (51)
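Eqs. (50)-(51) can be evaluated numerically: quadrature of the $\operatorname{sn}^{2}$ integral (with SciPy's ellipj routine supplying the Jacobi $\operatorname{sn}$) followed by a root find for each TOA. The sketch below sets $\phi_{1}=0$ for simplicity and works in units where the spin period is one; all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import ellipj

# Illustrative (assumed) parameters, in units where the spin period is 1
L_over_I3 = 2.0 * np.pi          # secular spin rate L / I_3
omega_p = 1e-3 * L_over_I3       # slow free-precession frequency
delta, theta0 = 0.5, np.radians(15.0)
psi0, phi0, m = 0.0, 0.0, 0.3    # initial phases and elliptic parameter

def phi_of_t(t):
    # Eq. (50): secular rotation plus the precession integral over 1/(1 + delta*sn^2)
    integrand = lambda s: 1.0 / (1.0 + delta * ellipj(omega_p * s + psi0, m)[0] ** 2)
    I, _ = quad(integrand, 0.0, t, limit=200)
    return phi0 + L_over_I3 * t + np.sqrt(1.0 + delta) * omega_p / np.cos(theta0) * I

def toa(n):
    # Eq. (51) with phi_1 = 0: solve phi(t_n) = 2*pi*n + pi/2 for the n-th pulse
    target = 2.0 * np.pi * n + np.pi / 2.0
    return brentq(lambda t: phi_of_t(t) - target, 0.0, n + 1.0)

t1, t2 = toa(1), toa(2)
print(0.9 < t2 - t1 < 1.1)  # True: pulse spacing stays close to the spin period
```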
The TOAs contain all the information on the precessional timing behaviour. To
further investigate the spin modulations, we give the residuals of the period
and the period derivatives. In practice, one first obtains the period
$P_{0}=2\pi/\omega_{0}$ at some epoch $t_{0}$, and finds the period derivative
$\dot{P}_{0}$ that is attributed to the secular spin down. Then the period
residuals can be determined by subtracting the two contributions
$\Delta P=P(t)-P_{0}(t_{0})-\dot{P}_{0}(t-t_{0})\,.$ (52)
Since the precession timescale is much longer than the rotation timescale, we
can approximate the differences by derivatives, and
$\displaystyle\frac{L}{I_{3}}P-2\pi$ $\displaystyle=\frac{L}{I_{3}}\Delta
P_{\rm fp}$ $\displaystyle=\left(\frac{{\rm d}\arctan\phi_{1}}{{\rm
d}t}-\frac{\sqrt{1+\delta}\omega_{\rm
p}/\cos\theta_{0}}{1+\delta\operatorname{sn}^{2}\tau}\right)\frac{{\rm
d}t}{{\rm d}n}$ $\displaystyle=\left(\frac{{\rm d}\arctan\phi_{1}}{{\rm
d}t}-\frac{\sqrt{1+\delta}\omega_{\rm
p}/\cos\theta_{0}}{1+\delta\operatorname{sn}^{2}\tau}\right)P\,,$ (53)
where the period $P=t_{\rm n}-t_{\rm n-1}$, and $\Delta P_{\rm fp}$ is the
period residual owing to the free precession. By approximating $P\simeq P_{0}$
on the right-hand side, we get
$\Delta P_{\rm fp}=\left(\frac{{\rm d}\arctan\phi_{1}}{{\rm
d}\tau}-\frac{\sqrt{1+\delta}/\cos\theta_{0}}{1+\delta\operatorname{sn}^{2}\tau}\right)\frac{\epsilon\cos\theta_{0}P_{0}}{\sqrt{1+\delta}}\,.$
(54)
For an effectively biaxial case or a biaxial case with
$\epsilon_{\rm m}\ll\epsilon$, we can set $\hat{\mu}_{2}=0$ and
$\displaystyle\Delta P_{\rm fp}=$ $\displaystyle-\epsilon\sin\theta_{0}P_{0}$
$\displaystyle\times\biggl{[}\frac{\mu_{1}^{2}\sin\theta_{0}\sin^{2}\omega_{\rm
p}t+\mu_{3}^{2}\sin\theta_{0}-\mu_{1}\mu_{3}\cos\theta_{0}\cos\omega_{\rm
p}t}{\left(\mu_{1}\cos\theta_{0}\cos\omega_{\rm
p}t-\mu_{3}\sin\theta_{0}\right)^{2}+\mu_{1}^{2}\sin^{2}\omega_{\rm
p}t}\biggr{]}\,,$ (55)
where the initial phase induced by the near-field torque is $\psi_{0}=0$. Because $\Delta P_{\rm fp}$ originates purely from the geometry of the free precession, we refer to $\Delta P_{\rm fp}$ as the geometric term of the period residual.
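A minimal evaluation of the biaxial geometric term in Eq. (55), with parameters loosely following Case I of Table 2, confirms that the residual is of order $\epsilon P_{0}$:

```python
import numpy as np

# Illustrative parameters in the spirit of Case I of Table 2 (biaxial, mu2 = 0)
eps, P0 = 1e-7, 5.0                         # ellipticity and spin period (s)
theta0, chi = np.radians(15.0), np.radians(45.0)
mu1, mu3 = np.sin(chi), np.cos(chi)

def dP_fp(wt):
    # Eq. (55): geometric period residual over one precession cycle (psi0 = 0)
    num = (mu1**2 * np.sin(theta0) * np.sin(wt)**2
           + mu3**2 * np.sin(theta0)
           - mu1 * mu3 * np.cos(theta0) * np.cos(wt))
    den = ((mu1 * np.cos(theta0) * np.cos(wt) - mu3 * np.sin(theta0))**2
           + mu1**2 * np.sin(wt)**2)
    return -eps * np.sin(theta0) * P0 * num / den

wt = np.linspace(0.0, 2.0 * np.pi, 1000)
amp = np.max(np.abs(dP_fp(wt)))
print(0.0 < amp < 10.0 * eps * P0)  # True: the residual is of order eps * P0
```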
The far-field torque contributes to the period residual via the spin down
$\frac{\Delta P_{\rm sd}}{P}\simeq-\ell(t)\,,$ (56)
where $\ell(t)$ has been given in Eq. (39), with the integral of
$\cos^{2}\alpha$ given in Eq. (A). For period residuals, we only care about the
oscillation terms. After subtracting the secular terms, the period residual is
$\displaystyle\Delta P_{\rm sd}$ $\displaystyle=-\frac{3k_{1}P_{0}}{2\tau_{\rm
rad}}\left(\int_{0}^{t}\cos^{2}\alpha{\rm
d}t-\left\langle\int_{0}^{t}\cos^{2}\alpha{\rm d}t\right\rangle t\right)$
$\displaystyle\approx\frac{3k_{1}P_{0}}{2\tau_{\rm rad}\omega_{\rm
p}}\biggl{\\{}a_{1}\operatorname{cn}\tau+a_{2}\operatorname{sn}\tau+a_{3}\operatorname{dn}\tau$
$\displaystyle+a_{4}\left[\frac{E(m)}{K(m)}\tau-E\left({\rm
am}\,\tau\right)\right]+B_{\rm c}\biggr{\\}}\,,$ (57)
where $\left\langle\,\cdot\,\right\rangle$ denotes an average over the precession period, and
$\displaystyle a_{1}=\sin
2\theta_{0}(1+\delta)^{\frac{1}{2}}\hat{\mu}_{2}\hat{\mu}_{3}\,,$
$\displaystyle a_{2}=-\sin
2\theta_{0}\hat{\mu}_{1}\hat{\mu}_{3}\operatorname{sn}(\tau)\,,$
$\displaystyle
a_{3}=\frac{2\cos^{2}\theta_{0}\hat{\mu}_{1}\hat{\mu}_{2}(1+\delta)^{\frac{1}{2}}}{\delta}\,,$
$\displaystyle
a_{4}=\frac{\cos^{2}\theta_{0}}{\delta}\left[\hat{\mu}_{1}^{2}-(1+\delta)\hat{\mu}_{2}^{2}+\hat{\mu}_{3}^{2}\delta\right]\,.$
(58)
The constant $B_{\rm c}$ is an integration constant, which can be easily
obtained from $\Delta P_{\rm sd}(t=0)=0$. For the special biaxial case, we set
$\hat{\mu}_{2}=0$ and
$\displaystyle\Delta P_{\rm sd}=$ $\displaystyle\
-\frac{3k_{1}P_{0}}{2\tau_{\rm rad}\omega_{\rm p}}$
$\displaystyle\times\left(\frac{1}{2}\sin 2\chi\sin
2\theta_{0}\sin(\omega_{\rm
p}t)+\frac{1}{4}\sin^{2}\theta_{0}\sin^{2}\chi\sin(2\omega_{\rm
p}t)\right)\,,$ (59)
which is consistent with Jones & Andersson (2001) and Link & Epstein (2001).
We name the period residual resulting from the far-field torque as the spin-
down term.
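The two harmonics of the biaxial spin-down term in Eq. (59) can be compared directly. The sketch below uses illustrative $\tau_{\rm rad}$ and $\omega_{\rm p}$ scales, with the geometry of the biaxial case of Fig. 6, and checks the condition $\cot\theta_{0}>\tan\chi/8$ that determines which harmonic dominates:

```python
import numpy as np

# Illustrative (assumed) scales; the geometry matches the biaxial case of Fig. 6
k1, P0 = 1.0, 5.0                               # torque coefficient, spin period (s)
tau_rad, omega_p = 1e11, 2.0 * np.pi / 5.0e7    # spin-down and precession scales (s)
chi, theta0 = np.radians(45.0), np.radians(15.0)

prefac = -3.0 * k1 * P0 / (2.0 * tau_rad * omega_p)
A1 = prefac * 0.5 * np.sin(2.0 * chi) * np.sin(2.0 * theta0)   # harmonic at omega_p
A2 = prefac * 0.25 * np.sin(theta0)**2 * np.sin(chi)**2        # harmonic at 2*omega_p

# The omega_p harmonic dominates exactly when cot(theta0) > tan(chi)/8;
# here |A1/A2| is about 30, consistent with the discussion of Fig. 6.
print(abs(A1 / A2))
print(abs(A1) > abs(A2), 1.0 / np.tan(theta0) > np.tan(chi) / 8.0)
```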
Table 2: The intrinsic and effective parameters for the timing residuals shown in Figs. 6–10. Columns $P_{0}$ through $\eta$ list the intrinsic parameters; the remaining columns list the effective parameters.

| Case | $P_{0}\,(\rm s)$ | $B\,(\rm G)$ | $\epsilon$ | $\delta$ | $\theta_{0}\,(\degr)$ | $\chi\,(\degr)$ | $\eta\,(\degr)$ | $\epsilon_{\rm eff}$ | $\delta_{\rm eff}$ | $\theta_{\rm eff,0}\,(\degr)$ | $\chi_{\rm eff}\,(\degr)$ | $\eta_{\rm eff}\,(\degr)$ | $T_{\rm eff}\,(\rm yr)$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| I | 5 | $10^{14}$ | $10^{-7}$ | 0 | 15 | 45 | 0 | $1.00\times 10^{-7}$ | $7.61\times 10^{-3}$ | 15.4 | 45.4 | 0 | 1.65 |
| II | 5 | $10^{14}$ | $10^{-7}$ | 1 | 15 | 45 | 45 | $9.96\times 10^{-8}$ | 1.01 | 15.3 | 45.6 | 44.8 | 2.38 |
| III | 5 | $10^{14}$ | $10^{-7}$ | 0 | 15 | 85 | 0 | $1.01\times 10^{-7}$ | 0.0149 | 15.1 | 85.1 | 0 | 1.63 |
| IV | 5 | $10^{14}$ | $10^{-7}$ | 1 | 15 | 85 | 45 | $1.01\times 10^{-7}$ | 0.986 | 15.1 | 85.1 | 44.2 | 2.34 |
| V | 5 | $10^{14}$ | $10^{-4}$ | 0 | 15 | 45 | 0 | $1.00\times 10^{-4}$ | $7.50\times 10^{-6}$ | 15.0 | 45.0 | 0 | 0.00164 |
| VI | 5 | $10^{14}$ | $10^{-4}$ | 1 | 15 | 45 | 45 | $1.00\times 10^{-4}$ | 1.00 | 15.0 | 45.0 | 45.0 | 0.00236 |
| VII | 5 | $5\times 10^{14}$ | $10^{-7}$ | 0 | 15 | 45 | 0 | $1.07\times 10^{-7}$ | 0.261 | 25.3 | 55.3 | 0 | 1.87 |
| VIII | 5 | $5\times 10^{14}$ | $10^{-7}$ | 1 | 15 | 45 | 45 | $9.99\times 10^{-8}$ | 1.15 | 24.9 | 60.2 | 36.1 | 2.75 |
| IX | 5 | $10^{14}$ | $10^{-5}$ | 0 | 15 | 45 | 0 | $1.00\times 10^{-5}$ | $7.50\times 10^{-5}$ | 15.0 | 45.0 | 0 | 0.0164 |
| X | 5 | $10^{14}$ | $10^{-5}$ | 8 | 15 | 45 | 0 | $1.00\times 10^{-5}$ | 8.01 | 15.0 | 45.0 | 45.0 | 0.0603 |
Figure 6: The residuals of the period and the period derivative for biaxial and triaxial NSs with $k_{1}=1$ and $k_{2}=2$. The parameters for the biaxial and triaxial cases are shown in Case I and Case II of Table 2, respectively. The initial period derivative is $\dot{P}_{0}=8.22\times 10^{-13}\,\rm s\,s^{-1}$ for the biaxial case and $\dot{P}_{0}=8.82\times 10^{-13}\,\rm s\,s^{-1}$ for the triaxial case.

Figure 7: Same as Fig. 6, except for $\chi=85^{\circ}$. The parameters for the biaxial and triaxial cases are shown in Case III and Case IV of Table 2, respectively. The initial period derivative is $\dot{P}_{0}=1.24\times 10^{-12}\,\rm s\,s^{-1}$ for the biaxial case and $\dot{P}_{0}=1.27\times 10^{-12}\,\rm s\,s^{-1}$ for the triaxial case.

Figure 8: Same as Fig. 6, but with $\epsilon=10^{-4}$. The parameters for the biaxial and triaxial cases are shown in Case V and Case VI of Table 2, respectively. The initial period derivative is $\dot{P}_{0}=8.22\times 10^{-13}\,\rm s\,s^{-1}$ for the biaxial case and $\dot{P}_{0}=8.82\times 10^{-13}\,\rm s\,s^{-1}$ for the triaxial case.

Figure 9: The period residual for a biaxial case (upper) and a triaxial case (lower) for $B=5\times 10^{14}\,\rm G$. The parameters for the biaxial and triaxial cases are shown in Case VII and Case VIII of Table 2, respectively. The period derivative is $\dot{P}_{0}=2.06\times 10^{-11}\,\rm s\,s^{-1}$ for the biaxial case and $\dot{P}_{0}=2.20\times 10^{-11}\,\rm s\,s^{-1}$ for the triaxial case.

Figure 10: The residuals of the period for biaxial and triaxial NSs. The parameters are shown in Case IX and Case X of Table 2, respectively. The initial period derivative is $\dot{P}_{0}=8.22\times 10^{-13}\,\rm s\,s^{-1}$ for the biaxial case and $\dot{P}_{0}=8.82\times 10^{-13}\,\rm s\,s^{-1}$ for the triaxial case.
The total period residual $\Delta P$ can be expressed as
$\Delta P=\Delta P_{\rm fp}+\Delta P_{\rm sd}\,.$ (60)
Here the geometric term can be obtained from the effectively free precession problem, while the spin-down term is determined by the far-field torque. The relative amplitude of the two terms depends on the geometry of the star. One
notices that
$\displaystyle\frac{\Delta P_{\rm fp}}{P_{0}}$ $\displaystyle\sim{\rm
coefficient}\times\frac{P_{0}}{\tau_{\rm f}}\,,$ $\displaystyle\frac{\Delta
P_{\rm sd}}{P_{0}}$ $\displaystyle\sim{\rm coefficient}\times\frac{\tau_{\rm
f}}{\tau_{\rm rad}}\,,$ (61)
where the coefficients are some geometric factors depending on the geometry of
the deformed magnetars. When the free precession timescale $\tau_{\rm f}$ is
sufficiently long, corresponding to small ellipticities, the spin-down term
dominates over the geometric term. This is the case for the possible
precession of PSR B1828$-$11 (Link & Epstein, 2001; Akgun et al., 2006). If
the free precession timescale $\tau_{\rm f}$ is closer to the spin period than to the spin-down timescale $\tau_{\rm rad}$, corresponding to large
ellipticities, the geometric term will dominate. This is the case for the
possible precession of 4U 0142$+$61 (Makishima et al., 2014).
When $m>1$, the ellipticity is negative. Thus the precession direction is
opposite to the case of $m<1$. If we change $\epsilon$ into $-\epsilon$ and
keep the other parameters fixed, the geometric term changes sign while the
spin-down term stays the same. The amplitude of the spin-down term is
proportional to $k_{1}$. So the amplitude of the period residual due to vacuum
torque is just $2/3$ times that of the plasma-filled torque. In the following
examples, we only study the timing residuals of the case with $k_{1}=1$.
To study the effects of the NS geometries and the near-field torque
separately, we first neglect the contributions of the near-field torque by
taking $\epsilon=10^{-7}$ and $B=10^{14}\,\rm G$, where $\epsilon_{\rm
m}=0.015\epsilon\ll\epsilon$. In Fig. 6, we give an example of $\Delta P_{\rm
sd}\gg\Delta P_{\rm fp}$. The spin-down term dominates over the geometric term
by a factor of $\sim 10$. The morphologies for the biaxial case and the
triaxial case are basically the same. The main differences are the amplitude
and the period of the modulations.
Another important feature is that $\Delta P$ does not deviate much from a
single harmonic. We take the biaxial case as an example to understand this
point. The triaxial case can be understood in the same way qualitatively. For
the biaxial case, the spin-down term $\Delta P_{\rm sd}$ has components both
at frequencies $\omega_{\rm p}$ and $2\omega_{\rm p}$. The amplitude of
$\Delta P_{\rm sd}$ at $\omega_{\rm p}$ is larger than that of the harmonic at $2\omega_{\rm p}$ only if $\cot\theta>\tan\chi/8$. For the biaxial case in Fig. 6, $\cot\theta=\cot\theta_{0}=3.73$ while $\tan\chi/8=0.125$. The residuals at $\omega_{\rm p}$ are about 30 times larger. Therefore, the
residuals mainly come from the term at the frequency $\omega_{\rm p}$ of the
spin-down contribution.
For the biaxial case, the first harmonic at $2\omega_{\rm p}$ of the spin-down
term only plays an important role when $\cot\theta<\tan\chi/8$. So we present
an example with $\chi=85^{\circ}$ in Fig. 7 and keep other parameters fixed.
One can see that the residuals are quite different from those in Fig. 6 because the contribution at $2\omega_{\rm p}$ is comparable with that at $\omega_{\rm p}$, while the geometric term is still about $0.1$ times the spin-down term. The precession of PSR B1828$-$11 belongs to this regime.
To make the geometric term dominate over the spin-down term, the ellipticity
should be sufficiently large. In Fig. 8, we show an example with
$\epsilon=10^{-4}$ and keep the other parameters the same as Fig. 6. The
geometric term is much larger than the spin-down term by a factor $\sim
10^{5}$. The period residual is quite substantial and the effects of the
electromagnetic torques are negligible.
Such modulations may indeed have been observed in combined timing analyses of hard and soft X-rays for three magnetars: 4U 0142$+$61 (Makishima et al., 2014), 1E 1547.0$-$5408 (Makishima et al., 2021a), and SGR 1900$+$14 (Makishima et al., 2021b). These works reported the phase modulations of hard X-rays,
which are physically equivalent to the timing residuals. In their model, the
internal strong toroidal magnetic field creates a large prolate deformation
along the magnetic dipole. The soft X-ray emission is centered around the
magnetic dipole while the hard X-ray emission is somewhat misaligned with the
magnetic dipole. This model is different from ours but can be simply obtained
by redefining $\hat{\boldsymbol{\mu}}$ as the emission direction of the hard
X-rays in a direction other than the magnetic dipole and treating the star as
a biaxial one. Thus, the period residual for 4U 0142$+$61 should be on the
order of $\epsilon P\sim 0.001\,\rm s$ according to Eq. (4). For the magnetar
4U 0142$+$61, Makishima et al. (2014) found that the rotation period at
$8.69\,\rm s$ suffers slow phase modulations of $0.7\,\rm s$, with a period of
$\sim 15\,\rm h$ in the hard X-ray band ($15$–$40\,\rm keV$), indicating an
internal magnetic deformation $\epsilon\sim-1.6\times 10^{-4}$ if the
modulations are interpreted as free precession.
For the examples in Figs. 6–8, the near-field torque can be neglected. By contrast, in Fig. 9, we show biaxial and triaxial examples with a large near-field
torque. The amplitude of the residuals becomes larger compared to the cases
without the near-field torque. The period of the modulations turns into
$T_{\rm eff}$.
In all the examples above, the parameter $m$ is on the order of $0.1$. Although
the period and amplitude of the residuals for the triaxial cases are different
from the biaxial ones, the morphologies are basically the same. It is easier
to tell whether a NS is triaxial or not from the timing residuals if the
parameter $m$ is much larger. In Fig. 10, we show a triaxial example with
$\delta=8$, $\theta_{0}=15{\degr}$, and $m=0.574$. Since the wobble angle
nutates in a wider range and the Jacobi elliptic functions deviate from the
harmonic functions substantially, the morphology and period of the timing
residuals for the triaxial case are quite different from the biaxial one.
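The deviation of the Jacobi elliptic functions from harmonics can be quantified with SciPy's ellipj routine; comparing a small parameter with the value $m=0.574$ of Fig. 10:

```python
import numpy as np
from scipy.special import ellipj

# sn(u, m) approaches sin(u) for small m and deviates strongly for larger m,
# which is what reshapes the timing residuals of the strongly triaxial case.
u = np.linspace(0.0, 5.0, 500)
devs = {}
for m in (0.01, 0.574):                    # small m versus the value of Fig. 10
    sn, cn, dn, ph = ellipj(u, m)
    devs[m] = np.max(np.abs(sn - np.sin(u)))
print(devs[0.574] > devs[0.01])            # True: larger m, larger deviation
```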
## 5 Modulations on polarizations
The Stokes parameters for the polarizations directly reflect the magnetic
field rotating around the NS and the emission geometry. The precession leads
to the swing of the emission and changes the polarization. In this section, we
model the polarization of precessing magnetars and study the prospects of
detecting the free precession with polarized X-ray and radio emissions.
### 5.1 Emission model of X-rays
We use the model developed by Ho & Lai (2003), Lai & Ho (2003), and van
Adelsberg & Lai (2006) to calculate the soft thermal X-ray emission from the
surface of a NS. We assume that the emission comes from a hot region, which is
centered around the magnetic dipole axis, much smaller than the surface area
of the star, and composed of a fully ionized hydrogen atmosphere with an
effective temperature $T_{\rm eff}\simeq 5\times 10^{6}\,\rm K$. The magnetic
field is also assumed to be a dipole field, which is approximately constant
and normal to the stellar surface across the emission region.
In the highly magnetized plasma that characterizes the magnetosphere of NSs,
X-ray photons propagate in the extraordinary mode (X mode) and the ordinary
mode (O mode). The X mode is mostly polarized perpendicular to the $\boldsymbol{k}_{0}$-$\boldsymbol{B}$ plane, while the O mode is mostly polarized within the $\boldsymbol{k}_{0}$-$\boldsymbol{B}$ plane, where $\boldsymbol{k}_{0}$ is the photon propagation direction at the emission point and $\boldsymbol{B}$ is the external magnetic field. The
opacities for each mode are associated with the energy and the propagation
direction of the X-ray photons, as well as the strength and the direction of
the magnetic field in the magnetized plasma. The typical X mode opacity
$\kappa_{\rm X}$ is much smaller than the O mode opacity $\kappa_{\rm O}$
(Meszaros, 1992), satisfying $\kappa_{\rm
X}\sim\left(E/E_{\rm Be}\right)^{2}\kappa_{\rm O}$, where $E_{\rm Be}=\hbar
eB/m_{\rm e}c$ is the electron cyclotron energy in the magnetic field. The
decoupling density of the X mode photon $\rho_{\rm X}$ is much larger than
that of the O mode photon $\rho_{\rm O}$. As a result, the X mode photons can
escape from deeper and hotter layers than the O mode photons. The emergent
radiation is linearly polarized to a high degree (Gnedin & Sunyaev, 1974;
Meszaros et al., 1988; Pavlov & Zavlin, 2000; Ho & Lai, 2003; Lai & Ho, 2003).
In strong magnetic fields, it has long been predicted that the vacuum becomes
birefringent, and the dielectric tensor describing the atmospheric plasma of
magnetars must be corrected for quantum electrodynamics (QED) vacuum effects
(Heisenberg & Euler, 1936; Tsai & Erber, 1975). At the vacuum resonance, the
contributions of the plasma and the vacuum to the dielectric tensor cancel
each other (Lai & Ho, 2003). When a photon with energy $E$ traverses through
the density gradient of the plasma, it will encounter the vacuum resonance at
the density
$\rho_{\rm V}=0.96Y_{e}^{-1}E_{1}^{2}B_{14}^{2}f_{\rm
B}^{-2}\mathrm{~{}g}\mathrm{~{}cm}^{-3}\,,$ (62)
where $Y_{e}=Z/A$ with $Z$ and $A$ the charge number and mass number of the ion
respectively, $E_{1}=E/(1\,\rm keV)$, and $f_{\rm B}$ is a slowly varying
function of $B$ that is on the order of unity. If the density variations of
the plasma are sufficiently gentle as the photon propagates through the
inhomogeneous plasma, an X mode (O mode) photon will be converted into an O
mode (X mode) photon when it traverses the vacuum resonance. For the mode
conversion to be effective, the adiabatic condition $E\geq E_{\rm ad}$ must be
satisfied (Ho & Lai, 2003; Lai & Ho, 2003), with
$E_{\mathrm{ad}}=2.52\left[f_{\rm B}\tan\theta_{\rm kB}\left|1-\left(E_{\rm
Bi}/E\right)^{2}\right|\right]^{2/3}\left(\frac{1\mathrm{~{}cm}}{H_{\rm\rho}}\right)^{1/3}\,.$
(63)
Here $\theta_{\rm kB}$ is the angle between the magnetic field and the photon
propagation direction, $E_{\rm Bi}$ is the ion cyclotron energy, and
$H_{\rm\rho}$ is the density scale height along the ray. A photon with energy $E\sim E_{\rm ad}$ undergoes partial mode conversion. In general,
the mode conversion probability of a photon is (Lai & Ho, 2003)
$P_{\rm c}=1-\exp\left[-(\pi/2)\left(E/E_{\mathrm{ad}}\right)^{3}\right]\,.$
(64)
To obtain the emergent intensities, one needs to solve the radiative transfer
equations (RTEs) of the two modes subject to the constraints of hydrostatic
and radiative equilibria (Ho & Lai, 2001, 2003). van Adelsberg & Lai (2006)
provided the fitting formulae of the temperature profile for different
atmospheric models with different magnetic fields $B$ and effective
temperatures $T_{\rm eff}$. Once the temperature profile is known, the
emergent intensities can be obtained by a single integration of the RTEs. We
use the fitted temperature profile and integrate the RTEs including the vacuum
effect following van Adelsberg & Lai (2006). The spectral intensities for the
X mode photons $I_{\rm X}(\theta_{\rm em})$ and the O mode photons $I_{\rm
O}(\theta_{\rm em})$ at different emission angles $\theta_{\rm em}$ are
obtained. The “intrinsic” linear polarization fraction at the emission point
is defined as
$\Pi_{\rm em}=\frac{I_{\rm O}(\theta_{\rm em})-I_{\rm X}(\theta_{\rm
em})}{I_{\rm O}(\theta_{\rm em})+I_{\rm X}(\theta_{\rm em})}\,,$ (65)
where $\theta_{\rm em}$ is the angle between the photon propagation direction
and the surface normal.
Figure 11: The X-ray emission geometry. The observer lies in the
$\hat{\boldsymbol{e}}_{\rm X}$-$\hat{\boldsymbol{e}}_{\rm Z}$ plane at an
inclination angle $\iota$. A hotspot is located at one of the magnetic poles.
An X-ray photon emitted at an angle $\theta_{\rm em}$ with respect to the surface
normal will be received at colatitude $\Theta$ due to the light bending
effect. A coordinate system $IJK$ with the basis
$\\{\hat{\boldsymbol{i}},\hat{\boldsymbol{j}},\hat{\boldsymbol{k}}\\}$ is
introduced, where $\hat{\boldsymbol{k}}$ is along the line of sight,
$\hat{\boldsymbol{i}}$ lies in the plane spanned by the line of sight and the
angular momentum $\boldsymbol{L}$, and $\hat{\boldsymbol{j}}$ is determined by
$\hat{\boldsymbol{L}}\times\hat{\boldsymbol{k}}=-\hat{\boldsymbol{j}}\sin\iota$.
To determine the polarization state of the signals, one must consider the
propagation of polarized radiation in the magnetosphere of magnetars, whose
dielectric properties are dominated by the vacuum polarization in the X-ray
band (Heyl et al., 2003). When an X-ray photon propagates in the
magnetosphere, its polarization state evolves adiabatically along the varying
magnetic field up to the polarization limiting radius $r_{\rm pl}$, which is
far from the surface of the NS. Thus, the observed Stokes parameters are
determined by the “frozen” polarization state at $r_{\rm pl}$. Adiabatic
evolution of the photon modes in the magnetosphere leads to a significant
polarization fraction even when the emission comes from extended regions on
the surface (Heyl et al., 2003; Fernandez & Davis, 2011; Taverna et al.,
2015). In contrast, if the polarization state is determined by the emission at
the surface, additions of the Stokes parameters from distinct regions tend to
cancel each other and lead to a low polarization fraction.
The magnetic field direction that determines the polarization can be
characterized by the polar angle $\Theta$ and the azimuthal angle $\Psi$. As
shown in Fig. 11, the polar angle between the magnetic dipole field and the
line of sight satisfies
$\cos\Theta=\cos\iota\cos\alpha+\sin\iota\sin\alpha\cos\Phi\,.$ (66)
The azimuthal angle $\Psi$ is the position angle (PA) of the polarized
emission. To obtain the PA, we project the dipole field onto the
$\hat{\boldsymbol{i}}$-$\hat{\boldsymbol{j}}$ plane. Introducing the
polarization basis
$\displaystyle\hat{\boldsymbol{e}}_{1}^{\rm p}$
$\displaystyle=\frac{(\hat{\boldsymbol{k}}\times\hat{\boldsymbol{\mu}})\times\hat{\boldsymbol{k}}}{\sin\Theta}\,,$
$\displaystyle\hat{\boldsymbol{e}}_{2}^{\rm p}$
$\displaystyle=\frac{\hat{\boldsymbol{k}}\times\hat{\boldsymbol{\mu}}}{\sin\Theta}\,,$
(67)
the PA measured from the projection of the spin axis onto the plane of the sky
in the counterclockwise direction is given by
$\displaystyle\cos\Psi$ $\displaystyle=\hat{\boldsymbol{e}}_{1}^{\rm
p}\cdot\hat{\boldsymbol{i}}=\frac{\sin\iota\cos\alpha-\cos\iota\sin\alpha\cos\Phi}{\sin\Theta}\,,$
(68) $\displaystyle\sin\Psi$ $\displaystyle=\hat{\boldsymbol{e}}_{1}^{\rm
p}\cdot\hat{\boldsymbol{j}}=-\frac{\sin\alpha\sin\Phi}{\sin\Theta}\,.$ (69)
Then, we obtain the expressions of PA in the RVM as (Radhakrishnan & Cooke,
1969; Lorimer & Kramer, 2005)
$\tan\Psi=\frac{\sin\alpha\sin\Phi}{\cos\iota\sin\alpha\cos\Phi-\sin\iota\cos\alpha}\,.$
(70)
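The RVM position angle of Eq. (70) is straightforward to evaluate; the inclination angles below are illustrative:

```python
import numpy as np

def rvm_pa(Phi, alpha, iota):
    # Eq. (70): rotating-vector-model position angle (Radhakrishnan & Cooke 1969)
    num = np.sin(alpha) * np.sin(Phi)
    den = np.cos(iota) * np.sin(alpha) * np.cos(Phi) - np.sin(iota) * np.cos(alpha)
    return np.arctan2(num, den)   # branch-resolved version of arctan(num/den)

# Illustrative angles: magnetic inclination 30 deg, viewing inclination 50 deg
alpha, iota = np.radians(30.0), np.radians(50.0)
Phi = np.linspace(-np.pi, np.pi, 201)
pa = rvm_pa(Phi, alpha, iota)
print(np.all(np.isfinite(pa)))  # True: the S-shaped PA swing is well defined here
```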
The rotation phase at the polarization limiting radius is $\Phi(r_{\rm
pl})=\Phi(R)+r_{\rm pl}/R_{\rm LC}$, where $\Phi(R)$ is the rotation phase
when the photon is emitted at the surface. Magnetars rotate slowly, with
$r_{\rm pl}/R_{\rm LC}\ll 1$ and $\Phi(r_{\rm pl})\simeq\Phi(R)$.
In principle, one needs to evolve the polarization state along the magnetic
field to determine $\Psi$ for different points on the extended hotspot (Heyl
et al., 2003; Taverna et al., 2015). However, we consider a hot region much smaller than the surface area of the magnetar, with a magnetic field that is constant across the emission region. Under this condition, the observed PA can be
approximated as $\Psi(r_{\rm pl})\simeq\pi+\Psi(R)$ (Lai & Ho, 2003; van
Adelsberg & Lai, 2006). Therefore, the polarization state only changes with a
constant phase shift compared to the intrinsic one. The Stokes parameters $Q$
and $U$ that are normalized to the total intensity $I$ are
$\displaystyle Q/I=$ $\displaystyle\Pi_{\rm em}\cos 2\Psi(r_{\rm pl})\,,$ (71)
$\displaystyle U/I=$ $\displaystyle\Pi_{\rm em}\sin 2\Psi(r_{\rm pl})\,.$ (72)
To obtain the spectral “flux” of the Stokes parameters, the propagation
effects in the curved spacetime such as the light bending and gravitational
redshift need to be considered. In the $IJK$ frame shown in Fig. 11, the
points on the surface of the NS are described by the azimuthal angle
$\phi_{\rm h}$ and polar angle $\theta_{\rm h}$. A photon emitted at an angle
$\theta_{\rm em}$ with respect to the surface normal escapes to infinity at a
different angle $\theta_{\rm h}$ due to the light bending effect in the curved
spacetime. The relation between the two angles is given by the ray tracing
function (Pechenick et al., 1983; Page, 1995)
$\theta_{\rm h}(\theta_{\rm
em})=\int_{0}^{R_{\mathrm{s}}/2R}x\left[\left(1-\frac{R_{\mathrm{s}}}{R}\right)\left(\frac{R_{\mathrm{s}}}{2R}\right)^{2}-(1-2u)u^{2}x^{2}\right]^{-1/2}{\rm
d}u\,,$ (73)
where $x\equiv\sin\theta_{\rm em}$, $R_{\rm s}\equiv 2GM/c^{2}$ is the
Schwarzschild radius. In a flat spacetime, the visible condition is simply
$\cos\theta_{\rm h}>0$. The strong gravity of the NS allows the observer to
see the region with negative $\cos\theta_{\rm h}$. The critical value of
$\cos\theta_{\rm h}$ that defines the dark side of the star is determined by the
condition $\theta_{\rm em}=90^{\circ}$.
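The ray-tracing function of Eq. (73) can be integrated numerically; in the weak-field limit it should reduce to $\theta_{\rm h}\simeq\theta_{\rm em}$, which provides a convenient check (the compactness values below are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def theta_h(theta_em, R_over_Rs):
    # Eq. (73): ray-tracing function in Schwarzschild spacetime, x = sin(theta_em)
    x = np.sin(theta_em)
    b = 0.5 / R_over_Rs              # R_s / (2R), the upper integration limit
    g = 1.0 - 1.0 / R_over_Rs        # 1 - R_s / R
    integrand = lambda u: x / np.sqrt(g * b**2 - (1.0 - 2.0 * u) * u**2 * x**2)
    val, _ = quad(integrand, 0.0, b)
    return val

th = np.radians(30.0)
print(abs(theta_h(th, 1000.0) - th) < 0.01)  # True: nearly flat spacetime
print(theta_h(th, 3.0) > th)                 # True: light bending enlarges theta_h
```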
The differential spectral flux from the hotspot is (Pechenick et al., 1983;
Beloborodov, 2002)
$\displaystyle{\rm d}F_{j}(E_{\infty},\Phi)=$
$\displaystyle\left(1-\frac{r_{g}}{R}\right)^{1/2}I_{j}(\theta_{\rm
em},E)\cos\theta_{\rm em}\frac{{\rm d}\cos\theta_{\rm em}}{{\rm
d}\cos\theta_{\rm h}}\frac{{\rm d}S}{D^{2}}$ $\displaystyle=$
$\displaystyle\frac{R^{2}}{D^{2}}\left(1-\frac{r_{g}}{R}\right)^{1/2}I_{j}(\theta_{\rm
em},E)\sin\theta_{\rm em}\,{\rm d}\sin\theta_{\rm em}\,{\rm d}\phi_{\rm h}$
$\displaystyle=$
$\displaystyle\frac{R^{2}}{D^{2}}\left(1-\frac{r_{g}}{R}\right)^{1/2}I_{j}(\arcsin
x,E)x\,{\rm d}x\,{\rm d}\phi_{\rm h}\,,$ (74)
where ${\rm d}S=R^{2}\sin\theta_{\rm h}{\rm d}\theta_{\rm h}{\rm d}\phi_{\rm
h}$ is the visible surface element, $D$ is the distance between the NS and the
observer, $I_{j}\,(j=\rm X,O)$ are the specific intensities of the X and O
mode photons at the emission point, and $E_{\infty}=(1-R_{\rm s}/R)^{1/2}E$ is
the observed energy of the X-ray photons. The spectral flux then can be
integrated as (Page, 1995)
$F_{j}(E_{\infty},\Phi)=\frac{R^{2}}{D^{2}}\left(1-\frac{r_{g}}{R}\right)^{1/2}\int_{0}^{1}xI_{j}(\arcsin
x,E){\rm d}x\int_{0}^{2\pi}{\rm d}\phi_{\rm h}\,.$ (75)
At a specific rotation phase $\Phi$, one first obtains the angle $\Theta$.
Then the ranges of $\theta_{\rm h}$ and $\phi_{\rm h}$ are determined by
$\Theta$ and the opening angle $\rho$. The dependence of $\theta_{\rm h}$ has
been transformed into that of $\theta_{\rm em}$ by the relation in Eq. (73).
Finally, the observed spectral flux for a given mode is given by Eq. (75).
Note that the integration domain is restricted to the hot region with the
intensities being zero outside the hot region. The observed spectral flux
$F_{\rm I}$, $F_{\rm Q}$, and $F_{\rm U}$ that are associated with the Stokes
parameters $I$, $Q$, and $U$ are (van Adelsberg & Lai, 2006; van Adelsberg &
Perna, 2009)
$\displaystyle F_{\rm I}=$ $\displaystyle F_{\rm O}+F_{\rm X}\,,$ (76)
$\displaystyle F_{\rm Q}=$ $\displaystyle F_{\rm I}\,\Pi_{\rm em}\cos
2\Psi(r_{\rm pl})\,,$ (77) $\displaystyle F_{\rm U}=$ $\displaystyle F_{\rm
I}\,\Pi_{\rm em}\sin 2\Psi(r_{\rm pl})\,,$ (78)
and the observed polarization fraction is
$\Pi_{\rm L}=\frac{(F_{\rm Q}^{2}+F_{\rm U}^{2})^{1/2}}{F_{\rm
I}}=\lvert\Pi_{\rm em}\rvert\,.$ (79)
The observed polarization fraction is equal to the intrinsic polarization
fraction, which arises from the assumption that the magnetic field is constant
across the hot region which is much smaller than the surface area of the star.
Figure 12: The phase evolutions of the spectral flux $F_{\rm I}$ (upper), the
linear polarization $F_{\rm Q}/F_{\rm I}$ (middle), and the linear
polarization $F_{\rm U}/F_{\rm I}$ (lower) for photon energies $E=2\,\rm
keV,3\,\rm keV$, and $5\,\rm keV$. The parameters of the model are the opening
angle of the polar cap, $\rho=5^{\circ}$, the dipole magnetic field
$B=10^{14}\,\rm G$, the effective temperature $T_{\rm eff}=5\times 10^{6}\,\rm
K$, the inclination angle of the observer $\iota=45^{\circ}$, and the magnetic
inclination angle $\alpha=65^{\circ}$.
We give an example with a magnetic field $B=10^{14}\,\rm G$, an effective
temperature $T_{\rm eff}=5\times 10^{6}\,\rm K$, and a surface gravity
$g=GM/R^{2}\left(1-2GM/R/c^{2}\right)^{-1/2}=2.4\times 10^{14}\,\rm
cm\,s^{-2}$ in Fig. 12. The normalized $F_{\rm I}$, $F_{\rm Q}/F_{\rm I}$, and
$F_{\rm U}/F_{\rm I}$ for different photon energies are shown. There are
distinctive features that reflect the interplay between the NS geometry, the
strong magnetic field, and vacuum birefringence. Because the magnetic field
$B>B_{l}\simeq 7\times 10^{13}\,\rm G$, the vacuum resonance density lies
between the decoupling densities of the X mode and O mode photons ($\rho_{\rm
O}<\rho_{\rm V}<\rho_{\rm X}$), the linear polarization $F_{\rm Q}/F_{\rm I}$
for different photon energies coincides in phase as the star rotates (van
Adelsberg & Lai, 2006).
The emergent radiation is dominated by the X mode except for $\theta_{\rm em}$
close to zero. The polarization degree $\Pi_{\rm L}$ is smaller when the
rotation phase $\Phi$ is close to 0 than for when it is close to $90^{\circ}$.
This can be understood by considering the variation in X and O mode opacities
when varying the angle between the photon propagation and magnetic field
directions. In our chosen NS geometry, the emission angle $\theta_{\rm em}$ is
closer to zero for a rotation phase around $\sim 0^{\circ}$ compared to other
phases. The difference between the X and the O mode opacities becomes smaller.
Thus, the polarization fraction is smaller than at other phases. In contrast,
the emission angle $\theta_{\rm em}$ is close to $\sim 45^{\circ}$ when the
rotation phase is far away from $0^{\circ}$ or $360^{\circ}$. At those angles,
the difference between the X and O mode opacities is maximal and the
polarization fraction is larger.
### 5.2 Modulations on the Stokes parameters of X-rays
Figure 13: The phase-resolved spectral flux $F_{\rm I}$ (upper), $F_{\rm Q}/F_{\rm I}$ (middle), and $F_{\rm U}/F_{\rm I}$ (lower) of $E=3\,\rm keV$ at different precession phases for a biaxial NS. The parameters are shown in Case I of Table 3. Table 3: The intrinsic and effective parameters for modulated polarizations of X-rays and radio signals shown in Fig. 13-17. Case | Intrinsic parameters | Effective parameters
---|---|---
| $P_{0}\,(\rm s)$ | $B\,(\rm G)$ | $\epsilon$ | $\delta$ | $\theta_{0}\,(\degr)$ | $\chi\,(\degr)$ | $\eta\,(\degr)$ | $\iota\,(\degr)$ | $\epsilon_{\rm eff}$ | $\delta_{\rm eff}$ | $\theta_{\rm eff,0}\,(\degr)$ | $\chi_{\rm eff}\,(\degr)$ | $\eta_{\rm eff}\,(\degr)$ | $T_{\rm eff}\,(\rm yr)$
I | 5 | $10^{14}$ | $10^{-7}$ | 0 | 15 | 65 | 0 | 45 | $1.01\times 10^{-7}$ | 0.0124 | 15.3 | 65.3 | 0 | 1.64
II | 5 | $10^{14}$ | $10^{-7}$ | 1 | 15 | 65 | 45 | 45 | $1.00\times 10^{-7}$ | 0.993 | 15.2 | 65.5 | 44.4 | 2.35
III | 5 | $10^{14}$ | $10^{-7}$ | 0 | 15 | 45 | 0 | 45 | $1.00\times 10^{-7}$ | 0.00761 | 15.4 | 45.4 | 0 | 1.65
IV | 5 | $10^{14}$ | $10^{-7}$ | 1 | 15 | 45 | 45 | 45 | $9.96\times 10^{-8}$ | 1.01 | 15.3 | 45.6 | 44.8 | 2.38
V | 5 | $5\times 10^{14}$ | $10^{-7}$ | 0 | 15 | 45 | 0 | 45 | $1.07\times 10^{-7}$ | 0.261 | 25.3 | 55.3 | 0 | 1.87
VI | 5 | $5\times 10^{14}$ | $10^{-7}$ | 1 | 15 | 45 | 45 | 45 | $9.99\times 10^{-8}$ | 1.15 | 24.9 | 60.2 | 36.1 | 2.75
VII | 5 | ${10^{14}}$ | ${10^{-5}}$ | 0 | 18 | 40 | 0 | 45 | ${1.00\times 10^{-5}}$ | ${6.20\times 10^{-5}}$ | 18.0 | 40.0 | 0 | 0.0167
VIII | 5 | ${10^{14}}$ | ${10^{-5}}$ | 5 | 18 | 40 | 0 | 45 | ${1.00\times 10^{-5}}$ | ${5.00}$ | 18.0 | 40.0 | 0 | 0.0490
Since the phase evolution of the Stokes parameters are similar for different
energies in Fig. 12, we fix $E=3\,\rm keV$ to study the modulations due to the
precession. In Fig. 13, we show the phase evolution of the normalized $F_{\rm
I}$, $F_{\rm Q}/F_{\rm I}$, and $F_{\rm U}/F_{\rm I}$ at different precession
phases for a biaxial NS. Only half of the precession period is shown because
the precession is periodic. The modulations for the triaxial case are similar.
The precession mainly causes variations at a rotation phase close to
$0^{\circ}$.
Figure 14: The time evolutions of the phase-averaged normalized spectral flux
$\left\langle F_{\rm I}\right\rangle/\left\langle F_{\rm I}^{\rm
max}\right\rangle$ (upper), the phase-averaged linear polarization
$\left\langle F_{\rm Q}\right\rangle/\left\langle F_{\rm I}\right\rangle$
(middle), and the phase-averaged polarization fraction $\left\langle\Pi_{\rm
L}\right\rangle$ (lower) for photon energies $E=2\,\rm keV$ (left), $E=3\,\rm
keV$ (middle) and $E=5\,\rm keV$ (right). The parameters for the biaxial and
triaxial cases are shown in Case I and Case II of Table 3 respectively.
The phase-resolved X-ray polarization is usually hard to get from
observations. Thus, we give the phase-averaged Stokes parameters $\left\langle
F_{\rm I}\right\rangle$, $\left\langle F_{\rm Q}\right\rangle/\left\langle
F_{\rm I}\right\rangle$ and polarization degree $\left\langle\Pi_{\rm
L}\right\rangle$ in Fig. 14. The phase averaged $F_{\rm U}$ is zero and it is
omitted in the figure. Both biaxial and triaxial cases are shown and the
phase-averaged spectral flux $\left\langle F_{\rm I}\right\rangle$ is
normalized to the maximal value.
The amplitude of $\left\langle F_{\rm I}\right\rangle/\left\langle F_{\rm
I}^{\rm max}\right\rangle$ can vary up to $\sim 40\%$ during the precession,
which is quite substantial. Heyl & Hernquist (2002) used free precession to
explain the flux variations from the magnetar 1E 161348$-$5055\. They modeled
the hotspot emission in a similar way to our work. The large variations of the
phase-averaged flux are partially caused by the emission model. We assume that
the emission comes from a small hot region centered around the magnetic axis.
If the emission comes from different patches or even the whole stellar surface
with temperature profiles, the large modulations on the spectral flux might be
reduced.
The phase-averaged linear polarization $\left\langle F_{\rm
Q}\right\rangle/\left\langle F_{\rm I}\right\rangle$ and the phase-averaged
polarization fraction $\left\langle\Pi_{\rm L}\right\rangle$ vary $\sim
10\%$–$20\%$ in our examples. Different from the spectral flux, the
modulations on the polarizations may not be reduced if the emission comes from
different patches of the stellar surface. As we discussed before, when an
X-ray photon propagates in the magnetosphere, its polarization state evolves
adiabatically along the varying magnetic field up to the polarization limiting
radius $r_{\rm pl}$, which is far from the surface of the NS. Polarization
states of photons from different patches of the star largely do not cancel.
The magnetic field direction at $r_{\rm pl}$ changes during the precession and
the modulations should always exist.
### 5.3 Modulations on polarized radio emission
Figure 15: The upper panel shows the PA evolution at different precession
phases for both biaxial and triaxial cases. The lower panel shows the time
evolution of the inverse of the steepest gradient, $-\sin\beta/\sin\alpha$.
The parameters for the biaxial and triaxial cases are shown in Case III and
Case IV of Table 3 respectively. Figure 16: Same as Fig. 15 but with a larger
magnetic field $B=5\times 10^{14}\,\rm G$. The parameters for the biaxial and
triaxial cases are shown in Case V and Case VI of Table 3 respectively. Figure
17: The time evolution of $\theta$, $\alpha$, and $-\sin\beta/\sin\alpha$. The
parameters for the biaxial and triaxial cases are shown in Case VII and Case
VIII of Table 3 respectively.
We use the RVM to study the modulations on the polarized radio emission. It
may be possible to observe the swing of the emission region and the
modulations on the PA due to the precession.
We present the PA evolutions at different precession phases for a magnetar
with $B=10^{14}\,\rm G$ in the upper panel of Fig. 15. Due to the precession,
the slope of the PA will change. The steepest gradient of the PA is
$\frac{{\rm d}\Psi}{{\rm
d}\Phi}\bigg{|}_{\Phi=0}=-\frac{\sin\alpha}{\sin\beta}\,.$ (80)
Practically, the precession of magnetars may be observed from the variations
of the steepest gradient of the PA. As we take the inclination angle of the
observer to be $\iota={\pi}/{4}$, the impact parameter $\beta=\iota-\alpha$
changes sign during precession. Thus the slope of the PA can potentially
change sign. In the lower panel of Fig. 15, we show the inverse of the
steepest gradient, $-\sin\beta/\sin\alpha$ for a better illustration.
For comparison, we also give examples with $B=5\times 10^{14}\,\rm G$ in Fig.
16. The effects of the near-field torque cannot be neglected. The wobble angle
varies across a much larger range and the modulations of the steepest gradient
are distinct from that in Fig. 15. Moreover, the differences between the
biaxial and triaxial cases are more obvious than Fig. 15.
The parameter $m$ is on the order of $0.1$ for the examples shown in Table 3.
As shown in the lower panels of Fig. 15 and Fig. 16, the modulations for the
biaxial and triaxial cases are similar. In contrast, the triaxiality can be
observed directly from polarizations if $m$ is large enough. We show a
triaxial case with $m=0.528$ in Fig. 17. For the biaxial case, the wobble
angle is constant, the variation of $\alpha$ and $-\sin\beta/\sin\alpha$ is
harmonic and has only one peak. In fact, these features are true for any
biaxial case according to Eq. (41). For the triaxial case, the modulations of
$\alpha$ and $-\sin\beta/\sin\alpha$ are not harmonic. Due to the variation of
the wobble angle, the time evolution of $\alpha$ and $-\sin\beta/\sin\alpha$
also shows a “double-peak” structure in our case.
## 6 Discussions
In our work, we gave the analytical solutions to the free precession of
triaxially-deformed NSs following Landau & Lifshitz (1960), Akgun et al.
(2006), and Wasserman et al. (2022). We assumed that the rotation is rigid and
ignored superfluid pinning or any internal dissipations. The pinning of the
superfluid can lead to fast precession comparable to the spin frequency
(Shaham, 1977; Sedrakian et al., 1998). It is also possible that the
precession is damped quickly by the coupling of normal fluids to the
superfluid core if the strong internal magnetic field of magnetars unpairs the
proton superconductor in the stellar core (Sedrakian, 2016). In this regard,
our studies can serve as a starting point to further investigate the internal
couplings and dissipative mechanisms.
The strong magnetic fields of magnetars also induce large electromagnetic
torques, which are important to determine the spin and geometry evolutions of
precessing magnetars. Assuming a dipole field, we considered both the near-
field and far-field electromagnetic torques in the forced precession problem.
For magnetars with large external magnetic fields, the near-field torque
couples to the precession solution and affects the motions substantially. This
is the central idea in the so called radiative precession advocated by Melatos
(1997, 2000). The near-field torque can be effectively absorbed into the
moment of inertia tensor of the star (Melatos, 2000; Glampedakis & Jones,
2010; Zanazzi & Lai, 2015). We solved the forced precession problem
analytically by transforming the near-field torque into an effective
deformation along the magnetic axis. We found that the effects of the near-
field torque cannot be ignored in the dynamical evolution if $\epsilon_{\rm
m}\gtrsim 0.1\epsilon$, where $\epsilon_{\rm m}$ is the effective ellipticity
induced by the external magnetic field and $\epsilon$ is the ellipticity
sourced from the magnetic and elastic deformations.
The far-field torque leads to the spin down and secular change of the magnetic
inclination angle. Perturbation methods were used to study the forced
precession under the far-field torque. We obtained analytical solutions of the
spin evolution for general triaxial stars. One part of the far-field torque
comes from the direct emission of electromagnetic waves due to the time-
varying magnetic multipolar moments of the star. Another part is caused by
electromagnetic emission from charged particles being accelerated in the
magnetosphere. Therefore, we not only used the simple vacuum torque, but also
applied a parametrized plasma-filled torque proposed by Li et al. (2012) and
Philippov et al. (2014) according to MHD simulations. The form of the plasma-
filled torque is equivalent to adding an alignment component of the far-field
torque compared to the vacuum case. Note that the near-field torque affects
the spin-down rate indirectly because it leads to the variations of the
magnetic inclination angle for a precessing magnetar.
In our calculations, we assumed that the external magnetic field was dipolar.
It is commonly believed that higher multipoles should be considered crucially
for magnetars. Zanazzi & Lai (2015) investigated the near-field torque
contributed by the quadrupole field. The effective deformation is not
symmetric about a specific axis and can be classified into two independent
components. The multipoles contributing to the near-field torque can also be
absorbed into the moment of inertia tensor of the star. The solutions in our
work can still be applied but with different effective parameters. The direct
electromagnetic emission from the magnetic multipoles is probably dominated by
the magnetic dipole. On the perspective of observations, we may leave the
coefficients in the parametrized far-field torque in Eq. (20) as free
parameters to absorb the effects of higher magnetic multipoles, as well as
complex charges and currents in the magnetosphere.
During the precession, the torques are locked in phase with the precession,
which in turn modulates the spin-down rate. We first studied the spin
evolution of triaxially-deformed magnetars and gave the analytical timing
residuals, which contained the geometric term resulting from the phase
modulations of the precession and the spin-down term arising from the varying
far-field torque.
The polarization maps out the geometry of the emission region and can serve as
a useful probe to find the precession of magnetars. For the soft X-rays, we
used the model given in Lai & Ho (2003) and van Adelsberg & Lai (2006) to
calculate the observed Stokes parameters emitted from a hot region centered
around the magnetic dipole. The general relativistic effects and the vacuum
birefringence were considered. We investigated the modulations on the spectral
intensities, the linear polarization, and the polarization fraction for both
phase-resolved and phase-averaged scenarios. For radio signals, we simply used
the RVM to study the PA evolution for different geometries during the
precession. It is possible to detect the precession of transient magnetars
through the variations of the steepest gradient of the PA if large-amplitude
precession is excited.
The polarization state of X-rays evolves adiabatically following the varying
magnetic field it experiences up to the polarization limiting radius $r_{\rm
pl}$. The polarization states of photons from different patches of the star
tend to align at $r_{\rm pl}$, and largely do not cancel (Heyl et al., 2003;
Lai & Ho, 2003). Therefore, the polarizations can still be modulated in the
precession, even when the photons come from different patches (or even the
whole stellar surface) because the inclination of the magnetic field at
$r_{\rm pl}$ changes. In contrast, the modulations of the flux and the
spectrum may be reduced or even eliminated if the emission comes from a large
extended region.
From the perspective of observations, IXPE has conducted the first observation
of the polarized X-ray emissions from the magnetar 4U 0142+61 (Taverna et al.,
2022). In near future, the eXTP mission will give more accurate measurements
of X-ray polarizations (Zhang et al., 2019; in ’t Zand et al., 2019) which
will give us more opportunities to find the precession of magnetars via
polarizations.
## 7 Conclusions
We gave a detailed model of precessing magnetars with triaxial deformation.
The dynamical motion of the precession both in free and forced conditions was
studied analytically. For magnetars with $B\sim 5\times 10^{14}-10^{15}\,\rm
G$, the effects of the electromagnetic torques must be considered crucially if
the ellipticity $\epsilon\lesssim 10^{-7}$.
Precession can produce observational consequences in timing and polarization.
We gave the timing residuals from both the geometric term arising from the
precession and the spin-down term arising from the variations of the far-field
torque. The relative strength of the two terms is determined by the relative
strength between the rotation period $P$, the precession timescale $\tau_{\rm
p}$, and the spin-down timescale $\tau_{\rm rad}$. If $\tau_{\rm p}/\tau_{\rm
rad}\gg P/\tau_{\rm p}$, the spin-down term dominates. Otherwise, the
geometric term dominates over the spin-down term.
We also modeled the modulations on polarized X-ray and radio signals in
different NS geometries. Assuming the emission is centered around one of the
magnetic pole, we showed that the prospects of detecting precession with
polarization are promising if large wobble angle is excited. Thanks to the QED
effects in strongly magnetized magnetosphere, the modulations on the
polarization of X-rays may always exist even if the emission comes from a much
extended region or the whole star. Advanced detectors, such as IXPE and eXTP,
will give us more opportunities to find the precession of magnetars via
polarizations.
A firm detection of magnetar precession will answer many questions about the
strong internal magnetic field, the emission geometry, and the equation of
state of NSs. Our timing and polarization models can be used to search and
interpret magnetar precession.
In this work, we assume that the emission comes from a small region centered
around the magnetic pole and the magnetic field itself is dipolar. However,
the emission may come from a much extended region and possibly is distorted by
the scattering processes (Caiazzo et al., 2022) for magnetars. The actual
magnetic structures of magnetars are likely to have a twisted magnetic field
configuration which contains contributions from higher-order multipoles,
affecting the polarization-state evolution for both radio signals from
transient magnetars (Tong et al., 2021) and X-rays (Fernandez & Davis, 2011;
Taverna et al., 2015). We leave the modelling of the polarized X-rays and
radio emission from precessing magnetars with more complex emission mechanisms
and magnetic fields into future studies.
## Acknowledgements
We thank Kuo Liu for carefully reading the manuscript, and Jingyuan Deng,
Zexin Hu, and Rui Xu for discussions. This work was supported by the National
SKA Program of China (2020SKA0120300), the National Natural Science Foundation
of China (11975027, 11991053, 11721303), the Max Planck Partner Group Program
funded by the Max Planck Society, and the High-performance Computing Platform
of Peking University. DIJ acknowledges support from the STFC via grant number
ST/R00045X/1.
## Data Availability
The data underlying this paper will be shared on reasonable request to the
corresponding authors.
## References
* Akgun & Wasserman (2008) Akgun T., Wasserman I., 2008, MNRAS, 383, 1551
* Akgun et al. (2006) Akgun T., Link B., Wasserman I., 2006, MNRAS, 365, 653
* Arzamasskiy et al. (2015) Arzamasskiy L., Philippov A., Tchekhovskoy A., 2015, MNRAS, 453, 3540
* Ashton et al. (2016) Ashton G., Jones D. I., Prix R., 2016, MNRAS, 458, 881
* Ashton et al. (2017) Ashton G., Jones D., Prix R., 2017, MNRAS, 467, 164
* Beloborodov (2002) Beloborodov A. M., 2002, ApJL, 566, L85
* Beskin & Zheltoukhov (2014) Beskin V. S., Zheltoukhov A. A., 2014, Phys. Usp., 57, 799
* Beskin et al. (2013) Beskin V. S., Zheltoukhov A. A., Obukhova A. K., Stroinov E. E., 2013, Bull. Lebedev Phys. Inst., 40, 265
* Braithwaite (2009) Braithwaite J., 2009, MNRAS, 397, 763
* Caiazzo et al. (2022) Caiazzo I., González-Caniulef D., Heyl J., Fernández R., 2022, MNRAS, 514, 5024
* Camilo et al. (2006) Camilo F., Ransom S., Halpern J., Reynolds J., Helfand D., Zimmerman N., Sarkissian J., 2006, Nature, 442, 892
* Camilo et al. (2007) Camilo F., Ransom S. M., Halpern J. P., Reynolds J., 2007, ApJL, 666, L93
* Ciolfi et al. (2010) Ciolfi R., Ferrari V., Gualtieri L., 2010, MNRAS, 406, 2540
* Cutler & Jones (2001) Cutler C., Jones D. I., 2001, Phys. Rev. D, 63, 024002
* Cutler et al. (2003) Cutler C., Ushomirsky G., Link B., 2003, ApJ, 588, 975
* Davis & Goldstein (1970) Davis L., Goldstein M., 1970, ApJ, 159, L81
* Deutsch (1955) Deutsch A. J., 1955, Annales d’Astrophysique, 18, 1
* Eatough et al. (2013) Eatough R. P., et al., 2013, Nature, 501, 391
* Fernandez & Davis (2011) Fernandez R., Davis S. W., 2011, ApJ, 730, 131
* Gao et al. (2020) Gao Y., Shao L., Xu R., Sun L., Liu C., Xu R.-X., 2020, MNRAS, 498, 1826
* Gittins et al. (2020) Gittins F., Andersson N., Jones D. I., 2020, MNRAS, 500, 5570
* Glampedakis & Jones (2010) Glampedakis K., Jones D. I., 2010, MNRAS, 405, L6
* Glampedakis & Lasky (2016) Glampedakis K., Lasky P. D., 2016, MNRAS, 463, 2542
* Glampedakis et al. (2008) Glampedakis K., Andersson N., Jones D. I., 2008, Phys. Rev. Lett., 100, 081101
* Gnedin & Sunyaev (1974) Gnedin I. N., Sunyaev R. A., 1974, A&A, 36, 379
* Goldreich (1970) Goldreich P., 1970, ApJL, 160, L11
* Good & Ng (1985) Good M. L., Ng K. K., 1985, ApJ, 299, 706
* Haskell et al. (2006) Haskell B., Jones D. I., Andersson N., 2006, MNRAS, 373, 1423
* Haskell et al. (2008) Haskell B., Samuelsson L., Glampedakis K., Andersson N., 2008, MNRAS, 385, 531
* Heisenberg & Euler (1936) Heisenberg W., Euler H., 1936, Z. Phys., 98, 714
* Heyl & Hernquist (2002) Heyl J. S., Hernquist L., 2002, ApJ, 567, 510
* Heyl et al. (2003) Heyl J. S., Shaviv N. J., Lloyd D., 2003, MNRAS, 342, 134
* Ho & Lai (2001) Ho W. C. G., Lai D., 2001, MNRAS, 327, 1081
* Ho & Lai (2003) Ho W. C. G., Lai D., 2003, MNRAS, 338, 233
* Horowitz & Kadau (2009) Horowitz C. J., Kadau K., 2009, Phys. Rev. Lett., 102, 191102
* Johnson-McDaniel & Owen (2013) Johnson-McDaniel N. K., Owen B. J., 2013, Phys. Rev. D, 88, 044004
* Jones (2012) Jones D. I., 2012, MNRAS, 420, 2325
* Jones & Andersson (2001) Jones D. I., Andersson N., 2001, MNRAS, 324, 811
* Kaspi & Beloborodov (2017) Kaspi V. M., Beloborodov A., 2017, ARA&A, 55, 261
* Kaspi et al. (1999) Kaspi V. M., Chakrabarty D., Steinberger J., 1999, ApJL, 525, L33
* Kaspi et al. (2001) Kaspi V. M., Gavriil F. P., Chakrabarty D., Lackey J. R., Muno M. P., 2001, ApJ, 558, 253
* Kramer et al. (2006) Kramer M., Lyne A. G., O’Brien J. T., Jordan C. A., Lorimer D. R., 2006, Science, 312, 549
* Kramer et al. (2007) Kramer M., Stappers B. W., Jessner A., Lyne A. G., Jordan C. A., 2007, MNRAS, 377, 107
* Lai & Ho (2003) Lai D., Ho W. C. G., 2003, Phys. Rev. Lett., 91, 071101
* Landau & Lifshitz (1960) Landau L. D., Lifshitz E. M., 1960, Mechanics. Oxford
* Lander (2013) Lander S. K., 2013, Phys. Rev. Lett., 110, 071101
* Lander & Jones (2009) Lander S. K., Jones D. I., 2009, MNRAS, 395, 2162
* Lander & Jones (2012) Lander S. K., Jones D. I., 2012, MNRAS, 424, 482
* Lander & Jones (2017) Lander S. K., Jones D. I., 2017, MNRAS, 467, 4343
* Lander & Jones (2018) Lander S. K., Jones D. I., 2018, MNRAS, 481, 4169
* Lander & Jones (2020) Lander S. K., Jones D. I., 2020, MNRAS, 494, 4838
* Lasky & Melatos (2013) Lasky P. D., Melatos A., 2013, Phys. Rev. D, 88, 103005
* Levin et al. (2010) Levin L., et al., 2010, ApJL, 721, L33
* Levin et al. (2012) Levin L., et al., 2012, MNRAS, 422, 2489
* Levin et al. (2020) Levin Y., Beloborodov A. M., Bransgrove A., 2020, ApJL, 895, L30
* Li et al. (2012) Li J., Spitkovsky A., Tchekhovskoy A., 2012, ApJ, 746, 60
* Link (2007) Link B., 2007, Ap&SS, 308, 435
* Link & Epstein (2001) Link B., Epstein R. I., 2001, ApJ, 556, 392
* Lorimer & Kramer (2005) Lorimer D. R., Kramer M., 2005, Handbook of Pulsar Astronomy. Cambridge University Press, Cambridge, England
* Lower et al. (2020) Lower M. E., Shannon R. M., Johnston S., Bailes M., 2020, ApJL, 896, L37
* Lower et al. (2021) Lower M. E., Johnston S., Shannon R. M., Bailes M., Camilo F., 2021, MNRAS, 502, 127
* Lyne et al. (2010) Lyne A., Hobbs G., Kramer M., Stairs I., Stappers B., 2010, Science, 329, 408
* Makishima (2016) Makishima K., 2016, Proceedings of the Japan Academy, Series B, 92, 135
* Makishima et al. (2014) Makishima K., Enoto T., Hiraga J. S., Nakano T., Nakazawa K., Sakurai S., Sasano M., Murakami H., 2014, Phys. Rev. Lett., 112, 171102
* Makishima et al. (2021a) Makishima K., Enoto T., Yoneda H., Odaka H., 2021a, MNRAS, 502, 2266
* Makishima et al. (2021b) Makishima K., Tamba T., Aizawa Y., Odaka H., Yoneda H., Enoto T., Suzuki H., 2021b, ApJ, 923, 63
* Mastrano et al. (2013) Mastrano A., Lasky P. D., Melatos A., 2013, MNRAS, 434, 1658
* Mastrano et al. (2015) Mastrano A., Suvorov A. G., Melatos A., 2015, MNRAS, 447, 3475
* Melatos (1997) Melatos A., 1997, MNRAS, 288, 1049
* Melatos (2000) Melatos A., 2000, MNRAS, 313, 217
* Mestel & Takhar (1972) Mestel L., Takhar H. S., 1972, MNRAS, 156, 419
* Meszaros (1992) Meszaros P., 1992, High-energy radiation from magnetized neutron stars. University of Chicago Press
* Meszaros et al. (1988) Meszaros P., Novick R., Szentgyorgyi A., Chanan G. A., Weisskopf M. C., 1988, ApJ, 324, 1056
* Morales & Horowitz (2022) Morales J. A., Horowitz C. J., 2022, MNRAS, 517, 5610
* Olausen & Kaspi (2014) Olausen S. A., Kaspi V. M., 2014, ApJS, 212, 6
* Page (1995) Page D., 1995, ApJ, 442, 273
* Pavlov & Zavlin (2000) Pavlov G. G., Zavlin V. E., 2000, ApJ, 529, 1011
* Pechenick et al. (1983) Pechenick K. R., Ftaclas C., Cohen J. M., 1983, ApJ, 274, 846
* Philippov et al. (2014) Philippov A., Tchekhovskoy A., Li J. G., 2014, MNRAS, 441, 1879
* Pines (1974) Pines D., 1974, Nature, 248, 483
* Radhakrishnan & Cooke (1969) Radhakrishnan V., Cooke D. J., 1969, Astrophys. Lett., 3, 225
* Sedrakian (2016) Sedrakian A., 2016, A&A, 587, L2
* Sedrakian et al. (1998) Sedrakian A., Wasserman I., Cordes J. M., 1998, ApJ, 524, 341
* Shaham (1977) Shaham J., 1977, ApJ, 214, 251
* Shaw et al. (2022) Shaw B., et al., 2022, MNRAS, 513, 5861
* Stairs et al. (2000) Stairs I. H., Lyne A. G., Shemar S. L., 2000, Nature, 406, 484
* Stairs et al. (2019) Stairs I. H., et al., 2019, MNRAS, 485, 3230
* Taverna et al. (2015) Taverna R., Turolla R., Caniulef D. G., Zane S., Muleri F., Soffitta P., 2015, MNRAS, 454, 3254
* Taverna et al. (2022) Taverna R., et al., 2022, Science, 378, 646
* Thompson et al. (2002) Thompson C., Lyutikov M., Kulkarni S. R., 2002, ApJ, 574, 332
* Tong et al. (2021) Tong H., Wang P. F., Wang H. G., Yan Z., 2021, MNRAS, 502, 1549
* Tsai & Erber (1975) Tsai W. Y., Erber T., 1975, Phys. Rev. D, 12, 1132
* Turolla et al. (2015) Turolla R., Zane S., Watts A., 2015, Rept. Prog. Phys., 78, 116901
* Ushomirsky et al. (2000) Ushomirsky G., Cutler C., Bildsten L., 2000, MNRAS, 319, 902
* Wasserman (2003) Wasserman I., 2003, MNRAS, 341, 1020
* Wasserman et al. (2022) Wasserman I., Cordes J. M., Chatterjee S., Batra G., 2022, ApJ, 928, 53
* Zanazzi & Lai (2015) Zanazzi J., Lai D., 2015, MNRAS, 451, 695
* Zanazzi & Lai (2020) Zanazzi J. J., Lai D., 2020, ApJ, 892, L15
* Zhang et al. (2019) Zhang S.-N., et al., 2019, Sci. China Phys. Mech. Astron., 62, 29502
* in ’t Zand et al. (2019) in ’t Zand J. J. M., et al., 2019, Sci. China Phys. Mech. Astron., 62, 029506
* van Adelsberg & Lai (2006) van Adelsberg M., Lai D., 2006, MNRAS, 373, 1495
* van Adelsberg & Perna (2009) van Adelsberg M., Perna R., 2009, MNRAS, 399, 1523
## Appendix A Jacobi elliptic functions
The Jacobi elliptic functions are standard forms of elliptic functions. The
three basic functions are denoted as $\operatorname{cn}(\tau,m)$,
$\operatorname{sn}(\tau,m)$, and $\operatorname{dn}(\tau,m)$, where $0\leq
m\leq 1$. They naturally arise from the following integral
$\tau=\int_{0}^{s}\frac{{\rm d}t}{\sqrt{1-m\sin^{2}t}}\,,$ (81)
where $s={\rm am}(\tau,m)$ is called the Jacobi amplitude. Then, it follows
$\displaystyle\cos s=$ $\displaystyle\operatorname{cn}(\tau,m)\,,$
$\displaystyle\sin s=$ $\displaystyle\operatorname{sn}(\tau,m)\,,$
$\displaystyle\sqrt{1-m\sin^{2}s}=$
$\displaystyle\operatorname{dn}(\tau,m)\,.$ (82)
The Jacobi elliptic functions are periodic with period $T=4K(m)$, where $K(m)$
is the complete elliptic integral of the first kind.
The expansions of the Jacobi elliptic function in series of $m$ are
$\displaystyle\operatorname{cn}(\tau,m)=\cos\tau-\frac{1}{8}m\sin\tau(-2\tau+\sin
2\tau)+\mathcal{O}(m^{2})\,,$
$\displaystyle\operatorname{sn}(\tau,m)=\sin\tau+\frac{1}{8}m\cos\tau(-2\tau+\sin
2\tau)+\mathcal{O}(m^{2})\,,$
$\displaystyle\operatorname{dn}(\tau,m)=1-\frac{1}{2}m\sin^{2}\tau+\mathcal{O}(m^{2})\,.$
(83)
When $m=0$, $K(0)=\pi/2$, the variable $\tau$ equals to the Jacobi amplitude
$s$, and the three elliptic functions turn into
$\displaystyle\operatorname{cn}(\tau,0)$ $\displaystyle=\cos\tau\,,$
$\displaystyle\operatorname{sn}(\tau,0)$ $\displaystyle=\sin\tau\,,$
$\displaystyle\operatorname{dn}(\tau,0)$ $\displaystyle=1\,.$ (84)
The trigonometric functions are $2\pi$ periodic. In Fig. 18, we show the
relation between the parameter $m$ and the period of the Jacobi elliptic
functions over the period of the trigonometric functions.
Figure 18: The relation between $m$ and $2K(m)/\pi$.
The quantity $\cos\alpha$ shown in Eq. (40) is a function of the Jacobi
elliptic functions. The integration of $\cos^{2}\alpha$ is
$\displaystyle\int_{0}^{t}\cos^{2}\alpha\,{\rm d}t=$
$\displaystyle\frac{\cos^{2}\theta_{0}}{\omega_{\rm p}\delta}\left[\hat{\mu}_{1}^{2}-(1+\delta)\hat{\mu}_{2}^{2}+\hat{\mu}_{3}^{2}\delta\right]E\left({\rm am}\,\tau\right)$
$\displaystyle+\frac{1}{\omega_{\rm p}}\left[-\sin 2\theta_{0}(1+\delta)^{\frac{1}{2}}\hat{\mu}_{2}\hat{\mu}_{3}\operatorname{cn}\tau\right]$
$\displaystyle+\frac{1}{\omega_{\rm p}}\sin 2\theta_{0}\hat{\mu}_{1}\hat{\mu}_{3}\operatorname{sn}\tau$
$\displaystyle+\frac{\cos^{2}\theta_{0}}{\delta\omega_{\rm p}}\left[(-1+\delta\tan^{2}\theta_{0})\hat{\mu}_{1}^{2}+(1+\delta)\hat{\mu}_{2}^{2}\right]\tau$
$\displaystyle-\frac{2\cos^{2}\theta_{0}\hat{\mu}_{1}\hat{\mu}_{2}(1+\delta)^{\frac{1}{2}}}{\omega_{\rm p}\delta}\operatorname{dn}\tau+A_{\rm c}\,,$ (85)
where $E({\rm am}\,\tau)$ is the Jacobi elliptic integral of the second kind,
and ${\rm am}\,\tau=\arcsin(\operatorname{sn}\tau)$ is the Jacobi amplitude.
The term $A_{\rm c}$ is an integration constant, fixed by requiring the
integral to vanish at the initial time $t=0$.
|
# Better Transcription of UK Supreme Court Hearings
Hadeel Saadany
Centre for Translation Studies
University of Surrey
United Kingdom
<EMAIL_ADDRESS>
Constantin Orăsan
Centre for Translation Studies
University of Surrey
United Kingdom
<EMAIL_ADDRESS>
Catherine Breslin
Kingfisher Labs Ltd
United Kingdom
<EMAIL_ADDRESS>
###### Abstract
Transcription of legal proceedings is very important to enable access to
justice. However, speech transcription is an expensive and slow process. In
this paper we describe part of a combined research and industrial project for
building an automated transcription tool designed specifically for the Justice
sector in the UK. We explain the challenges involved in transcribing court
room hearings and the Natural Language Processing (NLP) techniques we employ
to tackle these challenges. We show that fine-tuning a generic off-the-shelf
pre-trained Automatic Speech Recognition (ASR) system with an in-domain
language model, as well as infusing common phrases extracted with a
collocation detection model, not only improves the Word Error Rate (WER) of
the transcribed hearings but also avoids critical errors specific to the legal
jargon and terminology commonly used in British courts.
## 1 Introduction
There has been recent interest in employing NLP techniques to aid in the
textual processing of the legal domain (Elwany et al., 2019; Nay, 2021;
Mumcuoğlu et al., 2021; Frankenreiter and Nyarko, 2022). In contrast,
understanding spoken court hearings has not received the same attention as
understanding legal text documents. In the UK legal system, court hearing
sessions have a unique tradition of verbal argument. Moreover, these hearings
crucially aid in new case preparation, provide guidance for court appeals,
help in legal training and even guide future policy. However, the audio
material for a case typically spans several hours, which makes it both time-
and effort-consuming for legal professionals to extract important information
relevant to their needs. Currently, the existing need for legal transcriptions
(covering 449K cases p.a. in the UK across all court tribunals; Sturge, 2021)
is largely met by human transcribers.
Model | Transcript
---|---
Reference | So my lady um it is difficult to…
AWS ASR | So melody um it is difficult to…
Reference | it makes further financial order
AWS ASR | it makes further five natural
Table 1: Examples of Errors Produced by AWS ASR on Legal Hearings. Errors are
typed in bold.
Although there are several current speech-to-text (STT) technology providers
whose systems could be used to transcribe this data automatically, most of
these systems are trained on general-domain data, which may result in
domain-specific transcription errors when applied to a specialised domain. One
way to address this problem is for end-users to train their own ASR engines
using their in-domain data. However, in most cases the amount of data
available is too small to enable them to train a system that can compete with
well-known cloud-based ASR systems trained on much larger datasets. In
commercial scenarios, using generic cloud-based ASR systems to transcribe a
specialised domain may result in sub-optimal transcription quality for clients
who require this service.
This holds particularly true for British court room audio proceedings. When
applying a generic cloud-based ASR system to British court rooms, the Word
Error Rate (WER) remains relatively high due to long hearings, multiple
speakers, complex speech patterns, as well as unique pronunciation and
vocabulary. Examples in Table 1 show common problems with transcribing speech
from the legal domain using an off-the-shelf ASR system such as AWS
Transcribe111https://aws.amazon.com/transcribe/. The references are taken from
gold-standard edited transcripts of the UK Supreme Court
Hearings222https://www.supremecourt.uk/decided-cases/index.html created by the
legal editors in our project. The first error is due to a special
pronunciation of the phrase ‘my lady’ in British court rooms, as it is
pronounced like ‘mee-lady’ when barristers address a female judge. In the
second example, the error is related to legal terminology critical to the
specific case being transcribed. Errors like the second are numerous in our
dataset and also affect names and numbers. Such errors can lead to serious
information loss and cause confusion.
Figure 1: Pipeline for Improving ASR Output for Legal Specific Errors
In this paper, we describe a joint research and commercial effort to perform
domain adaptation of a generic ASR system to mitigate the errors in the
automated UK court transcription services. We propose to minimise legal-
specific errors by fine-tuning off-the-shelf ASR systems with a custom
language model (CLM) trained on legal documents as well as 139 hours of gold-
standard transcriptions of UK Supreme Court hearings. We also employ NLP
techniques to automatically build a custom vocabulary of common multi-word
expressions and word n-gram collocations that are critical in court hearings.
We infuse our custom vocabulary to the CLM at transcription time. In this
research, we evaluate the benefits of our proposed domain adaptation methods
by comparing the WER of the CLM output with two off-the-shelf ASR systems:
Amazon Web Services (AWS) Transcribe (commercial) and the OpenAI Whisper model
(open-source) (Radford et al., 2022). We also compare the general improvement
in the ASR system’s ability to correctly transcribe legal entities with and
without adopting our proposed methods.
## 2 Related Work
Automatic speech recognition (ASR) models convert audio input to text, and
they perform best when used to transcribe data similar to that on which they
were trained. However, performance degrades when there is a mismatch between
the training data and the data being transcribed. Additionally, some types of
audio material are intrinsically harder for speech recognition systems to
transcribe. In practice, this means that speech recognition performance
degrades when, for example, there is background noise (Watanabe et al., 2020),
non-native accents (Feng et al., 2021; Zhang, 2022), young or elderly speakers
(Feng et al., 2021), or a shift in domain (Mai and Carson-Berndsen, 2022).
Performance degradation is typically mitigated by adapting or fine-tuning ASR
models towards the domain of the targeted data by using a domain-specific
dataset (Huo et al., 2021; Sato et al., 2022; Dingliwa et al., 2022). Some
methods for domain adaptation adopt NLP techniques such as using machine
translation models to learn a mapping from out-of-domain ASR errors to in-
domain terms (Mani et al., 2020). An alternative approach is to build a large
ASR model with a substantially varied training set, so that the model is more
robust to data shifts. An example of this latter approach is the recently
released OpenAI Whisper model which is trained on 680k hours of diverse domain
data to generalise well on a range of unseen datasets without the need for
explicit adaptation (Radford et al., 2022).
Moreover, ASR models are evaluated using Word Error Rate (WER), which treats
each incorrect word equally. However, ASR models do not perform equally on
different categories of words. Performance is worse for categories like names
of people and organisations as compared to categories like numbers or dates
(Del Rio et al., 2021). ASR research has also targeted specific error types,
such as named entities, using NLP techniques (Wang et al., 2020; Das et al.,
2022).
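The WER metric discussed above is the word-level edit distance normalized by the reference length. A minimal sketch (our own helper, not the evaluation code used in the paper):

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance (substitutions +
    insertions + deletions) divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[-1][-1] / len(ref)

# The first example from Table 1: 'my lady' -> 'melody' costs one
# substitution plus one deletion against a 7-word reference.
print(wer("so my lady um it is difficult",
          "so melody um it is difficult"))  # → 0.2857142857142857 (2/7)
```

Note that, as the text observes, this metric weights every word equally, so a misrecognized provision number counts no more than a dropped filler word.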
In this paper, we propose simple techniques to mitigate the effect of the
domain mismatch between a generic ASR model and the specialised domain of
British court room hearings. Our proposed method improves both the system’s
WER and its ability to capture case-specific terms and entities. In the next
section, we present our experimental setup and the evaluation results.
## 3 Experiment Setup
Figure 1 illustrates our proposed pipeline to improve ASR system performance
through legal domain-adaptation techniques. First, we build a custom language model
(CLM) by fine-tuning a base ASR system, such as AWS Transcribe base system,
using training data from the domain and a corpus of gold-standard legal
transcriptions. Then, we use NLP techniques to extract domain-specific phrases
and legal entities from the in-domain data to create a vocabulary list. We use
both the CLM and the vocabulary list for transcribing legal proceedings. The
following sections explain details of our experiment where we implemented this
pipeline on the AWS Transcribe base model. We compare the performance of our
CLM model with different settings to AWS Transcribe base ASR system and OpenAI
Whisper open-source ASR system when transcribing $\approx$ 12 hours of UK
Supreme Court Hearings.
### 3.1 Dataset
For training our CLM, we use two datasets from the legal domain. The first is
Supreme Court written judgements of 43 cases consisting of 3.26M tokens
scraped from the official site of the UK Supreme
Court333https://www.supremecourt.uk/decided-cases/. The second dataset
consists of $\approx$ 81 hours of gold-standard transcripts of 10 Supreme
Court hearings. The gold-standard transcripts are created by post-editing the
AWS Transcribe output of the court hearings by a team of legal professionals
using a specially designed interface. We use both datasets to train CLM on the
AWS platform.
For the vocabulary list, we use a dataset of $\approx$ 139 hours of gold-
standard transcriptions of Supreme Court hearings along with the supreme court
judgements used for training the CLM. To extract the vocabulary from this
dataset, we implement two methods. First, we use this dataset to train a
phrase detection model that identifies bigram collocations based on Pointwise
Mutual Information (PMI) scoring of the words in context (Mikolov et al.,
2013)444We train the model using the Gensim Python library (Řehůřek and Sojka,
2010). Then, the collocation model is used to extract a list of the most
common bigrams in the dataset. This list includes frequent legal terms and
phrases specific to the Supreme Court cases included in the training corpus.
Second, we use
Blackstone555https://research.iclr.co.uk/blackstone, an NLP library for
processing long-form and unstructured legal text, to extract a list of legal
entities from the dataset. The list of legal entities included: Case Name,
Court Name, Provision (i.e. a clause in a legal instrument), Instrument (i.e.
a legal term of art) and Judge. We concatenated this Blackstone entity list
with the spaCy v3.4 library list of non-legal entities such as: Cardinals,
Persons and Dates. The results of applying our methods to the transcription of
2 Supreme Court case hearings, consisting of 12 hours of audio, are explained
in the next section.
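The PMI scoring behind the phrase detection model above can be sketched in a few lines. This is a hedged pure-Python illustration of the scoring idea, not the Gensim implementation the authors used; the toy corpus and function names are ours.

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information:
    PMI(a, b) = log2( p(a, b) / (p(a) * p(b)) ).
    High-scoring pairs co-occur far more often than chance predicts,
    which is how collocations such as 'supreme court' surface."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n_uni, n_bi = len(tokens), len(tokens) - 1
    scores = {}
    for (a, b), c in bigrams.items():
        if c < min_count:          # frequency floor, as in Gensim's Phrases
            continue
        p_ab = c / n_bi
        p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
        scores[(a, b)] = math.log2(p_ab / (p_a * p_b))
    return sorted(scores.items(), key=lambda kv: -kv[1])

corpus = ("the supreme court heard the appeal . "
          "the supreme court dismissed the appeal . "
          "the court heard the case .").split()
for pair, score in pmi_bigrams(corpus)[:3]:
    print(pair, round(score, 2))
```

On a real corpus the list of top-scoring pairs would then be filtered into the custom vocabulary; here the pair ('supreme', 'court') scores well above chance even in a ten-word toy corpus.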
## 4 Results
Model | WER Case1 | WER Case2 | WER Average | Transcription Time
---|---|---|---|---
AWS base | 8.7 | 16.2 | 12.3 | 85 mins
CLM1 | 8.5 | 16.5 | 12.4 | 77 mins
CLM2 | 7.9 | 15.5 | 11.6 | 77 mins
CLM2+Vocab | 7.9 | 15.6 | 11.6 | 132 mins
CLM2+Vocab2 | 8.0 | 15.6 | 11.7 | 112 mins
Whisper | 9.6 | 15.3 | 12.4 | 191 mins
Table 2: Average WER and Transcription Time
Entity | AWS BASE | Whisper | CLM2+Vocab
---|---|---|---
Judge | 0.66 | 0.77 | 0.84
CASE NAME | 0.69 | 0.85 | 0.71
Court | 0.98 | 1 | 0.93
Provision | 0.88 | 0.95 | 0.97
Cardinal | 1 | 0.97 | 1
Table 3: Ratio of Correctly Captured Legal Entities by the ASR Systems
Table 2 shows the WER scores and the average WER for the 2 transcribed cases
with different CLM system settings, as well as for the two baseline systems:
AWS Transcribe (AWS base) and Whisper. The different CLM settings are as
follows: CLM1 is trained on only the texts of the Supreme Court judgements,
CLM2 is trained on both the judgements and the gold-standard transcripts,
CLM2+Vocab uses CLM2 for transcription plus the global vocabulary list
extracted by our phrase detection model, and CLM2+Vocab2 uses CLM2 for
transcription plus the legal entities vocabulary list extracted by the
Blackstone library.
As can be seen in Table 2, the ASR performance is consistently better with the
CLM models than with the generic ASR systems for the two transcribed cases.
The CLM2 model, trained on textual data (i.e. the written judgements) and
gold-standard court hearing transcriptions, outperforms AWS base and Whisper
with a 9% and 8% WER improvement, respectively. Moreover, we observe around 9%
improvement in the average WER score over the two generic models when
concatenating the list of legal phrases extracted by our phrase detection
model with the CLM2 system. While the WER scores indicate improved
transcription quality with our proposed domain adaptation methods, we also
evaluated the systems’ performance on specific error types such as legal
entities and terms.
Table 3 shows the average ratio of correctly transcribed legal entities in the
two studied court room hearings. We compare the performance of CLM2 infused
with the legal terms list (CLM2+Vocab) to the two generic ASR systems. The
ratios in Table 3 indicate that CLM2+Vocab is generally more capable of
transcribing legal-specific terms than the other two models. It is also better
at transcribing critical legal entities such as Provisions.666A Provision, a
statement within an agreement or a law, typically consists of alphanumeric
utterances in British court hearings (e.g. ‘section 25(2)(a)-(h)’ or ‘rule
3.17’). Such legal terminology needs to be accurately transcribed. Our CLM2
model with legal vocabulary demonstrates better reliability in transcribing
these terms.
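The entity-capture ratio reported in Table 3 can be understood as the fraction of reference entity mentions recovered in the ASR output. A hedged sketch of that idea follows; this is our own simplification with verbatim matching, as the paper does not specify the exact matching procedure, and the example strings are hypothetical.

```python
def entity_capture_ratio(transcript, entities):
    """Fraction of reference entity strings found verbatim
    (case-insensitive) in the ASR transcript. A simplification:
    a real evaluation may need fuzzy or span-level matching."""
    text = transcript.lower()
    hits = sum(1 for e in entities if e.lower() in text)
    return hits / len(entities) if entities else 0.0

# Hypothetical example with Provision-style alphanumeric entities.
transcript = "under section 25(2)(a)-(h) and rule 3.17 lord phillips held that"
entities = ["section 25(2)(a)-(h)", "rule 3.17", "Lady Hale"]
print(entity_capture_ratio(transcript, entities))  # → 2 of 3 found
```

Alphanumeric Provisions are exactly the strings for which verbatim matching is strictest, which is why a generic ASR model that garbles "25(2)(a)" scores so poorly on this category.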
A similar trend is evident with the legal entity Judge which refers to the
forms of address used in British court rooms (e.g. ‘Lord Phillips’, ‘Lady
Hale’). This entity is typically repeated in court hearings whenever a
barrister or solicitor addresses the court. We see that both the generic ASR
systems perform badly on this category with ratios of 0.66 and 0.69,
respectively. On the other hand, we observe a significant improvement in
correctly transcribing this entity by CLM2+Vocab, with a ratio of 0.84
correct transcriptions.
In addition to evaluating the output of the ASR engines, we also recorded the
time required to produce the transcription. The models based on AWS were run
in the cloud using the Amazon infrastructure. Whisper was run on a Linux
desktop with an NVIDIA GeForce RTX 2070 GPU with 8 GB of VRAM. For all the
experiments, the medium English-only model was used. As expected, the fastest
running time is obtained using the AWS base model. Running the best performing
model increases the time by 155%, whilst Whisper more than doubles it.
## 5 Conclusion
In this paper, we present a study showing the effect of domain adaptation
methods on improving off-the-shelf ASR system performance when transcribing a
specialised domain such as British court hearings. We optimised the
performance of the ASR system by training an ASR custom language model on
gold-standard legal transcripts and textual data from the legal domain. We
also trained a phrase detection model to incorporate an extracted list of
data-specific bigram collocations at transcription time. We evaluated the ASR
quality improvements both in terms of average WER and the ratio of correctly
transcribed legal-specific terms. We observe significant gains in ASR
transcription quality from our domain adaptation techniques. For commercial
use of ASR technologies, improving the error rate in general, and the
transcription quality of critical legal terms in particular, would minimise
manual post-editing effort and hence save both time and money. We plan to
evaluate the impact of the different configurations proposed in this paper on
the editors’ post-editing effort.
In the future, we will record data covering a variety of accents to address
another axis of degradation in British court audio beyond the Supreme Court
hearings, whose speakers form a mostly homogeneous group. We will also explore
the use of NLP topic modelling techniques to connect legal entities that were
crucial to a court’s case decision.
## References
* Das et al. (2022) Nilaksh Das, Duen Horng Chau, Monica Sunkara, Sravan Bodapati, Dhanush Bekal, and Katrin Kirchhoff. 2022. Listen, Know and Spell: Knowledge-Infused Subword Modeling for Improving ASR Performance of OOV Named Entities. In _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 7887–7891. IEEE.
* Del Rio et al. (2021) Miguel Del Rio, Natalie Delworth, Ryan Westerman, Michelle Huang, Nishchal Bhandari, Joseph Palakapilly, Quinten McNamara, Joshua Dong, Piotr Zelasko, and Miguel Jetté. 2021. Earnings-21: a practical benchmark for asr in the wild. _arXiv preprint arXiv:2104.11348_.
* Dingliwa et al. (2022) Saket Dingliwa, Ashish Shenoy, Sravan Bodapati, Ankur Gandhe, Ravi Teja Gadde, and Katrin Kirchhoff. 2022. Domain prompts: Towards memory and compute efficient domain adaptation of ASR systems. In _Interspeech 2022_.
* Elwany et al. (2019) Emad Elwany, Dave Moore, and Gaurav Oberoi. 2019. Bert goes to law school: Quantifying the competitive advantage of access to large legal corpora in contract understanding. _arXiv preprint arXiv:1911.00473_.
* Feng et al. (2021) Siyuan Feng, Olya Kudina, Bence Mark Halpern, and Odette Scharenborg. 2021. Quantifying bias in automatic speech recognition. _arXiv preprint arXiv:2103.15122_.
* Frankenreiter and Nyarko (2022) Jens Frankenreiter and Julian Nyarko. 2022. Natural language processing in legal tech. _Legal Tech and the Future of Civil Justice (David Engstrom ed.)_.
* Huo et al. (2021) Zhouyuan Huo, Dongseong Hwang, Khe Chai Sim, Shefali Garg, Ananya Misra, Nikhil Siddhartha, Trevor Strohman, and Françoise Beaufays. 2021. Incremental layer-wise self-supervised learning for efficient speech domain adaptation on device. _arXiv preprint arXiv:2110.00155_.
* Mai and Carson-Berndsen (2022) Long Mai and Julie Carson-Berndsen. 2022. Unsupervised domain adaptation for speech recognition with unsupervised error correction. _Proc. Interspeech 2022_ , pages 5120–5124.
* Mani et al. (2020) Anirudh Mani, Shruti Palaskar, Nimshi Venkat Meripo, Sandeep Konam, and Florian Metze. 2020. Asr error correction and domain adaptation using machine translation. In _ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pages 6344–6348. IEEE.
* Mikolov et al. (2013) Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. _Advances in neural information processing systems_ , 26.
* Mumcuoğlu et al. (2021) Emre Mumcuoğlu, Ceyhun E Öztürk, Haldun M Ozaktas, and Aykut Koç. 2021. Natural language processing in law: Prediction of outcomes in the higher courts of turkey. _Information Processing & Management_, 58(5):102684.
* Nay (2021) John J. Nay. 2021. _Natural Language Processing for Legal Texts, DOI=10.1017/9781316529683.011_ , page 99–113. Cambridge University Press.
* Radford et al. (2022) Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust Speech Recognition via Large-Scale Weak Supervision. _OpenAI_.
* Řehůřek and Sojka (2010) Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In _Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks_ , pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
* Sato et al. (2022) Hiroaki Sato, Tomoyasu Komori, Takeshi Mishima, Yoshihiko Kawai, Takahiro Mochizuki, Shoei Sato, and Tetsuji Ogawa. 2022. Text-Only Domain Adaptation Based on Intermediate CTC. _Proc. Interspeech 2022_ , pages 2208–2212.
* Sturge (2021) Georgina Sturge. 2021. Court statistics for England and Wales. Technical report, House of Commons Library.
* Wang et al. (2020) Haoyu Wang, Shuyan Dong, Yue Liu, James Logan, Ashish Kumar Agrawal, and Yang Liu. 2020. ASR Error Correction with Augmented Transformer for Entity Retrieval. In _Interspeech_ , pages 1550–1554.
* Watanabe et al. (2020) Shinji Watanabe, Michael Mandel, Jon Barker, Emmanuel Vincent, Ashish Arora, Xuankai Chang, Sanjeev Khudanpur, Vimal Manohar, Daniel Povey, Desh Raj, et al. 2020. CHiME-6 Challenge: Tackling multispeaker speech recognition for unsegmented recordings. In _CHiME 2020-6th International Workshop on Speech Processing in Everyday Environments_.
* Zhang (2022) Yuanyuan Zhang. 2022. Mitigating bias against non-native accents. _Delft University of Technology_.
|
11institutetext: Center for Astrophysics and Cosmology, University of Nova
Gorica, Vipavska 11c, 5270 Ajdovščina, Slovenia.
11email<EMAIL_ADDRESS>22institutetext: DTU Space, National Space
Institute, Technical University of Denmark, Elektrovej 327, 2800 Kgs. Lyngby,
Denmark.
22email<EMAIL_ADDRESS>33institutetext: Department of Astronomy,
University of Belgrade - Faculty of Mathematics, Studentski trg 16, 11000
Belgrade, Serbia.
33email<EMAIL_ADDRESS>44institutetext: Hamburger Sternwarte,
Universitat Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany.
55institutetext: Tuorla Observatory, Department of Physics and Astronomy,
University of Turku, FI-20014 Turku, Finland 66institutetext: IAASARS,
National Observatory of Athens, 15236 Penteli, Greece 77institutetext:
Department of Astrophysics, Astronomy & Mechanics, Faculty of Physics,
National and Kapodistrian University of Athens, 15784 Athens, Greece.
88institutetext: Faculty of Natural Sciences and Mathematics, University of
Banjaluka, Mladena Stojanovića 2, 78000 Banjaluka, Bosnia and Herzegovina.
99institutetext: Oskar Klein Centre, Department of Physics, Stockholm
University, SE-10691 Stockholm, Sweden. 1010institutetext: Department of
Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064,
USA. 1111institutetext: European Southern Observatory, Alonso de Córdova 3107,
Casilla 19, Santiago, Chile. 1212institutetext: Institute for Astronomy,
University of Hawaii, 2680 Woodlawn Drive, Honolulu HI 96822, USA.
1313institutetext: Oskar Klein Centre, Department of Astronomy, Stockholm
University, AlbaNova, SE-10691 Stockholm, Sweden. 1414institutetext:
Astronomical Observatory, Volgina 7, 11060 Belgrade, Serbia.
1515institutetext: School of Physics, Trinity College Dublin, The University
of Dublin, Dublin 2, Ireland. 1616institutetext: Astronomical Observatory,
University of Warsaw, Al. Ujazdowskie 4, 00-478 Warszawa, Poland.
1717institutetext: Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer
de Can Magrans, s/n, E-08193 Barcelona, Spain. 1818institutetext: Institut
d’Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain.
1919institutetext: Birmingham Institute for Gravitational Wave Astronomy and
School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT,
UK. 2020institutetext: INAF-Osservatorio Astronomico d’Abruzzo, via M. Maggini
snc, I-64100 Teramo, Italy. 2121institutetext: Cosmic DAWN centre, Niels Bohr
Institute, University of Copenhagen, Rådmandsgade 62-64, 2200, Copenhagen,
Denmark. 2222institutetext: Astrophysics Research Centre, School of
Mathematics and Physics, Queen’s University Belfast, Belfast, BT7 1NN,
Northern Ireland, UK. 2323institutetext: School of Physics, O’Brien Centre for
Science North, University College Dublin, Belfield, Dublin 4, Ireland
# The rise and fall of the iron-strong nuclear transient PS16dtm
T. Petrushevska, G. Leloudas, D. Ilić, M. Bronikowski, P. Charalampopoulos,
G. K. Jaisawal, E. Paraskeva, M. Pursiainen, N. Rakić, S. Schulze, K. Taggart,
C. K. Wedderkopp, J. P. Anderson, T. de Boer, K. Chambers, T. W. Chen,
G. Damljanović, M. Fraser, H. Gao, A. Gomboc, M. Gromadzki, N. Ihanec,
K. Maguire, B. Marčun, T. E. Müller-Bravo, M. Nicholl, F. Onori,
T. M. Reynolds, S. J. Smartt, J. Sollerman, K. W. Smith, T. Wevers,
Ł. Wyrzykowski
(Received XX; accepted XX)
###### Abstract
Context. Thanks to the advent of large-scale optical surveys, a diverse set of
flares from the nuclear regions of galaxies has recently been discovered.
These include the disruption of stars by supermassive black holes at the
centers of galaxies - nuclear transients known as tidal disruption events
(TDEs). Active galactic nuclei (AGN) can show extreme changes in the
brightness and emission line intensities, often referred to as changing-look
AGN (CLAGN). Given the physical and observational similarities, the
interpretation and distinction of nuclear transients as CLAGN or TDEs remains
difficult. One of the obstacles of making progress in the field is the lack of
well-sampled data of long-lived nuclear outbursts in AGN.
Aims. Here, we study PS16dtm, a nuclear transient in a Narrow Line Seyfert 1
(NLSy1) galaxy, which has been proposed to be a TDE candidate. Our aim is to
study the spectroscopic and photometric properties of PS16dtm, in order to
better understand the outbursts originating in NLSy1 galaxies.
Methods. Our extensive multiwavelength follow-up that spans around $2000$ days
includes photometry and spectroscopy in the UV/optical, as well as mid-
infrared (MIR) and X-ray observations. Furthermore, we improved an existing
semiempirical model in order to reproduce the spectra and study the evolution
of the spectral lines.
Results. The UV/optical light curve shows a double peak at $\sim 50$ and $\sim
100$ days after the first detection, and it declines and flattens afterward,
reaching preoutburst levels after 2000 days of monitoring. The MIR light curve
rises almost simultaneously with the optical, but unlike the UV/optical which
is approaching the preoutburst levels in the last epochs of our observations,
the MIR emission is still rising at the time of writing. The optical spectra
show broad Balmer features and the strongest broad Fe II emission ever
detected in a nuclear transient. This broad Fe II emission was not present in
the archival preoutburst spectrum and almost completely disappeared +1868 days
after the outburst. We found that the majority of the flux of the broad Balmer
and Fe II lines is produced by photoionization. We detect only weak X-ray
emission in the 0.5-8 keV band at the location of PS16dtm, at +848, +1130, and
+1429 days past the outburst. This means that the X-ray emission continues to
be lower by at least an order of magnitude, compared to archival, preoutburst
measurements.
Conclusions. We confirm that the observed properties of PS16dtm are difficult
to reconcile with normal AGN variability. The TDE scenario continues to be a
plausible explanation for the observed properties, even though PS16dtm shows
differences compared to TDEs in quiescent galaxies. We suggest that this event
is part of a growing sample of TDEs that show broad Balmer line profiles and
Fe II complexes. We argue that the extreme variability seen in the AGN host
due to PS16dtm may have easily been misclassified as a CLAGN, especially if
the rising part of the light curve had been missed. This implies that some
changing look episodes in AGN may be triggered by TDEs. Imaging and
spectroscopic data of AGN with good sampling are needed to enable testing of
possible physical mechanisms behind the extreme variability in AGN.
## 1 Introduction
In recent years, the detection of a variety of nuclear transients has become
attainable due to the advent of large-scale optical robotic surveys such as
the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry, 2011), the
All-Sky Automated Survey for Supernovae (ASAS-SN; Shappee et al., 2014), the
Pan-STARRS Survey for Transients (PSST; Chambers et al., 2016b), the Gaia
Photometric Science Alerts program (Hodgkin et al., 2021), the intermediate
Palomar Transient Factory (iPTF; Kulkarni, 2013), and, its successor, the
Zwicky Transient Facility (ZTF; Bellm et al., 2019). It has long been known
that galaxies with an active galactic nucleus (AGN), where matter is accreted
onto the central supermassive black hole (SMBH), show variability in their
brightness (e.g., Fitch et al., 1967). Apart from regular, low-level
stochastic variability, some AGN occasionally show exceptionally large changes
in the luminosity, spectral shape, and/or X-ray absorption (e.g., Frederick et
al., 2019; MacLeod et al., 2019; Sánchez-Sáez et al., 2018; Zhang, 2021; Hon
et al., 2022; Green et al., 2022; Ren et al., 2022). The most notable of these
changes is when a Seyfert 1 type AGN transforms into a Seyfert 2 AGN and vice
versa, the so-called changing-look AGN (CLAGN). The physical explanation of
this phenomenon is still debated. It has been argued that the variability
arises from intrinsic changes in the accretion state of the SMBH, for which
several scenarios have been proposed (Noda & Done, 2018; Sniegowska et al., 2020; Stern
et al., 2018; Mehdipour et al., 2022). A number of works have argued that
there might be more than one mechanism that explains CLAGN, given the
diversity in the timescale of changes, and their amplitude (e.g., Wang et al.,
2012; Campana et al., 2015; Komossa et al., 2017; Trakhtenbrot et al., 2019;
Śniegowska et al., 2022).
One important class of optical transients occurring in the nuclear regions of
host galaxies are the flares from stars tidally disrupted by SMBHs (Rees,
1988; Phinney, 1989). They are particularly interesting since there are
several suggestions of using tidal disruption events (TDEs) to study SMBH
properties such as its mass (Mockler et al., 2019) and perhaps even more
elusive, its spin (Leloudas et al., 2016; Gafton & Rosswog, 2019). Optically
discovered TDEs differ from one another in their spectroscopic features and
in the shape and timescale of the light curve, and have been detected in both
quiescent (e.g., Gezari, 2021) and active galaxies (e.g., Wyrzykowski et al.,
2017; Frederick et al., 2021). The discoveries of TDEs in AGN were initiated
later compared to TDEs in quiescent galaxies, partly because, initially,
transients in galaxies with known AGN were excluded from follow-up
investigation in optical surveys, so as to avoid being overwhelmed by spurious
candidates. Furthermore, TDEs arising in a galaxy with an existing AGN are
more difficult to identify simply because the nucleus itself can be a variable
source.
In recent years, transient phenomena in galaxies with AGN have been observed,
with extraordinary changes in photometric and spectroscopic properties on
short timescales (e.g., Frederick et al., 2021). However, the physical
mechanism to explain these phenomena is unclear. One example is the transient
AT 2018dyk, initially classified as a TDE (Arcavi et al., 2018); however, a
subsequent study argued that it is a CLAGN (Frederick et al., 2019). Another
example is the transient CSS100217 which Drake et al. (2011) argued was a
supernova, while Frederick et al. (2021) favored the AGN scenario.
Furthermore, Zhang et al. (2022) and Cannizzaro et al. (2022) suggested that
it could be explained as a TDE, after gathering data for more than 10 years.
The reason for this ambiguity is that, at present, there is no single
observational signature that disentangles, unambiguously, the physical
mechanism that causes the luminous phenomena (see Zabludoff et al., 2021, for
a recent review). Since a TDE happening in AGN can also cause intrinsic change
in the accretion disk, it can also provide one of the channels to explain
CLAGN (see e.g., Kool et al., 2020; Cannizzaro et al., 2020; Zhang, 2021;
Hinkle et al., 2022). An ongoing issue in understanding long-lived nuclear
outbursts in AGN is, on the one hand, the low number of such events and, on
the other, the often sparse sampling of the available data, which does not
allow these outbursts to be studied in detail.
Here, we focus on the nuclear transient, PS16dtm, for which we gathered an
extensive dataset over a time scale of almost six years. PS16dtm was first
classified as supernova Type IIn (Terreran et al., 2016; Dong et al., 2016),
but further in-depth analysis based on follow-up observations concluded that
it is more consistent with a TDE interpretation (Blanchard et al., 2017,
hereafter B17). B17 presented observations up to 200 days after it was
detected. In this paper, we present photometric and spectroscopic data for a
period of $\sim 2000$ days after the first detection of PS16dtm. As we
subsequently show in this paper, PS16dtm is perhaps the nuclear transient with
the most dramatic increase of Fe II emission after the outburst, although
there are other examples, albeit not with such strong Fe II emission, such as
J123359.12+084211.5 (MacLeod et al., 2019), AT 2019dsg (Cannizzaro et al.,
2021), AT 2018fyk (Wevers et al., 2019), CSS100217 (Drake et al., 2011), and
PS1-10adi (Kankare et al., 2017). We also show that PS16dtm exhibits strong
MIR emission similar to other nuclear flares in hosts with AGN. Intriguingly,
the X-ray emission remains dimmed compared to the archival preoutburst state,
even though the UV/optical photometry and spectroscopy are approaching the
preoutburst levels.
The paper is organized as follows. In Sect. 2, we summarize the findings on
PS16dtm that have been previously published. In Sect. 3 we present our
observations, while in Sect. 4 we perform the analysis. The discussion is
found in Sect. 5 and finally, in Sect. 6 we present our conclusions. We use
the AB magnitude system throughout this work. Furthermore, we assume a
cosmology with $H_{0}=67$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}=0.32$ and
$\Omega_{\Lambda}=0.68$ as in B17, which implies a luminosity distance of 381
Mpc to PS16dtm. The Galactic extinction along the line of sight to PS16dtm is
$\rm A_{V}=0.069\pm 0.001$ mag (Schlafly & Finkbeiner, 2011).
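As a quick consistency check, the quoted luminosity distance follows from the adopted cosmology; a minimal numerical sketch (our own, not from the paper; the trapezoidal integration is an arbitrary choice):

```python
import math

# Flat LambdaCDM cosmology adopted in the paper (following B17)
H0 = 67.0            # km s^-1 Mpc^-1
Om, OL = 0.32, 0.68
c = 299792.458       # speed of light, km s^-1
z = 0.0804           # redshift of PS16dtm

def E(zp):
    """Dimensionless Hubble parameter E(z) = H(z)/H0 for a flat universe."""
    return math.sqrt(Om * (1.0 + zp) ** 3 + OL)

# Comoving distance D_C = (c/H0) * integral_0^z dz'/E(z'), trapezoidal rule
n = 10000
dz = z / n
integral = sum(0.5 * dz * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz))
               for i in range(n))
D_C = c / H0 * integral        # Mpc
D_L = (1.0 + z) * D_C          # luminosity distance, Mpc

print(round(D_L))              # -> 381
```

This reproduces the 381 Mpc quoted in the text.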
## 2 Summary of previous work on PS16dtm
In this section, we summarize previously published work on PS16dtm
(registered as SN 2016ezh on the Transient Name Server,
https://www.wis-tns.org/object/2016ezh; we use the discovery name throughout
this paper, as the supernova interpretation has been disfavored). PS16dtm was
discovered by PSST on 12 August 2016 (MJD 57612) (Chambers et al., 2016a),
and it was also observed independently by ATLAS and ASAS-SN. The host galaxy of PS16dtm,
SDSSJ015804.75-005221.8, is a narrow-line Seyfert 1 (NLSy1) galaxy at
$z=0.0804$, hosting an SMBH of $\sim 10^{6}M_{\odot}$, as measured by the velocity
dispersion method (Xiao et al., 2011) and from the AGN luminosity and the
radius of the broad-line region (BLR) (B17). B17 reported the optical and UV
light curves of PS16dtm which brightened approximately two magnitudes above
the archival host brightness in $\sim 50$ days, with two recognizable peaks
separated by $\sim 50$ days. Furthermore, the light curves showed
little color evolution, and stayed approximately at the Eddington luminosity
of the SMBH.
B17 considered scenarios of a variable AGN, a Type IIn supernova, and a TDE as
possible explanations for PS16dtm. They concluded that the small evolution in
brightness and color seen in PS16dtm is not typical for supernovae, where
cooling is expected due to expansion and radiative losses, but is similar
to that of TDEs. Another argument in favor of the TDE scenario put forward by
B17 is the short timescale of the rising part of the PS16dtm light curve
($\sim 50$ days) and the amplitude (2 magnitudes rise over the host level).
The typical variability amplitudes of AGN are low in large AGN samples and
they vary on timescales of years ($\sim 0.1-0.2$ magnitudes per year, see e.g.
Sánchez-Sáez et al., 2018). Another interesting aspect of PS16dtm is that
after the outburst, X-ray emission from the nucleus decreased by at least an
order of magnitude, compared to archival measurements (Pons & Watson, 2014).
B17 argued that neither the supernova scenario nor AGN variability can explain
the X-ray dimming, but that it could be explained by the accreted stellar
debris that obscures the X-ray emitting region of the AGN accretion disk. They
proposed a simple model which assumes a face-on orientation for the
preexisting, X-ray emitting disk, and a nearly edge-on orientation for the
disk newly formed in the disruption, which also blocks the X-ray emission.
They argued that such a geometrical configuration can also explain the
spectral properties of both the debris disk and the host galaxy.
The spectra shown in B17 are dominated by hydrogen Balmer and Fe II emission
lines. B17 compared the PS16dtm spectra to those of Type IIn supernovae, and
found no reasonable matches. B17 interpreted the spectra as being more similar
to those of some NLSy1 galaxies, as optical Fe II (4000-5400 Å) emission are
common features in NLSy1 spectra.
PS16dtm also flared in the mid-infrared (MIR) and it was detected by WISE as
part of the NEOWISE survey (Jiang et al., 2017; hereafter J17). J17 reported
three NEOWISE detections, starting 11 days before the first optical ASAS-SN
detection and extending to 327 days after. The authors conclude that the MIR flare is
consistent with a dust echo, despite the fact that the MIR data shows no delay
with respect to the optical. They estimated that the inner radius of the
preexisting dust torus increased from $\sim 10$ light days to $\sim 70$ light
days, where the emptied region was replaced with a gas torus. They also argued
that the detected Fe II emission lines are produced in the gas coming from the
evaporated dust due to the strong radiation field.
The peculiarity of PS16dtm has prompted other authors to examine what powers
its luminosity. Frederick et al. (2021), who looked at NLSy1 transients from
the literature, placed PS16dtm in the class of TDEs with strong Balmer and Fe II
complexes, even though it does not satisfy all their defined requirements for
TDE classification. Moriya et al. (2017) argued that PS16dtm can be explained
by changes related to the AGN, and not as a result of tidal disruption of a
star by the SMBH. They explain it instead as an interaction between accretion
disk winds and clouds in the surrounding BLR. They also argue that the
observed broad ($\sim 10,000$ $\rm km\ s^{-1}$) Mg II absorption in the UV
(B17) could be related to the fast SMBH disk wind. Nevertheless, their model
predicts that the emission timescale is $\sim 110$ days, while PS16dtm has
stayed bright above the host-galaxy baseline for much longer, as will be shown
in the next sections.
## 3 Observations
In this section, we present new observations of PS16dtm out to $\sim 2000$
days after discovery. We also present ATLAS and PSST data of PS16dtm for the
first time. Notably, ATLAS detected PS16dtm before the PSST and ASAS-SN
surveys, and this enabled us to roughly estimate the time of the outburst, as
will be shown in Sect. 3.2.1. The evolution of the host-subtracted photometry
corrected for Milky Way extinction is shown in Figs. 1 and 2. The gaps in the
data (from February to August) are when PS16dtm was behind the Sun. We note
that at $\sim 2000$ days after the discovery, PS16dtm is fading and the
emission is approaching preoutburst, host levels (see Hinkle et al., 2021, for
measured and synthetically computed PS16dtm host magnitudes). All our
photometry will be available through the WISeREP archive
(https://www.wiserep.org; Yaron & Gal-Yam, 2012).
### 3.1 Swift observations
PS16dtm was monitored by the Neil Gehrels Swift Observatory (Gehrels et al.,
2004) with the UV/Optical Telescope (UVOT; Roming et al., 2005) in six filters:
three optical ($V$ at 5468 $\AA$, $B$ at 4392 $\AA$, $U$ at 3465 $\AA$), and
three near-UV ($UVW1$ at 2600 $\AA$, $UVM2$ at 2246 $\AA$, and $UVW2$ at 1928
$\AA$). There are 83 epochs of UVOT/Swift observations spanning from MJD 57632
to MJD 59493. The Swift/UVOT observations were reduced following the standard
pipeline from HEAsoft (https://heasarc.gsfc.nasa.gov/docs/software/lheasoft/).
The photometry was extracted using the tool _UVOTSOURCE_ and a source
extraction region with a radius of 5”. We corrected for Galactic extinction
and subtracted the contribution from the host galaxy, for which we used the
PS16dtm host galaxy magnitudes from Hinkle et al. (2021). Hinkle et al. (2021)
provided corrections to the NUV photometry of TDEs published after 2015 in the
literature, which were using Swift/UVOT. Their work was motivated by an update
by the Swift team to the UVOT calibration to correct for the loss of
sensitivity over time. Hinkle et al. (2021) fitted archival multiwavelength
photometry from GALEX, 2MASS, SDSS and WISE of the host galaxy and modeled the
spectral energy distribution (SED) of the host galaxy, from which they updated
the magnitudes in the NUV UVOT filters. Therefore, the NUV Swift/UVOT
magnitudes of the PS16dtm host galaxy are different to those used in B17,
leading to an average difference in the host-subtracted photometry of PS16dtm
compared to the ones in B17 by $0.58$ magnitudes for $UVW2$ and by $0.02-0.07$
magnitudes for $U$, $B$, $V$, $UVW1$ and $UVM2$.
Simultaneous with the Swift/UVOT observations, PS16dtm was observed with the
Swift X-ray Telescope (XRT). After building the XRT light curve using the
online tool (https://www.swift.ac.uk/user_objects/), we conclude that there
was no confident detection by Swift/XRT, and we can only place upper limits on
the X-ray emission, similar to those presented in B17
($F_{X}<1\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ and
$L_{X}<1.7\times 10^{41}$ erg s$^{-1}$).
Figure 1: Photometric evolution of PS16dtm. The photometry has been corrected
for the Galactic extinction and the host contribution has been subtracted. The
reference MJD for the outburst is 57577 obtained from a second order
polynomial fit to the ATLAS $o$ photometry in the rising part of the light
curve. The photometry has been arbitrarily shifted in the y axis for easier
viewing, as indicated in the legend.
### 3.2 Optical ground-based photometry
#### 3.2.1 ATLAS
ATLAS is a robotic survey that uses at least two identical 50-centimeter
telescopes, located at Haleakala and Mauna Loa observatories in Hawaii.
PS16dtm was observed by ATLAS in two bands, $c$ (cyan) and $o$ (orange), that
cover the wavelength range of $4200-6500$ and $5600-8200$ $\AA$, respectively.
The ATLAS image processing is done with a fully automated pipeline that
performs flat fielding, astrometric calibration, and photometric calibration.
The photometry of PS16dtm is publicly available online, of which we used the
reduced photometry from the latest ATLAS data release (Tonry et al., 2018;
Smith et al., 2020). Since the reference images contain flux from PS16dtm, we
calculated the mean flux between MJD 57250 and 57500 (prediscovery epochs)
and subtracted it from all subsequent data. As visible from Fig. 2,
the first ATLAS detection of PS16dtm was made at MJD 57591.6, which is $\sim
6$ days before the first detection reported by ASAS-SN and $\sim 20$ days
before Pan-STARRS. We used a second order polynomial to fit the points in the
rising part of the light curve and found the intersection with the host level.
We found that the outburst likely happened around MJD 57577, with an
uncertainty of $\sim 10$ days. Throughout the paper we use this date as a
reference epoch. We note that this is slightly different from the reference
epoch used by B17, which was arbitrarily set to MJD 57600.
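The epoch estimate described above can be sketched as follows (a minimal illustration on synthetic data; the function name and the simulated light curve are ours, not from the ATLAS pipeline):

```python
import numpy as np

def outburst_epoch(mjd, flux, host_flux):
    """Fit a 2nd-order polynomial to the rising light curve and return the
    epoch where it intersects the host (preoutburst) flux level."""
    t = mjd - mjd[0]                          # center epochs for numerical stability
    a, b, c = np.polyfit(t, flux, 2)
    roots = np.roots([a, b, c - host_flux])
    roots = roots[np.abs(roots.imag) < 1e-6].real
    # keep the crossing that precedes the first detection
    return mjd[0] + roots[roots <= 0.0].max()

# synthetic rising light curve that crosses the host level at MJD 57577
host = 100.0
mjd = np.linspace(57591.6, 57612.0, 12)
flux = host + 3.0 * (mjd - 57577.0) + 0.5 * (mjd - 57577.0) ** 2

print(round(outburst_epoch(mjd, flux, host), 1))  # -> 57577.0
```

Centering the epochs before the fit avoids the poor conditioning of a quadratic fit at MJD values of order $10^4$.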
#### 3.2.2 Pan-STARRS
Pan-STARRS1 uses a 1.8 m telescope and it is located at Haleakala, Hawaii.
After Pan-STARRS1 discovered PS16dtm on 12 August 2016, it also provided
photometry from the follow-up observations in two broad filters, the $w_{\rm
PS1}$ and $i_{\rm PS1}$ bands, which have wavelength ranges 4000-8300 $\AA$
and 6800-8300 $\AA$, respectively (Tonry et al., 2012). Images obtained by the
Pan-STARRS1 system are processed automatically with the Image Processing
Pipeline and transient sources are identified through analysis of difference
images, created by subtracting a template from the observed image taken as
part of the search for the counterpart (see details in Magnier et al., 2013;
Huber et al., 2015).
#### 3.2.3 Liverpool telescope
We collected ten epochs of multiband ($griz$) imaging of PS16dtm with the IO:O
instrument at the robotic 2-m Liverpool telescope at the Roque de los
Muchachos Observatory, Spain (Steele et al., 2004). The images were reduced
with the IO:O pipeline (https://telescope.livjm.ac.uk/TelInst/Inst/IOO/) and
were subtracted against PSST (Tonry et al., 2012) reference imaging, leaving
only the transient light. PSF photometry was then performed on the source, if
detected after the subtraction, and was calibrated relative to PSST
photometric standards.
### 3.3 NEOWISE MIR photometry
NEOWISE is a project by the Wide-field Infrared Survey Explorer, WISE (Wright
et al., 2010), which surveys the sky in 3.4 ($W1$) and 4.6 $\mu m$ ($W2$)
(Mainzer et al., 2014). We retrieved the photometry from the IRSA public data
archive (https://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-scan?mission=irsa&submit=Select&projshort=WISE).
We first computed the variance-weighted average in 1-day bins for each filter,
for all epochs after PS16dtm was detected. To remove the host contribution, we
computed the variance-weighted average of all preoutburst data and subtracted
it from the transient light curve. In Fig. 2 we
show the host-subtracted MIR light curve together with the optical data from
ATLAS and PSST. The first detection by NEOWISE is on MJD 57587 (2016 July 18),
$\sim 4$ days before the first ATLAS detection of PS16dtm at MJD 57591.6,
which is, to our knowledge, the earliest optical detection. With the ATLAS
data reported here, the gap between the first optical and MIR detections of
PS16dtm is smaller than previously shown in J17, which presented the
ASAS-SN data only.
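The binning procedure above is a standard inverse-variance weighted mean; a minimal sketch (the function name is ours):

```python
import numpy as np

def weighted_mean(flux, err):
    """Inverse-variance weighted mean of fluxes and its 1-sigma uncertainty."""
    w = 1.0 / np.asarray(err, dtype=float) ** 2
    mean = np.sum(w * np.asarray(flux, dtype=float)) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

# example: two single-exposure fluxes falling in the same 1-day bin
mean, sigma = weighted_mean([10.0, 12.0], [1.0, 2.0])
print(round(mean, 2), round(sigma, 3))  # -> 10.4 0.894
```

Subtracting the weighted mean of the preoutburst data from each binned epoch then yields the host-subtracted light curve.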
Figure 2: Optical (ATLAS and PSST) and mid-infrared NEOWISE light curves of
PS16dtm. The photometry has been extinction corrected for Milky Way dust and
host subtracted. The limits from ASAS-SN in the $V$ band are also shown. The WISE
error-bars are smaller than the symbols. The inset shows a zoom-in at the
epochs around first detection.
### 3.4 NTT/EFOSC2 spectroscopy
We carried out observations of PS16dtm within the Public ESO Spectroscopic
Survey for Transient Objects (PESSTO; Smartt et al., 2015), and its
continuations, ePESSTO and ePESSTO+. PESSTO uses the low-resolution ESO Faint
Object Spectrograph and Camera v.2 (EFOSC2) on the New Technology Telescope
(NTT) in La Silla Observatory, Chile. We have collected NTT/EFOSC2 spectra of
PS16dtm at 16 different epochs, using various settings including the standard
grisms 11, 13, and 16 (Smartt et al., 2015). The spectra were reduced in a
standard manner with the aid of the PESSTO pipeline (Smartt et al., 2015), to
apply bias and flat-field corrections, determine a wavelength solution, and
calibrate the relative flux with a standard star observed in the same setup.
Following common practice, the intraday spectra of grism 11 and grism 16 were
combined, since they are characterized by almost the same spectral resolution.
The spectra were corrected for the Galactic extinction and the absolute flux
calibration was improved by scaling the spectra with the aid of photometry. A
spectroscopic log can be found in Table 1 and the spectral series is shown in
Fig. 3. All reduced spectra will be available through the WISeREP
archive777https://www.wiserep.org (Yaron & Gal-Yam, 2012).
Figure 3: Temporal evolution of the spectra taken with NTT/EFOSC2 (and the
final X-shooter spectrum at +1868 days). The spectra are dominated by the
Balmer series and a plethora of Fe II lines. The spectra are colored as
indicated in the legend and the phases refer to rest-frame days after MJD
57577. Vertical solid lines indicate the position of hydrogen Balmer lines
whereas the shaded pink areas mark the location of the strongest Fe II
templates. Table 1: Spectroscopic observations of PS16dtm
Date | MJD | Phase∗ | Wavelength range (Å) | Exposure time (s) | Grism
---|---|---|---|---|---
2016-12-22 | 57744 | +155 | 3380–10320 | 1500 | gr11 & gr16
2017-01-17 | 57770 | +179 | 3380–10320 | 900 | gr11 & gr16
2017-02-05 | 57789 | +196 | 3380–10320 | 900 | gr11 & gr16
2017-08-20 | 57985 | +378 | 3380–10320 | 1500 | gr11 & gr16
2017-09-11 | 58007 | +398 | 3380–10320 | 1500 | gr11 & gr16
2017-10-19 | 58045 | +433 | 3380–10320 | 1500 | gr11 & gr16
2017-11-09 | 58065 | +452 | 3380–10320 | 1500 | gr11 & gr16
2017-12-13 | 58099 | +483 | 3380–10320 | 1500 | gr11 & gr16
2018-08-17 | 58347 | +713 | 3380–10320 | 1500 | gr11 & gr16
2018-09-16 | 58377 | +740 | 3685–9315 | 1500 | gr13
2019-01-25 | 58508 | +862 | 3380–10320 | 1500 | gr11 & gr16
2019-07-29 | 58693 | +1033 | 3380–7520 | 1800 | gr11
2020-10-23 | 59145 | +1451 | 3380–10320 | 1800 | gr11 & gr16
2020-12-30 | 59213 | +1514 | 3380–10320 | 1800 | gr11 & gr16
2021-09-04 | 59460 | +1743 | 3380–10320 | 2700/1800 | gr11 & 2×gr16
2022-01-16 | 59595 | +1868 | 3000–24800 | | UVB, VIS, NIR
∗ Phases are in the rest frame with respect to the estimated time of outburst, MJD 57577.
### 3.5 VLT/X-shooter spectrum
We also observed PS16dtm with X-shooter (Vernet et al., 2011) on the Very
Large Telescope (VLT), on MJD 59595, that is at +1868 rest-frame days.
X-shooter is a medium resolution spectrograph covering the wavelength range
from 3000 to 24800 $\AA$ in three spectroscopic arms. We used slit widths of
1.0”, 0.9” and 0.9” for the UVB, VIS and NIR arms respectively, resulting in
nominal resolutions of R=5400, 8900 and 5600. The data were reduced by
employing the X-shooter pipeline in the EsoReflex GUI environment (Freudling
et al., 2013), as implemented in Selsing et al. (2019).
### 3.6 X-ray observations
The Chandra X-ray Observatory (CXO) observed PS16dtm on three epochs on 2019
January 10 (MJD 58493), 2019 November 11 (MJD 58798), and 2020 September 29
(MJD 59121) (P.I. Blanchard; IDs 21460, 21461, and 22618), for net exposures
of 10, 10, and 20 ks, respectively. We reduced these observations with the
CIAO v4.14 software (Fruscione et al., 2006) and the latest calibration
files, using the chandra_repro pipeline. Standard filtering and analysis
methods were then applied to the Advanced CCD Imaging Spectrometer (ACIS) data.
We found weak X-ray emission in the 0.5-8 keV band at the location of PS16dtm.
The source is moderately detected in a 1 arcsec region, at significance
levels of 2.6, 1.6, and 2.5$\sigma$ in the first, second, and third Chandra
epochs, with count rates of about $4\times 10^{-4}$, $2.3\times 10^{-4}$, and
$2.7\times 10^{-4}$ counts s$^{-1}$, respectively. No significant emission, however, is
detected in the 0.5-1 keV range.
We subsequently performed the spectral analysis to examine the long-term X-ray
flux evolution of the host galaxy and its connection to the transient. As the
number of detected source photons is limited to $<$5 in each observation, all
three Chandra spectra were stacked together to carry out the analysis using
the _Cash_ statistics in XSPEC (Arnaud, 1996). Using a redshifted power-law
model with a column density fixed at $2.5\times 10^{20}$ cm$^{-2}$ (Pons &
Watson, 2014, B17), the photon index is found to be $0.8_{-0.1}^{+8.5}$, with
a fit null hypothesis probability of 0.9 at four degrees of freedom. The
errors are calculated for a 90% confidence interval using the Markov chain
Monte Carlo simulation method in XSPEC. The 0.3-10 keV unabsorbed source flux
is $(1.1\pm 0.8)\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$, as obtained with the _cflux_
convolution model. Similarly, in place of the power-law component, a
redshifted blackbody model with the same fixed column density was also tested
on the Chandra data. We found a blackbody temperature of $1.5_{-0.5}^{+2.0}$
keV, with a null hypothesis probability of 0.8 at four degrees of freedom.
The 0.3-10 keV unabsorbed model flux is estimated to be
$(8\pm 3)\times 10^{-15}$ erg s$^{-1}$ cm$^{-2}$ in the latter case. Based on this
spectroscopy, we conclude that the source flux remains close to the last
measurement (an upper limit of $1\times 10^{-14}$ erg s$^{-1}$ cm$^{-2}$)
provided by XRT in 2016 and 2017 (B17). In 2019 and 2020, the source thus
remained an order of magnitude fainter than in the preoutburst 2005
XMM-Newton detection (Pons & Watson, 2014).
## 4 Analysis
### 4.1 Light curve and SED analysis
The data presented here make PS16dtm one of the few nuclear outbursts that
have been observed with a photometric and spectroscopic campaign that spans
over six years of the transient activity. After the first peak, at which
PS16dtm reached the absolute magnitude of $M_{V}=-22.0$ mag in the UVOT $V$
band, the luminosity dropped for $\sim 50$ days, after which it rose again to
a second peak and stayed at almost plateau level for $\sim 100$ days. Then the
flux decreased steadily with time at all wavelengths, as seen in Fig. 1.
However, the decline is most pronounced up to $+270$ days in the NUV/UVOT
filters; afterward, the decline is slower. Interestingly, a few other TDEs
have shown double peaks in the light curve, such as
ASASSN-15lh (Leloudas et al., 2016) and AT 2018fyk (Wevers et al., 2019), but
the physical process behind this feature is still not clear.
In order to explore the physical parameters of the transient, we attempted to
fit a blackbody to SEDs constructed from Milky Way extinction-corrected and
host-subtracted photometry. We used the Markov chain Monte Carlo (MCMC) method
with a publicly available code (https://github.com/nblago/utils) that uses
the Python package emcee (v3.0.2) (Foreman-Mackey et al., 2013). It was
already pointed out in B17 that a simple blackbody is not a good fit to the
SED, which they constructed with the first 200 days of PS16dtm data.
Therefore, B17 fitted a blackbody excluding the photometry in the three
NUV/UVOT filters, arguing that there must be significant obscuring material
in the line of sight. This implies that the true luminosity could be higher,
but they argued by no more than a factor of 2 (that is, not higher than
$5\times 10^{44}$ erg s$^{-1}$). We also note the updated
Swift/UVOT calibration correction from Hinkle et al. (2021) which only
affected the NUV filters, making them fainter compared to B17 measurements.
This means that this problem persists and is even more pronounced than B17
estimated.
We repeated the same exercise of fitting a blackbody to our photometry without
the NUV/UVOT bands, requiring at least four photometric measurements in each
epoch. The results are shown in Fig. 4. Keeping in mind this problem with
likely underestimated temperature due to the omission of NUV/UVOT bands, the
blackbody fits would indicate an initial rise in the temperature to $\sim
15000$ K contemporaneous with the rise in the luminosity. After that, the
temperature drops to $\sim 10000$ K and remains relatively constant. After
$\sim 500$ rest-frame days, the blackbody fit to the SED becomes increasingly
erratic, as at later epochs the host-subtracted photometry of PS16dtm becomes
fainter and thus noisier.
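For illustration, the blackbody SED fit can be sketched with a simple brute-force $\chi^2$ grid search on synthetic fluxes (a minimal, self-contained example; the paper itself uses an MCMC code, and the function names, grids, and synthetic photometry below are our own):

```python
import numpy as np

h, c, k = 6.626e-27, 2.998e10, 1.381e-16     # Planck constant, c, k_B (cgs)
d = 381.0 * 3.086e24                          # luminosity distance of PS16dtm in cm

def bb_flux(lam, T, R):
    """Observed blackbody flux density F_lambda (cgs) for temperature T (K)
    and radius R (cm), at wavelengths lam (cm) and distance d."""
    B = 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))
    return np.pi * B * (R / d) ** 2

def fit_bb(lam, flux, Ts, Rs):
    """Brute-force chi^2 grid search over temperature and radius."""
    best = (np.inf, None, None)
    for T in Ts:
        for R in Rs:
            chi2 = np.sum(((bb_flux(lam, T, R) - flux) / flux) ** 2)
            if chi2 < best[0]:
                best = (chi2, T, R)
    return best[1], best[2]

# optical UVOT effective wavelengths (V, B, U) in cm; NUV bands excluded as in the text
lam = np.array([5468.0, 4392.0, 3465.0]) * 1e-8
synthetic = bb_flux(lam, 12000.0, 1e15)       # fake photometry for T=12000 K, R=1e15 cm

T_fit, R_fit = fit_bb(lam, synthetic,
                      np.linspace(8000.0, 16000.0, 81),
                      np.logspace(14.5, 15.5, 81))
print(T_fit, R_fit)   # recovers T = 12000 K and R = 1e15 cm
```

A real fit would of course use measured fluxes with uncertainties and a proper sampler, but the grid search makes the two-parameter structure of the problem explicit.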
We explored the possibility that the inability to fit a single blackbody to
the NUV/optical data is caused by extinction in the line of sight from the
circumnuclear dust near the SMBH, even though there is no clear consensus
regarding the wavelength dependence of the extinction in the line of sight to
AGN. However, some studies based on individual reddened AGN have suggested
they have an extinction similar to that measured in the Small Magellanic
Cloud (SMC). (We note that this implies the presence of small dust grains, in
contrast to the argument by other authors that the largest grains are those
able to survive in the vicinity of the SMBH.) SMC extinction strongly affects
the UV part of the SED (Willott, 2005); compared to the Milky Way curve or
the one used for starburst galaxies (Calzetti, 2001), the SMC average
extinction curve has essentially no 2175 $\AA$ bump and shows a strong
far-UV rise (see e.g.,
Salim & Narayanan, 2020, for a recent review). We attempted to fit blackbody
SEDs reddened with the SMC-like extinction curve (Gordon et al., 2003),
adding $E(B-V)$ as a free parameter. It was possible to fit the SED with a
blackbody of $T\sim(2-4)\times 10^{5}$ K and a bolometric luminosity of
$L\sim 10^{48}-10^{49}$ erg s$^{-1}$, at a corresponding blackbody radius of
$R\sim 10^{15}$ cm, with an SMC-like extinction curve of $E(B-V)\sim 0.5$
mag. This best-fit model, with such high temperatures, is clearly
nonphysical, as it would also imply an extremely bright X-ray source. In
fact, for sufficiently high temperatures (that is, above 20000 K) this
solution corresponds to a degeneracy between temperature and radius, where
all the measurement points lie on the long-wavelength slope of the
distribution, and for any arbitrarily high temperature there exists a radius
that reproduces the observed fluxes. In conclusion, a single blackbody does not provide a
satisfactory fit to the SED, even when we included extinction. However, the
fact that the SED cannot be described with a single blackbody does not exclude
the interpretation of PS16dtm as a TDE. According to the radiative transfer
calculations by Roth et al. (2016) and Thomsen et al. (2022), the optical/UV
continuum of TDEs is not necessarily described by a single blackbody. In
addition, it is possible that the SEDs of TDEs in AGN are different compared
to those in inactive galaxies due to the preexisting AGN disk, as suggested by
the simulations in Chan et al. (2021).
We also fitted the fading light curve with a power-law profile,
$L=L_{0}(t-t_{0})^{-\alpha}$, where the power-law index $\alpha$ is fit
freely. The best-fit parameter is $\alpha\sim 5/7$ for the NUV/UVOT light
curves during the initial decline ($158<t<270$ days), while $\sim 270$ days
after the outburst they follow an $\alpha\sim 1/6$ decay. For the bolometric
luminosity, the best-fit power law has the index $\alpha\sim 0.98$ (see Fig. 4). Interestingly, a similar
decline trend is also seen in the bolometric light curve of the TDE AT 2017gge
($L\propto t^{-1}$, Onori et al. 2022), which also shows a MIR echo and
coronal emission lines in the late-time spectra, although more numerous and
more intense than those detected in PS16dtm, as shown in the next
sections. Nevertheless, the luminosity evolution of PS16dtm cannot be simply
explained by the $t^{-5/3}$ decline as expected from simulations of the
fallback rate for the stellar debris onto the SMBH (Rees, 1988; Gezari et al.,
2009). At the moment, the origin of the optical emission from TDEs remains a
puzzle, but it might be powered by an inner accretion disk or by shocks of the
intersecting debris streams (see van Velzen et al., 2020, for a recent
review). In the first months, optical and UV light curves of TDEs in quiescent
galaxies often decline as $t^{-5/3}$, but after a few months, models that
include reprocessing of the disk emission by outer debris are expected to show
weaker temperature evolution and to follow a flatter decline, as $t^{-5/12}$
(Lodato & Rossi, 2011).
Furthermore, if the disruption of the star is only partial, the expected
fallback rate could also be shallower (Guillochon & Ramirez-Ruiz, 2013).
Interestingly, van Velzen et al. (2019), by observing TDEs at late times in
the far-UV (FUV), found that for the light curves of TDEs from low-mass SMBHs
($\rm M_{BH}<10^{6.5}M_{\odot}$), the early-time decay follows a $t^{-5/3}$
power-law decline, but the later-time evolution is much shallower. They
conclude that this could be the sign of different disk emission mechanisms
operating at early and late times. They argue that unobscured accretion disk
models, rather than reprocessing and circularization paradigms, can explain
the late-time FUV emission.
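With the reference epoch $t_{o}$ fixed, the power-law fit above reduces to a linear regression in log-log space. The following sketch fits synthetic data; the epochs, luminosities, and noise level are assumptions, not the measured light curve:

```python
import numpy as np

rng = np.random.default_rng(1)
t0 = 0.0                                    # reference (outburst) epoch, fixed here
t = np.linspace(158.0, 1500.0, 40)          # days after outburst (illustrative)
alpha_true, L0 = 5 / 3, 1e46
# Synthetic luminosities with 5% lognormal scatter
L = L0 * (t - t0)**(-alpha_true) * np.exp(0.05 * rng.standard_normal(t.size))

# log10 L = log10 L0 - alpha * log10(t - t0): a straight line in log-log space
slope, intercept = np.polyfit(np.log10(t - t0), np.log10(L), 1)
alpha_fit = -slope
print(round(alpha_fit, 2))                  # recovers ~5/3
```

When $t_{o}$ is itself a free parameter, as in the text, the fit becomes nonlinear and a least-squares minimizer is needed instead of a straight-line fit.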
Figure 4: Evolution of the bolometric luminosity, temperature and radius for
PS16dtm, obtained by fitting a blackbody to the Galactic extinction- and
host-corrected photometry, excluding the photometry in the three NUV UVOT
filters and requiring at least 4 photometric measurements in each epoch. The
purple line in the upper inset represents the best fit of the fading light
curve with a power-law profile, $L=L_{o}(t-t_{o})^{-\alpha}$, where the
power-law index is fit freely.
### 4.2 Blackbody fits with the NEOWISE MIR photometry.
As mentioned in the previous sections, PS16dtm exhibits MIR emission almost
simultaneously with the optical one. In Fig. 2 we show that the first NEOWISE
detection is $\sim 4$ days before the first ATLAS detection. In the first 500
days, PS16dtm shows a steep rise in the MIR light curve. After that, it keeps
rising, albeit at a much slower rate. Another noticeable trait in Fig. 2 is
not only the rise in brightness, but also the change in color during the first
epochs: at +9 days $W1-W2$ has a negative value of $-0.37$ mag, at +165 days
it is $-0.07$ mag, and in the epochs afterward, from +349 to +1700 days,
$W1-W2$ remains steady at a positive value of $0.24-0.35$ mag.
First, we attempted to fit a single blackbody to the data by including the
optical and the MIR photometry at quasi simultaneous epochs. As visible from
Fig. 2, the first MIR point has a very different color than the rest of the
MIR light curve, so we examined the possibility that it is compatible with the
Rayleigh-Jeans tail of a single blackbody. As shown in the example in Fig. 5,
the fit would indicate a temperature of 8900 K at the earliest epochs after
the outburst; however, the single blackbody is a poor fit to the data. Second,
we attempted to fit a double-blackbody model to the optical and the MIR
NEOWISE photometry, as was done in J17 for the first three NEOWISE epochs.
The first NEOWISE epoch at +9 days, yields $\sim 2300$ K for the dust
blackbody temperature, while the second epoch at +165 days, yields $\sim 1300$
K, and the epochs afterward (from +349 to +1700 days) settle at $\sim
900-1000$ K. J17 interpreted the MIR emission as thermal emission from dust
heated by the TDE: in the first epoch, the temperature is above the
sublimation temperature; after the second epoch, the dust temperature dropped
below the sublimation temperature and the MIR emission comes from larger
distances from the SMBH, so the dust in the inner region must be optically
thin.
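The role of the second blackbody can be illustrated numerically: a warm optical/UV component contributes almost nothing at the WISE wavelengths, while a $\sim 1300$ K dust component with a much larger radius dominates there. The temperatures and radii below are assumptions chosen for illustration, not the fitted values:

```python
import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16   # cgs units

def bb_nuLnu(lam_cm, T, R_cm):
    """lambda * L_lambda of a spherical blackbody of radius R (erg/s)."""
    B = (2 * h * c**2 / lam_cm**5) / np.expm1(h * c / (lam_cm * k_B * T))
    return 4 * np.pi**2 * R_cm**2 * lam_cm * B

T_opt, R_opt = 15000.0, 5e14     # warm optical/UV component (assumed)
T_dust, R_dust = 1300.0, 3e16    # cool dust component, much larger radius (assumed)

for name, lam_um in [("g band", 0.48), ("W1", 3.4), ("W2", 4.6)]:
    lam = lam_um * 1e-4          # micron -> cm
    L_opt, L_dust = bb_nuLnu(lam, T_opt, R_opt), bb_nuLnu(lam, T_dust, R_dust)
    print(f"{name}: dust fraction = {L_dust / (L_opt + L_dust):.2f}")
```

With these numbers the dust component is negligible in the optical yet carries essentially all of the W1/W2 flux, which is why the MIR points require a blackbody of their own.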
Figure 5: Examples of three epochs of blackbody fits to the photometry that
includes the NEOWISE data. In the left panel, the best fit by using one
blackbody is shown. For the first epoch of the MIR detection, together with
the quasi-simultaneous ATLAS photometry, the best fit temperature is $\sim
8900$ K. A better fit is obtained by using a double blackbody model, and the
resulting blackbody temperatures are shown. In the right panel, the fits of
the two-blackbody model to the photometry of two epochs, +165 and +835 days,
are shown; the NUV bands are plotted but have not been included in the fitting
procedure.
### 4.3 Spectroscopic analysis
#### 4.3.1 Summary of the spectral features presented in B17
Before we proceed with our spectroscopic analysis, we summarize here the
findings presented in B17 in which they used ten spectra that spanned from +54
to +198 days:
* •
The main spectral features of PS16dtm are the multicomponent hydrogen Balmer
lines and Fe II lines, visible at all epochs.
* •
The $\rm H\alpha$ exhibited a complex asymmetric profile consisting of a broad
component and a superimposed narrower component that has a slight shoulder on
the blue side of the peak.
* •
Several broad features appear near the [O II] $\lambda\lambda 7320$, 7330, [Ca
II] $\lambda\lambda 7291$, 7324, which they interpret as a blend of several Fe
II lines. The Fe II lines became stronger with time.
* •
In the NIR part of the spectra, they noticed several broad emission features
with asymmetric profiles, mainly Fe II lines, out to 1.2 $\mu$m, beyond which
the spectrum is relatively featureless.
* •
Paschen $\gamma$ and other lines of the Paschen series were detected. Paschen
$\alpha$ and $\beta$ were not detected because they fall in the NIR telluric
bands.
* •
The Ca II triplet shows an unusual shape where the central line in the triplet
is much stronger than the others.
* •
Blueward of $\sim 4500$ $\AA$ the spectrum shows a very complex combination of
narrow features superimposed on broad features. They identify [O II]
$\lambda\lambda 3727$, Balmer lines, and additional Fe II lines.
* •
In their UV spectrum obtained with HST, they found evidence for absorption;
in particular, broad Mg II $\lambda\lambda 2800$ absorption lines were
detected, with widths of $\sim 10,000$ $\rm km\,s^{-1}$, which may indicate
the presence of an outflow.
B17 also studied the temporal evolution of the hydrogen Balmer lines $\rm
H\alpha$ and $\rm H\beta$. They fitted the spectral lines with the sum of
three Gaussians. They reported that the width of the intermediate component
of $\rm H\alpha$ is around 750 $\rm km\,s^{-1}$ and narrows as a function of
time, going from 900 $\rm km\,s^{-1}$ to 600 $\rm km\,s^{-1}$. For the broad
component, they found 3500 $\rm km\,s^{-1}$, which increases to 4000 $\rm
km\,s^{-1}$ in the decline phase of the light curve. From their spectrum with
the highest resolution, they found that the width of the narrow component is
100 $\rm km\,s^{-1}$. For their low-resolution spectra the narrow component is
unresolved, so they fixed it at the instrumental resolution. Furthermore, they
found that the width in the earliest epoch of their intermediate component was
similar to the width of the preexisting broad component in the archival host
spectra. For this reason, B17 argue that the intermediate component that they
measure in PS16dtm might be associated with the BLR of the NLSy1 host galaxy.
The flux of the narrow component of $\rm H\alpha$ has remained unchanged
relative to the host spectrum.
#### 4.3.2 Spectral evolution to 1868 rest-frame days after the outburst
Now we turn to the analysis of our spectra, 16 of which were taken with the
low-resolution NTT/EFOSC spectrograph at +155 to +1743 rest-frame days, and
one medium-resolution spectrum with the VLT/X-shooter at +1868 days. The
spectra are shown in Fig. 3. Similar to B17 (see their Fig. 7), we also found
that the main spectral characteristics are the hydrogen Balmer lines and the
Fe II complexes. The most noticeable variability in our spectra is that the
blue continuum becomes weaker and almost disappears in the final X-shooter
spectrum at +1868 rest-frame days after the outburst. Perhaps even more
striking are the Fe II optical lines, in the blue (around H$\beta$) and red
(around H$\alpha$) line, that become weaker as time progresses (see the
zoomed-in view around the H$\beta$ line in Fig. 6). This is the first time such
strong Fe II emission has been seen in a TDE, even when compared to other nuclear
transients for which Fe II emission has been claimed, such as
J123359.12+084211.5 in MacLeod et al. (2019) and PS1-10adi (Jiang et al.,
2017; He et al., 2021), AT 2018fyk (Wevers et al., 2019) and AT 2019dsg
(Cannizzaro et al., 2021). We show the spectroscopic comparison of the iron-
rich TDE candidates in Fig. 7. Some TDE and flaring AGN spectra show O III and
N III lines which have been explained as due to the mechanism of Bowen
fluorescence (Blagorodnova et al., 2019; Leloudas et al., 2019; Trakhtenbrot
et al., 2019; Onori et al., 2019). We also searched for these lines, but in
PS16dtm these are difficult to resolve due to strong blending with Fe II
emission. An extremely weak feature is resolved at the position of the N III
$\lambda$4640 line in the X-shooter spectrum (see later discussion). We also
detected strong features in the red part of the spectrum, in the range
7000-8000 Å, which fade with time and completely disappear in the X-shooter
spectrum. These are most likely blended Fe II emission, as already noted by
B17, with some probable contribution from other nearby lines; for example,
O I $\lambda$8446 is clearly identified in the +1514 days spectrum (see
Fig. 3).
Figure 6: Zoomed-in view of the spectra containing the $\rm H_{\beta}$ and the
Fe II multiplets. The phases indicated in the legend refer to the rest-frame
days after MJD 57577. Vertical solid lines indicate the position of hydrogen
Balmer lines whereas the shaded pink areas mark the location of the strongest
Fe II features. Figure 7: Spectroscopic comparison of PS16dtm with TDEs from
the literature where Fe II emission has been identified: PS1-10adi (Kankare et
al., 2017), AT 2018fyk (Wevers et al., 2019), and AT 2019dsg (Cannizzaro et
al., 2021). Vertical solid lines indicate the position of hydrogen Balmer
lines, the dashed line denotes He II $\lambda$4686, whereas the shaded pink
areas mark the location of the strongest Fe II complexes. The spectra of the
three comparison TDEs have been scaled up and offset for display.
#### 4.3.3 Iron emission-line model for the spectral fitting
Next, we proceed to identify the emission lines and to follow their temporal
evolution, although, given the richness of features and their strong
variability, their behavior is difficult to disentangle since all
emission lines are strongly affected by blending (see Fig. 6). The complex Fe
II ion can produce thousands of line transitions, thus these lines are
typically blended and difficult to identify in the spectra. With this caveat
in mind, we fit all spectra using a python-based tool called Fully Automated
pythoN tool for AGN Spectra analYsis (FANTASY)121212https://fantasy-
agn.readthedocs.io/en/latest/, that was initially developed for fitting AGN
optical spectra (Ilić et al. 2020; Rakić 2022, Ilić et al. 2022, in prep.).
The code fits the underlying broken power-law continuum and sets of emission
lines, with the Fe II model based on the atomic parameters of Fe II. In
contrast to widely used fully empirical iron templates (for recent discussion
on different Fe II templates, see Park et al. 2022 and references therein),
such as that by Boroson & Green (1992), our approach is slightly different and
it was first developed by Kovačević et al. (2010). The semi-empirical model of
Kovačević et al. (2010) relies not only on the observed properties of AGN
spectra, but also on atomic properties of the transitions, so the Fe II line
sets are grouped according to the same lower energy level in the transition
with line ratios connected through the line transition oscillator strengths.
Kovačević et al. (2010) produced a multicomponent template covering 4000-5500
Å. Building on that work, we extended the Fe II model to include the
wavelength range up to 7000 $\AA$ to cover the area around H$\alpha$ line
where Fe II emission is also very strong, using the atomic data from Kurucz
database131313https://lweb.cfa.harvard.edu/amp/ampdata/kurucz23/sekur.html
(Ilić et al., in prep). This approach can be useful to understand newly
discovered transients, which show strong and complex Fe II emission. We assume
the following constraints for the emission line ratios, widths and shifts:
1. (i)
the broad Balmer emission lines (H$\alpha$, H$\beta$, H$\gamma$) are fixed to
have the same width and shift; among other strong broad lines in this range, we
included He I $\lambda$5876, He II $\lambda$4686, the Na I doublet
$\lambda\lambda$5890, 5896, and O I $\lambda$6046;
2. (ii)
the model of broad Fe II, with all lines having the same widths and shifts,
and line ratios connected as described above. The broad Fe II lines are
assumed to have the same profiles as broad Balmer lines (see e.g., Dong et
al., 2011, and references therein), so this component is set to have the same
width and shift as other broad lines;
3. (iii)
the AGN narrow emission lines, such as H$\alpha$, H$\beta$, H$\delta$, [O
III], [N II], [S II], Ti II, Cr II (see Véron-Cetty et al. 2006; Park et al.
2022, for the list of significant narrow lines) are constrained to have the
same width and shift; in addition, the ratios of the [O III] and [N II]
doublets are fixed to the theoretical values of $\approx 3$ (Dimitrijević et al., 2007);
4. (iv)
the model of narrow Fe II, plus the forbidden Fe II lines, have the same width
as narrow lines (Véron-Cetty et al., 2006; Park et al., 2022).
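The parameter tying in constraints (i)-(iv) can be sketched as follows: each group of lines shares one width and one shift, and within an Fe II set only the group flux is free, with the individual line ratios held fixed. The line centers and ratios below are placeholders, not the actual FANTASY template values:

```python
import numpy as np

def gauss(lam, center, fwhm_kms, flux):
    """Gaussian emission-line profile; width given as FWHM in km/s."""
    sigma = center * (fwhm_kms / 2.998e5) / 2.3548
    return flux * np.exp(-0.5 * ((lam - center) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

# One (width, shift) pair per component group, shared by all its lines
fwhm_broad, shift_broad = 2500.0, 0.0     # broad Balmer, He, broad Fe II lines
fwhm_narrow, shift_narrow = 300.0, 0.0    # narrow lines, narrow/forbidden Fe II

lam = np.linspace(4100.0, 7000.0, 5000)
model = np.zeros_like(lam)

# Broad Balmer lines: shared width/shift, individual fluxes free
for center, flux in [(6562.8, 10.0), (4861.3, 3.5), (4340.5, 1.5)]:
    model += gauss(lam, center + shift_broad, fwhm_broad, flux)

# Broad Fe II set: one free group flux; relative ratios fixed (placeholders here;
# in the actual model they follow from the oscillator strengths)
feii_flux = 2.0
for center, ratio in [(4924.0, 0.3), (5018.0, 0.4), (5169.0, 0.3)]:
    model += gauss(lam, center + shift_broad, fwhm_broad, ratio * feii_flux)

# Narrow [O III] doublet with its ratio fixed to the theoretical ~3
oiii = 1.0
model += gauss(lam, 4958.9 + shift_narrow, fwhm_narrow, oiii / 3)
model += gauss(lam, 5006.8 + shift_narrow, fwhm_narrow, oiii)
```

In a fit, only the group fluxes, two widths, and two shifts would remain free parameters, which is what keeps the heavily blended spectrum tractable.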
#### 4.3.4 Results of the modeling of the spectra in the $4100-7000$ $\AA$
range
In Fig. 8 we show an example of the results of the multicomponent spectral
fitting in the $4100-7000$ $\AA$ rest-frame wavelength range. Notably, our
model was able to reconstruct the observed spectra, but only when assuming a
broad component of Fe II multiplets. The presence of the broad Fe II component
has been previously detected in AGN (e.g., Dong et al., 2011; Park et al.,
2022); however, since this component is typically very weak, it is often
merged with the continuum emission, especially in poor spectral resolution
data. We also detected narrow features on top of the broad Fe II components,
which our code usually identified as narrow Fe II emission, despite the
limited instrumental resolution. We were also able to identify the broad
features blueward of the H$\alpha$ region with the Fe II model (see Fig. 8). We
performed a series of tests with other emission-line models (such as no broad
Fe II, no forbidden Fe II lines, etc.), but none were able to produce a
reasonable fit based on the fitting residuals. The goodness of the fit was
evaluated using the $\chi^{2}$ parameter.
Figure 8: Examples of multicomponent spectral fitting in the $4100-7000$ $\AA$
rest-frame range for the +179 days spectrum (solid gray line). The underlying
continuum emission and all emission-line components are plotted with different
colors, as indicated in the legend. For details on the assumed emission-line
components see text. The final model (solid red line) which is the sum of all
components is also shown, whereas the residual spectrum is plotted below.
Figure 9: Evolution of the FWHM of different line components (denoted in the
upper right corner) during the spectroscopic campaign. Figure 10: Temporal
evolution of the extracted emission lines and line-blends during the campaign.
The bottom panel shows the evolution of $L_{\rm bol}/L_{\rm Edd}$ ratio, in
which the bolometric luminosity is estimated from the spectral continuum at
5100 Å. The vertical dashed line indicates +400 days after the estimated
outburst time to guide the eye.
In Fig. 9 we show the time evolution of the FWHM of the most prominent
emission lines, broad H$\alpha$ and H$\beta$, broad Fe II, and all narrow
lines. It is evident that the widths of the broad lines slowly decrease with
time. The width of the narrow lines remains the same, but this is constrained
by the instrumental resolution.
In Fig. 10 we show the time evolution of the integrated emission-line
features, that is, for the broad H$\alpha$ and H$\beta$ line, total broad Fe
II, and the total narrow and forbidden Fe II lines. The strengths of the broad
line-blends (H$\alpha$, H$\beta$, total broad Fe II) show similar trends: a
slight increase followed by a long decrease with time. A similar evolution is
seen in the narrow Fe II blend. The exact time of the peak is difficult to
identify due to the fluctuations in line fluxes. Furthermore, the extraction
of the single lines
H$\alpha$ and H$\beta$ from the low-resolution spectra is difficult; it
depends on the ability of the code to identify the narrow [O III]
$\lambda$5007 line. Nevertheless, a generally similar trend is evident in all
light curves, especially those of the line-blends.
In Fig. 10 we also show the ratio of the bolometric luminosity and the
Eddington luminosity, $L_{\rm bol}/L_{\rm Edd}$. The Eddington luminosity was
estimated as $L_{\rm Edd}=1.26\times 10^{38}(M_{\rm BH}/M_{\odot})$
$\rm{erg\,s}^{-1}$ with the SMBH mass of $M_{\rm BH}\sim 10^{6}M_{\odot}$,
which we determine in Sect. 4.3.8. For the bolometric luminosity $L_{\rm bol}$
we used a different approach than the one used in Sect. 4.1. Here, to obtain
the bolometric luminosity we used a standard procedure for quasars, where
$L_{\rm bol}=k_{\rm bol}\lambda L_{\lambda}$ by applying the mean quasar
bolometric correction $k_{\rm bol}\approx 10$ (e.g., Richards et al., 2006;
Runnoe et al., 2012) to the continuum luminosity at 5100 Å extracted from the
multicomponent fitting. This assumes that the underlying continuum originates
from the accretion disk of an AGN, and it remains to be investigated if this
is a valid approximation for an AGN hosting a TDE. We subtracted the host-
galaxy contribution to the $L_{5100}$ luminosity before calculating the
Eddington ratios (see Fig. 11). The host-galaxy spectrum was extracted from
the SDSS spectrum, using the principal component analysis (Vanden Berk et al.,
2006), as explained in Ilić et al. (2020) (see Sect. 5.2 for further
discussion). A word of caution applies to the flux at 5100 Å measured from
the fits, since it is sensitive to the spectral multicomponent fitting, but
even more so to the absolute spectral calibration. Here, all spectra are
calibrated using photometry; however, for more accurate absolute calibration
in AGN monitoring, one usually applies a more precise inter-calibration
based on the constant narrow-line flux, such as [O III] or [S II] lines (van
Groningen & Wanders, 1992; Fausnaugh, 2017).
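The Eddington-ratio estimate described above reduces to two lines of arithmetic; the continuum luminosity plugged in below is a placeholder for illustration, not a measured value:

```python
def eddington_ratio(lam_L_5100, M_bh_msun, k_bol=10.0):
    """L_bol / L_Edd with L_bol = k_bol * lambda*L_lambda(5100 A)
    and L_Edd = 1.26e38 * (M_BH / M_sun) erg/s, as in the text."""
    return (k_bol * lam_L_5100) / (1.26e38 * M_bh_msun)

# Hypothetical host-subtracted continuum luminosity, with M_BH ~ 1e6 M_sun
print(round(eddington_ratio(5e42, 1e6), 2))   # -> 0.4
```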
Figure 11: The pure AGN components for the H$\alpha$ (left) and H$\beta$
(right) regions are shown, obtained when the host-galaxy contribution is
subtracted from the observed total archival, preoutburst SDSS spectrum.
In order to assess the source of ionization of the broad lines and blends, in
Fig. 12 we plot the correlation between the continuum flux at 5800 Å ($F_{\rm
cnt}$), and the flux of H$\alpha$ (left), H$\beta$ (middle), and total Fe II
blend (right). $F_{\rm cnt}$ is measured from the observed spectra as the
median of 5790-5810 Å range, since this part is identified as free of emission
lines. Apart from the first three epochs (+155, +179, +196) the line fluxes
correlate with the continuum flux, supporting that photoionization by the
central continuum source is the main heating source of the line emitting gas
(see e.g., Netzer, 2006). Outliers are seen in each plot, indicating that
in the first period the emission may be induced by other mechanisms, such as
shock ionization. This is also supported by the higher gas velocities measured
from the line widths. It is important to note that for consistency, all epochs
were uniformly fitted with the same initial constraints, listed above.
Figure 12: Correlations between the continuum flux at 5800 $\AA$, and the flux
of H$\alpha$ (left), H$\beta$ (middle), and total broad Fe II blend (right) in
units of $10^{-15}$ $\rm{erg\,s}^{-1}\rm{cm}^{-2}$. The color-bar indicates
the day after MJD 57577. The clear correlation of the continuum and line
fluxes (apart from the first three epochs) supports the photoionization as the
main heating source (see text for details).
#### 4.3.5 Spectral features in the $7000-9000$ Å range
In the spectral region redward of the H$\alpha$ line, there are broad features
visible in the spectra shown in Fig. 3. These are most notably He I lines,
namely He I $\lambda$6680 (which is attached to the H$\alpha$ red wing), He I
$\lambda$7065, and He I $\lambda$7281. Also present are the Ca II triplet
($\lambda$8498, $\lambda$8542, $\lambda$8662), the well-known oxygen lines O I
$\lambda$7774 and O I $\lambda$8446, as well as the Mg II $\lambda$7892 line.
All of these broad features are of similar width as the broad H$\alpha$
component, peak at a similar time, and decrease in intensity and width over
time in a similar fashion. A broad feature around
7300 Å was identified as [O II] $\lambda$7320, $\lambda$7330, [Ca II]
$\lambda\lambda$7291, 7324 by B17, but the authors suggested that these
features could be associated with Fe II emission lines. However, we believe
that this is a mixture of an Fe II blend and He I $\lambda$7281, as is the
feature centered around 8200 Å. As will be shown in the next section, all
broad features disappeared in the last VLT/X-shooter spectrum, with possibly
only weak Mg II and O I being present.
#### 4.3.6 Analysis of the medium-resolution VLT/X-shooter spectrum
Figure 13: The upper panels show the Balmer H$\beta$, H$\alpha$, and H$\gamma$
lines from the medium-spectral resolution VLT/X-shooter spectrum. The lower
panels zoom in at the strongest identified Fe II and [Fe II] lines, as well as
He II, [O I] and [S II]. The weak coronal line [Fe VII] $\lambda$6087 is also
detected.
Given that the resolution of the VLT/X-shooter spectrum is much higher
compared to those taken with NTT/EFOSC2, we were able to clearly resolve the
broad component in the hydrogen Balmer H$\alpha$ and H$\beta$ lines (Fig. 13),
whereas the broad component of the Balmer H$\gamma$ line is too weak to be
detected. In the X-shooter spectrum, taken at +1868 days, it is noticeable
that the broad Fe II emission has fully faded, and that the galaxy is most
likely returning to its preoutburst state. Nevertheless, we were able to
identify the strongest narrow Fe II lines coming from the a6S, a4G, and b4F
multiplets, as well as forbidden [Fe II] lines, displayed in the zoom-in plots
in Fig. 13. The narrow emission lines can be now unambiguously identified and
used for the diagnostics of physical conditions in the ionized gas. We have
extracted the following narrow lines: H$\alpha$, H$\beta$, [O I] 6300, [S II]
6716, 6731, and [O II] 3727, 3729.
We fit the H$\alpha$ and H$\beta$ line region with the multicomponent model in
the same manner as described in Sect. 4.3.3 to measure the narrow emission
line fluxes, as well as to extract the broad component of the Balmer lines.
The strongest narrow emission lines, two [O III] lines and H$\alpha$ are
fitted with two Gaussians, one for the core and the other for the line wings
(Fig. 14). These are all constrained to have the same width and shift, and two
a6S Fe II lines ratios are fixed according to our model (see Sect. 4.3.3).
Figure 14: Fits of the $\rm H_{\beta}$ (top) and $\rm H_{\alpha}$ (middle)
line regions in the X-shooter spectrum. The bottom panel compares the broad
components (obtained after subtracting the narrow lines) of the $\rm
H_{\beta}$ and $\rm H_{\alpha}$ in the velocity space.
The width of the Fe II lines is $\sim$220 $\rm km\,s^{-1}$, larger than that
of the narrow lines, which have widths closer to $\sim$100 $\rm km\,s^{-1}$. This
is evident for all iron lines, including the coronal [Fe VII] $\lambda$6087
line. The extracted broad H$\alpha$ and H$\beta$ components (obtained after
subtracting narrow lines) have similar profiles (Fig. 14, bottom panel), with
H$\alpha$ being slightly broader with the width of $\sim$1900 $\rm
km\,s^{-1}$. Both broad lines show slightly asymmetric profiles, with the
possibility that the line peak is slightly shifted redward.
#### 4.3.7 The presence of coronal lines in the X-shooter spectrum
The spectral lines coming from forbidden transitions of highly ionized ions
(ionization potential larger than 54.4 eV) are typically referred to as
coronal lines, and are known to exist in the optical spectra of Seyfert
galaxies (see for example Korista & Ferland, 1989; Gelbord et al., 2009).
Komossa et al. (2008) and Wang et al. (2012) discovered several coronal line
emitters, which can be interpreted as light echoes from past TDEs. The high
ionization potential of coronal lines indicates that soft X-rays are required
for their production. These lines are typically of somewhat larger width than
the narrow lines, with higher critical densities ($\sim 10^{7.5}{\rm cm}^{-3}$
Osterbrock & Ferland 2006). It is assumed that they come from either the inner
narrow-line region or from the inner edge of the torus (e.g., Rose et al.,
2015). Their strengthening has been associated with some sort of transition
event, such as the awakening of a dormant AGN (Ilić et al., 2020) or a TDE in a
gas rich environment (Wang et al., 2012). Recently, Onori et al. (2022)
reported the detection of coronal lines in the TDE AT 2017gge 1700 days after
the initial outburst.
In the X-shooter spectrum, we identified a weak [Fe VII] $\lambda$6087 line
(see bottom mid-panel in Fig. 13). Moreover, two other [Fe VII] lines at 5159
Å and 5276 Å could also be misidentified as Fe II lines. Another well-known
coronal line, [Fe X] $\lambda$6374, seemed to be present as well, but telluric
disturbance makes its detection uncertain. We examined all previous spectra
and the [Fe VII] $\lambda$6087 is either not present or too weak to be
resolved in the low-resolution spectra. We also retrieved the spectrum with
higher resolution from B17, taken at +192 days with the MagE spectrograph, to check
if the aforementioned line was present, but it is also either too weak to be
seen or not present. We note that there could be many other coronal lines (for
a review of coronal lines, see e.g., Rose et al., 2015) hidden in the strong
iron blends throughout the evolution of transient events such as PS16dtm.
Future monitoring campaigns of nearby transient events with high-resolution
instruments and high S/N spectra could shed light on the variability of these
lines and on their connection with the occurrence of such transient events. It
remains unclear what the origin of the coronal lines in PS16dtm could be,
since the X-ray emission remains weak in this object (see Sect. 3.6), contrary
to the case of AT 2017gge and AT 2019qiz, which experienced considerable
late-time X-ray brightening.
#### 4.3.8 SMBH mass and AGN bolometric luminosity estimation
To be able to measure the SMBH mass from the X-shooter spectrum, we first
needed to check whether the transition to the preoutburst state has occurred.
Therefore, we compared the X-shooter with the archival, preoutburst SDSS
spectrum. We attempted to scale the X-shooter spectrum to have the same
spectral resolution as the SDSS spectrum, for which we used the [O I] 6300 Å
line that is clearly seen in both spectra and is not contaminated with satellite
lines. However, since the SDSS and X-shooter spectra are taken with different
apertures (2″and $0.9-1$″, respectively), the absolute re-scaling of the
spectra using [O III] or [S II] emission line fluxes was not possible. This
was due to the fact that the narrow emission lines are predominantly
originating from the host galaxy (see Fig. 11), and the host-galaxy
contribution was significantly different for the two used apertures.
Therefore, we compared the X-shooter spectrum smoothed to the SDSS spectral
resolution, to the pure AGN component extracted from the SDSS spectrum, which
was scaled-up so that the H$\alpha$ or H$\beta$ have the same broad line
intensity, as shown in Fig. 15. We then concluded that the broad line profiles
of H$\alpha$ or H$\beta$ in the archival SDSS spectrum and in the X-shooter
spectrum at +1868 days, are similar.
Using the approach from Sect. 4.3.4 where the bolometric luminosity is
expected to scale with the luminosity at 5100$\,\AA\,$, we estimated the AGN
bolometric luminosity from the X-shooter spectrum to be $L_{\rm bol}=3.7\times
10^{43}$ $\rm{erg\,s}^{-1}$, which is almost reaching the preoutburst value,
which we extracted from the SDSS spectrum to be $L_{\rm bol}=1.6\times
10^{43}$ $\rm{erg\,s}^{-1}$. Both values are host-corrected using the same
constant host contribution estimated from the SDSS spectrum.
Given that the broad H$\beta$ line can be clearly extracted from the X-shooter
spectrum, we calculated the mass of the SMBH using the single-epoch method
(see e.g., Gaskell, 2009, and references therein) and the FWHM of the H$\beta$
line of $1160\pm 190$ $\rm km\,s^{-1}$. The radius of the BLR is estimated
using the radius-luminosity relation from Bentz et al. (2013) applied to the
AGN continuum luminosity at $5100\,\AA$. $M_{\rm BH}$ is then calculated using the
virial theorem assumption, with the virial factor of $f=0.75$ (the same as in
Xiao et al., 2011, who estimated the mass for this object). The obtained mass
is $M_{\rm BH}=10^{6.07\pm 0.18}M_{\odot}$, similar to the previous estimates
by B17 and Xiao et al. (2011). We note that the obtained value is dependent on
the assumption for the virial factor $f$ (see discussion in e.g., Dalla Bontà
et al., 2020, and references therein).
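The single-epoch recipe combines a radius-luminosity relation with the virial theorem, $M_{\rm BH}=f\,R_{\rm BLR}\,v^{2}/G$. The sketch below uses approximate Bentz et al. (2013)-style coefficients and an input luminosity inferred from the quoted $L_{\rm bol}$; it reproduces the order of magnitude, not the paper's exact fit:

```python
import numpy as np

G, M_sun, lt_day = 6.674e-8, 1.989e33, 2.59e15   # cgs units

def single_epoch_mass(lam_L_5100, fwhm_kms, f=0.75):
    """Single-epoch virial SMBH mass in grams.

    Radius-luminosity relation with approximate coefficients:
    log10(R_BLR / lt-day) ~ 1.527 + 0.533 * log10(lam_L_5100 / 1e44 erg/s).
    """
    R_blr = 10**(1.527 + 0.533 * np.log10(lam_L_5100 / 1e44)) * lt_day
    v = fwhm_kms * 1e5                            # FWHM in cm/s
    return f * R_blr * v**2 / G

# Illustrative input: lam_L_5100 ~ L_bol / k_bol with the values quoted in the text
M_bh = single_epoch_mass(3.7e43 / 10.0, 1160.0)
print(round(np.log10(M_bh / M_sun), 2))           # ~6, consistent with 10^(6.07)
```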
### 4.4 Spectral classification of the host galaxy
Previous works concluded that the host of PS16dtm is a NLSy1 galaxy (see Sect.
2 in B17), based on the detection of a weak broad-line component in the
H$\alpha$ line (Greene & Ho, 2007; Xiao et al., 2011) and high X-ray
luminosity of $L_{\rm 2-10keV}\sim 10^{42}$ $\rm{erg\,s}^{-1}$ (Pons & Watson,
2014). The latter could not be easily accounted for by star formation alone, as
it would require star formation rates of at least 200 $\rm M_{\odot}$ yr-1
(Ranalli et al., 2003). Some of the most common features of NLSy1 galaxies
are strong and rich Fe II multiplets; however, we note that in the quiescent
phase the archival optical SDSS spectrum of the host shows no (or only very
weak) Fe II emission, which makes this object a rare type of NLSy1 (Zhou et al.,
2006). It is interesting to note that the spectrum of the host galaxy of the
nuclear transient CSS100217 (SDSS J102912.58+404219.7) shows more typical
NLSy1 features, such as stronger broad hydrogen Balmer lines, He I, and very
prominent Fe II emission around H$\beta$ (Drake et al., 2011). However, both
host galaxies are not typical NLSy1 objects. Their location on the Baldwin-
Phillips-Terlevich (BPT) diagrams (Baldwin et al., 1981; Kewley et al., 2006)
is either to the left of or at the border of the region of AGN–starburst
composite objects, suggesting the presence of significant star formation.
B17 placed the host AGN more on the border of the starburst-AGN division based
on the line ratios measured from the archival SDSS spectrum. Given that the
X-shooter spectrum shows that the galaxy is returning to its preoutburst state, we
can make use of its higher resolution to perform the diagnostics using the
emission line ratios of [O III] $\lambda$5007/H$\beta$, [N II]
$\lambda$6583/H$\alpha$, [O I] $\lambda$6300/H$\alpha$, [S II]
$\lambda\lambda$6717,31/H$\alpha$, and [O III] $\lambda$5007/[O II]
$\lambda\lambda$3726,3729 (Baldwin et al., 1981; Kewley et al., 2006). In Fig.
16 we plot the diagnostics diagrams using the narrow emission lines measured
from the X-shooter spectrum. The location on the BPT diagrams places the host
galaxy in the area of AGN–starburst composite objects. This means that the
narrow-line emission has a significant contribution to the ionization from
stellar sources, more than previously shown in B17.
Figure 15: X-shooter spectrum, scaled to the SDSS spectrum of poorer spectral
resolution, compared to the SDSS AGN component spectrum (from Fig. 11), which
was scaled up so that the broad-line wings match the broad line of H$\alpha$
(left) and H$\beta$ (right). Figure 16: Diagnostic diagrams based on the line
ratios of [O III] $\lambda$5007/H$\beta$, [N II] $\lambda$6583/H$\alpha$, [O
I] $\lambda$6300/H$\alpha$, [S II] $\lambda\lambda$6717,31/H$\alpha$, and [O
III] $\lambda$5007/[O II] $\lambda\lambda$3726,3729 (Baldwin et al., 1981;
Kewley et al., 2006). The blue filled circle denotes the position of the host
galaxy of PS16dtm measured from the X-shooter spectrum.
### 4.5 PS16dtm in terms of normal AGN variability
B17 discarded the possibility that PS16dtm can be explained in terms of normal
AGN variability. After six years of monitoring this nuclear transient, we can
re-open the same question. The variability of an AGN can be assessed through
the mean fractional variability parameter $F_{\rm var}$ (Peterson, 2001),
which can be approximated by the ratio of the standard deviation to the mean
flux. NLSy1 are known to be low-variability AGN (Ai et al., 2010), also on
longer timescales (Shapovalova et al., 2012), except for the radio-loud NLSy1,
which exhibit blazar-like variability (Berton et al., 2015). For PS16dtm,
$F_{\rm var}$ is 0.43, 0.26, and 0.58 for H$\beta$, H$\alpha$, and total broad
Fe II, respectively. Thus, with data spanning over six years, we can confirm
that such high variability, up to $\sim$50%, indicates that the outburst is
not due to regular AGN activity.
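As a minimal sketch, $F_{\rm var}$ in the Peterson (2001) sense is the square root of the excess variance (sample variance minus the mean squared measurement error), normalized by the mean flux; with negligible errors this reduces to the standard deviation over the mean. The light-curve values below are hypothetical, not the measured PS16dtm fluxes.

```python
import statistics

def fractional_variability(fluxes, errors=None):
    """F_var = sqrt(S^2 - <sigma_err^2>) / <F>  (Peterson 2001).

    S^2 is the sample variance of the light curve; when measurement
    errors are negligible this reduces to the standard deviation over
    the mean, the approximation used in the text.
    """
    mean = statistics.fmean(fluxes)
    s2 = statistics.variance(fluxes)
    err2 = statistics.fmean([e * e for e in errors]) if errors else 0.0
    return max(s2 - err2, 0.0) ** 0.5 / mean

# Hypothetical line-flux light curve (arbitrary units)
print(round(fractional_variability([1.0, 1.8, 2.5, 1.2, 0.9]), 2))  # -> 0.45
```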
In Fig. 17, we investigated the location of the PS16dtm outburst on the AGN
main-sequence plane, defined by two measured quantities: the FWHM of H$\beta$
and $R_{\rm FeII}$ (Sulentic et al., 2009; Marziani & Sulentic, 2014; Shen &
Ho, 2014), where $R_{\rm FeII}$ is the ratio of the equivalent width of the Fe
II (we used here only the broad component since it is dominant, measured in
the 4434–4684 Å wavelength range) to that of H$\beta$. Typical AGN have
$R_{\rm FeII}\sim 0.4$, with $\sim 90\%$ of objects in the range 0.1 to 1. The
PS16dtm outburst is located at the extreme right of the main sequence, in the
region of highly accreting population A objects with $R_{\rm FeII}>1$ (see
e.g., Marziani & Sulentic, 2014).
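The equivalent-width ratio $R_{\rm FeII}$ defined above can be sketched as follows. The helper functions and the flat-continuum spectrum are ours for illustration (a synthetic 20% excess across the Fe II band), not the actual fitting pipeline used for the measurements.

```python
def equivalent_width(wave, flux, cont, lo, hi):
    """EW (same units as wave) of the emission in [lo, hi], via
    trapezoidal integration of (F - F_cont) / F_cont."""
    pts = [(w, (f - c) / c) for w, f, c in zip(wave, flux, cont) if lo <= w <= hi]
    return sum(0.5 * (y0 + y1) * (w1 - w0)
               for (w0, y0), (w1, y1) in zip(pts, pts[1:]))

def r_feii(wave, flux, cont, ew_hbeta):
    """R_FeII: EW of the broad Fe II blend in 4434-4684 A over EW(broad H-beta)."""
    return equivalent_width(wave, flux, cont, 4434.0, 4684.0) / ew_hbeta

# Synthetic example: flat continuum with a 20% Fe II excess across the band
wave = [4400.0 + i for i in range(301)]
cont = [1.0] * len(wave)
flux = [1.2 if 4434.0 <= w <= 4684.0 else 1.0 for w in wave]
print(round(r_feii(wave, flux, cont, ew_hbeta=40.0), 3))  # EW(FeII) = 50 A -> 1.25
```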
During 1600 days of observations, $R_{\rm FeII}$ and FWHM(H$\beta$) slowly
decrease toward the preoutburst state; however, the location on the main
sequence remains within the extreme accretors, far from the majority of AGN
(indicated by SDSS quasars from the Shen et al. (2011) catalog, gray dots) and
the extreme end of NLSy1 (data from the Rakshit et al. (2017) catalog, blue dots).
Such strong $R_{\rm FeII}$ indicates high-accretion rates, most likely super-
Eddington accretion (Marziani et al., 2018; Panda et al., 2018). This is
further supported with our estimated $L_{\rm bol}/L_{\rm Edd}$ ratio, shown in
the bottom panel of Fig. 10.
The strength of the Fe II emission in PS16dtm is remarkably high, even when we
consider that there are other TDE candidates with Fe II emission lines, such as
AT 2018fyk (Wevers et al., 2019) and PS1-10adi (Kankare et al., 2017; He et
al., 2021); their spectroscopic comparison is shown in Fig. 7. What is
striking is that $R_{\rm FeII}$ is $\sim$5 times larger than in most AGN from
the literature. We note that the fitted spectra are photometrically
calibrated, so the absolute fluxes should be taken with caution; however,
$R_{\rm FeII}$ is a line ratio and should therefore be more reliable.
Figure 17: Location of PS16dtm (full circles) on the AGN main-sequence plane,
that is, the FWHM of the H$\beta$ line vs. $R_{\rm FeII}$, where $R_{\rm
FeII}$ is the ratio of the equivalent width of the Fe II, measured in the
4434–4684 Å wavelength range, to that of H$\beta$. The SDSS quasars from the
Shen et al. (2011) catalog (gray dots), as well as the NLSy1 from the Rakshit
et al. (2017) catalog (blue dots), are also shown. Solid lines at 4000 $\rm
km\,s^{-1}$ and $R_{\rm FeII}=1$ indicate separations between different
populations of AGN (Marziani et al., 2018). The color bar indicates the day
after MJD 57577.
## 5 Discussion
### 5.1 Dust echo from MIR emission and the time lags with respect to the
optical peak
There are several optically discovered TDE candidates with MIR flares (see
e.g., Jiang et al., 2021). They are detected with a delay with respect to the
optical ones, so they are interpreted as dust echoes of the TDE optical
emission (van Velzen et al., 2016). The “reverberation time lag,” due to
light-travel times, between the optical/UV/X-ray variations and the IR
response is a commonly used technique in AGN science. Such a time delay can be
directly inferred by the light curves and it is presumed to provide an
estimate of the distance to the outer radius of the dust torus. However, many
parameters influence the IR reverberation response, including the convolution
of the UV/optical light curve with a transfer function that contains
information about the geometry and structure of the torus (see Almeyda et al.,
2017, for a detailed discussion).
For galaxies with AGN, it is possible to estimate the inner radius of the
dusty torus from the sublimation radius as (see e.g., Namekata & Umemura,
2016; Jiang et al., 2019):
$R_{\rm sub}=0.121\;L_{45}^{0.5}\;T_{1800}^{-2.804}\;a_{0.1}^{-0.51}\;{\rm pc}$ (1)
where $L_{45}$ is the bolometric luminosity of the AGN in units of $10^{45}$
erg s-1, $a_{0.1}$ is the grain size in units of 0.1 $\mu$m, and $T_{1800}$ is
the temperature of the dust, normalized to the expected sublimation
temperature of 1800 K. For PS16dtm, it is possible to make this calculation
using both the pre- and post-outburst bolometric luminosity, which will yield
different values, corresponding to PS16dtm clearing out a larger radius by
sublimating the dust closer to the nucleus (J17). Using our estimates of the
bolometric luminosity ($1.6\times 10^{43}$ $\rm{erg\,s}^{-1}$ and $\sim
10^{45}$ $\rm{erg\,s}^{-1}$ pre- and post-outburst, respectively), we
calculated the inner torus radius to change from $\sim$18 to $\sim$144 light days.
It is worth noting that several works have found that the innermost torus
radii based on dust reverberation were systematically smaller than the
theoretical prediction of Equation 1 by a factor of a few (see the recent review
van Velzen et al., 2021).
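Equation (1) can be checked numerically with the pre- and post-outburst bolometric luminosities quoted above; the unit-conversion constants are standard, and the default temperature and grain size correspond to $T_{1800}=a_{0.1}=1$.

```python
PC_CM = 3.0857e18                   # cm per parsec
LIGHT_DAY_CM = 2.998e10 * 86400.0   # cm per light day

def r_sub_pc(L_bol, T=1800.0, a=0.1):
    """Inner (sublimation) radius of the dusty torus, Eq. (1).
    L_bol in erg/s, dust temperature T in K, grain size a in micron."""
    L45 = L_bol / 1e45
    return 0.121 * L45**0.5 * (T / 1800.0)**-2.804 * (a / 0.1)**-0.51

for L in (1.6e43, 1e45):  # pre- and post-outburst bolometric luminosity
    r_ld = r_sub_pc(L) * PC_CM / LIGHT_DAY_CM
    print(f"L = {L:.1e} erg/s -> R_sub = {r_ld:.0f} light days")
```

This reproduces the $\sim$18 and $\sim$144 light-day radii quoted in the text.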
The MIR light curve was steeply rising already at +9 days with respect to the
estimated optical outburst (Fig. 2), although the $W1-W2$ color at this early
phase is bluer and the MIR could be consistent with the Rayleigh-Jeans tail of
the UV/optical blackbody (Fig. 5). The MIR light curve is still rising at the
end of our monitoring campaign, albeit slowly (see Fig. 2) suggesting that the
time delay with respect to the optical must be $\gtrsim 1500$ rest-frame days,
implying an outer radius of the dust medium of $R\gtrsim 4\times 10^{18}$ cm,
or $\gtrsim 1.3$ pc.
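The outer-radius estimate above is a simple light-travel conversion of the $\gtrsim 1500$ rest-frame day delay:

```python
C_CM_S = 2.998e10   # speed of light in cm/s
PC_CM = 3.0857e18   # cm per parsec

lag_days = 1500.0                   # minimum rest-frame MIR delay
R_cm = C_CM_S * lag_days * 86400.0  # light-travel distance
print(f"R > {R_cm:.1e} cm = {R_cm / PC_CM:.2f} pc")  # -> R > 3.9e+18 cm = 1.26 pc
```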
Some authors have used the duration of the MIR echo as a discriminator between
the CLAGN and TDE scenario (see e.g., Kool et al., 2020), suggesting that
CLAGN MIR echoes evolve on much longer timescales of several years (see the
work by Sheng et al., 2017, 2020), compared to TDE echoes that evolve on
timescales of months. In more recent works that gathered even more TDE and
CLAGN candidates, this distinction is no longer clear, because there are
CLAGN candidates with shorter dust echoes and, vice versa, TDE candidates
with longer MIR echoes. Jiang et al. (2021) looked at 23 TDE
candidates from the literature and found 11 with NEOWISE emission; the time of
the MIR peak ranges from 0 to 800 days after the optical peak, and their
durations range from tens to more than 1000 days. In another TDE in a luminous
infrared galaxy, a MIR echo lasting for at least 13 years, and still ongoing
today, was observed (Mattila et al., 2018). Lyu et al. (2022) looked at the
WISE data of 13 CLAGN with strong mid-IR variability and found that the mid-IR
variations lag those in the optical band by tens to hundreds of days. We have
also inspected the NEOWISE light curve of AT 2018dyk, which was classified as
a CLAGN by Frederick et al. (2019), despite initially being suspected as a TDE
candidate (Arcavi et al., 2018). The AT 2018dyk MIR light curve rose more than
a magnitude above the host level and is still declining more than 4 years
after the peak. The nuclear transient Gaia16aax, which could be explained by
either the TDE scenario or some variation in the accretion rate of the active
SMBH, showed a 140-day time lag of the NEOWISE light curve compared to the
optical (Cannizzaro et al., 2020). In conclusion, at the moment the duration
of the MIR echo cannot be used to discriminate between the possible physical
causes of the sudden optical flare.
Another proposed discriminator between TDEs and CLAGN is the covering factor
of the dust, which can be determined from the ratio of the total energy
radiated in the MIR to the energy absorbed by the dust (Gezari, 2021). This
has been justified by the work of Jiang et al. (2021), who found values on the
order of $1\%$ for TDEs in quiescent galaxies (with the exception of the
superluminous TDE ASASSN-15lh), while the characteristic values of CLAGN
covering factors are two orders of magnitude higher. The dust covering factor
of PS1-10adi is $\sim 40$% (Jiang et al., 2019), while for PS16dtm it is
estimated to be $>10\%$ (J17, Jiang et al., 2021), as the MIR peak has not
been reached yet. We note that, since it is possible that TDEs in AGN have
covering factors similar to those of CLAGN, the dust covering factor is not a
discriminator between the TDE and CLAGN scenarios, but it can perhaps indicate
the presence of a preexisting dusty torus in cases where an AGN was not
previously known.
In the case of PS16dtm we measure a time lag between the peak of the broad Fe
II component and the continuum light curves (Fig. 10). The nominal value of
this lag is $\sim$300 days, using the nominal peak of the Fe II ($\sim$400
days) and of the continuum light curves (70 days for the first peak), although
this is uncertain and could be smaller, as the timescales between 200 and 400
days were not well probed by our spectroscopy. This could be consistent with
the inner radius of the dust torus after PS16dtm sublimated the dust closest
to the black hole, and with the idea proposed by He et al. (2021) for
PS1-10adi that Fe II emission is related to the torus inner radius and to Fe
that was initially locked in dust, which was evaporated by the TDE. Given the
large uncertainties in the timescales, it is not possible to directly confirm
their hypothesis. We therefore investigated TDEs with reported Fe II emission
(see Fig. 7) and found that nearly all of them have strong associated MIR
flares (Fig. 18), with the exception of AT 2018fyk, which is also the one with
the weakest Fe II. This seems to strengthen the potential relation between Fe
II and dust, which needs to be investigated further in the future. We note,
however, that the Fe II and the MIR emission in PS16dtm evolve on very
different timescales (dropping after 400 days and increasing steadily for 1500
days, respectively), so it is not obvious how to establish a straightforward
relation between them.
We finally note that a time lag is also seen between the continuum and
H$\alpha$ (see Fig. 10), similar to what has been observed for other TDEs, but
a factor of 10 longer (Charalampopoulos et al., 2022). This time lag is more
evident than for Fe II and might or might not be of similar value, with both
light curves peaking $\sim 400$ days after the outburst.
Figure 18: Host-subtracted MIR light curves of TDEs from the literature where
Fe II has been identified: PS1-10adi (Kankare et al., 2017; Jiang et al.,
2019; He et al., 2021), AT 2018fyk (Wevers et al., 2019), and AT 2019dsg
(Cannizzaro et al., 2021).
### 5.2 AGN extreme variability and the TDE scenario
The nuclear outburst PS16dtm may shed light on the important question of what
triggers or maintains AGN activity, especially in the cases where extreme
variability has been observed. In fact, TDEs are one of the suggested
processes that can fuel gas to the SMBH (Hills, 1975; Frank & Rees, 1976), and
this could work for the low-luminosity end of AGNs, where the average
accretion rate from tidal disruptions is enough to account for the luminosity
(see also Combes, 2001, for a review on this subject). After the publication
of B17, $\sim 200$ CLAGN have been identified, typically with the detection of
the event after it has already occurred, so detailed studies have not been
possible (Yang et al.,
2018; Frederick et al., 2019; MacLeod et al., 2019; Graham et al., 2020;
Sánchez-Sáez et al., 2021; López-Navas et al., 2022; Green et al., 2022; Hon
et al., 2022). The classification into a CLAGN is based on the extreme changes
in magnitude or emission line intensities, with a disappearance or appearance
of the broad component in emission lines (see e.g., Lyutyj et al., 1984;
Kollatschny & Fricke, 1985; Oknyansky et al., 2019; Ilić et al., 2020). For
example, MacLeod et al. (2019) searched for $|\Delta g|>1$ mag, $|\Delta
r|>0.5$ mag variability over any of the available time baselines probed by the
SDSS and Pan-STARRS1 surveys. The timescales of the variability of these CLAGN
cannot be pinpointed precisely, since they do not have good coverage, but
typical timescales range from roughly a year to 13 years in the rest frame.
There is still a vivid debate about what could cause the extreme variability
in AGN, with many authors leaning toward a change in the accretion rate as
the most plausible scenario (Noda & Done, 2018; Sniegowska et al., 2020),
although it is difficult to explain the cases with a rapid (less than a year)
rise in the light curve, as the timescales that govern the dynamics of an
accretion disk are much longer in the classical theoretical framework of
accretion disk physics (Cannizzaro et al., 2020). Other possible explanations
are the obscuration of the broad-line region (e.g., Elitzur, 2012), though
questioned by many works (e.g., Stern et al., 2018; Mehdipour et al., 2022),
close binary super-massive black holes (Wang & Bon, 2020), and supernova
explosions (Campana et al., 2015). A growing number of studies suggest that
some CLAGN are due to a change in the accretion state of the SMBH triggered by
a TDE (Komossa et al., 2017; Trakhtenbrot et al., 2019; Zhang, 2021). In fact,
the observational signatures of CLAGN can be similar to those of TDEs that
happen in galaxies with AGN, in the sense that there is a transition phase
that can be accompanied by a drastic change in the AGN continuum flux, so the
optical and MIR become brighter when the CLAGN turns on.
PS16dtm could have been classified as an extreme CLAGN with the change of
brightness of several magnitudes (see Figs. 1 and 2) according to the criteria
in MacLeod et al. (2019), but it would have been clearly distinguished from an
ordinary AGN variability (e.g., Vanden Berk et al., 2004). Given that PS16dtm
was discovered with an all-sky survey, which is biased toward the live
selection of the most variable objects, it is natural that PS16dtm may not be
representative of a large population. However, the rise of the PS16dtm
luminosity on such short timescales ($<50$ days) makes it difficult to explain
in terms of an intrinsic change to the AGN (e.g., changes in the accretion
disk structure; see e.g., Stern et al., 2018; Cannizzaro et al., 2020). From
the spectroscopic point of view, the boost in the intensities of the broad
emission lines, as well as in the width of the broad lines, may also have led
to a CLAGN classification. The most striking change is the boost of the Fe II
emission, especially the appearance of the broad Fe II emission, which
completely disappears in time (or gets so weak that it blends with the
underlying continuum emission).
Interestingly, the nuclear transient SDSS J123359.12+084211.5 in a NLSy1
galaxy, reported by MacLeod et al. (2019) and classified as a CLAGN, also
shows a remarkable change in the Fe II emission. However, that event is a
noticeable outlier in the CLAGN sample, especially based on its large
Eddington ratio compared to the other CLAGN, which are typically seen in lower
Eddington ratio AGN (MacLeod et al., 2019). Although the data on SDSS
J123359.12+084211.5 are very limited, it is quite possible that PS16dtm and
SDSS J123359.12+084211.5 are powered by the same physical mechanism. We
conclude that PS16dtm, without the extensive follow-up campaign shown here,
might easily have been classified as a CLAGN, especially if the rising part of
the light curve had been missed. This implies that some CLAGN, such as SDSS
J123359.12+084211.5, may actually be powered by a TDE.
## 6 Conclusions
Here, we studied PS16dtm, a nuclear outburst in a NLSy1 galaxy, with
photometric and spectroscopic data that span up to six years after its
discovery. These are the main conclusions from these observations:
* •
The NUV/optical light curve (see Fig. 1), after an initial rise of $\sim 50$
days, displays another peak after a further $\sim 50$ days. After a plateau
phase, it started to decay slowly. In the first $\sim 270$ days, the NUV light
curve follows a $t^{-5/7}$ decay, and afterward a shallower $t^{-1/6}$
decline. The NUV/optical photometry shows very little color evolution. A
single blackbody model is not a good fit to the NUV/optical data. We also
attempted to correct for extinction with an SMC-like extinction law, which
some authors have suggested can be applied to the line of sight toward AGN,
but this did not yield satisfactory results.
* •
The MIR light curve, which precedes the first ATLAS optical detection by
$\sim 4$ days, rose steeply in the first $\sim 500$ days and is still slowly
rising $\sim 1700$ days after the first outburst, but the maximum value has
not been reached yet. The MIR color in the first epochs is quite different
from that in the later epochs. A blackbody fit to the MIR data indicates that
the temperature during the rising part of the light curve was $\sim 2300$ K;
it then dropped to $\sim 1200$ K and settled at $\sim 900$ K.
* •
Our spectra, taken between +155 and +1868 days past the outburst, show the
following main properties. The blue continuum dropped over time; the spectral
evolution over the years proceeded at a very slow rate and almost returned to
the preoutburst level in our last spectrum. The most striking spectroscopic
features, besides the broad Balmer lines, are the Fe II emission lines. We
highlight that this broad Fe II emission is transient, in the sense that it
was not present in the archival preoutburst spectrum and had almost completely
disappeared in our last spectrum at +1868 days after the outburst.
We made a spectroscopic comparison with other TDEs with reported broad Fe II
emission in the optical, and conclude that PS16dtm is the strongest iron
emitter so far. We have also investigated the NEOWISE light curves of these
TDE candidates and found that three out of four of them have also shown
exceptional MIR flaring.
* •
Thanks to the semi-empirical model that we developed here, we were able to
identify and study the evolution of the emission lines. The fluxes of the
broad H$\alpha$, H$\beta$, broad Fe II, and narrow Fe II lines rose in the
first $\sim 200$ days after the outburst and then dropped after $\sim 400$
days. We also found that the majority of the flux of the broad Balmer and Fe
II lines is consistent with being produced by photoionization, at least
starting 200 days past the outburst.
* •
We have also studied the MIR light curve and the light curve of the broad Fe
II emission lines in order to explore the surrounding region. Given the
quasi-simultaneous rise of the MIR, optical, and Fe II emission, it is
tempting to assume that the regions from which they originate are nearby, and
that even the Fe II emitting region might lie at the inner radius of the AGN
torus. On one hand, the broad H$\alpha$ and Fe II emission lines reach their
peaks $\sim 350$ days after the optical, even though this is uncertain given
the gaps in the data. On the other hand, we estimated the sublimation radius
of the host AGN to be $\sim 18$ light days. In addition, after the rise, the
MIR, optical, and Fe II light curves evolve on different timescales.
Therefore, it was not possible to establish a direct link between the dust and
the iron emitting regions.
* •
We found a weak coronal [Fe VII] $\lambda$6087 line in our X-shooter spectrum
at +1868 days. There are possibly other coronal lines, but their detection is
not certain.
* •
B17 predicted that the X-ray emission, which dimmed after the PS16dtm
outburst, will reappear when the TDE accretion rate declines, approximately a
decade after the outburst. We found only weak X-ray emission in the 0.5-8 keV
band at the location of PS16dtm, at +848, +1130, and +1429 days after the
outburst. This is consistent with the levels observed by B17, so this
reappearance has not occurred yet, although the optical photometry and
spectroscopy indicate that the emission is returning to the preoutburst level.
* •
We have reexamined the host galaxy properties and found that the host galaxy,
beyond the AGN emission, also has an important contribution from ongoing star
formation.
We have further discussed whether it is possible to use the duration of the
dust echo or the dust covering factor as an indicator of the CLAGN or TDE
scenario, and concluded that at this point this is not feasible, given the
overlap of the values for the two. We have reopened the question of whether
PS16dtm can fit within the category of what is considered "normal" AGN
variability, and found that the answer is no. However, the extensive PS16dtm
data shown here suggest that perhaps some AGN extreme variability can be
explained as a change in the accretion state of the SMBH triggered by a TDE.
The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST; Ivezić et
al., 2019), will provide a sizeable sample of nuclear outbursts, such as TDEs,
supernovae, and variable AGN, during its 10 years of surveying the sky (see
e.g., Bricman & Gomboc, 2020; Roth et al., 2021). Therefore, a large-scale
systematic study in real time of the newly discovered nuclear transient
candidates with the Vera C. Rubin Observatory will be needed in order to make
progress in understanding CLAGN and TDEs.
###### Acknowledgements.
TP acknowledges the financial support from the Slovenian Research Agency
(grants I0-0033, P1-0031, J1-8136 and Z1-1853). This work was supported with
travel grants by the Royal Swedish Academy of Sciences and the COST Action
CA16104 GWverse. GL, PC and MP were supported by a research grant (19054) from
VILLUM FONDEN. DI acknowledges funding from the grant 451-03-68/2022-14/200104
of the Ministry of Education, Science, and Technological Development of the
Republic of Serbia, and the support of the Alexander von Humboldt Foundation.
FO acknowledges support from MIUR, PRIN 2017 (grant 20179ZF5KS) "The new
frontier of the Multi-Messenger Astrophysics: follow-up of electromagnetic
transient counterparts of gravitational wave sources" and the support of
AHEAD2020 grant agreement n.871158. TEMB acknowledges financial support from
the Spanish Ministerio de Ciencia e Innovación (MCIN), the Agencia Estatal de
Investigación (AEI) 10.13039/501100011033 under the PID2020-115253GA-I00
HOSTFLOWS project, from Centro Superior de Investigaciones Científicas (CSIC)
under the PIE project 20215AT016 and the I-LINK 2021 LINKA20409, and the
program Unidad de Excelencia María de Maeztu CEX2020-001058-M. MF is supported
by a Royal Society - Science Foundation Ireland University Research
Fellowship. MG is supported by the EU Horizon 2020 research and innovation
programme under grant agreement No 101004719. TMR acknowledges the financial
support of the Vilho, Yrjö and Kalle Väisälä Foundation of the Finnish Academy
of Science and Letters. KM is funded by the EU H2020 ERC grant no. 758638. AG
acknowledges the financial support from the Slovenian Research Agency
(research core funding P1-0031, infrastructure program I0-0033, project grants
J1-8136, J1-2460). NI was partially supported by Polish NCN DAINA grant No.
2017/27/L/ST9/03221. GD was supported by the Ministry of Education, Science
and Technological Development of the Republic of Serbia (contract No
451-03-68/2022-14/200002) and by the observing grant support from the
Institute of Astronomy and Rozhen NAO BAS through the bilateral joint research
project "Gaia Celestial Reference Frame (CRF) and fast variable astronomical
objects." ŁW was partially supported from the Polish NCN grants: Harmonia No.
2018/30/M/ST9/00311, Daina No. 2017/27/L/ST9/03221, MNiSW grant DIR/WK/2018/12
and European Commission’s H2020 OPTICON RadioNet Pilot grant No. 101004719.
The Liverpool Telescope is operated on the island of La Palma by Liverpool
John Moores University in the Spanish Observatorio del Roque de los Muchachos
of the Instituto de Astrofisica de Canarias with financial support from the UK
Science and Technology Facilities Council. This research made use of
Astropy (http://www.astropy.org), a community-developed core Python package
for Astronomy (Astropy Collaboration et al. 2013, 2018).
## References
* Ai et al. (2010) Ai, Y. L., Yuan, W., Zhou, H. Y., et al. 2010, ApJ, 716, L31
* Almeyda et al. (2017) Almeyda, T., Robinson, A., Richmond, M., Vazquez, B., & Nikutta, R. 2017, ApJ, 843, 3
* Arcavi et al. (2018) Arcavi, I., Burke, J., French, K. D., et al. 2018, The Astronomer’s Telegram, 11953, 1
* Arnaud (1996) Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17
* Baldwin et al. (1981) Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5
* Bellm et al. (2019) Bellm, E. C., Kulkarni, S. R., Barlow, T., et al. 2019, PASP, 131, 068003
* Bentz et al. (2013) Bentz, M. C., Denney, K. D., Grier, C. J., et al. 2013, ApJ, 767, 149
* Berton et al. (2015) Berton, M., Foschini, L., Ciroi, S., et al. 2015, A&A, 578, A28
* Blagorodnova et al. (2019) Blagorodnova, N., Cenko, S. B., Kulkarni, S. R., et al. 2019, ApJ, 873, 92
* Blanchard et al. (2017) Blanchard, P. K., Nicholl, M., Berger, E., et al. 2017, ApJ, 843, 106
* Boroson & Green (1992) Boroson, T. A. & Green, R. F. 1992, ApJS, 80, 109
* Bricman & Gomboc (2020) Bricman, K. & Gomboc, A. 2020, ApJ, 890, 73
* Calzetti (2001) Calzetti, D. 2001, PASP, 113, 1449
* Campana et al. (2015) Campana, S., Mainetti, D., Colpi, M., et al. 2015, A&A, 581, A17
* Cannizzaro et al. (2020) Cannizzaro, G., Fraser, M., Jonker, P. G., et al. 2020, MNRAS, 493, 477
* Cannizzaro et al. (2022) Cannizzaro, G., Levan, A. J., van Velzen, S., & Brown, G. 2022, MNRAS, 516, 529
* Cannizzaro et al. (2021) Cannizzaro, G., Wevers, T., Jonker, P. G., et al. 2021, MNRAS, 504, 792
* Chambers et al. (2016a) Chambers, K. C., Huber, M. E., Flewelling, H., et al. 2016a, Transient Name Server Discovery Report, 2016-562, 1
* Chambers et al. (2016b) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016b, arXiv e-prints, arXiv:1612.05560
* Chan et al. (2021) Chan, C.-H., Piran, T., & Krolik, J. H. 2021, ApJ, 914, 107
* Charalampopoulos et al. (2022) Charalampopoulos, P., Leloudas, G., Malesani, D. B., et al. 2022, A&A, 659, A34
* Combes (2001) Combes, F. 2001, in Advanced Lectures on the Starburst-AGN, ed. I. Aretxaga, D. Kunth, & R. Mújica, 223
* Dalla Bontà et al. (2020) Dalla Bontà, E., Peterson, B. M., Bentz, M. C., et al. 2020, ApJ, 903, 112
* Dimitrijević et al. (2007) Dimitrijević, M. S., Popović, L. Č., Kovačević, J., Dačić, M., & Ilić, D. 2007, MNRAS, 374, 1181
* Dong et al. (2016) Dong, S., Chen, P., Bose, S., et al. 2016, The Astronomer’s Telegram, 9843, 1
* Dong et al. (2011) Dong, X.-B., Wang, J.-G., Ho, L. C., et al. 2011, ApJ, 736, 86
* Drake et al. (2011) Drake, A. J., Djorgovski, S. G., Mahabal, A., et al. 2011, ApJ, 735, 106
* Elitzur (2012) Elitzur, M. 2012, ApJ, 747, L33
* Fausnaugh (2017) Fausnaugh, M. M. 2017, PASP, 129, 024007
* Fitch et al. (1967) Fitch, W. S., Pacholczyk, A. G., & Weymann, R. J. 1967, ApJ, 150, L67
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
* Frank & Rees (1976) Frank, J. & Rees, M. J. 1976, MNRAS, 176, 633
* Frederick et al. (2019) Frederick, S., Gezari, S., Graham, M. J., et al. 2019, ApJ, 883, 31
* Frederick et al. (2021) Frederick, S., Gezari, S., Graham, M. J., et al. 2021, ApJ, 920, 56
* Freudling et al. (2013) Freudling, W., Romaniello, M., Bramich, D. M., et al. 2013, A&A, 559, A96
* Fruscione et al. (2006) Fruscione, A., McDowell, J. C., Allen, G. E., et al. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6270, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. D. R. Silva & R. E. Doxsey, 62701V
* Gafton & Rosswog (2019) Gafton, E. & Rosswog, S. 2019, MNRAS, 487, 4790
* Gaskell (2009) Gaskell, C. M. 2009, New A Rev., 53, 140
* Gehrels et al. (2004) Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005
* Gelbord et al. (2009) Gelbord, J. M., Mullaney, J. R., & Ward, M. J. 2009, MNRAS, 397, 172
* Gezari (2021) Gezari, S. 2021, ARA&A, 59 [arXiv:2104.14580]
# Representations of Domains via CF-approximation Spaces
Guojun Wu, Luoshan Xu
College of Mathematical Science, Yangzhou University, Yangzhou 225002, P. R. China
###### Abstract
Representations of domains mean, in a general way, representing a domain as a
suitable family of mathematical structures endowed with the set-inclusion
order. In this paper, representations of domains via CF-approximation
spaces are considered. Concepts of CF-approximation spaces and CF-closed sets
are introduced. It is proved that the family of CF-closed sets in a CF-
approximation space endowed with set-inclusion order is a continuous domain
and that every continuous domain is isomorphic to the family of CF-closed sets
of some CF-approximation space endowed with set-inclusion order. The concept
of CF-approximable relations is introduced using a categorical approach, which
later facilitates the proof that the category of CF-approximation spaces and
CF-approximable relations is equivalent to that of continuous domains and
Scott continuous maps.
###### keywords:
CF-approximation space; CF-closed set; CF-approximable relation; continuous
domain; abstract base
††journal: Electronic Notes in Theoretical Informatics and Computer
Science††thanks: Supported by National Natural Science Foundation of China
(11671008).
## 1 Introduction
Domain theory is one of the important research fields of theoretical computer
science [2]. In recent years of research in domain theory, there is a growing
body of scholarly work towards synthesizing various mathematical fields such
as ordered structures, topological spaces, formal contexts, rough sets, and
various kinds of logic. One such synthesis is to create representations of
various kinds of domains using abstract bases [8, 12], formal topologies [13],
information systems [10, 12], formal contexts [4]-[11], and so on. Amongst
these, representation via abstract bases appears to be most natural due to its
simplicity.
By representation of domains, we mean any general way by which one can
characterize a domain using a suitable family of some mathematical structures
ordered by the set-theoretic inclusion. With this understanding, clearly,
every continuous domain can be represented by c-infs [10], abstract bases,
formal topologies [13], etc. Recently, Qingguo Li et al. [11] introduced
attribute continuous formal contexts which are quadruples, and showed that
every continuous domain can be represented by attribute continuous formal
contexts.
While representation via abstract bases appears to be the most natural and
simple approach, its scope is rather narrow, and deeper structure is easily
missed. Noticing from rough set theory [7] that abstract bases are all special
generalized approximation spaces (GA-spaces, for short) [14], we generalize an
abstract base to a CF-approximation space, which is a GA-space equipped with a
consistent family of finite sets. Since the
lower approximation operator $\underline{R}$ and the upper approximation
operator $\overline{R}$ are mutually dual in a CF-approximation space, we
mainly use the upper approximation operator $\overline{R}$ and introduce CF-
closed sets which are generalizations of round ideals in abstract bases. With
these concepts, representations of domains via CF-approximation spaces are
obtained. We will see that this approach of representing domains is more
general than the approach of representing domains by abstract bases. We also
introduce the concept of CF-approximable relations using a categorical
approach and prove that the category of CF-approximation spaces and CF-
approximable relations is equivalent to that of continuous domains and Scott
continuous maps. This work makes links naturally between domains and rough
sets.
## 2 Preliminaries
We quickly recall some basic notions and results of domain theory. For a set
$U$ and $X\subseteq U$, we use $\mathcal{P}(U)$ to denote the power set of
$U$, $\mathcal{P}_{fin}(U)$ to denote the family of all nonempty finite
subsets of $U$ and $X^{c}$ to denote the complement of $X$ in $U$. The symbol
$F\subseteq_{fin}X$ means $F$ is a finite subset of $X$. For notions which we
do not explicitly define herein, the reader may refer to [2, 3].
Let ($L$, $\leqslant$) be a poset. A _principal ideal_ (resp., _principal
filter_) of $L$ is a set of the form ${\mathord{\downarrow}x}=\\{y\in L\mid
y\leqslant x\\}$ (resp., ${\uparrow\\!x}=\\{y\in L\mid x\leqslant y\\}$). For
$A\subseteq L$, we write $\mathord{\downarrow}A=\\{y\in L\mid\exists\ x\in
A,\,y\leqslant x\\}$ and $\uparrow\\!A=\\{y\in L\mid\exists\ x\in
A,\,x\leqslant y\\}$. A subset $A$ is a _lower set_ (resp., an _upper set_) if
$A=\mathord{\downarrow}A$ (resp., $A=\uparrow\\!A$). We say that $z$ is a
_lower bound_ (resp., an _upper bound_) of $A$ if $A\subseteq\uparrow\\!z$
(resp., $A\subseteq\downarrow\\!z$). The supremum of $A$ is the least upper
bound of $A$, denoted by $\bigvee A$ or $\sup A$. The infimum of $A$ is the
greatest lower bound of $A$, denoted by $\bigwedge A$ or $\inf A$. A nonempty
subset $D$ of $L$ is _directed_ if every finite subset of $D$ has an upper
bound in $D$. A subset $C$ of $L$ is _consistent_ if $C$ has an upper bound in
$L$. A poset $L$ is a _directed complete partially ordered set_ (dcpo, for
short) if every directed subset of $L$ has a supremum. A _semilattice_ (resp.,
sup-semilattice) is a poset in which every pair of elements has an infimum
(resp., a supremum). A _complete lattice_ is a poset in which every subset has
a supremum (equivalently, has an infimum). If any finite consistent subset $A$
of $L$ has a supremum, then $L$ is called a cusl. If any consistent subset $B$
of $L$ has a supremum, then $L$ is called a bc-poset.
Let $L$ be a semilattice and $K\subseteq L$. If for all $x,y\in K$ it holds
that $x\wedge y\in K$, then $K$ is called a subsemilattice of $L$.
Recall that in a poset $P$, we say that $x$ is _way-below_ $y$, written $x\ll y$,
if whenever $D$ is a directed set that has a supremum with $\sup D\geqslant
y$, then $x\leqslant d$ for some $d\in D$. If $x\ll x$, then $x$ is called a
compact element of $P$, the set $\\{x\in P\mid x\ll x\\}$ is denoted by
$K(P)$. The set $\\{y\in P\mid x\ll y\\}$ will be denoted by
$\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{1.72218pt}{$\uparrow$}}$\uparrow$}}x$
and $\\{y\in P\mid y\ll x\\}$ denoted by
$\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}x$.
A poset $P$ is said to be _continuous_ (resp., algebraic) if for all $x\in P$,
$\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}x$
is directed (resp., ${\downarrow}x\cap K(P)$ is directed) and
$x=\bigvee\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}x$
(resp., $x=\bigvee({\downarrow}x\cap K(P))$). If a dcpo $P$ is continuous
(resp., algebraic), then $P$ is called a continuous domain (resp., an
algebraic domain). If a continuous domain $P$ is a semilattice (resp., sup-
semilattice, complete lattice), then $P$ is called a continuous semilattice
(resp., continuous sup-semilattice, continuous lattice). If a bc-poset $P$ is
also a continuous domain, then $P$ is called a bc-domain. If an algebraic
domain $L$ is a semilattice and $K(L)$ is a subsemilattice of $L$, then $L$ is
called an arithmetic semilattice.
Let $L$ and $P$ be dcpos, and $f:L\longrightarrow P$ a map. If for any
directed subset $D\subseteq L$, $f(\bigvee D)=\bigvee f(D)$, then $f$ is
called a Scott continuous map.
###### Lemma 2.1
([2]) Let $P$ be a poset. Then for all $x,y,u,z\in P$,
(1) $x\ll y\Rightarrow x\leqslant y$;(2) $u\leqslant x\ll y\leqslant
z\Rightarrow u\ll z$.
###### Lemma 2.2
If $P$ is a continuous poset, then the way-below relation $\ll$ has the
interpolation property:
$x\ll z\Rightarrow\exists y\in P$ such that $x\ll y\ll z$.
###### Definition 2.3
Let $P$ be a poset, $B\\!\subseteq\\!P$. The set $B$ is called a basis for $P$
if for all $a\in\\!P$, there is a directed set
$D_{a}\subseteq\\!B\cap\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}a$
such that $\sup_{P}D_{a}=\\!a$, where the subscripted $P$ indicates that the
operation (in this case, the supremum) is taken in poset $P$.
It is well known that a poset $P$ is continuous iff it has a basis and that
$P$ is algebraic iff $K(P)$ is a basis.
A binary relation $R\subseteq U\times U$ on a set $U$ is called _transitive_
if $xRy$ and $yRz$ implies $xRz$ for all $x,y,z\in U$. A binary relation $R$
is said to be a preorder if it is reflexive and transitive.
###### Definition 2.4
(see [2, 3]) Let $(U,\prec)$ be a set equipped with a binary relation. The
binary relation $\prec$ is called fully transitive if it is transitive and
satisfies the strong interpolation property:
$\forall|F|<\infty,F\prec z\Rightarrow\exists y\prec z$ such that $F\prec y$,
where $F\prec y$ means for all $t\in F$, $t\prec y$. If $(B,\prec)$ is a set
equipped with a binary relation which is fully transitive, then $(B,\prec)$ is
called an abstract basis.
###### Definition 2.5
([2, 3]) Let $(B,\prec)$ be an abstract basis. A non-empty subset $I$ of $B$
is a round ideal if
$(1)$ $\forall y\in I$, $x\prec y\Rightarrow x\in I$;
$(2)$ $\forall x$, $y\in I$, $\exists z\in I$ such that $x\prec z$ and $y\prec
z$.
The family of all round ideals of $B$, ordered by set inclusion, is called the
round ideal completion of $B$, denoted by $RI(B)$.
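For a finite abstract basis the round ideals can be enumerated by brute force. The following sketch is our own toy example, not taken from the paper: it uses $B=\{1,2,3\}$ with $\prec$ the usual order $\leqslant$, which is reflexive and transitive and hence trivially fully transitive.

```python
from itertools import chain, combinations

# Round ideals of a small abstract basis (Definition 2.5).
# B = {1, 2, 3} with prec = <=, a toy example for illustration.
B = [1, 2, 3]
prec = lambda x, y: x <= y

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def round_ideal(I):
    I = set(I)
    if not I:
        return False
    # condition (1): downward closed under prec
    lower = all(x in I for y in I for x in B if prec(x, y))
    # condition (2): any two members have a common prec-upper bound in I
    directed = all(any(prec(x, z) and prec(y, z) for z in I)
                   for x in I for y in I)
    return lower and directed

RI = [set(I) for I in subsets(B) if round_ideal(I)]
```

The three round ideals $\{1\}$, $\{1,2\}$, $\{1,2,3\}$, ordered by inclusion, form a chain isomorphic to $(B,\leqslant)$ itself.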
Observe that if $B$ is a basis for a continuous domain $P$, then $(B,\,\ll)$,
the restriction of the way-below relation to $B$, is an abstract basis. It
is known (see [10]) that $P$ in this case is isomorphic to $RI(B)$.
###### Proposition 2.6
If $P$ is a continuous domain, then $(P,\ll)$ is an abstract basis and
$RI(P,\ll)\cong(P,\leqslant)$.
Next, we introduce some terminology imported from rough set theory. A set
$U$ with a binary relation $R$ is called a _generalized approximation space_
(_GA-space_ , for short). Let $(U,R)$ be a GA-space. Define
$R_{s},R_{p}:U\rightarrow\mathcal{P}(U)$ such that for all $x\in U$,
$R_{s}(x)=\\{y\in U\mid xRy\\},R_{p}(x)=\\{y\in U\mid yRx\\}.$
Lower and upper approximation operators are key notions in GA-spaces.
###### Definition 2.7
(cf. [17]) Let $(U,R)$ be a GA-space. For $A\subseteq U$, define
$\underline{R}(A)=\\{x\in U\mid\ R_{s}(x)\subseteq A\\}$,
$\overline{R}(A)=\\{x\in U\mid\ R_{s}(x)\cap A\neq\emptyset\\}.$
The operators
$\underline{R},\overline{R}:\mathcal{P}(U)\rightarrow\mathcal{P}(U)$ are
respectively called the lower and upper approximation operators in $(U,R)$,
where $\mathcal{P}(U)$ is the power set of $U$.
###### Lemma 2.8
(cf. [9]) Let $(U,R)$ be a GA-space. Then the lower and upper approximation
operators $\underline{R}$ and $\overline{R}$ have the following properties.
(1) $\underline{R}(A^{c})=(\overline{R}(A))^{c}$,
$\overline{R}(A^{c})=(\underline{R}(A))^{c}$, where $A^{c}$ is the complement
of $A\subseteq U$.
(2) $\underline{R}(U)=U$, $\overline{R}(\emptyset)=\emptyset$.
(3) Let $\\{A_{i}\mid i\in I\\}\subseteq\mathcal{P}(U)$. Then
$\underline{R}(\bigcap_{i\in I}A_{i})=\bigcap_{i\in I}\underline{R}(A_{i}),\
\overline{R}(\bigcup_{i\in I}A_{i})=\bigcup_{i\in I}\overline{R}(A_{i}).$
(4) If $A\subseteq B\subseteq U$, then
$\underline{R}(A)\subseteq\underline{R}(B),\overline{R}(A)\subseteq\overline{R}(B)$.
(5) For all $x\in U,\ \overline{R}(\\{x\\})=R_{p}(x)$.
###### Lemma 2.9
[16] Let $(U,R)$ be a GA-space. Then $R$ is reflexive iff for all $X\subseteq
U$, $X\subseteq\overline{R}(X)$.
###### Lemma 2.10
[16] Let $(U,R)$ be a GA-space. Then $R$ is transitive iff for all $X\subseteq
U$, $\overline{R}(\overline{R}(X))\subseteq\overline{R}(X)$.
###### Definition 2.11
([15]) Let $(U,R)$ be a GA-space and $A\subseteq U.$ The set $A$ is called
$R$-open if $A\subseteq\underline{R}(A)$ and $R$-closed if
$\overline{R}(A)\subseteq A$.
For a preorder $R$, the operator $\underline{R}$ is an interior operator of a
topology, so we have
###### Definition 2.12
([15]) If $R$ is a preorder, then GA-space $(U,R)$ is called a topological GA-
space.
The next proposition shows that all $R$-open sets of $(U,R)$ form a topology
on $U$.
###### Proposition 2.13
Let $(U,R)$ be a GA-space. Then $\tau_{R}=\\{A\subseteq U\mid
A\subseteq\underline{R}(A)\\}$ is an Alexandrov topology on $U$.
Proof. Obviously, $\emptyset,U\in\tau_{R}$. By Lemma 2.8(3), we have
$\tau_{R}$ is closed under arbitrary intersections. Let $A_{i}\in\tau_{R}$,
namely, $A_{i}\subseteq\underline{R}(A_{i})$ $(i\in\\!I)$. Then
$\bigcup_{i\in\\!I}A_{i}\subseteq\bigcup_{i\in\\!I}(\underline{R}(A_{i}))$. It
follows from $\underline{R}(A_{i})\subseteq\underline{R}(\bigcup_{i\in
I}A_{i})$ that
$\bigcup_{i\in\\!I}(\underline{R}(A_{i}))\subseteq\underline{R}(\bigcup_{i\in\\!I}A_{i})$.
Then $\bigcup_{i\in\\!I}A_{i}\subseteq\underline{R}(\bigcup_{i\in\\!I}A_{i})$.
So $\bigcup_{i\in\\!I}A_{i}\in\tau_{R}$, namely, $\tau_{R}$ is closed under
arbitrary unions. This shows that $\tau_{R}$ is an Alexandrov topology. $\Box$
The above topology $\tau_{R}$ is called the _topology induced by the
relation_ $R$. Obviously, the $R$-closed sets of $(U,R)$ are precisely the
closed sets of $\tau_{R}$.
## 3 CF-approximation Spaces and CF-closed Sets
For an abstract base $(B,\prec)$, we naturally have the triple
$(B,\prec,\\{\\{b\\}\mid b\in B\\})$, and the family
$\\{{\downarrow^{\prec}b}\mid b\in B\\}$ is a base of the continuous domain
$RI(B)$, where $\downarrow^{\prec}b=\\{c\in B\mid c\prec b\\}$. We generalize
an abstract base to a GA-space with consistent family of finite subsets (CF-
approximation space, for short) by changing $(B,\prec)$ to a GA-space $(U,R)$
with $R$ being transitive and changing the family $\\{\\{b\\}\mid b\in B\\}$
to a suitable family $\mathcal{F}$ of some finite subsets of $U$. We hope that
the family $\mathcal{F}$ can also induce a base of a continuous domain.
###### Definition 3.1
Let $(U,R)$ be a GA-space, $R$ a transitive relation and
$\mathcal{F}\subseteq\mathcal{P}_{fin}(U)\cup\\{\emptyset\\}$. If for all
$F\in\mathcal{F}$, whenever $K\subseteq_{fin}\overline{R}(F)$, there always
exists $G\in\mathcal{F}$ such that $K\subseteq\overline{R}(G)$ and
$G\subseteq\overline{R}(F)$, then $(U,R,\mathcal{F})$ is called a generalized
approximation space with consistent family of finite subsets, or a CF-
approximation space, for short.
###### Lemma 3.2
Let $(U,R)$ be a GA-space, $A,B\subseteq U$. If $R$ is a transitive relation,
then $\overline{R}(B)\subseteq\overline{R}(A)$ when
$B\subseteq\overline{R}(A)$.
Proof. It follows from Lemmas 2.8(4) and 2.10. $\Box$
###### Definition 3.3
Let $(U,R,\mathcal{F})$ be a CF-approximation space, $E\subseteq U$. If for
all $K\subseteq_{fin}E$, there always exists $F\in\mathcal{F}$ such that
$K\subseteq\overline{R}(F)\subseteq E$ and $F\subseteq E$, then $E$ is called
a CF-closed set of $(U,R,\mathcal{F})$. The collection of all CF-closed sets
of $(U,R,\mathcal{F})$ is denoted by $\mathfrak{C}(U,R,\mathcal{F})$.
###### Remark 3.4
$(1)$ If $\emptyset\in\mathfrak{C}(U,R,\mathcal{F})$, then
$\emptyset\in\mathcal{F}$ by $\overline{R}(\emptyset)=\emptyset$.
$(2)$ For CF-approximation space $(U,R,\mathcal{F})$, if
$\mathcal{F}=\\{\\{x\\}\mid x\in U\\}$, then $(U,R)$ is an abstract base by
Lemma 2.8(5), and all the CF-closed sets of $(U,R,\mathcal{F})$ are precisely
all the round ideals of $(U,R)$.
The following example shows that $(U,R)$ is not necessarily an abstract base
when $(U,R,\mathcal{F})$ is a CF-approximation space, demonstrating that
CF-approximation spaces are a proper generalization of abstract bases.
###### Example 3.5
Let $U=\mathbb{N}$,
$R=\\{(1,1),(1,2),(1,3),(1,4),(2,3),(2,4),(4,3)\\}\cup\\{(i,i)\mid i\geqslant
5\\}$, $\mathcal{F}=\\{\\{1\\},\\{1,2\\},\emptyset\\}\cup\\{\\{i\\}\mid
i\geqslant 5\\}$. It is easy to check that $(U,R,\mathcal{F})$ is a CF-
approximation space and
$\mathfrak{C}(U,R,\mathcal{F})=\\{\emptyset,\\{1\\}\\}\cup\\{\\{i\\}\mid
i\geqslant 5\\}$ is a continuous domain. Notice that $(1,4),(2,4)\in R$, but
there is no $t\in U$ such that $(1,t),(2,t),(t,4)\in R$. So $(U,R)$ is not an
abstract base.
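Since $U=\mathbb{N}$ cannot be enumerated, the following sketch checks Example 3.5 on a finite truncation (an assumption on our part: $U=\{1,\dots,6\}$, with the tail $\{i\mid i\geqslant 5\}$ represented by $5$ and $6$ only). It confirms both the condition of Definition 3.1 and the stated list of CF-closed sets.

```python
from itertools import chain, combinations

# Example 3.5 truncated to U = {1,...,6}; the tail {i | i >= 5}
# of the original example is represented here by 5 and 6 only.
U = list(range(1, 7))
R = {(1, 1), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (4, 3)} | {(i, i) for i in (5, 6)}
F_family = [frozenset(), frozenset({1}), frozenset({1, 2}),
            frozenset({5}), frozenset({6})]

def upper(A):
    return frozenset(x for x in U if {y for (a, y) in R if a == x} & set(A))

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def cf_space_ok():  # condition of Definition 3.1
    return all(any(frozenset(K) <= upper(G) and G <= upper(F) for G in F_family)
               for F in F_family for K in subsets(sorted(upper(F))))

def cf_closed(E):   # Definition 3.3
    E = frozenset(E)
    return all(any(frozenset(K) <= upper(F) <= E and F <= E for F in F_family)
               for K in subsets(sorted(E)))

closed = [frozenset(E) for E in subsets(U) if cf_closed(E)]
```

On this truncation the CF-closed sets come out as exactly $\emptyset$, $\{1\}$, $\{5\}$, $\{6\}$, matching the $\mathfrak{C}(U,R,\mathcal{F})$ stated in the example.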
###### Proposition 3.6
Let $(U,R,\mathcal{F})$ be a CF-approximation space. If
$E\in\mathfrak{C}(U,R,\mathcal{F})$, then $E$ is an $R$-closed set.
Proof. If $x\in\overline{R}(E)$, then $R_{s}(x)\cap E\neq\emptyset$. So there
is $y\in U$ such that $xRy$ and $y\in E$. By Definition 3.3, there exists
$F\in\mathcal{F}$, such that $y\in\overline{R}(F)\subseteq E$ and $F\subseteq
E$. By the transitivity of $R$, we know that
$\overline{R}(\\{y\\})\subseteq\overline{R}(\overline{R}(F))\subseteq\overline{R}(F)\subseteq
E$. Thus $\overline{R}(\\{y\\})\subseteq E$. It is clear that
$x\in\overline{R}(\\{y\\})\subseteq E$ because of $xRy$. By the arbitrariness
of $x\in\overline{R}(E)$, we know that $\overline{R}(E)\subseteq E$. This
shows that $E$ is an $R$-closed set. $\Box$
###### Proposition 3.7
Let $(U,R,\mathcal{F})$ be a CF-approximation space, then the following
statements hold:
$(1)$ For any $F\in\mathcal{F}$,
$\overline{R}(F)\in\mathfrak{C}(U,R,\mathcal{F})$;
$(2)$ If $E\in\mathfrak{C}(U,R,\mathcal{F})$, $A\subseteq E$, then
$\overline{R}(A)\subseteq E$;
$(3)$ If $\\{E_{i}\\}_{i\in I}\subseteq\mathfrak{C}(U,R,\mathcal{F})$ is a
directed family, then $\bigcup_{i\in I}E_{i}\in\mathfrak{C}(U,R,\mathcal{F})$.
Proof. (1) Follows directly by Definition 3.1 and the transitivity of $R$.
(2) It follows from $A\subseteq E$ and Lemma 2.8(4) that
$\overline{R}(A)\subseteq\overline{R}(E)$. By Proposition 3.6, we know that
$\overline{R}(E)\subseteq E$. Thus $\overline{R}(A)\subseteq E$.
(3) Follows directly from Definition 3.3. $\Box$
The proposition above shows that $(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$
is a dcpo. The following proposition gives equivalent characterizations of CF-
closed sets.
###### Proposition 3.8
The following statements are equivalent for a CF-approximation space
$(U,R,\mathcal{F})$:
$(1)$ $E\in\mathfrak{C}(U,R,\mathcal{F})$;
$(2)$ The family $\mathcal{A}=\\{\overline{R}(F)\mid
F\in\mathcal{F},F\subseteq E\\}$ is directed and $E=\bigcup\mathcal{A}$;
$(3)$ There exists a family $\\{F_{i}\\}_{i\in I}\subseteq\mathcal{F}$ such
that $\\{\overline{R}(F_{i})\\}_{i\in I}$ is directed, and $E=\bigcup_{i\in
I}\overline{R}(F_{i})$;
$(4)$ There always exists $F\in\mathcal{F}$ such that
$K\subseteq\overline{R}(F)\subseteq E$ whenever $K\subseteq_{fin}E$.
Proof. If $E=\emptyset\in\mathfrak{C}(U,R,\mathcal{F})$, the proposition holds
obviously. Let $E\neq\emptyset$.
$(1)\Rightarrow(2)$ By Definition 3.3, we know that $\mathcal{A}$ is not
empty. Let $X_{1},X_{2}\in\mathcal{A}$, then there exist
$F_{1},F_{2}\in\mathcal{F}$ and $F_{1},F_{2}\subseteq E$, such that
$X_{1}=\overline{R}({F_{1}})$, $X_{2}=\overline{R}({F_{2}})$. By $F_{1}\cup
F_{2}\subseteq_{fin}E$ and Definition 3.3 we know that there exists
$F_{3}\in\mathcal{F}$, such that $F_{1}\cup F_{2}\subseteq\overline{R}(F_{3})$
and $F_{3}\subseteq E$. By the transitivity of $R$ and Lemma 3.2 we know that
$\overline{R}(F_{1})\subseteq\overline{R}(F_{3})$,
$\overline{R}(F_{2})\subseteq\overline{R}(F_{3})$. This shows that
$\mathcal{A}$ is directed. Next we prove $E=\bigcup\mathcal{A}$. By
Proposition 3.7(2) we know that $\bigcup\mathcal{A}\subseteq E$ holds.
Conversely, if $x\in E$, then by Definition 3.3, there is $F\in\mathcal{F}$
such that $x\in\overline{R}(F)\subseteq E$ and $F\subseteq E$. So
$x\in\bigcup\mathcal{A}$. By the arbitrariness of $x\in E$ we know that
$E\subseteq\bigcup\mathcal{A}$. Thus $E=\bigcup\mathcal{A}$.
$(2)\Rightarrow(3)$ Trivial.
$(3)\Rightarrow(4)$ Follows directly from the finiteness of $K$ and the
directedness of $\\{\overline{R}(F_{i})\\}_{i\in I}$.
$(4)\Rightarrow(1)$ If $K\subseteq_{fin}E$, then there exists
$F\in\mathcal{F}$ such that $K\subseteq\overline{R}(F)\subseteq E$. By
Definition 3.1, there exists $G\in\mathcal{F}$ such that
$K\subseteq\overline{R}(G)$ and $G\subseteq\overline{R}(F)$. By Lemma 3.2 we
know that $\overline{R}(G)\subseteq\overline{R}(F)\subseteq E$. Thus
$K\subseteq\overline{R}(G)\subseteq E$. Noticing that
$G\subseteq\overline{R}(F)\subseteq E$, by Definition 3.3 we obtain that
$E\in\mathfrak{C}(U,R,\mathcal{F})$. $\Box$
The following theorem characterizes the way-below relation $\ll$ in dcpo
$(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$.
###### Theorem 3.9
Let $(U,R,\mathcal{F})$ be a CF-approximation space,
$E_{1},E_{2}\in\mathfrak{C}(U,R,\mathcal{F})$. Then $E_{1}\ll E_{2}$ if and
only if there exists $F\in\mathcal{F}$ such that
$E_{1}\subseteq\overline{R}(F)$ and $F\subseteq E_{2}$.
Proof. $\Rightarrow$: It follows from $E_{2}\in\mathfrak{C}(U,R,\mathcal{F})$
and Proposition 3.8(2) that $E_{2}=\bigcup\\{\overline{R}(F)\mid
F\in\mathcal{F},F\subseteq E_{2}\\}$ and that $\\{\overline{R}(F)\mid
F\in\mathcal{F},F\subseteq E_{2}\\}$ is directed. If $E_{1}\ll E_{2}$, then
there exists $F\in\mathcal{F}$ such that $F\subseteq E_{2}$,
$E_{1}\subseteq\overline{R}(F)$.
$\Leftarrow$: For any directed family $\\{C_{i}\\}_{i\in
I}\subseteq\mathfrak{C}(U,R,\mathcal{F})$, if $E_{2}\subseteq\bigvee_{i\in
I}C_{i}=\bigcup_{i\in I}C_{i}$, then by $F\subseteq E_{2}$ and the finiteness
of $F$ we know that there exists $i_{0}\in I$ such that $F\subseteq
C_{i_{0}}$. By Proposition 3.7(2) and $E_{1}\subseteq\overline{R}(F)$, we know
that $E_{1}\subseteq\overline{R}(F)\subseteq C_{i_{0}}$, showing that
$E_{1}\ll E_{2}$. $\Box$
###### Corollary 3.10
Let $(U,R,\mathcal{F})$ be a CF-approximation space,
$E\in\mathfrak{C}(U,R,\mathcal{F})$, $F\in\mathcal{F}$. The following
statements hold:
$(1)$ If $F\subseteq E$, then $\overline{R}(F)\ll E$;
$(2)$ $\overline{R}(F)\ll\overline{R}(F)$ if and only if there exists
$G\in\mathcal{F}$, such that $G\subseteq\overline{R}(G)=\overline{R}(F)$.
Proof. Follows directly from Theorem 3.9. $\Box$
###### Theorem 3.11
Let $(U,R,\mathcal{F})$ be a CF-approximation space. Then
$(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$ is a continuous domain.
Proof. By Proposition 3.7 we see that
$(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$ is a dcpo. Set
$\mathcal{B}=\\{\overline{R}(F)\mid F\in\mathcal{F}\\}$. Then $\mathcal{B}$ is
a base of $(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$ by Proposition 3.8(2)
and Corollary 3.10(1). Thus $(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$ is a
continuous domain. $\Box$
The following theorem shows that any continuous domain $(L,\leqslant)$ can
induce a CF-approximation space.
###### Theorem 3.12
Let $(L,\leqslant)$ be a continuous domain, $R_{L}$ the way-below relation
“$\ll$” of $(L,\leqslant)$; $\mathcal{F}_{L}=\\{F\subseteq_{fin}L\mid F\
\mbox{has a top element}\\}$. For any $F\in\mathcal{F}_{L}$, let $c_{F}$ be
the top element of $F$. Then $(L,R_{L},\mathcal{F}_{L})$ is a CF-approximation
space.
Proof. By Lemma 2.1, we know that $R_{L}=\ll$ is transitive. For any
$F\in\mathcal{F}_{L}$, by Lemma 2.8(5), we have that
$\overline{R}_{L}(F)=\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}c_{F}$.
For
$K\subseteq_{fin}\overline{R}_{L}(F)=\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}c_{F}$,
since $L$ is a continuous domain, we know that
$\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}c_{F}$
is directed. Then there exists
$x\in\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}c_{F}$
such that $K\subseteq{\downarrow}x$. It follows from $x\ll c_{F}$ and Lemma
2.2 that there is $y\in L$ such that $x\ll y\ll c_{F}$. Thus
$K\subseteq\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}y$.
Set $G=\\{y\\}\in\mathcal{F}_{L}$. By that
$K\subseteq\overline{R}_{L}(G)=\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}y$
and
$G\subseteq\overline{R}_{L}(F)=\mathord{\mbox{\makebox[0.0pt][l]{\raisebox{-1.72218pt}{$\downarrow$}}$\downarrow$}}c_{F}$,
we have that $(L,R_{L},\mathcal{F}_{L})$ is a CF-approximation space. $\Box$
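Theorem 3.12 can be sanity-checked on a finite example: in a finite poset every directed set contains its supremum, so the way-below relation computed from its definition collapses to the order itself, and each $\overline{R}_{L}(F)={\Downarrow}c_{F}$ is a principal down-set. The following Python sketch (the divisor lattice and all helper names are our own illustration, not part of the paper) performs this brute-force check.

```python
from itertools import combinations

# Finite continuous domain: divisors of 12 ordered by divisibility.
U = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    return b % a == 0  # a divides b, i.e. a <= b

def is_directed(D):
    # Nonempty, and every pair in D has an upper bound inside D.
    return bool(D) and all(any(leq(a, d) and leq(b, d) for d in D)
                           for a in D for b in D)

def top(D):
    # A finite directed set has a greatest element, which is its supremum.
    return next(m for m in D if all(leq(d, m) for d in D))

directed_sets = [D for r in range(1, len(U) + 1)
                 for D in map(list, combinations(U, r)) if is_directed(D)]

def way_below(x, y):
    # x << y iff every directed D with y <= sup D meets the up-set of x.
    return all(any(leq(x, d) for d in D)
               for D in directed_sets if leq(y, top(D)))

# In a finite poset the way-below relation coincides with the order.
assert all(way_below(x, y) == leq(x, y) for x in U for y in U)
```

Since $\ll$ here is itself a partial order, this particular example also satisfies the preorder condition required of the topological CF-approximation spaces of Definition 4.3 below.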
###### Theorem 3.13
Let $(L,\leqslant)$ be a continuous domain, $(L,R_{L},\mathcal{F}_{L})$ the
one constructed in Theorem 3.12. Then
$\mathfrak{C}(L,R_{L},\mathcal{F}_{L})=\\{{\Downarrow}x\mid x\in L\\}$.
Proof. By the proof of Theorem 3.12 and Proposition 3.7(1), we know that
$\\{{\Downarrow}x\mid x\in L\\}\subseteq\mathfrak{C}(L,R_{L},\mathcal{F}_{L})$.
Conversely, let $E\in\mathfrak{C}(L,R_{L},\mathcal{F}_{L})$. Then by
Proposition 3.8, there is a directed set $D\subseteq L$ such that
$E=\bigcup\\{{\Downarrow}d\mid d\in D\\}$. Next we prove
$E={\Downarrow}\bigvee D$. Obviously, $E\subseteq{\Downarrow}\bigvee D$.
Conversely, if $x\in{\Downarrow}\bigvee D$, then by Lemma 2.2, there is $y\in
L$ such that $x\ll y\ll\bigvee D$. So there is $d\in D$ such that $x\ll
y\leqslant d$. Thus $x\in{\Downarrow}d\subseteq E$, and $E={\Downarrow}\bigvee
D$. This shows that
$\mathfrak{C}(L,R_{L},\mathcal{F}_{L})\subseteq\\{{\Downarrow}x\mid x\in
L\\}$, and hence $\mathfrak{C}(L,R_{L},\mathcal{F}_{L})=\\{{\Downarrow}x\mid
x\in L\\}$. $\Box$
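On the same kind of finite example, the description in Theorem 3.13 can be verified exhaustively: every $\overline{R}_{L}(F)$ is the principal down-set of the top element $c_{F}$, and conversely every principal down-set arises from a singleton $F$. A small Python check (our own illustrative setup, not the paper's notation):

```python
from itertools import combinations

# Finite continuous domain: divisors of 12 under divisibility.
L = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    return b % a == 0

def Rbar(F):
    # In a finite poset way-below equals the order, so the upper
    # approximation of F is the principal down-set of its top element c_F.
    c = next(m for m in F if all(leq(f, m) for f in F))
    return frozenset(x for x in L if leq(x, c))

# F_L: finite subsets possessing a top element.
calF = [F for r in range(1, len(L) + 1)
        for F in map(list, combinations(L, r))
        if any(all(leq(f, m) for f in F) for m in F)]

down = lambda x: frozenset(y for y in L if leq(y, x))

# The approximations {Rbar(F)} are exactly the principal down-sets,
# mirroring C(L, R_L, F_L) = {Downarrow x | x in L} from Theorem 3.13.
assert {Rbar(F) for F in calF} == {down(x) for x in L}
```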
###### Theorem 3.14
(Representation Theorem) Let $(L,\leqslant)$ be a poset. Then $L$ is a
continuous domain iff there exists a CF-approximation space
$(U,R,\mathcal{F})$ such that
$(L,\leqslant)\cong(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$.
Proof. $\Leftarrow$: This follows directly from Theorem 3.11.
$\Rightarrow$: If $L$ is a continuous domain, then by Theorem 3.12 we know
that $(L,R_{L},\mathcal{F}_{L})$ is a CF-approximation space. Define a map
$f:L\to\mathfrak{C}(L,R_{L},\mathcal{F}_{L})$ by $f(x)={\Downarrow}x$ for all
$x\in L$. Then it follows from Theorem 3.13 and the continuity of $L$ that $f$
is an order isomorphism. $\Box$
## 4 Representations of some special domains
In this section, we add some conditions to CF-approximation spaces, and then
discuss representations of some special types of continuous domains.
###### Lemma 4.1
Let $(L,\leqslant)$ be a continuous domain and $B\subseteq L$ a base. If
$(B,\leqslant)$ is a semilattice (resp., sup-semilattice, poset with bottom
element, poset with top element, sup-semilattice with bottom element, cusl),
then $(L,\leqslant)$ is a continuous semilattice (resp., continuous
sup-semilattice, continuous domain with bottom element, continuous domain with
top element, continuous lattice, bc-domain).
Proof. (1) Let $(B,\leqslant)$ be a semilattice. For any $x,y\in L$, set
$D=\\{a\wedge_{B}b\mid a\in{\Downarrow}x\cap B,b\in{\Downarrow}y\cap B\\}$. It
is easy to show that ${\Downarrow}x\cap B$ and ${\Downarrow}y\cap B$ are
directed. Therefore $D$ is directed and $\bigvee D$ exists. It is clear that
$\bigvee D\leqslant\bigvee({\Downarrow}x\cap B)=x$ and $\bigvee
D\leqslant\bigvee({\Downarrow}y\cap B)=y$. If $z\leqslant x,y$, then for any
$t\in{\Downarrow}z\cap B$, we have $t\ll x=\bigvee({\Downarrow}x\cap B)$ and
$t\ll y=\bigvee({\Downarrow}y\cap B)$. Therefore there exist
$t_{1}\in{\Downarrow}x\cap B$ and $t_{2}\in{\Downarrow}y\cap B$ such that
$t\leqslant t_{1},t_{2}$, whence $t\leqslant t_{1}\wedge_{B}t_{2}$. Since
$t_{1}\wedge_{B}t_{2}\in D$ and $t\in{\Downarrow}z\cap B$ is arbitrary, we
have $z=\bigvee({\Downarrow}z\cap B)\leqslant\bigvee D$. This shows that
$\bigvee D$ is a greatest lower bound of $x$ and $y$; namely, $x\wedge
y=\bigvee D$. Thus $L$ is a continuous semilattice.
(2) Let $(B,\leqslant)$ be a sup-semilattice. For any $x,y\in L$, set
$D=\\{a\vee_{B}b\mid a\in{\Downarrow}x\cap B,b\in{\Downarrow}y\cap B\\}$.
Clearly, $D$ is directed and $\bigvee D$ exists. It is obvious that
$x,y\leqslant\bigvee D$. Let $x,y\leqslant z$. Then for all
$a\in{\Downarrow}x\cap B$ and $b\in{\Downarrow}y\cap B$, we have $a\ll z$ and
$b\ll z$. By the directedness of ${\Downarrow}z\cap B$, there exists
$t\in{\Downarrow}z\cap B$ such that $a,b\leqslant t$. Therefore
$a\vee_{B}b\leqslant t$. Noticing that $a\vee_{B}b\in D$, we have $\bigvee
D\leqslant\bigvee({\Downarrow}z\cap B)=z$. This shows that $\bigvee D$ is a
least upper bound of $x$ and $y$; namely, $x\vee y=\bigvee D$. Thus $L$ is a
continuous sup-semilattice.
(3)/(4) If $\perp$/$\top$ is a bottom/top element of $(B,\leqslant)$, then
$\perp$/$\top$ is also a bottom/top element of $L$.
(5) Let $(B,\leqslant)$ be a sup-semilattice with bottom element $\perp$. Then
by (2) and (3), we know that $L$ is a sup-semilattice with bottom element.
Since $L$ is a dcpo, $L$ is a complete lattice. Thus $L$ is a continuous
lattice.
(6) Let $(B,\leqslant)$ be a cusl. For any $x,y,z\in L$ with $x,y\leqslant z$,
we show that $a\vee_{B}b$ exists for all $a\in{\Downarrow}x\cap B$ and
$b\in{\Downarrow}y\cap B$. Since $x,y\leqslant z$, we have $a\ll z$ and $b\ll
z$. Since $B$ is a base, there exists $c\in{\Downarrow}z\cap B$ such that
$a,b\leqslant c$. As $(B,\leqslant)$ is a cusl, $a\vee_{B}b$ exists. Similar
to the proof of (2), we have that $L$ is a cusl. As $L$ is a continuous
domain, $L$ is a bc-domain. $\Box$
###### Theorem 4.2
Let $(U,R,\mathcal{F})$ be a CF-approximation space. If
$(\\{\overline{R}(F)\mid F\in\mathcal{F}\\},\subseteq)$ is a semilattice
(resp., sup-semilattice, poset with bottom element, poset with top element,
sup-semilattice with bottom element, cusl), then
$\mathfrak{C}(U,R,\mathcal{F})$ is a continuous semilattice (resp., continuous
sup-semilattice, continuous domain with bottom element, continuous domain with
top element, continuous lattice, bc-domain). Conversely, any continuous
semilattice (resp., continuous sup-semilattice, continuous domain with bottom
element, continuous domain with top element, continuous lattice, bc-domain)
$L$ is isomorphic to $(\mathfrak{C}(L,R_{L},\mathcal{F}_{L}),\subseteq)$ for
the corresponding CF-approximation space.
Proof. The first half of the theorem follows directly from Theorem 3.11 and
Lemma 4.1.
For the second half, let $(L,R_{L},\mathcal{F}_{L})$ be the space constructed
in Theorem 3.12. Define a map $f:L\longrightarrow\\{\overline{R}(F)\mid
F\in\mathcal{F}_{L}\\}$ by $f(x)={\Downarrow}x$ for all $x\in L$. Since $L$ is
continuous, $f$ is an order isomorphism. Thus $\\{\overline{R}(F)\mid
F\in\mathcal{F}_{L}\\}$ is a semilattice (resp., sup-semilattice, poset with
bottom element, poset with top element, sup-semilattice with bottom element,
cusl) whenever $L$ is a continuous semilattice (resp., continuous
sup-semilattice, continuous domain with bottom element, continuous domain with
top element, continuous lattice, bc-domain). By Theorem 3.13, $L$ is
isomorphic to $(\mathfrak{C}(L,R_{L},\mathcal{F}_{L}),\subseteq)$. $\Box$
Next, we consider algebraic cases.
###### Definition 4.3
Let $(U,R)$ be a GA-space,
$\mathcal{F}\subseteq\mathcal{P}_{fin}(U)\cup\\{\emptyset\\}$. If $R$ is a
preorder, then $(U,R,\mathcal{F})$ is called a topological CF-approximation
space.
###### Remark 4.4
A topological CF-approximation space must be a CF-approximation space. In
fact, for all $F\in\mathcal{F}$ and $K\subseteq_{fin}\overline{R}(F)$, take
$G=F$; then by Lemma 2.9 we have $K\subseteq\overline{R}(G)$ and
$G\subseteq\overline{R}(F)$. By Definition 3.1, $(U,R,\mathcal{F})$ is a CF-
approximation space.
###### Proposition 4.5
Let $(U,R,\mathcal{F})$ be a topological CF-approximation space,
$E_{1},E_{2}\in\mathfrak{C}(U,R,\mathcal{F})$. Then $E_{1}\ll E_{2}$ iff there
exists $F\in\mathcal{F}$ such that $E_{1}\subseteq\overline{R}(F)\subseteq
E_{2}$. Thus
$K((\mathfrak{C}(U,R,\mathcal{F}),\subseteq))=(\\{\overline{R}(F)\mid
F\in\mathcal{F}\\},\subseteq)$.
Proof. It follows directly from Lemma 2.9 and Theorem 3.9. $\Box$
###### Theorem 4.6
Let $(U,R,\mathcal{F})$ be a topological CF-approximation space. Then
$(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$ is an algebraic domain.
Conversely, any algebraic domain can be represented by some topological CF-
approximation space.
Proof. The first half follows from Proposition 4.5 and Proposition 3.8:
$(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$ is an algebraic domain.
For the second half, let $(L,\leqslant)$ be an algebraic domain. Consider
$(K(L),R_{K(L)},\mathcal{F}_{K(L)})$, where
$\mathcal{F}_{K(L)}=\\{F\subseteq_{fin}K(L)\mid F\ \mbox{has a top
element}\\}$ and $R_{K(L)}=\leqslant$ is a partial order, hence a preorder.
Thus $(K(L),R_{K(L)},\mathcal{F}_{K(L)})$ is a topological CF-approximation
space.
For any $F\in\mathcal{F}_{K(L)}$, let $c_{F}$ be the top element of $F$. By
Lemma 2.8(5), we know that for all $F\in\mathcal{F}_{K(L)}$,
$\overline{R_{K(L)}}(F)={\downarrow}c_{F}\cap K(L)$. Similar to the proof of
Theorem 3.13, we have that
$\mathfrak{C}(K(L),R_{K(L)},\mathcal{F}_{K(L)})=\\{{\downarrow}x\cap K(L)\mid
x\in L\\}.$ Since $L$ is an algebraic domain, we know that
$(\\{{\downarrow}x\cap K(L)\mid x\in L\\},\subseteq)\cong(L,\leqslant)$. The
proof is finished. $\Box$
###### Lemma 4.7
Let $(L,\leqslant)$ be an algebraic domain. If $(K(L),\leqslant)$ is a
semilattice, then $L$ is an arithmetic semilattice.
Proof. For any $x,y\in L$, let $D=\\{a\wedge_{K(L)}b\mid a\in\downarrow x\cap
K(L),b\in\downarrow y\cap K(L)\\}$. By Lemma 4.1, we see that $x\wedge
y=\bigvee D$ and $L$ is a semilattice. Next we prove $(K(L),\leqslant)$ is a
subsemilattice of $L$. If $x,y\in K(L)$, then $x\wedge_{K(L)}y\in D$ and
$x\wedge_{K(L)}y$ is the top element of $D$. So $x\wedge_{K(L)}y=\bigvee
D=x\wedge y$. Hence $x\wedge_{K(L)}y=x\wedge y$. This shows that
$(K(L),\leqslant)$ is a subsemilattice of $L$, and $L$ is an arithmetic
semilattice. $\Box$
###### Theorem 4.8
Let $(U,R,\mathcal{F})$ be a topological CF-approximation space. If the poset
$(\\{\overline{R}(F)\mid F\in\mathcal{F}\\},\subseteq)$ is a semilattice, then
$(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)$ is an arithmetic semilattice.
Conversely, any arithmetic semilattice can be represented in this way.
Proof. The first half of the theorem follows directly from Theorem 4.6 and
Lemma 4.7.
For the second half, let $L$ be an arithmetic semilattice and
$(K(L),R_{K(L)},\mathcal{F}_{K(L)})$ be the one in Theorem 4.6. Therefore
$(K(L),R_{K(L)},\mathcal{F}_{K(L)})$ is a topological CF-approximation space.
For $F_{1},F_{2}\in\mathcal{F}_{K(L)}$, we have
$\overline{R_{K(L)}}(F_{1})={\downarrow}c_{F_{1}}\cap K(L)$ and
$\overline{R_{K(L)}}(F_{2})={\downarrow}c_{F_{2}}\cap K(L)$. Since $L$ is an
arithmetic semilattice, we know that $c_{F_{1}}\wedge c_{F_{2}}=c\in K(L)$. So
$\overline{R_{K(L)}}(F_{1})\cap\overline{R_{K(L)}}(F_{2})={\downarrow}c\cap
K(L)$. Thus there exists $\\{c\\}\in\mathcal{F}_{K(L)}$ such that
$\overline{R_{K(L)}}(F_{1})\wedge\overline{R_{K(L)}}(F_{2})=\overline{R_{K(L)}}(\\{c\\})$,
and hence $\\{\overline{R_{K(L)}}(F)\mid F\in\mathcal{F}_{K(L)}\\}$ is a
semilattice. By Theorem 4.6, we see that the second half of the theorem holds.
$\Box$
## 5 CF-approximable Relations and Equivalence of Categories
In this section, we define and study CF-approximable relations between CF-
approximation spaces and prove that the category of CF-approximation spaces
and CF-approximable relations is equivalent to the category of continuous
domains and Scott continuous maps.
###### Definition 5.1
Let $(U_{1},R_{1},\mathcal{F}_{1})$, $(U_{2},R_{2},\mathcal{F}_{2})$ be CF-
approximation spaces, and
$\mathrel{\Theta}\subseteq\mathcal{F}_{1}\times\mathcal{F}_{2}$ a binary
relation. If
$(1)$ for all $F\in\mathcal{F}_{1}$, there is $G\in\mathcal{F}_{2}$ such that
$F\mathrel{\Theta}G$;
$(2)$ for all $F,F^{\prime}\in\mathcal{F}_{1}$, $G\in\mathcal{F}_{2}$, if
$F\subseteq\overline{R_{1}}(F^{\prime})$, $F\mathrel{\Theta}G$, then
$F^{\prime}\mathrel{\Theta}G$;
$(3)$ for all $F\in\mathcal{F}_{1}$, $G,G^{\prime}\in\mathcal{F}_{2}$, if
$F\mathrel{\Theta}G$, $G^{\prime}\subseteq\overline{R_{2}}(G)$, then
$F\mathrel{\Theta}G^{\prime}$;
$(4)$ for all $F\in\mathcal{F}_{1}$, $G\in\mathcal{F}_{2}$, if
$F\mathrel{\Theta}G$, then there are $F^{\prime}\in\mathcal{F}_{1}$,
$G^{\prime}\in\mathcal{F}_{2}$ such that
$F^{\prime}\subseteq\overline{R_{1}}(F)$,
$G\subseteq\overline{R_{2}}(G^{\prime})$ and
$F^{\prime}\mathrel{\Theta}G^{\prime}$; and
$(5)$ for all $F\in\mathcal{F}_{1}$, $G_{1},G_{2}\in\mathcal{F}_{2}$, if
$F\mathrel{\Theta}G_{1}$ and $F\mathrel{\Theta}G_{2}$, then there is
$G_{3}\in\mathcal{F}_{2}$ such that $G_{1}\cup
G_{2}\subseteq\overline{R_{2}}(G_{3})$ and $F\mathrel{\Theta}G_{3}$,
then $\mathrel{\Theta}$ is called a CF-approximable relation from
$(U_{1},R_{1},\mathcal{F}_{1})$ to $(U_{2},R_{2},\mathcal{F}_{2})$.
###### Proposition 5.2
Let $\mathrel{\Theta}$ be a CF-approximable relation from
$(U_{1},R_{1},\mathcal{F}_{1})$ to $(U_{2},R_{2},\mathcal{F}_{2})$. Then for
all $F\in\mathcal{F}_{1},G\in\mathcal{F}_{2}$, the following statements are
equivalent:
$(1)$ $F\mathrel{\Theta}G$;
$(2)$ There exists $F^{\prime}\in\mathcal{F}_{1}$ such that
$F^{\prime}\subseteq\overline{R_{1}}(F)$ and $F^{\prime}\mathrel{\Theta}G$;
$(3)$ There exists $G^{\prime}\in\mathcal{F}_{2}$ such that
$F\mathrel{\Theta}G^{\prime}$ and $G\subseteq\overline{R_{2}}(G^{\prime})$;
$(4)$ There exist $F^{\prime}\in\mathcal{F}_{1}$ and
$G^{\prime}\in\mathcal{F}_{2}$ such that
$F^{\prime}\subseteq\overline{R_{1}}(F)$,
$G\subseteq\overline{R_{2}}(G^{\prime})$ and
$F^{\prime}\mathrel{\Theta}G^{\prime}$.
Proof. Follows directly from Definition 5.1(2)-(4). $\Box$
Let $\mathrel{\Theta}$ be a CF-approximable relation from
$(U_{1},R_{1},\mathcal{F}_{1})$ to $(U_{2},R_{2},\mathcal{F}_{2})$. For all
$F\in\mathcal{F}_{1}$, set
$\widetilde{\mathrel{\Theta}}(F)=\bigcup\\{\overline{R_{2}}(G)\mid
F\mathrel{\Theta}G\mbox{\ and\ }G\in\mathcal{F}_{2}\\}$. Define a map
$f_{\mathrel{\Theta}}:\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})\longrightarrow\mathcal{P}(U_{2})$
such that for all $E\in\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})$,
$f_{\mathrel{\Theta}}(E)=\bigcup\\{\widetilde{\mathrel{\Theta}}(F)\mid
F\subseteq E\mbox{\ and\ }F\in\mathcal{F}_{1}\\}$.
###### Proposition 5.3
Let $\mathrel{\Theta}$ be a CF-approximable relation from
$(U_{1},R_{1},\mathcal{F}_{1})$ to $(U_{2},R_{2},\mathcal{F}_{2})$,
$F\in\mathcal{F}_{1}$, $E\in\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})$. Then
the following hold:
$(1)$ $\\{\overline{R_{2}}(G)\mid F\mathrel{\Theta}G\mbox{\ and\
}G\in\mathcal{F}_{2}\\}$ is directed;
$(2)$
$\widetilde{\mathrel{\Theta}}(F)\in\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$;
$(3)$ For any $F\in\mathcal{F}_{1}$,
$f_{\mathrel{\Theta}}(\overline{R_{1}}(F))=\widetilde{\mathrel{\Theta}}(F)$;
$(4)$ $\\{\widetilde{\mathrel{\Theta}}(F)\mid F\subseteq
E,F\in\mathcal{F}_{1}\\}$ is directed and
$f_{\mathrel{\Theta}}(E)\in\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$.
Proof. (1) By Definition 5.1(1), we know that $\\{\overline{R_{2}}(G)\mid
F\mathrel{\Theta}G,G\in\mathcal{F}_{2}\\}$ is not empty. By Definition 5.1(5)
and Lemma 3.2, we know that $\\{\overline{R_{2}}(G)\mid
F\mathrel{\Theta}G,G\in\mathcal{F}_{2}\\}$ is directed.
(2) Follows directly from (1) and Proposition 3.7(1).
(3) It is easy to check that
$\begin{array}[]{lll}f_{\mathrel{\Theta}}(\overline{R_{1}}(F))&=&\bigcup\\{\widetilde{\mathrel{\Theta}}(F^{\prime})\mid
F^{\prime}\subseteq\overline{R_{1}}(F),F^{\prime}\in\mathcal{F}_{1}\\}\\\
&=&\bigcup\\{\overline{R_{2}}(G)\mid
F^{\prime}\in\mathcal{F}_{1},G\in\mathcal{F}_{2},F^{\prime}\subseteq\overline{R_{1}}(F)\mbox{~{}and~{}}F^{\prime}\mathrel{\Theta}G\\}~{}~{}(\mbox{by
the definition of~{}}\widetilde{\mathrel{\Theta}}(F^{\prime}))\\\
&=&\bigcup\\{\overline{R_{2}}(G)\mid
G\in\mathcal{F}_{2},F\mathrel{\Theta}G\\}~{}~{}(\mbox{by Proposition 5.2})\\\
&=&\widetilde{\mathrel{\Theta}}(F)~{}~{}(\mbox{by the definition
of~{}}\widetilde{\mathrel{\Theta}}(F)).\end{array}$
(4) Firstly, we show that $\\{\widetilde{\mathrel{\Theta}}(F)\mid F\subseteq
E,F\in\mathcal{F}_{1}\\}$ is directed. Let $F_{1},F_{2}\in\mathcal{F}_{1}$. If
$F_{1},F_{2}\subseteq E$, then by Proposition 3.8(4), there exists
$F_{3}\in\mathcal{F}_{1}$ such that $F_{1}\cup
F_{2}\subseteq\overline{R_{1}}(F_{3})\subseteq E$. And by Definition 5.1(2),
it is easy to deduce that $\widetilde{\mathrel{\Theta}}(F_{1})$,
$\widetilde{\mathrel{\Theta}}(F_{2})\subseteq\widetilde{\mathrel{\Theta}}(F_{3})$.
This shows that the family $\\{\widetilde{\mathrel{\Theta}}(F)\mid F\subseteq
E,F\in\mathcal{F}_{1}\\}$ is directed. Noticing that
$f_{\mathrel{\Theta}}(E)=\bigcup\\{\widetilde{\mathrel{\Theta}}(F)\mid
F\subseteq E,F\in\mathcal{F}_{1}\\}$, by (2) and Proposition 3.7(3), we have
that $f_{\mathrel{\Theta}}(E)\in\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$.
$\Box$
Proposition 5.3 shows that $f_{\mathrel{\Theta}}$ can be seen as a map from
$\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})$ to
$\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$.
###### Proposition 5.4
Let $\mathrel{\Theta}$ be a CF-approximable relation from
$(U_{1},R_{1},\mathcal{F}_{1})$ to $(U_{2},R_{2},\mathcal{F}_{2})$,
$F\in\mathcal{F}_{1}$, $G\in\mathcal{F}_{2}$. Then
$G\subseteq\widetilde{\mathrel{\Theta}}(F)\Leftrightarrow F\mathrel{\Theta}G$.
Proof. It is easy to check that
$\begin{array}[]{lll}G\subseteq\widetilde{\mathrel{\Theta}}(F)&\Leftrightarrow&G\subseteq\bigcup\\{\overline{R_{2}}(G^{\prime})\mid
F\mathrel{\Theta}G^{\prime},G^{\prime}\in\mathcal{F}_{2}\\}\\\
&\Leftrightarrow&\exists
G^{\prime}\in\mathcal{F}_{2}\mbox{~{}s.t.~{}}F\mathrel{\Theta}G^{\prime},G\subseteq\overline{R_{2}}(G^{\prime})~{}~{}(\mbox{by
Proposition 5.3(1) and the finiteness of~{}}G)\\\
&\Leftrightarrow&F\mathrel{\Theta}G~{}~{}(\mbox{by Proposition
5.2}).\end{array}$
$\Box$
###### Theorem 5.5
Let $\mathrel{\Theta}$ be a CF-approximable relation from
$(U_{1},R_{1},\mathcal{F}_{1})$ to $(U_{2},R_{2},\mathcal{F}_{2})$. Then
$f_{\mathrel{\Theta}}$ is a Scott continuous map from
$\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})$ to
$\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$.
Proof. It follows from the definition of $f_{\mathrel{\Theta}}$ that
$f_{\mathrel{\Theta}}$ is order preserving. In order to prove
$f_{\mathrel{\Theta}}$ is Scott continuous, by Proposition 3.7(3), it suffices
to show that for any directed family $\\{E_{i}\\}_{i\in
I}\subseteq\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})$, we have
$f_{\mathrel{\Theta}}(\bigcup_{i\in I}E_{i})=\bigcup_{i\in
I}f_{\mathrel{\Theta}}(E_{i})$. In fact,
$\begin{array}[]{lll}f_{\mathrel{\Theta}}(\bigcup_{i\in
I}E_{i})&=&\bigcup\\{\widetilde{\mathrel{\Theta}}(F)\mid
F\subseteq\bigcup_{i\in I}E_{i},F\in\mathcal{F}_{1}\\}\\\ &=&\bigcup_{i\in
I}\bigcup\\{\widetilde{\mathrel{\Theta}}(F)\mid F\subseteq
E_{i},F\in\mathcal{F}_{1}\\}~{}~{}(\mbox{by the finiteness of~{}}F\mbox{~{}and
the directedness of~{}}\\{E_{i}\\}_{i\in I})\\\ &=&\bigcup_{i\in
I}f_{\mathrel{\Theta}}(E_{i})~{}~{}(\mbox{by the definition
of~{}}f_{\mathrel{\Theta}}(E_{i})).\end{array}$
$\Box$
Theorem 5.5 shows that a CF-approximable relation between CF-approximation
spaces induces a Scott continuous map between the corresponding continuous
domains. Conversely, a Scott continuous map between such continuous domains
also induces a CF-approximable relation between CF-approximation spaces, as
follows.
###### Theorem 5.6
Let
$f:\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})\longrightarrow\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$
be a Scott continuous map between CF-approximation spaces
$(U_{1},R_{1},\mathcal{F}_{1})$ and $(U_{2},R_{2},\mathcal{F}_{2})$. Define
$\mathrel{\Theta}_{f}\subseteq\mathcal{F}_{1}\times\mathcal{F}_{2}$ such that
$\forall
F\in\mathcal{F}_{1},G\in\mathcal{F}_{2},F\mathrel{\Theta}_{f}G\Leftrightarrow
G\subseteq f(\overline{R_{1}}(F)).$
Then $\mathrel{\Theta}_{f}$ is a CF-approximable relation from
$(U_{1},R_{1},\mathcal{F}_{1})$ to $(U_{2},R_{2},\mathcal{F}_{2})$.
Proof. It follows from
$f(\overline{R_{1}}(F))\in\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$ and
Definition 3.3 that $\mathrel{\Theta}_{f}$ satisfies Definition 5.1(1).
To check that $\mathrel{\Theta}_{f}$ satisfies Definition 5.1(2), let
$F,F^{\prime}\in\mathcal{F}_{1}$, $G\in\mathcal{F}_{2}$. Then
$\begin{array}[]{lll}&&F\subseteq\overline{R_{1}}(F^{\prime}),F\mathrel{\Theta}_{f}G\\\
&\Rightarrow&F\subseteq\overline{R_{1}}(F^{\prime}),G\subseteq
f(\overline{R_{1}}(F))~{}~{}(\mbox{by the definition
of~{}}\mathrel{\Theta}_{f})\\\ &\Rightarrow&G\subseteq
f(\overline{R_{1}}(F))\subseteq f(\overline{R_{1}}(F^{\prime}))~{}~{}(\mbox{by
Lemma\ }\ref{lm2-tr-up}\mbox{~{}and the order preservation of~{}}f)\\\
&\Rightarrow&G\subseteq f(\overline{R_{1}}(F^{\prime}))\Leftrightarrow
F^{\prime}\mathrel{\Theta}_{f}G.\end{array}$
To check that $\mathrel{\Theta}_{f}$ satisfies Definition 5.1(3), let
$F\in\mathcal{F}_{1}$, $G,G^{\prime}\in\mathcal{F}_{2}$. Then
$\begin{array}[]{lll}&&F\mathrel{\Theta}_{f}G,G^{\prime}\subseteq\overline{R_{2}}(G)\\\
&\Rightarrow&G\subseteq
f(\overline{R_{1}}(F)),G^{\prime}\subseteq\overline{R_{2}}(G)\\\
&\Rightarrow&G^{\prime}\subseteq\overline{R_{2}}(G)\subseteq
f(\overline{R_{1}}(F))~{}~{}(\mbox{by Proposition\ }\ref{pn2-cf-
clo}(2)\mbox{~{}and~{}}f(\overline{R_{1}}(F))\in\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2}))\\\
&\Rightarrow&G^{\prime}\subseteq f(\overline{R_{1}}(F))\Leftrightarrow
F\mathrel{\Theta}_{f}G^{\prime}.\end{array}$
To check that $\mathrel{\Theta}_{f}$ satisfies Definition 5.1(4), let
$F\in\mathcal{F}_{1}$, $G\in\mathcal{F}_{2}$. If $F\mathrel{\Theta}_{f}G$,
then $G\subseteq f(\overline{R_{1}}(F))$. By Propositions 3.7 and 3.8(2) and the
Scott continuity of $f$, we know that
$(\ast)$ $f(\overline{R_{1}}(F))=\bigcup\\{f(\overline{R_{1}}(F^{\prime}))\mid
F^{\prime}\subseteq\overline{R_{1}}(F),F^{\prime}\in\mathcal{F}_{1}\\}$.
Therefore we have that
$\begin{array}[]{lll}G\subseteq f(\overline{R_{1}}(F))&\Rightarrow&\exists
G^{\prime}\in\mathcal{F}_{2},\mbox{~{}s.t.~{}}G\subseteq\overline{R_{2}}(G^{\prime})\mbox{~{}and~{}}G^{\prime}\subseteq
f(\overline{R_{1}}(F))~{}~{}(\mbox{by Definition 3.3})\\\
&\Rightarrow&\exists
F^{\prime}\in\mathcal{F}_{1},G^{\prime}\in\mathcal{F}_{2},\mbox{~{}s.t.~{}}G\subseteq\overline{R_{2}}(G^{\prime}),F^{\prime}\subseteq\overline{R_{1}}(F)\mbox{~{}and~{}}G^{\prime}\subseteq
f(\overline{R_{1}}(F^{\prime}))\\\ &&(\mbox{by equation~{}}(\ast)\mbox{~{}and
the finiteness of~{}}G^{\prime})\\\ &\Rightarrow&\exists
F^{\prime}\in\mathcal{F}_{1},G^{\prime}\in\mathcal{F}_{2},\mbox{~{}s.t.~{}}G\subseteq\overline{R_{2}}(G^{\prime}),F^{\prime}\subseteq\overline{R_{1}}(F)\mbox{~{}and~{}}F^{\prime}\mathrel{\Theta}_{f}G^{\prime}\\\
&&(\mbox{by the definition of~{}}\mathrel{\Theta}_{f}).\end{array}$
To check that $\mathrel{\Theta}_{f}$ satisfies Definition 5.1(5), let
$F\in\mathcal{F}_{1}$, $G_{1},G_{2}\in\mathcal{F}_{2}$. If
$F\mathrel{\Theta}_{f}G_{1}$ and $F\mathrel{\Theta}_{f}G_{2}$, then $G_{1}\cup
G_{2}\subseteq f(\overline{R_{1}}({F}))$. By Definition 3.3 and
$f(\overline{R_{1}}({F}))\in\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$, there
exists $G_{3}\in\mathcal{F}_{2}$ such that $G_{1}\cup
G_{2}\subseteq\overline{R_{2}}(G_{3})\subseteq f(\overline{R_{1}}({F}))$ and
$G_{3}\subseteq f(\overline{R_{1}}({F}))$. So $F\mathrel{\Theta}_{f}G_{3}$,
showing that $\mathrel{\Theta}_{f}$ satisfies Definition 5.1(5). $\Box$
###### Theorem 5.7
Let
$f:\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})\longrightarrow\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$
be a Scott continuous map between CF-approximation spaces
$(U_{1},R_{1},\mathcal{F}_{1})$ and $(U_{2},R_{2},\mathcal{F}_{2})$,
$\mathrel{\Theta}$ a CF-approximable relation from
$(U_{1},R_{1},\mathcal{F}_{1})$ to $(U_{2},R_{2},\mathcal{F}_{2})$. Then
$\mathrel{\Theta}_{f_{\mathrel{\Theta}}}=\mathrel{\Theta}$ and
$f_{\mathrel{\Theta}_{f}}=f$.
Proof. Let $F\in\mathcal{F}_{1},G\in\mathcal{F}_{2}$. Then by Propositions
5.3(3) and 5.4, we have
$(F,G)\in\mathrel{\Theta}_{f_{\mathrel{\Theta}}}\Leftrightarrow G\subseteq
f_{\mathrel{\Theta}}(\overline{R_{1}}({F}))=\widetilde{\mathrel{\Theta}}(F)\Leftrightarrow(F,G)\in\mathrel{\Theta},$
showing that $\mathrel{\Theta}_{f_{\mathrel{\Theta}}}=\mathrel{\Theta}$.
For any $E\in\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})$, we have that
$\begin{array}[]{lll}f_{\mathrel{\Theta}_{f}}(E)&=&\bigcup\\{\widetilde{\mathrel{\Theta}_{f}}(F)\mid
F\subseteq E\mbox{~{}and~{}}F\in\mathcal{F}_{1}\\}\\\
&=&\bigcup\\{\overline{R_{2}}(G)\mid
F\in\mathcal{F}_{1},G\in\mathcal{F}_{2},F\subseteq
E\mbox{~{}and~{}}F\mathrel{\Theta}_{f}G\\}\\\
&=&\bigcup\\{\overline{R_{2}}(G)\mid
F\in\mathcal{F}_{1},G\in\mathcal{F}_{2},F\subseteq E\mbox{~{}and~{}}G\subseteq
f(\overline{R_{1}}({F}))\\}\\\ &=&\bigcup\\{f(\overline{R_{1}}({F}))\mid
F\in\mathcal{F}_{1},F\subseteq E\\}~{}~{}(\mbox{by Proposition~{}}\ref{pn3-cf-
cl}(2))\\\ &=&f(\bigcup\\{\overline{R_{1}}({F})\mid
F\in\mathcal{F}_{1},F\subseteq E\\})~{}~{}(\mbox{by Scott continuity
of~{}}f)\\\ &=&f(E).\end{array}$
This shows that $f_{\mathrel{\Theta}_{f}}=f$. $\Box$
Given a CF-approximation space $(U,R,\mathcal{F})$, define the identity on
$(U,R,\mathcal{F})$ to be a binary relation
$\operatorname{Id}_{(U,R,\mathcal{F})}\subseteq\mathcal{F}\times\mathcal{F}$
such that for all
$F,G\in\mathcal{F},(F,G)\in\operatorname{Id}_{(U,R,\mathcal{F})}\Leftrightarrow
G\subseteq\overline{R}(F).$
Let $(U_{1},R_{1},\mathcal{F}_{1})$, $(U_{2},R_{2},\mathcal{F}_{2})$,
$(U_{3},R_{3},\mathcal{F}_{3})$ be CF-approximation spaces,
$\mathrel{\Theta}\subseteq\mathcal{F}_{1}\times\mathcal{F}_{2}$,
$\mathrel{\Upsilon}\subseteq\mathcal{F}_{2}\times\mathcal{F}_{3}$ be CF-
approximable relations. Define
$\mathrel{\Upsilon}\circ\mathrel{\Theta}\subseteq\mathcal{F}_{1}\times\mathcal{F}_{3}$,
the composition of $\mathrel{\Upsilon}$ and $\mathrel{\Theta}$, by: for any
$F_{1}\in\mathcal{F}_{1}$ and $F_{3}\in\mathcal{F}_{3}$,
$(F_{1},F_{3})\in\mathrel{\Upsilon}\circ\mathrel{\Theta}$ iff there exists
$F_{2}\in\mathcal{F}_{2}$ satisfying $(F_{1},F_{2})\in\mathrel{\Theta}$ and
$(F_{2},F_{3})\in\mathrel{\Upsilon}$.
It is routine to check that $\operatorname{Id}_{(U,R,\mathcal{F})}$ is a
CF-approximable relation from $(U,R,\mathcal{F})$ to itself. Thus
CF-approximation spaces as objects and CF-approximable relations as morphisms,
with the identities and compositions defined above, form a category, denoted
by CF-GA.
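Both the claim that $\operatorname{Id}_{(U,R,\mathcal{F})}$ is CF-approximable and the identity law for the composition can be machine-checked on a small instance. In the sketch below (a finite example of our own: $U$ is the divisor lattice of 12 and $R$ is divisibility, the way-below relation of Theorem 3.12), we test condition (2) of Definition 5.1 for $\operatorname{Id}$ and the idempotence $\operatorname{Id}\circ\operatorname{Id}=\operatorname{Id}$ under the composition defined above.

```python
from itertools import combinations

U = [1, 2, 3, 4, 6, 12]           # divisors of 12

def R(a, b):
    return b % a == 0             # a R b iff a divides b

# calF: finite subsets of U having a top element.
calF = [frozenset(F) for r in range(1, len(U) + 1)
        for F in combinations(U, r)
        if any(all(R(f, m) for f in F) for m in F)]

def Rbar(F):
    # Upper approximation: elements related to some member of F.
    return frozenset(x for x in U if any(R(x, f) for f in F))

def Id(F, G):
    # (F, G) in Id_(U,R,calF)  iff  G is contained in Rbar(F).
    return G <= Rbar(F)

# Definition 5.1(2): F ⊆ Rbar(F') and F Id G together imply F' Id G.
assert all(Id(Fp, G) for F in calF for Fp in calF for G in calF
           if F <= Rbar(Fp) and Id(F, G))

def compose(Theta, Upsilon):
    # (F1, F3) in Upsilon ∘ Theta iff some F2 links them.
    return lambda F1, F3: any(Theta(F1, F2) and Upsilon(F2, F3)
                              for F2 in calF)

# Id is idempotent under composition: Id ∘ Id = Id.
IdId = compose(Id, Id)
assert all(IdId(F, G) == Id(F, G) for F in calF for G in calF)
```

The remaining conditions of Definition 5.1 can be checked in the same style; the reflexivity and transitivity of $R$ in this example are what make the assertions go through.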
Let CDOM be the category of continuous domains and Scott continuous maps. We
next show that categories CF-GA and CDOM are equivalent.
###### Lemma 5.8
([1]) Let $\mathcal{C},\mathcal{D}$ be two categories. If there is a functor
$\Phi:\mathcal{C}\longrightarrow\mathcal{D}$ such that
$(1)$ $\Phi$ is full, namely, for all $A,B\in ob(\mathcal{C})$, $g\in
Mor_{\mathcal{D}}(\Phi(A),\Phi(B))$, there is $f\in Mor_{\mathcal{C}}(A,B)$
such that $\Phi(f)=g$;
$(2)$ $\Phi$ is faithful, namely, for all $A,B\in ob(\mathcal{C})$, $f,g\in
Mor_{\mathcal{C}}(A,B)$, if $f\neq g$, then $\Phi(f)\neq\Phi(g)$;
$(3)$ for all $B\in ob(\mathcal{D})$, there is $A\in ob(\mathcal{C})$ such
that $\Phi(A)\cong B$,
then $\mathcal{C}$ and $\mathcal{D}$ are equivalent.
###### Theorem 5.9
The categories CF-GA and CDOM are equivalent.
Proof. Define $\Psi:$ CF-GA$\to$ CDOM such that for all $(U,R,\mathcal{F})\in
ob$(CF-GA),
$\Psi((U,R,\mathcal{F}))=(\mathfrak{C}(U,R,\mathcal{F}),\subseteq)\in ob({\bf
CDOM})$; for all $\mathrel{\Theta}\in Mor$(CF-GA),
$\Psi(\mathrel{\Theta})=f_{\mathrel{\Theta}}\in Mor({\bf CDOM})$.
Given a CF-approximation space $(U,R,\mathcal{F})$, for any
$E\in\mathfrak{C}(U,R,\mathcal{F})$, we have
$\begin{array}[]{lll}\Psi(\operatorname{Id}_{(U,R,\mathcal{F})})(E)&=&f_{\operatorname{Id}_{(U,R,\mathcal{F})}}(E)\\\
&=&\bigcup\\{\overline{R}({G})\mid F,G\in\mathcal{F},F\subseteq
E\mbox{~{}and~{}}(F,G)\in\operatorname{Id}_{(U,R,\mathcal{F})}\\}\\\
&=&\bigcup\\{\overline{R}({G})\mid F,G\in\mathcal{F},F\subseteq
E\mbox{~{}and~{}}G\subseteq\overline{R}({F})\\}\\\
&=&\bigcup\\{\overline{R}({F})\mid F\in\mathcal{F},F\subseteq
E\\}~{}~{}(\mbox{by Proposition~{}}\ref{pn2-cf-
clo}(1)\mbox{~{}and~{}}\ref{pn3-cf-cl}(2))\\\ &=&E~{}~{}(\mbox{by
Proposition~{}}\ref{pn3-cf-cl}(2))\\\
&=&id_{\mathfrak{C}(U,R,\mathcal{F})}(E).\end{array}$
This shows that
$\Psi(\operatorname{Id}_{(U,R,\mathcal{F})})=id_{\mathfrak{C}(U,R,\mathcal{F})}$.
Let $\mathrel{\Theta}\subseteq\mathcal{F}_{1}\times\mathcal{F}_{2}$,
$\mathrel{\Upsilon}\subseteq\mathcal{F}_{2}\times\mathcal{F}_{3}$ be CF-
approximable relations. Then for any
$E\in\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})$, we have
$\begin{array}[]{lll}&&\Psi(\mathrel{\Upsilon})\circ\Psi(\mathrel{\Theta})(E)=f_{\mathrel{\Upsilon}}(f_{\mathrel{\Theta}}(E))\\\
&=&\bigcup\\{\widetilde{\mathrel{\Upsilon}}(F)\mid F\subseteq
f_{\mathrel{\Theta}}(E),F\in\mathcal{F}_{2}\\}\\\
&=&\bigcup\\{\overline{R_{3}}({G})\mid F\subseteq
f_{\mathrel{\Theta}}(E),F\in\mathcal{F}_{2},G\in\mathcal{F}_{3}\mbox{~{}and~{}}F\mathrel{\Upsilon}G\\}~{}~{}(\mbox{by
the definition of~{}}\widetilde{\mathrel{\Upsilon}})\\\
&=&\bigcup\\{\overline{R_{3}}({G})\mid F_{1}\in\mathcal{F}_{1},F_{1}\subseteq
E,G_{1}\in\mathcal{F}_{2},F_{1}\mathrel{\Theta}G_{1},F\subseteq\overline{R_{2}}(G_{1}),F\in\mathcal{F}_{2},G\in\mathcal{F}_{3}\mbox{~{}and~{}}F\mathrel{\Upsilon}G\\}\\\
&&(\mbox{by the definition of\ }f_{\mathrel{\Theta}}(E),\mbox{ Proposition
5.3(1), Theorem 5.5 and the finiteness of members of~{}}\mathcal{F}_{2})\\\
&=&\bigcup\\{\overline{R_{3}}({G})\mid F_{1}\in\mathcal{F}_{1},F_{1}\subseteq
E,G_{1}\in\mathcal{F}_{2},F_{1}\mathrel{\Theta}G_{1},G\in\mathcal{F}_{3}\mbox{~{}and~{}}G_{1}\mathrel{\Upsilon}G\\}\\\
&&(\mbox{by~{}}F\subseteq\overline{R_{2}}(G_{1}),F\mathrel{\Upsilon}G,\mbox{~{}and
Definition~{}}\ref{dn-CF-app}(2))\\\ &=&\bigcup\\{\overline{R_{3}}({G})\mid
F_{1}\in\mathcal{F}_{1},F_{1}\subseteq
E,G\in\mathcal{F}_{3}\mbox{~{}and~{}}(F_{1},G)\in\mathrel{\Upsilon}\circ\mathrel{\Theta}\\}~{}~{}(\mbox{by~{}}F_{1}\mathrel{\Theta}G_{1}\mbox{~{}and~{}}G_{1}\mathrel{\Upsilon}G)\\\
&=&\bigcup\\{\widetilde{\mathrel{\Upsilon}\circ\mathrel{\Theta}}(F_{1})\mid
F_{1}\in\mathcal{F}_{1},F_{1}\subseteq E\\}~{}~{}(\mbox{by the definition
of~{}}\widetilde{\mathrel{\Upsilon}\circ\mathrel{\Theta}}(F_{1}))\\\
&=&f_{\mathrel{\Upsilon}\circ\mathrel{\Theta}}(E)=\Psi(\mathrel{\Upsilon}\circ\mathrel{\Theta})(E).\end{array}$
This shows that
$\Psi(\mathrel{\Upsilon})\circ\Psi(\mathrel{\Theta})=\Psi(\mathrel{\Upsilon}\circ\mathrel{\Theta})$,
and thus $\Psi$ is a functor.
To show that CF-GA is equivalent to CDOM, it suffices to check that $\Psi$
satisfies the three conditions in Lemma 5.8.
Let $(U_{1},R_{1},\mathcal{F}_{1})$, $(U_{2},R_{2},\mathcal{F}_{2})$ be CF-
approximation spaces, $\mathrel{\Theta}_{1},\mathrel{\Theta}_{2}$ be CF-
approximable relations from $(U_{1},R_{1},\mathcal{F}_{1})$ to
$(U_{2},R_{2},\mathcal{F}_{2})$. If
$\mathrel{\Theta}_{1}\neq\mathrel{\Theta}_{2}$, then by Theorem 3.14 we know
that
$\mathrel{\Theta}_{1}=\mathrel{\Theta}_{f_{\mathrel{\Theta}_{1}}}\neq\mathrel{\Theta}_{f_{\mathrel{\Theta}_{2}}}=\mathrel{\Theta}_{2}.$
Thus $f_{\mathrel{\Theta}_{1}}\neq f_{\mathrel{\Theta}_{2}}$, showing that
$\Psi$ is faithful.
Let
$f:\mathfrak{C}(U_{1},R_{1},\mathcal{F}_{1})\longrightarrow\mathfrak{C}(U_{2},R_{2},\mathcal{F}_{2})$
be a Scott continuous map. By Theorem 3.14, there is $\mathrel{\Theta}_{f}\in\operatorname{Mor}(\mbox{CF-GA})$ such that $\Psi(\mathrel{\Theta}_{f})=f_{\mathrel{\Theta}_{f}}=f$,
showing that $\Psi$ is full.
It is clear by Theorem 3.14 that $\Psi$ satisfies condition (3) in Lemma 5.8. $\Box$
Similarly, we can establish a categorical equivalence between the category of algebraic domains with Scott continuous maps as morphisms and the category of topological CF-approximation spaces with CF-approximable relations as morphisms. We leave the details to interested readers.
## 6 Conclusions
This paper generalizes abstract bases to CF-approximation spaces, and generalizes the family of round ideals of an abstract basis to the family of CF-closed sets of a CF-approximation space. This yields a method, within the framework of rough set theory, for representing various continuous domains, including continuous semilattices, continuous sup-semilattices, continuous domains with bottom, continuous domains with top, continuous lattices, bc-domains, algebraic domains and arithmetic semilattices. CF-approximable relations between CF-approximation spaces are defined, and a categorical equivalence is established between the category CF-GA of CF-approximation spaces with CF-approximable relations and the category CDOM of continuous domains with Scott continuous maps. This work strengthens the links among rough set theory, domain theory and topology, and widens the scope of application of both rough set theory and domain theory.
Acknowledgment We would like to thank the referees for their valuable
suggestions and comments.
## References
* [1] M. Barr, C. Wells. Category theory for computing science (3rd edition). Prentice Hall, 1990.
* [2] G. Gierz, et al. Continuous Lattices and Domains. Cambridge University Press, 2003.
* [3] J. Goubault-Larrecq. Non-Hausdorff Topology and Domain Theory. Cambridge University Press, 2013.
* [4] L. K. Guo, Q. G. Li, et al. Representation of algebraic domains by formal association rule systems. Math. Struct. in Comp. Science 27 (2017) 470-490.
* [5] L. K. Guo, Q. G. Li, L. J. Yao. Locally complete consistent F-augmented contexts: A category-theoretic representation of algebraic L-domains. Discrete Applied Mathematics 249 (2018) 53-63.
* [6] L. K. Guo, Q. G. Li, G. Q. Zhang. A representation of continuous domains via relationally approximable concepts in a generalized framework of formal concept analysis. International Journal of Approximate Reasoning 114 (2019) 29-43.
* [7] J. Järvinen. Lattice theory for rough sets. Transactions on Rough Sets VI, LNCS 4374. Springer-Verlag, Berlin Heidelberg 2007, 400-498.
* [8] A. Jung. Cartesian Closed Categories of Domains. CWI Tract, vol. 66. Amsterdam 1989.
* [9] G. L. Liu, W. Zhu. The algebraic structures of generalized rough set theory. Information Sciences 178 (2008) 4105-4113.
* [10] D. Spreen, L. S. Xu, X. X. Mao. Information systems revisited–the general continuous case. Theoret. Comput. Sci. 405 (2008) 176-187.
* [11] L. C. Wang, L. K. Guo, Q. G. Li. Continuous Domains in Formal Concept Analysis. Fundamenta Informaticae 179 (2021) 295-319.
* [12] L. C. Wang, Q. G. Li. Representations of stably continuous semi-lattices by information systems and abstract bases. Information Processing Letters 165 (2021) 1-8.
* [13] L. S. Xu, and X. X. Mao. Formal topological characterizations of various continuous domains. Comput. Math. Appl. 56 (2008) 444-452.
* [14] L. Y. Yang, L. S. Xu. Algebraic aspects of generalized approximation spaces. Information Sciences 51 (2009) 151-161.
* [15] L. Y. Yang, L. S. Xu. Topological properties of generalized approximation spaces. Information Sciences 181 (2011) 3570-3580.
* [16] Y. Y. Yao, Neighborhood systems and approximate retrieval. Information Sciences 176 (2006) 3431-3452.
* [17] W. Zhu. Generalized rough sets based on relations. Information Sciences 177 (2007) 4997-5011.
# Vector meson-nucleon scattering length $|\alpha_{VN}|$ and trace anomalous
energy contribution to the nucleon mass $T_{A}$
Chengdong Han<EMAIL_ADDRESS>Institute of Modern Physics, Chinese
Academy of Sciences, Lanzhou 730000, China University of Chinese Academy of
Sciences, Beijing 100049, China Wei Kou<EMAIL_ADDRESS>Institute of
Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China University
of Chinese Academy of Sciences, Beijing 100049, China Rong Wang
<EMAIL_ADDRESS>Institute of Modern Physics, Chinese Academy of Sciences,
Lanzhou 730000, China University of Chinese Academy of Sciences, Beijing
100049, China Xurong Chen<EMAIL_ADDRESS>(Corresponding author)
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000,
China University of Chinese Academy of Sciences, Beijing 100049, China
Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum
Matter, South China Normal University, Guangzhou 510006, China
###### Abstract
Low-energy scattering processes of vector meson and nucleon are an important
window for studying non-perturbative QCD. The interaction of vector meson with
nucleon, vector meson-nucleon scattering length $|\alpha_{VN}|$, is an
important component of the study of hadronic interactions. Nowadays many
scattering length values $|\alpha_{VN}|$ have been reported using the recent
photoproduction experiment data or quasi data. In addition, the study of trace
anomalous energy contribution to the proton mass is also a hot topic in non-
perturbative QCD and hadron physics. However, it is difficult to measure
proton trace anomalous energy experimentally, and the study of the trace
anomaly of proton is still inconclusive. In this study, we established the
relationship between the scattering length of the vector meson-proton
$|\alpha_{Vp}|$ and the trace anomaly contribution of the proton mass $T_{A}$.
With the scattering length values extracted by using the Vector Meson
Dominance model, we obtained the trace anomaly contribution of the proton mass
$T_{A}$ = (22.8$\%$ $\pm$ 1.2$\%$), which is of similar order of magnitude as
the 23$\%$ given by Lattice QCD calculation. We conjecture that the trace
anomaly contribution of nucleon is independent of the type of vector meson
probe. We hope that high precision measurements of vector meson-nucleon
scattering length could give us a better chance to explore the origin of the
nucleon mass.
###### pacs:
12.38.-t, 14.20.Dh
## I Introduction
Hadronic interactions, hadron internal structure and dynamical hadron-mass generation are hot research fields in non-perturbative quantum chromodynamics (QCD). Since their discovery, vector mesons have been a useful probe for the study of hadronic matter and hadronic interactions. Experimentally, the vector meson-nucleon interaction can be
investigated using vector meson photoproduction within the Vector Meson
Dominance (VMD) model Sakurai (1960). Since the vector meson-nucleon
interaction is an important area of investigation in the non-perturbation
domain of QCD, the absolute value of the scattering length can be determined
from the total near-threshold vector-meson photoproduction cross-section Gell-
Mann and Zachariasen (1961) or the differential cross-section of vector-meson
photoproduction at threshold Titov _et al._ (2007) based on the theoretical
model. At present, the $\omega p$, $\omega n$, $\rho^{0}p$, $\phi p$, $J/\psi
p$, $\psi(2S)p$ and $\Upsilon$p scattering lengths have been fully analysed in
Refs. Strakovsky _et al._ (2015, 2020a, 2020b); Pentchev and Strakovsky
(2021); Wang _et al._ (2022a, b); Han _et al._ (2022); Strakovsky _et al._
(2021). In addition, the measurement of proton trace anomalous energy is
always a challenge in the experiments. Theoretically, the QCD interpretation
of proton trace anomaly is still ambiguous. To connect the theory with the
experiment for the nucleon trace anomalous energy, the authors Wang _et al._
(2020); Kou _et al._ (2022) recently extracted the trace anomaly by analyzing
the near-threshold photoproduction data of $\phi$ and $J/\psi$ vector mesons,
which gives one a method to estimate the proton mass decomposition Ji (1995a).
The proton mass is an endogenous property and is generally considered not to
vary with the type of probe. The various components of the decomposed proton
mass should also follow this principle and should be consistent within a
certain error range. To study the nature of the trace anomaly energy of the
proton, we consider the near-threshold photoproduction process of the vector
mesons. We start from the vector meson-nucleon scattering length and expect a
uniform nucleon trace anomaly contribution based on different meson probe
experiments.
In this work, we extend our previous works Wang _et al._ (2020); Kou _et
al._ (2022); Han _et al._ (2022) and relate two potential observables of the
near-threshold photoproduction process, the scattering length and the proton
trace anomaly energy. We find a new method for determining the nucleon trace
anomaly energy, which requires the vector meson-nucleon scattering length as
inputs. The paper is organized as follows. In Sec. II, we briefly describe the
vector meson-nucleon scattering length obtained from the VMD model.
We then discuss the method of extracting proton trace anomalous contributions
under the VMD model and relate it to the scattering length. Our main results
are discussed in Sec. III. Finally, we present a discussion and summary.
## II Vector meson-nucleon scattering length $|\alpha_{VN}|$ from
differential cross sections
Within the VMD model, the total $\gamma N\to VN$ cross-section is related to
both the total $VN\to VN$ cross-section at threshold energy and the scattering
length $\alpha_{VN}$ by Titov _et al._ (2007):
$\begin{split}&\sigma^{\gamma
N}\left(s_{thr}\right)=\frac{\alpha\pi}{\gamma_{V}^{2}}\frac{q_{VN}}{k_{\gamma
N}}\cdot\sigma^{VN}\left(s_{thr}\right)=\frac{\alpha\pi}{\gamma_{V}^{2}}\frac{q_{VN}}{k_{\gamma
N}}\cdot 4\pi\alpha_{VN}^{2}\\\ \end{split}$ (1)
where $\alpha=1/137$ is the fine structure constant, and $V$ is an index that represents the vector mesons (e.g., $\omega$, $\rho^{0}$, J/$\psi$, etc.). The
$k_{\gamma N}$ and $q_{VN}$ in the above equation are the momenta in the
center-of-mass of the initial and final state particles, respectively, and
$\gamma_{V}$ is the photon-vector meson coupling constant obtained from the
$V\to e^{+}e^{-}$ decay width. Eq. (1) is taken at the threshold energy, where $s_{thr}=(M+m)^{2}$, with $M$ and $m$ being the masses of the vector meson and nucleon, respectively.
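As a small numerical sketch of Eq. (1), the relation between the threshold cross section and the scattering length can be inverted directly; the coupling $\gamma_{V}$ and the momenta used below are hypothetical placeholder values, not fitted numbers from this paper.

```python
import math

ALPHA_EM = 1.0 / 137.0   # fine structure constant, as in Eq. (1)
HBARC = 0.1973           # GeV*fm, to convert GeV^-1 to fm

def sigma_gamma_n(alpha_vn, gamma_v, q_vn, k_gn):
    """Total gamma N -> V N cross section at threshold, Eq. (1).
    alpha_vn in GeV^-1; q_vn, k_gn in GeV; returns sigma in GeV^-2."""
    return (ALPHA_EM * math.pi / gamma_v**2) * (q_vn / k_gn) * 4.0 * math.pi * alpha_vn**2

def alpha_from_sigma(sigma, gamma_v, q_vn, k_gn):
    """Invert Eq. (1) for the scattering length |alpha_VN|."""
    return math.sqrt(sigma * gamma_v**2 * k_gn / (4.0 * math.pi**2 * ALPHA_EM * q_vn))

# Illustrative (assumed) kinematics and coupling:
gamma_v, q_vn, k_gn = 8.0, 0.1, 1.3
a_in = 0.5 / HBARC                       # a 0.5 fm scattering length in GeV^-1
sigma = sigma_gamma_n(a_in, gamma_v, q_vn, k_gn)
a_out = alpha_from_sigma(sigma, gamma_v, q_vn, k_gn)
print(a_out * HBARC)                     # round-trips back to 0.5 fm
```

The round trip only checks the internal consistency of Eq. (1); extracting a physical $|\alpha_{VN}|$ requires the measured cross section and coupling.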
In order to estimate the scattering length with the experimental data of the
differential photoproduction cross section $d\sigma^{\gamma N}/dt$, Pentchev
and Strakovsky Pentchev and Strakovsky (2021) established the relation between
the total and differential cross sections at threshold. The total cross
section is defined as an integral over the interval $t\in[t_{min}(s),t_{max}(s)]$,
$\begin{split}&\sigma^{\gamma
N}(s)=\int_{t_{min}}^{t_{max}}\frac{d\sigma^{\gamma N}}{dt}(s,t)dt\\\
&\xlongequal{t_{min}\rightarrow t_{max}}\Delta t\frac{d\sigma^{\gamma
N}}{dt}(s_{thr},t_{thr})\\\ &=4q_{VN}k_{\gamma N}\frac{d\sigma^{\gamma
N}}{dt}(s_{thr},t_{thr}).\end{split}$ (2)
When approaching the threshold, $t_{min}$ $\rightarrow$ $t_{max}$ in Eq. (2), the relationship between the total and differential cross sections at threshold is established, where $\Delta t$ = $\left|t_{max}-t_{min}\right|$ = 4$q_{VN}k_{\gamma N}$ and $t_{thr}=t_{min}(s_{thr})=t_{max}(s_{thr})=-M^{2}m/(M+m)$. Combining Eq. (2) and Eq. (1), the relationship between the scattering length and the differential cross-section is expressed as:
$\begin{split}\frac{d\sigma^{\gamma
V}}{dt}\left(s_{thr},t=t_{thr}\right)=\frac{\alpha\pi}{\gamma_{V}^{2}}\frac{\pi}{k_{\gamma
N}^{2}}\cdot\alpha_{VN}^{2},\\\ \end{split}$ (3)
A key problem in determining the scattering length at threshold $t_{thr}$ is to extrapolate the cross section to the point $t\rightarrow t_{thr}$ or $s\rightarrow s_{thr}$. Extrapolating instead to the non-physical point $t=0$ gives
$\begin{split}\frac{d\sigma^{\gamma
V}}{dt}\left(s_{thr},t=0\right)=\frac{\alpha\pi}{\gamma_{V}^{2}}\frac{\pi}{k_{\gamma
N}^{2}}\cdot\alpha_{VN}^{2}.\end{split}$ (4)
The $d\sigma^{\gamma V}/dt(s_{thr},t=0)$ at left-hand side of Eq. (4) is not a
directly measurable quantity, as it requires extrapolation of the energy to
the threshold and extrapolation of t from the physical region
($t_{min}<t<t_{max}$) to the non-physical point $t=0$. Therefore, when the vector meson-nucleon scattering length is extracted from the differential cross-section data, we can not only extrapolate the energy to the threshold, but also extrapolate $t$ to $t=0$.
The following exponential function was used to fit the differential cross-section measurements of near-threshold vector meson photoproduction on a hydrogen or deuterium target,
$\begin{split}\frac{d\sigma}{dt}=Ae^{-bt},\end{split}$ (5)
where $A=d\sigma/dt|_{t=0}$ denotes the forward differential cross-section and $b$ describes the slope parameter. Combining Eqs. (3), (4) and (5), the forward differential cross-section $d\sigma/dt|_{t=0}$ or $d\sigma/dt|_{t=t_{thr}}$ can be obtained, from which the vector meson-nucleon scattering length $\alpha_{VN}$ is further extracted.
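The exponential fit of Eq. (5) can be sketched with a simple least-squares fit after taking a logarithm; the data points below are synthetic values generated from assumed parameters, not experimental measurements.

```python
import math

def fit_exponential(t_vals, dsdt_vals):
    """Least-squares fit of ln(dsigma/dt) = ln(A) - b*t, linearizing Eq. (5).
    Returns (A, b), where A = dsigma/dt at t = 0."""
    n = len(t_vals)
    y = [math.log(v) for v in dsdt_vals]
    sx, sy = sum(t_vals), sum(y)
    sxx = sum(t * t for t in t_vals)
    sxy = sum(t * yi for t, yi in zip(t_vals, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # equals -b
    intercept = (sy - slope * sx) / n                   # equals ln(A)
    return math.exp(intercept), -slope

# Synthetic "measurements" generated with A = 2.1, b = 3.4 (assumed values):
ts = [0.1, 0.2, 0.3, 0.4, 0.5]
data = [2.1 * math.exp(-3.4 * t) for t in ts]
A, b = fit_exponential(ts, data)
print(round(A, 3), round(b, 3))
```

On noiseless synthetic data the fit recovers $A$ and $b$ exactly; with real data the uncertainty on $A$ propagates directly into $\alpha_{VN}$ through Eq. (4).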
A dispersive analysis to extract the $\Upsilon-p$ scattering length from
$\gamma p$ $\rightarrow$ $\Upsilon p$ experiments was presented by Gryniuk et
al. Gryniuk _et al._ (2020). In their framework, the imaginary part of the $\Upsilon-p$ forward scattering amplitude $T_{\Upsilon p}$ is determined by $\gamma p$ $\rightarrow$ $\Upsilon p$ cross-section measurements, and the real part of the scattering amplitude $T_{\Upsilon p}$ is obtained from a once-subtracted dispersion relation. The subtraction constant related to
$\Upsilon-p$ scattering amplitude $T_{\Upsilon p}$ is determined by fitting
the $\gamma p$ $\rightarrow$ $\Upsilon p$ differential photoproduction cross
section data at t = 0, as follows
$\displaystyle\frac{d\sigma_{\gamma p\rightarrow\Upsilon
p}}{dt}\bigg{|}_{t=0}=(\frac{ef_{\Upsilon}}{M_{\Upsilon}})^{2}\frac{1}{64\pi
sq^{2}_{\gamma p}}|T_{\Upsilon p}|^{2},$ (6)
where $f_{\Upsilon}$ is the $\Upsilon$ decay constant, and $q_{\gamma p}$
represents the magnitude of the photon three-momentum in the center-of-mass frame
of the photoproduction process of $\Upsilon$ meson. The real part of the
forward scattering amplitude at threshold $T_{VN}$ in Eq.(6) is directly
related to the $V-N$ scattering length $\alpha_{VN}$ as Gryniuk _et al._
(2020)
$\begin{split}T_{VN}=8\pi(M+m)\alpha_{VN}.\end{split}$ (7)
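Eq. (7) is a one-line conversion between the forward amplitude and the scattering length; the amplitude value used below is purely illustrative, not a fit result.

```python
import math

HBARC = 0.1973  # GeV*fm

def scattering_length_from_amplitude(T_vn, M_v, m_n):
    """Eq. (7): T_VN = 8*pi*(M + m)*alpha_VN, in natural units (GeV)."""
    return T_vn / (8.0 * math.pi * (M_v + m_n))

# Upsilon-proton masses (GeV) with a hypothetical amplitude T:
M_ups, m_p = 9.4603, 0.9383
T = 2.0  # illustrative only
alpha = scattering_length_from_amplitude(T, M_ups, m_p)
print(alpha * HBARC)  # scattering length in fm
```

The inverse direction, $T_{VN}=8\pi(M+m)\alpha_{VN}$, is what connects the dispersive subtraction constant back to the measurable length.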
## III Scattering lengths $|\alpha_{VN}|$ and the trace anomaly contribution
of nucleon mass $T_{A}$
Understanding how the hadron mass emerges in QCD is of utmost importance. An effective approach is to consider the contributions of the quarks and gluons that are the basic constituents of the nucleon. How quarks and gluons dynamically produce the whole nucleon mass is a very fundamental question. In Refs. Ji (1995b, a), Ji
first defined the proton mass decomposition with QCD Hamiltonian operators and
assumed that the hadron mass is calculated as the expectation value of the
Hamiltonian at the hadron rest frame:
$M_{N}=\left.\frac{\left\langle
P\left|H_{\mathrm{QCD}}\right|P\right\rangle}{\langle P\mid
P\rangle}\right|_{\text{rest frame }},$ (8)
which is decomposed into four terms characterized by the QCD trace anomaly
parameter $b(\mu^{2})$ and the momentum fraction $a(\mu^{2})$ carried by all
quarks Ji (1995a). The four terms of the proton mass partitions are written as
Ji (1995a),
$\displaystyle M_{q}=\frac{3}{4}\left(a-\frac{b}{1+\gamma_{m}}\right)M_{N},\ \
M_{g}=\frac{3}{4}(1-a)M_{N},$ (9) $\displaystyle
M_{m}=\frac{4+\gamma_{m}}{4\left(1+\gamma_{m}\right)}bM_{N},\ \
M_{a}=\frac{1}{4}(1-b)M_{N},$
where the anomalous dimension of quark mass $\gamma_{m}$ Buras (1980)
describes the renormalization information, and $b$ denotes the gluon trace
anomaly parameter. The $a(\mu^{2})$ of the quarks is calculated with all the
quark distributions determined by experimental measurements, as follows
$a\left(\mu^{2}\right)=\sum_{f}\int_{0}^{1}x\left[q_{f}\left(x,\mu^{2}\right)+\bar{q}_{f}\left(x,\mu^{2}\right)\right]dx.$
(10)
The first three terms in Eq. (9) can be easily understood within classical field theory. However, the last term is an extension of the classical description in the quantum field theory – the quantum anomaly Kou _et al._ (2022).
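A useful sanity check on the decomposition of Eq. (9) is that the four partitions always sum back to $M_{N}$, for any $a$, $b$ and $\gamma_{m}$. The parameter values below are illustrative placeholders, not fitted numbers.

```python
def mass_partition(M_N, a, b, gamma_m):
    """Four-term proton mass decomposition of Eq. (9)."""
    M_q = 0.75 * (a - b / (1.0 + gamma_m)) * M_N   # quark energy
    M_g = 0.75 * (1.0 - a) * M_N                   # gluon energy
    M_m = (4.0 + gamma_m) / (4.0 * (1.0 + gamma_m)) * b * M_N  # quark mass term
    M_a = 0.25 * (1.0 - b) * M_N                   # trace anomaly term
    return M_q, M_g, M_m, M_a

# Illustrative (assumed) parameter values:
M_N, a, b, gamma_m = 0.938, 0.55, 0.09, 0.3
parts = mass_partition(M_N, a, b, gamma_m)
print(sum(parts))   # the four terms sum back to M_N by construction
```

Algebraically, the $b$-dependent pieces cancel: $M_{q}+M_{g}+M_{m}+M_{a}=M_{N}\left[1+\frac{b(1+\gamma_{m})}{4(1+\gamma_{m})}-\frac{b}{4}\right]=M_{N}$.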
With the VMD model, the forward differential cross section of the vector meson $V$ ($\omega$, $\rho^{0}$, $\phi$, J/$\psi$, etc.) photoproduction on a proton target reads,
$\displaystyle\frac{d\sigma_{\gamma N\rightarrow VN}}{dt}\bigg{|}_{t=0}=\frac{3\Gamma(V\rightarrow e^{+}e^{-})}{\alpha m_{V}}\Big(\frac{k_{VN}}{k_{\gamma N}}\Big)^{2}\frac{d\sigma_{VN\rightarrow VN}}{dt}\bigg{|}_{t=0}=\frac{3\Gamma(V\rightarrow e^{+}e^{-})}{\alpha m_{V}}\Big(\frac{k_{VN}}{k_{\gamma N}}\Big)^{2}\frac{1}{64\pi}\frac{1}{m^{2}_{V}(\lambda^{2}-m^{2}_{N})}|F_{VN}|^{2}$ (11)
where $\alpha$ = 1/137 is the fine structure constant, $k^{2}_{ab}$ denotes
the center-of-mass momentum square of the corresponding two-body system,
$\Gamma$ is the partial decay width of the $V\rightarrow e^{+}e^{-}$,
$\lambda=(p_{N}p_{V}/m_{V})$ is the nucleon energy at the quarkonium rest
frame Kharzeev _et al._ (1999), and $F_{VN}$ represents the invariant
amplitude of $V-N$ elastic scattering Kharzeev (1996); Kharzeev _et al._
(1999).
In order to determine the trace anomaly contribution of the proton mass, the previous analysis method is to fit the experimental data of vector meson photoproduction near the threshold Kou _et al._ (2022); Wang _et al._ (2020). Following Refs.
Kharzeev (1996); Kharzeev _et al._ (1999), the invariant amplitude of $V-N$
elastic scattering takes the form
$\displaystyle F_{VN}$ $\displaystyle\simeq
r_{0}^{3}d_{2}\frac{8\pi^{2}M_{N}m_{V}}{27}\left(M_{N}-\left\langle
N\left|\sum_{i=u,d,s}m_{i}\bar{q}_{i}q_{i}\right|N\right\rangle\right)$ (12)
$\displaystyle=r_{0}^{3}d_{2}\frac{8\pi^{2}}{27}(1-b)M_{N}^{2}m_{V}.$
Taking the forward limit ($t\to 0$) and comparing with Eq. (7) we have
$\begin{split}|\alpha_{VN}|=\frac{4\pi
d_{n}^{1S}m^{2}_{N}m_{V}r^{3}_{0}T_{A}}{27\sqrt{s_{thr}}},\end{split}$ (13)
where $|\alpha_{VN}|$ denotes the vector meson-nucleon scattering lengths,
$m_{N}$ is the nucleon mass, and $m_{V}$ is the vector meson mass. $T_{A}$ is
the trace anomalous energy contribution to the nucleon mass, which is
expressed as $T_{A}=(1-b)/4$. The “Bohr” radius $r_{0}$ of the vector meson
$V$ is given by Kharzeev (1996),
$\begin{split}r_{0}=\frac{4}{3\alpha_{s}}\frac{1}{m_{q}}\end{split}$ (14)
where $m_{q}$ represents the constituent mass of quark and $\alpha_{s}$ is the
running coupling. In this study, we choose the “Bohr” radius size with
$r_{0}(\omega)$ = 0.75 fm, $r_{0}(\rho)$ = 0.75 fm Krutov _et al._ (2016);
Bhagwat and Maris (2008); Grigoryan and Radyushkin (2007), $r_{0}(\phi)$ =
0.41 fm, $r_{0}(J/\psi)$ = 0.20 fm Kharzeev (1996), and $r_{0}(\Upsilon)$ =
0.10 fm. Where $r_{0}(J/\psi)$, $r_{0}(\phi)$ and $r_{0}(\Upsilon)$ radii are
determined using the “Rydberg” energy of the quark-antiquark pairs. For the
charmonium ground state J/$\psi$, a naive estimate of the “Rydberg” energy
$E_{J/\psi}$ is $m_{D}+m_{\bar{D}}-m_{J/\psi}$, which means that the
$c\bar{c}$ pair will be pulled apart to generate the $D\bar{D}$ pair. The
above relation could be used to obtain the J/$\psi$’s “Bohr” radius by
$E_{J/\psi}=\left(1/m_{c}r_{J/\psi}^{2}\right)$ (see Refs. Kharzeev (1996);
Wang _et al._ (2020) for details). Although the selection of the “Bohr”
radius is not very rigorous, we explained the validity of the above radius
values and analyzed the uncertainties associated with the selection of the
“Bohr” radius in Ref. Kou _et al._ (2022). The Wilson coefficient
$d_{n}^{1S}$ is found in Refs. Kharzeev (1996); Peskin (1979); Kharzeev _et
al._ (1996) as
$\begin{split}d_{n}^{1S}=(\frac{32}{N_{c}})^{2}\sqrt{\pi}\frac{\Gamma(n+\frac{5}{2})}{\Gamma(n+5)},\end{split}$
(15)
where $N_{c}=3$ is the number of colors. Eq. (13) combines the information of the vector meson-nucleon scattering length with the trace anomalous energy contribution to the nucleon mass; the latter cannot be directly determined by experimental measurements.
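Two ingredients of Eq. (13) can be evaluated directly: the Wilson coefficient $d_{n}^{1S}$ of Eq. (15), and the "Rydberg"-energy estimate of the J/$\psi$ "Bohr" radius described above. The constituent charm mass used below is an assumed value, and the D-meson mass is the PDG average.

```python
import math

def wilson_d1S(n, Nc=3):
    """Wilson coefficient d_n^{1S} of Eq. (15)."""
    return (32.0 / Nc)**2 * math.sqrt(math.pi) * math.gamma(n + 2.5) / math.gamma(n + 5)

d2 = wilson_d1S(2)
print(round(d2, 3))               # ~3.26 for n = 2

# "Rydberg"-energy estimate of the J/psi "Bohr" radius:
# E = m_D + m_Dbar - m_Jpsi together with E = 1/(m_c r^2) gives r = 1/sqrt(m_c E).
HBARC = 0.1973                    # GeV*fm
m_D, m_Jpsi = 1.8648, 3.0969      # GeV
m_c = 1.5                         # constituent charm mass, an assumed value
E = 2.0 * m_D - m_Jpsi            # ~0.63 GeV of binding to break into D Dbar
r0 = 1.0 / math.sqrt(m_c * E) * HBARC
print(round(r0, 2))               # ~0.20 fm, the radius quoted above
```

The recovered radius of about 0.20 fm matches the $r_{0}(J/\psi)$ value used in this analysis, which supports the naive "Rydberg" estimate despite its simplicity.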
Figure 1: The relationship between the scattering lengths $|\alpha_{VN}|$ as
a function of the $m_{N}^{2}m_{V}r_{0}^{3}/\sqrt{s_{thr}}$ of the vector
mesons, including $\omega$, $\rho^{0}$, $\phi$, J/$\psi$ and $\Upsilon$
mesons.
Table 1 shows the values of the scattering length $|\alpha_{Vp}|$ for $\omega$p Ishikawa _et al._ (2020), $\rho^{0}$p Wang _et al._ (2022b); Klein (1997), $\phi$p Strakovsky _et al._ (2020b); Seraydaryan _et al._ (2014); Dey _et al._ (2014), $J/\psi$p Pentchev and Strakovsky (2021) and $\Upsilon$p Strakovsky _et al._ (2021), together with the “Bohr” radius and the threshold energy of the different vector meson-proton interactions. Following Wang’s previous work Wang _et al._ (2022b), we extracted the $\rho^{0}$p scattering length used here from the differential cross-section data Klein (1997) of near-threshold $\rho^{0}$ photoproduction extrapolated to $t=0$. In addition, the differential cross-section $\phi$-meson photoproduction data Seraydaryan _et al._ (2014); Dey _et al._ (2014) from the CLAS threshold measurements are used to evaluate the $\phi$p scattering length $|\alpha_{\phi p}|$ at $t=0$, and the extracted $\alpha_{\phi p}$ is used in this analysis. Fig. 1 shows the relationship between the vector meson-proton scattering length and the trace anomaly contribution of the proton mass. By fitting the distribution in Fig. 1 with a linear function, the fitting result is $\frac{4\pi d_{2}^{1S}}{27}T_{A}$ = (0.060 $\pm$ 0.003). Since $\frac{4\pi d_{2}^{1S}}{27}$ is a constant equal to 1.333, $T_{A}$ = (0.228 $\pm$ 0.012) is obtained.
Table 1: The values of the scattering length $|\alpha_{Vp}|$ for $\omega$p, $\rho^{0}$p, $\phi$p, J/$\psi$p and $\Upsilon$p, the “Bohr” radius $r_{0}$, and the threshold energy $\sqrt{s_{thr}}$ of the different vector meson-nucleon interactions.

Vector meson | $m_{V}$ (GeV) | $|\alpha_{Vp}|$ (fm) | $r_{0}$ (fm) | $\sqrt{s_{thr}}$ (GeV)
---|---|---|---|---
$\omega$ | 0.7827 | 0.97 $\pm$ 0.16 Ishikawa _et al._ (2020) | 0.75 | 1.721
$\rho^{0}$ | 0.7753 | 0.45 $\pm$ 0.05 Wang _et al._ (2022b); Klein (1997) | 0.75 | 1.713
$\phi$ | 1.0195 | 0.19 $\pm$ 0.01 Strakovsky _et al._ (2020b); Seraydaryan _et al._ (2014); Dey _et al._ (2014) | 0.41 | 1.958
J/$\psi$ | 3.0960 | 0.0245 $\pm$ 0.039 Pentchev and Strakovsky (2021) | 0.20 | 4.035
$\Upsilon$ | 9.4603 | (0.51 $\pm$ 0.03) $\times$ 10${}^{-3}$ Strakovsky _et al._ (2021) | 0.10 | 10.40
## IV Discussion and summary
In the study, we established the relationship between the vector meson-nucleon
scattering length and the trace anomaly contribution of the nucleon mass for
the first time through the differential cross section of vector meson
photoproductions process at near threshold. This means that as long as the
scattering length of the vector meson-proton is accurately measured of near-
threshold meson photoproductions, the proton trace anomaly energy can be
obtained with the relationship we obtained. The contribution of trace anomaly
in proton we extracted is (22.8$\%$ $\pm$ 1.2$\%$), which is consistent with
the result of lattice QCD calculation Yang _et al._ (2018). The percentage of
trace anomaly energy inside the proton $T_{A}$ we obtained depends on the
“Bohr” radius $r_{0}$ of vector meson from the theoretical model calculations,
and the extraction value of scattering lengths $|\alpha_{VN}|$ from the
differential cross-section data of near-threshold meson photoproductions. A
related analysis of systematical uncertainties can be found in Ref. Kou _et
al._ (2022). As we know, the origin of the proton mass is very complicated in modern particle physics, and no definite conclusion has been reached in recent years. In addition, it is very difficult to measure the proton
trace anomalous energy in experiments. While different vector meson probes
lead to differences in the specific process of near-threshold photoproduction
as well as scattering length, we can still glimpse that the components of the
proton mass are independent of the specific reaction process. In this work, we
aim to understand the near-threshold photoproduction of vector mesons from
other perspectives and to find an experimentally measurable quantity as the
input for the study of the origin of the proton mass. The above result of
trace anomaly contribution in proton may provide useful theoretical
information for an in-depth understanding of nucleon interaction with vector
mesons and the trace anomalous energy contribution to the nucleon mass.
Furthermore, the Electron-Ion Collider in the USA (EIC) Accardi _et al._ (2016) and the Electron-ion collider in China (EicC) Chen (2018); Chen _et al._ (2020); Anderle _et al._ (2021) will provide favourable circumstances to study near-threshold vector meson photoproduction by exploiting the virtual photon flux. The vector meson photoproduction experiments at the EIC and EicC will further test the VMD model and will also strengthen our understanding of the properties of hadronic matter and hadronic interactions.
###### Acknowledgements.
This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB34030301, the National Natural Science Foundation of China No. 12005266 and the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008.
## References
* Sakurai (1960) J. Sakurai, Annals of Physics 11, 1 (1960).
* Gell-Mann and Zachariasen (1961) M. Gell-Mann and F. Zachariasen, Phys. Rev. 124, 953 (1961).
* Titov _et al._ (2007) A. I. Titov, T. Nakano, S. Date, and Y. Ohashi, Phys. Rev. C 76, 048202 (2007), arXiv:hep-ph/0703227 .
* Strakovsky _et al._ (2015) I. I. Strakovsky _et al._ , Phys. Rev. C 91, 045207 (2015), arXiv:1407.3465 [nucl-ex] .
* Strakovsky _et al._ (2020a) I. Strakovsky, D. Epifanov, and L. Pentchev, Phys. Rev. C 101, 042201 (2020a), arXiv:1911.12686 [hep-ph] .
* Strakovsky _et al._ (2020b) I. I. Strakovsky, L. Pentchev, and A. Titov, Phys. Rev. C 101, 045201 (2020b), arXiv:2001.08851 [hep-ph] .
* Pentchev and Strakovsky (2021) L. Pentchev and I. I. Strakovsky, Eur. Phys. J. A 57, 56 (2021), arXiv:2009.04502 [hep-ph] .
* Wang _et al._ (2022a) X.-Y. Wang, F. Zeng, and I. I. Strakovsky, Phys. Rev. C 106, 015202 (2022a), arXiv:2205.07661 [hep-ph] .
* Wang _et al._ (2022b) X.-Y. Wang, F. Zeng, Q. Wang, and L. Zhang, (2022b), arXiv:2206.09170 [nucl-th] .
* Han _et al._ (2022) C. Han, W. Kou, R. Wang, and X. Chen, (2022), arXiv:2210.11276 [nucl-th] .
* Strakovsky _et al._ (2021) I. I. Strakovsky, W. J. Briscoe, L. Pentchev, and A. Schmidt, Phys. Rev. D 104, 074028 (2021), arXiv:2108.02871 [hep-ph] .
* Wang _et al._ (2020) R. Wang, J. Evslin, and X. Chen, Eur. Phys. J. C 80, 507 (2020), arXiv:1912.12040 [hep-ph] .
* Kou _et al._ (2022) W. Kou, R. Wang, and X. Chen, Eur. Phys. J. A 58, 155 (2022), arXiv:2103.10017 [hep-ph] .
* Ji (1995a) X.-D. Ji, Phys. Rev. D 52, 271 (1995a), arXiv:hep-ph/9502213 .
* Gryniuk _et al._ (2020) O. Gryniuk, S. Joosten, Z.-E. Meziani, and M. Vanderhaeghen, Phys. Rev. D 102, 014016 (2020), arXiv:2005.09293 [hep-ph] .
* Ji (1995b) X.-D. Ji, Phys. Rev. Lett. 74, 1071 (1995b), arXiv:hep-ph/9410274 .
* Buras (1980) A. J. Buras, Rev. Mod. Phys. 52, 199 (1980).
* Kharzeev _et al._ (1999) D. Kharzeev, H. Satz, A. Syamtomov, and G. Zinovjev, Eur. Phys. J. C 9, 459 (1999), arXiv:hep-ph/9901375 .
* Kharzeev (1996) D. Kharzeev, Proc. Int. Sch. Phys. Fermi 130, 105 (1996), arXiv:nucl-th/9601029 .
* Krutov _et al._ (2016) A. F. Krutov, R. G. Polezhaev, and V. E. Troitsky, Phys. Rev. D 93, 036007 (2016), arXiv:1602.00907 [hep-ph] .
* Bhagwat and Maris (2008) M. S. Bhagwat and P. Maris, Phys. Rev. C 77, 025203 (2008), arXiv:nucl-th/0612069 .
* Grigoryan and Radyushkin (2007) H. R. Grigoryan and A. V. Radyushkin, Phys. Rev. D 76, 095007 (2007), arXiv:0706.1543 [hep-ph] .
* Peskin (1979) M. E. Peskin, Nucl. Phys. B 156, 365 (1979).
* Kharzeev _et al._ (1996) D. Kharzeev, H. Satz, A. Syamtomov, and G. Zinovev, Phys. Lett. B 389, 595 (1996), arXiv:hep-ph/9605448 .
* Ishikawa _et al._ (2020) T. Ishikawa _et al._ , Phys. Rev. C 101, 052201 (2020), arXiv:1904.02797 [nucl-ex] .
* Klein (1997) F. Klein, in _TJNAF Workshop on N ∗ Physics,(Washington, DC)_ (1997).
* Seraydaryan _et al._ (2014) H. Seraydaryan _et al._ (CLAS), Phys. Rev. C 89, 055206 (2014), arXiv:1308.1363 [hep-ex] .
* Dey _et al._ (2014) B. Dey, C. A. Meyer, M. Bellis, and M. Williams (CLAS), Phys. Rev. C 89, 055208 (2014), [Addendum: Phys.Rev.C 90, 019901 (2014)], arXiv:1403.2110 [nucl-ex] .
* Yang _et al._ (2018) Y.-B. Yang, J. Liang, Y.-J. Bi, Y. Chen, T. Draper, K.-F. Liu, and Z. Liu, Phys. Rev. Lett. 121, 212001 (2018), arXiv:1808.08677 [hep-lat] .
* Accardi _et al._ (2016) A. Accardi _et al._ , Eur. Phys. J. A 52, 268 (2016), arXiv:1212.1701 [nucl-ex] .
* Chen (2018) X. Chen, _Proceedings, 26th International Workshop on Deep Inelastic Scattering and Related Subjects (DIS 2018): Port Island, Kobe, Japan, April 16-20, 2018_ , PoS DIS2018, 170 (2018), arXiv:1809.00448 [nucl-ex] .
* Chen _et al._ (2020) X. Chen, F.-K. Guo, C. D. Roberts, and R. Wang, Few Body Syst. 61, 43 (2020), arXiv:2008.00102 [hep-ph] .
* Anderle _et al._ (2021) D. P. Anderle _et al._ , (2021), arXiv:2102.09222 [nucl-ex] .
# Diffusion Probabilistic Model Made Slim
Xingyi Yang1 Daquan Zhou2 Jiashi Feng2 Xinchao Wang1
National University of Singapore1 ByteDance Inc.2
<EMAIL_ADDRESS>{daquanzhou<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Despite the recent visually-pleasing results achieved, the massive
computational cost has been a long-standing flaw for diffusion probabilistic
models (DPMs), which, in turn, greatly limits their applications on resource-
limited platforms. Prior methods towards efficient DPM, however, have largely
focused on accelerating the testing yet overlooked their huge complexity and
sizes. In this paper, we make a dedicated attempt to lighten DPM while
striving to preserve its favourable performance. We start by training a small-
sized latent diffusion model (LDM) from scratch, but observe a significant
fidelity drop in the synthetic images. Through a thorough assessment, we find
that DPM is intrinsically biased against high-frequency generation, and learns
to recover different frequency components at different time-steps. These
properties make compact networks unable to represent frequency dynamics with
accurate high-frequency estimation. Towards this end, we introduce a
customized design for slim DPM, which we term as Spectral Diffusion (SD), for
light-weight image synthesis. SD incorporates wavelet gating in its
architecture to enable frequency dynamic feature extraction at every reverse
steps, and conducts spectrum-aware distillation to promote high-frequency
recovery by inverse weighting the objective based on spectrum magnitudes.
Experimental results demonstrate that SD achieves an 8-18$\times$
computational complexity reduction as compared to the latent diffusion models
on a series of conditional and unconditional image generation tasks while
retaining competitive image fidelity.
## 1 Introduction
Diffusion Probabilistic Models (DPMs) [16, 55, 53] have recently emerged as a
powerful tool for generative modeling, and have demonstrated impressive results
in image synthesis [42, 8, 43], video generation [19, 15, 68] and 3D editing
[39]. Nevertheless, the gratifying results come with a price: DPMs suffer from
massive model sizes. In fact, state-of-the-art DPMs require billions of
parameters and hundreds or even thousands of inference steps per image. For
example, _DALL $\cdot$ E 2_ [42], which is composed of 4 separate diffusion
models, requires 5.5B parameters and 356 sampling steps in total. Such
enormous model size, in turn, makes DPMs extremely cumbersome to be employed
in resource-limited platforms.
However, existing efforts towards efficient DPMs have focused on model
acceleration but largely overlooked model lightening. For example, the
approaches of [36, 47, 34, 51, 1, 30, 33] strive for faster sampling, while
those of [17, 13, 57, 43] rely on reducing the input size. Admittedly, all of
these methods give rise to shortened training or inference time, yet still,
the large sizes preclude them from many real-world application scenarios.
Figure 1: (1) Visualizing the frequency-domain gap among images generated with
the full DPM [43], Lite-DPM, and our SD on the FFHQ [26] dataset. Lite-DPM is
unable to recover fine-grained textures, while SD can produce sharp edges and
realistic patterns. (2) Model size, Multiply-Add cumulation (MACs) and FID
score on class-conditioned ImageNet [7]. Our model achieves compelling visual
quality with minimal parameters and computational cost. ∗ indicates our
re-implemented version.
In this paper, we make a dedicated attempt towards building compact DPMs. To
start with, we train a lite version of the popular latent diffusion model
(LDM) [43] by reducing the channel size. We show the image generated by the
original and lite LDM in Figure 1. Although the lite LDM indeed sketches
the overall structure of the faces, the high-frequency components, such as the
skin and hair textures, are unfortunately poorly recovered. This phenomenon
can be in fact revealed by the Discrete Fourier Transform (DFT) coefficient
shown on the right column, indicating that the conventional design for DPMs
leads to high-frequency deficiency when the model is made slim.
We then take an in-depth inspection on the DPMs through the lens of frequency,
which results in two key observations. (1) Frequency Evolution. Under mild
assumptions, we mathematically prove that DPMs learn different functionalities
at different time-steps. Specifically, we show that the optimal denoiser in
fact boils down to a cascade of Wiener filters [61] with growing bandwidths.
After recovering the low-frequency components, high-frequency features are
added gradually in the later denoising stages. As a consequence of this
evolution property, small DPMs fail to represent the dynamic bandwidths with
their limited parameters. (2) Frequency Bias. DPM is biased towards dominant frequency
components of the data distribution. It is most obvious when the noise
amplitude is small, leading to inaccurate noise prediction at the end of the
reverse process. As such, small DPMs struggle to recover the high-frequency
band and image details.
Motivated by these observations, we propose a novel Spectral Diffusion (SD)
model, tailored for light-weight image synthesis. Our core idea is to
introduce the frequency dynamics and priors into the architecture design and
training objective of the small DPM, so as to explicitly preserve the high-
frequency details. The proposed solution consists of two parts, each
accounting for one of the aforementioned observations. For the frequency evolution,
we propose a wavelet gating operation, which enables the network to
dynamically adapt to the spectrum response at different time-steps. In the
upsample and downsample stage, the input feature is first decomposed through
wavelet transforms and the coefficients are re-weighted through a learnable
gating function. It significantly lowers the parameter requirements to
represent the frequency evolution in the reverse process.
To compensate for the frequency bias of small DPMs, we distill the high-
frequency knowledge from a teacher DPM to a compact network. This is implemented
by inverse weighting the distillation loss based on spectrum magnitudes.
Specifically, high-frequency recovery is strengthened by over-weighting
frequency bands of small magnitudes. The student thereby focuses on texture
recovery for image generation. By seamlessly integrating both designs, we are
able to build a slim latent diffusion model, SD, which largely preserves the
performance of LDM. Notably, SD by nature inherits the merits of DPMs,
including superior sample diversity, training stability and tractable
parameterization. As shown in Figure 1, our model is $8\sim 18\times$ smaller
and runs $2\sim 5\times$ faster than the original LDM, while achieving
competitive image fidelity.
The contributions of this study are threefold:
1. 1.
This study investigates the task of diffusion model slimming, which has
remained largely unexplored.
2. 2.
We identify that the key challenge lies in its unrealistic recovery for the
high-frequency components. By probing DPMs from a frequency perspective, we
show that there exists a spectrum evolution over different denoising steps,
and the rare frequencies cannot be accurately estimated by small models.
3. 3.
We propose SD, a slim DPM that effectively restores imagery textures by
enhancing high-frequency generation performance. SD achieves gratifying
performance on image generation tasks at a low cost.
## 2 Related Work
Diffusion Probabilistic Models. DPMs [50, 16] have achieved state-of-the-art
results in terms of both log-likelihood estimation [52] and sample quality
[8], compared to Generative adversarial Network (GAN)-based [26, 12, 25]
approaches. It has been pointed out that DPM, in its essence, is a score-based
model [60, 55, 54] with annealed noise scheduling [53]. The reverse process is
considered as solving reverse stochastic differential equations (SDE) [55].
Current best-performed DPMs are implemented as a time-conditioned UNet [45, 8,
55] armed with self-attention [59] and cross-attention [43, 21]. Parameter
moving average [38], re-weighted objectives [16] and advanced scheduling [38]
significantly improve the visual quality. In this work, we focus on small
diffusion models designed for image generation, which have rarely been studied before.
Efficient Diffusion. Efficient diffusion models for low-resource inference
have recently become a popular research topic. One approach is to reduce
the sampling steps, which is either done by distilling multiple steps into a
single step [36, 47, 34], or shortening the reverse steps while maintaining
the image fidelity [51, 1, 30, 33]. Another possible solution explores the
idea of diffusing in a lower-dimensional space and then scaling it up, with
a cascade structure [17] or in the latent space [57, 43]. In distinction from
them, we build an efficient diffusion model using light-weight architecture
and knowledge distillation.
Frequency Analysis for Generative Model. Neural networks tend to fit low-
frequency signals first and later shift to high-frequency components, which is
referred to as the _frequency principle_ of deep neural networks [63, 64, 2]. The
frequency bias is also observed when training deep generative models like GANs
[10, 5, 27, 49], where the generator struggles to build up natural high-
frequency details.
In this paper, we examine the frequency behavior of DPMs. Taking advantage of
its frequency properties, our SD achieves realistic image generation at a low
cost.
## 3 Background
### 3.1 Denoising Diffusion Probabilistic Models
A diffusion model reverses a progressive noising process defined on latent
variables. Given data $\mathbf{x}_{0}\sim q(\mathbf{x}_{0})$ sampled from the
real distribution, we perturb the data with Gaussian noise of zero
mean and variance $\beta_{t}$ for $T$ steps
$\displaystyle
q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I})$
(1)
where $t\in[1,T]$ and $0<\beta_{1:T}<1$ denote the noise-scale schedule. At
the end of the process, $\mathbf{x}_{T}\to\mathcal{N}(0,\mathbf{I})$ converges to
Gaussian white noise. Although sampling from the noise-perturbed distribution
$q(\mathbf{x}_{t})=\int q(\mathbf{x}_{1:t}|\mathbf{x}_{0})d\mathbf{x}_{1:t-1}$
requires a tedious numerical integration over steps, the choice of Gaussian
noise provides a closed-form solution to generate $\mathbf{x}_{t}$ at an
arbitrary time-step through
$\displaystyle\mathbf{x}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\bm{\epsilon},\quad\text{where}\quad\bm{\epsilon}\sim\mathcal{N}(0,\mathbf{I})$
(2)
where $\alpha_{t}=1-\beta_{t}$ and
$\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}$. A variational Markov chain in
the reverse process is parameterized as a time-conditioned denoising neural
network $\mathbf{s}(\mathbf{x},t;\bm{\theta})$ with
$p_{\bm{\theta}}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\frac{1}{\sqrt{1-\beta_{t}}}(\mathbf{x}_{t}+\beta_{t}\mathbf{s}(\mathbf{x}_{t},t;\bm{\theta})),\beta_{t}\mathbf{I})$.
The denoiser is trained to minimize a re-weighted evidence lower bound (ELBO)
that fits the noise
$\displaystyle\mathcal{L}_{\text{DDPM}}$
$\displaystyle=\mathbb{E}_{t,\mathbf{x}_{0},\bm{\epsilon}}\Big{[}||\bm{\epsilon}-\mathbf{s}(\mathbf{x}_{t},t;\bm{\theta})||_{2}^{2}\Big{]}$
(3)
$\displaystyle=\mathbb{E}_{t,\mathbf{x}_{0},\bm{\epsilon}}\Big{[}||\nabla_{\mathbf{x}_{t}}\log
p(\mathbf{x}_{t}|\mathbf{x}_{0})-\mathbf{s}(\mathbf{x}_{t},t;\bm{\theta})||_{2}^{2}\Big{]}$
(4)
where $\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}|\mathbf{x}_{0})$ is
also called the score function [53]. Thus, the denoiser equivalently learns to
recover the derivative that maximizes the data log-likelihood [22, 60]. With a
trained
$\mathbf{s}(\mathbf{x},t;\bm{\theta}^{*})\approx\nabla_{\mathbf{x}_{t}}\log
p(\mathbf{x}_{t}|\mathbf{x}_{0})$, we generate the data by reversing the
Markov chain
$\displaystyle\mathbf{x}_{t-1}\leftarrow\frac{1}{\sqrt{1-\beta_{t}}}(\mathbf{x}_{t}+\beta_{t}\mathbf{s}(\mathbf{x}_{t},t;\bm{\theta}))+\sqrt{\beta_{t}}\bm{\epsilon}_{t}$
(5)
The reverse process could be understood as going along
$\nabla_{\mathbf{x}_{t}}\log p(\mathbf{x}_{t}|\mathbf{x}_{0})$ from
$\mathbf{x}_{T}$ to maximize the data likelihood.
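The forward sampling of Eq. (2) and the ancestral reverse step of Eq. (5) can be sketched numerically; the linear beta schedule below is an illustrative assumption, not the paper's exact setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear beta schedule (the paper's exact schedule may differ).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alpha_bars = np.cumprod(1.0 - betas)

def forward_sample(x0, t):
    """Sample x_t directly from x_0 via the closed form of Eq. (2)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps, eps

def reverse_step(xt, t, score):
    """One ancestral sampling step of Eq. (5), given a score estimate s(x_t, t)."""
    noise = rng.standard_normal(xt.shape) if t > 0 else np.zeros_like(xt)
    return (xt + betas[t] * score) / np.sqrt(1.0 - betas[t]) + np.sqrt(betas[t]) * noise

x0 = rng.standard_normal(16)
xT, eps = forward_sample(x0, T - 1)
# By t = T, alpha_bar_T is vanishingly small, so x_T is (almost) pure white noise.
```

In a real sampler, `score` would be the trained network's output at step `t`; here it is only a placeholder argument.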
### 3.2 Frequency Domain Representation of Images
Frequency domain analysis decomposes an image according to a set of basis
functions. We focus on two discrete transformations: the _Fourier_ and _Wavelet_
Transform.
Given an $H\times W$ input signal $\mathbf{x}\in\mathbb{R}^{H\times W}$ (for
simplicity, we only give the formulation for gray-scale images, though it
extends to multi-channel inputs), the Discrete Fourier Transform (DFT)
$\mathcal{F}$ projects it onto a collection of sine and cosine waves of
different frequencies and phases
$\displaystyle\mathcal{X}(u,v)=\mathcal{F}[\mathbf{x}]=\sum_{x=1}^{H}\sum_{y=1}^{W}\mathbf{x}(x,y)e^{-j2\pi(\frac{u}{H}x+\frac{v}{W}y)}$
$\mathbf{x}(x,y)$ is the pixel value at $(x,y)$; $\mathcal{X}(u,v)$ is the
complex coefficient at frequency $(u,v)$; $e$ and $j$ are Euler’s number and the
imaginary unit.
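As a quick numerical sanity check, the DFT sum can be evaluated directly and compared against a library FFT (we use NumPy's 0-indexed convention, which differs from the 1-based indices above only by a per-frequency phase factor):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
x = rng.standard_normal((H, W))

# Direct evaluation of the DFT sum, 0-indexed to match numpy's convention.
u = np.arange(H)[:, None, None, None]
v = np.arange(W)[None, :, None, None]
xx = np.arange(H)[None, None, :, None]
yy = np.arange(W)[None, None, None, :]
basis = np.exp(-2j * np.pi * (u * xx / H + v * yy / W))
X_direct = (basis * x[None, None]).sum(axis=(2, 3))

X_fft = np.fft.fft2(x)  # matches the direct sum
```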
On the other hand, the Discrete Wavelet Transform (DWT) projects it onto
multi-resolution wavelet functions. In the single-scale case, $\mathbf{x}$ is
decomposed into 4 wavelet coefficients
$\mathbf{x}_{\textsf{LL}},\mathbf{x}_{\textsf{LH}},\mathbf{x}_{\textsf{HL}},\mathbf{x}_{\textsf{HH}}=\textsf{DWT}(\mathbf{x})$
at half the scale, where
$\mathbf{x}_{\\{\textsf{LL},\textsf{LH},\textsf{HL},\textsf{HH}\\}}\in\mathbb{R}^{\frac{H}{2}\times\frac{W}{2}}$.
$\mathbf{x}_{\textsf{LL}}$ is the low-frequency component, and
$\mathbf{x}_{\\{\textsf{LH},\textsf{HL},\textsf{HH}\\}}$ are high-frequency
components that contain the textural details. The coefficients can then be
inverted and up-sampled back to the original input
$\mathbf{x}=\textsf{IDWT}(\mathbf{x}_{\textsf{LL}},\mathbf{x}_{\textsf{LH}},\mathbf{x}_{\textsf{HL}},\mathbf{x}_{\textsf{HH}})$.
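A minimal single-level Haar DWT/IDWT pair illustrating the decomposition above (a sketch; normalization and sign conventions vary across libraries):

```python
import numpy as np

def haar_dwt(x):
    """Single-level 2D Haar DWT: returns (LL, LH, HL, HH), each H/2 x W/2."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2  # low-pass in both directions
    lh = (a + b - c - d) / 2  # high-pass vertically
    hl = (a - b + c - d) / 2  # high-pass horizontally
    hh = (a - b - c + d) / 2  # high-pass in both directions
    return ll, lh, hl, hh

def haar_idwt(ll, lh, hl, hh):
    """Exact inverse of haar_dwt, doubling the spatial resolution."""
    h2, w2 = ll.shape
    x = np.empty((2 * h2, 2 * w2))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

x = np.random.default_rng(0).standard_normal((8, 8))
coeffs = haar_dwt(x)
x_rec = haar_idwt(*coeffs)  # perfect reconstruction
```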
## 4 Frequency Perspective for Diffusion
In general signal processing, denoising is often performed in frequency space.
Similar to Figure 1, Table 1 compares the low- and high-frequency errors (the
error is computed as
$\mathbb{E}_{f}[\mathbb{E}[|\mathcal{F}_{real}|]-\mathbb{E}[|\mathcal{F}_{gen}|]]$
over 300 real and generated samples, with a low-high cut-off frequency of
28Hz) for different DPMs on the FFHQ dataset. Lite-LDM performs poorly due to its
lack of high-frequency generation.
Method | #Param | FID$\downarrow$ | Low-freq Error$\downarrow$ | High-freq Error$\downarrow$
---|---|---|---|---
LDM | 274.1M | 5.0 | 0.11 | 0.75
Lite-LDM | 22.4M | 17.3 | 0.28(+0.17) | 3.35(+2.17)
Table 1: Low-freq and High-freq error for different model sizes.
Thus, we examine DPM’s behavior in the frequency domain. As illustrated in
Figure 2, we make two findings: (1) _Frequency Evolution._ Diffusion model
learns to recover the low-frequency components at first, and gradually adds in
photo-realistic and high-frequency details. (2) _Frequency Bias._ Diffusion
model makes biased recovery for the minority frequency band.
Figure 2: Illustration of the Frequency Evolution and Bias for Diffusion
Models. In the reverse process, the optimal filters recover low-frequency
components first and add on the details at the end. The predicted score
functions may be incorrect for rare patterns, thus failing to recover complex
and fine-grained textures.
### 4.1 Spectrum Evolution over Time
DPM optimizes a time-conditioned network to fit noise at multiple scales,
which gives rise to a denoising trajectory over time-steps. We take a close
look at this trajectory through the lens of frequency. When assuming the
network is a linear filter, we give the optimal filter in terms of its
spectrum response at every timestep. This filter is commonly known as Wiener
filter [61].
Proposition 1. Assume $\mathbf{x}_{0}$ is a wide-sense stationary signal and
$\bm{\epsilon}$ is white noise of variance $\sigma^{2}=1$. For
$\mathbf{x}_{t}=\sqrt{\bar{\alpha}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}}\bm{\epsilon}$,
the optimal linear denoising filter $h_{t}$ at time $t$ that minimizes
$J_{t}=\|h_{t}\ast\mathbf{x}_{t}-\bm{\epsilon}\|^{2}$ has the closed-form
solution
$\displaystyle\mathcal{H}_{t}^{*}(f)=\frac{1}{\bar{\alpha}|\mathcal{X}_{0}(f)|^{2}+1-\bar{\alpha}}$
(6)
where $|\mathcal{X}_{0}(f)|^{2}$ is the power spectrum of $\mathbf{x}_{0}$ and
$\mathcal{H}^{*}_{t}(f)$ is the frequency response of $h_{t}^{*}$.
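The filter's bandwidth behavior can be checked numerically. Plugging the power-law prior $|\mathcal{X}_{0}(f)|^{2}=1/f^{2}$ (used below) into Eq. (6), the crossover frequency where signal and noise power are equal grows as $\bar{\alpha}\to 1$, i.e. the effective signal band widens towards the end of the reverse process:

```python
import numpy as np

def wiener_response(f, alpha_bar):
    """Eq. (6) evaluated with the power-law prior |X0(f)|^2 = 1/f^2."""
    return 1.0 / (alpha_bar / f**2 + (1.0 - alpha_bar))

def crossover_freq(alpha_bar):
    """Frequency where signal power alpha_bar/f^2 equals noise power 1-alpha_bar."""
    return np.sqrt(alpha_bar / (1.0 - alpha_bar))

# Early reverse process (alpha_bar small): only a narrow low-frequency band
# carries signal; late reverse process (alpha_bar near 1): the band is wide.
f_early = crossover_freq(0.01)
f_late = crossover_freq(0.99)
```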
Although the linear assumption poses a strong restriction on the model
architecture, we believe it provides valuable insights into what has been done
in the reverse process.
DPM goes from structure to details. In this study, we make a widely accepted
assumption about the power spectra of natural images,
$\mathbb{E}[|X_{0}(f)|^{2}]=A_{s}(\theta)/f^{\alpha_{S}(\theta)}$, which follows
a power law [58, 3, 9, 56]. $A_{s}(\theta)$ is an amplitude scaling
factor and $\alpha_{S}(\theta)$ is the frequency exponent. If we set
$A_{s}(\theta)=1$ and $\alpha_{S}(\theta)=2$, the frequency response of the
signal reconstruction filter $1-\sqrt{1-\bar{\alpha}}h$ is shown in Figure 3.
In the reverse process, $t$ goes from $T\to 0$, and $\bar{\alpha}$ increases
from $0\to 1$. Therefore, DPM displays a spectrum-varying behavior over time.
In the beginning, we have a narrow-band filter ($\bar{\alpha}=0.1$ and
$\bar{\alpha}=0.01$) that only restores the low-frequency components
controlling the rough structures. As $t$ decreases, $\bar{\alpha}$ gradually
increases, and more high-frequency details are restored in the images, such
as human hair, wrinkles, and pores.
We plot the denoised predictions $\hat{\mathbf{x}}_{0}$ at different steps
using the pre-trained LDM [43] in Figure 2, which shows that DPM generates
low-frequency content first and then transitions to high-frequency content. The
same empirical observation that DPM goes from rough to detailed has been made
in [16, 35, 6, 43], while we are the first to give its numerical solution.
Figure 3: $1-(1-\bar{\alpha})|H^{*}(f)|^{2}$ of the optimal linear denoising
filter with different $\bar{\alpha}$.
Figure 4: Toy example for 1D signal fitting. A small DPM is unable to recover
minority frequency components.
### 4.2 Frequency Bias in Diffusion Model
Another challenge in diffusion-based models is the inaccurate denoising
estimation in low-density regions [53]. It results from the expectation over
$p(\mathbf{x}_{0})$ in the loss function
$\displaystyle\mathcal{L}_{\text{DDPM}}=\int
p(\mathbf{x}_{0})\mathbb{E}_{t,\bm{\epsilon}}\Big{[}||\bm{\epsilon}-\mathbf{s}(\mathbf{x}_{t},t;\bm{\theta})||_{2}^{2}\Big{]}\text{d}\mathbf{x}_{0}$
(7)
Since the denoising objective is weighted by $p(\mathbf{x}_{0})$, the trained
diffusion will be biased towards the high-density region, while ignoring the
long-tail patterns.
For image generation tasks, one long-tail pattern is the frequency bias. While
the low-frequency images are dominant with large $p(\mathbf{x}_{0})$, very few
samples contain high-frequency components. Training small DPMs on the biased
data distribution makes it difficult to generate samples with complex textures
and realistic high-frequency patterns.
Example 1. We fit a toy diffusion model to the 1D functions
$f(x)=\cos(\alpha 2\pi x)$, where $P(\alpha=3)=0.8$ and $P(\alpha=5)=0.2$. We
adopt a two-layer feed-forward neural network, with 1000 denoising steps and
hidden units $M=\\{64,1024\\}$. More details are in the Supplementary.
We plot the 300 generated signals in Figure 4 (Top), their DFT magnitudes
(Bottom Right), and the mean frequency histogram (Bottom Left). The small model
($M=64$) has difficulty recovering the minority frequencies other than
$\alpha=3$, while the large model ($M=1024$) achieves smooth denoised results
over all frequency bands, especially for $\alpha=5$.
It provides concrete evidence that small DPMs have intrinsic defects in
recovering the high frequencies.
## 5 Spectral Diffusion Model
As explained above, our goal is to slim down the DPMs by introducing the
frequency dynamics and priors into the architecture design and training
objectives. Taking the LDM [43] as our baseline, we design a wavelet-gating
module to enable time-dynamic inference for the network with a limited model
size. A spectrum-aware distillation is applied to enhance the high-frequency
generation performance. Both modifications allow us to achieve photo-realistic
image generation with minimal model size and computational effort.
### 5.1 Dynamic Wavelet Gating
As depicted in Section 4.1, the reverse process requires a cascade of filters
with dynamic frequency responses. The vanilla UNet [45], while effective in
reconstructing image details, is incapable of incorporating a dynamic spectrum
into a single set of parameters. As a result, a small-sized DPM cannot
compensate for the changing bandwidth.
In response to such frequency evolution, we propose to insert the Wavelet
Gating (WG) module into the network to automatically adapt it to the varying
frequency response. WG decomposes the feature map into wavelet bands and
selectively attends to the proper frequency at different reverse steps, which
is uniquely tailored for the diffusion model.
Gating over the Wavelet Coefficients. We replace all down-sampling and
up-sampling operations in the UNet with DWT and IDWT [11, 65], and apply a soft
gating operation on the wavelet coefficients to facilitate step-adaptive image
denoising. We call them WG-Down and WG-Up, as shown in Figure 5.
Following the channel attention operation [62, 20, 40], information from input
feature $\mathbf{X}$ is aggregated to produce a soft gating mask
$\displaystyle g_{\\{\textsf{LL},\textsf{LH},\textsf{HL},\textsf{HH}\\}}$
$\displaystyle=\text{Sigmoid}(\text{FFN}(\text{Avgpool}(\mathbf{X})))$ (8)
where $g_{i}$ is the gating score of each wavelet band; FFN is a two-layer
feed-forward network and Avgpool stands for average pooling. The
coefficients are then gated with $g_{i}$ to produce the output
$\mathbf{X}^{\prime}$.
Figure 5: WG-Down and WG-Up with wavelet gating.
In the WG-Down, we apply WG after the DWT operation to fuse the sub-band
coefficients with weighted summation
$\mathbf{X}^{\prime}=\sum_{i\in\\{\textsf{LL},\textsf{LH},\textsf{HL},\textsf{HH}\\}}g_{i}\odot\mathbf{X}_{i}$,
where $\odot$ is the element-wise multiplication. In the WG-Up, the input
feature is split into 4 chunks serving as the wavelet coefficients. Then, WG is
carried out to re-weight each sub-band before
$\mathbf{X}^{\prime}=\textsf{IDWT}(g_{\textsf{LL}}\odot\mathbf{X}_{\textsf{LL}},g_{\textsf{LH}}\odot\mathbf{X}_{\textsf{LH}},g_{\textsf{HL}}\odot\mathbf{X}_{\textsf{HL}},g_{\textsf{HH}}\odot\mathbf{X}_{\textsf{HH}})$.
In this paper, we apply the Haar wavelet by default.
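A single-channel NumPy sketch of the gating in Eq. (8); the FFN weight shapes and band sizes here are illustrative assumptions, since the actual module operates on multi-channel feature maps:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wavelet_gate(bands, w1, w2):
    """Soft gating over wavelet sub-bands, following Eq. (8).

    bands: dict with keys 'LL', 'LH', 'HL', 'HH' (sub-band feature maps).
    w1, w2: weights of a two-layer FFN acting on pooled band statistics."""
    keys = ('LL', 'LH', 'HL', 'HH')
    pooled = np.array([bands[k].mean() for k in keys])   # Avgpool per band
    g = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0))       # one score per band
    return {k: g[i] * bands[k] for i, k in enumerate(keys)}, g

rng = np.random.default_rng(0)
bands = {k: rng.standard_normal((4, 4)) for k in ('LL', 'LH', 'HL', 'HH')}
w1, w2 = rng.standard_normal((8, 4)), rng.standard_normal((4, 8))
gated, g = wavelet_gate(bands, w1, w2)
# In WG-Down the gated bands are summed; in WG-Up they are passed to IDWT.
```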
### 5.2 Spectrum-Aware Knowledge Distillation
Diffusion models have difficulty modeling the high-frequency components (see
Section 4.2), especially under efficiency constraints. To combat this spectrum
deficiency in image generation, we distill the prediction of a large
pre-trained teacher model to a compact WG-UNet student. Beyond output matching
with an L2 loss, Spectrum-Aware Distillation is applied to guide the student
to synthesize naturalistic image details. Our intuition is to re-weight the
distillation loss according to the spectrum magnitude. For components with low
magnitudes, such as high-frequency bands, we increase the error penalty; while
the weight for the low-frequency elements is reduced.
Given a teacher diffusion model $\bm{s}_{T}(\cdot;\bm{\theta}_{T})$, we
would like to distill a student $\bm{s}_{S}(\cdot;\bm{\theta}_{S})$ by
mimicking its outputs and features. At time-step $t$, the perturbed image
$\textbf{x}_{t}$ is fed into both networks to produce the outputs and
features. An L2 loss [44, 31] is used to quantify their spatial distance
$\displaystyle\mathcal{L}_{\text{spatial}}=\sum_{i}\|\mathbf{X}^{(i)}_{T}-\mathbf{X}^{(i)}_{S}\|_{2}^{2}$
(9)
where $\mathbf{X}^{(i)}_{T}$ and $\mathbf{X}^{(i)}_{S}$ stand for the pair of
teacher/student’s output features or outputs of the same scale. A single
1$\times$1 Conv layer is used to align the dimensions between a prediction
pair.
In addition to the spatial distillation, inspired by the imbalanced learning
[28, 4, 23] and long-tail learning [67, 24], we design a distillation loss to
encourage the model for minority frequency recovery. Given a pair of model
predictions and the clean image $\mathbf{x}_{0}$, we first interpolate
$\mathbf{x}_{0}$ to the same size of the feature map, then take their 2D DFT
$\displaystyle\mathcal{X}^{(i)}_{T}=\mathcal{F}[\mathbf{X}^{(i)}_{T}],\mathcal{X}^{(i)}_{S}=\mathcal{F}[\mathbf{X}^{(i)}_{S}],\mathcal{X}^{(i)}=\mathcal{F}[\text{Resize}(\mathbf{x}_{0})]$
(10)
$\mathcal{X}^{(i)}$ is then applied to modulate the difference between
$\mathcal{X}^{(i)}_{T}$ and $\mathcal{X}^{(i)}_{S}$
$\displaystyle\mathcal{L}_{\text{freq}}=\sum_{i}\omega_{i}\|\mathcal{X}^{(i)}_{T}-\mathcal{X}^{(i)}_{S}\|_{2}^{2},\text{where
}\omega_{i}=|\mathcal{X}^{(i)}|^{\alpha}$ (11)
with a scaling factor $\alpha<0$ ($\alpha=-1$ in our experiment),
$\mathcal{L}_{\text{freq}}$ pushes the student towards learning the minority
frequencies yet down-weights the majority components. Together with the DDPM
objective in Eq. 3, our training objective becomes
$\mathcal{L}=\mathcal{L}_{\text{DDPM}}+\lambda_{s}\mathcal{L}_{\text{spatial}}+\lambda_{f}\mathcal{L}_{\text{freq}}$
with weighting factors $\lambda_{s}=0.1$ and $\lambda_{f}=0.1$.
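The spectrum-aware weighting of Eq. (11) can be sketched for a single feature pair as follows; the small epsilon added to the magnitude is our own assumption, to keep $\omega=|\mathcal{X}|^{\alpha}$ finite at zero-magnitude bins:

```python
import numpy as np

def spectrum_aware_loss(feat_t, feat_s, x0, alpha=-1.0, eps=1e-8):
    """Frequency-weighted distillation loss of Eq. (11) for one feature pair.

    Bands where the clean image x0 has small spectral magnitude (typically
    high frequencies) receive larger weights, since alpha < 0."""
    Xt, Xs = np.fft.fft2(feat_t), np.fft.fft2(feat_s)
    X0 = np.fft.fft2(x0)  # x0 assumed already resized to the feature-map size
    w = (np.abs(X0) + eps) ** alpha
    return float(np.sum(w * np.abs(Xt - Xs) ** 2))

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))
feat_t, feat_s = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
loss = spectrum_aware_loss(feat_t, feat_s, x0)
```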
Note that our method aims to learn accurate score prediction at each denoising
step, which is orthogonal to existing distillation on sampling step reduction
[47, 36].
## 6 Experiments
This section demonstrates the ability of our approach SD on high-resolution
image synthesis (Section 6.1) with limited computation, and validates the
significance of each proposed module via ablation study in Section 6.2.
FFHQ $256\times 256$
---
Model | #Param | MACs | FID$\downarrow$
DDPM [16] | 113.7M | 248.7G | 8.4
P2 [6] | 113.7M | 248.7G | 7.0
LDM [43] | 274.1M | 96.1G | 5.0
Lite-LDM | 22.4M($12.2\times$) | 7.9G($12.2\times$) | 17.3($-12.3$)
Ours | 21.1M($13.0\times$) | 6.7G($14.3\times$) | 10.5($-5.5$)
CelebA-HQ $256\times 256$
---
Model | #Param | MACs | FID$\downarrow$
Score SDE [55] | 65.57M | 266.4G | 7.2
DDGAN [57] | 39.73M | 69.9G | 7.6
LDM [43] | 274.1M | 96.1G | 5.1
Lite-LDM | 22.4M($12.2\times$) | 7.9G($12.2\times$) | 14.3($-9.2$)
Ours | 21.1M($13.0\times$) | 6.7G($14.3\times$) | 9.3($-4.2$)
LSUN-Bedroom $256\times 256$
---
Model | #Param | MACs | FID$\downarrow$
DDPM [16] | 113.7M | 248.7G | 4.9
IDDPM [38] | 113.7M | 248.6G | 4.2
ADM [8] | 552.8M | 1114.2G | 1.9
LDM [43] | 274.1M | 96.1G | 3.0
Lite-LDM | 22.4M($12.2\times$) | 7.9G($12.2\times$) | 10.9($-7.9$)
Ours | 21.1M($13.0\times$) | 6.7G($14.3\times$) | 5.2($-2.2$)
LSUN-Church $256\times 256$
---
Model | #Param | MACs | FID$\downarrow$
DDPM [16] | 113.7M | 248.7G | 4.9
IDDPM [38] | 113.7M | 248.6G | 4.3
ADM [8] | 552.8M | 1114.2G | 1.9
LDM [43] | 295.0M | 18.7G | 4.0
Lite-LDM | 32.8M($9.0\times$) | 2.1G($8.9\times$) | 13.6($-9.6$)
Ours | 33.8M($8.7\times$) | 2.1G($8.9\times$) | 8.4($-4.4$)
Table 2: Unconditional generation results compared to prior DPMs. The
results are taken from the original papers, except for DDPM, which is taken
from [6].
Figure 6: Throughput for unconditional image generation.
Datasets and Evaluation. We evaluate our model on 4 unconditional generation
datasets and 2 conditional benchmarks. Specifically, we train our unconditional
SD models on LSUN-Churches/Bedrooms [66], FFHQ [26], and CelebA-HQ [25]. We
also validate the model on class-conditioned ImageNet [7] and MS-COCO [29]
text-to-image generation. For the text-to-image task, we first train on
LAION-400M [48] and test on MS-COCO directly.
Training and Evaluation Details. We build our model on the LDM [43]
framework. All pre-trained teachers and auto-encoders are downloaded from the
official repository (https://github.com/CompVis/latent-diffusion). For a fair
computation comparison, generative models from other families (e.g. GAN, VAE,
and Flow) are intentionally excluded, and we implement a lite version of LDM
with a channel dimension of $64$ as our baseline model.
We call it Lite-LDM.
On 4 unconditional benchmarks, we train our spectral diffusion for 150k
iterations with a mini-batch size of $512$. We use the AdamW [32] optimizer with
an initial learning rate of $1.024\text{\times}{10}^{-3}$ and linear lr decay. For
the class- and text-conditioned generation, the initial learning rate is set
to $5.12\text{\times}{10}^{-4}$ instead, with other parameters unchanged.
Classifier-free guidance [18] is applied. The synthesized image quality is
measured by the FID score [14] with 50k generated samples at the resolution of
$256$. We use a 200-step DDIM [51] sampling by default. We also compare the
model size and computational cost in terms of parameter number and
Multiply-Add cumulation (MACs), measured with
https://github.com/sovrasov/flops-counter.pytorch.
Throughput is reported as our measurement of running speed. All experiments
are run on 8 NVIDIA Tesla V100 GPUs. More details are specified in the
Supplementary Material.
Figure 7: Randomly sampled $256\times 256$ images generated by our models
trained on CelebA-HQ [25], FFHQ [26], LSUN-Bedroom and LSUN-Church [66],
ImageNet [7]. All images are sampled with 200 DDIM steps.
### 6.1 Image Generation Results
Unconditional Image Generation. We train our SD on LSUN-Churches/Bedrooms [66],
FFHQ [26], and CelebA-HQ [25], and evaluate the sample quality. As shown in
Table 2, directly training small-sized diffusion models largely deteriorates
the performance: Lite-LDM suffers an FID increase of $12.3$ on
FFHQ and $9.2$ on CelebA-HQ. Our proposed SD achieves $8\sim 14$ times
parameter and computation reduction compared to the official LDM while remaining
competitive in image fidelity. For example, with a 21.1M UNet model and 6.7G
MACs, our SD gets an FID score of 5.2, which is very close to the 4.9 FID in
DDPM, but with only $\frac{1}{37}$ of its computation cost.
Throughput is reported in Figure 6. It refers to the number of time steps the
model runs per second. We measure it with a batch size of 64, averaged over 30
runs. We see that Lite-LDM, while fast, suffers
greatly from low visual quality. In comparison, our SD is $4.6\times$ faster
on CPU and $3.6\times$ faster on GPU than LDM on 3 of the 4 datasets.
We inspect the visual quality of the synthesized samples in Figure 7, rows 1-4.
With far fewer parameters and lower complexity, our SD still produces realistic
samples with decent high-frequency details and sample diversity.
Class-conditional Image Generation. We validate our performance for class-
conditioned image generation on ImageNet. The results are demonstrated in
Table 3. With our compact architecture and a classifier-free guidance of
$w=3.0$, our SD reaches an FID score of 10.6. In comparison, ADM [8] gets an
FID of 10.9, but with 553.8M parameters and 1114.2G MACs. Lite-LDM, though
comparably fast, suffers from its inability for high-frequency generation and
gets a high FID score of 20.1.
Generated results are visualized in Figure 7, rows 5-10. Our SD is able to
produce diverse images of different categories, and is particularly good at
animal generation, such as corgis and bears. However, we still observe failure
cases with distorted faces and shapes. For example, our model struggles with
crowded instance generation, such as bananas.
Method | #Param | MACs | FID$\downarrow$
---|---|---|---
IDDPM [38] | 273.1M | 1416.3G | 12.3
ADM [8] | 553.8M | 1114.2G | 10.9
LDM [43] | 400.9M | 99.8G | 10.6
ADM-G [8] | 553.8+54.1M | 1114.2+72.2G | 4.6
LDM-CFG [43] | 400.9M | 99.8G | 3.6
Lite-LDM-CFG | 47.0M($8.5\times$) | 11.1G ($9.0\times$) | 20.1($-16.5$)
Ours-CFG | 45.4M($8.8\times$) | 9.9G ($10.1\times$) | 10.6($-7.0$)
Table 3: Comparison of class-conditional image generation methods on ImageNet [7] with recent state-of-the-art methods. “G” stands for classifier guidance and “CFG” refers to classifier-free guidance for conditional image generation.
Method | #Param | FID$\downarrow$
---|---|---
GLIDE [37] | 5.0B | 12.24
DALLE2 [42] | 5.5B | 10.39
Imagen [46] | 3.0B | 7.27
LDM [43] | 1.45B | 12.63
Ours | 77.6M($18.7\times$) | 18.43
Table 4: Zero-shot evaluation on MS-COCO text-to-image generation. We only
count the model size of the diffusion part and exclude the language encoder.
Figure 8: Selected samples from Spectral Diffusion using classifier-free
guidance $w=5.0$ for text-to-image generation.
Text-to-Image Generation. Following prior work [43], we train our text-
conditioned SD with a fixed CLIP encoder [41] on LAION-400M [48], and then do
zero-shot inference on MS-COCO [29] with $w=2.0$. Since each MS-COCO images
contains multiple captions, during evaluations, we randomly select 50k
descriptions from the train set, with one caption corresponding to a unique
image.
The evaluation results are provided in Table 4. Again, with a 77.6M model, we
reach an FID score of 18.43 while being $18.7\times$ smaller than LDM. We
also provide a qualitative analysis for text-to-image generation with new
prompts in Figure 8. Although the image quality does not match that of
large diffusion models, our model learns to compose vivid drawings
according to the descriptions, with minimal computational cost and a portable
model size. Our SD is good at abstract or cartoon-style paintings. However, it
still struggles to generate human bodies and faces, as in the “basketball
player” example.
### 6.2 Ablation Study and Analysis
In this section, we validate the effectiveness of wavelet gating and spectrum-
aware distillation, examining whether and how they help improve image
fidelity.
Effectiveness of Wavelet Gating. We validate the effectiveness of Wavelet
Gating by replacing our WG upsample and downsample with the nearest-neighbor
resizer in LDM [43] and training on the FFHQ dataset. As shown in Table 5,
removing WG significantly increases the FID from $10.5\to 12.4$. Besides, WG
alone improves Lite-LDM’s FID score by $2.6$. Both results indicate that WG
effectively promotes the sample quality of small DPMs.
In addition, we plot the values of the gating functions at different denoising
steps for a pre-trained text-to-image SD model in Figure 9. Each curve is
computed by averaging the gating coefficients over 100 generated images. The
trends of the downsample and upsample operations diverge. At the end of
denoising (large $t$), high-frequency details emerge in
$\hat{\mathbf{x}}_{t}$. The WG-Down thus enhances the high-frequency signals
with increased $g_{\\{\textsf{HL},\textsf{LH},\textsf{HH}\\}}$ while keeping
the low-frequency part constant. In contrast, the WG-Up (right) promotes
$g_{\textsf{LL}}$ in the late stage of denoising. The predicted noise boosts
its low-frequency components, resulting in high-frequency recovery in
$\hat{\mathbf{x}}_{0}=\frac{\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\,\bm{\epsilon}}{\sqrt{\bar{\alpha}_{t}}}$.
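The gating idea can be illustrated with a single-level Haar transform whose four sub-bands are re-weighted before reconstruction. The sketch below uses fixed gate values passed in by the caller; in the actual model the gates are learned and predicted per denoising step:

```python
import numpy as np

def haar_dwt(x):
    """Single-level 2-D Haar transform: split x (H x W, even dims)
    into LL, LH, HL, HH sub-bands of size H/2 x W/2."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2  # low-frequency average
    lh = (a + b - c - d) / 2  # horizontal detail
    hl = (a - b + c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

def haar_idwt(ll, lh, hl, hh):
    """Exact inverse of haar_dwt."""
    a = (ll + lh + hl + hh) / 2
    b = (ll + lh - hl - hh) / 2
    c = (ll - lh + hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2], x[0::2, 1::2] = a, b
    x[1::2, 0::2], x[1::2, 1::2] = c, d
    return x

def wavelet_gate(x, g):
    """Re-weight the four sub-bands by gates g = (g_ll, g_lh, g_hl, g_hh)
    and reconstruct; the identity map when all gates are 1."""
    return haar_idwt(*(gi * band for gi, band in zip(g, haar_dwt(x))))
```

For example, `wavelet_gate(x, (1.0, 1.5, 1.5, 1.5))` boosts the three detail bands while leaving the LL band untouched, mirroring the WG-Down behavior described above.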
Effectiveness of Spectrum-Aware Distillation. To understand the value of the
proposed SA-Distillation, we sequentially remove each loss term. Table 5
shows that, while the spatial term accounts for only 0.9 FID, the frequency
term contributes a 1.8 FID improvement, highlighting its importance for
high-quality image generation.
We also visualize the images generated by models trained with (W) or without
(W/O) the frequency term in Figure 10, together with their DFT difference. The
model without $\mathcal{L}_{freq}$ makes smoother predictions, while our method
recovers details such as hair and architectural textures. By penalizing high-
frequency discrepancies during distillation, our proposed SA-Distillation
yields large differences, and hence improvements, in the high-frequency
components of $|\mathcal{F}_{f}-\mathcal{F}_{nof}|$.
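A minimal sketch of a frequency-space distillation term in the spirit of $\mathcal{L}_{freq}$: compare student and teacher spectra and up-weight high-frequency bins. The radial weighting scheme and the `hf_weight` value are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def freq_distill_loss(student, teacher, hf_weight=2.0):
    """Compare student and teacher 2-D spectra, penalizing mismatches
    more heavily at high frequencies. The radial weight and the default
    hf_weight=2.0 are assumed hyper-parameters for illustration."""
    fs = np.fft.fftshift(np.fft.fft2(student))   # DC moved to centre
    ft = np.fft.fftshift(np.fft.fft2(teacher))
    h, w = student.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)    # distance from DC
    radius = radius / radius.max()               # 0 at centre, 1 at corners
    weight = 1.0 + (hf_weight - 1.0) * radius    # heavier penalty at high freqs
    return float(np.mean(weight * np.abs(fs - ft)))
```

Because the weight grows with distance from the DC bin, a student that only matches the teacher's smooth, low-frequency content still incurs a large loss, which is the behavior Figure 10 is probing.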
Method (FFHQ $256\times 256$) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---
\+ Wavelet Gating | | ✓ | | | ✓ | | ✓ | ✓
\+ Spatial Distill | | | ✓ | | ✓ | ✓ | | ✓
\+ Freq Distill | | | | ✓ | | ✓ | ✓ | ✓
FID$\downarrow$ | 17.3 | 14.7 | 16.6 | 15.3 | 12.3 | 12.4 | 11.4 | 10.5
Table 5: Ablation study on the FFHQ dataset.

Figure 9: Wavelet gating function values at different $t$. We plot the
mean$\pm$std over 100 generated images.

Figure 10: Generated images with (W) or without (W/O) the frequency term, as
well as their DFT difference $|\mathcal{F}_{\text{f}}-\mathcal{F}_{\text{nof}}|$.
## 7 Conclusion
In this study, we focus on reducing the computational cost of diffusion models.
The primary obstacle to training small DPMs is their inability to produce
realistic high-frequency details, which results from the frequency evolution
and bias of the diffusion process. To resolve these problems, we propose
Spectral Diffusion (SD) for efficient image generation. It performs
spectrum-dynamic denoising using a wavelet gating operation, which
automatically enhances different frequency bands at different reverse steps. A
large pre-trained network helps improve high-frequency generation through
knowledge distillation. By seamlessly integrating both modifications, our
model is 8-18$\times$ slimmer and runs 2-5$\times$ faster than the latent
diffusion model, with a negligible performance drop.
## References
* [1] Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang. Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. arXiv preprint arXiv:2201.06503, 2022.
* [2] Ronen Basri, Meirav Galun, Amnon Geifman, David Jacobs, Yoni Kasten, and Shira Kritchman. Frequency bias in neural networks for input of non-uniform density. In International Conference on Machine Learning, pages 685–694. PMLR, 2020.
* [3] Geoffrey J Burton and Ian R Moorhead. Color and spatial structure in natural scenes. Applied optics, 26(1):157–170, 1987.
* [4] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. Advances in neural information processing systems, 32, 2019.
* [5] Yuanqi Chen, Ge Li, Cece Jin, Shan Liu, and Thomas Li. Ssd-gan: Measuring the realness in the spatial and spectral domains. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 1105–1112, 2021.
* [6] Jooyoung Choi, Jungbeom Lee, Chaehun Shin, Sungwon Kim, Hyunwoo Kim, and Sungroh Yoon. Perception prioritized training of diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11472–11481, 2022.
* [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [8] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780–8794, 2021.
* [9] David J Field. Relations between the statistics of natural images and the response properties of cortical cells. Josa a, 4(12):2379–2394, 1987.
* [10] Joel Frank, Thorsten Eisenhofer, Lea Schönherr, Asja Fischer, Dorothea Kolossa, and Thorsten Holz. Leveraging frequency analysis for deep fake image recognition. In International conference on machine learning, pages 3247–3258. PMLR, 2020.
* [11] Minghan Fu, Huan Liu, Yankun Yu, Jun Chen, and Keyan Wang. Dw-gan: A discrete wavelet transform gan for nonhomogeneous dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 203–212, 2021.
* [12] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.
* [13] Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, and Baining Guo. Vector quantized diffusion model for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10696–10706, 2022.
* [14] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
* [15] Jonathan Ho, William Chan, Chitwan Saharia, Jay Whang, Ruiqi Gao, Alexey Gritsenko, Diederik P Kingma, Ben Poole, Mohammad Norouzi, David J Fleet, et al. Imagen video: High definition video generation with diffusion models. arXiv preprint arXiv:2210.02303, 2022.
* [16] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
* [17] Jonathan Ho, Chitwan Saharia, William Chan, David J Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. J. Mach. Learn. Res., 23:47–1, 2022.
* [18] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
* [19] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. arXiv:2204.03458, 2022.
* [20] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
* [21] Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu. Ccnet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 603–612, 2019.
* [22] Aapo Hyvärinen and Peter Dayan. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4), 2005.
* [23] Liming Jiang, Bo Dai, Wayne Wu, and Chen Change Loy. Focal frequency loss for image reconstruction and synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13919–13929, 2021.
* [24] Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, and Yannis Kalantidis. Decoupling representation and classifier for long-tailed recognition. In International Conference on Learning Representations, 2020.
* [25] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
* [26] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4401–4410, 2019.
* [27] Mahyar Khayatkhoei and Ahmed Elgammal. Spatial frequency bias in convolutional generative adversarial networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 7152–7159, 2022.
* [28] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980–2988, 2017.
* [29] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
* [30] Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. Pseudo numerical methods for diffusion models on manifolds. In International Conference on Learning Representations, 2022.
* [31] Yifan Liu, Ke Chen, Chris Liu, Zengchang Qin, Zhenbo Luo, and Jingdong Wang. Structured knowledge distillation for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2604–2613, 2019.
* [32] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
* [33] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. arXiv preprint arXiv:2206.00927, 2022.
* [34] Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed. arXiv preprint arXiv:2101.02388, 2021.
* [35] Hengyuan Ma, Li Zhang, Xiatian Zhu, and Jianfeng Feng. Accelerating score-based generative models with preconditioned diffusion sampling. In European Conference on Computer Vision, 2022.
* [36] Chenlin Meng, Ruiqi Gao, Diederik P Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans. On distillation of guided diffusion models. arXiv preprint arXiv:2210.03142, 2022.
* [37] Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.
* [38] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162–8171. PMLR, 2021.
* [39] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv, 2022.
* [40] Zequn Qin, Pengyi Zhang, Fei Wu, and Xi Li. Fcanet: Frequency channel attention networks. In Proceedings of the IEEE/CVF international conference on computer vision, pages 783–792, 2021.
* [41] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
* [42] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
* [43] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
* [44] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
* [45] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
* [46] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
* [47] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In International Conference on Learning Representations, 2022.
* [48] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. LAION-400M: open dataset of clip-filtered 400 million image-text pairs. CoRR, abs/2111.02114, 2021.
* [49] Katja Schwarz, Yiyi Liao, and Andreas Geiger. On the frequency bias of generative models. Advances in Neural Information Processing Systems, 34:18126–18136, 2021.
* [50] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256–2265. PMLR, 2015.
* [51] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
* [52] Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models. Advances in Neural Information Processing Systems, 34:1415–1428, 2021.
* [53] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in Neural Information Processing Systems, 32, 2019.
* [54] Yang Song, Sahaj Garg, Jiaxin Shi, and Stefano Ermon. Sliced score matching: A scalable approach to density and score estimation. In Uncertainty in Artificial Intelligence, pages 574–584. PMLR, 2020.
* [55] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021.
* [56] David J Tolhurst, Yoav Tadmor, and Tang Chao. Amplitude spectra of natural images. Ophthalmic and Physiological Optics, 12(2):229–232, 1992.
* [57] Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. Advances in Neural Information Processing Systems, 34:11287–11302, 2021.
* [58] van A Van der Schaaf and JH van van Hateren. Modelling the power spectra of natural images: statistics and information. Vision research, 36(17):2759–2770, 1996.
* [59] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
* [60] Pascal Vincent. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661–1674, 2011.
  * [61] Norbert Wiener. Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications, volume 113. MIT Press, Cambridge, MA, 1949.
* [62] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018.
* [63] Zhi-Qin John Xu, Yaoyu Zhang, Tao Luo, Yanyang Xiao, and Zheng Ma. Frequency principle: Fourier analysis sheds light on deep neural networks. arXiv preprint arXiv:1901.06523, 2019.
* [64] Zhi-Qin John Xu, Yaoyu Zhang, and Yanyang Xiao. Training behavior of deep neural network in frequency domain. In International Conference on Neural Information Processing, pages 264–274. Springer, 2019.
* [65] Mengping Yang, Zhe Wang, Ziqiu Chi, and Wenyi Feng. Wavegan: Frequency-aware gan for high-fidelity few-shot image generation. arXiv preprint arXiv:2207.07288, 2022.
* [66] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
* [67] Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, and Jiashi Feng. Deep long-tailed learning: A survey. arXiv preprint arXiv:2110.04596, 2021.
* [68] Daquan Zhou, Weimin Wang, Hanshu Yan, Weiwei Lv, Yizhe Zhu, and Jiashi Feng. Magicvideo: Efficient video generation with latent diffusion models. arXiv preprint arXiv:2211.11018, 2022.
# An Emotion-guided Approach to Domain Adaptive Fake News Detection using
Adversarial Learning (Student Abstract)
Arkajyoti Chakraborty1, Inder Khatri1, Arjun Choudhry1, Pankaj Gupta1, Dinesh
Kumar Vishwakarma1, Mukesh Prasad2
###### Abstract
Recent works on fake news detection have shown the efficacy of using emotions
as a feature for improved performance. However, the cross-domain impact of
emotion-guided features for fake news detection still remains an open problem.
In this work, we propose an emotion-guided, domain-adaptive, multi-task
approach for cross-domain fake news detection, proving the efficacy of
emotion-guided models in cross-domain settings for various datasets.
## Introduction
Over the years, our reliance on social media as an information source has
increased, leading to an exponential increase in the spread of _fake news_. To
counter this, researchers have proposed various approaches for fake news
detection (FND). Models trained on one domain often perform poorly on datasets
from other domains due to the domain shift (Figure 1(1)). Some works show the
efficacy of domain adaptation for cross-domain FND by extracting domain-
invariant features (Figure 1(2)) for classification (Zhang et al. 2020).
However, adapting domains does not ensure that features in different classes
align correctly across domains, which sometimes has a negative impact on
performance. Some works have shown a correlation between fake news and their
intrinsic emotions (Guo et al. 2019; Choudhry, Khatri, and Jain 2022) (Figure
1(3)), and have successfully used them for fake news detection. However, these
works are restricted to in-domain settings and do not consider cross-domain
evaluation. We propose the use of emotion-guided multi-task models for
improved cross-domain fake news detection, experimentally prove their
efficacy, and present an emotion-guided domain-adaptive approach that
achieves better feature alignment across domains through the use of emotion
labels (Figure 1(4)).
Figure 1: (1) Cross-domain texts not aligned. (2) Domain adaptation for
improved alignment. (3) Emotion-guided classification. (4) Emotion-guided
domain adaptation.
## Proposed Methodology
### Datasets, Emotion Annotation & Preprocessing
We use the FakeNewsAMT & Celeb (Pérez-Rosas et al. 2018),
Politifact111https://www.politifact.com/, and
Gossipcop222https://www.gossipcop.com/ datasets. We annotate them with the
core emotions from Ekman’s (Ekman 1992) (6 emotions: Joy, Surprise, Anger,
Sadness, Disgust, Fear) and Plutchik’s (Plutchik 1982) (8 emotions: Joy,
Surprise, Trust, Anger, Anticipation, Sadness, Disgust, Fear) emotion
theories. We use the Unison model (Colneric and Demsar 2018) for annotating
the datasets with emotion tags. During preprocessing, we convert text to lower
case, remove punctuation, and decontract verb forms (e.g., “I’d” to “I would”).
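The preprocessing steps above can be sketched as follows; the contraction table is a minimal illustrative mapping, not the full list the authors may have used:

```python
import re

# Assumed, minimal contraction table for illustration only.
CONTRACTIONS = {
    "i'd": "i would",
    "don't": "do not",
    "can't": "cannot",
    "it's": "it is",
}

def preprocess(text):
    """Lower-case, expand contractions, strip punctuation, squeeze spaces."""
    text = text.replace("\u2019", "'")          # normalize curly apostrophes
    text = text.lower()
    for short, full in CONTRACTIONS.items():
        text = text.replace(short, full)
    text = re.sub(r"[^\w\s]", " ", text)        # drop punctuation
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace

print(preprocess("I'd say it's FAKE news!"))    # i would say it is fake news
```

Expanding contractions before stripping punctuation matters here, since removing apostrophes first would leave unrecognizable tokens like "dont".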
Figure 2: Graphical representation of our emotion-guided domain-adaptive
framework for cross-domain fake news detection.
### Emotion-guided Domain-adaptive Framework
We propose the cumulative use of domain adaptation and emotion-guided feature
extraction for cross-domain fake news detection. Our approach aims to improve
the feature alignment between different domains using adversarial domain
adaptation by leveraging the correlation between the emotion and the veracity
of a text (as shown in Figure 1(4)). Figure 2 shows our proposed framework. We
use an LSTM-based multi-task learning (MTL) feature extractor, which is trained
with the cumulative losses from the fake news classifier, the emotion
classifier, and the discriminator (which aids in learning domain-invariant
features). The LSTM can be replaced with stronger feature extractors; we use it
specifically for easier comparison to non-adapted emotion-guided and
non-adapted single-task models.
The domain classifier acts as the discriminator. Fake news classification
loss, emotion classification loss, adversarial loss, and total loss are
defined as:
$\scriptstyle L_{FND}\ =\
\min\limits_{\theta_{l},\theta_{f}}\sum_{i=1}^{n_{s}}L_{f}^{i}$ (1)
$\scriptstyle L_{emo}\ =\
\min\limits_{\theta_{l},\theta_{e}}(\sum_{i=1}^{n_{s}}L_{es}^{i}\ +\
\sum_{j=1}^{n_{t}}L_{et}^{j})$ (2)
$\scriptstyle L_{adv}\ =\
\min\limits_{\theta_{d}}(\max\limits_{\theta_{l}}(\sum_{i=1}^{n_{s}}L_{ds}^{i}\
+\ \sum_{j=1}^{n_{t}}L_{dt}^{j}))$ (3) $\scriptstyle L_{Total}\ =\
(1-\alpha-\beta)*L_{FND}\ +\ \alpha\ *\ (L_{adv})\ +\ \beta\ *\ (L_{emo})$ (4)
where $n_{s}$ and $n_{t}$ are the numbers of samples in the source and target
sets; $\theta_{d}$, $\theta_{f}$, $\theta_{e}$, and $\theta_{l}$ are the
parameters of the discriminator, fake news classifier, emotion classifier, and
LSTM feature extractor; $L_{d_{s}}$ and $L_{d_{t}}$ are the binary
cross-entropy losses for source and target classification; $L_{es}$ and
$L_{et}$ are the cross-entropy losses for emotion classification; $L_{f}$ is
the binary cross-entropy loss for the fake news classifier; and $\alpha$ and
$\beta$ are weight parameters in $L_{Total}$. We optimized $\alpha$ and
$\beta$ for each setting.
## Experimental Results & Discussion
Each model used for evaluation was optimized on an in-domain validation set.
Table 1 illustrates our results proving the efficacy of using emotion-guided
models in non-adapted cross-domain settings. Table 2 compares non-adaptive
models, domain adaptive models, and our emotion-guided domain adaptive models
in various settings. MTL (E) and MTL (P) refer to emotion-guided multi-task
frameworks using Ekman’s and Plutchik’s emotions respectively. STL refers to
single-task framework. DA refers to domain-adaptive framework with a
discriminator. Non-DA refers to a non-adapted model. Some findings observed
are:
Emotion-guided non-adaptive multi-task models outperform their single-task
counterparts in cross-domain settings, as seen in Table 1, indicating improved
extraction of features that transfer across datasets.
Emotion-guided domain-adaptive models improve performance in cross-domain
settings. Table 2 shows the advantage of emotion-guided adversarial domain-
adaptive models over their non-adaptive counterparts. This shows the scope for
improved feature extraction even after adversarial adaptation, and emotion-
guided models act as a solution.
Source | Target | Accuracy Non-DA STL | Accuracy Non-DA MTL(E) | Accuracy Non-DA MTL(P)
---|---|---|---|---
FAMT | Celeb | 0.420 | 0.520 | 0.530
Celeb | FAMT | 0.432 | 0.471 | 0.476
Table 1: Cross-domain evaluation of non-adaptive models on FakeNewsAMT (FAMT)
& Celeb datasets. Emotion-guided models (MTL (E) and MTL (P)) outperform their
corresponding STL models in cross-domain settings.
Source | Target | Accuracy Non-DA STL | Accuracy DA STL | Accuracy DA MTL(E) | Accuracy DA MTL(P)
---|---|---|---|---|---
FAMT | Celeb | 0.420 | 0.560 | 0.540 | 0.600
Celeb | FAMT | 0.432 | 0.395 | 0.501 | 0.551
Politi | Gossip | 0.527 | 0.585 | 0.698 | 0.671
Celeb | Gossip | 0.488 | 0.525 | 0.555 | 0.587
FAMT | Gossip | 0.451 | 0.790 | 0.805 | 0.795
FAMT | Politi | 0.363 | 0.621 | 0.704 | 0.621
Table 2: Cross-domain evaluation of non-adaptive, adaptive and emotion-guided
adaptive models on various datasets.
## References
* Choudhry, Khatri, and Jain (2022) Choudhry, A.; Khatri, I.; and Jain, M. 2022. An Emotion-Based Multi-Task Approach to Fake News Detection (Student Abstract). _AAAI_ , 36(11).
* Colneric and Demsar (2018) Colneric, N.; and Demsar, J. 2018. Emotion Recognition on Twitter: Comparative Study and Training a Unison Model. _IEEE Transactions on Affective Computing_ , 11(3).
* Ekman (1992) Ekman, P. 1992. An argument for basic emotions. _Cognition & Emotion_, 6.
* Guo et al. (2019) Guo, C.; Cao, J.; Zhang, X.; Shu, K.; and Yu, M. 2019. Exploiting Emotions for Fake News Detection on Social Media. _ArXiv_ , abs/1903.01728.
* Pérez-Rosas et al. (2018) Pérez-Rosas, V.; Kleinberg, B.; Lefevre, A.; and Mihalcea, R. 2018. Automatic Detection of Fake News. In _COLING_. ACL.
* Plutchik (1982) Plutchik, R. 1982. A psychoevolutionary theory of emotions. _Social Science Information_ , 21(4-5).
* Zhang et al. (2020) Zhang, T.; Wang, D.; Chen, H.; Zeng, Z.; Guo, W.; Miao, C.; and Cui, L. 2020. BDANN: BERT-Based Domain Adaptation Neural Network for Multi-Modal Fake News Detection. In _IJCNN_.